diff --git a/spaces/1gistliPinn/ChatGPT4/Crack-VERIFIED-DriverEasy-432-No-Speed-Limit-BETTER.md b/spaces/1gistliPinn/ChatGPT4/Crack-VERIFIED-DriverEasy-432-No-Speed-Limit-BETTER.md deleted file mode 100644 index 24131a11020b3b610800f02734c28e3784c0cd89..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Crack-VERIFIED-DriverEasy-432-No-Speed-Limit-BETTER.md +++ /dev/null @@ -1,113 +0,0 @@ -## Crack DriverEasy 432 No Speed Limit !!BETTER!! - - - - ![Crack ##VERIFIED## DriverEasy 432 No Speed Limit !!BETTER!!](https://windowsactivator.info/wp-content/uploads/2019/08/NEW.jpg) - - - -**Click Here ===> [https://www.google.com/url?q=https%3A%2F%2Ffancli.com%2F2twsJL&sa=D&sntz=1&usg=AOvVaw0EjWpAaO53PNuu7wLr00Fn](https://www.google.com/url?q=https%3A%2F%2Ffancli.com%2F2twsJL&sa=D&sntz=1&usg=AOvVaw0EjWpAaO53PNuu7wLr00Fn)** - - - -# How to Crack DriverEasy 432 and Remove the Speed Limit - - - -DriverEasy is a popular software that helps you find and update drivers for your computer. However, the free version of DriverEasy has a speed limit of 30 KB/s, which can be very frustrating if you have a lot of drivers to download. In this article, I will show you how to crack DriverEasy 432 and remove the speed limit, so you can enjoy faster and smoother driver downloads. - - - -Disclaimer: This article is for educational purposes only. I do not condone or encourage any illegal or unethical use of DriverEasy or any other software. You are solely responsible for any consequences that may arise from following this tutorial. - - - -## Step 1: Download DriverEasy 432 and the Crack File - - - -The first step is to download DriverEasy 432 from the official website[^1^]. You can choose the free version or the trial version, it doesn't matter. After downloading, install DriverEasy on your computer. - - - -Next, you need to download the crack file for DriverEasy 432. You can find it on various websites that offer cracked software, such as HaxPC[^1^] or MediaLabs[^4^]. Be careful when downloading from these sites, as they may contain malware or viruses. Scan the crack file with your antivirus before using it. - - - -## Step 2: Copy and Paste the Crack File - - - -The second step is to copy and paste the crack file into the installation folder of DriverEasy. The installation folder is usually located at C:\Program Files\Easeware\DriverEasy. If you installed DriverEasy in a different location, you need to find it yourself. - - - -After locating the installation folder, open it and look for a file named DriverEasy.exe. This is the main executable file of DriverEasy. Right-click on it and select Rename. Change its name to something else, such as DriverEasy.bak. This will prevent DriverEasy from running normally. - - - -Then, copy the crack file that you downloaded earlier and paste it into the installation folder. Rename the crack file to DriverEasy.exe. This will replace the original executable file with the cracked one. - - - -## Step 3: Run DriverEasy and Enjoy - - - -The final step is to run DriverEasy and enjoy its full features without any speed limit. To do this, double-click on the crack file that you renamed to DriverEasy.exe. You should see a message saying "Driver Easy Pro Activated" at the bottom right corner of the window. - - - -Now you can scan your computer for missing or outdated drivers and download them at full speed. You can also access other advanced features of DriverEasy Pro, such as backup and restore drivers, offline scan, uninstall drivers, etc. - - - -Congratulations! 
You have successfully cracked DriverEasy 432 and removed the speed limit. However, keep in mind that this method may not work for future versions of DriverEasy, and it may also violate the terms of service of DriverEasy. Use it at your own risk. - - - -## Why Use DriverEasy? - - - -DriverEasy is a useful software that can help you keep your drivers up to date and improve your computer performance. Drivers are essential components that allow your hardware devices to communicate with your operating system. Without proper drivers, your devices may not work correctly or cause errors and crashes. - - - -However, finding and installing drivers manually can be a tedious and time-consuming task. You need to know the exact model and version of your devices, search for the compatible drivers on the manufacturer's website, download them one by one, and install them on your computer. Moreover, you need to check for driver updates regularly to ensure that your drivers are always the latest and most stable. - - - -That's where DriverEasy comes in handy. DriverEasy can scan your computer and detect all the devices that need drivers. It can then download and install the correct drivers for you with just one click. You don't need to worry about compatibility issues or downloading the wrong drivers. DriverEasy also has a large database of over 8 million drivers, so it can find almost any driver you need. - - - -## What are the Benefits of DriverEasy Pro? - - - -DriverEasy has two versions: Free and Pro. The free version allows you to scan and download drivers at a limited speed of 30 KB/s. The pro version unlocks all the features and removes the speed limit. You can get the pro version by purchasing a license key or by cracking it as shown in this article. - - - -Some of the benefits of DriverEasy Pro are: - - - -- Faster and unlimited driver downloads: You can download drivers at full speed without any restrictions. - -- One-click update: You can update all your drivers with just one click, saving you time and hassle. - -- Backup and restore drivers: You can backup your drivers before updating them, so you can restore them in case anything goes wrong. - -- Offline scan: You can scan your computer for drivers without an internet connection, which is useful if you have network problems. - -- Uninstall drivers: You can uninstall drivers that you no longer need or that cause issues on your computer. - -- Technical support: You can get professional and friendly support from the DriverEasy team if you have any questions or problems. - - - -These are some of the reasons why you may want to use DriverEasy Pro instead of the free version. However, remember that cracking DriverEasy Pro is illegal and unethical, and it may also expose you to security risks. If you like DriverEasy and want to support its development, you should buy a license key from the official website instead of cracking it. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Eagle CAD 6.4.0 Torrent The Best Choice for Professional and Hobbyist PCB Designers.md b/spaces/1gistliPinn/ChatGPT4/Examples/Eagle CAD 6.4.0 Torrent The Best Choice for Professional and Hobbyist PCB Designers.md deleted file mode 100644 index c053d540d448ac1702464126f7e686f9cc59a5da..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Eagle CAD 6.4.0 Torrent The Best Choice for Professional and Hobbyist PCB Designers.md +++ /dev/null @@ -1,17 +0,0 @@ - -

Extreme ghostbusters complete series download
DRD Systems VideoReDo TVSuite H 286 v5 9 4 719b full version
libro administracion profesional de proyectos yamal chamoun pdf
photoboof keygenerator full torrent
sure cuts a lot 4 crack
alerene zte free
devon.ke.dev.mahadev.dvdrip.xvid.ddr
Error Repair Professional v4.0.3 full version
koon krishi malayalam pdf download
crack family discografia completa descargar minecraft

-

AnyDVD HD v7.4.8.0 Final-BRD utorrent
font psl kanda modern extra.rar
bijbel in gewone taal ebook 18
EZ Green Screen Photoshop keygen
kitab hakikat insan pdf free downloadgolkes
Oxford English for Careers Nursing 2 pdf.rar
genetica medica jorde pdf download
menucool slider license crack 12
Frozen 2 movie full version free download
CommView for WiFi 5.2.484 Including WEP Hack

-

eagle cad 6.4.0 torrent


Download Zip > https://imgfil.com/2uy1tD



-

Mksensation Digital Piano Library For Kontakt Torrent
every child is special english subtitle 192
archicad 15 object library free download
il re leone film completo italiano torrent
rambo 4 full movie in hindi mp4 free download
AutoCAD 2014 XFORCE torrent
js0group dll catia v6r2009 crack
shifrin multivariable mathematics djvu download
Thor The Dark World 2013 1080p BrRip x264 YIFY 31
Short Kut - The Con is On hindi dubbed download

-

hotel courbet 2009 tinto brass download 48
izotope t pain effect serial number
Ls-Dreams.Issue.05.(Sweethearts).Movies.13-24
Send Blaster Pro Serial Key
video sex anjing vs manusia.iso
dispensing pharmacy by rm mehta ebook download
simlab 3d pdf exporter for 3ds max crack torrent
call of duty modern warfare 2 highly compressed only 37 mb mega
UFS Explorer Professional Recovery v7.19.6 Portable Serial Key keygen
Mohabbatein 1 full movie in hindi free download 720p

-

Billu Ustaad download 720p movies
Rig N Roll 3 Crack Key Serial
tp-link tl-wr340gd v5 firmware download
arduino compatible compiler for labview crack
mkvmerge gui v4.4.0 download
sagem f st 2804 original firmware
testmaker 9.3 crack
facebook password revealer online
f-secure freedome vpn cracked apk market
All AutoCAD LT 2009 Products Crack Keygen (x86x64) !Latest utorrent

-

fallrain 19191a764c
-europe-microcat-2013-torrent
[ -europe-microcat-2013-torrent ]
link= -europe-microcat-2013-torrent

-

phipan 19191a764c
-torrents-yves-pflieger
[ -torrents-yves-pflieger ]
link= -torrents-yves-pflieger

-

nantcor 19191a764c
-mera-dil-lutiya-punjabi-movie-torrent-download
[ -mera-dil-lutiya-punjabi-movie-torrent-download ]
link= -mera-dil-lutiya-punjabi-movie-torrent-download

-

raemala 19191a764c
-saab-the-great-movie-download-utorrent-kickass
[ -saab-the-great-movie-download-utorrent-kickass ]
link= -saab-the-great-movie-download-utorrent-kickass

-

laqukei 19191a764c
-booth-software-torrent
[ -booth-software-torrent ]
link= -booth-software-torrent

-

-

finkalm 19191a764c
-flaming-cliffs-3-keygen-torrent
[ -flaming-cliffs-3-keygen-torrent ]
link= -flaming-cliffs-3-keygen-torrent

-

edwivien 19191a764c
-version-14-2-torrent
[ -version-14-2-torrent ]
link= -version-14-2-torrent

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Ship Simulator Extremes Demo and explore the worlds famous harbors and locations.md b/spaces/1phancelerku/anime-remove-background/Download Ship Simulator Extremes Demo and explore the worlds famous harbors and locations.md deleted file mode 100644 index ee80faebf8cf9027d340368794c567f0ad8a9b4b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Ship Simulator Extremes Demo and explore the worlds famous harbors and locations.md +++ /dev/null @@ -1,146 +0,0 @@ -
-

Download Ship Simulator Extremes Demo: A Guide for Ship Enthusiasts

-

If you are a fan of ships and sailing, you might be interested in trying out Ship Simulator Extremes, a realistic and immersive simulation game that lets you experience the most extreme conditions on earth as a ship captain. In this guide, we will show you how to download the demo version of the game and what to expect from it.

-

What is Ship Simulator Extremes?

-

Ship Simulator Extremes is the latest installment of the acclaimed Ship Simulator series, developed by VSTEP and published by Paradox Interactive. The game was released in 2010 and has sold over 550,000 copies worldwide. The game features a wide range of vessels to captain, from hovercrafts and coast guard interceptors to mammoth tankers and luxury cruise liners. The game also includes exciting storylines and missions based on actual events in realistic environments at locations all over the world, such as the Antarctic, Bora Bora, Rotterdam, and Sydney. The game also has a save the environment campaign, where you can sail famous Greenpeace ships and take on ecological missions based on real events.

-

download ship simulator extremes demo


Download Ziphttps://jinyurl.com/2uNRI6



-

Features of Ship Simulator Extremes

-

Some of the main features of Ship Simulator Extremes are:

- A wide range of vessels to captain, from hovercrafts and coast guard interceptors to mammoth tankers and luxury cruise liners
- Exciting storylines and missions based on actual events, set in realistic locations such as the Antarctic, Bora Bora, Rotterdam, and Sydney
- A save the environment campaign in which you sail famous Greenpeace ships on ecological missions based on real events
- A dynamic water and weather system that directly affects your ship's performance and handling
- Singleplayer missions as well as a multiplayer mode

- -

System Requirements for Ship Simulator Extremes

-

Before you download the demo, make sure your PC meets the minimum system requirements for the game. Here are the specifications you need:

- - - - - - - - - - - - - - - - - -
| Operating system | Processor | Memory | Video card | Hard disc space | Other |
| --- | --- | --- | --- | --- | --- |
| Windows XP (min. Service Pack 2), Windows Vista or Windows 7; 32- and 64-bit OS supported | 3 GHz P4 Intel or AMD equivalent processor | 2 GB (Windows XP) or 3 GB (Vista or Windows 7) | GeForce 8800 GT or ATI Radeon HD 4850 with 256 MB RAM (Shader Model 3.0) | 3.5 GB | 4x PC DVD-ROM, mouse with scroll wheel, DirectX 9.0c compatible sound card |
-

Reviews of Ship Simulator Extremes

-

Ship Simulator Extremes has received mixed reviews from critics and players. Some praised the game for its realism, variety, and graphics, while others criticized it for its bugs, glitches, and lack of polish. The game has a score of 63/100 on Metacritic and a user rating of 6.8/10 on IGN. Here are some of the pros and cons of the game according to the reviews:

- - - - - - - - - - - - - - - - - - - - - -
| Pros | Cons |
| --- | --- |
| Realistic and immersive simulation of ship handling and navigation | Buggy and unstable performance, especially in multiplayer mode |
| Wide range of vessels and missions to choose from | Repetitive and boring gameplay, lack of challenge and feedback |
| Beautiful graphics and sound effects, especially the water and weather system | Poor user interface and controls, lack of customization and options |
| Interesting and relevant save the environment campaign | Unrealistic and exaggerated scenarios, lack of realism and authenticity |
-

How to Download Ship Simulator Extremes Demo

-

If you want to try out Ship Simulator Extremes for yourself, you can download the demo version of the game for free from the official website or the Steam store page. Here are the steps you need to follow:

-

Step 1: Visit the official website or Steam store page

-

The first thing you need to do is to visit the official website of Ship Simulator Extremes at (1) or the Steam store page at (2). You can find more information about the game, such as screenshots, videos, news, and forums on these pages.

-

Step 2: Click on the download button or add to cart

-

On the official website, you will see a download button in the top right corner of the page. Click on it and you will be redirected to a page where you can choose your preferred download platform, such as GamersGate or Direct2Drive. You will need to create an account and pay a small fee to download the full version of the game. However, if you scroll down, you will see a link that says "Download Demo". Click on it to download the demo version for free.[6]

On the Steam store page, you will see an Add to Cart button on the right side of the page. Click on it to purchase the full version of the game for $19.99. However, if you scroll down, you will see a link that says "Download Demo". Click on it to download the demo version for free.[7]

-

Step 3: Follow the instructions to install and launch the demo

-

Once you have downloaded the demo file, you will need to follow the instructions to install and launch it on your PC. The file size is about 600 MB, so it might take some time depending on your internet speed. The installation process is simple and straightforward. Just follow the prompts and agree to the terms and conditions. After that, you can launch the demo from your desktop or start menu.[6][7]

-


-

What to Expect from Ship Simulator Extremes Demo

-

The demo version of Ship Simulator Extremes gives you a taste of what the full game has to offer. Here are some of the things you can expect from it:

-

Two playable singleplayer missions

-

The demo includes two playable singleplayer missions that are part of the save the environment campaign. The first one is called "Greenpeace - Save The Whale", where you have to sail a Greenpeace ship called Esperanza and stop a whaling vessel from hunting whales in Antarctica. The second one is called "Greenpeace - Mediterranean", where you have to sail another Greenpeace ship called Rainbow Warrior III and stop illegal fishing activities in the Mediterranean Sea. These missions are challenging and require you to use your skills and tactics to achieve your objectives.[6][7]

-

Three different vessels to captain

-

The demo also lets you captain three different vessels that are featured in the full game. These are the Greenpeace ships Esperanza and Rainbow Warrior III, and a coast guard interceptor. Each vessel has its own characteristics, such as speed, maneuverability, and equipment. You can switch between different views, such as bridge, deck, or free camera, to get a better perspective of your surroundings. You can also use the radio and the horn to communicate with other ships or the port.[6][7]

-

Realistic water and weather system

-

One of the most impressive aspects of Ship Simulator Extremes is the realistic water and weather system. The game uses a dynamic ocean simulation that creates waves, currents, and tides based on the wind and the moon. The game also features a day and night cycle and a weather system that can change from sunny to stormy in a matter of minutes. The water and weather effects have a direct impact on your ship's performance and handling, so you have to be prepared for any situation.[6][7]

-

Stunning graphics and sound effects

-

The game also boasts stunning graphics and sound effects that create an immersive and realistic experience. The game uses advanced shaders and lighting techniques to render the water, the sky, and the landscapes in high detail. The game also features realistic sound effects, such as the engine noise, the waves crashing, and the wind howling. The game also has a soundtrack that matches the mood and atmosphere of each mission.[6][7]

-

Conclusion

-

Ship Simulator Extremes is a simulation game that lets you experience the most extreme conditions on earth as a ship captain. The game features a wide range of vessels, missions, and locations to explore. The game also has a realistic water and weather system that affects your ship's performance and handling. The game also has stunning graphics and sound effects that create an immersive and realistic experience.

-

If you want to try out Ship Simulator Extremes for yourself, you can download the demo version of the game for free from the official website or the Steam store page. The demo includes two playable singleplayer missions, three different vessels to captain, and a glimpse of the realistic water and weather system. The demo is a great way to get a taste of what the full game has to offer.

-

We hope this guide has helped you learn more about Ship Simulator Extremes and how to download the demo version of the game. If you have any questions or feedback, feel free to leave a comment below. Happy sailing!

-

FAQs

-

Here are some of the frequently asked questions about Ship Simulator Extremes:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Yeager Hunter Legend and Uncover the Secrets of Planet Ekors in this 3D Action RPG for Android.md b/spaces/1phancelerku/anime-remove-background/Download Yeager Hunter Legend and Uncover the Secrets of Planet Ekors in this 3D Action RPG for Android.md deleted file mode 100644 index 647ddbc28ce7a894eed05cba3ed3f38cf87e7947..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Yeager Hunter Legend and Uncover the Secrets of Planet Ekors in this 3D Action RPG for Android.md +++ /dev/null @@ -1,115 +0,0 @@ -
-

How to Download and Play Yeager: Hunter Legend on Android

-

Are you looking for a new and exciting game to play on your Android device? Do you love monster hunting games with stunning graphics, immersive gameplay, and diverse challenges? If so, you might want to check out Yeager: Hunter Legend, a 3D action role-playing game that takes you to an alien world full of deadly creatures and dark secrets. In this article, we will tell you what Yeager: Hunter Legend is, how to download it on your Android device, and how to play it like a pro.

-

download game yeager android


Download File --->>> https://jinyurl.com/2uNTDQ



-

What is Yeager: Hunter Legend?

-

Yeager: Hunter Legend is a game developed by IGG.COM, the same company behind popular titles like Lords Mobile, Castle Clash, and Mobile Royale. It is a game that combines elements of action, role-playing, and monster hunting genres, set in a sci-fi fantasy world called Planet Ekors. You play as Yeager, an elite Vyderan hunter who is sent to retrieve a priceless stolen relic from the Empire. Along the way, you will encounter ferocious beasts, alien civilizations, and hidden secrets that will test your skills and courage.

-

A 3D action role-playing monster hunting game set in an alien world

-

One of the main features of Yeager: Hunter Legend is its stunning graphics and realistic animations that are powered by cutting-edge motion capture technology. The game boasts a vast and diverse open world that you can explore freely, with different biomes, weather effects, day-night cycles, and dynamic lighting. The game also has a rich story and lore that will immerse you in the mysterious Planet Ekors and its history.

-

A game with stunning graphics, intuitive combat, and unique team hunting system

-

Another feature of Yeager: Hunter Legend is its intuitive and action-oriented combat system that allows you to choose from five powerful weapon classes: Hunting Sword, Force Hammer, Fury Blades, Flux Blaster, and Eidolon Spear. Each weapon class has its own signature moves, combos, and abilities that you can master and customize according to your playstyle. You can also switch between two weapons during combat for more versatility and strategy.

-


-

The game also has a unique team hunting system that lets you hunt with up to three other players online. You can cooperate with your teammates to take down massive beasts using different tactics and skills. You can also chat with your teammates using voice or text messages, or use emojis and stickers to express yourself.

-

A game with five weapon classes, customizable equipment, and diverse monsters

-

Another feature of Yeager: Hunter Legend is its extensive customization options that let you create your own hunter style. You can hunt beasts for materials rich in Kallar, the powerful essence of your ancestors, to forge and upgrade your equipment. Equipment forged with Kallar-infused beast parts will even gain the appearance and traits of the beasts themselves. You can also equip ancient seals, mysterious artifacts that grant you legendary hunting prowess; install sigils on your Kallar arm to boost your physical aptitude and unlock new hunting skills; and choose your weapon school that fits your playstyle.

-

The game also has a diverse range of monsters that you can hunt, each with their own unique combat abilities, behaviors, weaknesses, and rewards. You will need to study and strategize for each monster to defeat them effectively. Some of the monsters include:

- -Description - - - - - -
| Name | Type | Description |
| --- | --- | --- |
| Blazeclaw | Fire | A fiery feline beast that can unleash explosive fireballs and scorching claws. |
| Glacierhorn | Ice | A colossal rhino-like beast that can create icy spikes and charge with devastating force. |
| Thunderwing | Electric | A majestic bird-like beast that can soar in the sky and unleash lightning bolts and storms. |
| Venomtail | Poison | A venomous lizard-like beast that can spit toxic projectiles and whip its tail with deadly accuracy. |
| Shadowfang | Dark | A stealthy wolf-like beast that can blend in the shadows and strike with swift and powerful bites. |
-

How to Download Yeager: Hunter Legend on Android

-

If you are interested in playing Yeager: Hunter Legend on your Android device, you have three options to download it:

-

Download from Google Play Store

-

The easiest and safest way to download Yeager: Hunter Legend on your Android device is to use the official Google Play Store. You can simply search for the game on the store or use this link to access it. Then, you can tap on the Install button and wait for the game to download and install on your device. You will need at least 2.5 GB of free storage space and Android 5.0 or higher to run the game smoothly.

-

Download from APKPure or other third-party sources

-

If you cannot access the Google Play Store or prefer to use a different source, you can also download Yeager: Hunter Legend from APKPure or other third-party websites that offer APK files. APK files are the installation packages for Android applications that you can manually install on your device. However, you should be careful when downloading APK files from unknown sources, as they may contain malware or viruses that can harm your device. To download Yeager: Hunter Legend from APKPure, you can use this link or search for the game on the website. Then, you can tap on the Download APK button and wait for the file to download on your device. You will need to enable the Unknown Sources option in your device settings to allow the installation of APK files from outside the Google Play Store. After that, you can open the downloaded file and follow the instructions to install the game on your device.

-

Download from LDPlayer or other Android emulators

-

If you want to play Yeager: Hunter Legend on your PC or laptop, you can also use an Android emulator to run the game on your computer. An Android emulator is a software that simulates an Android device on your computer, allowing you to access Android applications and games. One of the best Android emulators for gaming is LDPlayer, which offers high performance, compatibility, and customization features. To download Yeager: Hunter Legend from LDPlayer, you can use this link or search for the game on the LDPlayer website. Then, you can tap on the Download button and wait for the LDPlayer installer to download on your computer. You will need to run the installer and follow the instructions to install LDPlayer on your computer. After that, you can launch LDPlayer and search for Yeager: Hunter Legend on the built-in Google Play Store or use an APK file to install the game on LDPlayer. You will be able to play the game using your keyboard and mouse, or customize your controls according to your preference.

-

How to Play Yeager: Hunter Legend on Android

-

Now that you have downloaded Yeager: Hunter Legend on your Android device or emulator, you are ready to start playing it. Here are some tips and tricks to help you play the game like a pro:

-

Learn the combat mechanics and controls

-

The first thing you need to do is to familiarize yourself with the combat mechanics and controls of Yeager: Hunter Legend. The game uses a virtual joystick on the left side of the screen to move your character, and several buttons on the right side of the screen to perform different actions, such as attacking, dodging, switching weapons, using skills, and using items. You can also tap on the screen to interact with objects, NPCs, and menus.

-

The combat system of Yeager: Hunter Legend is based on timing, positioning, and strategy. You will need to observe your enemies' movements and patterns, dodge their attacks, exploit their weaknesses, and unleash your own combos and skills. You will also need to manage your stamina, which is consumed by attacking and dodging, and replenish it by resting or using items. You will also need to pay attention to your health, which is reduced by taking damage, and restore it by using items or healing skills. You can also use the Kallar arm to activate special hunting skills that can give you an edge in combat.

-

Choose your weapon class and weapon school

-

The next thing you need to do is to choose your weapon class and weapon school that suit your playstyle and preference. Yeager: Hunter Legend offers five weapon classes, each with its own strengths, weaknesses, and skills. They are:

- Hunting Sword
- Force Hammer
- Fury Blades
- Flux Blaster
- Eidolon Spear

- -

You can also choose your weapon school, which is a set of skills and abilities that you can unlock and upgrade for your weapon class. There are three weapon schools for each weapon class, each with its own focus and style. For example, the Hunting Sword has the following weapon schools:

- -

You can switch between different weapon classes and weapon schools at any time, so feel free to experiment and find your favorite combination.

-

Hunt beasts for materials and upgrade your equipment

-

The main activity of Yeager: Hunter Legend is hunting beasts for materials and upgrading your equipment. You can accept hunting quests from NPCs or other players, or explore the world and encounter beasts in the wild. You can hunt beasts solo or with a team of up to four players online. You will need to prepare for each hunt by choosing your equipment, items, skills, and strategy. You will also need to track down the beast, lure it out, fight it, weaken it, capture it or kill it, and harvest its parts.

-

You can use the materials you obtain from hunting beasts to forge and upgrade your equipment at the Forge Station. Equipment forged with Kallar-infused beast parts will gain the appearance and traits of the beasts themselves, giving you unique bonuses and effects. You can also customize your equipment by changing its color, adding decals, or applying seals. Seals are ancient artifacts that grant you legendary hunting prowess, such as increasing your damage, speed, defense, or Kallar power.

-

Explore the mysterious Planet Ekors and uncover its secrets

-

The last thing you need to do is to explore the mysterious Planet Ekors and uncover its secrets. Yeager: Hunter Legend has a vast and diverse open world that you can explore freely, with different biomes, weather effects, day-night cycles, and dynamic lighting. You can travel across the world using various vehicles, such as hoverboards, motorcycles, airships, or mechs. You can also interact with various objects, NPCs, and events in the world, such as collecting resources, solving puzzles, discovering lore, or triggering side quests.

-

The world of Yeager: Hunter Legend is full of secrets and mysteries that will challenge your curiosity and courage. You will encounter ancient ruins, alien civilizations, hidden dungeons, and legendary beasts that will reveal more about the history and secrets of Planet Ekors. You will also face the Empire, a ruthless faction that seeks to conquer the planet and its resources. You will need to fight against their soldiers, machines, and experiments as you uncover their sinister plans.

-

Conclusion

-

Yeager: Hunter Legend is a 3D action role-playing monster hunting game that takes you to an alien world full of deadly creatures and dark secrets. You can download it on your Android device from Google Play Store, APKPure or other third-party sources, or LDPlayer or other Android emulators. You can play it by choosing your weapon class and weapon school, hunting beasts for materials and upgrading your equipment, and exploring the mysterious Planet Ekors and uncovering its secrets. Yeager: Hunter Legend is a game that will keep you entertained and engaged for hours with its stunning graphics, immersive gameplay, and diverse challenges.

-

FAQs

-

Here are some of the frequently asked questions about Yeager: Hunter Legend:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/52Hz/SRMNet_thesis/model_arch/SRMNet.py b/spaces/52Hz/SRMNet_thesis/model_arch/SRMNet.py deleted file mode 100644 index 384bcce4ea526e7cace7ef10b63d19893afe21c7..0000000000000000000000000000000000000000 --- a/spaces/52Hz/SRMNet_thesis/model_arch/SRMNet.py +++ /dev/null @@ -1,225 +0,0 @@ -import torch -import torch.nn as nn - -##---------- Basic Layers ---------- -def conv3x3(in_chn, out_chn, bias=True): - layer = nn.Conv2d(in_chn, out_chn, kernel_size=3, stride=1, padding=1, bias=bias) - return layer - -def conv(in_channels, out_channels, kernel_size, bias=False, stride=1): - return nn.Conv2d( - in_channels, out_channels, kernel_size, - padding=(kernel_size // 2), bias=bias, stride=stride) - -def bili_resize(factor): - return nn.Upsample(scale_factor=factor, mode='bilinear', align_corners=False) - -##---------- Basic Blocks ---------- - -class UNetConvBlock(nn.Module): - def __init__(self, in_size, out_size, downsample): - super(UNetConvBlock, self).__init__() - self.downsample = downsample - self.block = SK_RDB(in_channels=in_size, growth_rate=out_size, num_layers=3) - if downsample: - self.downsample = PS_down(out_size, out_size, downscale=2) - - def forward(self, x): - out = self.block(x) - if self.downsample: - out_down = self.downsample(out) - return out_down, out - else: - return out - -class UNetUpBlock(nn.Module): - def __init__(self, in_size, out_size): - super(UNetUpBlock, self).__init__() - # self.up = nn.ConvTranspose2d(in_size, out_size, kernel_size=2, stride=2, bias=True) - self.up = PS_up(in_size, out_size, upscale=2) - self.conv_block = UNetConvBlock(in_size, out_size, False) - - def forward(self, x, bridge): - up = self.up(x) - out = torch.cat([up, bridge], dim=1) - out = self.conv_block(out) - return out - -##---------- Resizing Modules (Pixel(Un)Shuffle) ---------- -class PS_down(nn.Module): - def __init__(self, in_size, out_size, downscale): - super(PS_down, self).__init__() - self.UnPS = nn.PixelUnshuffle(downscale) - self.conv1 = nn.Conv2d((downscale**2) * in_size, out_size, 1, 1, 0) - - def forward(self, x): - x = self.UnPS(x) # h/2, w/2, 4*c - x = self.conv1(x) - return x - -class PS_up(nn.Module): - def __init__(self, in_size, out_size, upscale): - super(PS_up, self).__init__() - - self.PS = nn.PixelShuffle(upscale) - self.conv1 = nn.Conv2d(in_size//(upscale**2), out_size, 1, 1, 0) - - def forward(self, x): - x = self.PS(x) # h/2, w/2, 4*c - x = self.conv1(x) - return x - -##---------- Selective Kernel Feature Fusion (SKFF) ---------- -class SKFF(nn.Module): - def __init__(self, in_channels, height=3, reduction=8, bias=False): - super(SKFF, self).__init__() - - self.height = height - d = max(int(in_channels / reduction), 4) - - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.conv_du = nn.Sequential(nn.Conv2d(in_channels, d, 1, padding=0, bias=bias), nn.PReLU()) - - self.fcs = nn.ModuleList([]) - for i in range(self.height): - self.fcs.append(nn.Conv2d(d, in_channels, kernel_size=1, stride=1, bias=bias)) - - self.softmax = nn.Softmax(dim=1) - - def forward(self, inp_feats): - batch_size, n_feats, H, W = inp_feats[1].shape - - inp_feats = torch.cat(inp_feats, dim=1) - inp_feats = inp_feats.view(batch_size, self.height, n_feats, inp_feats.shape[2], inp_feats.shape[3]) - - feats_U = torch.sum(inp_feats, dim=1) - feats_S = self.avg_pool(feats_U) - feats_Z = self.conv_du(feats_S) - - attention_vectors = [fc(feats_Z) for fc in self.fcs] - attention_vectors = torch.cat(attention_vectors, dim=1) - attention_vectors = 
attention_vectors.view(batch_size, self.height, n_feats, 1, 1) - - attention_vectors = self.softmax(attention_vectors) - feats_V = torch.sum(inp_feats * attention_vectors, dim=1) - - return feats_V - -##---------- Dense Block ---------- -class DenseLayer(nn.Module): - def __init__(self, in_channels, out_channels, I): - super(DenseLayer, self).__init__() - self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=3 // 2) - self.relu = nn.ReLU(inplace=True) - self.sk = SKFF(out_channels, height=2, reduction=8, bias=False) - - def forward(self, x): - x1 = self.relu(self.conv(x)) - # output = torch.cat([x, x1], 1) # -> RDB - output = self.sk((x, x1)) - return output - -##---------- Selective Kernel Residual Dense Block (SK-RDB) ---------- -class SK_RDB(nn.Module): - def __init__(self, in_channels, growth_rate, num_layers): - super(SK_RDB, self).__init__() - self.identity = nn.Conv2d(in_channels, growth_rate, 1, 1, 0) - self.layers = nn.Sequential( - *[DenseLayer(in_channels, in_channels, I=i) for i in range(num_layers)] - ) - self.lff = nn.Conv2d(in_channels, growth_rate, kernel_size=1) - - def forward(self, x): - res = self.identity(x) - x = self.layers(x) - x = self.lff(x) - return res + x - -##---------- testNet ---------- -class SRMNet(nn.Module): - def __init__(self, in_chn=3, wf=96, depth=4): - super(SRMNet, self).__init__() - self.depth = depth - self.down_path = nn.ModuleList() - self.bili_down = bili_resize(0.5) - self.conv_01 = nn.Conv2d(in_chn, wf, 3, 1, 1) - - # encoder of UNet-64 - prev_channels = 0 - for i in range(depth): # 0,1,2,3 - downsample = True if (i + 1) < depth else False - self.down_path.append(UNetConvBlock(prev_channels + wf, (2 ** i) * wf, downsample)) - prev_channels = (2 ** i) * wf - - # decoder of UNet-64 - self.up_path = nn.ModuleList() - self.skip_conv = nn.ModuleList() - self.conv_up = nn.ModuleList() - self.bottom_conv = nn.Conv2d(prev_channels, wf, 3, 1, 1) - self.bottom_up = bili_resize(2 ** (depth-1)) - - for i in reversed(range(depth - 1)): - self.up_path.append(UNetUpBlock(prev_channels, (2 ** i) * wf)) - self.skip_conv.append(nn.Conv2d((2 ** i) * wf, (2 ** i) * wf, 3, 1, 1)) - self.conv_up.append(nn.Sequential(*[bili_resize(2 ** i), nn.Conv2d((2 ** i) * wf, wf, 3, 1, 1)])) - # *[nn.Conv2d((2 ** i) * wf, wf, 3, 1, 1), bili_resize(2 ** i)]) - prev_channels = (2 ** i) * wf - - self.final_ff = SKFF(in_channels=wf, height=depth) - self.last = conv3x3(prev_channels, in_chn, bias=True) - - def forward(self, x): - img = x - scale_img = img - - ##### shallow conv ##### - x1 = self.conv_01(img) - encs = [] - ######## UNet-64 ######## - # Down-path (Encoder) - for i, down in enumerate(self.down_path): - if i == 0: # top layer - x1, x1_up = down(x1) - encs.append(x1_up) - elif (i + 1) < self.depth: # middle layer - scale_img = self.bili_down(scale_img) - left_bar = self.conv_01(scale_img) - x1 = torch.cat([x1, left_bar], dim=1) - x1, x1_up = down(x1) - encs.append(x1_up) - else: # lowest layer - scale_img = self.bili_down(scale_img) - left_bar = self.conv_01(scale_img) - x1 = torch.cat([x1, left_bar], dim=1) - x1 = down(x1) - - # Up-path (Decoder) - ms_result = [self.bottom_up(self.bottom_conv(x1))] - for i, up in enumerate(self.up_path): - x1 = up(x1, self.skip_conv[i](encs[-i - 1])) - ms_result.append(self.conv_up[i](x1)) - - # Multi-scale selective feature fusion - msff_result = self.final_ff(ms_result) - - ##### Reconstruct ##### - out_1 = self.last(msff_result) + img - - return out_1 - -if __name__ == "__main__": - from thop import profile - 
input = torch.ones(1, 3, 256, 256, dtype=torch.float, requires_grad=False) - - model = SRMNet(in_chn=3, wf=96, depth=4) - out = model(input) - flops, params = profile(model, inputs=(input,)) - - # RDBlayer = SK_RDB(in_channels=64, growth_rate=64, num_layers=3) - # print(RDBlayer) - # out = RDBlayer(input) - # flops, params = profile(RDBlayer, inputs=(input,)) - print('input shape:', input.shape) - print('parameters:', params/1e6) - print('flops', flops/1e9) - print('output shape', out.shape) diff --git a/spaces/AI-ANK/blackmirroroffice/README.md b/spaces/AI-ANK/blackmirroroffice/README.md deleted file mode 100644 index 26a65371cfe516786d892eabd3d81a3c71ac75ae..0000000000000000000000000000000000000000 --- a/spaces/AI-ANK/blackmirroroffice/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Blackmirroroffice -emoji: 👁 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIConsultant/MusicGen/audiocraft/quantization/core_vq.py b/spaces/AIConsultant/MusicGen/audiocraft/quantization/core_vq.py deleted file mode 100644 index da02a6ce3a7de15353f0fba9e826052beb67c436..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/quantization/core_vq.py +++ /dev/null @@ -1,400 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -from einops import rearrange, repeat -import flashy -import torch -from torch import nn, einsum -import torch.nn.functional as F - - -def exists(val: tp.Optional[tp.Any]) -> bool: - return val is not None - - -def default(val: tp.Any, d: tp.Any) -> tp.Any: - return val if exists(val) else d - - -def l2norm(t): - return F.normalize(t, p=2, dim=-1) - - -def ema_inplace(moving_avg, new, decay: float): - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - - -def laplace_smoothing(x, n_categories: int, epsilon: float = 1e-5): - return (x + epsilon) / (x.sum() + n_categories * epsilon) - - -def uniform_init(*shape: int): - t = torch.empty(shape) - nn.init.kaiming_uniform_(t) - return t - - -def sample_vectors(samples, num: int): - num_samples, device = samples.shape[0], samples.device - - if num_samples >= num: - indices = torch.randperm(num_samples, device=device)[:num] - else: - indices = torch.randint(0, num_samples, (num,), device=device) - - return samples[indices] - - -def kmeans(samples, num_clusters: int, num_iters: int = 10): - dim, dtype = samples.shape[-1], samples.dtype - - means = sample_vectors(samples, num_clusters) - - for _ in range(num_iters): - diffs = rearrange(samples, "n d -> n () d") - rearrange( - means, "c d -> () c d" - ) - dists = -(diffs ** 2).sum(dim=-1) - - buckets = dists.max(dim=-1).indices - bins = torch.bincount(buckets, minlength=num_clusters) - zero_mask = bins == 0 - bins_min_clamped = bins.masked_fill(zero_mask, 1) - - new_means = buckets.new_zeros(num_clusters, dim, dtype=dtype) - new_means.scatter_add_(0, repeat(buckets, "n -> n d", d=dim), samples) - new_means = new_means / bins_min_clamped[..., None] - - means = torch.where(zero_mask[..., None], means, new_means) - - return means, bins - - -def orthogonal_loss_fn(t): - # eq (2) from https://arxiv.org/abs/2112.00384 - n = t.shape[0] - normed_codes = l2norm(t) - identity = torch.eye(n, device=t.device) - 
cosine_sim = einsum("i d, j d -> i j", normed_codes, normed_codes) - return ((cosine_sim - identity) ** 2).sum() / (n ** 2) - - -class EuclideanCodebook(nn.Module): - """Codebook with Euclidean distance. - - Args: - dim (int): Dimension. - codebook_size (int): Codebook size. - kmeans_init (bool): Whether to use k-means to initialize the codebooks. - If set to true, run the k-means algorithm on the first training batch and use - the learned centroids as initialization. - kmeans_iters (int): Number of iterations used for k-means algorithm at initialization. - decay (float): Decay for exponential moving average over the codebooks. - epsilon (float): Epsilon value for numerical stability. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - """ - def __init__( - self, - dim: int, - codebook_size: int, - kmeans_init: int = False, - kmeans_iters: int = 10, - decay: float = 0.8, - epsilon: float = 1e-5, - threshold_ema_dead_code: int = 2, - ): - super().__init__() - self.decay = decay - init_fn: tp.Union[tp.Callable[..., torch.Tensor], tp.Any] = uniform_init if not kmeans_init else torch.zeros - embed = init_fn(codebook_size, dim) - - self.codebook_size = codebook_size - - self.kmeans_iters = kmeans_iters - self.epsilon = epsilon - self.threshold_ema_dead_code = threshold_ema_dead_code - - self.register_buffer("inited", torch.Tensor([not kmeans_init])) - self.register_buffer("cluster_size", torch.zeros(codebook_size)) - self.register_buffer("embed", embed) - self.register_buffer("embed_avg", embed.clone()) - - @torch.jit.ignore - def init_embed_(self, data): - if self.inited: - return - - embed, cluster_size = kmeans(data, self.codebook_size, self.kmeans_iters) - self.embed.data.copy_(embed) - self.embed_avg.data.copy_(embed.clone()) - self.cluster_size.data.copy_(cluster_size) - self.inited.data.copy_(torch.Tensor([True])) - # Make sure all buffers across workers are in sync after initialization - flashy.distrib.broadcast_tensors(self.buffers()) - - def replace_(self, samples, mask): - modified_codebook = torch.where( - mask[..., None], sample_vectors(samples, self.codebook_size), self.embed - ) - self.embed.data.copy_(modified_codebook) - - def expire_codes_(self, batch_samples): - if self.threshold_ema_dead_code == 0: - return - - expired_codes = self.cluster_size < self.threshold_ema_dead_code - if not torch.any(expired_codes): - return - - batch_samples = rearrange(batch_samples, "... d -> (...) d") - self.replace_(batch_samples, mask=expired_codes) - flashy.distrib.broadcast_tensors(self.buffers()) - - def preprocess(self, x): - x = rearrange(x, "... d -> (...) 
d") - return x - - def quantize(self, x): - embed = self.embed.t() - dist = -( - x.pow(2).sum(1, keepdim=True) - - 2 * x @ embed - + embed.pow(2).sum(0, keepdim=True) - ) - embed_ind = dist.max(dim=-1).indices - return embed_ind - - def postprocess_emb(self, embed_ind, shape): - return embed_ind.view(*shape[:-1]) - - def dequantize(self, embed_ind): - quantize = F.embedding(embed_ind, self.embed) - return quantize - - def encode(self, x): - shape = x.shape - # pre-process - x = self.preprocess(x) - # quantize - embed_ind = self.quantize(x) - # post-process - embed_ind = self.postprocess_emb(embed_ind, shape) - return embed_ind - - def decode(self, embed_ind): - quantize = self.dequantize(embed_ind) - return quantize - - def forward(self, x): - shape, dtype = x.shape, x.dtype - x = self.preprocess(x) - self.init_embed_(x) - - embed_ind = self.quantize(x) - embed_onehot = F.one_hot(embed_ind, self.codebook_size).type(dtype) - embed_ind = self.postprocess_emb(embed_ind, shape) - quantize = self.dequantize(embed_ind) - - if self.training: - # We do the expiry of code at that point as buffers are in sync - # and all the workers will take the same decision. - self.expire_codes_(x) - ema_inplace(self.cluster_size, embed_onehot.sum(0), self.decay) - embed_sum = x.t() @ embed_onehot - ema_inplace(self.embed_avg, embed_sum.t(), self.decay) - cluster_size = ( - laplace_smoothing(self.cluster_size, self.codebook_size, self.epsilon) - * self.cluster_size.sum() - ) - embed_normalized = self.embed_avg / cluster_size.unsqueeze(1) - self.embed.data.copy_(embed_normalized) - - return quantize, embed_ind - - -class VectorQuantization(nn.Module): - """Vector quantization implementation. - Currently supports only euclidean distance. - - Args: - dim (int): Dimension - codebook_size (int): Codebook size - codebook_dim (int): Codebook dimension. If not defined, uses the specified dimension in dim. - decay (float): Decay for exponential moving average over the codebooks. - epsilon (float): Epsilon value for numerical stability. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): - channels_last (bool): Channels are the last dimension in the input tensors. - commitment_weight (float): Weight for commitment loss. - orthogonal_reg_weight (float): Orthogonal regularization weights. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider - for orthogonal regularization. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. 
- """ - def __init__( - self, - dim: int, - codebook_size: int, - codebook_dim: tp.Optional[int] = None, - decay: float = 0.8, - epsilon: float = 1e-5, - kmeans_init: bool = False, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - channels_last: bool = False, - commitment_weight: float = 1., - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - _codebook_dim: int = default(codebook_dim, dim) - - requires_projection = _codebook_dim != dim - self.project_in = (nn.Linear(dim, _codebook_dim) if requires_projection else nn.Identity()) - self.project_out = (nn.Linear(_codebook_dim, dim) if requires_projection else nn.Identity()) - - self.epsilon = epsilon - self.commitment_weight = commitment_weight - - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - - self._codebook = EuclideanCodebook(dim=_codebook_dim, codebook_size=codebook_size, - kmeans_init=kmeans_init, kmeans_iters=kmeans_iters, - decay=decay, epsilon=epsilon, - threshold_ema_dead_code=threshold_ema_dead_code) - self.codebook_size = codebook_size - - self.channels_last = channels_last - - @property - def codebook(self): - return self._codebook.embed - - @property - def inited(self): - return self._codebook.inited - - def _preprocess(self, x): - if not self.channels_last: - x = rearrange(x, "b d n -> b n d") - return x - - def _postprocess(self, quantize): - if not self.channels_last: - quantize = rearrange(quantize, "b n d -> b d n") - return quantize - - def encode(self, x): - x = self._preprocess(x) - x = self.project_in(x) - embed_in = self._codebook.encode(x) - return embed_in - - def decode(self, embed_ind): - quantize = self._codebook.decode(embed_ind) - quantize = self.project_out(quantize) - quantize = self._postprocess(quantize) - return quantize - - def forward(self, x): - device = x.device - x = self._preprocess(x) - - x = self.project_in(x) - quantize, embed_ind = self._codebook(x) - - if self.training: - quantize = x + (quantize - x).detach() - - loss = torch.tensor([0.0], device=device, requires_grad=self.training) - - if self.training: - if self.commitment_weight > 0: - commit_loss = F.mse_loss(quantize.detach(), x) - loss = loss + commit_loss * self.commitment_weight - - if self.orthogonal_reg_weight > 0: - codebook = self.codebook - - if self.orthogonal_reg_active_codes_only: - # only calculate orthogonal loss for the activated codes for this batch - unique_code_ids = torch.unique(embed_ind) - codebook = codebook[unique_code_ids] - - num_codes = codebook.shape[0] - if exists(self.orthogonal_reg_max_codes) and num_codes > self.orthogonal_reg_max_codes: - rand_ids = torch.randperm(num_codes, device=device)[:self.orthogonal_reg_max_codes] - codebook = codebook[rand_ids] - - orthogonal_reg_loss = orthogonal_loss_fn(codebook) - loss = loss + orthogonal_reg_loss * self.orthogonal_reg_weight - - quantize = self.project_out(quantize) - quantize = self._postprocess(quantize) - - return quantize, embed_ind, loss - - -class ResidualVectorQuantization(nn.Module): - """Residual vector quantization implementation. - - Follows Algorithm 1. 
in https://arxiv.org/pdf/2107.03312.pdf - """ - def __init__(self, *, num_quantizers, **kwargs): - super().__init__() - self.layers = nn.ModuleList( - [VectorQuantization(**kwargs) for _ in range(num_quantizers)] - ) - - def forward(self, x, n_q: tp.Optional[int] = None): - quantized_out = 0.0 - residual = x - - all_losses = [] - all_indices = [] - - n_q = n_q or len(self.layers) - - for i, layer in enumerate(self.layers[:n_q]): - quantized, indices, loss = layer(residual) - residual = residual - quantized - quantized_out = quantized_out + quantized - all_indices.append(indices) - all_losses.append(loss) - - out_losses, out_indices = map(torch.stack, (all_losses, all_indices)) - return quantized_out, out_indices, out_losses - - def encode(self, x: torch.Tensor, n_q: tp.Optional[int] = None) -> torch.Tensor: - residual = x - all_indices = [] - n_q = n_q or len(self.layers) - for layer in self.layers[:n_q]: - indices = layer.encode(residual) - quantized = layer.decode(indices) - residual = residual - quantized - all_indices.append(indices) - out_indices = torch.stack(all_indices) - return out_indices - - def decode(self, q_indices: torch.Tensor) -> torch.Tensor: - quantized_out = torch.tensor(0.0, device=q_indices.device) - for i, indices in enumerate(q_indices): - layer = self.layers[i] - quantized = layer.decode(indices) - quantized_out = quantized_out + quantized - return quantized_out diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/conv.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/conv.py deleted file mode 100644 index a86505f4216231418390610e423e42b3b3b77b15..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/conv.py +++ /dev/null @@ -1,168 +0,0 @@ -import math -import torch -import torch.nn as nn -import torch.nn.functional as F - -from modules.commons.common_layers import Embedding -from modules.fastspeech.tts_modules import LayerNorm - - -class LambdaLayer(nn.Module): - def __init__(self, lambd): - super(LambdaLayer, self).__init__() - self.lambd = lambd - - def forward(self, x): - return self.lambd(x) - - -def init_weights_func(m): - classname = m.__class__.__name__ - if classname.find("Conv1d") != -1: - torch.nn.init.xavier_uniform_(m.weight) - - -class ResidualBlock(nn.Module): - """Implements conv->PReLU->norm n-times""" - - def __init__(self, channels, kernel_size, dilation, n=2, norm_type='bn', dropout=0.0, - c_multiple=2, ln_eps=1e-12): - super(ResidualBlock, self).__init__() - - if norm_type == 'bn': - norm_builder = lambda: nn.BatchNorm1d(channels) - elif norm_type == 'in': - norm_builder = lambda: nn.InstanceNorm1d(channels, affine=True) - elif norm_type == 'gn': - norm_builder = lambda: nn.GroupNorm(8, channels) - elif norm_type == 'ln': - norm_builder = lambda: LayerNorm(channels, dim=1, eps=ln_eps) - else: - norm_builder = lambda: nn.Identity() - - self.blocks = [ - nn.Sequential( - norm_builder(), - nn.Conv1d(channels, c_multiple * channels, kernel_size, dilation=dilation, - padding=(dilation * (kernel_size - 1)) // 2), - LambdaLayer(lambda x: x * kernel_size ** -0.5), - nn.GELU(), - nn.Conv1d(c_multiple * channels, channels, 1, dilation=dilation), - ) - for i in range(n) - ] - - self.blocks = nn.ModuleList(self.blocks) - self.dropout = dropout - - def forward(self, x): - nonpadding = (x.abs().sum(1) > 0).float()[:, None, :] - for b in self.blocks: - x_ = b(x) - if self.dropout > 0 and self.training: - x_ = F.dropout(x_, self.dropout, training=self.training) - x = x + x_ - x = x * nonpadding 
- return x - - -class ConvBlocks(nn.Module): - """Decodes the expanded phoneme encoding into spectrograms""" - - def __init__(self, hidden_size, out_dims, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, - init_weights=True, is_BTC=True, num_layers=None, post_net_kernel=3): - super(ConvBlocks, self).__init__() - self.is_BTC = is_BTC - if num_layers is not None: - dilations = [1] * num_layers - self.res_blocks = nn.Sequential( - *[ResidualBlock(hidden_size, kernel_size, d, - n=layers_in_block, norm_type=norm_type, c_multiple=c_multiple, - dropout=dropout, ln_eps=ln_eps) - for d in dilations], - ) - if norm_type == 'bn': - norm = nn.BatchNorm1d(hidden_size) - elif norm_type == 'in': - norm = nn.InstanceNorm1d(hidden_size, affine=True) - elif norm_type == 'gn': - norm = nn.GroupNorm(8, hidden_size) - elif norm_type == 'ln': - norm = LayerNorm(hidden_size, dim=1, eps=ln_eps) - self.last_norm = norm - self.post_net1 = nn.Conv1d(hidden_size, out_dims, kernel_size=post_net_kernel, - padding=post_net_kernel // 2) - if init_weights: - self.apply(init_weights_func) - - def forward(self, x, nonpadding=None): - """ - - :param x: [B, T, H] - :return: [B, T, H] - """ - if self.is_BTC: - x = x.transpose(1, 2) - if nonpadding is None: - nonpadding = (x.abs().sum(1) > 0).float()[:, None, :] - elif self.is_BTC: - nonpadding = nonpadding.transpose(1, 2) - x = self.res_blocks(x) * nonpadding - x = self.last_norm(x) * nonpadding - x = self.post_net1(x) * nonpadding - if self.is_BTC: - x = x.transpose(1, 2) - return x - - -class TextConvEncoder(ConvBlocks): - def __init__(self, dict_size, hidden_size, out_dims, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, init_weights=True, num_layers=None, post_net_kernel=3): - super().__init__(hidden_size, out_dims, dilations, kernel_size, - norm_type, layers_in_block, c_multiple, - dropout, ln_eps, init_weights, num_layers=num_layers, - post_net_kernel=post_net_kernel) - self.embed_tokens = Embedding(dict_size, hidden_size, 0) - self.embed_scale = math.sqrt(hidden_size) - - def forward(self, txt_tokens): - """ - - :param txt_tokens: [B, T] - :return: { - 'encoder_out': [B x T x C] - } - """ - x = self.embed_scale * self.embed_tokens(txt_tokens) - return super().forward(x) - - -class ConditionalConvBlocks(ConvBlocks): - def __init__(self, hidden_size, c_cond, c_out, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, init_weights=True, is_BTC=True, num_layers=None): - super().__init__(hidden_size, c_out, dilations, kernel_size, - norm_type, layers_in_block, c_multiple, - dropout, ln_eps, init_weights, is_BTC=False, num_layers=num_layers) - self.g_prenet = nn.Conv1d(c_cond, hidden_size, 3, padding=1) - self.is_BTC_ = is_BTC - if init_weights: - self.g_prenet.apply(init_weights_func) - - def forward(self, x, cond, nonpadding=None): - if self.is_BTC_: - x = x.transpose(1, 2) - cond = cond.transpose(1, 2) - if nonpadding is not None: - nonpadding = nonpadding.transpose(1, 2) - if nonpadding is None: - nonpadding = x.abs().sum(1)[:, None] - x = x + self.g_prenet(cond) - x = x * nonpadding - x = super(ConditionalConvBlocks, self).forward(x) # input needs to be BTC - if self.is_BTC_: - x = x.transpose(1, 2) - return x diff --git a/spaces/AONYLMR/White-box-Cartoonization/wbc/network.py b/spaces/AONYLMR/White-box-Cartoonization/wbc/network.py deleted file mode 100644 index 
6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000 --- a/spaces/AONYLMR/White-box-Cartoonization/wbc/network.py +++ /dev/null @@ -1,62 +0,0 @@ -import tensorflow as tf -import numpy as np -import tensorflow.contrib.slim as slim - - - -def resblock(inputs, out_channel=32, name='resblock'): - - with tf.variable_scope(name): - - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2)) - x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2)) - x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - -if __name__ == '__main__': - - - pass \ No newline at end of file diff --git a/spaces/ASJMO/freegpt/client/js/theme-toggler.js b/spaces/ASJMO/freegpt/client/js/theme-toggler.js deleted file mode 100644 index 67e1a9501b70d54ab8a717f34983c012328e74a0..0000000000000000000000000000000000000000 --- a/spaces/ASJMO/freegpt/client/js/theme-toggler.js +++ /dev/null @@ -1,22 +0,0 @@ -var switch_theme_toggler = document.getElementById("theme-toggler"); - -switch_theme_toggler.addEventListener("change", toggleTheme); - -function setTheme(themeName) { - localStorage.setItem("theme", themeName); - document.documentElement.className = themeName; -} - -function toggleTheme() { - var currentTheme = localStorage.getItem("theme"); - var newTheme = currentTheme === "theme-dark" ? 
"theme-light" : "theme-dark"; - - setTheme(newTheme); - switch_theme_toggler.checked = newTheme === "theme-dark"; -} - -(function () { - var currentTheme = localStorage.getItem("theme") || "theme-dark"; - setTheme(currentTheme); - switch_theme_toggler.checked = currentTheme === "theme-dark"; -})(); diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/_base_/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/_base_/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Abdllh/Arabic_Poems_Generator/README.md b/spaces/Abdllh/Arabic_Poems_Generator/README.md deleted file mode 100644 index 5a598039fe9b0bf1a8cd321240694e4c31e6cd77..0000000000000000000000000000000000000000 --- a/spaces/Abdllh/Arabic_Poems_Generator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Arabic Poems Generator -emoji: 🏢 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -duplicated_from: Aalaa/Arabic_Poems_Generator ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AbdoulGafar/woodsound/README.md b/spaces/AbdoulGafar/woodsound/README.md deleted file mode 100644 index ba43ad60e4909c57b4f99452ffb6eec6210e98bd..0000000000000000000000000000000000000000 --- a/spaces/AbdoulGafar/woodsound/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Woodsound -emoji: 👁 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.45.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/ambient.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/ambient.d.ts deleted file mode 100644 index 97e1793c771841b0a75647ccc7150f42feb43a2d..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/ambient.d.ts +++ /dev/null @@ -1,318 +0,0 @@ - -// this file is generated — do not edit it - - -/// - -/** - * Environment variables [loaded by Vite](https://vitejs.dev/guide/env-and-mode.html#env-files) from `.env` files and `process.env`. Like [`$env/dynamic/private`](https://kit.svelte.dev/docs/modules#$env-dynamic-private), this module cannot be imported into client-side code. This module only includes variables that _do not_ begin with [`config.kit.env.publicPrefix`](https://kit.svelte.dev/docs/configuration#env) _and do_ start with [`config.kit.env.privatePrefix`](https://kit.svelte.dev/docs/configuration#env) (if configured). - * - * _Unlike_ [`$env/dynamic/private`](https://kit.svelte.dev/docs/modules#$env-dynamic-private), the values exported from this module are statically injected into your bundle at build time, enabling optimisations like dead code elimination. 
- * - * ```ts - * import { API_KEY } from '$env/static/private'; - * ``` - * - * Note that all environment variables referenced in your code should be declared (for example in an `.env` file), even if they don't have a value until the app is deployed: - * - * ``` - * MY_FEATURE_FLAG="" - * ``` - * - * You can override `.env` values from the command line like so: - * - * ```bash - * MY_FEATURE_FLAG="enabled" npm run dev - * ``` - */ -declare module '$env/static/private' { - export const MONGODB_URL: string; - export const MONGODB_DB_NAME: string; - export const MONGODB_DIRECT_CONNECTION: string; - export const COOKIE_NAME: string; - export const HF_ACCESS_TOKEN: string; - export const HF_API_ROOT: string; - export const SERPER_API_KEY: string; - export const SERPAPI_KEY: string; - export const OPENID_CLIENT_ID: string; - export const OPENID_CLIENT_SECRET: string; - export const OPENID_SCOPES: string; - export const OPENID_PROVIDER_URL: string; - export const USE_CLIENT_CERTIFICATE: string; - export const CERT_PATH: string; - export const KEY_PATH: string; - export const CA_PATH: string; - export const CLIENT_KEY_PASSWORD: string; - export const REJECT_UNAUTHORIZED: string; - export const MODELS: string; - export const OLD_MODELS: string; - export const PARQUET_EXPORT_DATASET: string; - export const PARQUET_EXPORT_HF_TOKEN: string; - export const PARQUET_EXPORT_SECRET: string; - export const RATE_LIMIT: string; - export const MESSAGES_BEFORE_LOGIN: string; - export const ACSetupSvcPort: string; - export const ACSvcPort: string; - export const ALLUSERSPROFILE: string; - export const APPDATA: string; - export const CHROME_CRASHPAD_PIPE_NAME: string; - export const COLOR: string; - export const COLORTERM: string; - export const CommonProgramFiles: string; - export const CommonProgramW6432: string; - export const COMPUTERNAME: string; - export const ComSpec: string; - export const DriverData: string; - export const EDITOR: string; - export const EFC_38340: string; - export const EnableLog: string; - export const GIT_ASKPASS: string; - export const HOME: string; - export const HOMEDRIVE: string; - export const HOMEPATH: string; - export const INIT_CWD: string; - export const LANG: string; - export const LOCALAPPDATA: string; - export const LOGONSERVER: string; - export const NODE: string; - export const NODE_ENV: string; - export const NODE_EXE: string; - export const NPM_CLI_JS: string; - export const npm_command: string; - export const npm_config_cache: string; - export const npm_config_engine_strict: string; - export const npm_config_globalconfig: string; - export const npm_config_global_prefix: string; - export const npm_config_init_module: string; - export const npm_config_local_prefix: string; - export const npm_config_metrics_registry: string; - export const npm_config_node_gyp: string; - export const npm_config_noproxy: string; - export const npm_config_prefix: string; - export const npm_config_userconfig: string; - export const npm_config_user_agent: string; - export const npm_execpath: string; - export const npm_lifecycle_event: string; - export const npm_lifecycle_script: string; - export const npm_node_execpath: string; - export const npm_package_json: string; - export const npm_package_name: string; - export const npm_package_version: string; - export const NPM_PREFIX_NPM_CLI_JS: string; - export const NUMBER_OF_PROCESSORS: string; - export const OculusBase: string; - export const OneDrive: string; - export const OneDriveConsumer: string; - export const ORIGINAL_XDG_CURRENT_DESKTOP: 
string; - export const OS: string; - export const Path: string; - export const PATHEXT: string; - export const PROCESSOR_ARCHITECTURE: string; - export const PROCESSOR_IDENTIFIER: string; - export const PROCESSOR_LEVEL: string; - export const PROCESSOR_REVISION: string; - export const ProgramData: string; - export const ProgramFiles: string; - export const ProgramW6432: string; - export const PROMPT: string; - export const PSModulePath: string; - export const PUBLIC: string; - export const RlsSvcPort: string; - export const SESSIONNAME: string; - export const SystemDrive: string; - export const SystemRoot: string; - export const TEMP: string; - export const TERM_PROGRAM: string; - export const TERM_PROGRAM_VERSION: string; - export const TMP: string; - export const USERDOMAIN: string; - export const USERDOMAIN_ROAMINGPROFILE: string; - export const USERNAME: string; - export const USERPROFILE: string; - export const VSCODE_GIT_ASKPASS_EXTRA_ARGS: string; - export const VSCODE_GIT_ASKPASS_MAIN: string; - export const VSCODE_GIT_ASKPASS_NODE: string; - export const VSCODE_GIT_IPC_HANDLE: string; - export const VSCODE_INJECTION: string; - export const VSCODE_NONCE: string; - export const windir: string; -} - -/** - * Similar to [`$env/static/private`](https://kit.svelte.dev/docs/modules#$env-static-private), except that it only includes environment variables that begin with [`config.kit.env.publicPrefix`](https://kit.svelte.dev/docs/configuration#env) (which defaults to `PUBLIC_`), and can therefore safely be exposed to client-side code. - * - * Values are replaced statically at build time. - * - * ```ts - * import { PUBLIC_BASE_URL } from '$env/static/public'; - * ``` - */ -declare module '$env/static/public' { - export const PUBLIC_ORIGIN: string; - export const PUBLIC_SHARE_PREFIX: string; - export const PUBLIC_GOOGLE_ANALYTICS_ID: string; - export const PUBLIC_DEPRECATED_GOOGLE_ANALYTICS_ID: string; - export const PUBLIC_ANNOUNCEMENT_BANNERS: string; - export const PUBLIC_APP_NAME: string; - export const PUBLIC_APP_ASSETS: string; - export const PUBLIC_APP_COLOR: string; - export const PUBLIC_APP_DATA_SHARING: string; - export const PUBLIC_APP_DISCLAIMER: string; - export const PUBLIC_VERSION: string; -} - -/** - * This module provides access to runtime environment variables, as defined by the platform you're running on. For example if you're using [`adapter-node`](https://github.com/sveltejs/kit/tree/master/packages/adapter-node) (or running [`vite preview`](https://kit.svelte.dev/docs/cli)), this is equivalent to `process.env`. This module only includes variables that _do not_ begin with [`config.kit.env.publicPrefix`](https://kit.svelte.dev/docs/configuration#env) _and do_ start with [`config.kit.env.privatePrefix`](https://kit.svelte.dev/docs/configuration#env) (if configured). - * - * This module cannot be imported into client-side code. - * - * ```ts - * import { env } from '$env/dynamic/private'; - * console.log(env.DEPLOYMENT_SPECIFIC_VARIABLE); - * ``` - * - * > In `dev`, `$env/dynamic` always includes environment variables from `.env`. In `prod`, this behavior will depend on your adapter. 
- */ -declare module '$env/dynamic/private' { - export const env: { - MONGODB_URL: string; - MONGODB_DB_NAME: string; - MONGODB_DIRECT_CONNECTION: string; - COOKIE_NAME: string; - HF_ACCESS_TOKEN: string; - HF_API_ROOT: string; - SERPER_API_KEY: string; - SERPAPI_KEY: string; - OPENID_CLIENT_ID: string; - OPENID_CLIENT_SECRET: string; - OPENID_SCOPES: string; - OPENID_PROVIDER_URL: string; - USE_CLIENT_CERTIFICATE: string; - CERT_PATH: string; - KEY_PATH: string; - CA_PATH: string; - CLIENT_KEY_PASSWORD: string; - REJECT_UNAUTHORIZED: string; - MODELS: string; - OLD_MODELS: string; - PARQUET_EXPORT_DATASET: string; - PARQUET_EXPORT_HF_TOKEN: string; - PARQUET_EXPORT_SECRET: string; - RATE_LIMIT: string; - MESSAGES_BEFORE_LOGIN: string; - ACSetupSvcPort: string; - ACSvcPort: string; - ALLUSERSPROFILE: string; - APPDATA: string; - CHROME_CRASHPAD_PIPE_NAME: string; - COLOR: string; - COLORTERM: string; - CommonProgramFiles: string; - CommonProgramW6432: string; - COMPUTERNAME: string; - ComSpec: string; - DriverData: string; - EDITOR: string; - EFC_38340: string; - EnableLog: string; - GIT_ASKPASS: string; - HOME: string; - HOMEDRIVE: string; - HOMEPATH: string; - INIT_CWD: string; - LANG: string; - LOCALAPPDATA: string; - LOGONSERVER: string; - NODE: string; - NODE_ENV: string; - NODE_EXE: string; - NPM_CLI_JS: string; - npm_command: string; - npm_config_cache: string; - npm_config_engine_strict: string; - npm_config_globalconfig: string; - npm_config_global_prefix: string; - npm_config_init_module: string; - npm_config_local_prefix: string; - npm_config_metrics_registry: string; - npm_config_node_gyp: string; - npm_config_noproxy: string; - npm_config_prefix: string; - npm_config_userconfig: string; - npm_config_user_agent: string; - npm_execpath: string; - npm_lifecycle_event: string; - npm_lifecycle_script: string; - npm_node_execpath: string; - npm_package_json: string; - npm_package_name: string; - npm_package_version: string; - NPM_PREFIX_NPM_CLI_JS: string; - NUMBER_OF_PROCESSORS: string; - OculusBase: string; - OneDrive: string; - OneDriveConsumer: string; - ORIGINAL_XDG_CURRENT_DESKTOP: string; - OS: string; - Path: string; - PATHEXT: string; - PROCESSOR_ARCHITECTURE: string; - PROCESSOR_IDENTIFIER: string; - PROCESSOR_LEVEL: string; - PROCESSOR_REVISION: string; - ProgramData: string; - ProgramFiles: string; - ProgramW6432: string; - PROMPT: string; - PSModulePath: string; - PUBLIC: string; - RlsSvcPort: string; - SESSIONNAME: string; - SystemDrive: string; - SystemRoot: string; - TEMP: string; - TERM_PROGRAM: string; - TERM_PROGRAM_VERSION: string; - TMP: string; - USERDOMAIN: string; - USERDOMAIN_ROAMINGPROFILE: string; - USERNAME: string; - USERPROFILE: string; - VSCODE_GIT_ASKPASS_EXTRA_ARGS: string; - VSCODE_GIT_ASKPASS_MAIN: string; - VSCODE_GIT_ASKPASS_NODE: string; - VSCODE_GIT_IPC_HANDLE: string; - VSCODE_INJECTION: string; - VSCODE_NONCE: string; - windir: string; - [key: `PUBLIC_${string}`]: undefined; - [key: `${string}`]: string | undefined; - } -} - -/** - * Similar to [`$env/dynamic/private`](https://kit.svelte.dev/docs/modules#$env-dynamic-private), but only includes variables that begin with [`config.kit.env.publicPrefix`](https://kit.svelte.dev/docs/configuration#env) (which defaults to `PUBLIC_`), and can therefore safely be exposed to client-side code. - * - * Note that public dynamic environment variables must all be sent from the server to the client, causing larger network requests — when possible, use `$env/static/public` instead. 
- * - * ```ts - * import { env } from '$env/dynamic/public'; - * console.log(env.PUBLIC_DEPLOYMENT_SPECIFIC_VARIABLE); - * ``` - */ -declare module '$env/dynamic/public' { - export const env: { - PUBLIC_ORIGIN: string; - PUBLIC_SHARE_PREFIX: string; - PUBLIC_GOOGLE_ANALYTICS_ID: string; - PUBLIC_DEPRECATED_GOOGLE_ANALYTICS_ID: string; - PUBLIC_ANNOUNCEMENT_BANNERS: string; - PUBLIC_APP_NAME: string; - PUBLIC_APP_ASSETS: string; - PUBLIC_APP_COLOR: string; - PUBLIC_APP_DATA_SHARING: string; - PUBLIC_APP_DISCLAIMER: string; - PUBLIC_VERSION: string; - [key: `PUBLIC_${string}`]: string | undefined; - } -} diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/V50.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/V50.py deleted file mode 100644 index 81a95ba8db7211de946cce0711b52827145c9dca..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/V50.py +++ /dev/null @@ -1,67 +0,0 @@ -from __future__ import annotations - -import uuid - -import requests - -from ..typing import Any, CreateResult -from .base_provider import BaseProvider - - -class V50(BaseProvider): - url = 'https://p5.v50.ltd' - supports_gpt_35_turbo = True - supports_stream = False - needs_auth = False - working = False - - @staticmethod - def create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, **kwargs: Any) -> CreateResult: - - conversation = "\n".join(f"{message['role']}: {message['content']}" for message in messages) - conversation += "\nassistant: " - - payload = { - "prompt" : conversation, - "options" : {}, - "systemMessage" : ".", - "temperature" : kwargs.get("temperature", 0.4), - "top_p" : kwargs.get("top_p", 0.4), - "model" : model, - "user" : str(uuid.uuid4()) - } - - headers = { - 'authority' : 'p5.v50.ltd', - 'accept' : 'application/json, text/plain, */*', - 'accept-language' : 'id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7', - 'content-type' : 'application/json', - 'origin' : 'https://p5.v50.ltd', - 'referer' : 'https://p5.v50.ltd/', - 'sec-ch-ua-platform': '"Windows"', - 'sec-fetch-dest' : 'empty', - 'sec-fetch-mode' : 'cors', - 'sec-fetch-site' : 'same-origin', - 'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36' - } - response = requests.post("https://p5.v50.ltd/api/chat-process", - json=payload, headers=headers, proxies=kwargs['proxy'] if 'proxy' in kwargs else {}) - - if "https://fk1.v50.ltd" not in response.text: - yield response.text - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("temperature", "float"), - ("top_p", "int"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" \ No newline at end of file diff --git a/spaces/Adapter/CoAdapter/ldm/modules/distributions/distributions.py b/spaces/Adapter/CoAdapter/ldm/modules/distributions/distributions.py deleted file mode 100644 index f2b8ef901130efc171aa69742ca0244d94d3f2e9..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/distributions/distributions.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -import numpy as np - - -class AbstractDistribution: - def sample(self): - raise NotImplementedError() - - def mode(self): - raise NotImplementedError() - - -class DiracDistribution(AbstractDistribution): - def __init__(self, value): - self.value = value - - def sample(self): - return self.value - - def mode(self): - return self.value - - -class 
DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device) - - def sample(self): - x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device) - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.]) - else: - if other is None: - return 0.5 * torch.sum(torch.pow(self.mean, 2) - + self.var - 1.0 - self.logvar, - dim=[1, 2, 3]) - else: - return 0.5 * torch.sum( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - 1.0 - self.logvar + other.logvar, - dim=[1, 2, 3]) - - def nll(self, sample, dims=[1,2,3]): - if self.deterministic: - return torch.Tensor([0.]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum( - logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims) - - def mode(self): - return self.mean - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12 - Compute the KL divergence between two gaussians. - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, torch.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for torch.exp(). 
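    # The return statement below evaluates the closed-form KL divergence between
    # two diagonal Gaussians N(mean1, exp(logvar1)) and N(mean2, exp(logvar2)),
    # computed element-wise as:
    #   KL = 0.5 * (logvar2 - logvar1 - 1
    #               + exp(logvar1 - logvar2)
    #               + (mean1 - mean2)^2 * exp(-logvar2))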
- logvar1, logvar2 = [ - x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + torch.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * torch.exp(-logvar2) - ) diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/__init__.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/badgelabel/BadgeLabel.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/badgelabel/BadgeLabel.js deleted file mode 100644 index fdbb2bbddb07c115c020040c7be6cad1ec4973e4..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/badgelabel/BadgeLabel.js +++ /dev/null @@ -1,49 +0,0 @@ -import OverlapSizer from '../overlapsizer/OverlapSizer.js'; - -const GetValue = Phaser.Utils.Objects.GetValue; -const BadgeKeys = { - leftTop: 'left-top', centerTop: 'center-top', rightTop: 'right-top', - leftCenter: 'left-center', center: 'center', rightCenter: 'right-center', - leftBottom: 'left-bottom', centerBottom: 'center-bottom', rightBottom: 'right-bottom' -} - -class Badge extends OverlapSizer { - constructor(scene, config) { - // Create sizer - super(scene, config); - this.type = 'rexBadge'; - - // Add elements - var background = GetValue(config, 'background', undefined); - if (background) { - this.addBackground(background); - } - this.addChildrenMap('background', background); - - // Base item - var main = GetValue(config, 'main', undefined); - if (main) { - this.add(main, { - key: 'main', - align: 'center', - expand: false, - }) - } - this.addChildrenMap('main', main); - - // Badges - for (var key in BadgeKeys) { - var badge = GetValue(config, key, undefined); - if (badge) { - this.add(badge, { - key: key, - align: BadgeKeys[key], - expand: false, - }) - this.addChildrenMap(key, badge); - } - } - } -} - -export default Badge; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/PostResolveSize.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/PostResolveSize.js deleted file mode 100644 index 6aed56dd544c5d2e6467897cd6667c51cee8bdab..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/PostResolveSize.js +++ /dev/null @@ -1,4 +0,0 @@ -var PostResolveSize = function (width, height) { -} - -export default PostResolveSize; \ No newline at end of file diff --git a/spaces/Akmyradov/dost.ai/app.py b/spaces/Akmyradov/dost.ai/app.py deleted file mode 100644 index 9009fe698e7ba42110c51f3c5233aeaf06cb20ff..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/dost.ai/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import os -import gradio as gr -import whisper -import openai -import tempfile -from neon_tts_plugin_coqui import CoquiTTS - -model = whisper.load_model("small") - -class Dost: - LANGUAGES = list(CoquiTTS.langs.keys()) - coquiTTS = CoquiTTS() - OPENAI_API_KEY = os.environ["OPENAI_API_KEY"] - def __init__(self): - self.convHistory = [] - self.voice = None - self.result = [] - - def recognize(self, audio): - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - _, 
probs = model.detect_language(mel) - lang = max(probs, key=probs.get) - - options = whisper.DecodingOptions(fp16 = False) - result = whisper.decode(model, mel, options) - - print("-------------------RECOGNIZE---------------------") - print(result) - self.response(result.text, lang) - - def response(self, prompt, lang): - response = openai.Completion.create( - model="text-davinci-002", - prompt=f"You: {prompt}Friend: ", - temperature=0.5, - max_tokens=60, - top_p=1.0, - frequency_penalty=0.5, - presence_penalty=0.0, - stop=["You:"] - ) - choice = response['choices'][0]['text'] - print("-------------------RESPONSE---------------------") - print(choice) - self.convHistory.append((prompt, choice)) - self.result.append(self.convHistory) - print(self.convHistory[0]) - print(type(self.convHistory[0])) - self.say(choice, lang) - - def say(self, text, language): - coqui_langs = ['en' ,'es' ,'fr' ,'de' ,'pl' ,'uk' ,'ro' ,'hu' ,'bg' ,'nl' ,'fi' ,'sl' ,'lv' ,'ga'] - if language not in coqui_langs: - language = 'en' - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - self.coquiTTS.get_tts(text, fp, speaker = {"language" : language}) - print("-------------------AUDIOOUTPUT---------------------") - print("DONE", fp.name) - self.result.append(fp.name) - - def start(self, audio, state): - self.convHistory = state - self.result = [] - self.recognize(audio) - print(self.result) - return tuple(self.result) - -dost = Dost() -with gr.Blocks() as demo: - state = gr.State([]) - with gr.Row(): - with gr.Column(): - input_audio = gr.Audio(source="microphone", type="filepath") - btn = gr.Button("Submit") - conversation = gr.Chatbot(value=dost.convHistory) - output_audio = gr.Audio(label="AI voice response") - btn.click(dost.start, inputs=[input_audio, state], outputs=[conversation, output_audio]) - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py deleted file mode 100644 index 0c02eaf70fc0140aca7925f621c29a496f491cae..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py +++ /dev/null @@ -1,16 +0,0 @@ -import importlib -import os.path as osp - - -def get_config(config_file): - assert config_file.startswith('configs/'), 'config file setting must start with configs/' - temp_config_name = osp.basename(config_file) - temp_module_name = osp.splitext(temp_config_name)[0] - config = importlib.import_module("configs.base") - cfg = config.config - config = importlib.import_module("configs.%s" % temp_module_name) - job_cfg = config.config - cfg.update(job_cfg) - if cfg.output is None: - cfg.output = osp.join('work_dirs', temp_module_name) - return cfg \ No newline at end of file diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/libJPG/jpgd.h b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/libJPG/jpgd.h deleted file mode 100644 index a1c0cac61839a6f66a42c341f50d5e36faad9a93..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/libJPG/jpgd.h +++ /dev/null @@ -1,316 +0,0 @@ -// jpgd.h - C++ class for JPEG decompression. 
-// Public domain, Rich Geldreich -#ifndef JPEG_DECODER_H -#define JPEG_DECODER_H - -#include -#include -#include - -namespace jpgd -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef unsigned short uint16; - typedef unsigned int uint; - typedef signed int int32; - - // Loads a JPEG image from a memory buffer or a file. - // req_comps can be 1 (grayscale), 3 (RGB), or 4 (RGBA). - // On return, width/height will be set to the image's dimensions, and actual_comps will be set to the either 1 (grayscale) or 3 (RGB). - // Notes: For more control over where and how the source data is read, see the decompress_jpeg_image_from_stream() function below, or call the jpeg_decoder class directly. - // Requesting a 8 or 32bpp image is currently a little faster than 24bpp because the jpeg_decoder class itself currently always unpacks to either 8 or 32bpp. -// BEGIN EPIC MOD -//unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps); - unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format); -// END EPIC MOD - unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps); - - // Success/failure error codes. - enum jpgd_status - { - JPGD_SUCCESS = 0, JPGD_FAILED = -1, JPGD_DONE = 1, - JPGD_BAD_DHT_COUNTS = -256, JPGD_BAD_DHT_INDEX, JPGD_BAD_DHT_MARKER, JPGD_BAD_DQT_MARKER, JPGD_BAD_DQT_TABLE, - JPGD_BAD_PRECISION, JPGD_BAD_HEIGHT, JPGD_BAD_WIDTH, JPGD_TOO_MANY_COMPONENTS, - JPGD_BAD_SOF_LENGTH, JPGD_BAD_VARIABLE_MARKER, JPGD_BAD_DRI_LENGTH, JPGD_BAD_SOS_LENGTH, - JPGD_BAD_SOS_COMP_ID, JPGD_W_EXTRA_BYTES_BEFORE_MARKER, JPGD_NO_ARITHMITIC_SUPPORT, JPGD_UNEXPECTED_MARKER, - JPGD_NOT_JPEG, JPGD_UNSUPPORTED_MARKER, JPGD_BAD_DQT_LENGTH, JPGD_TOO_MANY_BLOCKS, - JPGD_UNDEFINED_QUANT_TABLE, JPGD_UNDEFINED_HUFF_TABLE, JPGD_NOT_SINGLE_SCAN, JPGD_UNSUPPORTED_COLORSPACE, - JPGD_UNSUPPORTED_SAMP_FACTORS, JPGD_DECODE_ERROR, JPGD_BAD_RESTART_MARKER, JPGD_ASSERTION_ERROR, - JPGD_BAD_SOS_SPECTRAL, JPGD_BAD_SOS_SUCCESSIVE, JPGD_STREAM_READ, JPGD_NOTENOUGHMEM - }; - - // Input stream interface. - // Derive from this class to read input data from sources other than files or memory. Set m_eof_flag to true when no more data is available. - // The decoder is rather greedy: it will keep on calling this method until its internal input buffer is full, or until the EOF flag is set. - // It the input stream contains data after the JPEG stream's EOI (end of image) marker it will probably be pulled into the internal buffer. - // Call the get_total_bytes_read() method to determine the actual size of the JPEG stream after successful decoding. - class jpeg_decoder_stream - { - public: - jpeg_decoder_stream() { } - virtual ~jpeg_decoder_stream() { } - - // The read() method is called when the internal input buffer is empty. - // Parameters: - // pBuf - input buffer - // max_bytes_to_read - maximum bytes that can be written to pBuf - // pEOF_flag - set this to true if at end of stream (no more bytes remaining) - // Returns -1 on error, otherwise return the number of bytes actually written to the buffer (which may be 0). - // Notes: This method will be called in a loop until you set *pEOF_flag to true or the internal buffer is full. 
- virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) = 0; - }; - - // stdio FILE stream class. - class jpeg_decoder_file_stream : public jpeg_decoder_stream - { - jpeg_decoder_file_stream(const jpeg_decoder_file_stream &); - jpeg_decoder_file_stream &operator =(const jpeg_decoder_file_stream &); - - FILE *m_pFile; - bool m_eof_flag, m_error_flag; - - public: - jpeg_decoder_file_stream(); - virtual ~jpeg_decoder_file_stream(); - - bool open(const char *Pfilename); - void close(); - - virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag); - }; - - // Memory stream class. - class jpeg_decoder_mem_stream : public jpeg_decoder_stream - { - const uint8 *m_pSrc_data; - uint m_ofs, m_size; - - public: - jpeg_decoder_mem_stream() : m_pSrc_data(NULL), m_ofs(0), m_size(0) { } - jpeg_decoder_mem_stream(const uint8 *pSrc_data, uint size) : m_pSrc_data(pSrc_data), m_ofs(0), m_size(size) { } - - virtual ~jpeg_decoder_mem_stream() { } - - bool open(const uint8 *pSrc_data, uint size); - void close() { m_pSrc_data = NULL; m_ofs = 0; m_size = 0; } - - virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag); - }; - - // Loads JPEG file from a jpeg_decoder_stream. - unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps); - - enum - { - JPGD_IN_BUF_SIZE = 8192, JPGD_MAX_BLOCKS_PER_MCU = 10, JPGD_MAX_HUFF_TABLES = 8, JPGD_MAX_QUANT_TABLES = 4, - JPGD_MAX_COMPONENTS = 4, JPGD_MAX_COMPS_IN_SCAN = 4, JPGD_MAX_BLOCKS_PER_ROW = 8192, JPGD_MAX_HEIGHT = 16384, JPGD_MAX_WIDTH = 16384 - }; - - typedef int16 jpgd_quant_t; - typedef int16 jpgd_block_t; - - class jpeg_decoder - { - public: - // Call get_error_code() after constructing to determine if the stream is valid or not. You may call the get_width(), get_height(), etc. - // methods after the constructor is called. You may then either destruct the object, or begin decoding the image by calling begin_decoding(), then decode() on each scanline. - jpeg_decoder(jpeg_decoder_stream *pStream); - - ~jpeg_decoder(); - - // Call this method after constructing the object to begin decompression. - // If JPGD_SUCCESS is returned you may then call decode() on each scanline. - int begin_decoding(); - - // Returns the next scan line. - // For grayscale images, pScan_line will point to a buffer containing 8-bit pixels (get_bytes_per_pixel() will return 1). - // Otherwise, it will always point to a buffer containing 32-bit RGBA pixels (A will always be 255, and get_bytes_per_pixel() will return 4). - // Returns JPGD_SUCCESS if a scan line has been returned. - // Returns JPGD_DONE if all scan lines have been returned. - // Returns JPGD_FAILED if an error occurred. Call get_error_code() for a more info. - int decode(const void** pScan_line, uint* pScan_line_len); - - inline jpgd_status get_error_code() const { return m_error_code; } - - inline int get_width() const { return m_image_x_size; } - inline int get_height() const { return m_image_y_size; } - - inline int get_num_components() const { return m_comps_in_frame; } - - inline int get_bytes_per_pixel() const { return m_dest_bytes_per_pixel; } - inline int get_bytes_per_scan_line() const { return m_image_x_size * get_bytes_per_pixel(); } - - // Returns the total number of bytes actually consumed by the decoder (which should equal the actual size of the JPEG file). 
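	// Sketch of a typical decode loop using the API declared above (error handling
	// trimmed; assumes the JPEG file exists and the caller checks get_bytes_per_pixel()):
	//   jpgd::jpeg_decoder_file_stream stream;
	//   if (stream.open("image.jpg")) {
	//     jpgd::jpeg_decoder decoder(&stream);
	//     if ((decoder.get_error_code() == jpgd::JPGD_SUCCESS) &&
	//         (decoder.begin_decoding() == jpgd::JPGD_SUCCESS)) {
	//       for (int y = 0; y < decoder.get_height(); y++) {
	//         const void* pScan_line; jpgd::uint scan_line_len;
	//         if (decoder.decode(&pScan_line, &scan_line_len) != jpgd::JPGD_SUCCESS) break;
	//         // pScan_line holds get_width() pixels, get_bytes_per_pixel() bytes each
	//       }
	//     }
	//   }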
- inline int get_total_bytes_read() const { return m_total_bytes_read; } - - private: - jpeg_decoder(const jpeg_decoder &); - jpeg_decoder &operator =(const jpeg_decoder &); - - typedef void (*pDecode_block_func)(jpeg_decoder *, int, int, int); - - struct huff_tables - { - bool ac_table; - uint look_up[256]; - uint look_up2[256]; - uint8 code_size[256]; - uint tree[512]; - }; - - struct coeff_buf - { - uint8 *pData; - int block_num_x, block_num_y; - int block_len_x, block_len_y; - int block_size; - }; - - struct mem_block - { - mem_block *m_pNext; - size_t m_used_count; - size_t m_size; - char m_data[1]; - }; - - jmp_buf m_jmp_state; - mem_block *m_pMem_blocks; - int m_image_x_size; - int m_image_y_size; - jpeg_decoder_stream *m_pStream; - int m_progressive_flag; - uint8 m_huff_ac[JPGD_MAX_HUFF_TABLES]; - uint8* m_huff_num[JPGD_MAX_HUFF_TABLES]; // pointer to number of Huffman codes per bit size - uint8* m_huff_val[JPGD_MAX_HUFF_TABLES]; // pointer to Huffman codes per bit size - jpgd_quant_t* m_quant[JPGD_MAX_QUANT_TABLES]; // pointer to quantization tables - int m_scan_type; // Gray, Yh1v1, Yh1v2, Yh2v1, Yh2v2 (CMYK111, CMYK4114 no longer supported) - int m_comps_in_frame; // # of components in frame - int m_comp_h_samp[JPGD_MAX_COMPONENTS]; // component's horizontal sampling factor - int m_comp_v_samp[JPGD_MAX_COMPONENTS]; // component's vertical sampling factor - int m_comp_quant[JPGD_MAX_COMPONENTS]; // component's quantization table selector - int m_comp_ident[JPGD_MAX_COMPONENTS]; // component's ID - int m_comp_h_blocks[JPGD_MAX_COMPONENTS]; - int m_comp_v_blocks[JPGD_MAX_COMPONENTS]; - int m_comps_in_scan; // # of components in scan - int m_comp_list[JPGD_MAX_COMPS_IN_SCAN]; // components in this scan - int m_comp_dc_tab[JPGD_MAX_COMPONENTS]; // component's DC Huffman coding table selector - int m_comp_ac_tab[JPGD_MAX_COMPONENTS]; // component's AC Huffman coding table selector - int m_spectral_start; // spectral selection start - int m_spectral_end; // spectral selection end - int m_successive_low; // successive approximation low - int m_successive_high; // successive approximation high - int m_max_mcu_x_size; // MCU's max. X size in pixels - int m_max_mcu_y_size; // MCU's max. 
Y size in pixels - int m_blocks_per_mcu; - int m_max_blocks_per_row; - int m_mcus_per_row, m_mcus_per_col; - int m_mcu_org[JPGD_MAX_BLOCKS_PER_MCU]; - int m_total_lines_left; // total # lines left in image - int m_mcu_lines_left; // total # lines left in this MCU - int m_real_dest_bytes_per_scan_line; - int m_dest_bytes_per_scan_line; // rounded up - int m_dest_bytes_per_pixel; // 4 (RGB) or 1 (Y) - huff_tables* m_pHuff_tabs[JPGD_MAX_HUFF_TABLES]; - coeff_buf* m_dc_coeffs[JPGD_MAX_COMPONENTS]; - coeff_buf* m_ac_coeffs[JPGD_MAX_COMPONENTS]; - int m_eob_run; - int m_block_y_mcu[JPGD_MAX_COMPONENTS]; - uint8* m_pIn_buf_ofs; - int m_in_buf_left; - int m_tem_flag; - bool m_eof_flag; - uint8 m_in_buf_pad_start[128]; - uint8 m_in_buf[JPGD_IN_BUF_SIZE + 128]; - uint8 m_in_buf_pad_end[128]; - int m_bits_left; - uint m_bit_buf; - int m_restart_interval; - int m_restarts_left; - int m_next_restart_num; - int m_max_mcus_per_row; - int m_max_blocks_per_mcu; - int m_expanded_blocks_per_mcu; - int m_expanded_blocks_per_row; - int m_expanded_blocks_per_component; - bool m_freq_domain_chroma_upsample; - int m_max_mcus_per_col; - uint m_last_dc_val[JPGD_MAX_COMPONENTS]; - jpgd_block_t* m_pMCU_coefficients; - int m_mcu_block_max_zag[JPGD_MAX_BLOCKS_PER_MCU]; - uint8* m_pSample_buf; - int m_crr[256]; - int m_cbb[256]; - int m_crg[256]; - int m_cbg[256]; - uint8* m_pScan_line_0; - uint8* m_pScan_line_1; - jpgd_status m_error_code; - bool m_ready_flag; - int m_total_bytes_read; - - void free_all_blocks(); - // BEGIN EPIC MOD - UE_NORETURN void stop_decoding(jpgd_status status); - // END EPIC MOD - void *alloc(size_t n, bool zero = false); - void word_clear(void *p, uint16 c, uint n); - void prep_in_buffer(); - void read_dht_marker(); - void read_dqt_marker(); - void read_sof_marker(); - void skip_variable_marker(); - void read_dri_marker(); - void read_sos_marker(); - int next_marker(); - int process_markers(); - void locate_soi_marker(); - void locate_sof_marker(); - int locate_sos_marker(); - void init(jpeg_decoder_stream * pStream); - void create_look_ups(); - void fix_in_buffer(); - void transform_mcu(int mcu_row); - void transform_mcu_expand(int mcu_row); - coeff_buf* coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y); - inline jpgd_block_t *coeff_buf_getp(coeff_buf *cb, int block_x, int block_y); - void load_next_row(); - void decode_next_row(); - void make_huff_table(int index, huff_tables *pH); - void check_quant_tables(); - void check_huff_tables(); - void calc_mcu_block_order(); - int init_scan(); - void init_frame(); - void process_restart(); - void decode_scan(pDecode_block_func decode_block_func); - void init_progressive(); - void init_sequential(); - void decode_start(); - void decode_init(jpeg_decoder_stream * pStream); - void H2V2Convert(); - void H2V1Convert(); - void H1V2Convert(); - void H1V1Convert(); - void gray_convert(); - void expanded_convert(); - void find_eoi(); - inline uint get_char(); - inline uint get_char(bool *pPadding_flag); - inline void stuff_char(uint8 q); - inline uint8 get_octet(); - inline uint get_bits(int num_bits); - inline uint get_bits_no_markers(int numbits); - inline int huff_decode(huff_tables *pH); - inline int huff_decode(huff_tables *pH, int& extrabits); - static inline uint8 clamp(int i); - static void decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void 
decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y); - static void decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y); - }; - -} // namespace jpgd - -#endif // JPEG_DECODER_H diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/composable_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/composable_stable_diffusion.py deleted file mode 100644 index 95292f5bdae82f2fdd857b5196f1ea993cf8b576..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/composable_stable_diffusion.py +++ /dev/null @@ -1,580 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import torch -from packaging import version -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from diffusers import DiffusionPipeline -from diffusers.configuration_utils import FrozenDict -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput -from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from diffusers.schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from diffusers.utils import deprecate, is_accelerate_available, logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class ComposableStableDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for text-to-image generation using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
- safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. - - When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several - steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - # TODO(Patrick) - there is currently a bug with cpu offload of nn.Parameter in accelerate - # fix by only offloading self.safety_checker for now - cpu_offload(self.safety_checker.vision_model, device) - - @property - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. 
- """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - text_embeddings = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - text_embeddings = text_embeddings[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - uncond_embeddings = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - uncond_embeddings = uncond_embeddings[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, prompt, height, width, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." 
- ) - - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if latents is None: - if device.type == "mps": - # randn does not work reproducibly on mps - latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device) - else: - latents = torch.randn(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - weights: Optional[str] = "", - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - if "|" in prompt: - prompt = [x.strip() for x in prompt.split("|")] - print(f"composing {prompt}...") - - if not weights: - # specify weights for prompts (excluding the unconditional score) - print("using equal positive weights (conjunction) for all prompts...") - weights = torch.tensor([guidance_scale] * len(prompt), device=self.device).reshape(-1, 1, 1, 1) - else: - # set prompt weight for each - num_prompts = len(prompt) if isinstance(prompt, list) else 1 - weights = [float(w.strip()) for w in weights.split("|")] - # guidance scale as the default - if len(weights) < num_prompts: - weights.append(guidance_scale) - else: - weights = weights[:num_prompts] - assert len(weights) == len(prompt), "weights specified are not equal to the number of prompts" - weights = torch.tensor(weights, device=self.device).reshape(-1, 1, 1, 1) - else: - weights = guidance_scale - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. 
Prepare latent variables - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - text_embeddings.dtype, - device, - generator, - latents, - ) - - # composable diffusion - if isinstance(prompt, list) and batch_size == 1: - # remove extra unconditional embedding - # N = one unconditional embed + conditional embeds - text_embeddings = text_embeddings[len(prompt) - 1 :] - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = [] - for j in range(text_embeddings.shape[0]): - noise_pred.append( - self.unet(latent_model_input[:1], t, encoder_hidden_states=text_embeddings[j : j + 1]).sample - ) - noise_pred = torch.cat(noise_pred, dim=0) - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred[:1], noise_pred[1:] - noise_pred = noise_pred_uncond + (weights * (noise_pred_text - noise_pred_uncond)).sum( - dim=0, keepdims=True - ) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype) - - # 10. Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/embeddings.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/embeddings.py deleted file mode 100644 index a5a0c5549ee9d282b4eaa41d496255ad26b74699..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/embeddings.py +++ /dev/null @@ -1,546 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
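# Illustrative usage sketch for the composable pipeline above: its __call__ accepts a
# single prompt string whose sub-prompts are separated by "|", plus an optional
# "|"-separated `weights` string (one guidance weight per sub-prompt; with no weights,
# every sub-prompt falls back to `guidance_scale`). A minimal sketch follows, assuming
# the class is exposed as a diffusers custom pipeline; the checkpoint name and the
# "composable_stable_diffusion" identifier are assumptions, not taken from this file.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",               # assumed base checkpoint
    custom_pipeline="composable_stable_diffusion",  # assumed registration name
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a red car | a snowy mountain road",  # two sub-prompts, composed
    weights="7.5 | 7.5",                         # one weight per sub-prompt
    num_inference_steps=50,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("composed.png")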
-import math -from typing import Optional - -import numpy as np -import torch -from torch import nn - -from .activations import get_activation - - -def get_timestep_embedding( - timesteps: torch.Tensor, - embedding_dim: int, - flip_sin_to_cos: bool = False, - downscale_freq_shift: float = 1, - scale: float = 1, - max_period: int = 10000, -): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: Create sinusoidal timestep embeddings. - - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param embedding_dim: the dimension of the output. :param max_period: controls the minimum frequency of the - embeddings. :return: an [N x dim] Tensor of positional embeddings. - """ - assert len(timesteps.shape) == 1, "Timesteps should be a 1d-array" - - half_dim = embedding_dim // 2 - exponent = -math.log(max_period) * torch.arange( - start=0, end=half_dim, dtype=torch.float32, device=timesteps.device - ) - exponent = exponent / (half_dim - downscale_freq_shift) - - emb = torch.exp(exponent) - emb = timesteps[:, None].float() * emb[None, :] - - # scale embeddings - emb = scale * emb - - # concat sine and cosine embeddings - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1) - - # flip sine and cosine embeddings - if flip_sin_to_cos: - emb = torch.cat([emb[:, half_dim:], emb[:, :half_dim]], dim=-1) - - # zero pad - if embedding_dim % 2 == 1: - emb = torch.nn.functional.pad(emb, (0, 1, 0, 0)) - return emb - - -def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False, extra_tokens=0): - """ - grid_size: int of the grid height and width return: pos_embed: [grid_size*grid_size, embed_dim] or - [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token) - """ - grid_h = np.arange(grid_size, dtype=np.float32) - grid_w = np.arange(grid_size, dtype=np.float32) - grid = np.meshgrid(grid_w, grid_h) # here w goes first - grid = np.stack(grid, axis=0) - - grid = grid.reshape([2, 1, grid_size, grid_size]) - pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid) - if cls_token and extra_tokens > 0: - pos_embed = np.concatenate([np.zeros([extra_tokens, embed_dim]), pos_embed], axis=0) - return pos_embed - - -def get_2d_sincos_pos_embed_from_grid(embed_dim, grid): - if embed_dim % 2 != 0: - raise ValueError("embed_dim must be divisible by 2") - - # use half of dimensions to encode grid_h - emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2) - emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2) - - emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D) - return emb - - -def get_1d_sincos_pos_embed_from_grid(embed_dim, pos): - """ - embed_dim: output dimension for each position pos: a list of positions to be encoded: size (M,) out: (M, D) - """ - if embed_dim % 2 != 0: - raise ValueError("embed_dim must be divisible by 2") - - omega = np.arange(embed_dim // 2, dtype=np.float64) - omega /= embed_dim / 2.0 - omega = 1.0 / 10000**omega # (D/2,) - - pos = pos.reshape(-1) # (M,) - out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product - - emb_sin = np.sin(out) # (M, D/2) - emb_cos = np.cos(out) # (M, D/2) - - emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D) - return emb - - -class PatchEmbed(nn.Module): - """2D Image to Patch Embedding""" - - def __init__( - self, - height=224, - width=224, - patch_size=16, - in_channels=3, - embed_dim=768, - layer_norm=False, - flatten=True, - bias=True, - ): - super().__init__() - - num_patches = (height // patch_size) * 
(width // patch_size) - self.flatten = flatten - self.layer_norm = layer_norm - - self.proj = nn.Conv2d( - in_channels, embed_dim, kernel_size=(patch_size, patch_size), stride=patch_size, bias=bias - ) - if layer_norm: - self.norm = nn.LayerNorm(embed_dim, elementwise_affine=False, eps=1e-6) - else: - self.norm = None - - pos_embed = get_2d_sincos_pos_embed(embed_dim, int(num_patches**0.5)) - self.register_buffer("pos_embed", torch.from_numpy(pos_embed).float().unsqueeze(0), persistent=False) - - def forward(self, latent): - latent = self.proj(latent) - if self.flatten: - latent = latent.flatten(2).transpose(1, 2) # BCHW -> BNC - if self.layer_norm: - latent = self.norm(latent) - return latent + self.pos_embed - - -class TimestepEmbedding(nn.Module): - def __init__( - self, - in_channels: int, - time_embed_dim: int, - act_fn: str = "silu", - out_dim: int = None, - post_act_fn: Optional[str] = None, - cond_proj_dim=None, - ): - super().__init__() - - self.linear_1 = nn.Linear(in_channels, time_embed_dim) - - if cond_proj_dim is not None: - self.cond_proj = nn.Linear(cond_proj_dim, in_channels, bias=False) - else: - self.cond_proj = None - - self.act = get_activation(act_fn) - - if out_dim is not None: - time_embed_dim_out = out_dim - else: - time_embed_dim_out = time_embed_dim - self.linear_2 = nn.Linear(time_embed_dim, time_embed_dim_out) - - if post_act_fn is None: - self.post_act = None - else: - self.post_act = get_activation(post_act_fn) - - def forward(self, sample, condition=None): - if condition is not None: - sample = sample + self.cond_proj(condition) - sample = self.linear_1(sample) - - if self.act is not None: - sample = self.act(sample) - - sample = self.linear_2(sample) - - if self.post_act is not None: - sample = self.post_act(sample) - return sample - - -class Timesteps(nn.Module): - def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale_freq_shift: float): - super().__init__() - self.num_channels = num_channels - self.flip_sin_to_cos = flip_sin_to_cos - self.downscale_freq_shift = downscale_freq_shift - - def forward(self, timesteps): - t_emb = get_timestep_embedding( - timesteps, - self.num_channels, - flip_sin_to_cos=self.flip_sin_to_cos, - downscale_freq_shift=self.downscale_freq_shift, - ) - return t_emb - - -class GaussianFourierProjection(nn.Module): - """Gaussian Fourier embeddings for noise levels.""" - - def __init__( - self, embedding_size: int = 256, scale: float = 1.0, set_W_to_weight=True, log=True, flip_sin_to_cos=False - ): - super().__init__() - self.weight = nn.Parameter(torch.randn(embedding_size) * scale, requires_grad=False) - self.log = log - self.flip_sin_to_cos = flip_sin_to_cos - - if set_W_to_weight: - # to delete later - self.W = nn.Parameter(torch.randn(embedding_size) * scale, requires_grad=False) - - self.weight = self.W - - def forward(self, x): - if self.log: - x = torch.log(x) - - x_proj = x[:, None] * self.weight[None, :] * 2 * np.pi - - if self.flip_sin_to_cos: - out = torch.cat([torch.cos(x_proj), torch.sin(x_proj)], dim=-1) - else: - out = torch.cat([torch.sin(x_proj), torch.cos(x_proj)], dim=-1) - return out - - -class ImagePositionalEmbeddings(nn.Module): - """ - Converts latent image classes into vector embeddings. Sums the vector embeddings with positional embeddings for the - height and width of the latent space. - - For more details, see figure 10 of the dall-e paper: https://arxiv.org/abs/2102.12092 - - For VQ-diffusion: - - Output vector embeddings are used as input for the transformer. 
- - Note that the vector embeddings for the transformer are different than the vector embeddings from the VQVAE. - - Args: - num_embed (`int`): - Number of embeddings for the latent pixels embeddings. - height (`int`): - Height of the latent image i.e. the number of height embeddings. - width (`int`): - Width of the latent image i.e. the number of width embeddings. - embed_dim (`int`): - Dimension of the produced vector embeddings. Used for the latent pixel, height, and width embeddings. - """ - - def __init__( - self, - num_embed: int, - height: int, - width: int, - embed_dim: int, - ): - super().__init__() - - self.height = height - self.width = width - self.num_embed = num_embed - self.embed_dim = embed_dim - - self.emb = nn.Embedding(self.num_embed, embed_dim) - self.height_emb = nn.Embedding(self.height, embed_dim) - self.width_emb = nn.Embedding(self.width, embed_dim) - - def forward(self, index): - emb = self.emb(index) - - height_emb = self.height_emb(torch.arange(self.height, device=index.device).view(1, self.height)) - - # 1 x H x D -> 1 x H x 1 x D - height_emb = height_emb.unsqueeze(2) - - width_emb = self.width_emb(torch.arange(self.width, device=index.device).view(1, self.width)) - - # 1 x W x D -> 1 x 1 x W x D - width_emb = width_emb.unsqueeze(1) - - pos_emb = height_emb + width_emb - - # 1 x H x W x D -> 1 x L xD - pos_emb = pos_emb.view(1, self.height * self.width, -1) - - emb = emb + pos_emb[:, : emb.shape[1], :] - - return emb - - -class LabelEmbedding(nn.Module): - """ - Embeds class labels into vector representations. Also handles label dropout for classifier-free guidance. - - Args: - num_classes (`int`): The number of classes. - hidden_size (`int`): The size of the vector embeddings. - dropout_prob (`float`): The probability of dropping a label. - """ - - def __init__(self, num_classes, hidden_size, dropout_prob): - super().__init__() - use_cfg_embedding = dropout_prob > 0 - self.embedding_table = nn.Embedding(num_classes + use_cfg_embedding, hidden_size) - self.num_classes = num_classes - self.dropout_prob = dropout_prob - - def token_drop(self, labels, force_drop_ids=None): - """ - Drops labels to enable classifier-free guidance. 
- """ - if force_drop_ids is None: - drop_ids = torch.rand(labels.shape[0], device=labels.device) < self.dropout_prob - else: - drop_ids = torch.tensor(force_drop_ids == 1) - labels = torch.where(drop_ids, self.num_classes, labels) - return labels - - def forward(self, labels: torch.LongTensor, force_drop_ids=None): - use_dropout = self.dropout_prob > 0 - if (self.training and use_dropout) or (force_drop_ids is not None): - labels = self.token_drop(labels, force_drop_ids) - embeddings = self.embedding_table(labels) - return embeddings - - -class TextImageProjection(nn.Module): - def __init__( - self, - text_embed_dim: int = 1024, - image_embed_dim: int = 768, - cross_attention_dim: int = 768, - num_image_text_embeds: int = 10, - ): - super().__init__() - - self.num_image_text_embeds = num_image_text_embeds - self.image_embeds = nn.Linear(image_embed_dim, self.num_image_text_embeds * cross_attention_dim) - self.text_proj = nn.Linear(text_embed_dim, cross_attention_dim) - - def forward(self, text_embeds: torch.FloatTensor, image_embeds: torch.FloatTensor): - batch_size = text_embeds.shape[0] - - # image - image_text_embeds = self.image_embeds(image_embeds) - image_text_embeds = image_text_embeds.reshape(batch_size, self.num_image_text_embeds, -1) - - # text - text_embeds = self.text_proj(text_embeds) - - return torch.cat([image_text_embeds, text_embeds], dim=1) - - -class ImageProjection(nn.Module): - def __init__( - self, - image_embed_dim: int = 768, - cross_attention_dim: int = 768, - num_image_text_embeds: int = 32, - ): - super().__init__() - - self.num_image_text_embeds = num_image_text_embeds - self.image_embeds = nn.Linear(image_embed_dim, self.num_image_text_embeds * cross_attention_dim) - self.norm = nn.LayerNorm(cross_attention_dim) - - def forward(self, image_embeds: torch.FloatTensor): - batch_size = image_embeds.shape[0] - - # image - image_embeds = self.image_embeds(image_embeds) - image_embeds = image_embeds.reshape(batch_size, self.num_image_text_embeds, -1) - image_embeds = self.norm(image_embeds) - return image_embeds - - -class CombinedTimestepLabelEmbeddings(nn.Module): - def __init__(self, num_classes, embedding_dim, class_dropout_prob=0.1): - super().__init__() - - self.time_proj = Timesteps(num_channels=256, flip_sin_to_cos=True, downscale_freq_shift=1) - self.timestep_embedder = TimestepEmbedding(in_channels=256, time_embed_dim=embedding_dim) - self.class_embedder = LabelEmbedding(num_classes, embedding_dim, class_dropout_prob) - - def forward(self, timestep, class_labels, hidden_dtype=None): - timesteps_proj = self.time_proj(timestep) - timesteps_emb = self.timestep_embedder(timesteps_proj.to(dtype=hidden_dtype)) # (N, D) - - class_labels = self.class_embedder(class_labels) # (N, D) - - conditioning = timesteps_emb + class_labels # (N, D) - - return conditioning - - -class TextTimeEmbedding(nn.Module): - def __init__(self, encoder_dim: int, time_embed_dim: int, num_heads: int = 64): - super().__init__() - self.norm1 = nn.LayerNorm(encoder_dim) - self.pool = AttentionPooling(num_heads, encoder_dim) - self.proj = nn.Linear(encoder_dim, time_embed_dim) - self.norm2 = nn.LayerNorm(time_embed_dim) - - def forward(self, hidden_states): - hidden_states = self.norm1(hidden_states) - hidden_states = self.pool(hidden_states) - hidden_states = self.proj(hidden_states) - hidden_states = self.norm2(hidden_states) - return hidden_states - - -class TextImageTimeEmbedding(nn.Module): - def __init__(self, text_embed_dim: int = 768, image_embed_dim: int = 768, time_embed_dim: int = 
1536): - super().__init__() - self.text_proj = nn.Linear(text_embed_dim, time_embed_dim) - self.text_norm = nn.LayerNorm(time_embed_dim) - self.image_proj = nn.Linear(image_embed_dim, time_embed_dim) - - def forward(self, text_embeds: torch.FloatTensor, image_embeds: torch.FloatTensor): - # text - time_text_embeds = self.text_proj(text_embeds) - time_text_embeds = self.text_norm(time_text_embeds) - - # image - time_image_embeds = self.image_proj(image_embeds) - - return time_image_embeds + time_text_embeds - - -class ImageTimeEmbedding(nn.Module): - def __init__(self, image_embed_dim: int = 768, time_embed_dim: int = 1536): - super().__init__() - self.image_proj = nn.Linear(image_embed_dim, time_embed_dim) - self.image_norm = nn.LayerNorm(time_embed_dim) - - def forward(self, image_embeds: torch.FloatTensor): - # image - time_image_embeds = self.image_proj(image_embeds) - time_image_embeds = self.image_norm(time_image_embeds) - return time_image_embeds - - -class ImageHintTimeEmbedding(nn.Module): - def __init__(self, image_embed_dim: int = 768, time_embed_dim: int = 1536): - super().__init__() - self.image_proj = nn.Linear(image_embed_dim, time_embed_dim) - self.image_norm = nn.LayerNorm(time_embed_dim) - self.input_hint_block = nn.Sequential( - nn.Conv2d(3, 16, 3, padding=1), - nn.SiLU(), - nn.Conv2d(16, 16, 3, padding=1), - nn.SiLU(), - nn.Conv2d(16, 32, 3, padding=1, stride=2), - nn.SiLU(), - nn.Conv2d(32, 32, 3, padding=1), - nn.SiLU(), - nn.Conv2d(32, 96, 3, padding=1, stride=2), - nn.SiLU(), - nn.Conv2d(96, 96, 3, padding=1), - nn.SiLU(), - nn.Conv2d(96, 256, 3, padding=1, stride=2), - nn.SiLU(), - nn.Conv2d(256, 4, 3, padding=1), - ) - - def forward(self, image_embeds: torch.FloatTensor, hint: torch.FloatTensor): - # image - time_image_embeds = self.image_proj(image_embeds) - time_image_embeds = self.image_norm(time_image_embeds) - hint = self.input_hint_block(hint) - return time_image_embeds, hint - - -class AttentionPooling(nn.Module): - # Copied from https://github.com/deep-floyd/IF/blob/2f91391f27dd3c468bf174be5805b4cc92980c0b/deepfloyd_if/model/nn.py#L54 - - def __init__(self, num_heads, embed_dim, dtype=None): - super().__init__() - self.dtype = dtype - self.positional_embedding = nn.Parameter(torch.randn(1, embed_dim) / embed_dim**0.5) - self.k_proj = nn.Linear(embed_dim, embed_dim, dtype=self.dtype) - self.q_proj = nn.Linear(embed_dim, embed_dim, dtype=self.dtype) - self.v_proj = nn.Linear(embed_dim, embed_dim, dtype=self.dtype) - self.num_heads = num_heads - self.dim_per_head = embed_dim // self.num_heads - - def forward(self, x): - bs, length, width = x.size() - - def shape(x): - # (bs, length, width) --> (bs, length, n_heads, dim_per_head) - x = x.view(bs, -1, self.num_heads, self.dim_per_head) - # (bs, length, n_heads, dim_per_head) --> (bs, n_heads, length, dim_per_head) - x = x.transpose(1, 2) - # (bs, n_heads, length, dim_per_head) --> (bs*n_heads, length, dim_per_head) - x = x.reshape(bs * self.num_heads, -1, self.dim_per_head) - # (bs*n_heads, length, dim_per_head) --> (bs*n_heads, dim_per_head, length) - x = x.transpose(1, 2) - return x - - class_token = x.mean(dim=1, keepdim=True) + self.positional_embedding.to(x.dtype) - x = torch.cat([class_token, x], dim=1) # (bs, length+1, width) - - # (bs*n_heads, class_token_length, dim_per_head) - q = shape(self.q_proj(class_token)) - # (bs*n_heads, length+class_token_length, dim_per_head) - k = shape(self.k_proj(x)) - v = shape(self.v_proj(x)) - - # (bs*n_heads, class_token_length, length+class_token_length): - scale = 1 
/ math.sqrt(math.sqrt(self.dim_per_head)) - weight = torch.einsum("bct,bcs->bts", q * scale, k * scale) # More stable with f16 than dividing afterwards - weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype) - - # (bs*n_heads, dim_per_head, class_token_length) - a = torch.einsum("bts,bcs->bct", weight, v) - - # (bs, length+1, width) - a = a.reshape(bs, -1, 1).transpose(1, 2) - - return a[:, 0, :] # cls_token diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_20k_voc12aug.py deleted file mode 100644 index 17206a5171dcc357c589a1711afa52d87faeece0..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/pascal_voc12_aug.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_20k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_40k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_40k_pascal_context.py deleted file mode 100644 index d09931048f762cd2ac224d62c2fe2ed8e0e148c8..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_40k_pascal_context.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './fcn_hr18_480x480_40k_pascal_context.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18_small', - backbone=dict( - extra=dict( - stage1=dict(num_blocks=(2, )), - stage2=dict(num_blocks=(2, 2)), - stage3=dict(num_modules=3, num_blocks=(2, 2, 2)), - stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2))))) diff --git a/spaces/Apex-X/ROOPOK/roop/processors/frame/core.py b/spaces/Apex-X/ROOPOK/roop/processors/frame/core.py deleted file mode 100644 index 498169d34a00e0a2547940380afd69967a2eca8c..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/ROOPOK/roop/processors/frame/core.py +++ /dev/null @@ -1,91 +0,0 @@ -import os -import sys -import importlib -import psutil -from concurrent.futures import ThreadPoolExecutor, as_completed -from queue import Queue -from types import ModuleType -from typing import Any, List, Callable -from tqdm import tqdm - -import roop - -FRAME_PROCESSORS_MODULES: List[ModuleType] = [] -FRAME_PROCESSORS_INTERFACE = [ - 'pre_check', - 'pre_start', - 'process_frame', - 'process_frames', - 'process_image', - 'process_video', - 'post_process' -] - - -def load_frame_processor_module(frame_processor: str) -> Any: - try: - frame_processor_module = importlib.import_module(f'roop.processors.frame.{frame_processor}') - for method_name in FRAME_PROCESSORS_INTERFACE: - if not hasattr(frame_processor_module, method_name): - raise NotImplementedError - except ModuleNotFoundError: - sys.exit(f'Frame processor {frame_processor} not found.') - except NotImplementedError: - sys.exit(f'Frame processor {frame_processor} not implemented correctly.') - return frame_processor_module - - -def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType]: - global FRAME_PROCESSORS_MODULES - - if not FRAME_PROCESSORS_MODULES: - for frame_processor in frame_processors: - frame_processor_module = load_frame_processor_module(frame_processor) - FRAME_PROCESSORS_MODULES.append(frame_processor_module) - 
return FRAME_PROCESSORS_MODULES - - -def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], update: Callable[[], None]) -> None: - with ThreadPoolExecutor(max_workers=roop.globals.execution_threads) as executor: - futures = [] - queue = create_queue(temp_frame_paths) - queue_per_future = max(len(temp_frame_paths) // roop.globals.execution_threads, 1) - while not queue.empty(): - future = executor.submit(process_frames, source_path, pick_queue(queue, queue_per_future), update) - futures.append(future) - for future in as_completed(futures): - future.result() - - -def create_queue(temp_frame_paths: List[str]) -> Queue[str]: - queue: Queue[str] = Queue() - for frame_path in temp_frame_paths: - queue.put(frame_path) - return queue - - -def pick_queue(queue: Queue[str], queue_per_future: int) -> List[str]: - queues = [] - for _ in range(queue_per_future): - if not queue.empty(): - queues.append(queue.get()) - return queues - - -def process_video(source_path: str, frame_paths: list[str], process_frames: Callable[[str, List[str], Any], None]) -> None: - progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]' - total = len(frame_paths) - with tqdm(total=total, desc='Processing', unit='frame', dynamic_ncols=True, bar_format=progress_bar_format) as progress: - multi_process_frame(source_path, frame_paths, process_frames, lambda: update_progress(progress)) - - -def update_progress(progress: Any = None) -> None: - process = psutil.Process(os.getpid()) - memory_usage = process.memory_info().rss / 1024 / 1024 / 1024 - progress.set_postfix({ - 'memory_usage': '{:.2f}'.format(memory_usage).zfill(5) + 'GB', - 'execution_providers': roop.globals.execution_providers, - 'execution_threads': roop.globals.execution_threads - }) - progress.refresh() - progress.update(1) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/__about__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/__about__.py deleted file mode 100644 index 3551bc2d29846441299cf57b397b02fc164c99b9..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/__about__.py +++ /dev/null @@ -1,26 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
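# Illustrative usage sketch for roop/processors/frame/core.py above: a minimal way the
# threaded frame-processing helpers might be driven, assuming frames were already
# extracted to disk. The processor name "face_swapper", the source image and the frame
# paths are assumptions for illustration only.
import roop.globals
from roop.processors.frame.core import get_frame_processors_modules, process_video

roop.globals.execution_threads = 4                           # worker threads for the pool
roop.globals.execution_providers = ["CPUExecutionProvider"]  # assumed ONNX providers

source_path = "source.jpg"                                          # assumed source face image
temp_frame_paths = [f"frames/{i:04d}.png" for i in range(1, 101)]   # assumed extracted frames

for frame_processor in get_frame_processors_modules(["face_swapper"]):
    # Each module implements the FRAME_PROCESSORS_INTERFACE methods checked above;
    # process_video fans process_frames out over the thread pool in multi_process_frame
    # and reports progress via tqdm.
    if not frame_processor.pre_check() or not frame_processor.pre_start():
        continue
    process_video(source_path, temp_frame_paths, frame_processor.process_frames)
    frame_processor.post_process()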
- -__all__ = [ - "__title__", - "__summary__", - "__uri__", - "__version__", - "__author__", - "__email__", - "__license__", - "__copyright__", -] - -__title__ = "packaging" -__summary__ = "Core utilities for Python packages" -__uri__ = "https://github.com/pypa/packaging" - -__version__ = "21.3" - -__author__ = "Donald Stufft and individual contributors" -__email__ = "donald@stufft.io" - -__license__ = "BSD-2-Clause or Apache-2.0" -__copyright__ = "2014-2019 %s" % __author__ diff --git a/spaces/AttendAndExcite/Attend-and-Excite/app.py b/spaces/AttendAndExcite/Attend-and-Excite/app.py deleted file mode 100644 index f02fff60ba580e64662a54de0cc99cf462bfb69a..0000000000000000000000000000000000000000 --- a/spaces/AttendAndExcite/Attend-and-Excite/app.py +++ /dev/null @@ -1,289 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import random - -import gradio as gr -import numpy as np -import PIL.Image -import spaces -import torch -from diffusers import StableDiffusionAttendAndExcitePipeline, StableDiffusionPipeline - -DESCRIPTION = """\ -# Attend-and-Excite - -This is a demo for [Attend-and-Excite](https://arxiv.org/abs/2301.13826). -Attend-and-Excite performs attention-based generative semantic guidance to mitigate subject neglect in Stable Diffusion. -Select a prompt and a set of indices matching the subjects you wish to strengthen (the `Check token indices` cell can help map between a word and its index). -""" - -if not torch.cuda.is_available(): - DESCRIPTION += "\n

<p>Running on CPU 🥶 This demo does not work on CPU.</p>
" - -if torch.cuda.is_available(): - device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - model_id = "CompVis/stable-diffusion-v1-4" - ax_pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(model_id) - ax_pipe.to(device) - sd_pipe = StableDiffusionPipeline.from_pretrained(model_id) - sd_pipe.to(device) - - -MAX_INFERENCE_STEPS = 100 -MAX_SEED = np.iinfo(np.int32).max - - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - - -def get_token_table(prompt: str) -> list[tuple[int, str]]: - tokens = [ax_pipe.tokenizer.decode(t) for t in ax_pipe.tokenizer(prompt)["input_ids"]] - tokens = tokens[1:-1] - return list(enumerate(tokens, start=1)) - - -@spaces.GPU -def run( - prompt: str, - indices_to_alter_str: str, - seed: int = 0, - apply_attend_and_excite: bool = True, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - scale_factor: int = 20, - thresholds: dict[int, float] = { - 10: 0.5, - 20: 0.8, - }, - max_iter_to_alter: int = 25, -) -> PIL.Image.Image: - if num_inference_steps > MAX_INFERENCE_STEPS: - raise gr.Error(f"Number of steps cannot exceed {MAX_INFERENCE_STEPS}.") - - generator = torch.Generator(device=device).manual_seed(seed) - if apply_attend_and_excite: - try: - token_indices = list(map(int, indices_to_alter_str.split(","))) - except Exception: - raise ValueError("Invalid token indices.") - out = ax_pipe( - prompt=prompt, - token_indices=token_indices, - guidance_scale=guidance_scale, - generator=generator, - num_inference_steps=num_inference_steps, - max_iter_to_alter=max_iter_to_alter, - thresholds=thresholds, - scale_factor=scale_factor, - ) - else: - out = sd_pipe( - prompt=prompt, - guidance_scale=guidance_scale, - generator=generator, - num_inference_steps=num_inference_steps, - ) - return out.images[0] - - -def process_example( - prompt: str, - indices_to_alter_str: str, - seed: int, - apply_attend_and_excite: bool, -) -> tuple[list[tuple[int, str]], PIL.Image.Image]: - token_table = get_token_table(prompt) - result = run( - prompt=prompt, - indices_to_alter_str=indices_to_alter_str, - seed=seed, - apply_attend_and_excite=apply_attend_and_excite, - ) - return token_table, result - - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - - with gr.Row(): - with gr.Column(): - prompt = gr.Text( - label="Prompt", - max_lines=1, - placeholder="A pod of dolphins leaping out of the water in an ocean with a ship on the background", - ) - with gr.Accordion(label="Check token indices", open=False): - show_token_indices_button = gr.Button("Show token indices") - token_indices_table = gr.Dataframe(label="Token indices", headers=["Index", "Token"], col_count=2) - token_indices_str = gr.Text( - label="Token indices (a comma-separated list indices of the tokens you wish to alter)", - max_lines=1, - placeholder="4,16", - ) - apply_attend_and_excite = gr.Checkbox(label="Apply Attend-and-Excite", value=True) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=0, - ) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - num_inference_steps = gr.Slider( - label="Number of inference steps", - minimum=1, - maximum=MAX_INFERENCE_STEPS, - step=1, - value=50, - ) - guidance_scale = gr.Slider( - label="Guidance scale", - minimum=0, - maximum=50, - step=0.1, 
- value=7.5, - ) - run_button = gr.Button("Generate") - with gr.Column(): - result = gr.Image(label="Result") - - with gr.Row(): - examples = [ - [ - "A mouse and a red car", - "2,6", - 2098, - True, - ], - [ - "A mouse and a red car", - "2,6", - 2098, - False, - ], - [ - "A horse and a dog", - "2,5", - 123, - True, - ], - [ - "A horse and a dog", - "2,5", - 123, - False, - ], - [ - "A painting of an elephant with glasses", - "5,7", - 123, - True, - ], - [ - "A painting of an elephant with glasses", - "5,7", - 123, - False, - ], - [ - "A playful kitten chasing a butterfly in a wildflower meadow", - "3,6,10", - 123, - True, - ], - [ - "A playful kitten chasing a butterfly in a wildflower meadow", - "3,6,10", - 123, - False, - ], - [ - "A grizzly bear catching a salmon in a crystal clear river surrounded by a forest", - "2,6,15", - 123, - True, - ], - [ - "A grizzly bear catching a salmon in a crystal clear river surrounded by a forest", - "2,6,15", - 123, - False, - ], - [ - "A pod of dolphins leaping out of the water in an ocean with a ship on the background", - "4,16", - 123, - True, - ], - [ - "A pod of dolphins leaping out of the water in an ocean with a ship on the background", - "4,16", - 123, - False, - ], - ] - gr.Examples( - examples=examples, - inputs=[ - prompt, - token_indices_str, - seed, - apply_attend_and_excite, - ], - outputs=[ - token_indices_table, - result, - ], - fn=process_example, - cache_examples=os.getenv("CACHE_EXAMPLES") == "1", - examples_per_page=20, - ) - - show_token_indices_button.click( - fn=get_token_table, - inputs=prompt, - outputs=token_indices_table, - queue=False, - api_name="get-token-table", - ) - - gr.on( - triggers=[prompt.submit, token_indices_str.submit, run_button.click], - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=get_token_table, - inputs=prompt, - outputs=token_indices_table, - queue=False, - api_name=False, - ).then( - fn=run, - inputs=[ - prompt, - token_indices_str, - seed, - apply_attend_and_excite, - num_inference_steps, - guidance_scale, - ], - outputs=result, - api_name="run", - ) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/AvinashRamesh23/AIEditor/app.py b/spaces/AvinashRamesh23/AIEditor/app.py deleted file mode 100644 index 27775f6315de44aaafe185222f053815d2e5747d..0000000000000000000000000000000000000000 --- a/spaces/AvinashRamesh23/AIEditor/app.py +++ /dev/null @@ -1,435 +0,0 @@ -import streamlit as st -import whisper -import re -from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip -from moviepy.editor import * -import math -from stable_whisper import modify_model,results_to_word_srt -import asyncio -from deepgram import Deepgram -from typing import Dict -import os -import moviepy.editor as mp -from pytube import YouTube -from time import sleep -import pandas as pd - -import calendar -import time - -current_GMT = time.gmtime() - -time_stamp = calendar.timegm(current_GMT) - -st.title('AI Editor for Content Creators!') - -@st.cache(suppress_st_warning=True) -#load whisper model -def load_model(model_selected): - #load medium model - model = whisper.load_model(model_selected) - # modify model to get word timestamp - modify_model(model) - return model - -#transcribe -@st.cache(suppress_st_warning=True) -def transcribe_video(vid,model_selected): - model = load_model(model_selected) - options = whisper.DecodingOptions(fp16=False,language="English") - result = model.transcribe(vid, **options.__dict__) - 
result['srt'] = whisper_result_to_srt(result) - return result - -#srt generation -def whisper_result_to_srt(result): - text = [] - for i,s in enumerate(result['segments']): - text.append(str(i+1)) - time_start = s['start'] - hours, minutes, seconds = int(time_start/3600), (time_start/60) % 60, (time_start) % 60 - timestamp_start = "%02d:%02d:%06.3f" % (hours, minutes, seconds) - timestamp_start = timestamp_start.replace('.',',') - time_end = s['end'] - hours, minutes, seconds = int(time_end/3600), (time_end/60) % 60, (time_end) % 60 - timestamp_end = "%02d:%02d:%06.3f" % (hours, minutes, seconds) - timestamp_end = timestamp_end.replace('.',',') - text.append(timestamp_start + " --> " + timestamp_end) - text.append(s['text'].strip() + "\n") - return "\n".join(text) - -#compute speaking_time -async def compute_speaking_time(transcript_data: Dict,data:str) -> None: - if 'results' in transcript_data: - transcript = transcript_data['results']['channels'][0]['alternatives'][0]['words'] - total_speaker_time = {} - speaker_words = [] - current_speaker = -1 - - for speaker in transcript: - speaker_number = speaker["speaker"] - - if speaker_number is not current_speaker: - current_speaker = speaker_number - speaker_words.append([speaker_number, [], 0]) - - try: - total_speaker_time[speaker_number][1] += 1 - except KeyError: - total_speaker_time[speaker_number] = [0,1] - - get_word = speaker["word"] - speaker_words[-1][1].append(get_word) - - total_speaker_time[speaker_number][0] += speaker["end"] - speaker["start"] - speaker_words[-1][2] += speaker["end"] - speaker["start"] - - for speaker, words, time_amount in speaker_words: - print(f"Speaker {speaker}: {' '.join(words)}") - data+=f"\nSpeaker {speaker}: {' '.join(words)}" - print(f"Speaker {speaker}: {time_amount}") - data+=f"\nSpeaker {speaker}: {time_amount}" - - - for speaker, (total_time, amount) in total_speaker_time.items(): - print(f"Speaker {speaker} avg time per phrase: {total_time/amount} ") - data+=f"\nSpeaker {speaker} avg time per phrase: {total_time/amount} " - print(f"Total time of conversation: {total_time}") - data+=f"\nTotal time of conversation: {total_time}" - return transcript,data - -#extract audio from video -def extract_write_audio(vd): - my_clip = mp.VideoFileClip(f'{vd}') - my_clip.audio.write_audiofile(f"audio.wav") - -#speaker diarization workflow -async def speaker_diarization_flow(PATH_TO_FILE): - audio = extract_write_audio(PATH_TO_FILE) - data = '' - DEEPGRAM_API_KEY = "3dc39bf904babb858390455b1a1399e221bf87f8" - deepgram = Deepgram(DEEPGRAM_API_KEY) - with open(PATH_TO_FILE, 'rb') as audio: - source = {'buffer': audio, 'mimetype': 'audio/wav'} - transcription = await deepgram.transcription.prerecorded(source, {'punctuate': True, 'diarize': True}) - transcript,final_data = await compute_speaking_time(transcription,data) - return final_data - -# speaker diarization main funciton -async def speaker_diarization(PATH_TO_FILE): - data = await speaker_diarization_flow(PATH_TO_FILE) - print("data is", data) - return data - -#find filler words -def filler_words_finder(result_data): - word_map_prior_edit=set() - word_map_after_edit=set() - #my filler words sample - filler_words={'um','ah','you know','mmm','mmm','er','uh','Hmm','actually','basically','seriously','mhm','uh huh','uh','huh','ooh','aah','ooh'} - filler_words_timestamp=set() - for keys in result_data: - if keys == 'segments': - prev=0 - for i in result_data[keys]: - for word in i['whole_word_timestamps']: - lower_case = re.sub(r'\W','',word['word'].lower()) - 
word_map_prior_edit.add(word['timestamp']) - if lower_case in filler_words or lower_case.startswith(('hm','aa','mm','oo')): - st.write(word['word'].lower(),word['timestamp']) - print(word['word'].lower(),word['timestamp']) - filler_words_timestamp.add(word['timestamp']) - prev=word['timestamp'] - continue - word_map_after_edit.add((prev,word['timestamp'])) - prev=word['timestamp'] - return word_map_after_edit, filler_words_timestamp - -def merge_overlapping_time_intervals(intervals): - stack = [] - result=[intervals[0]] - - for interval in intervals: - interval2=result[-1] - - if overlap(interval,interval2): - result[-1] = [min(interval[0],interval2[0]),max(interval[1],interval2[1])] - else: - result.append(interval) - - return result - -def overlap(interval1,interval2): - return min(interval1[1],interval2[1])-max(interval1[0],interval2[0]) >= 0 - -#assembly ai endpoints -import requests -transcript_endpoint = "https://api.assemblyai.com/v2/transcript" -upload_endpoint = "https://api.assemblyai.com/v2/upload" - -headers = { - "authorization": "05e515bf6b474966bc48bbdd1448b3cf", - "content-type": "application/json" -} - -def upload_to_AssemblyAI(save_location): - CHUNK_SIZE = 5242880 - def read_file(filename): - with open(filename, 'rb') as _file: - while True: - print("chunk uploaded") - data = _file.read(CHUNK_SIZE) - if not data: - break - yield data - - upload_response = requests.post( - upload_endpoint, - headers=headers, data=read_file(save_location) - ) - print(upload_response.json()) - audio_url = upload_response.json()['upload_url'] - print('Uploaded to', audio_url) - return audio_url - - -def start_analysis(audio_url,type): - ## Start transcription job of audio file - data = { - 'audio_url': audio_url, - 'iab_categories': True, - 'content_safety': True, - "summarization": True, - "summary_type": "bullets", - "summary_model":type - } - if type=='conversational': - data["speaker_labels"]= True - - transcript_response = requests.post(transcript_endpoint, json=data, headers=headers) - print(transcript_response.json()) - transcript_id = transcript_response.json()['id'] - polling_endpoint = transcript_endpoint + "/" + transcript_id - print("Transcribing at", polling_endpoint) - return polling_endpoint - -def get_analysis_results(polling_endpoint): - status = 'submitted' - - while True: - print(status) - polling_response = requests.get(polling_endpoint, headers=headers) - status = polling_response.json()['status'] - # st.write(polling_response.json()) - # st.write(status) - if status == 'submitted' or status == 'processing' or status == 'queued': - print('not ready yet') - sleep(10) - - elif status == 'completed': - print('creating transcript') - return polling_response - break - - else: - print('error') - return False - break - -def pii_redact(audiourl,options): - print(options,audiourl) - endpoint = "https://api.assemblyai.com/v2/transcript" - json = { - "audio_url": audiourl, - "redact_pii": True, - "redact_pii_audio": True, - "redact_pii_policies": options - } - - headers = { - "authorization": "05e515bf6b474966bc48bbdd1448b3cf", - "content-type": "application/json", - } - - response = requests.post(endpoint, json=json, headers=headers) - print(response.json()) - transcript_id = response.json()['id'] - polling_endpoint = endpoint + "/" + transcript_id - return polling_endpoint - -def pii_redact_audio(polling_endpoint): - status = 'submitted' - headers = { - "authorization": "05e515bf6b474966bc48bbdd1448b3cf", - "content-type": "application/json", - } - while True: - print(status) - 
polling_response = requests.get(polling_endpoint, headers=headers) - status = polling_response.json()['status'] - if status == 'submitted' or status == 'processing' or status == 'queued': - print('not ready yet') - sleep(10) - - elif status == 'completed': - print('creating transcript') - return polling_response - break - - else: - print('error') - return False - break - -def download_redact_audio(pooling_enpoint): - headers = { - "authorization": "05e515bf6b474966bc48bbdd1448b3cf", - "content-type": "application/json", - } - - redacted_audio_response = requests.get(pooling_enpoint + "/redacted-audio",headers=headers) - print(redacted_audio_response.json()) - redacted_audio = requests.get(redacted_audio_response.json()['redacted_audio_url']) - with open('redacted_audio.mp3', 'wb') as f: - f.write(redacted_audio.content) - -def redact_audio_video_display(vd,audio): - audioclip = AudioFileClip(audio) - clip = VideoFileClip(vd) - videoclip = clip.set_audio(audioclip) - videoclip.write_videofile("Redacted_video.mp4") - st.video("Redacted_video.mp4") - -async def main(uploaded_video,model_selected): - try: - vid = uploaded_video.name - with open(vid, mode='wb') as f: - f.write(uploaded_video.read()) # save video to disk - except: - with st.spinner('Downloading Yotube Video'): - yt = YouTube(uploaded_video) - title=yt.title - vid = f"{title}.mp4" - yt.streams.filter(file_extension="mp4").get_by_resolution("360p").download(filename=vid) - finally: - name = vid.split('.')[0] - preview = st.video(vid) - #extracting the transcription result - with st.spinner('Transcribing Video, Wait for it...'): - result = transcribe_video(vid,model_selected) - st.text_area("Edit Transcript",result["text"]) - col1, col2, col3, col4, col5, col6 = st.columns([1,1,1,1,1,1]) - tab1, tab2, tab3, tab4, tab5, tab6 = st.tabs(["Remove Filler Words","Edit Video" ,"Download SRT", "Perform Speaker Diarization","Content Analyzer","PII redactation"]) - - with tab1: - filler_word = st.button('Edit/Remove Filler Words with a click of a button') - if filler_word: - with st.spinner(text="In progress..."): - word_map_after_edit, filler_words_timestamp = filler_words_finder(result) - final_intervals = merge_overlapping_time_intervals(sorted(list(word_map_after_edit))) - subclips=[] - for start,end in final_intervals: - clip = VideoFileClip(vid) - tmp = clip.subclip(start,(end - end*0.1)) - subclips.append(tmp) - #concatenate subclips without filler words - final_clip = concatenate_videoclips(subclips) - final_clip.write_videofile(f"remove_{vid}") - preview = st.video(f"remove_{vid}") - - with tab2: - save = st.button('Edit') - - with tab3: - download = st.download_button('Download SRT', result['srt'],f'{name}.srt') - if download: - st.write('Thanks for downloading!') - - with tab4: - identify_download_speaker = st.button('Perform Speaker Diarization') - if identify_download_speaker: - with st.spinner(text="In progress..."): - results = await speaker_diarization(vid) - download_speaker = st.download_button("download speaker_diarization",results,'diarization_stats.txt') - if download_speaker: - st.write('Thanks for downloading!') - - with tab5: - type = st.selectbox('Summary Type?',('informative', 'conversational', 'catchy')) - Analyze_content = st.button("Start Content Analysis") - if Analyze_content: - with st.spinner(text="In progress..."): - audio = extract_write_audio(vid) - audio_url = upload_to_AssemblyAI("audio.wav") - # start analysis of the file - polling_endpoint = start_analysis(audio_url,type) - # receive the results - 
results = get_analysis_results(polling_endpoint) - - # separate analysis results - summary = results.json()['summary'] - content_moderation = results.json()["content_safety_labels"] - topic_labels = results.json()["iab_categories_result"] - - my_expander1 = st.expander(label='Summary') - my_expander2 = st.expander(label='Content Moderation') - my_expander3 = st.expander(label='Topic Discussed') - - with my_expander1: - st.header("Video summary") - st.write(summary) - - with my_expander2: - st.header("Sensitive content") - if content_moderation['summary'] != {}: - st.subheader('🚨 Mention of the following sensitive topics detected.') - moderation_df = pd.DataFrame(content_moderation['summary'].items()) - moderation_df.columns = ['topic','confidence'] - st.dataframe(moderation_df, use_container_width=True) - else: - st.subheader('✅ All clear! No sensitive content detected.') - - with my_expander3: - st.header("Topics discussed") - topics_df = pd.DataFrame(topic_labels['summary'].items()) - topics_df.columns = ['topic','confidence'] - topics_df["topic"] = topics_df["topic"].str.split(">") - expanded_topics = topics_df.topic.apply(pd.Series).add_prefix('topic_level_') - topics_df = topics_df.join(expanded_topics).drop('topic', axis=1).sort_values(['confidence'], ascending=False).fillna('') - st.dataframe(topics_df, use_container_width=True) - - with tab6: - options = st.multiselect('Select Policies to redact from video',["medical_process","medical_condition","blood_type","drug","injury","number_sequence","email_address","date_of_birth","phone_number","us_social_security_number","credit_card_number","credit_card_expiration","credit_card_cvv","date","nationality","event","language","location","money_amount","person_name","person_age","organization","political_affiliation","occupation","religion","drivers_license","banking_information"],["person_name", 'credit_card_number']) - Perform_redact = st.button("Start PII Redaction") - if Perform_redact: - with st.spinner(text="In progress..."): - audio = extract_write_audio(vid) - audio_url = upload_to_AssemblyAI("audio.wav") - print(audio_url) - print([ x for x in options ]) - polling_endpoint = pii_redact(audio_url,options) - results = pii_redact_audio(polling_endpoint) - download_redact_audio(polling_endpoint) - redact_audio_video_display(vid,"redacted_audio.mp3") - -Model_type = st.sidebar.selectbox("Choose Model",('Tiny - Best for Srt generation', 'Base - Best suited for various AI services', 'Medium - Use this model for filler word removal'),0) -upload_video = st.sidebar.file_uploader("Upload mp4 file",type=["mp4","mpeg"]) -youtube_url = st.sidebar.text_input("Enter a youtube video url") -# submit_button = st.sidebar.button("Extract Youtube Video") - -if Model_type.startswith("Tiny"): - model_selected = 'tiny.en' -if Model_type.startswith("Base"): - model_selected = 'base.en' -if Model_type.startswith("Small"): - model_selected = 'small.en' -if Model_type.startswith("Medium"): - model_selected = 'medium.en' - -if youtube_url: - asyncio.run(main(youtube_url,model_selected)) - -if upload_video: - asyncio.run(main(upload_video,model_selected)) - -st.sidebar.write("Kindly upload or provide youtube link with less a minute of video for faster performance and avoid excess usage of the free tier.") diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet.py 
b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet.py deleted file mode 100644 index feb7a8222487756d38482da95183bbbcbbe96ed9..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet.py +++ /dev/null @@ -1,864 +0,0 @@ - -import math -import json -import copy -from typing import List, Dict -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.modeling.proposal_generator.build import PROPOSAL_GENERATOR_REGISTRY -from detectron2.layers import ShapeSpec, cat -from detectron2.structures import Instances, Boxes -from detectron2.modeling import detector_postprocess -from detectron2.utils.comm import get_world_size -from detectron2.config import configurable - -from ..layers.heatmap_focal_loss import heatmap_focal_loss_jit -from ..layers.heatmap_focal_loss import binary_heatmap_focal_loss -from ..layers.iou_loss import IOULoss -from ..layers.ml_nms import ml_nms -from ..debug import debug_train, debug_test -from .utils import reduce_sum, _transpose -from .centernet_head import CenterNetHead - -__all__ = ["CenterNet"] - -INF = 100000000 - -@PROPOSAL_GENERATOR_REGISTRY.register() -class CenterNet(nn.Module): - @configurable - def __init__(self, - # input_shape: Dict[str, ShapeSpec], - in_channels=256, - *, - num_classes=80, - in_features=("p3", "p4", "p5", "p6", "p7"), - strides=(8, 16, 32, 64, 128), - score_thresh=0.05, - hm_min_overlap=0.8, - loc_loss_type='giou', - min_radius=4, - hm_focal_alpha=0.25, - hm_focal_beta=4, - loss_gamma=2.0, - reg_weight=2.0, - not_norm_reg=True, - with_agn_hm=False, - only_proposal=False, - as_proposal=False, - not_nms=False, - pos_weight=1., - neg_weight=1., - sigmoid_clamp=1e-4, - ignore_high_fp=-1., - center_nms=False, - sizes_of_interest=[[0,80],[64,160],[128,320],[256,640],[512,10000000]], - more_pos=False, - more_pos_thresh=0.2, - more_pos_topk=9, - pre_nms_topk_train=1000, - pre_nms_topk_test=1000, - post_nms_topk_train=100, - post_nms_topk_test=100, - nms_thresh_train=0.6, - nms_thresh_test=0.6, - no_reduce=False, - debug=False, - vis_thresh=0.5, - pixel_mean=[103.530,116.280,123.675], - pixel_std=[1.0,1.0,1.0], - device='cuda', - centernet_head=None, - ): - super().__init__() - self.num_classes = num_classes - self.in_features = in_features - self.strides = strides - self.score_thresh = score_thresh - self.min_radius = min_radius - self.hm_focal_alpha = hm_focal_alpha - self.hm_focal_beta = hm_focal_beta - self.loss_gamma = loss_gamma - self.reg_weight = reg_weight - self.not_norm_reg = not_norm_reg - self.with_agn_hm = with_agn_hm - self.only_proposal = only_proposal - self.as_proposal = as_proposal - self.not_nms = not_nms - self.pos_weight = pos_weight - self.neg_weight = neg_weight - self.sigmoid_clamp = sigmoid_clamp - self.ignore_high_fp = ignore_high_fp - self.center_nms = center_nms - self.sizes_of_interest = sizes_of_interest - self.more_pos = more_pos - self.more_pos_thresh = more_pos_thresh - self.more_pos_topk = more_pos_topk - self.pre_nms_topk_train = pre_nms_topk_train - self.pre_nms_topk_test = pre_nms_topk_test - self.post_nms_topk_train = post_nms_topk_train - self.post_nms_topk_test = post_nms_topk_test - self.nms_thresh_train = nms_thresh_train - self.nms_thresh_test = nms_thresh_test - self.no_reduce = no_reduce - self.debug = debug - self.vis_thresh = vis_thresh - if 
self.center_nms: - self.not_nms = True - self.iou_loss = IOULoss(loc_loss_type) - assert (not self.only_proposal) or self.with_agn_hm - # delta for rendering heatmap - self.delta = (1 - hm_min_overlap) / (1 + hm_min_overlap) - if centernet_head is None: - self.centernet_head = CenterNetHead( - in_channels=in_channels, - num_levels=len(in_features), - with_agn_hm=with_agn_hm, - only_proposal=only_proposal) - else: - self.centernet_head = centernet_head - if self.debug: - pixel_mean = torch.Tensor(pixel_mean).to( - torch.device(device)).view(3, 1, 1) - pixel_std = torch.Tensor(pixel_std).to( - torch.device(device)).view(3, 1, 1) - self.denormalizer = lambda x: x * pixel_std + pixel_mean - - @classmethod - def from_config(cls, cfg, input_shape): - ret = { - # 'input_shape': input_shape, - 'in_channels': input_shape[ - cfg.MODEL.CENTERNET.IN_FEATURES[0]].channels, - 'num_classes': cfg.MODEL.CENTERNET.NUM_CLASSES, - 'in_features': cfg.MODEL.CENTERNET.IN_FEATURES, - 'strides': cfg.MODEL.CENTERNET.FPN_STRIDES, - 'score_thresh': cfg.MODEL.CENTERNET.INFERENCE_TH, - 'loc_loss_type': cfg.MODEL.CENTERNET.LOC_LOSS_TYPE, - 'hm_min_overlap': cfg.MODEL.CENTERNET.HM_MIN_OVERLAP, - 'min_radius': cfg.MODEL.CENTERNET.MIN_RADIUS, - 'hm_focal_alpha': cfg.MODEL.CENTERNET.HM_FOCAL_ALPHA, - 'hm_focal_beta': cfg.MODEL.CENTERNET.HM_FOCAL_BETA, - 'loss_gamma': cfg.MODEL.CENTERNET.LOSS_GAMMA, - 'reg_weight': cfg.MODEL.CENTERNET.REG_WEIGHT, - 'not_norm_reg': cfg.MODEL.CENTERNET.NOT_NORM_REG, - 'with_agn_hm': cfg.MODEL.CENTERNET.WITH_AGN_HM, - 'only_proposal': cfg.MODEL.CENTERNET.ONLY_PROPOSAL, - 'as_proposal': cfg.MODEL.CENTERNET.AS_PROPOSAL, - 'not_nms': cfg.MODEL.CENTERNET.NOT_NMS, - 'pos_weight': cfg.MODEL.CENTERNET.POS_WEIGHT, - 'neg_weight': cfg.MODEL.CENTERNET.NEG_WEIGHT, - 'sigmoid_clamp': cfg.MODEL.CENTERNET.SIGMOID_CLAMP, - 'ignore_high_fp': cfg.MODEL.CENTERNET.IGNORE_HIGH_FP, - 'center_nms': cfg.MODEL.CENTERNET.CENTER_NMS, - 'sizes_of_interest': cfg.MODEL.CENTERNET.SOI, - 'more_pos': cfg.MODEL.CENTERNET.MORE_POS, - 'more_pos_thresh': cfg.MODEL.CENTERNET.MORE_POS_THRESH, - 'more_pos_topk': cfg.MODEL.CENTERNET.MORE_POS_TOPK, - 'pre_nms_topk_train': cfg.MODEL.CENTERNET.PRE_NMS_TOPK_TRAIN, - 'pre_nms_topk_test': cfg.MODEL.CENTERNET.PRE_NMS_TOPK_TEST, - 'post_nms_topk_train': cfg.MODEL.CENTERNET.POST_NMS_TOPK_TRAIN, - 'post_nms_topk_test': cfg.MODEL.CENTERNET.POST_NMS_TOPK_TEST, - 'nms_thresh_train': cfg.MODEL.CENTERNET.NMS_TH_TRAIN, - 'nms_thresh_test': cfg.MODEL.CENTERNET.NMS_TH_TEST, - 'no_reduce': cfg.MODEL.CENTERNET.NO_REDUCE, - 'debug': cfg.DEBUG, - 'vis_thresh': cfg.VIS_THRESH, - 'pixel_mean': cfg.MODEL.PIXEL_MEAN, - 'pixel_std': cfg.MODEL.PIXEL_STD, - 'device': cfg.MODEL.DEVICE, - 'centernet_head': CenterNetHead( - cfg, [input_shape[f] for f in cfg.MODEL.CENTERNET.IN_FEATURES]), - } - return ret - - - def forward(self, images, features_dict, gt_instances): - features = [features_dict[f] for f in self.in_features] - clss_per_level, reg_pred_per_level, agn_hm_pred_per_level = \ - self.centernet_head(features) - grids = self.compute_grids(features) - shapes_per_level = grids[0].new_tensor( - [(x.shape[2], x.shape[3]) for x in reg_pred_per_level]) - - if not self.training: - return self.inference( - images, clss_per_level, reg_pred_per_level, - agn_hm_pred_per_level, grids) - else: - pos_inds, labels, reg_targets, flattened_hms = \ - self._get_ground_truth( - grids, shapes_per_level, gt_instances) - # logits_pred: M x F, reg_pred: M x 4, agn_hm_pred: M - logits_pred, reg_pred, agn_hm_pred = 
self._flatten_outputs( - clss_per_level, reg_pred_per_level, agn_hm_pred_per_level) - - if self.more_pos: - # add more pixels as positive if \ - # 1. they are within the center3x3 region of an object - # 2. their regression losses are small (= 0).squeeze(1) - reg_pred = reg_pred[reg_inds] - reg_targets_pos = reg_targets[reg_inds] - reg_weight_map = flattened_hms.max(dim=1)[0] - reg_weight_map = reg_weight_map[reg_inds] - reg_weight_map = reg_weight_map * 0 + 1 \ - if self.not_norm_reg else reg_weight_map - if self.no_reduce: - reg_norm = max(reg_weight_map.sum(), 1) - else: - reg_norm = max(reduce_sum(reg_weight_map.sum()).item() / num_gpus, 1) - - reg_loss = self.reg_weight * self.iou_loss( - reg_pred, reg_targets_pos, reg_weight_map, - reduction='sum') / reg_norm - losses['loss_centernet_loc'] = reg_loss - - if self.with_agn_hm: - cat_agn_heatmap = flattened_hms.max(dim=1)[0] # M - agn_pos_loss, agn_neg_loss = binary_heatmap_focal_loss( - agn_hm_pred, cat_agn_heatmap, pos_inds, - alpha=self.hm_focal_alpha, - beta=self.hm_focal_beta, - gamma=self.loss_gamma, - sigmoid_clamp=self.sigmoid_clamp, - ignore_high_fp=self.ignore_high_fp, - ) - agn_pos_loss = self.pos_weight * agn_pos_loss / num_pos_avg - agn_neg_loss = self.neg_weight * agn_neg_loss / num_pos_avg - losses['loss_centernet_agn_pos'] = agn_pos_loss - losses['loss_centernet_agn_neg'] = agn_neg_loss - - if self.debug: - print('losses', losses) - print('total_num_pos', total_num_pos) - return losses - - - def compute_grids(self, features): - grids = [] - for level, feature in enumerate(features): - h, w = feature.size()[-2:] - shifts_x = torch.arange( - 0, w * self.strides[level], - step=self.strides[level], - dtype=torch.float32, device=feature.device) - shifts_y = torch.arange( - 0, h * self.strides[level], - step=self.strides[level], - dtype=torch.float32, device=feature.device) - shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x) - shift_x = shift_x.reshape(-1) - shift_y = shift_y.reshape(-1) - grids_per_level = torch.stack((shift_x, shift_y), dim=1) + \ - self.strides[level] // 2 - grids.append(grids_per_level) - return grids - - - def _get_ground_truth(self, grids, shapes_per_level, gt_instances): - ''' - Input: - grids: list of tensors [(hl x wl, 2)]_l - shapes_per_level: list of tuples L x 2: - gt_instances: gt instances - Retuen: - pos_inds: N - labels: N - reg_targets: M x 4 - flattened_hms: M x C or M x 1 - N: number of objects in all images - M: number of pixels from all FPN levels - ''' - - # get positive pixel index - if not self.more_pos: - pos_inds, labels = self._get_label_inds( - gt_instances, shapes_per_level) - else: - pos_inds, labels = None, None - heatmap_channels = self.num_classes - L = len(grids) - num_loc_list = [len(loc) for loc in grids] - strides = torch.cat([ - shapes_per_level.new_ones(num_loc_list[l]) * self.strides[l] \ - for l in range(L)]).float() # M - reg_size_ranges = torch.cat([ - shapes_per_level.new_tensor(self.sizes_of_interest[l]).float().view( - 1, 2).expand(num_loc_list[l], 2) for l in range(L)]) # M x 2 - grids = torch.cat(grids, dim=0) # M x 2 - M = grids.shape[0] - - reg_targets = [] - flattened_hms = [] - for i in range(len(gt_instances)): # images - boxes = gt_instances[i].gt_boxes.tensor # N x 4 - area = gt_instances[i].gt_boxes.area() # N - gt_classes = gt_instances[i].gt_classes # N in [0, self.num_classes] - - N = boxes.shape[0] - if N == 0: - reg_targets.append(grids.new_zeros((M, 4)) - INF) - flattened_hms.append( - grids.new_zeros(( - M, 1 if self.only_proposal else 
heatmap_channels))) - continue - - l = grids[:, 0].view(M, 1) - boxes[:, 0].view(1, N) # M x N - t = grids[:, 1].view(M, 1) - boxes[:, 1].view(1, N) # M x N - r = boxes[:, 2].view(1, N) - grids[:, 0].view(M, 1) # M x N - b = boxes[:, 3].view(1, N) - grids[:, 1].view(M, 1) # M x N - reg_target = torch.stack([l, t, r, b], dim=2) # M x N x 4 - - centers = ((boxes[:, [0, 1]] + boxes[:, [2, 3]]) / 2) # N x 2 - centers_expanded = centers.view(1, N, 2).expand(M, N, 2) # M x N x 2 - strides_expanded = strides.view(M, 1, 1).expand(M, N, 2) - centers_discret = ((centers_expanded / strides_expanded).int() * \ - strides_expanded).float() + strides_expanded / 2 # M x N x 2 - - is_peak = (((grids.view(M, 1, 2).expand(M, N, 2) - \ - centers_discret) ** 2).sum(dim=2) == 0) # M x N - is_in_boxes = reg_target.min(dim=2)[0] > 0 # M x N - is_center3x3 = self.get_center3x3( - grids, centers, strides) & is_in_boxes # M x N - is_cared_in_the_level = self.assign_reg_fpn( - reg_target, reg_size_ranges) # M x N - reg_mask = is_center3x3 & is_cared_in_the_level # M x N - - dist2 = ((grids.view(M, 1, 2).expand(M, N, 2) - \ - centers_expanded) ** 2).sum(dim=2) # M x N - dist2[is_peak] = 0 - radius2 = self.delta ** 2 * 2 * area # N - radius2 = torch.clamp( - radius2, min=self.min_radius ** 2) - weighted_dist2 = dist2 / radius2.view(1, N).expand(M, N) # M x N - reg_target = self._get_reg_targets( - reg_target, weighted_dist2.clone(), reg_mask, area) # M x 4 - - if self.only_proposal: - flattened_hm = self._create_agn_heatmaps_from_dist( - weighted_dist2.clone()) # M x 1 - else: - flattened_hm = self._create_heatmaps_from_dist( - weighted_dist2.clone(), gt_classes, - channels=heatmap_channels) # M x C - - reg_targets.append(reg_target) - flattened_hms.append(flattened_hm) - - # transpose im first training_targets to level first ones - reg_targets = _transpose(reg_targets, num_loc_list) - flattened_hms = _transpose(flattened_hms, num_loc_list) - for l in range(len(reg_targets)): - reg_targets[l] = reg_targets[l] / float(self.strides[l]) - reg_targets = cat([x for x in reg_targets], dim=0) # MB x 4 - flattened_hms = cat([x for x in flattened_hms], dim=0) # MB x C - - return pos_inds, labels, reg_targets, flattened_hms - - - def _get_label_inds(self, gt_instances, shapes_per_level): - ''' - Inputs: - gt_instances: [n_i], sum n_i = N - shapes_per_level: L x 2 [(h_l, w_l)]_L - Returns: - pos_inds: N' - labels: N' - ''' - pos_inds = [] - labels = [] - L = len(self.strides) - B = len(gt_instances) - shapes_per_level = shapes_per_level.long() - loc_per_level = (shapes_per_level[:, 0] * shapes_per_level[:, 1]).long() # L - level_bases = [] - s = 0 - for l in range(L): - level_bases.append(s) - s = s + B * loc_per_level[l] - level_bases = shapes_per_level.new_tensor(level_bases).long() # L - strides_default = shapes_per_level.new_tensor(self.strides).float() # L - for im_i in range(B): - targets_per_im = gt_instances[im_i] - bboxes = targets_per_im.gt_boxes.tensor # n x 4 - n = bboxes.shape[0] - centers = ((bboxes[:, [0, 1]] + bboxes[:, [2, 3]]) / 2) # n x 2 - centers = centers.view(n, 1, 2).expand(n, L, 2) - strides = strides_default.view(1, L, 1).expand(n, L, 2) - centers_inds = (centers / strides).long() # n x L x 2 - Ws = shapes_per_level[:, 1].view(1, L).expand(n, L) - pos_ind = level_bases.view(1, L).expand(n, L) + \ - im_i * loc_per_level.view(1, L).expand(n, L) + \ - centers_inds[:, :, 1] * Ws + \ - centers_inds[:, :, 0] # n x L - is_cared_in_the_level = self.assign_fpn_level(bboxes) - pos_ind = 
pos_ind[is_cared_in_the_level].view(-1) - label = targets_per_im.gt_classes.view( - n, 1).expand(n, L)[is_cared_in_the_level].view(-1) - - pos_inds.append(pos_ind) # n' - labels.append(label) # n' - pos_inds = torch.cat(pos_inds, dim=0).long() - labels = torch.cat(labels, dim=0) - return pos_inds, labels # N, N - - - def assign_fpn_level(self, boxes): - ''' - Inputs: - boxes: n x 4 - size_ranges: L x 2 - Return: - is_cared_in_the_level: n x L - ''' - size_ranges = boxes.new_tensor( - self.sizes_of_interest).view(len(self.sizes_of_interest), 2) # L x 2 - crit = ((boxes[:, 2:] - boxes[:, :2]) **2).sum(dim=1) ** 0.5 / 2 # n - n, L = crit.shape[0], size_ranges.shape[0] - crit = crit.view(n, 1).expand(n, L) - size_ranges_expand = size_ranges.view(1, L, 2).expand(n, L, 2) - is_cared_in_the_level = (crit >= size_ranges_expand[:, :, 0]) & \ - (crit <= size_ranges_expand[:, :, 1]) - return is_cared_in_the_level - - - def assign_reg_fpn(self, reg_targets_per_im, size_ranges): - ''' - TODO (Xingyi): merge it with assign_fpn_level - Inputs: - reg_targets_per_im: M x N x 4 - size_ranges: M x 2 - ''' - crit = ((reg_targets_per_im[:, :, :2] + \ - reg_targets_per_im[:, :, 2:])**2).sum(dim=2) ** 0.5 / 2 # M x N - is_cared_in_the_level = (crit >= size_ranges[:, [0]]) & \ - (crit <= size_ranges[:, [1]]) - return is_cared_in_the_level - - - def _get_reg_targets(self, reg_targets, dist, mask, area): - ''' - reg_targets (M x N x 4): long tensor - dist (M x N) - is_*: M x N - ''' - dist[mask == 0] = INF * 1.0 - min_dist, min_inds = dist.min(dim=1) # M - reg_targets_per_im = reg_targets[ - range(len(reg_targets)), min_inds] # M x N x 4 --> M x 4 - reg_targets_per_im[min_dist == INF] = - INF - return reg_targets_per_im - - - def _create_heatmaps_from_dist(self, dist, labels, channels): - ''' - dist: M x N - labels: N - return: - heatmaps: M x C - ''' - heatmaps = dist.new_zeros((dist.shape[0], channels)) - for c in range(channels): - inds = (labels == c) # N - if inds.int().sum() == 0: - continue - heatmaps[:, c] = torch.exp(-dist[:, inds].min(dim=1)[0]) - zeros = heatmaps[:, c] < 1e-4 - heatmaps[zeros, c] = 0 - return heatmaps - - - def _create_agn_heatmaps_from_dist(self, dist): - ''' - TODO (Xingyi): merge it with _create_heatmaps_from_dist - dist: M x N - return: - heatmaps: M x 1 - ''' - heatmaps = dist.new_zeros((dist.shape[0], 1)) - heatmaps[:, 0] = torch.exp(-dist.min(dim=1)[0]) - zeros = heatmaps < 1e-4 - heatmaps[zeros] = 0 - return heatmaps - - - def _flatten_outputs(self, clss, reg_pred, agn_hm_pred): - # Reshape: (N, F, Hl, Wl) -> (N, Hl, Wl, F) -> (sum_l N*Hl*Wl, F) - clss = cat([x.permute(0, 2, 3, 1).reshape(-1, x.shape[1]) \ - for x in clss], dim=0) if clss[0] is not None else None - reg_pred = cat( - [x.permute(0, 2, 3, 1).reshape(-1, 4) for x in reg_pred], dim=0) - agn_hm_pred = cat([x.permute(0, 2, 3, 1).reshape(-1) \ - for x in agn_hm_pred], dim=0) if self.with_agn_hm else None - return clss, reg_pred, agn_hm_pred - - - def get_center3x3(self, locations, centers, strides): - ''' - Inputs: - locations: M x 2 - centers: N x 2 - strides: M - ''' - M, N = locations.shape[0], centers.shape[0] - locations_expanded = locations.view(M, 1, 2).expand(M, N, 2) # M x N x 2 - centers_expanded = centers.view(1, N, 2).expand(M, N, 2) # M x N x 2 - strides_expanded = strides.view(M, 1, 1).expand(M, N, 2) # M x N - centers_discret = ((centers_expanded / strides_expanded).int() * \ - strides_expanded).float() + strides_expanded / 2 # M x N x 2 - dist_x = (locations_expanded[:, :, 0] - centers_discret[:, :, 
0]).abs() - dist_y = (locations_expanded[:, :, 1] - centers_discret[:, :, 1]).abs() - return (dist_x <= strides_expanded[:, :, 0]) & \ - (dist_y <= strides_expanded[:, :, 0]) - - - def inference(self, images, clss_per_level, reg_pred_per_level, - agn_hm_pred_per_level, grids): - logits_pred = [x.sigmoid() if x is not None else None \ - for x in clss_per_level] - agn_hm_pred_per_level = [x.sigmoid() if x is not None else None \ - for x in agn_hm_pred_per_level] - - if self.only_proposal: - proposals = self.predict_instances( - grids, agn_hm_pred_per_level, reg_pred_per_level, - images.image_sizes, [None for _ in agn_hm_pred_per_level]) - else: - proposals = self.predict_instances( - grids, logits_pred, reg_pred_per_level, - images.image_sizes, agn_hm_pred_per_level) - if self.as_proposal or self.only_proposal: - for p in range(len(proposals)): - proposals[p].proposal_boxes = proposals[p].get('pred_boxes') - proposals[p].objectness_logits = proposals[p].get('scores') - proposals[p].remove('pred_boxes') - - if self.debug: - debug_test( - [self.denormalizer(x) for x in images], - logits_pred, reg_pred_per_level, - agn_hm_pred_per_level, preds=proposals, - vis_thresh=self.vis_thresh, - debug_show_name=False) - return proposals, {} - - - def predict_instances( - self, grids, logits_pred, reg_pred, image_sizes, agn_hm_pred, - is_proposal=False): - sampled_boxes = [] - for l in range(len(grids)): - sampled_boxes.append(self.predict_single_level( - grids[l], logits_pred[l], reg_pred[l] * self.strides[l], - image_sizes, agn_hm_pred[l], l, is_proposal=is_proposal)) - boxlists = list(zip(*sampled_boxes)) - boxlists = [Instances.cat(boxlist) for boxlist in boxlists] - boxlists = self.nms_and_topK( - boxlists, nms=not self.not_nms) - return boxlists - - - def predict_single_level( - self, grids, heatmap, reg_pred, image_sizes, agn_hm, level, - is_proposal=False): - N, C, H, W = heatmap.shape - # put in the same format as grids - if self.center_nms: - heatmap_nms = nn.functional.max_pool2d( - heatmap, (3, 3), stride=1, padding=1) - heatmap = heatmap * (heatmap_nms == heatmap).float() - heatmap = heatmap.permute(0, 2, 3, 1) # N x H x W x C - heatmap = heatmap.reshape(N, -1, C) # N x HW x C - box_regression = reg_pred.view(N, 4, H, W).permute(0, 2, 3, 1) # N x H x W x 4 - box_regression = box_regression.reshape(N, -1, 4) - - candidate_inds = heatmap > self.score_thresh # 0.05 - pre_nms_top_n = candidate_inds.view(N, -1).sum(1) # N - pre_nms_topk = self.pre_nms_topk_train if self.training else self.pre_nms_topk_test - pre_nms_top_n = pre_nms_top_n.clamp(max=pre_nms_topk) # N - - if agn_hm is not None: - agn_hm = agn_hm.view(N, 1, H, W).permute(0, 2, 3, 1) - agn_hm = agn_hm.reshape(N, -1) - heatmap = heatmap * agn_hm[:, :, None] - - results = [] - for i in range(N): - per_box_cls = heatmap[i] # HW x C - per_candidate_inds = candidate_inds[i] # n - per_box_cls = per_box_cls[per_candidate_inds] # n - - per_candidate_nonzeros = per_candidate_inds.nonzero() # n - per_box_loc = per_candidate_nonzeros[:, 0] # n - per_class = per_candidate_nonzeros[:, 1] # n - - per_box_regression = box_regression[i] # HW x 4 - per_box_regression = per_box_regression[per_box_loc] # n x 4 - per_grids = grids[per_box_loc] # n x 2 - - per_pre_nms_top_n = pre_nms_top_n[i] # 1 - - if per_candidate_inds.sum().item() > per_pre_nms_top_n.item(): - per_box_cls, top_k_indices = \ - per_box_cls.topk(per_pre_nms_top_n, sorted=False) - per_class = per_class[top_k_indices] - per_box_regression = per_box_regression[top_k_indices] - per_grids = 
per_grids[top_k_indices] - - detections = torch.stack([ - per_grids[:, 0] - per_box_regression[:, 0], - per_grids[:, 1] - per_box_regression[:, 1], - per_grids[:, 0] + per_box_regression[:, 2], - per_grids[:, 1] + per_box_regression[:, 3], - ], dim=1) # n x 4 - - # avoid invalid boxes in RoI heads - detections[:, 2] = torch.max(detections[:, 2], detections[:, 0] + 0.01) - detections[:, 3] = torch.max(detections[:, 3], detections[:, 1] + 0.01) - boxlist = Instances(image_sizes[i]) - boxlist.scores = torch.sqrt(per_box_cls) \ - if self.with_agn_hm else per_box_cls # n - # import pdb; pdb.set_trace() - boxlist.pred_boxes = Boxes(detections) - boxlist.pred_classes = per_class - results.append(boxlist) - return results - - - def nms_and_topK(self, boxlists, nms=True): - num_images = len(boxlists) - results = [] - for i in range(num_images): - nms_thresh = self.nms_thresh_train if self.training else \ - self.nms_thresh_test - result = ml_nms(boxlists[i], nms_thresh) if nms else boxlists[i] - if self.debug: - print('#proposals before nms', len(boxlists[i])) - print('#proposals after nms', len(result)) - num_dets = len(result) - post_nms_topk = self.post_nms_topk_train if self.training else \ - self.post_nms_topk_test - if num_dets > post_nms_topk: - cls_scores = result.scores - image_thresh, _ = torch.kthvalue( - cls_scores.float().cpu(), - num_dets - post_nms_topk + 1 - ) - keep = cls_scores >= image_thresh.item() - keep = torch.nonzero(keep).squeeze(1) - result = result[keep] - if self.debug: - print('#proposals after filter', len(result)) - results.append(result) - return results - - - def _add_more_pos(self, reg_pred, gt_instances, shapes_per_level): - labels, level_masks, c33_inds, c33_masks, c33_regs = \ - self._get_c33_inds(gt_instances, shapes_per_level) - N, L, K = labels.shape[0], len(self.strides), 9 - c33_inds[c33_masks == 0] = 0 - reg_pred_c33 = reg_pred[c33_inds].detach() # N x L x K - invalid_reg = c33_masks == 0 - c33_regs_expand = c33_regs.view(N * L * K, 4).clamp(min=0) - if N > 0: - with torch.no_grad(): - c33_reg_loss = self.iou_loss( - reg_pred_c33.view(N * L * K, 4), - c33_regs_expand, None, - reduction='none').view(N, L, K).detach() # N x L x K - else: - c33_reg_loss = reg_pred_c33.new_zeros((N, L, K)).detach() - c33_reg_loss[invalid_reg] = INF # N x L x K - c33_reg_loss.view(N * L, K)[level_masks.view(N * L), 4] = 0 # real center - c33_reg_loss = c33_reg_loss.view(N, L * K) - if N == 0: - loss_thresh = c33_reg_loss.new_ones((N)).float() - else: - loss_thresh = torch.kthvalue( - c33_reg_loss, self.more_pos_topk, dim=1)[0] # N - loss_thresh[loss_thresh > self.more_pos_thresh] = self.more_pos_thresh # N - new_pos = c33_reg_loss.view(N, L, K) < \ - loss_thresh.view(N, 1, 1).expand(N, L, K) - pos_inds = c33_inds[new_pos].view(-1) # P - labels = labels.view(N, 1, 1).expand(N, L, K)[new_pos].view(-1) - return pos_inds, labels - - - def _get_c33_inds(self, gt_instances, shapes_per_level): - ''' - TODO (Xingyi): The current implementation is ugly. Refactor. 
- Get the center (and the 3x3 region near center) locations of each objects - Inputs: - gt_instances: [n_i], sum n_i = N - shapes_per_level: L x 2 [(h_l, w_l)]_L - ''' - labels = [] - level_masks = [] - c33_inds = [] - c33_masks = [] - c33_regs = [] - L = len(self.strides) - B = len(gt_instances) - shapes_per_level = shapes_per_level.long() - loc_per_level = (shapes_per_level[:, 0] * shapes_per_level[:, 1]).long() # L - level_bases = [] - s = 0 - for l in range(L): - level_bases.append(s) - s = s + B * loc_per_level[l] - level_bases = shapes_per_level.new_tensor(level_bases).long() # L - strides_default = shapes_per_level.new_tensor(self.strides).float() # L - K = 9 - dx = shapes_per_level.new_tensor([-1, 0, 1, -1, 0, 1, -1, 0, 1]).long() - dy = shapes_per_level.new_tensor([-1, -1, -1, 0, 0, 0, 1, 1, 1]).long() - for im_i in range(B): - targets_per_im = gt_instances[im_i] - bboxes = targets_per_im.gt_boxes.tensor # n x 4 - n = bboxes.shape[0] - if n == 0: - continue - centers = ((bboxes[:, [0, 1]] + bboxes[:, [2, 3]]) / 2) # n x 2 - centers = centers.view(n, 1, 2).expand(n, L, 2) - - strides = strides_default.view(1, L, 1).expand(n, L, 2) # - centers_inds = (centers / strides).long() # n x L x 2 - center_grids = centers_inds * strides + strides // 2# n x L x 2 - l = center_grids[:, :, 0] - bboxes[:, 0].view(n, 1).expand(n, L) - t = center_grids[:, :, 1] - bboxes[:, 1].view(n, 1).expand(n, L) - r = bboxes[:, 2].view(n, 1).expand(n, L) - center_grids[:, :, 0] - b = bboxes[:, 3].view(n, 1).expand(n, L) - center_grids[:, :, 1] # n x L - reg = torch.stack([l, t, r, b], dim=2) # n x L x 4 - reg = reg / strides_default.view(1, L, 1).expand(n, L, 4).float() - - Ws = shapes_per_level[:, 1].view(1, L).expand(n, L) - Hs = shapes_per_level[:, 0].view(1, L).expand(n, L) - expand_Ws = Ws.view(n, L, 1).expand(n, L, K) - expand_Hs = Hs.view(n, L, 1).expand(n, L, K) - label = targets_per_im.gt_classes.view(n).clone() - mask = reg.min(dim=2)[0] >= 0 # n x L - mask = mask & self.assign_fpn_level(bboxes) - labels.append(label) # n - level_masks.append(mask) # n x L - - Dy = dy.view(1, 1, K).expand(n, L, K) - Dx = dx.view(1, 1, K).expand(n, L, K) - c33_ind = level_bases.view(1, L, 1).expand(n, L, K) + \ - im_i * loc_per_level.view(1, L, 1).expand(n, L, K) + \ - (centers_inds[:, :, 1:2].expand(n, L, K) + Dy) * expand_Ws + \ - (centers_inds[:, :, 0:1].expand(n, L, K) + Dx) # n x L x K - - c33_mask = \ - ((centers_inds[:, :, 1:2].expand(n, L, K) + dy) < expand_Hs) & \ - ((centers_inds[:, :, 1:2].expand(n, L, K) + dy) >= 0) & \ - ((centers_inds[:, :, 0:1].expand(n, L, K) + dx) < expand_Ws) & \ - ((centers_inds[:, :, 0:1].expand(n, L, K) + dx) >= 0) - # TODO (Xingyi): think about better way to implement this - # Currently it hard codes the 3x3 region - c33_reg = reg.view(n, L, 1, 4).expand(n, L, K, 4).clone() - c33_reg[:, :, [0, 3, 6], 0] -= 1 - c33_reg[:, :, [0, 3, 6], 2] += 1 - c33_reg[:, :, [2, 5, 8], 0] += 1 - c33_reg[:, :, [2, 5, 8], 2] -= 1 - c33_reg[:, :, [0, 1, 2], 1] -= 1 - c33_reg[:, :, [0, 1, 2], 3] += 1 - c33_reg[:, :, [6, 7, 8], 1] += 1 - c33_reg[:, :, [6, 7, 8], 3] -= 1 - c33_mask = c33_mask & (c33_reg.min(dim=3)[0] >= 0) # n x L x K - c33_inds.append(c33_ind) - c33_masks.append(c33_mask) - c33_regs.append(c33_reg) - - if len(level_masks) > 0: - labels = torch.cat(labels, dim=0) - level_masks = torch.cat(level_masks, dim=0) - c33_inds = torch.cat(c33_inds, dim=0).long() - c33_regs = torch.cat(c33_regs, dim=0) - c33_masks = torch.cat(c33_masks, dim=0) - else: - labels = 
shapes_per_level.new_zeros((0)).long() - level_masks = shapes_per_level.new_zeros((0, L)).bool() - c33_inds = shapes_per_level.new_zeros((0, L, K)).long() - c33_regs = shapes_per_level.new_zeros((0, L, K, 4)).float() - c33_masks = shapes_per_level.new_zeros((0, L, K)).bool() - return labels, level_masks, c33_inds, c33_masks, c33_regs # N x L, N x L x K \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/deform_conv.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/deform_conv.py deleted file mode 100644 index e5650c40673882c9164ddc56fd3ee63af0be730c..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/deform_conv.py +++ /dev/null @@ -1,116 +0,0 @@ -import torch -from torch import nn - -from detectron2.layers import Conv2d - - -class _NewEmptyTensorOp(torch.autograd.Function): - @staticmethod - def forward(ctx, x, new_shape): - ctx.shape = x.shape - return x.new_empty(new_shape) - - @staticmethod - def backward(ctx, grad): - shape = ctx.shape - return _NewEmptyTensorOp.apply(grad, shape), None - - -class DFConv2d(nn.Module): - """Deformable convolutional layer""" - def __init__( - self, - in_channels, - out_channels, - with_modulated_dcn=True, - kernel_size=3, - stride=1, - groups=1, - dilation=1, - deformable_groups=1, - bias=False, - padding=None - ): - super(DFConv2d, self).__init__() - if isinstance(kernel_size, (list, tuple)): - assert isinstance(stride, (list, tuple)) - assert isinstance(dilation, (list, tuple)) - assert len(kernel_size) == 2 - assert len(stride) == 2 - assert len(dilation) == 2 - padding = ( - dilation[0] * (kernel_size[0] - 1) // 2, - dilation[1] * (kernel_size[1] - 1) // 2 - ) - offset_base_channels = kernel_size[0] * kernel_size[1] - else: - padding = dilation * (kernel_size - 1) // 2 - offset_base_channels = kernel_size * kernel_size - if with_modulated_dcn: - from detectron2.layers.deform_conv import ModulatedDeformConv - offset_channels = offset_base_channels * 3 # default: 27 - conv_block = ModulatedDeformConv - else: - from detectron2.layers.deform_conv import DeformConv - offset_channels = offset_base_channels * 2 # default: 18 - conv_block = DeformConv - self.offset = Conv2d( - in_channels, - deformable_groups * offset_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - groups=1, - dilation=dilation - ) - nn.init.constant_(self.offset.weight, 0) - nn.init.constant_(self.offset.bias, 0) - ''' - for l in [self.offset, ]: - nn.init.kaiming_uniform_(l.weight, a=1) - torch.nn.init.constant_(l.bias, 0.) 
- ''' - self.conv = conv_block( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - deformable_groups=deformable_groups, - bias=bias - ) - self.with_modulated_dcn = with_modulated_dcn - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - self.dilation = dilation - self.offset_split = offset_base_channels * deformable_groups * 2 - - def forward(self, x, return_offset=False): - if x.numel() > 0: - if not self.with_modulated_dcn: - offset_mask = self.offset(x) - x = self.conv(x, offset_mask) - else: - offset_mask = self.offset(x) - offset = offset_mask[:, :self.offset_split, :, :] - mask = offset_mask[:, self.offset_split:, :, :].sigmoid() - x = self.conv(x, offset, mask) - if return_offset: - return x, offset_mask - return x - # get output shape - output_shape = [ - (i + 2 * p - (di * (k - 1) + 1)) // d + 1 - for i, p, di, k, d in zip( - x.shape[-2:], - self.padding, - self.dilation, - self.kernel_size, - self.stride - ) - ] - output_shape = [x.shape[0], self.conv.weight.shape[0]] + output_shape - return _NewEmptyTensorOp.apply(x, output_shape) \ No newline at end of file diff --git a/spaces/BartPoint/VoiceChange/infer_pack/modules.py b/spaces/BartPoint/VoiceChange/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/BartPoint/VoiceChange/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Benson/text-generation/Examples/Descargar Dr Fone Desbloquear Para PC.md b/spaces/Benson/text-generation/Examples/Descargar Dr Fone Desbloquear Para PC.md deleted file mode 100644 index 648b1a1cbf668f7fb0037c62227b3f28d873261b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Dr Fone Desbloquear Para PC.md +++ /dev/null @@ -1,33 +0,0 @@ - -

How to Download Dr Fone Unlock for PC

-

Have you ever been locked out of your phone because you forgot your password, PIN, pattern, or fingerprint lock? Or perhaps you bought a second-hand phone that is locked to an iCloud or Google account? Or maybe you want to fix system problems on your phone, such as a black screen, a boot loop, or a stuck logo? If you are looking for a solution to these problems, you may want to try Dr Fone Unlock for PC.

-

download dr fone unlock for PC


Download ❤❤❤ https://bltlly.com/2v6Kpi



-

Dr Fone Unlock is a powerful piece of software that can help you unlock your phone, repair its system, recover your data, transfer your files, back up your chats, and change your location with ease. It supports both iOS and Android devices and works in a variety of scenarios. In this article, we will show you how to download Dr Fone Unlock for PC and how to use it effectively.

-

Features of Dr Fone Unlock for PC

-

Dr Fone Unlock is more than a screen-unlocking tool. It offers a complete mobile toolkit that can cover all of your needs. Here are some of the features of Dr Fone Unlock for PC:

-
    -
  • Remove the lock screen and bypass iCloud and FRP locks on iOS/Android devices. Dr Fone Unlock can help you remove any type of lock screen on your phone, such as a password, PIN, pattern, fingerprint, or Face ID. It can also help you bypass the iCloud Activation Lock or Google account verification (FRP) on iOS or Android devices, so you can access your phone without trouble.
  • -
  • Fix iOS/Android system issues such as a black screen, boot loop, etc. Dr Fone Unlock can also help you fix various system problems on your phone, such as a black screen of death, a boot loop, or being stuck on the Apple/Samsung logo. It can repair your system without causing any data loss or damage.
  • - -
  • Transfer data between iOS/Android and PC/iTunes. Dr Fone Unlock can also help you transfer data between different devices and platforms. You can easily move all of your data, or only selected data, from one phone to another with a single click. You can also transfer data from your phone to your PC or iTunes and vice versa. -

    Conclusion

    -

    Dr Fone Unlock for PC is a powerful and versatile piece of software that can help you unlock your phone, fix its system, recover your data, transfer your files, back up your chats, and change your location with ease. It supports both iOS and Android devices and works in a variety of scenarios. It is easy to use, safe, and reliable, and it bundles multiple tools into a single program. However, it is not free, it requires an Internet connection, and it may not work for every device or situation. You should therefore always check the software's compatibility and instructions before using it.

    -

    -

    If you are looking for a solution to your mobile problems, you may want to try Dr Fone Unlock for PC. You can download it from the official website and install it on your PC in minutes. You can then use it to perform various operations on your device in a few simple steps. You can also contact the customer support team if you have any questions or problems with the software.

    -

    So what are you waiting for? Download Dr Fone Unlock for PC today and enjoy all of its benefits and features!

    -

    Frequently Asked Questions

    -

    Here are some of the most frequently asked questions about Dr Fone Unlock for PC:

    -
      -
    • Q: Is Dr Fone Unlock for PC free?
    • -
    • A: No, Dr Fone Unlock for PC is not free. You need to buy a license to use all of its features. However, you can download a free trial version from the official website and use some of the functions at no cost.
    • -
    • Q: Is Dr Fone Unlock for PC safe?
    • - -
    • Q: Does Dr Fone Unlock for PC work for all devices and situations?
    • -
    • A: No, Dr Fone Unlock for PC does not work for every device and situation. It supports most iOS and Android devices and scenarios, but not all of them. Some devices or situations may have requirements or limitations that prevent the software from working properly. You should therefore always check the software's compatibility and instructions before using it.
    • -
    • Q: How long does Dr Fone Unlock for PC take to perform an operation?
    • -
    • A: The time Dr Fone Unlock for PC takes to perform an operation depends on several factors, such as the type of operation, the size of the data, and the speed of your Internet connection. Generally speaking, most operations finish within minutes or hours, although some may take longer depending on the complexity of the situation.
    • -
    • Q: What if I run into a problem or error with Dr Fone Unlock for PC?
    • -
    • A: If you run into any problem or error with Dr Fone Unlock for PC, you can try to resolve it by following the tips and solutions provided on the official website or in the user guide. You can also contact the customer support team by email or phone if you need further help.
    • -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_functools.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_functools.py deleted file mode 100644 index 71f66bd03cb713a2190853bdf7170c4ea80d2425..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/importlib_metadata/_functools.py +++ /dev/null @@ -1,104 +0,0 @@ -import types -import functools - - -# from jaraco.functools 3.3 -def method_cache(method, cache_wrapper=None): - """ - Wrap lru_cache to support storing the cache data in the object instances. - - Abstracts the common paradigm where the method explicitly saves an - underscore-prefixed protected property on first call and returns that - subsequently. - - >>> class MyClass: - ... calls = 0 - ... - ... @method_cache - ... def method(self, value): - ... self.calls += 1 - ... return value - - >>> a = MyClass() - >>> a.method(3) - 3 - >>> for x in range(75): - ... res = a.method(x) - >>> a.calls - 75 - - Note that the apparent behavior will be exactly like that of lru_cache - except that the cache is stored on each instance, so values in one - instance will not flush values from another, and when an instance is - deleted, so are the cached values for that instance. - - >>> b = MyClass() - >>> for x in range(35): - ... res = b.method(x) - >>> b.calls - 35 - >>> a.method(0) - 0 - >>> a.calls - 75 - - Note that if method had been decorated with ``functools.lru_cache()``, - a.calls would have been 76 (due to the cached value of 0 having been - flushed by the 'b' instance). - - Clear the cache with ``.cache_clear()`` - - >>> a.method.cache_clear() - - Same for a method that hasn't yet been called. - - >>> c = MyClass() - >>> c.method.cache_clear() - - Another cache wrapper may be supplied: - - >>> cache = functools.lru_cache(maxsize=2) - >>> MyClass.method2 = method_cache(lambda self: 3, cache_wrapper=cache) - >>> a = MyClass() - >>> a.method2() - 3 - - Caution - do not subsequently wrap the method with another decorator, such - as ``@property``, which changes the semantics of the function. - - See also - http://code.activestate.com/recipes/577452-a-memoize-decorator-for-instance-methods/ - for another implementation and additional justification. - """ - cache_wrapper = cache_wrapper or functools.lru_cache() - - def wrapper(self, *args, **kwargs): - # it's the first call, replace the method with a cached, bound method - bound_method = types.MethodType(method, self) - cached_method = cache_wrapper(bound_method) - setattr(self, method.__name__, cached_method) - return cached_method(*args, **kwargs) - - # Support cache clear even before cache has been created. 
- wrapper.cache_clear = lambda: None - - return wrapper - - -# From jaraco.functools 3.3 -def pass_none(func): - """ - Wrap func so it's not called if its first param is None - - >>> print_text = pass_none(print) - >>> print_text('text') - text - >>> print_text(None) - """ - - @functools.wraps(func) - def wrapper(param, *args, **kwargs): - if param is not None: - return func(param, *args, **kwargs) - - return wrapper diff --git a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Predict/predict.py b/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Predict/predict.py deleted file mode 100644 index 9a4cf4088fcb2114f9988edef842e4296d145570..0000000000000000000000000000000000000000 --- a/spaces/BraydenMoore/MARCI-NFL-Betting/Source/Predict/predict.py +++ /dev/null @@ -1,166 +0,0 @@ -import xgboost as xgb -import numpy as np -import pandas as pd -import pickle as pkl -import os -import requests -from bs4 import BeautifulSoup -import warnings -warnings.filterwarnings("ignore") -from datetime import datetime - -# set dirs for other files -current_directory = os.path.dirname(os.path.abspath(__file__)) -parent_directory = os.path.dirname(current_directory) -data_directory = os.path.join(parent_directory, 'Data') -model_directory = os.path.join(parent_directory, 'Models') -pickle_directory = os.path.join(parent_directory, 'Pickles') - -file_path = os.path.join(data_directory, 'gbg_this_year.csv') -gbg = pd.read_csv(file_path, low_memory=False) - -file_path = os.path.join(data_directory, 'results.csv') -results = pd.read_csv(file_path, low_memory=False) - -# get team abbreviations -file_path = os.path.join(pickle_directory, 'team_name_to_abbreviation.pkl') -with open(file_path, 'rb') as f: - team_name_to_abbreviation = pkl.load(f) - -file_path = os.path.join(pickle_directory, 'team_abbreviation_to_name.pkl') -with open(file_path, 'rb') as f: - team_abbreviation_to_name = pkl.load(f) - -# get schedule -file_path = os.path.join(pickle_directory, 'schedule.pkl') -with open(file_path, 'rb') as f: - schedule = pkl.load(f) - -# load models -# moneyline -model = 'xgboost_ML_no_odds_71.4%' -file_path = os.path.join(model_directory, f'{model}.json') -xgb_ml = xgb.Booster() -xgb_ml.load_model(file_path) - -# over/under -model = 'xgboost_OU_no_odds_59.8%' -file_path = os.path.join(model_directory, f'{model}.json') -xgb_ou = xgb.Booster() -xgb_ou.load_model(file_path) - - -def get_week(): - headers = { - 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7', - 'Accept-Encoding': 'gzip, deflate', - 'Accept-Language': 'en-US,en;q=0.9', - 'Cache-Control': 'max-age=0', - 'Connection': 'keep-alive', - 'Dnt': '1', - 'Upgrade-Insecure-Requests': '1', - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36' - } - url = 'https://www.nfl.com/schedules/' - resp = requests.get(url,headers=headers) - soup = BeautifulSoup(resp.text, 'html.parser') - h2_tags = soup.find_all('h2') - year = h2_tags[0].getText().split(' ')[0] - week = h2_tags[0].getText().split(' ')[-1] - return int(week), int(year) - - -def get_games(week): - # pull from NBC - #url = 'https://www.nbcsports.com/nfl/schedule' - #df = pd.read_html(url)[week-1] - df = schedule[week-1] - df['Away Team'] = [' '.join(i.split('\xa0')[1:]) for i in df['Away TeamAway Team']] - df['Home Team'] = [' '.join(i.split('\xa0')[1:]) for i in df['Home TeamHome Team']] - df['Date'] = pd.to_datetime(df['Game TimeGame Time']) - 
df['Date'] = df['Date'].dt.strftime('%A %d/%m %I:%M %p') - df['Date'] = df['Date'].apply(lambda x: f"{x.split()[0]} {int(x.split()[1].split('/')[1])}/{int(x.split()[1].split('/')[0])} {x.split()[2]}".capitalize()) - - return df[['Away Team','Home Team','Date']] - - -def get_one_week(home,away,season,week): - try: - max_GP_home = gbg.loc[((gbg['home_team'] == home) | (gbg['away_team'] == home)) & (gbg['GP'] < week)]['GP'].max() - max_GP_away = gbg.loc[((gbg['home_team'] == away) | (gbg['away_team'] == away)) & (gbg['GP'] < week)]['GP'].max() - - home_df = gbg.loc[((gbg['away_team']==home) | (gbg['home_team']==home)) & (gbg['Season']==season) & (gbg['GP']==max_GP_home)] - gbg_home_team = home_df['home_team'].item() - home_df.drop(columns=['game_id','home_team','away_team','Season','game_date'], inplace=True) - home_df = home_df[[i for i in home_df.columns if '.Away' not in i] if gbg_home_team==home else [i for i in home_df.columns if '.Away' in i]] - home_df.columns = [i.replace('.Away','') for i in home_df.columns] - - away_df = gbg.loc[((gbg['away_team']==away) | (gbg['home_team']==away)) & (gbg['Season']==season) & (gbg['GP']==max_GP_away)] - gbg_home_team = away_df['home_team'].item() - away_df.drop(columns=['game_id','home_team','away_team','Season','game_date'], inplace=True) - away_df = away_df[[i for i in away_df.columns if '.Away' not in i] if gbg_home_team==away else [i for i in away_df.columns if '.Away' in i]] - away_df.columns = [i.replace('.Away','') + '.Away' for i in away_df.columns] - - df = home_df.reset_index(drop=True).merge(away_df.reset_index(drop=True), left_index=True, right_index=True) - return df - except ValueError: - return pd.DataFrame() - - -def predict(home,away,season,week,total): - global results - - # finish preparing data - if len(home)>4: - home_abbrev = team_name_to_abbreviation[home] - else: - home_abbrev = home - - if len(away)>4: - away_abbrev = team_name_to_abbreviation[away] - else: - away_abbrev = away - - data = get_one_week(home_abbrev,away_abbrev,season,week) - data['Total Score Close'] = total - matrix = xgb.DMatrix(data.astype(float).values) - - # create game id - game_id = str(season) + '_0' + str(int(week)) + '_' + away_abbrev + '_' + home_abbrev - - try: - moneyline_result = results.loc[results['game_id']==game_id, 'winner'].item() - except: - moneyline_result = 'N/A' - - try: - ml_predicted_proba = xgb_ml.predict(matrix)[0][1] - winner_proba = max([ml_predicted_proba, 1-ml_predicted_proba]).item() - moneyline = {'Winner': [home if ml_predicted_proba>0.5 else away if ml_predicted_proba<0.5 else 'Toss-Up'], - 'Probabilities':[winner_proba], - 'Result': moneyline_result} - except: - moneyline = {'Winner': 'NA', - 'Probabilities':['N/A'], - 'Result': moneyline_result} - - try: - result = results.loc[results['game_id']==game_id, 'total'].item() - over_under_result = 'Over' if float(result)>float(total) else 'Push' if float(result)==float(total) else 'Under' - print(float(result), float(total)) - except: - over_under_result = 'N/A' - - try: - ou_predicted_proba = xgb_ou.predict(matrix)[0][1] - ou_proba = max([ou_predicted_proba, 1-ou_predicted_proba]).item() - - over_under = {'Over/Under': ['Over' if ou_predicted_proba>0.5 else 'Under'], - 'Probability': [ou_proba], - 'Result': over_under_result} - except: - over_under = {'Over/Under': 'N/A', - 'Probability': ['N/A'], - 'Result': over_under_result} - - return game_id, moneyline, over_under diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_roi_pooler.py 
b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_roi_pooler.py deleted file mode 100644 index 9aa3825c0196e4a6d89162e3d7c797e3d77b23bd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_roi_pooler.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import logging -import unittest -import torch - -from detectron2.modeling.poolers import ROIPooler -from detectron2.structures import Boxes, RotatedBoxes - -logger = logging.getLogger(__name__) - - -class TestROIPooler(unittest.TestCase): - def _rand_boxes(self, num_boxes, x_max, y_max): - coords = torch.rand(num_boxes, 4) - coords[:, 0] *= x_max - coords[:, 1] *= y_max - coords[:, 2] *= x_max - coords[:, 3] *= y_max - boxes = torch.zeros(num_boxes, 4) - boxes[:, 0] = torch.min(coords[:, 0], coords[:, 2]) - boxes[:, 1] = torch.min(coords[:, 1], coords[:, 3]) - boxes[:, 2] = torch.max(coords[:, 0], coords[:, 2]) - boxes[:, 3] = torch.max(coords[:, 1], coords[:, 3]) - return boxes - - def _test_roialignv2_roialignrotated_match(self, device): - pooler_resolution = 14 - canonical_level = 4 - canonical_scale_factor = 2 ** canonical_level - pooler_scales = (1.0 / canonical_scale_factor,) - sampling_ratio = 0 - - N, C, H, W = 2, 4, 10, 8 - N_rois = 10 - std = 11 - mean = 0 - feature = (torch.rand(N, C, H, W) - 0.5) * 2 * std + mean - - features = [feature.to(device)] - - rois = [] - rois_rotated = [] - for _ in range(N): - boxes = self._rand_boxes( - num_boxes=N_rois, x_max=W * canonical_scale_factor, y_max=H * canonical_scale_factor - ) - - rotated_boxes = torch.zeros(N_rois, 5) - rotated_boxes[:, 0] = (boxes[:, 0] + boxes[:, 2]) / 2.0 - rotated_boxes[:, 1] = (boxes[:, 1] + boxes[:, 3]) / 2.0 - rotated_boxes[:, 2] = boxes[:, 2] - boxes[:, 0] - rotated_boxes[:, 3] = boxes[:, 3] - boxes[:, 1] - rois.append(Boxes(boxes).to(device)) - rois_rotated.append(RotatedBoxes(rotated_boxes).to(device)) - - roialignv2_pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - pooler_type="ROIAlignV2", - ) - - roialignv2_out = roialignv2_pooler(features, rois) - - roialignrotated_pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - pooler_type="ROIAlignRotated", - ) - - roialignrotated_out = roialignrotated_pooler(features, rois_rotated) - - self.assertTrue(torch.allclose(roialignv2_out, roialignrotated_out, atol=1e-4)) - - def test_roialignv2_roialignrotated_match_cpu(self): - self._test_roialignv2_roialignrotated_match(device="cpu") - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_roialignv2_roialignrotated_match_cuda(self): - self._test_roialignv2_roialignrotated_match(device="cuda") - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/default_construct_range.h b/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/default_construct_range.h deleted file mode 100644 index 6c3856c142990a3230c3bc4f805c0cb0a5fbcb73..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/default_construct_range.h +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -namespace thrust -{ -namespace detail -{ - - -template -__host__ __device__ -inline void default_construct_range(Allocator &a, Pointer p, Size n); - - -} // end detail -} // end thrust - -#include - - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/sort.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/sort.h deleted file mode 100644 index 863189a1ea6bc39e5ae9c15a088cffab8060a1b9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/sort.h +++ /dev/null @@ -1,55 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace tbb -{ -namespace detail -{ - -template - void stable_sort(execution_policy &exec, - RandomAccessIterator first, - RandomAccessIterator last, - StrictWeakOrdering comp); - -template - void stable_sort_by_key(execution_policy &exec, - RandomAccessIterator1 keys_first, - RandomAccessIterator1 keys_last, - RandomAccessIterator2 values_first, - StrictWeakOrdering comp); - -} // end namespace detail -} // end namespace tbb -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/WALT/mmdet/models/backbones/detectors_resnext.py b/spaces/CVPR/WALT/mmdet/models/backbones/detectors_resnext.py deleted file mode 100644 index 57d032fe37ed82d5ba24e761bdc014cc0ee5ac64..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/backbones/detectors_resnext.py +++ /dev/null @@ -1,122 +0,0 @@ -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .detectors_resnet import Bottleneck as _Bottleneck -from .detectors_resnet import DetectoRS_ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - elif not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class DetectoRS_ResNeXt(DetectoRS_ResNet): - """ResNeXt backbone for DetectoRS. - - Args: - groups (int): The number of groups in ResNeXt. - base_width (int): The base width of ResNeXt. - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(DetectoRS_ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - return super().make_res_layer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/yolo.py b/spaces/CVPR/WALT/mmdet/models/detectors/yolo.py deleted file mode 100644 index 240aab20f857befe25e64114300ebb15a66c6a70..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/yolo.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) 2019 Western Digital Corporation or its affiliates. 
- -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOV3(SingleStageDetector): - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(YOLOV3, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/cyan/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/cyan/__init__.py deleted file mode 100644 index 62e511971bf32f7c928b17dc977ab3df4066544b..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/cyan/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme - - -def cyan(images: List[BuildImage], texts, args): - color = (78, 114, 184) - frame = images[0].convert("RGB").square().resize((500, 500)).color_mask(color) - frame.draw_text( - (400, 40, 480, 280), - "群\n青", - max_fontsize=80, - weight="bold", - fill="white", - stroke_ratio=0.04, - stroke_fill=color, - ).draw_text( - (200, 270, 480, 350), - "YOASOBI", - halign="right", - max_fontsize=40, - fill="white", - stroke_ratio=0.06, - stroke_fill=color, - ) - return frame.save_jpg() - - -add_meme("cyan", cyan, min_images=1, max_images=1, keywords=["群青"]) diff --git a/spaces/CloudOrc/SolidUI/README.md b/spaces/CloudOrc/SolidUI/README.md deleted file mode 100644 index e2a490b6a52aed6bb6c3f5c081611270fd173851..0000000000000000000000000000000000000000 --- a/spaces/CloudOrc/SolidUI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SolidUI -emoji: 🐠 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/StandardEncoding.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/StandardEncoding.py deleted file mode 100644 index bf1388624bef4763d26656497b7f3068cb23e307..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/StandardEncoding.py +++ /dev/null @@ -1,258 +0,0 @@ -StandardEncoding = [ - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - "space", - "exclam", - "quotedbl", - "numbersign", - "dollar", - "percent", - "ampersand", - "quoteright", - "parenleft", - "parenright", - "asterisk", - "plus", - "comma", - "hyphen", - "period", - "slash", - "zero", - "one", - "two", - "three", - "four", - "five", - "six", - "seven", - "eight", - "nine", - "colon", - "semicolon", - "less", - "equal", - "greater", - "question", - "at", - "A", - "B", - "C", - "D", - "E", - "F", - "G", - "H", - "I", - "J", - "K", - "L", - "M", - "N", - "O", - "P", - "Q", - "R", - "S", - "T", - "U", - "V", - "W", - "X", - "Y", - "Z", - "bracketleft", - "backslash", - "bracketright", - "asciicircum", - "underscore", - "quoteleft", - "a", - "b", - "c", - "d", - "e", - "f", - "g", - "h", - "i", - "j", - "k", - "l", - "m", - "n", - "o", - "p", - "q", - 
"r", - "s", - "t", - "u", - "v", - "w", - "x", - "y", - "z", - "braceleft", - "bar", - "braceright", - "asciitilde", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - "exclamdown", - "cent", - "sterling", - "fraction", - "yen", - "florin", - "section", - "currency", - "quotesingle", - "quotedblleft", - "guillemotleft", - "guilsinglleft", - "guilsinglright", - "fi", - "fl", - ".notdef", - "endash", - "dagger", - "daggerdbl", - "periodcentered", - ".notdef", - "paragraph", - "bullet", - "quotesinglbase", - "quotedblbase", - "quotedblright", - "guillemotright", - "ellipsis", - "perthousand", - ".notdef", - "questiondown", - ".notdef", - "grave", - "acute", - "circumflex", - "tilde", - "macron", - "breve", - "dotaccent", - "dieresis", - ".notdef", - "ring", - "cedilla", - ".notdef", - "hungarumlaut", - "ogonek", - "caron", - "emdash", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - "AE", - ".notdef", - "ordfeminine", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - "Lslash", - "Oslash", - "OE", - "ordmasculine", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - ".notdef", - "ae", - ".notdef", - ".notdef", - ".notdef", - "dotlessi", - ".notdef", - ".notdef", - "lslash", - "oslash", - "oe", - "germandbls", - ".notdef", - ".notdef", - ".notdef", - ".notdef", -] diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/path/arc.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/path/arc.py deleted file mode 100644 index 3e0a211e043a9f52954a29ce95de9d2a9f1858d4..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/svgLib/path/arc.py +++ /dev/null @@ -1,153 +0,0 @@ -"""Convert SVG Path's elliptical arcs to Bezier curves. 
- -The code is mostly adapted from Blink's SVGPathNormalizer::DecomposeArcToCubic -https://github.com/chromium/chromium/blob/93831f2/third_party/ -blink/renderer/core/svg/svg_path_parser.cc#L169-L278 -""" -from fontTools.misc.transform import Identity, Scale -from math import atan2, ceil, cos, fabs, isfinite, pi, radians, sin, sqrt, tan - - -TWO_PI = 2 * pi -PI_OVER_TWO = 0.5 * pi - - -def _map_point(matrix, pt): - # apply Transform matrix to a point represented as a complex number - r = matrix.transformPoint((pt.real, pt.imag)) - return r[0] + r[1] * 1j - - -class EllipticalArc(object): - def __init__(self, current_point, rx, ry, rotation, large, sweep, target_point): - self.current_point = current_point - self.rx = rx - self.ry = ry - self.rotation = rotation - self.large = large - self.sweep = sweep - self.target_point = target_point - - # SVG arc's rotation angle is expressed in degrees, whereas Transform.rotate - # uses radians - self.angle = radians(rotation) - - # these derived attributes are computed by the _parametrize method - self.center_point = self.theta1 = self.theta2 = self.theta_arc = None - - def _parametrize(self): - # convert from endopoint to center parametrization: - # https://www.w3.org/TR/SVG/implnote.html#ArcConversionEndpointToCenter - - # If rx = 0 or ry = 0 then this arc is treated as a straight line segment (a - # "lineto") joining the endpoints. - # http://www.w3.org/TR/SVG/implnote.html#ArcOutOfRangeParameters - rx = fabs(self.rx) - ry = fabs(self.ry) - if not (rx and ry): - return False - - # If the current point and target point for the arc are identical, it should - # be treated as a zero length path. This ensures continuity in animations. - if self.target_point == self.current_point: - return False - - mid_point_distance = (self.current_point - self.target_point) * 0.5 - - point_transform = Identity.rotate(-self.angle) - - transformed_mid_point = _map_point(point_transform, mid_point_distance) - square_rx = rx * rx - square_ry = ry * ry - square_x = transformed_mid_point.real * transformed_mid_point.real - square_y = transformed_mid_point.imag * transformed_mid_point.imag - - # Check if the radii are big enough to draw the arc, scale radii if not. 
- # http://www.w3.org/TR/SVG/implnote.html#ArcCorrectionOutOfRangeRadii - radii_scale = square_x / square_rx + square_y / square_ry - if radii_scale > 1: - rx *= sqrt(radii_scale) - ry *= sqrt(radii_scale) - self.rx, self.ry = rx, ry - - point_transform = Scale(1 / rx, 1 / ry).rotate(-self.angle) - - point1 = _map_point(point_transform, self.current_point) - point2 = _map_point(point_transform, self.target_point) - delta = point2 - point1 - - d = delta.real * delta.real + delta.imag * delta.imag - scale_factor_squared = max(1 / d - 0.25, 0.0) - - scale_factor = sqrt(scale_factor_squared) - if self.sweep == self.large: - scale_factor = -scale_factor - - delta *= scale_factor - center_point = (point1 + point2) * 0.5 - center_point += complex(-delta.imag, delta.real) - point1 -= center_point - point2 -= center_point - - theta1 = atan2(point1.imag, point1.real) - theta2 = atan2(point2.imag, point2.real) - - theta_arc = theta2 - theta1 - if theta_arc < 0 and self.sweep: - theta_arc += TWO_PI - elif theta_arc > 0 and not self.sweep: - theta_arc -= TWO_PI - - self.theta1 = theta1 - self.theta2 = theta1 + theta_arc - self.theta_arc = theta_arc - self.center_point = center_point - - return True - - def _decompose_to_cubic_curves(self): - if self.center_point is None and not self._parametrize(): - return - - point_transform = Identity.rotate(self.angle).scale(self.rx, self.ry) - - # Some results of atan2 on some platform implementations are not exact - # enough. So that we get more cubic curves than expected here. Adding 0.001f - # reduces the count of sgements to the correct count. - num_segments = int(ceil(fabs(self.theta_arc / (PI_OVER_TWO + 0.001)))) - for i in range(num_segments): - start_theta = self.theta1 + i * self.theta_arc / num_segments - end_theta = self.theta1 + (i + 1) * self.theta_arc / num_segments - - t = (4 / 3) * tan(0.25 * (end_theta - start_theta)) - if not isfinite(t): - return - - sin_start_theta = sin(start_theta) - cos_start_theta = cos(start_theta) - sin_end_theta = sin(end_theta) - cos_end_theta = cos(end_theta) - - point1 = complex( - cos_start_theta - t * sin_start_theta, - sin_start_theta + t * cos_start_theta, - ) - point1 += self.center_point - target_point = complex(cos_end_theta, sin_end_theta) - target_point += self.center_point - point2 = target_point - point2 += complex(t * sin_end_theta, -t * cos_end_theta) - - point1 = _map_point(point_transform, point1) - point2 = _map_point(point_transform, point2) - target_point = _map_point(point_transform, target_point) - - yield point1, point2, target_point - - def draw(self, pen): - for point1, point2, target_point in self._decompose_to_cubic_curves(): - pen.curveTo( - (point1.real, point1.imag), - (point2.real, point2.imag), - (target_point.real, target_point.imag), - ) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/unicodedata/OTTags.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/unicodedata/OTTags.py deleted file mode 100644 index 859a3bcdcf29bcdda827edad766ffeab2f0b636a..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/unicodedata/OTTags.py +++ /dev/null @@ -1,50 +0,0 @@ -# Data updated to OpenType 1.8.2 as of January 2018. 
- -# Complete list of OpenType script tags at: -# https://www.microsoft.com/typography/otspec/scripttags.htm - -# Most of the script tags are the same as the ISO 15924 tag but lowercased, -# so we only have to handle the exceptional cases: -# - KATAKANA and HIRAGANA both map to 'kana'; -# - spaces at the end are preserved, unlike ISO 15924; -# - we map special script codes for Inherited, Common and Unknown to DFLT. - -DEFAULT_SCRIPT = "DFLT" - -SCRIPT_ALIASES = { - "jamo": "hang", -} - -SCRIPT_EXCEPTIONS = { - "Hira": "kana", - "Hrkt": "kana", - "Laoo": "lao ", - "Yiii": "yi ", - "Nkoo": "nko ", - "Vaii": "vai ", - "Zmth": "math", - "Zinh": DEFAULT_SCRIPT, - "Zyyy": DEFAULT_SCRIPT, - "Zzzz": DEFAULT_SCRIPT, -} - -SCRIPT_EXCEPTIONS_REVERSED = { - "math": "Zmth", -} - -NEW_SCRIPT_TAGS = { - "Beng": ("bng2",), - "Deva": ("dev2",), - "Gujr": ("gjr2",), - "Guru": ("gur2",), - "Knda": ("knd2",), - "Mlym": ("mlm2",), - "Orya": ("ory2",), - "Taml": ("tml2",), - "Telu": ("tel2",), - "Mymr": ("mym2",), -} - -NEW_SCRIPT_TAGS_REVERSED = { - value: key for key, values in NEW_SCRIPT_TAGS.items() for value in values -} diff --git a/spaces/Datasculptor/AIart_sources_of_inspiration/app.py b/spaces/Datasculptor/AIart_sources_of_inspiration/app.py deleted file mode 100644 index be317f11b6d5252cf35832a4f84db8f56d180371..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/AIart_sources_of_inspiration/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import tensorflow as tf -import pandas as pd -import gradio as gr - -authors_df = pd.read_csv('authors.csv') -labels = sorted(list(authors_df.name)) - -model = tf.keras.models.load_model('efficientnetb0.h5') - -description = 'This is a DEMO that attempts to recognize the inspirations used by the AI art generator. After uploading a picture of an image, the application displays the predicted artist along with the probability of predicting the top three authors.The DEMO uses EfficientNetB0 convolutional neural network as a base model whose classifier was modified and trained the 8,000+ paintings from [Kaggle](https://www.kaggle.com/datasets/ikarus777/best-artworks-of-all-time) dataset. Model trained by osydorchuk89. Given the dataset limitations, the model only recognizes paintings of [50 artists](https://huggingface.co/spaces/osydorchuk/painting_authors/blob/main/authors.csv).' - -def predict_author(input): - if input is None: - return 'Please upload an image' - input = input.reshape((-1, 224, 224, 3)) - prediction = model.predict(input).flatten() - confidences = {labels[i]: float(prediction[i]) for i in range(50)} - return confidences - -demo = gr.Interface( - title='the AI art generator sources of inspiration', - description=description, - fn=predict_author, - inputs=gr.Image(shape=(224, 224)), - outputs=gr.Label(num_top_classes=3), - examples=['test_pics/eva_miro.jpg', 'test_pics/eva_bosch.jpg', 'test_pics/eva_miro_2.jpg', 'test_pics/eva_rtology.jpg'] - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/DhruvShek/chatlm/models.py b/spaces/DhruvShek/chatlm/models.py deleted file mode 100644 index 8f9b1e6aa592f570bfc610cf98c98c454cf2b7b7..0000000000000000000000000000000000000000 --- a/spaces/DhruvShek/chatlm/models.py +++ /dev/null @@ -1,162 +0,0 @@ -import torch -import torch.nn as nn -import math -import torch.nn.functional as F - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -class Embeddings(nn.Module): - """ - Implements embeddings of the words and adds their positional encodings. 
- """ - def __init__(self, vocab_size, d_model, max_len = 50): - super(Embeddings, self).__init__() - self.d_model = d_model - self.dropout = nn.Dropout(0.1) - self.embed = nn.Embedding(vocab_size, d_model) - self.pe = self.create_positinal_encoding(max_len, self.d_model) - self.dropout = nn.Dropout(0.1) - - def create_positinal_encoding(self, max_len, d_model): - pe = torch.zeros(max_len, d_model).to(device) - for pos in range(max_len): # for each position of the word - for i in range(0, d_model, 2): # for each dimension of the each position - pe[pos, i] = math.sin(pos / (10000 ** ((2 * i)/d_model))) - pe[pos, i + 1] = math.cos(pos / (10000 ** ((2 * (i + 1))/d_model))) - pe = pe.unsqueeze(0) # include the batch size - return pe - - def forward(self, encoded_words): - embedding = self.embed(encoded_words) * math.sqrt(self.d_model) - embedding += self.pe[:, :embedding.size(1)] # pe will automatically be expanded with the same batch size as encoded_words - embedding = self.dropout(embedding) - return embedding - - - -class MultiHeadAttention(nn.Module): - - def __init__(self, heads, d_model): - - super(MultiHeadAttention, self).__init__() - assert d_model % heads == 0 - self.d_k = d_model // heads - self.heads = heads - self.dropout = nn.Dropout(0.1) - self.query = nn.Linear(d_model, d_model) - self.key = nn.Linear(d_model, d_model) - self.value = nn.Linear(d_model, d_model) - self.concat = nn.Linear(d_model, d_model) - - def forward(self, query, key, value, mask): - """ - query, key, value of shape: (batch_size, max_len, 512) - mask of shape: (batch_size, 1, 1, max_words) - """ - # (batch_size, max_len, 512) - query = self.query(query) - key = self.key(key) - value = self.value(value) - - # (batch_size, max_len, 512) --> (batch_size, max_len, h, d_k) --> (batch_size, h, max_len, d_k) - query = query.view(query.shape[0], -1, self.heads, self.d_k).permute(0, 2, 1, 3) - key = key.view(key.shape[0], -1, self.heads, self.d_k).permute(0, 2, 1, 3) - value = value.view(value.shape[0], -1, self.heads, self.d_k).permute(0, 2, 1, 3) - - # (batch_size, h, max_len, d_k) matmul (batch_size, h, d_k, max_len) --> (batch_size, h, max_len, max_len) - scores = torch.matmul(query, key.permute(0,1,3,2)) / math.sqrt(query.size(-1)) - scores = scores.masked_fill(mask == 0, -1e9) # (batch_size, h, max_len, max_len) - weights = F.softmax(scores, dim = -1) # (batch_size, h, max_len, max_len) - weights = self.dropout(weights) - # (batch_size, h, max_len, max_len) matmul (batch_size, h, max_len, d_k) --> (batch_size, h, max_len, d_k) - context = torch.matmul(weights, value) - # (batch_size, h, max_len, d_k) --> (batch_size, max_len, h, d_k) --> (batch_size, max_len, h * d_k) - context = context.permute(0,2,1,3).contiguous().view(context.shape[0], -1, self.heads * self.d_k) - # (batch_size, max_len, h * d_k) - interacted = self.concat(context) - return interacted - - - -class FeedForward(nn.Module): - - def __init__(self, d_model, middle_dim = 2048): - super(FeedForward, self).__init__() - - self.fc1 = nn.Linear(d_model, middle_dim) - self.fc2 = nn.Linear(middle_dim, d_model) - self.dropout = nn.Dropout(0.1) - - def forward(self, x): - out = F.relu(self.fc1(x)) - out = self.fc2(self.dropout(out)) - return out - - -class EncoderLayer(nn.Module): - - def __init__(self, d_model, heads): - super(EncoderLayer, self).__init__() - self.layernorm = nn.LayerNorm(d_model) - self.self_multihead = MultiHeadAttention(heads, d_model) - self.feed_forward = FeedForward(d_model) - self.dropout = nn.Dropout(0.1) - - def forward(self, 
embeddings, mask): - interacted = self.dropout(self.self_multihead(embeddings, embeddings, embeddings, mask)) - interacted = self.layernorm(interacted + embeddings) - feed_forward_out = self.dropout(self.feed_forward(interacted)) - encoded = self.layernorm(feed_forward_out + interacted) - return encoded - - -class DecoderLayer(nn.Module): - - def __init__(self, d_model, heads): - super(DecoderLayer, self).__init__() - self.layernorm = nn.LayerNorm(d_model) - self.self_multihead = MultiHeadAttention(heads, d_model) - self.src_multihead = MultiHeadAttention(heads, d_model) - self.feed_forward = FeedForward(d_model) - self.dropout = nn.Dropout(0.1) - - def forward(self, embeddings, encoded, src_mask, target_mask): - query = self.dropout(self.self_multihead(embeddings, embeddings, embeddings, target_mask)) - query = self.layernorm(query + embeddings) - interacted = self.dropout(self.src_multihead(query, encoded, encoded, src_mask)) - interacted = self.layernorm(interacted + query) - feed_forward_out = self.dropout(self.feed_forward(interacted)) - decoded = self.layernorm(feed_forward_out + interacted) - return decoded - - -class Transformer(nn.Module): - - def __init__(self, d_model, heads, num_layers, word_map): - super(Transformer, self).__init__() - - self.d_model = d_model - self.vocab_size = len(word_map) - self.embed = Embeddings(self.vocab_size, d_model) - self.encoder = nn.ModuleList([EncoderLayer(d_model, heads) for _ in range(num_layers)]) - self.decoder = nn.ModuleList([DecoderLayer(d_model, heads) for _ in range(num_layers)]) - self.logit = nn.Linear(d_model, self.vocab_size) - - def encode(self, src_words, src_mask): - src_embeddings = self.embed(src_words) - for layer in self.encoder: - src_embeddings = layer(src_embeddings, src_mask) - return src_embeddings - - def decode(self, target_words, target_mask, src_embeddings, src_mask): - tgt_embeddings = self.embed(target_words) - for layer in self.decoder: - tgt_embeddings = layer(tgt_embeddings, src_embeddings, src_mask, target_mask) - return tgt_embeddings - - def forward(self, src_words, src_mask, target_words, target_mask): - encoded = self.encode(src_words, src_mask) - decoded = self.decode(target_words, target_mask, encoded, src_mask) - out = F.log_softmax(self.logit(decoded), dim = 2) - return out diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.h b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.h deleted file mode 100644 index d0246aa06c3dcd5919111fdc914136014b9044b5..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.h +++ /dev/null @@ -1,40 +0,0 @@ -// Copyright (c) SenseTime Research. All rights reserved. - -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// CUDA kernel parameters. 
- -struct bias_act_kernel_params -{ - const void* x; // [sizeX] - const void* b; // [sizeB] or NULL - const void* xref; // [sizeX] or NULL - const void* yref; // [sizeX] or NULL - const void* dy; // [sizeX] or NULL - void* y; // [sizeX] - - int grad; - int act; - float alpha; - float gain; - float clamp; - - int sizeX; - int sizeB; - int stepB; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template void* choose_bias_act_kernel(const bias_act_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/DunnBC22/Password_Strength_Classifier_with_CodeBERT/app.py b/spaces/DunnBC22/Password_Strength_Classifier_with_CodeBERT/app.py deleted file mode 100644 index 82a417e4b8879f246c55fe6e0769923f0e820cf3..0000000000000000000000000000000000000000 --- a/spaces/DunnBC22/Password_Strength_Classifier_with_CodeBERT/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import os - -os.system('python -m pip install --upgrade pip') -os.system('pip install transformers torch') - -import gradio as gr -from transformers import pipeline - -unique_classes = [ - "Weak", - "Medium", - "Strong" - ] - -id2label = {f"LABEL_{idx}":label for idx, label in enumerate(unique_classes)} - -def classify_password(text): - password_clf = pipeline(model="DunnBC22/codebert-base-Password_Strength_Classifier") - password_result = password_clf(text) - return f"The password is {id2label[password_result[0]['label']]} with a probability of {password_result[0]['score']*100:.2f}" - -title = "Classify Password Strength" -description = """ -This is a demo of a password classifier. The feedback should not be taken as a guarantee of a password's strength. -""" - -article = """ -

    -| CodeBERT: A Pre-Trained Model for Programming & Natural Languages -| Microsoft CodeBERT-Base Documentation -| My Code for this Fune-Tuned Project -| Dataset Source -|

    -""" - -examples = ['94311163nobp', 'mpompo1', 'dK4dWOjM1OAPeisw'] - -gr.Interface(fn=classify_password, - inputs=gr.inputs.Textbox(), - outputs=gr.outputs.Textbox(), - title=title, - article=article, - description=description, - examples=examples, - theme='abidlabs/dracula_revamped' - ).launch() \ No newline at end of file diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/Training.md b/spaces/EXPOSUREEE/Ai-Image-Enhancer/Training.md deleted file mode 100644 index 64704e1d2e1f334984232afd12b245235b274a9e..0000000000000000000000000000000000000000 --- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/Training.md +++ /dev/null @@ -1,100 +0,0 @@ -# :computer: How to Train Real-ESRGAN - -The training codes have been released.
    -Note that the code has been heavily refactored, so there may be some bugs or performance drops. You are welcome to report issues, and I will also retrain the models. - -## Overview - -The training has been divided into two stages. These two stages have the same data synthesis process and training pipeline, except for the loss functions. Specifically, - -1. We first train Real-ESRNet with L1 loss from the pre-trained model ESRGAN. -1. We then use the trained Real-ESRNet model as an initialization of the generator, and train the Real-ESRGAN with a combination of L1 loss, perceptual loss and GAN loss. - -## Dataset Preparation - -We use DF2K (DIV2K and Flickr2K) + OST datasets for our training. Only HR images are required. <br>
    -You can download from : - -1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip -2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar -3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip - -For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample HR images to obtain several Ground-Truth images with different scales. - -We then crop DF2K images into sub-images for faster IO and processing. - -You need to prepare a txt file containing the image paths. The following are some examples in `meta_info_DF2Kmultiscale+OST_sub.txt` (As different users may have different sub-images partitions, this file is not suitable for your purpose and you need to prepare your own txt file): - -```txt -DF2K_HR_sub/000001_s001.png -DF2K_HR_sub/000001_s002.png -DF2K_HR_sub/000001_s003.png -... -``` - -## Train Real-ESRNet - -1. Download pre-trained model [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) into `experiments/pretrained_models`. - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models - ``` -1. Modify the content in the option file `options/train_realesrnet_x4plus.yml` accordingly: - ```yml - train: - name: DF2K+OST - type: RealESRGANDataset - dataroot_gt: datasets/DF2K # modify to the root path of your folder - meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generate meta info txt - io_backend: - type: disk - ``` -1. If you want to perform validation during training, uncomment those lines and modify accordingly: - ```yml - # Uncomment these for validation - # val: - # name: validation - # type: PairedImageDataset - # dataroot_gt: path_to_gt - # dataroot_lq: path_to_lq - # io_backend: - # type: disk - - ... - - # Uncomment these for validation - # validation settings - # val: - # val_freq: !!float 5e3 - # save_img: True - - # metrics: - # psnr: # metric name, can be arbitrary - # type: calculate_psnr - # crop_border: 4 - # test_y_channel: false - ``` -1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. We use four GPUs for training: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug - ``` -1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary. - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume - ``` - -## Train Real-ESRGAN - -1. After the training of Real-ESRNet, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to specify the pre-trained path to other files, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`. -1. Modify the option file `train_realesrgan_x4plus.yml` accordingly. Most modifications are similar to those listed above. -1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. 
We use four GPUs for training: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug - ``` -1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary. - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume - ``` diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets.py deleted file mode 100644 index 5da3948c2f2e9edcc3cdac49bdf9f738e403de40..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets.py +++ /dev/null @@ -1,123 +0,0 @@ -import layers -import torch -import torch.nn.functional as F -from torch import nn - -from . import spec_utils - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - 
return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Eddycrack864/Applio-Inference/lib/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/Eddycrack864/Applio-Inference/lib/uvr5_pack/lib_v5/nets_537227KB.py deleted file mode 100644 index a1bb530e006482704f234c2e739a695174142941..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/lib/uvr5_pack/lib_v5/nets_537227KB.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -import numpy as np -from torch import nn -import torch.nn.functional as F - -from . import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - 
return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/Epoching/3D_Photo_Inpainting/download.sh b/spaces/Epoching/3D_Photo_Inpainting/download.sh deleted file mode 100644 index a2ce8b2686dd62791510af335e306ca8de4ea28d..0000000000000000000000000000000000000000 --- a/spaces/Epoching/3D_Photo_Inpainting/download.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/sh -fb_status=$(wget --spider -S https://filebox.ece.vt.edu/ 2>&1 | grep "HTTP/1.1 200 OK") - -mkdir checkpoints - -echo "downloading from filebox ..." -wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/color-model.pth -wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/depth-model.pth -wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/edge-model.pth -wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/model.pt - -mv color-model.pth checkpoints/. -mv depth-model.pth checkpoints/. -mv edge-model.pth checkpoints/. -mv model.pt MiDaS/. - -echo "cloning from BoostingMonocularDepth ..." -git clone https://github.com/compphoto/BoostingMonocularDepth.git -mkdir -p BoostingMonocularDepth/pix2pix/checkpoints/mergemodel/ - -echo "downloading mergenet weights ..." -wget https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/latest_net_G.pth -mv latest_net_G.pth BoostingMonocularDepth/pix2pix/checkpoints/mergemodel/ -wget https://github.com/intel-isl/MiDaS/releases/download/v2/model-f46da743.pt -mv model-f46da743.pt BoostingMonocularDepth/midas/model.pt diff --git a/spaces/EronSamez/RVC_HFmeu/tensorlowest.py b/spaces/EronSamez/RVC_HFmeu/tensorlowest.py deleted file mode 100644 index eccd4dbf3494434e59f7defaae6ab91797263b90..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/tensorlowest.py +++ /dev/null @@ -1,123 +0,0 @@ -from tensorboard.backend.event_processing import event_accumulator - -import os -from shutil import copy2 -from re import search as RSearch -import pandas as pd -from ast import literal_eval as LEval - -weights_dir = 'weights/' - -def find_biggest_tensorboard(tensordir): - try: - files = [f for f in os.listdir(tensordir) if f.endswith('.0')] - if not files: - print("No files with the '.0' extension found!") - return - - max_size = 0 - biggest_file = "" - - for file in files: - file_path = os.path.join(tensordir, file) - if os.path.isfile(file_path): - file_size = os.path.getsize(file_path) - if file_size > max_size: - max_size = file_size - biggest_file = file - - return biggest_file - - except FileNotFoundError: - print("Couldn't find your model!") - return - -def main(model_name, save_freq, lastmdls): - global lowestval_weight_dir, scl - - tensordir = os.path.join('logs', model_name) - lowestval_weight_dir = os.path.join(tensordir, "lowestvals") - - latest_file = find_biggest_tensorboard(tensordir) - - if latest_file is None: - print("Couldn't find a valid tensorboard file!") - return - - tfile = os.path.join(tensordir, latest_file) - - ea = event_accumulator.EventAccumulator(tfile, - size_guidance={ - 
event_accumulator.COMPRESSED_HISTOGRAMS: 500, - event_accumulator.IMAGES: 4, - event_accumulator.AUDIO: 4, - event_accumulator.SCALARS: 0, - event_accumulator.HISTOGRAMS: 1, - }) - - ea.Reload() - ea.Tags() - - scl = ea.Scalars('loss/g/total') - - listwstep = {} - - for val in scl: - if (val.step // save_freq) * save_freq in [val.step for val in scl]: - listwstep[float(val.value)] = (val.step // save_freq) * save_freq - - lowest_vals = sorted(listwstep.keys())[:lastmdls] - - sorted_dict = {value: step for value, step in listwstep.items() if value in lowest_vals} - - return sorted_dict - -def selectweights(model_name, file_dict, weights_dir, lowestval_weight_dir): - os.makedirs(lowestval_weight_dir, exist_ok=True) - logdir = [] - files = [] - lbldict = { - 'Values': {}, - 'Names': {} - } - weights_dir_path = os.path.join(weights_dir, "") - low_val_path = os.path.join(os.getcwd(), os.path.join(lowestval_weight_dir, "")) - - try: - file_dict = LEval(file_dict) - except Exception as e: - print(f"Error! {e}") - return f"Couldn't load tensorboard file! {e}" - - weights = [f for f in os.scandir(weights_dir)] - for key, value in file_dict.items(): - pattern = fr"^{model_name}_.*_s{value}\.pth$" - matching_weights = [f.name for f in weights if f.is_file() and RSearch(pattern, f.name)] - for weight in matching_weights: - source_path = weights_dir_path + weight - destination_path = os.path.join(lowestval_weight_dir, weight) - - copy2(source_path, destination_path) - - logdir.append(f"File = {weight} Value: {key}, Step: {value}") - - lbldict['Names'][weight] = weight - lbldict['Values'][weight] = key - - files.append(low_val_path + weight) - - print(f"File = {weight} Value: {key}, Step: {value}") - - yield ('\n'.join(logdir), files, pd.DataFrame(lbldict)) - - - return ''.join(logdir), files, pd.DataFrame(lbldict) - - -if __name__ == "__main__": - model = str(input("Enter the name of the model: ")) - sav_freq = int(input("Enter save frequency of the model: ")) - ds = main(model, sav_freq) - - if ds: selectweights(model, ds, weights_dir, lowestval_weight_dir) - \ No newline at end of file diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r50dcnv2_fpnc_1200e_icdar2015.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r50dcnv2_fpnc_1200e_icdar2015.py deleted file mode 100644 index 251b7bc2faaaa254766e0902c4238b2917f0d230..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/dbnet/dbnet_r50dcnv2_fpnc_1200e_icdar2015.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_sgd_1200e.py', - '../../_base_/det_models/dbnet_r50dcnv2_fpnc.py', - '../../_base_/det_datasets/icdar2015.py', - '../../_base_/det_pipelines/dbnet_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline_r50dcnv2 = {{_base_.train_pipeline_r50dcnv2}} -test_pipeline_4068_1024 = {{_base_.test_pipeline_4068_1024}} - -load_from = 'checkpoints/textdet/dbnet/res50dcnv2_synthtext.pth' - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline_r50dcnv2), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_4068_1024), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_4068_1024)) - -evaluation = 
dict(interval=100, metric='hmean-iou') diff --git a/spaces/GAIR/Factool/plugin_config/main.py b/spaces/GAIR/Factool/plugin_config/main.py deleted file mode 100644 index 5cae0ed7d4d4584a5dd35d303ab9f2899cf04be1..0000000000000000000000000000000000000000 --- a/spaces/GAIR/Factool/plugin_config/main.py +++ /dev/null @@ -1,91 +0,0 @@ -import json -from fastapi import FastAPI, Request -from fastapi.responses import JSONResponse, FileResponse -from fastapi.staticfiles import StaticFiles -from fastapi.middleware.cors import CORSMiddleware -from pydantic import BaseModel -from typing import Optional, List, Dict, Union -from factool.factool import Factool - -foundation_model = 'gpt-4' -factool_instance = Factool(foundation_model) - -app = FastAPI() - -app.add_middleware( - CORSMiddleware, - allow_origins=["https://chat.openai.com"], - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - -class FactCheckRequest(BaseModel): - prompt: str - response: str - entry_point: Optional[str] - -class FactCheckResponse(BaseModel): - fact_check_result: List[Dict[str, Union[str, List[str]]]] - -fact_checks = {} - -@app.post("/fact_check_kbqa") -async def fact_check_kbqa(request_data: FactCheckRequest): - request_obj = FactCheckRequest(**request_data.dict()) - fact_check_result = await factool_instance.run_for_plugin([{'prompt': request_obj.prompt, 'response': request_obj.response, 'category': 'kbqa'}]) - fact_check_id = len(fact_checks) + 1 - fact_checks[fact_check_id] = fact_check_result - return JSONResponse(content={"fact_check_id": fact_check_id, "fact_check_result": fact_check_result}) - -@app.post("/fact_check_code") -async def fact_check_code(request_data: FactCheckRequest): - request_obj = FactCheckRequest(**request_data.dict()) - fact_check_result = await factool_instance.run_for_plugin([{'prompt': request_obj.prompt, 'response': request_obj.response, 'category': 'code', 'entry_point': request_obj.entry_point}]) - fact_check_id = len(fact_checks) + 1 - fact_checks[fact_check_id] = fact_check_result - return JSONResponse(content={"fact_check_id": fact_check_id, "fact_check_result": fact_check_result}) - -@app.post("/fact_check_math") -async def fact_check_math(request_data: FactCheckRequest): - request_obj = FactCheckRequest(**request_data.dict()) - fact_check_result = await factool_instance.run_for_plugin([{'prompt': request_obj.prompt, 'response': request_obj.response, 'category': 'math'}]) - fact_check_id = len(fact_checks) + 1 - fact_checks[fact_check_id] = fact_check_result - return JSONResponse(content={"fact_check_id": fact_check_id, "fact_check_result": fact_check_result}) - -@app.post("/fact_check_scientific_literature") -async def fact_check_scientific_literature(request_data: FactCheckRequest): - request_obj = FactCheckRequest(**request_data.dict()) - fact_check_result = await factool_instance.run_for_plugin([{'prompt': request_obj.prompt, 'response': request_obj.response, 'category': 'scientific'}]) - fact_check_id = len(fact_checks) + 1 - fact_checks[fact_check_id] = fact_check_result - return JSONResponse(content={"fact_check_id": fact_check_id, "fact_check_result": fact_check_result}) - -@app.get("/get_fact_check/{fact_check_id}") -async def get_fact_check(fact_check_id: int): - if fact_check_id in fact_checks: - fact_check_result = fact_checks[fact_check_id] - return JSONResponse(content={"fact_check_id": fact_check_id, "fact_check_result": fact_check_result}) - else: - return JSONResponse(content={"error": "Fact check not found"}) - -@app.get("/logo.png") -async def 
plugin_logo(): - filename = "logo.png" - return FileResponse(filename, media_type="image/png") - -@app.get("/.well-known/ai-plugin.json") -async def read_plugin_manifest(): - return FileResponse(".well-known/ai-plugin.json") - -@app.get("/openapi.yaml") -async def openapi_spec(): - return FileResponse("./openapi.yaml") - -def main(): - import uvicorn - uvicorn.run(app, host="0.0.0.0", port=5003, log_level="info") - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/GIZ/SDSN-demo/appStore/info.py b/spaces/GIZ/SDSN-demo/appStore/info.py deleted file mode 100644 index 9f27362233aaa4da23013ddc67d7d7d33a8ea997..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/appStore/info.py +++ /dev/null @@ -1,72 +0,0 @@ -import streamlit as st - -def app(): - - - with open('style.css') as f: - st.markdown(f"", unsafe_allow_html=True) - - st.markdown("

    Policy Action Tracker

    ", - unsafe_allow_html=True) - - - st.markdown("
    The Policy Action Tracker is an open-source\ - digital tool which aims to assist policy analysts and \ - other users in extracting and filtering relevant \ - information from policy documents.
    ", - unsafe_allow_html=True) - footer = """ - - """ - st.markdown(footer, unsafe_allow_html=True) - - c1, c2, c3 = st.columns([8,1,12]) - with c1: - st.image("docStore/img/ndc.png") - with c3: - st.markdown('
    The manual extraction \ - of relevant information from text documents is a \ - time-consuming task for any policy analyst. As the amount and length of \ - public policy documents in relation to sustainable development (such as \ - National Development Plans and Nationally Determined Contributions) \ - continuously increases, a major challenge for policy action tracking – the \ - evaluation of stated goals and targets and their actual implementation on \ - the ground – arises. Luckily, Artificial Intelligence (AI) and Natural \ - Language Processing (NLP) methods can help in shortening and easing this \ - task for policy analysts.

    ', - unsafe_allow_html=True) - - intro = """ -
    - - For this purpose, the United Nations Sustainable Development Solutions \ - Network (SDSN) and the Deutsche Gesellschaft für Internationale \ - Zusammenarbeit (GIZ) GmbH are collaborated in the development \ - of this AI-powered open-source web application that helps find and extract \ - relevant information from public policy documents faster to facilitate \ - evidence-based decision-making processes in sustainable development and beyond. - - This tool allows policy analysts and other users the possibility to rapidly \ - search for relevant information/paragraphs in the document according to the \ - user’s interest, classify the document’s content according to the Sustainable \ - Development Goals (SDGs), and compare climate-related policy documents and NDCs \ - across countries using open data from the German Institute of Development and \ - Sustainability’s (IDOS) NDC Explorer. - To understand the application's functionalities and learn more about ß - the project, see the attached concept note. We hope you like our application 😊 - - -
    -
    - """ - st.markdown(intro, unsafe_allow_html=True) - # st.image("docStore/img/paris.png") diff --git a/spaces/GXSA/bingo/src/pages/api/kblob.ts b/spaces/GXSA/bingo/src/pages/api/kblob.ts deleted file mode 100644 index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/pages/api/kblob.ts +++ /dev/null @@ -1,56 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import FormData from 'form-data' -import { fetch } from '@/lib/isomorphic' -import { KBlobRequest } from '@/lib/bots/bing/types' - -const API_DOMAIN = 'https://bing.vcanbb.top' - -export const config = { - api: { - bodyParser: { - sizeLimit: '10mb' // Set desired value here - } - } -} - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest - - const formData = new FormData() - formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - if (imageBase64) { - formData.append('imageBase64', imageBase64) - } - - const response = await fetch(`${API_DOMAIN}/images/kblob`, - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": `${API_DOMAIN}/web/index.html`, - "Referrer-Policy": "origin-when-cross-origin", - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - ...formData.getHeaders() - } - } - ).then(res => res.text()) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: '请更换 IP 或代理后重试' } })) - } catch (e) { - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco.py deleted file mode 100644 index 4b28a59280e6701d31afeeaae7ae12cdbd4fb95e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_cascade_rcnn_r50_fpn_1x_coco.py +++ /dev/null @@ -1,86 +0,0 @@ -_base_ = [ - '../_base_/models/cascade_rcnn_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -model = dict( - roi_head=dict(bbox_head=[ - dict( - type='SABLHead', - num_classes=80, - cls_in_channels=256, - reg_in_channels=256, - roi_feat_size=7, - reg_feat_up_ratio=2, - reg_pre_kernel=3, - reg_post_kernel=3, - reg_pre_num=2, - reg_post_num=1, - cls_out_channels=1024, - reg_offset_out_channels=256, - reg_cls_out_channels=256, - num_cls_fcs=1, - num_reg_fcs=0, - reg_class_agnostic=True, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', num_buckets=14, scale_factor=1.7), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox_reg=dict(type='SmoothL1Loss', beta=0.1, - loss_weight=1.0)), - dict( - type='SABLHead', - num_classes=80, - cls_in_channels=256, - reg_in_channels=256, - roi_feat_size=7, - reg_feat_up_ratio=2, - reg_pre_kernel=3, - reg_post_kernel=3, - reg_pre_num=2, - reg_post_num=1, - cls_out_channels=1024, - 
reg_offset_out_channels=256, - reg_cls_out_channels=256, - num_cls_fcs=1, - num_reg_fcs=0, - reg_class_agnostic=True, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', num_buckets=14, scale_factor=1.5), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox_reg=dict(type='SmoothL1Loss', beta=0.1, - loss_weight=1.0)), - dict( - type='SABLHead', - num_classes=80, - cls_in_channels=256, - reg_in_channels=256, - roi_feat_size=7, - reg_feat_up_ratio=2, - reg_pre_kernel=3, - reg_post_kernel=3, - reg_pre_num=2, - reg_post_num=1, - cls_out_channels=1024, - reg_offset_out_channels=256, - reg_cls_out_channels=256, - num_cls_fcs=1, - num_reg_fcs=0, - reg_class_agnostic=True, - norm_cfg=None, - bbox_coder=dict( - type='BucketingBBoxCoder', num_buckets=14, scale_factor=1.3), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox_reg=dict(type='SmoothL1Loss', beta=0.1, loss_weight=1.0)) - ])) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/regnet.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/regnet.py deleted file mode 100644 index 91a602a952226cebb5fd0e3e282c6f98ae4fa455..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/regnet.py +++ /dev/null @@ -1,325 +0,0 @@ -import numpy as np -import torch.nn as nn -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .resnet import ResNet -from .resnext import Bottleneck - - -@BACKBONES.register_module() -class RegNet(ResNet): - """RegNet backbone. - - More details can be found in `paper `_ . - - Args: - arch (dict): The parameter of RegNets. - - - w0 (int): initial width - - wa (float): slope of width - - wm (float): quantization parameter to quantize the width - - depth (int): depth of the backbone - - group_w (int): width of group - - bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck. - strides (Sequence[int]): Strides of the first block of each stage. - base_channels (int): Base channels after stem layer. - in_channels (int): Number of input image channels. Default: 3. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmdet.models import RegNet - >>> import torch - >>> self = RegNet( - arch=dict( - w0=88, - wa=26.31, - wm=2.25, - group_w=48, - depth=25, - bot_mul=1.0)) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 96, 8, 8) - (1, 192, 4, 4) - (1, 432, 2, 2) - (1, 1008, 1, 1) - """ - arch_settings = { - 'regnetx_400mf': - dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0), - 'regnetx_800mf': - dict(w0=56, wa=35.73, wm=2.28, group_w=16, depth=16, bot_mul=1.0), - 'regnetx_1.6gf': - dict(w0=80, wa=34.01, wm=2.25, group_w=24, depth=18, bot_mul=1.0), - 'regnetx_3.2gf': - dict(w0=88, wa=26.31, wm=2.25, group_w=48, depth=25, bot_mul=1.0), - 'regnetx_4.0gf': - dict(w0=96, wa=38.65, wm=2.43, group_w=40, depth=23, bot_mul=1.0), - 'regnetx_6.4gf': - dict(w0=184, wa=60.83, wm=2.07, group_w=56, depth=17, bot_mul=1.0), - 'regnetx_8.0gf': - dict(w0=80, wa=49.56, wm=2.88, group_w=120, depth=23, bot_mul=1.0), - 'regnetx_12gf': - dict(w0=168, wa=73.36, wm=2.37, group_w=112, depth=19, bot_mul=1.0), - } - - def __init__(self, - arch, - in_channels=3, - stem_channels=32, - base_channels=32, - strides=(2, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - with_cp=False, - zero_init_residual=True): - super(ResNet, self).__init__() - - # Generate RegNet parameters first - if isinstance(arch, str): - assert arch in self.arch_settings, \ - f'"arch": "{arch}" is not one of the' \ - ' arch_settings' - arch = self.arch_settings[arch] - elif not isinstance(arch, dict): - raise ValueError('Expect "arch" to be either a string ' - f'or a dict, got {type(arch)}') - - widths, num_stages = self.generate_regnet( - arch['w0'], - arch['wa'], - arch['wm'], - arch['depth'], - ) - # Convert to per stage format - stage_widths, stage_blocks = self.get_stages_from_blocks(widths) - # Generate group widths and bot muls - group_widths = [arch['group_w'] for _ in range(num_stages)] - self.bottleneck_ratio = [arch['bot_mul'] for _ in range(num_stages)] - # Adjust the compatibility of stage_widths and group_widths - stage_widths, group_widths = self.adjust_width_group( - stage_widths, self.bottleneck_ratio, group_widths) - - # Group params by stage - self.stage_widths = stage_widths - self.group_widths = group_widths - self.depth = sum(stage_blocks) - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.zero_init_residual = zero_init_residual - self.block = Bottleneck - expansion_bak = self.block.expansion - self.block.expansion = 1 - self.stage_blocks = stage_blocks[:num_stages] - - self._make_stem_layer(in_channels, stem_channels) - - self.inplanes = stem_channels - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = self.strides[i] - dilation = self.dilations[i] - group_width = self.group_widths[i] - width = int(round(self.stage_widths[i] * self.bottleneck_ratio[i])) - stage_groups = 
width // group_width - - dcn = self.dcn if self.stage_with_dcn[i] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, i) - else: - stage_plugins = None - - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=self.stage_widths[i], - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - plugins=stage_plugins, - groups=stage_groups, - base_width=group_width, - base_channels=self.stage_widths[i]) - self.inplanes = self.stage_widths[i] - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = stage_widths[-1] - self.block.expansion = expansion_bak - - def _make_stem_layer(self, in_channels, base_channels): - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - base_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, base_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - - def generate_regnet(self, - initial_width, - width_slope, - width_parameter, - depth, - divisor=8): - """Generates per block width from RegNet parameters. - - Args: - initial_width ([int]): Initial width of the backbone - width_slope ([float]): Slope of the quantized linear function - width_parameter ([int]): Parameter used to quantize the width. - depth ([int]): Depth of the backbone. - divisor (int, optional): The divisor of channels. Defaults to 8. - - Returns: - list, int: return a list of widths of each stage and the number \ - of stages - """ - assert width_slope >= 0 - assert initial_width > 0 - assert width_parameter > 1 - assert initial_width % divisor == 0 - widths_cont = np.arange(depth) * width_slope + initial_width - ks = np.round( - np.log(widths_cont / initial_width) / np.log(width_parameter)) - widths = initial_width * np.power(width_parameter, ks) - widths = np.round(np.divide(widths, divisor)) * divisor - num_stages = len(np.unique(widths)) - widths, widths_cont = widths.astype(int).tolist(), widths_cont.tolist() - return widths, num_stages - - @staticmethod - def quantize_float(number, divisor): - """Converts a float to closest non-zero int divisible by divisor. - - Args: - number (int): Original number to be quantized. - divisor (int): Divisor used to quantize the number. - - Returns: - int: quantized number that is divisible by devisor. - """ - return int(round(number / divisor) * divisor) - - def adjust_width_group(self, widths, bottleneck_ratio, groups): - """Adjusts the compatibility of widths and groups. - - Args: - widths (list[int]): Width of each stage. - bottleneck_ratio (float): Bottleneck ratio. - groups (int): number of groups in each stage - - Returns: - tuple(list): The adjusted widths and groups of each stage. - """ - bottleneck_width = [ - int(w * b) for w, b in zip(widths, bottleneck_ratio) - ] - groups = [min(g, w_bot) for g, w_bot in zip(groups, bottleneck_width)] - bottleneck_width = [ - self.quantize_float(w_bot, g) - for w_bot, g in zip(bottleneck_width, groups) - ] - widths = [ - int(w_bot / b) - for w_bot, b in zip(bottleneck_width, bottleneck_ratio) - ] - return widths, groups - - def get_stages_from_blocks(self, widths): - """Gets widths/stage_blocks of network at each stage. - - Args: - widths (list[int]): Width in each stage. 
- - Returns: - tuple(list): width and depth of each stage - """ - width_diff = [ - width != width_prev - for width, width_prev in zip(widths + [0], [0] + widths) - ] - stage_widths = [ - width for width, diff in zip(widths, width_diff[:-1]) if diff - ] - stage_blocks = np.diff([ - depth for depth, diff in zip(range(len(width_diff)), width_diff) - if diff - ]).tolist() - return stage_widths, stage_blocks - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 9cac4254f37bc3755bff869a10eb3cb75db4d943..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/apcnet_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/README.md b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/README.md deleted file mode 100644 index ca51545f638aa7a7fb57c1edbc667377416d92e9..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/README.md +++ /dev/null @@ -1,66 +0,0 @@ -# Deep High-Resolution Representation Learning for Human Pose Estimation - -## Introduction - - - -```latext -@inproceedings{SunXLW19, - title={Deep High-Resolution Representation Learning for Human Pose Estimation}, - author={Ke Sun and Bin Xiao and Dong Liu and Jingdong Wang}, - booktitle={CVPR}, - year={2019} -} -``` - -## Results and models - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | ------------------ | --------- | ------: | -------- | -------------- | ----: | ------------: | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| FCN | HRNetV2p-W18-Small | 512x1024 | 40000 | 1.7 | 23.74 | 73.86 | 75.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18s_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x1024_40k_cityscapes/fcn_hr18s_512x1024_40k_cityscapes_20200601_014216-93db27d0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x1024_40k_cityscapes/fcn_hr18s_512x1024_40k_cityscapes_20200601_014216.log.json) | -| FCN | HRNetV2p-W18 | 512x1024 | 40000 | 2.9 | 12.97 | 77.19 | 
78.92 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x1024_40k_cityscapes/fcn_hr18_512x1024_40k_cityscapes_20200601_014216-f196fb4e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x1024_40k_cityscapes/fcn_hr18_512x1024_40k_cityscapes_20200601_014216.log.json) | -| FCN | HRNetV2p-W48 | 512x1024 | 40000 | 6.2 | 6.42 | 78.48 | 79.69 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x1024_40k_cityscapes/fcn_hr48_512x1024_40k_cityscapes_20200601_014240-a989b146.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x1024_40k_cityscapes/fcn_hr48_512x1024_40k_cityscapes_20200601_014240.log.json) | -| FCN | HRNetV2p-W18-Small | 512x1024 | 80000 | - | - | 75.31 | 77.48 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18s_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x1024_80k_cityscapes/fcn_hr18s_512x1024_80k_cityscapes_20200601_202700-1462b75d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x1024_80k_cityscapes/fcn_hr18s_512x1024_80k_cityscapes_20200601_202700.log.json) | -| FCN | HRNetV2p-W18 | 512x1024 | 80000 | - | - | 78.65 | 80.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x1024_80k_cityscapes/fcn_hr18_512x1024_80k_cityscapes_20200601_223255-4e7b345e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x1024_80k_cityscapes/fcn_hr18_512x1024_80k_cityscapes_20200601_223255.log.json) | -| FCN | HRNetV2p-W48 | 512x1024 | 80000 | - | - | 79.93 | 80.72 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x1024_80k_cityscapes/fcn_hr48_512x1024_80k_cityscapes_20200601_202606-58ea95d6.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x1024_80k_cityscapes/fcn_hr48_512x1024_80k_cityscapes_20200601_202606.log.json) | -| FCN | HRNetV2p-W18-Small | 512x1024 | 160000 | - | - | 76.31 | 78.31 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18s_512x1024_160k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x1024_160k_cityscapes/fcn_hr18s_512x1024_160k_cityscapes_20200602_190901-4a0797ea.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x1024_160k_cityscapes/fcn_hr18s_512x1024_160k_cityscapes_20200602_190901.log.json) | -| FCN | HRNetV2p-W18 | 512x1024 | 160000 | - | - | 78.80 | 80.74 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18_512x1024_160k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x1024_160k_cityscapes/fcn_hr18_512x1024_160k_cityscapes_20200602_190822-221e4a4f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x1024_160k_cityscapes/fcn_hr18_512x1024_160k_cityscapes_20200602_190822.log.json) | -| FCN | HRNetV2p-W48 | 512x1024 
| 160000 | - | - | 80.65 | 81.92 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_512x1024_160k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x1024_160k_cityscapes/fcn_hr48_512x1024_160k_cityscapes_20200602_190946-59b7973e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x1024_160k_cityscapes/fcn_hr48_512x1024_160k_cityscapes_20200602_190946.log.json) | - -### ADE20K - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | ------------------ | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| FCN | HRNetV2p-W18-Small | 512x512 | 80000 | 3.8 | 38.66 | 31.38 | 32.45 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18s_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x512_80k_ade20k/fcn_hr18s_512x512_80k_ade20k_20200614_144345-77fc814a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x512_80k_ade20k/fcn_hr18s_512x512_80k_ade20k_20200614_144345.log.json) | -| FCN | HRNetV2p-W18 | 512x512 | 80000 | 4.9 | 22.57 | 35.51 | 36.80 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x512_80k_ade20k/fcn_hr18_512x512_80k_ade20k_20200614_185145-66f20cb7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x512_80k_ade20k/fcn_hr18_512x512_80k_ade20k_20200614_185145.log.json) | -| FCN | HRNetV2p-W48 | 512x512 | 80000 | 8.2 | 21.23 | 41.90 | 43.27 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x512_80k_ade20k/fcn_hr48_512x512_80k_ade20k_20200614_193946-7ba5258d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x512_80k_ade20k/fcn_hr48_512x512_80k_ade20k_20200614_193946.log.json) | -| FCN | HRNetV2p-W18-Small | 512x512 | 160000 | - | - | 33.00 | 34.55 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x512_160k_ade20k/fcn_hr18s_512x512_160k_ade20k_20200614_214413-870f65ac.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x512_160k_ade20k/fcn_hr18s_512x512_160k_ade20k_20200614_214413.log.json) | -| FCN | HRNetV2p-W18 | 512x512 | 160000 | - | - | 36.79 | 38.58 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x512_160k_ade20k/fcn_hr18_512x512_160k_ade20k_20200614_214426-ca961836.pth) | 
[log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x512_160k_ade20k/fcn_hr18_512x512_160k_ade20k_20200614_214426.log.json) | -| FCN | HRNetV2p-W48 | 512x512 | 160000 | - | - | 42.02 | 43.86 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x512_160k_ade20k/fcn_hr48_512x512_160k_ade20k_20200614_214407-a52fc02c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x512_160k_ade20k/fcn_hr48_512x512_160k_ade20k_20200614_214407.log.json) | - -### Pascal VOC 2012 + Aug - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | ------------------ | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| FCN | HRNetV2p-W18-Small | 512x512 | 20000 | 1.8 | 43.36 | 65.20 | 68.55 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18s_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x512_20k_voc12aug/fcn_hr18s_512x512_20k_voc12aug_20200617_224503-56e36088.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x512_20k_voc12aug/fcn_hr18s_512x512_20k_voc12aug_20200617_224503.log.json) | -| FCN | HRNetV2p-W18 | 512x512 | 20000 | 2.9 | 23.48 | 72.30 | 74.71 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x512_20k_voc12aug/fcn_hr18_512x512_20k_voc12aug_20200617_224503-488d45f7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x512_20k_voc12aug/fcn_hr18_512x512_20k_voc12aug_20200617_224503.log.json) | -| FCN | HRNetV2p-W48 | 512x512 | 20000 | 6.2 | 22.05 | 75.87 | 78.58 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x512_20k_voc12aug/fcn_hr48_512x512_20k_voc12aug_20200617_224419-89de05cd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x512_20k_voc12aug/fcn_hr48_512x512_20k_voc12aug_20200617_224419.log.json) | -| FCN | HRNetV2p-W18-Small | 512x512 | 40000 | - | - | 66.61 | 70.00 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18s_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x512_40k_voc12aug/fcn_hr18s_512x512_40k_voc12aug_20200614_000648-4f8d6e7f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18s_512x512_40k_voc12aug/fcn_hr18s_512x512_40k_voc12aug_20200614_000648.log.json) | -| FCN | HRNetV2p-W18 | 512x512 | 40000 | - | - | 72.90 | 75.59 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr18_512x512_40k_voc12aug.py) | 
[model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x512_40k_voc12aug/fcn_hr18_512x512_40k_voc12aug_20200613_224401-1b4b76cd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr18_512x512_40k_voc12aug/fcn_hr18_512x512_40k_voc12aug_20200613_224401.log.json) | -| FCN | HRNetV2p-W48 | 512x512 | 40000 | - | - | 76.24 | 78.49 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x512_40k_voc12aug/fcn_hr48_512x512_40k_voc12aug_20200613_222111-1b0f18bc.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_512x512_40k_voc12aug/fcn_hr48_512x512_40k_voc12aug_20200613_222111.log.json) | - -### Pascal Context - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | ------------ | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| FCN | HRNetV2p-W48 | 480x480 | 40000 | 6.1 | 8.86 | 45.14 | 47.42 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_480x480_40k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_480x480_40k_pascal_context/fcn_hr48_480x480_40k_pascal_context_20200911_164852-667d00b0.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_480x480_40k_pascal_context/fcn_hr48_480x480_40k_pascal_context-20200911_164852.log.json) | -| FCN | HRNetV2p-W48 | 480x480 | 80000 | - | - | 45.84 | 47.84 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_480x480_80k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_480x480_80k_pascal_context/fcn_hr48_480x480_80k_pascal_context_20200911_155322-847a6711.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_480x480_80k_pascal_context/fcn_hr48_480x480_80k_pascal_context-20200911_155322.log.json) | - -### Pascal Context 59 - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | ------------ | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| FCN | HRNetV2p-W48 | 480x480 | 40000 | - | - | 50.33 | 52.83 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_480x480_40k_pascal_context_59.py) | 
[model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_480x480_40k_pascal_context_59/fcn_hr48_480x480_40k_pascal_context_59_20210410_122738-b808b8b2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_480x480_40k_pascal_context_59/fcn_hr48_480x480_40k_pascal_context_59-20210410_122738.log.json) | -| FCN | HRNetV2p-W48 | 480x480 | 80000 | - | - | 51.12 | 53.56 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet/fcn_hr48_480x480_80k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_480x480_80k_pascal_context_59/fcn_hr48_480x480_80k_pascal_context_59_20210411_003240-3ae7081e.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/hrnet/fcn_hr48_480x480_80k_pascal_context_59/fcn_hr48_480x480_80k_pascal_context_59-20210411_003240.log.json) | diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r18-d8_769x769_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r18-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 5893e66a41cad73e8fb24aa58dc78ef002aecca5..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r18-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './pspnet_r50-d8_769x769_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnet18_v1c', - backbone=dict(depth=18), - decode_head=dict( - in_channels=512, - channels=128, - ), - auxiliary_head=dict(in_channels=256, channels=64)) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_conv.py b/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! - pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) 
- assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) - - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = 
self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/HGZeon/test_model_2/README.md b/spaces/HGZeon/test_model_2/README.md deleted file mode 100644 index 8659a8d1f55fe86be8814891b7099c502f06abd3..0000000000000000000000000000000000000000 --- a/spaces/HGZeon/test_model_2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Test Model 2 -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.0.13 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Hallucinate/demo/ldm/modules/diffusionmodules/model.py b/spaces/Hallucinate/demo/ldm/modules/diffusionmodules/model.py deleted file mode 100644 index 533e589a2024f1d7c52093d8c472c3b1b6617e26..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/ldm/modules/diffusionmodules/model.py +++ /dev/null @@ -1,835 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np -from einops import rearrange - -from ldm.util import instantiate_from_config -from ldm.modules.attention import LinearAttention - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
- """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels, num_groups=32): - return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class LinAttnBlock(LinearAttention): - """to match AttnBlock usage""" - def __init__(self, in_channels): - super().__init__(dim=in_channels, heads=1, dim_head=in_channels) - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = 
torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - - -def make_attn(in_channels, attn_type="vanilla"): - assert attn_type in ["vanilla", "linear", "none"], f'attn_type {attn_type} unknown' - print(f"making attention of type '{attn_type}' with {in_channels} in_channels") - if attn_type == "vanilla": - return AttnBlock(in_channels) - elif attn_type == "none": - return nn.Identity(in_channels) - else: - return LinAttnBlock(in_channels) - - -class Model(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - 
if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x, t=None, context=None): - #assert x.shape[2] == x.shape[3] == self.resolution - if context is not None: - # assume aligned context, cat along channel axis - x = torch.cat((x, context), dim=1) - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - def get_last_layer(self): - return self.conv_out.weight - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, use_linear_attn=False, attn_type="vanilla", - **ignore_kwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.in_ch_mult = in_ch_mult - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - 
self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False, - attn_type="vanilla", **ignorekwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - self.tanh_out = tanh_out - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = 
self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - if self.tanh_out: - h = torch.tanh(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class LatentRescaler(nn.Module): - def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2): - super().__init__() - # residual block, interpolate, residual block - self.factor = factor - self.conv_in = nn.Conv2d(in_channels, - mid_channels, - kernel_size=3, - stride=1, - padding=1) - self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - self.attn = AttnBlock(mid_channels) - self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, 
- out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - - self.conv_out = nn.Conv2d(mid_channels, - out_channels, - kernel_size=1, - ) - - def forward(self, x): - x = self.conv_in(x) - for block in self.res_block1: - x = block(x, None) - x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor)))) - x = self.attn(x) - for block in self.res_block2: - x = block(x, None) - x = self.conv_out(x) - return x - - -class MergedRescaleEncoder(nn.Module): - def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, - ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1): - super().__init__() - intermediate_chn = ch * ch_mult[-1] - self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult, - z_channels=intermediate_chn, double_z=False, resolution=resolution, - attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv, - out_ch=None) - self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn, - mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth) - - def forward(self, x): - x = self.encoder(x) - x = self.rescaler(x) - return x - - -class MergedRescaleDecoder(nn.Module): - def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8), - dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1): - super().__init__() - tmp_chn = z_channels*ch_mult[-1] - self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout, - resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks, - ch_mult=ch_mult, resolution=resolution, ch=ch) - self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn, - out_channels=tmp_chn, depth=rescale_module_depth) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Upsampler(nn.Module): - def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2): - super().__init__() - assert out_size >= in_size - num_blocks = int(np.log2(out_size//in_size))+1 - factor_up = 1.+ (out_size % in_size) - print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}") - self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels, - out_channels=in_channels) - self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2, - attn_resolutions=[], in_channels=None, ch=in_channels, - ch_mult=[ch_mult for _ in range(num_blocks)]) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Resize(nn.Module): - def __init__(self, in_channels=None, learned=False, mode="bilinear"): - super().__init__() - self.with_conv = learned - self.mode = mode - if self.with_conv: - print(f"Note: {self.__class__.__name} uses learned downsampling and will ignore the fixed {mode} mode") - raise NotImplementedError() - assert in_channels is not None - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=4, - stride=2, - padding=1) - - def forward(self, x, scale_factor=1.0): - if scale_factor==1.0: - return x - else: - x = torch.nn.functional.interpolate(x, 
mode=self.mode, align_corners=False, scale_factor=scale_factor) - return x - -class FirstStagePostProcessor(nn.Module): - - def __init__(self, ch_mult:list, in_channels, - pretrained_model:nn.Module=None, - reshape=False, - n_channels=None, - dropout=0., - pretrained_config=None): - super().__init__() - if pretrained_config is None: - assert pretrained_model is not None, 'Either "pretrained_model" or "pretrained_config" must not be None' - self.pretrained_model = pretrained_model - else: - assert pretrained_config is not None, 'Either "pretrained_model" or "pretrained_config" must not be None' - self.instantiate_pretrained(pretrained_config) - - self.do_reshape = reshape - - if n_channels is None: - n_channels = self.pretrained_model.encoder.ch - - self.proj_norm = Normalize(in_channels,num_groups=in_channels//2) - self.proj = nn.Conv2d(in_channels,n_channels,kernel_size=3, - stride=1,padding=1) - - blocks = [] - downs = [] - ch_in = n_channels - for m in ch_mult: - blocks.append(ResnetBlock(in_channels=ch_in,out_channels=m*n_channels,dropout=dropout)) - ch_in = m * n_channels - downs.append(Downsample(ch_in, with_conv=False)) - - self.model = nn.ModuleList(blocks) - self.downsampler = nn.ModuleList(downs) - - - def instantiate_pretrained(self, config): - model = instantiate_from_config(config) - self.pretrained_model = model.eval() - # self.pretrained_model.train = False - for param in self.pretrained_model.parameters(): - param.requires_grad = False - - - @torch.no_grad() - def encode_with_pretrained(self,x): - c = self.pretrained_model.encode(x) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - return c - - def forward(self,x): - z_fs = self.encode_with_pretrained(x) - z = self.proj_norm(z_fs) - z = self.proj(z) - z = nonlinearity(z) - - for submodel, downmodel in zip(self.model,self.downsampler): - z = submodel(z,temb=None) - z = downmodel(z) - - if self.do_reshape: - z = rearrange(z,'b c h w -> b (h w) c') - return z - diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/t5_dataloader/t5_datasets.py b/spaces/HaloMaster/chinesesummary/fengshen/data/t5_dataloader/t5_datasets.py deleted file mode 100644 index 4fd55b8d0be1dd61881b8c782a7eea7a6123efdd..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/data/t5_dataloader/t5_datasets.py +++ /dev/null @@ -1,562 +0,0 @@ -# coding=utf8 -import json -from torch.utils.data import Dataset, DataLoader -from tqdm import tqdm -from transformers import BertTokenizer, MT5Config, MT5Tokenizer, BatchEncoding -import torch -import pytorch_lightning as pl -import numpy as np -from itertools import chain -import sys -sys.path.append('../../') - - -def compute_input_and_target_lengths(inputs_length, noise_density, mean_noise_span_length): - """This function is copy of `random_spans_helper `__ . - Training parameters to avoid padding with random_spans_noise_mask. - When training a model with random_spans_noise_mask, we would like to set the other - training hyperparmeters in a way that avoids padding. - This function helps us compute these hyperparameters. - We assume that each noise span in the input is replaced by extra_tokens_per_span_inputs sentinel tokens, - and each non-noise span in the targets is replaced by extra_tokens_per_span_targets sentinel tokens. - This function tells us the required number of tokens in the raw example (for split_tokens()) - as well as the length of the encoded targets. 
Note that this function assumes - the inputs and targets will have EOS appended and includes that in the reported length. - Args: - inputs_length: an integer - desired length of the tokenized inputs sequence - noise_density: a float - mean_noise_span_length: a float - Returns: - tokens_length: length of original text in tokens - targets_length: an integer - length in tokens of encoded targets sequence - """ - - def _tokens_length_to_inputs_length_targets_length(tokens_length): - num_noise_tokens = int(round(tokens_length * noise_density)) - num_nonnoise_tokens = tokens_length - num_noise_tokens - num_noise_spans = int(round(num_noise_tokens / mean_noise_span_length)) - # inputs contain all nonnoise tokens, sentinels for all noise spans - # and one EOS token. - _input_length = num_nonnoise_tokens + num_noise_spans + 1 - _output_length = num_noise_tokens + num_noise_spans + 1 - return _input_length, _output_length - - tokens_length = inputs_length - - while _tokens_length_to_inputs_length_targets_length(tokens_length + 1)[0] <= inputs_length: - tokens_length += 1 - - inputs_length, targets_length = _tokens_length_to_inputs_length_targets_length( - tokens_length) - - # minor hack to get the targets length to be equal to inputs length - # which is more likely to have been set to a nice round number. - if noise_density == 0.5 and targets_length > inputs_length: - tokens_length -= 1 - targets_length -= 1 - return tokens_length, targets_length - - -class UnsuperviseT5Dataset(Dataset): - ''' - Dataset Used for T5 unsuprvise pretrain. - load_data_type = 0: load raw data from data path and save tokenized data, call function load_data - load_data_type = 1: load tokenized data from path, call function load_tokenized_data - load_data_type = 2: load tokenized data from memery data, call function load_tokenized_memory_data - ''' - - def __init__(self, data_path, args, load_data_type=0, data=None): - super().__init__() - - if args.tokenizer_type == 't5_tokenizer': - if args.new_vocab_path is not None: - self.tokenizer = MT5Tokenizer.from_pretrained(args.new_vocab_path) - else: - self.tokenizer = MT5Tokenizer.from_pretrained(args.pretrained_model_path) - else: - self.tokenizer = BertTokenizer.from_pretrained(args.pretrained_model_path) - self.noise_density = 0.15 - self.mean_noise_span_length = 3 - self.text_column_name = args.text_column_name - self.dataset_num_workers = args.dataset_num_workers - self.max_seq_length = args.max_seq_length - self.remove_columns = args.remove_columns - # whether load tokenieze data - self.load_data_type = load_data_type - - if self.load_data_type == 0: - # T5-like span masked language modeling will fuse consecutively masked tokens to a single sentinel token. - # To ensure that the input length is `max_seq_length`, we need to increase the maximum length - # according to `mlm_probability` and `mean_noise_span_length`. - # We can also define the label length accordingly. 
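# e.g. max_seq_length=512 with noise_density=0.15 and mean_noise_span_length=3 gives expanded_inputs_length=568 and targets_length=114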
- self.expanded_inputs_length, self.targets_length = compute_input_and_target_lengths( - inputs_length=self.max_seq_length, - noise_density=self.noise_density, - mean_noise_span_length=self.mean_noise_span_length, - ) - print('self.expanded_inputs_length, self.targets_length:{},{}'.format( - self.expanded_inputs_length, self.targets_length)) - self.data = self.load_data(data_path) - elif self.load_data_type == 1: - self.data = self.load_tokenized_data(data_path) - else: - assert data is not None - self.data = self.load_tokenized_memory_data(data) - - def __len__(self): - return len(self.data) - - def __getitem__(self, index): - return self.data[index] - - def load_data(self, data_path): - # TODO: large data process - from data.fs_datasets import load_dataset - samples = load_dataset( - # samples = datasets.load_from_disk(data_path)['train'] - data_path, num_proc=self.dataset_num_workers)['train'] - # print(samples) - tokenized_datasets = samples.map( - self.tokenize_function, - batched=True, - num_proc=self.dataset_num_workers, - # load_from_cache_file=not data_args.overwrite_cache, - ).map( - batched=True, - num_proc=self.dataset_num_workers, - remove_columns=self.remove_columns) - # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a - # remainder for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value - # might be slower to preprocess. - # - # To speed up this part, we use multiprocessing. See the documentation of the map method for more information: - # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map - tokenized_datasets = tokenized_datasets.map( - self.group_texts, - batched=True, - num_proc=self.dataset_num_workers, - # load_from_cache_file=not data_args.overwrite_cache, - ) - return tokenized_datasets - ''' - The function load tokenized data saved from load_data function. - ''' - - def load_tokenized_data(self, data_path): - from data.fs_datasets import load_dataset - samples = load_dataset(data_path)['train'] - return samples - - def load_tokenized_memory_data(self, data): - return data - - # Otherwise, we tokenize every text, then concatenate them together before splitting them in smaller parts. - # Since we make sure that all sequences are of the same length, no attention_mask is needed. - def tokenize_function(self, examples): - # 这里add_special_tokens=False,避免句子中间出现eos - return self.tokenizer(examples[self.text_column_name], - add_special_tokens=False, - return_attention_mask=False) - - # Main data processing function that will concatenate all texts from our dataset - # and generate chunks of expanded_inputs_length. - def group_texts(self, examples): - # Concatenate all texts. - concatenated_examples = { - k: list(chain(*examples[k])) for k in examples.keys()} - total_length = len(concatenated_examples[list(examples.keys())[0]]) - # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can - # customize this part to your needs. - if total_length >= self.expanded_inputs_length: - total_length = ( - total_length // self.expanded_inputs_length) * self.expanded_inputs_length - # Split by chunks of max_len. 
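# each resulting chunk is exactly expanded_inputs_length tokens; collate_fn later applies span masking and shrinks it to max_seq_length inputs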
- result = { - k: [t[i: i + self.expanded_inputs_length] - for i in range(0, total_length, self.expanded_inputs_length)] - for k, t in concatenated_examples.items() - } - return result - - -class UnsuperviseT5DataModel(pl.LightningDataModule): - @staticmethod - def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('UnsuperviseT5DataModel') - parser.add_argument('--dataset_num_workers', default=8, type=int) - parser.add_argument('--dataloader_num_workers', default=4, type=int) - parser.add_argument( - '--train_data_path', default='wudao_180g_mt5_tokenized', type=str) - parser.add_argument('--train_batchsize', default=2, type=int) - parser.add_argument('--valid_batchsize', default=2, type=int) - parser.add_argument('--train_split_size', default=None, type=float) - parser.add_argument('--tokenizer_type', default='t5_tokenizer', choices=['t5_tokenizer', 'bert_tokenizer']) - parser.add_argument('--text_column_name', default='text') - parser.add_argument('--remove_columns', nargs='+', default=[]) - return parent_args - - def __init__(self, args): - super().__init__() - self.save_hyperparameters(args) - if args.train_split_size is not None: - from data.fs_datasets import load_dataset - data_splits = load_dataset(args.train_data_path, num_proc=args.dataset_num_workers) - train_split = data_splits['train'] - test_split = data_splits['test'] - print('train:', train_split, '\ntest_data:', test_split) - self.train_dataset = UnsuperviseT5Dataset('', args, load_data_type=2, data=train_split) - self.test_dataset = UnsuperviseT5Dataset('', args, load_data_type=2, data=test_split) - else: - self.train_data = UnsuperviseT5Dataset(args.train_data_path, args, load_data_type=1) - - self.config = MT5Config.from_pretrained(args.pretrained_model_path) - self.noise_density = 0.15 - self.mean_noise_span_length = 3 - self.pad_token_id = self.config.pad_token_id - self.decoder_start_token_id = self.config.decoder_start_token_id - self.eos_token_id = self.config.eos_token_id - self.vocab_size = self.config.vocab_size - self.max_seq_length = args.max_seq_length - # 因为加载旧的spm里面已经包括了exrta_ids,但是T5Tokenizer会在spm的基础上再增加100个extra_ids,所以需要指定extra_ids=0 - if args.tokenizer_type == 't5_tokenizer' and args.new_vocab_path is not None: - self.tokenizer = MT5Tokenizer.from_pretrained(args.new_vocab_path, extra_ids=0) - # 如果是刚开始加载mt5,需要更新vocab_size为提取中英词之后的new_vocab_size - self.vocab_size = len(self.tokenizer) - - # T5-like span masked language modeling will fuse consecutively masked tokens to a single sentinel token. - # To ensure that the input length is `max_seq_length`, we need to increase the maximum length - # according to `mlm_probability` and `mean_noise_span_length`. We can also define the label length accordingly. 
- self.expanded_inputs_length, self.targets_length = compute_input_and_target_lengths( - inputs_length=self.max_seq_length, - noise_density=self.noise_density, - mean_noise_span_length=self.mean_noise_span_length, - ) - - def train_dataloader(self): - from fengshen.data.universal_datamodule.universal_sampler import PretrainingSampler - from fengshen.data.universal_datamodule.universal_datamodule import get_consume_samples - # 采用自定义的sampler,确保继续训练能正确取到数据 - consumed_samples = get_consume_samples(self) - batch_sampler = PretrainingSampler( - total_samples=len(self.train_dataset), - consumed_samples=consumed_samples, - micro_batch_size=self.hparams.train_batchsize, - data_parallel_rank=self.trainer.global_rank, - data_parallel_size=self.trainer.world_size, - ) - return DataLoader( - self.train_dataset, - batch_sampler=batch_sampler, - pin_memory=True, - num_workers=self.hparams.dataloader_num_workers, - collate_fn=self.collate_fn, - ) - - def val_dataloader(self): - sampler = torch.utils.data.distributed.DistributedSampler( - self.test_dataset, shuffle=False) - return DataLoader( - self.test_dataset, - sampler=sampler, - shuffle=False, - batch_size=self.hparams.valid_batchsize, - pin_memory=True, - num_workers=self.hparams.dataloader_num_workers, - collate_fn=self.collate_fn, - ) - - def predict_dataloader(self): - sampler = torch.utils.data.distributed.DistributedSampler( - self.test_dataset, shuffle=False) - return DataLoader( - self.test_data, - sampler=sampler, - shuffle=False, - batch_size=self.hparams.valid_batchsize, - pin_memory=True, - num_workers=self.hparams.dataloader_num_workers, - collate_fn=self.collate_fn, - ) - - def collate_fn(self, examples): - # convert list to dict and tensorize input - batch = BatchEncoding( - {k: np.array([examples[i][k] for i in range(len(examples))]) - for k, v in examples[0].items()} - ) - - input_ids = np.array(batch['input_ids']) - batch_size, expanded_input_length = input_ids.shape - mask_indices = np.asarray([self.random_spans_noise_mask( - expanded_input_length) for i in range(batch_size)]) - labels_mask = ~mask_indices - - input_ids_sentinel = self.create_sentinel_ids( - mask_indices.astype(np.int8)) - labels_sentinel = self.create_sentinel_ids(labels_mask.astype(np.int8)) - - batch["input_ids"] = self.filter_input_ids( - input_ids, input_ids_sentinel) - batch["labels"] = self.filter_input_ids(input_ids, labels_sentinel) - - if batch["input_ids"].shape[-1] != self.max_seq_length: - raise ValueError( - f"`input_ids` are incorrectly preprocessed. `input_ids` length is \ - {batch['input_ids'].shape[-1]}, but should be {self.targets_length}." - ) - - if batch["labels"].shape[-1] != self.targets_length: - raise ValueError( - f"`labels` are incorrectly preprocessed. `labels` length is \ - {batch['labels'].shape[-1]}, but should be {self.targets_length}." - ) - - batch["decoder_input_ids"] = self.shift_tokens_right( - batch["labels"], self.pad_token_id, self.decoder_start_token_id - ) - - for k, v in batch.items(): - batch[k] = torch.tensor(v) - # print(k, batch[k], self.tokenizer.batch_decode(batch[k]), '\n', flush=True) - return batch - - def create_sentinel_ids(self, mask_indices): - """ - Sentinel ids creation given the indices that should be masked. - The start indices of each mask are replaced by the sentinel ids in increasing - order. Consecutive mask indices to be deleted are replaced with `-1`. 
- """ - start_indices = mask_indices - \ - np.roll(mask_indices, 1, axis=-1) * mask_indices - start_indices[:, 0] = mask_indices[:, 0] - - sentinel_ids = np.where(start_indices != 0, np.cumsum( - start_indices, axis=-1), start_indices) - sentinel_ids = np.where( - sentinel_ids != 0, (self.vocab_size - sentinel_ids), 0) - sentinel_ids -= mask_indices - start_indices - - return sentinel_ids - - def filter_input_ids(self, input_ids, sentinel_ids): - """ - Puts sentinel mask on `input_ids` and fuse consecutive mask tokens into a single mask token by deleting. - This will reduce the sequence length from `expanded_inputs_length` to `input_length`. - """ - batch_size = input_ids.shape[0] - - input_ids_full = np.where(sentinel_ids != 0, sentinel_ids, input_ids) - # input_ids tokens and sentinel tokens are >= 0, tokens < 0 are - # masked tokens coming after sentinel tokens and should be removed - input_ids = input_ids_full[input_ids_full >= - 0].reshape((batch_size, -1)) - input_ids = np.concatenate( - [input_ids, np.full((batch_size, 1), self.eos_token_id, dtype=np.int32)], axis=-1 - ) - return input_ids - - # Copied from transformers.models.bart.modeling_flax_bart.shift_tokens_right - def shift_tokens_right(self, input_ids: np.array, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray: - """ - Shift input ids one token to the right. - """ - shifted_input_ids = np.zeros_like(input_ids) - shifted_input_ids[:, 1:] = input_ids[:, :-1] - shifted_input_ids[:, 0] = decoder_start_token_id - - shifted_input_ids = np.where( - shifted_input_ids == -100, pad_token_id, shifted_input_ids) - return shifted_input_ids - - def random_spans_noise_mask(self, length): - """This function is copy of `random_spans_helper `__ . - Noise mask consisting of random spans of noise tokens. - The number of noise tokens and the number of noise spans and non-noise spans - are determined deterministically as follows: - num_noise_tokens = round(length * noise_density) - num_nonnoise_spans = num_noise_spans = round(num_noise_tokens / mean_noise_span_length) - Spans alternate between non-noise and noise, beginning with non-noise. - Subject to the above restrictions, all masks are equally likely. - Args: - length: an int32 scalar (length of the incoming token sequence) - noise_density: a float - approximate density of output mask - mean_noise_span_length: a number - Returns: - a boolean tensor with shape [length] - """ - - orig_length = length - - num_noise_tokens = int(np.round(length * self.noise_density)) - # avoid degeneracy by ensuring positive numbers of noise and nonnoise tokens. - num_noise_tokens = min(max(num_noise_tokens, 1), length - 1) - num_noise_spans = int( - np.round(num_noise_tokens / self.mean_noise_span_length)) - - # avoid degeneracy by ensuring positive number of noise spans - num_noise_spans = max(num_noise_spans, 1) - num_nonnoise_tokens = length - num_noise_tokens - - # pick the lengths of the noise spans and the non-noise spans - def _random_segmentation(num_items, num_segments): - """Partition a sequence of items randomly into non-empty segments. 
- Args: - num_items: an integer scalar > 0 - num_segments: an integer scalar in [1, num_items] - Returns: - a Tensor with shape [num_segments] containing positive integers that add - up to num_items - """ - mask_indices = np.arange(num_items - 1) < (num_segments - 1) - np.random.shuffle(mask_indices) - first_in_segment = np.pad(mask_indices, [[1, 0]]) - segment_id = np.cumsum(first_in_segment) - # count length of sub segments assuming that list is sorted - _, segment_length = np.unique(segment_id, return_counts=True) - return segment_length - - noise_span_lengths = _random_segmentation( - num_noise_tokens, num_noise_spans) - nonnoise_span_lengths = _random_segmentation( - num_nonnoise_tokens, num_noise_spans) - - interleaved_span_lengths = np.reshape( - np.stack([nonnoise_span_lengths, noise_span_lengths], - axis=1), [num_noise_spans * 2] - ) - span_starts = np.cumsum(interleaved_span_lengths)[:-1] - span_start_indicator = np.zeros((length,), dtype=np.int8) - span_start_indicator[span_starts] = True - span_num = np.cumsum(span_start_indicator) - is_noise = np.equal(span_num % 2, 1) - - return is_noise[:orig_length] - - -class TaskT5Dataset(Dataset): - def __init__(self, data_path, args): - super().__init__() - self.max_length = args.max_seq_length - if args.tokenizer_type == 't5_tokenizer': - self.tokenizer = MT5Tokenizer.from_pretrained(args.pretrained_model_path) - else: - self.tokenizer = BertTokenizer.from_pretrained(args.pretrained_model_path) - self.data = self.load_data(data_path) - - def __len__(self): - return len(self.data) - - def __getitem__(self, index): - return self.encode(self.data[index]) - - def load_data(self, data_path): - samples = [] - with open(data_path, 'r', encoding='utf8') as f: - lines = f.readlines() - for line in tqdm(lines): - samples.append(json.loads(line)) - return samples - - def encode(self, item): - if item["textb"] != "": - text = item['question'] + ','.join(item['choice'])+'。' + f"""{item["texta"]}""" + f"""{item["textb"]}""" - else: - text = f"""{item["question"]}""" + ",".join(item["choice"]) + "。" + f"""{item["texta"]}""" - label = item['answer'] - encode_dict = self.tokenizer.encode_plus(text, max_length=self.max_length, padding='max_length', - truncation=True, return_tensors='pt') - decode_dict = self.tokenizer.encode_plus(label, max_length=16, padding='max_length', - truncation=True) - - answer_token = [] - max_label_len = 0 - choice_encode = [] # 用来确定模型生成的最大长度 - for a in item['choice']: - answer_encode = self.tokenizer.encode(a) - choice_encode.append(answer_encode) - if len(answer_encode) > max_label_len: - max_label_len = len(answer_encode) - for an in answer_encode: - if an not in answer_token: - answer_token.append(an) - - # bad_words_ids = [[i] for i in range(self.tokenizer.vocab_size) if i not in answer_token] #不生成这些token - - # while len(bad_words_ids) 0.5).float() - x = msk * x - - _, _, bm = self.DocTr(x) - bm = (2 * (bm / 255.) 
- 1) * 0.99 - - return bm - -def reload_seg_model(model, path=""): - if not bool(path): - return model - else: - model_dict = model.state_dict() - pretrained_dict = torch.load(path, map_location='cpu') - pretrained_dict = {k[6:]: v for k, v in pretrained_dict.items() if k[6:] in model_dict} - model_dict.update(pretrained_dict) - model.load_state_dict(model_dict) - return model - -def reload_rec_model(model, path=""): - if not bool(path): - return model - else: - model_dict = model.state_dict() - pretrained_dict = torch.load(path, map_location='cpu') - pretrained_dict = {k[7:]: v for k, v in pretrained_dict.items() if k[7:] in model_dict} - model_dict.update(pretrained_dict) - model.load_state_dict(model_dict) - return model - -def rec(input_image): - seg_model_path = './model_pretrained/preprocess.pth' - rec_model_path = './model_pretrained/DocGeoNet.pth' - - net = Net() - reload_rec_model(net.DocTr, rec_model_path) - reload_seg_model(net.msk, seg_model_path) - net.eval() - - im_ori = np.array(input_image)[:, :, :3] / 255. # read image 0-255 to 0-1 - h, w, _ = im_ori.shape - im = cv2.resize(im_ori, (256, 256)) - im = im.transpose(2, 0, 1) - im = torch.from_numpy(im).float().unsqueeze(0) - - with torch.no_grad(): - bm = net(im) - bm = bm.cpu() - - bm0 = cv2.resize(bm[0, 0].numpy(), (w, h)) # x flow - bm1 = cv2.resize(bm[0, 1].numpy(), (w, h)) # y flow - bm0 = cv2.blur(bm0, (3, 3)) - bm1 = cv2.blur(bm1, (3, 3)) - lbl = torch.from_numpy(np.stack([bm0, bm1], axis=2)).unsqueeze(0) # h * w * 2 - out = F.grid_sample(torch.from_numpy(im_ori).permute(2, 0, 1).unsqueeze(0).float(), lbl, align_corners=True) - img_rec = ((out[0] * 255).permute(1, 2, 0).numpy())[:,:,::-1].astype(np.uint8) - - # Convert from BGR to RGB - img_rec = cv2.cvtColor(img_rec, cv2.COLOR_BGR2RGB) - return Image.fromarray(img_rec) - - -demo_img_files = glob.glob('./distorted/*.[jJ][pP][gG]') + glob.glob('./distorted/*.[pP][nN][gG]') - -# Gradio Interface -input_image = gr.inputs.Image() -output_image = gr.outputs.Image(type='pil') - - - -iface = gr.Interface(fn=rec, inputs=input_image, outputs=output_image, title="DocGeoNet",examples=demo_img_files) - -#iface.launch(server_port=8821, server_name="0.0.0.0") -iface.launch() \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/mbart/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/mbart/README.md deleted file mode 100644 index a45e37243c2c5d4027f79cf71498ca58bbac7d98..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/mbart/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# MBART: Multilingual Denoising Pre-training for Neural Machine Translation -[https://arxiv.org/abs/2001.08210] - -## Introduction - -MBART is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text. 
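As a rough, self-contained sketch of what "denoising full texts" means here (an illustrative toy, not the fairseq noising code): the paper corrupts each document by masking word spans (roughly 35% of the words, with Poisson-distributed span lengths) and permuting sentence order, and the seq2seq model is trained to reconstruct the original text. The snippet below simplifies this to a single masked span per sentence; the span fraction and the `<mask>` token are assumptions made for the example.

```python
import random

MASK = "<mask>"

def corrupt(sentences, span_frac=0.35):
    """Toy BART-style noising: permute sentence order, then hide one word span per sentence."""
    permuted = random.sample(sentences, len(sentences))       # sentence permutation
    noised = []
    for sent in permuted:
        words = sent.split()
        n_mask = max(1, round(len(words) * span_frac))        # mask ~35% of the words
        start = random.randrange(len(words) - n_mask + 1)
        noised.append(" ".join(words[:start] + [MASK] + words[start + n_mask:]))
    return noised

doc = ["the cat sat on the mat .", "it rained all day in glasgow ."]
print(corrupt(doc))  # the model is trained to map this corrupted text back to `doc`
```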
- -## Pre-trained models - -Model | Description | # params | Download ----|---|---|--- -`mbart.CC25` | mBART model with 12 encoder and decoder layers trained on 25 languages' monolingual corpus | 610M | [mbart.CC25.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.v2.tar.gz) -`mbart.ft.ro_en` | finetune mBART cc25 model on ro-en language pairs | 610M | [mbart.cc25.ft.enro.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.ft.enro.tar.gz) - -## Results - -**[WMT16 EN-RO](https://www.statmt.org/wmt16/translation-task.html)** - -_(test set, no additional data used)_ - -Model | en-ro | ro-en ----|---|--- -`Random` | 34.3 | 34.0 -`mbart.cc25` | 37.7 | 37.8 -`mbart.enro.bilingual` | 38.5 | 38.5 - -## BPE data -# download model -wget https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.v2.tar.gz -tar -xzvf mbart.CC25.tar.gz -# bpe data -install SPM [here](https://github.com/google/sentencepiece) -```bash -SPM=/path/to/sentencepiece/build/src/spm_encode -MODEL=sentence.bpe.model -${SPM} --model=${MODEL} < ${DATA}/${TRAIN}.${SRC} > ${DATA}/${TRAIN}.spm.${SRC} & -${SPM} --model=${MODEL} < ${DATA}/${TRAIN}.${TGT} > ${DATA}/${TRAIN}.spm.${TGT} & -${SPM} --model=${MODEL} < ${DATA}/${VALID}.${SRC} > ${DATA}/${VALID}.spm.${SRC} & -${SPM} --model=${MODEL} < ${DATA}/${VALID}.${TGT} > ${DATA}/${VALID}.spm.${TGT} & -${SPM} --model=${MODEL} < ${DATA}/${TEST}.${SRC} > ${DATA}/${TEST}.spm.${SRC} & -${SPM} --model=${MODEL} < ${DATA}/${TEST}.${TGT} > ${DATA}/${TEST}.spm.${TGT} & -``` - -## Preprocess data - -```bash -DICT=dict.txt -fairseq-preprocess \ - --source-lang ${SRC} \ - --target-lang ${TGT} \ - --trainpref ${DATA}/${TRAIN}.spm \ - --validpref ${DATA}/${VALID}.spm \ - --testpref ${DATA}/${TEST}.spm \ - --destdir ${DEST}/${NAME} \ - --thresholdtgt 0 \ - --thresholdsrc 0 \ - --srcdict ${DICT} \ - --tgtdict ${DICT} \ - --workers 70 -``` - -## Finetune on EN-RO -Finetune on mbart CC25 - -```bash -PRETRAIN=mbart.cc25 # fix if you moved the downloaded checkpoint -langs=ar_AR,cs_CZ,de_DE,en_XX,es_XX,et_EE,fi_FI,fr_XX,gu_IN,hi_IN,it_IT,ja_XX,kk_KZ,ko_KR,lt_LT,lv_LV,my_MM,ne_NP,nl_XX,ro_RO,ru_RU,si_LK,tr_TR,vi_VN,zh_CN - -fairseq-train path_2_data \ - --encoder-normalize-before --decoder-normalize-before \ - --arch mbart_large --layernorm-embedding \ - --task translation_from_pretrained_bart \ - --source-lang en_XX --target-lang ro_RO \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \ - --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \ - --lr-scheduler polynomial_decay --lr 3e-05 --warmup-updates 2500 --total-num-update 40000 \ - --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \ - --max-tokens 1024 --update-freq 2 \ - --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \ - --seed 222 --log-format simple --log-interval 2 \ - --restore-file $PRETRAIN \ - --reset-optimizer --reset-meters --reset-dataloader --reset-lr-scheduler \ - --langs $langs \ - --ddp-backend legacy_ddp -``` -## Generate on EN-RO -Get sacrebleu on finetuned en-ro model - -get tokenizer [here](https://github.com/rsennrich/wmt16-scripts) -```bash -wget https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.ft.enro.tar.gz -tar -xzvf mbart.cc25.ft.enro.tar.gz -``` - -```bash -model_dir=MBART_finetuned_enro # fix if you moved the checkpoint - -fairseq-generate path_2_data \ - --path $model_dir/model.pt \ - --task translation_from_pretrained_bart \ - --gen-subset test \ - -t ro_RO -s en_XX \ - --bpe 
'sentencepiece' --sentencepiece-model $model_dir/sentence.bpe.model \ - --sacrebleu --remove-bpe 'sentencepiece' \ - --batch-size 32 --langs $langs > en_ro - -cat en_ro | grep -P "^H" |sort -V |cut -f 3- | sed 's/\[ro_RO\]//g' |$TOKENIZER ro > en_ro.hyp -cat en_ro | grep -P "^T" |sort -V |cut -f 2- | sed 's/\[ro_RO\]//g' |$TOKENIZER ro > en_ro.ref -sacrebleu -tok 'none' -s 'none' en_ro.ref < en_ro.hyp -``` - -## Citation - -```bibtex -@article{liu2020multilingual, - title={Multilingual Denoising Pre-training for Neural Machine Translation}, - author={Yinhan Liu and Jiatao Gu and Naman Goyal and Xian Li and Sergey Edunov and Marjan Ghazvininejad and Mike Lewis and Luke Zettlemoyer}, - year={2020}, - eprint={2001.08210}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/data/extracted_features_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/data/extracted_features_dataset.py deleted file mode 100644 index d6ee9c4a3602be9db8ddfe67d41ce8a96a98ad1e..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/data/extracted_features_dataset.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import logging -import os -import contextlib - -import numpy as np -import torch - -from fairseq.data import FairseqDataset, data_utils - - -logger = logging.getLogger(__name__) - - -class ExtractedFeaturesDataset(FairseqDataset): - def __init__( - self, - path, - split, - min_length=3, - max_length=None, - labels=None, - label_dict=None, - shuffle=True, - sort_by_length=True, - ): - super().__init__() - - self.min_length = min_length - self.max_length = max_length - self.shuffle = shuffle - self.sort_by_length = sort_by_length - self.label_dict = label_dict - - if labels is not None: - assert label_dict is not None - - self.sizes = [] - self.offsets = [] - self.labels = [] - - path = os.path.join(path, split) - data_path = path - self.data = np.load(data_path + ".npy", mmap_mode="r") - - offset = 0 - skipped = 0 - - if not os.path.exists(path + f".{labels}"): - labels = None - - with open(data_path + ".lengths", "r") as len_f, open( - path + f".{labels}", "r" - ) if labels is not None else contextlib.ExitStack() as lbl_f: - for line in len_f: - length = int(line.rstrip()) - lbl = None if labels is None else next(lbl_f).rstrip().split() - if length >= min_length and ( - max_length is None or length <= max_length - ): - self.sizes.append(length) - self.offsets.append(offset) - if lbl is not None: - self.labels.append(lbl) - offset += length - - self.sizes = np.asarray(self.sizes) - self.offsets = np.asarray(self.offsets) - - logger.info(f"loaded {len(self.offsets)}, skipped {skipped} samples") - - def __getitem__(self, index): - offset = self.offsets[index] - end = self.sizes[index] + offset - feats = torch.from_numpy(self.data[offset:end].copy()).float() - - res = {"id": index, "features": feats} - if len(self.labels) > 0: - res["target"] = self.label_dict.encode_line( - self.labels[index], - line_tokenizer=lambda x: x, - append_eos=False, - ) - - return res - - def __len__(self): - return len(self.sizes) - - def collater(self, samples): - if len(samples) == 0: - return {} - - features = [s["features"] for s in samples] - sizes = [len(s) 
for s in features] - - target_size = max(sizes) - - collated_features = features[0].new_zeros( - len(features), target_size, features[0].size(-1) - ) - padding_mask = torch.BoolTensor(collated_features.shape[:-1]).fill_(False) - for i, (f, size) in enumerate(zip(features, sizes)): - collated_features[i, :size] = f - padding_mask[i, size:] = True - - res = { - "id": torch.LongTensor([s["id"] for s in samples]), - "net_input": {"features": collated_features, "padding_mask": padding_mask}, - } - - if len(self.labels) > 0: - target = data_utils.collate_tokens( - [s["target"] for s in samples], - pad_idx=self.label_dict.pad(), - left_pad=False, - ) - res["target"] = target - return res - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - return self.sizes[index] - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - - if self.sort_by_length: - order.append(self.sizes) - return np.lexsort(order)[::-1] - else: - return order[0] diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/model_criterion.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/model_criterion.py deleted file mode 100644 index 30350f13b1c00498de6784579250d6b342ced7dd..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/criterions/model_criterion.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from dataclasses import dataclass, field -from typing import Dict, List - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass - - -logger = logging.getLogger(__name__) - - -@dataclass -class ModelCriterionConfig(FairseqDataclass): - loss_weights: Dict[str, float] = field( - default_factory=dict, - metadata={"help": "weights for the loss terms"}, - ) - log_keys: List[str] = field( - default_factory=list, - metadata={"help": "additional output keys to log"}, - ) - - -@register_criterion("model", dataclass=ModelCriterionConfig) -class ModelCriterion(FairseqCriterion): - """ - This criterion relies on the model to supply losses. - The losses should be a dictionary of name -> scalar returned by - the model either by including it in the net_output dict or by - implementing a get_losses(net_output, sample) method. The final loss is - a scaled sum of all losses according to weights in loss_weights. - If no weights are provided, then all losses are scaled by 1.0. - - The losses will be automatically logged. Additional keys from - net_output dict can be logged via the log_keys parameter. 
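    For illustration (all names here are hypothetical), a model whose forward
    returns net_output = {"losses": {"recon": recon_loss, "kl": kl_loss},
    "sample_size": num_tokens} can be paired with loss_weights={"recon": 1.0, "kl": 0.1},
    giving loss = 1.0 * recon_loss + 0.1 * kl_loss.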
- """ - - def __init__(self, task, loss_weights=None, log_keys=None): - super().__init__(task) - self.loss_weights = loss_weights - self.log_keys = log_keys - - def forward(self, model, sample, reduce=True): - net_output = model(**sample["net_input"]) - - sample_size = net_output["sample_size"] - scaled_losses = {} - - if hasattr(model, "get_losses"): - losses = model.get_losses(net_output, sample) - elif isinstance(net_output, dict) and "losses" in net_output: - losses = net_output["losses"] - else: - raise Exception("Could not retrieve losses") - - for lk, p in losses.items(): - try: - coef = 1.0 if len(self.loss_weights) == 0 else self.loss_weights[lk] - except KeyError: - logger.error( - f"weight for loss {lk} is not in loss_weights ({self.loss_weights})" - ) - raise - if coef != 0 and p is not None: - scaled_losses[lk] = coef * p.float() - - loss = sum(scaled_losses.values()) - if reduce and loss.numel() > 1: - loss = loss.sum() - - logging_output = { - "loss": loss.data, - "ntokens": sample_size, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - "_world_size": 1, - } - - for lk in self.log_keys: - if lk in net_output and net_output[lk] is not None: - logging_output[lk] = float(net_output[lk]) - - if len(scaled_losses) > 1: - for lk, l in scaled_losses.items(): - logging_output[f"loss_{lk}"] = l.item() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - nsentences = utils.item( - sum(log.get("nsentences", 0) for log in logging_outputs) - ) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar("loss", loss_sum / sample_size, sample_size, round=3) - metrics.log_scalar("ntokens", ntokens) - metrics.log_scalar("nsentences", nsentences) - - builtin_keys = { - "loss", - "ntokens", - "nsentences", - "sample_size", - "_world_size", - } - - world_size = utils.item( - sum(log.get("_world_size", 0) for log in logging_outputs) - ) - - for k in logging_outputs[0]: - if k not in builtin_keys: - val = sum(log.get(k, 0) for log in logging_outputs) - if k.startswith("loss_"): - metrics.log_scalar(k, val / sample_size, sample_size, round=3) - else: - metrics.log_scalar(k, val / world_size, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/generate_mels.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/generate_mels.py deleted file mode 100644 index a3d331aef019cfd8cf45d6264db88d0fa26e5c0f..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/generate_mels.py +++ /dev/null @@ -1,70 +0,0 @@ -import numpy as np -import os -import torch -import commons - -import models -import utils -from argparse import ArgumentParser -from tqdm import tqdm -from text import text_to_sequence - -if __name__ == "__main__": - parser = ArgumentParser() - parser.add_argument("-m", "--model_dir", required=True, type=str) - parser.add_argument("-s", "--mels_dir", required=True, type=str) - args = parser.parse_args() - MODEL_DIR = args.model_dir # path to model dir - SAVE_MELS_DIR = args.mels_dir # path to save generated mels - - if not os.path.exists(SAVE_MELS_DIR): - os.makedirs(SAVE_MELS_DIR) - - hps = utils.get_hparams_from_dir(MODEL_DIR) - symbols = list(hps.data.punc) + list(hps.data.chars) - checkpoint_path = utils.latest_checkpoint_path(MODEL_DIR) - cleaner = hps.data.text_cleaners - - model = models.FlowGenerator( - len(symbols) + getattr(hps.data, "add_blank", False), - out_channels=hps.data.n_mel_channels, - **hps.model - ).to("cuda") - - utils.load_checkpoint(checkpoint_path, model) - model.decoder.store_inverse() # do not calcuate jacobians for fast decoding - _ = model.eval() - - def get_mel(text, fpath): - if getattr(hps.data, "add_blank", False): - text_norm = text_to_sequence(text, symbols, cleaner) - text_norm = commons.intersperse(text_norm, len(symbols)) - else: # If not using "add_blank" option during training, adding spaces at the beginning and the end of utterance improves quality - text = " " + text.strip() + " " - text_norm = text_to_sequence(text, symbols, cleaner) - - sequence = np.array(text_norm)[None, :] - - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]).cuda() - - with torch.no_grad(): - noise_scale = 0.667 - length_scale = 1.0 - (y_gen_tst, *_), *_, (attn_gen, *_) = model( - x_tst, - x_tst_lengths, - gen=True, - noise_scale=noise_scale, - length_scale=length_scale, - ) - - np.save(os.path.join(SAVE_MELS_DIR, fpath), y_gen_tst.cpu().detach().numpy()) - - for f in [hps.data.training_files, hps.data.validation_files]: - file_lines = open(f).read().splitlines() - - for line in tqdm(file_lines): - fname, text = line.split("|") - fname = os.path.basename(fname).replace(".wav", ".npy") - get_mel(text, fname) diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/meldataset.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/meldataset.py deleted file mode 100644 index 8c6ca9ec8a6cc6408a77492e795bffef7f86b611..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/hifi_gan/meldataset.py +++ /dev/null @@ -1,233 +0,0 @@ -import math -import os -import random -import torch -import torch.utils.data -import numpy as np -from librosa.util import normalize -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - - -def 
dynamic_range_decompression(x, C=1): - return np.exp(x) / C - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def mel_spectrogram( - y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False -): - if torch.min(y) < -1.0: - print("min value is ", torch.min(y)) - if torch.max(y) > 1.0: - print("max value is ", torch.max(y)) - - global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax) + "_" + str(y.device)] = ( - torch.from_numpy(mel).float().to(y.device) - ) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - spec = torch.stft( - y, - n_fft, - hop_length=hop_size, - win_length=win_size, - window=hann_window[str(y.device)], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - ) - - spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9)) - - spec = torch.matmul(mel_basis[str(fmax) + "_" + str(y.device)], spec) - spec = spectral_normalize_torch(spec) - - return spec - - -def get_dataset_filelist(a): - with open(a.input_training_file, "r", encoding="utf-8") as fi: - training_files = [x for x in fi.read().split("\n") if len(x) > 0] - - with open(a.input_validation_file, "r", encoding="utf-8") as fi: - validation_files = [x for x in fi.read().split("\n") if len(x) > 0] - return training_files, validation_files - - -class MelDataset(torch.utils.data.Dataset): - def __init__( - self, - training_files, - segment_size, - n_fft, - num_mels, - hop_size, - win_size, - sampling_rate, - fmin, - fmax, - split=True, - shuffle=True, - n_cache_reuse=1, - device=None, - fmax_loss=None, - fine_tuning=False, - base_mels_path=None, - ): - self.audio_files = training_files - random.seed(1234) - if shuffle: - random.shuffle(self.audio_files) - self.segment_size = segment_size - self.sampling_rate = sampling_rate - self.split = split - self.n_fft = n_fft - self.num_mels = num_mels - self.hop_size = hop_size - self.win_size = win_size - self.fmin = fmin - self.fmax = fmax - self.fmax_loss = fmax_loss - self.cached_wav = None - self.n_cache_reuse = n_cache_reuse - self._cache_ref_count = 0 - self.device = device - self.fine_tuning = fine_tuning - self.base_mels_path = base_mels_path - - def __getitem__(self, index): - filename = self.audio_files[index] - if self._cache_ref_count == 0: - audio, sampling_rate = load_wav(filename) - audio = audio / MAX_WAV_VALUE - if not self.fine_tuning: - audio = normalize(audio) * 0.95 - self.cached_wav = audio - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - self._cache_ref_count = self.n_cache_reuse - else: - audio = self.cached_wav - self._cache_ref_count -= 1 - - audio = torch.FloatTensor(audio) - audio = audio.unsqueeze(0) - - if not self.fine_tuning: - if self.split: - if audio.size(1) >= self.segment_size: - max_audio_start = audio.size(1) - 
self.segment_size - audio_start = random.randint(0, max_audio_start) - audio = audio[:, audio_start : audio_start + self.segment_size] - else: - audio = torch.nn.functional.pad( - audio, (0, self.segment_size - audio.size(1)), "constant" - ) - - mel = mel_spectrogram( - audio, - self.n_fft, - self.num_mels, - self.sampling_rate, - self.hop_size, - self.win_size, - self.fmin, - self.fmax, - center=False, - ) - else: - mel = np.load( - os.path.join( - self.base_mels_path, - os.path.splitext(os.path.split(filename)[-1])[0] + ".npy", - ) - ) - mel = torch.from_numpy(mel) - - if len(mel.shape) < 3: - mel = mel.unsqueeze(0) - - if self.split: - frames_per_seg = math.ceil(self.segment_size / self.hop_size) - - if audio.size(1) >= self.segment_size: - mel_start = random.randint(0, mel.size(2) - frames_per_seg - 1) - mel = mel[:, :, mel_start : mel_start + frames_per_seg] - audio = audio[ - :, - mel_start - * self.hop_size : (mel_start + frames_per_seg) - * self.hop_size, - ] - else: - mel = torch.nn.functional.pad( - mel, (0, frames_per_seg - mel.size(2)), "constant" - ) - audio = torch.nn.functional.pad( - audio, (0, self.segment_size - audio.size(1)), "constant" - ) - - mel_loss = mel_spectrogram( - audio, - self.n_fft, - self.num_mels, - self.sampling_rate, - self.hop_size, - self.win_size, - self.fmin, - self.fmax_loss, - center=False, - ) - - return (mel.squeeze(), audio.squeeze(0), filename, mel_loss.squeeze()) - - def __len__(self): - return len(self.audio_files) diff --git a/spaces/Heriot-WattUniversity/generate-tone/README.md b/spaces/Heriot-WattUniversity/generate-tone/README.md deleted file mode 100644 index 7306bdeb6e66332cb4349eff6a739cf202bd5da3..0000000000000000000000000000000000000000 --- a/spaces/Heriot-WattUniversity/generate-tone/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Generate Tone -emoji: 📊 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 2.8.12 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Heriot-WattUniversity/generate-tone/app.py b/spaces/Heriot-WattUniversity/generate-tone/app.py deleted file mode 100644 index afab2e466822e599e84346b669fe5fe2ce8960ed..0000000000000000000000000000000000000000 --- a/spaces/Heriot-WattUniversity/generate-tone/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import numpy as np -import gradio as gr - -def generate_tone(note, octave, duration): - - sampling_rate = 48000 - a4_freq, tones_from_a4 = 440, 12 * (octave - 4) + (note - 9) - frequency = a4_freq * 2 ** (tones_from_a4 / 12) - audio = np.linspace(0, int(duration), int(duration) * sampling_rate) - audio = (20000 * np.sin(audio * (2 * np.pi * frequency))).astype(np.int16) - return sampling_rate, audio - -gr.Interface( - fn=generate_tone, - inputs=[ - gr.inputs.Dropdown(["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"], type="index"), - gr.inputs.Slider(2, 6, step=1), - gr.inputs.Textbox(type="number", default=1, label="Duration in seconds"), - ], - outputs="audio", - title="Generate a Musical Tone!" 
-).launch() \ No newline at end of file diff --git a/spaces/HiepPhuocSS/TimeSFormer/utils/img_container.py b/spaces/HiepPhuocSS/TimeSFormer/utils/img_container.py deleted file mode 100644 index bf3bac723ff7c8d7e8348e73e5903bcec1bba0cb..0000000000000000000000000000000000000000 --- a/spaces/HiepPhuocSS/TimeSFormer/utils/img_container.py +++ /dev/null @@ -1,28 +0,0 @@ -from typing import List, Optional - -import numpy as np -from pandas import DataFrame - -from .frame_rate import FrameRate - - -class ImgContainer: - def __init__(self, frames_per_video: int = 8, is_recording: bool = False) -> None: - self.img: Optional[np.ndarray] = None # raw image - self.frame_rate: FrameRate = FrameRate() - self.imgs: List[np.ndarray] = [] - self.frames_per_video = frames_per_video - self.rs: Optional[DataFrame] = None - self.is_recording = is_recording - - def add_frame(self, frame: np.ndarray) -> None: - if len(self.imgs) >= self.frames_per_video: - self.imgs.pop(0) - self.imgs.append(frame) - - def toggle_recording(self) -> None: - self.is_recording = not self.is_recording - - @property - def ready(self): - return len(self.imgs) == self.frames_per_video diff --git a/spaces/Hunter731/Unity3D-RTS/Build/output.framework.js b/spaces/Hunter731/Unity3D-RTS/Build/output.framework.js deleted file mode 100644 index 09ad3f7f4556cc2eace183aab01c368c5a3c78f8..0000000000000000000000000000000000000000 --- a/spaces/Hunter731/Unity3D-RTS/Build/output.framework.js +++ /dev/null @@ -1,5 +0,0 @@ -function unityFramework(Module) { -var Module=typeof Module!=="undefined"?Module:{}; -function Pointer_stringify(s,len){warnOnce("The JavaScript function 'Pointer_stringify(ptrToSomeCString)' is obsoleted and will be removed in a future Unity version. Please call 'UTF8ToString(ptrToSomeCString)' instead.");return UTF8ToString(s,len)}Module["Pointer_stringify"]=Pointer_stringify;var stackTraceReference="(^|\\n)(\\s+at\\s+|)jsStackTrace(\\s+\\(|@)([^\\n]+):\\d+:\\d+(\\)|)(\\n|$)";var stackTraceReferenceMatch=jsStackTrace().match(new RegExp(stackTraceReference));if(stackTraceReferenceMatch)Module.stackTraceRegExp=new RegExp(stackTraceReference.replace("([^\\n]+)",stackTraceReferenceMatch[4].replace(/[\\^${}[\]().*+?|]/g,"\\$&")).replace("jsStackTrace","[^\\n]+"));var abort=function(what){if(ABORT)return;ABORT=true;EXITSTATUS=1;if(typeof ENVIRONMENT_IS_PTHREAD!=="undefined"&&ENVIRONMENT_IS_PTHREAD)console.error("Pthread aborting at "+(new Error).stack);if(what!==undefined){out(what);err(what);what=JSON.stringify(what)}else{what=""}var message="abort("+what+") at "+stackTrace();if(Module.abortHandler&&Module.abortHandler(message))return;throw message};Module["SetFullscreen"]=function(fullscreen){if(typeof runtimeInitialized==="undefined"||!runtimeInitialized){console.log("Runtime not initialized yet.")}else if(typeof JSEvents==="undefined"){console.log("Player not loaded yet.")}else{var tmp=JSEvents.canPerformEventHandlerRequests;JSEvents.canPerformEventHandlerRequests=function(){return 1};Module.ccall("SetFullscreen",null,["number"],[fullscreen]);JSEvents.canPerformEventHandlerRequests=tmp}};if(typeof ENVIRONMENT_IS_PTHREAD==="undefined"||!ENVIRONMENT_IS_PTHREAD){Module["preRun"].push(function(){var unityFileSystemInit=Module["unityFileSystemInit"]||function(){FS.mkdir("/idbfs");FS.mount(IDBFS,{},"/idbfs");Module.addRunDependency("JS_FileSystem_Mount");FS.syncfs(true,function(err){if(err)console.log("IndexedDB is not available. 
Data will not persist in cache and PlayerPrefs will not be saved.");Module.removeRunDependency("JS_FileSystem_Mount")})};unityFileSystemInit()})}var videoInputDevices=[];var videoInputDevicesEnumerated=false;var removeEnumerateMediaDevicesRunDependency;var enumerateWatchdog=null;function matchToOldDevice(newDevice){var oldDevices=Object.keys(videoInputDevices);for(var i=0;i1){thisProgram=process["argv"][1].replace(/\\/g,"/")}arguments_=process["argv"].slice(2);if(typeof module!=="undefined"){module["exports"]=Module}process["on"]("uncaughtException",function(ex){if(!(ex instanceof ExitStatus)){throw ex}});process["on"]("unhandledRejection",abort);quit_=function(status){process["exit"](status)};Module["inspect"]=function(){return"[Emscripten Module object]"}}else if(ENVIRONMENT_IS_SHELL){if(typeof read!="undefined"){read_=function shell_read(f){return read(f)}}readBinary=function readBinary(f){var data;if(typeof readbuffer==="function"){return new Uint8Array(readbuffer(f))}data=read(f,"binary");assert(typeof data==="object");return data};if(typeof scriptArgs!="undefined"){arguments_=scriptArgs}else if(typeof arguments!="undefined"){arguments_=arguments}if(typeof quit==="function"){quit_=function(status){quit(status)}}if(typeof print!=="undefined"){if(typeof console==="undefined")console={};console.log=print;console.warn=console.error=typeof printErr!=="undefined"?printErr:print}}else if(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER){if(ENVIRONMENT_IS_WORKER){scriptDirectory=self.location.href}else if(typeof document!=="undefined"&&document.currentScript){scriptDirectory=document.currentScript.src}if(scriptDirectory.indexOf("blob:")!==0){scriptDirectory=scriptDirectory.substr(0,scriptDirectory.lastIndexOf("/")+1)}else{scriptDirectory=""}{read_=function(url){var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.send(null);return xhr.responseText};if(ENVIRONMENT_IS_WORKER){readBinary=function(url){var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.responseType="arraybuffer";xhr.send(null);return new Uint8Array(xhr.response)}}readAsync=function(url,onload,onerror){var xhr=new XMLHttpRequest;xhr.open("GET",url,true);xhr.responseType="arraybuffer";xhr.onload=function(){if(xhr.status==200||xhr.status==0&&xhr.response){onload(xhr.response);return}onerror()};xhr.onerror=onerror;xhr.send(null)}}setWindowTitle=function(title){document.title=title}}else{}var out=Module["print"]||console.log.bind(console);var err=Module["printErr"]||console.warn.bind(console);for(key in moduleOverrides){if(moduleOverrides.hasOwnProperty(key)){Module[key]=moduleOverrides[key]}}moduleOverrides=null;if(Module["arguments"])arguments_=Module["arguments"];if(Module["thisProgram"])thisProgram=Module["thisProgram"];if(Module["quit"])quit_=Module["quit"];var STACK_ALIGN=16;function alignMemory(size,factor){if(!factor)factor=STACK_ALIGN;return Math.ceil(size/factor)*factor}function warnOnce(text){if(!warnOnce.shown)warnOnce.shown={};if(!warnOnce.shown[text]){warnOnce.shown[text]=1;err(text)}}var tempRet0=0;var setTempRet0=function(value){tempRet0=value};var getTempRet0=function(){return tempRet0};var wasmBinary;if(Module["wasmBinary"])wasmBinary=Module["wasmBinary"];var noExitRuntime=Module["noExitRuntime"]||true;if(typeof WebAssembly!=="object"){abort("no native wasm support detected")}var wasmMemory;var ABORT=false;var EXITSTATUS;function assert(condition,text){if(!condition){abort("Assertion failed: "+text)}}function getCFunc(ident){var func=Module["_"+ident];assert(func,"Cannot call unknown function "+ident+", make sure 
it is exported");return func}function ccall(ident,returnType,argTypes,args,opts){var toC={"string":function(str){var ret=0;if(str!==null&&str!==undefined&&str!==0){var len=(str.length<<2)+1;ret=stackAlloc(len);stringToUTF8(str,ret,len)}return ret},"array":function(arr){var ret=stackAlloc(arr.length);writeArrayToMemory(arr,ret);return ret}};function convertReturnValue(ret){if(returnType==="string")return UTF8ToString(ret);if(returnType==="boolean")return Boolean(ret);return ret}var func=getCFunc(ident);var cArgs=[];var stack=0;if(args){for(var i=0;i=endIdx))++endPtr;if(endPtr-idx>16&&heap.subarray&&UTF8Decoder){return UTF8Decoder.decode(heap.subarray(idx,endPtr))}else{var str="";while(idx>10,56320|ch&1023)}}}return str}function UTF8ToString(ptr,maxBytesToRead){return ptr?UTF8ArrayToString(HEAPU8,ptr,maxBytesToRead):""}function stringToUTF8Array(str,heap,outIdx,maxBytesToWrite){if(!(maxBytesToWrite>0))return 0;var startIdx=outIdx;var endIdx=outIdx+maxBytesToWrite-1;for(var i=0;i=55296&&u<=57343){var u1=str.charCodeAt(++i);u=65536+((u&1023)<<10)|u1&1023}if(u<=127){if(outIdx>=endIdx)break;heap[outIdx++]=u}else if(u<=2047){if(outIdx+1>=endIdx)break;heap[outIdx++]=192|u>>6;heap[outIdx++]=128|u&63}else if(u<=65535){if(outIdx+2>=endIdx)break;heap[outIdx++]=224|u>>12;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}else{if(outIdx+3>=endIdx)break;heap[outIdx++]=240|u>>18;heap[outIdx++]=128|u>>12&63;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}}heap[outIdx]=0;return outIdx-startIdx}function stringToUTF8(str,outPtr,maxBytesToWrite){return stringToUTF8Array(str,HEAPU8,outPtr,maxBytesToWrite)}function lengthBytesUTF8(str){var len=0;for(var i=0;i=55296&&u<=57343)u=65536+((u&1023)<<10)|str.charCodeAt(++i)&1023;if(u<=127)++len;else if(u<=2047)len+=2;else if(u<=65535)len+=3;else len+=4}return len}function allocateUTF8(str){var size=lengthBytesUTF8(str)+1;var ret=_malloc(size);if(ret)stringToUTF8Array(str,HEAP8,ret,size);return ret}function allocateUTF8OnStack(str){var size=lengthBytesUTF8(str)+1;var ret=stackAlloc(size);stringToUTF8Array(str,HEAP8,ret,size);return ret}function writeArrayToMemory(array,buffer){HEAP8.set(array,buffer)}function writeAsciiToMemory(str,buffer,dontAddNull){for(var i=0;i>0]=str.charCodeAt(i)}if(!dontAddNull)HEAP8[buffer>>0]=0}function alignUp(x,multiple){if(x%multiple>0){x+=multiple-x%multiple}return x}var buffer,HEAP8,HEAPU8,HEAP16,HEAPU16,HEAP32,HEAPU32,HEAPF32,HEAPF64;function updateGlobalBufferAndViews(buf){buffer=buf;Module["HEAP8"]=HEAP8=new Int8Array(buf);Module["HEAP16"]=HEAP16=new Int16Array(buf);Module["HEAP32"]=HEAP32=new Int32Array(buf);Module["HEAPU8"]=HEAPU8=new Uint8Array(buf);Module["HEAPU16"]=HEAPU16=new Uint16Array(buf);Module["HEAPU32"]=HEAPU32=new Uint32Array(buf);Module["HEAPF32"]=HEAPF32=new Float32Array(buf);Module["HEAPF64"]=HEAPF64=new Float64Array(buf)}var INITIAL_MEMORY=Module["INITIAL_MEMORY"]||33554432;var wasmTable;var __ATPRERUN__=[];var __ATINIT__=[];var __ATMAIN__=[];var __ATEXIT__=[];var __ATPOSTRUN__=[];var runtimeInitialized=false;var runtimeExited=false;function preRun(){if(Module["preRun"]){if(typeof Module["preRun"]=="function")Module["preRun"]=[Module["preRun"]];while(Module["preRun"].length){addOnPreRun(Module["preRun"].shift())}}callRuntimeCallbacks(__ATPRERUN__)}function initRuntime(){runtimeInitialized=true;if(!Module["noFSInit"]&&!FS.init.initialized)FS.init();TTY.init();SOCKFS.root=FS.mount(SOCKFS,{},null);PIPEFS.root=FS.mount(PIPEFS,{},null);callRuntimeCallbacks(__ATINIT__)}function 
preMain(){FS.ignorePermissions=false;callRuntimeCallbacks(__ATMAIN__)}function exitRuntime(){runtimeExited=true}function postRun(){if(Module["postRun"]){if(typeof Module["postRun"]=="function")Module["postRun"]=[Module["postRun"]];while(Module["postRun"].length){addOnPostRun(Module["postRun"].shift())}}callRuntimeCallbacks(__ATPOSTRUN__)}function addOnPreRun(cb){__ATPRERUN__.unshift(cb)}function addOnInit(cb){__ATINIT__.unshift(cb)}function addOnPostRun(cb){__ATPOSTRUN__.unshift(cb)}var runDependencies=0;var runDependencyWatcher=null;var dependenciesFulfilled=null;function getUniqueRunDependency(id){return id}function addRunDependency(id){runDependencies++;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}}function removeRunDependency(id){runDependencies--;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}if(runDependencies==0){if(runDependencyWatcher!==null){clearInterval(runDependencyWatcher);runDependencyWatcher=null}if(dependenciesFulfilled){var callback=dependenciesFulfilled;dependenciesFulfilled=null;callback()}}}Module["preloadedImages"]={};Module["preloadedAudios"]={};function abort(what){if(Module["onAbort"]){Module["onAbort"](what)}what+="";err(what);ABORT=true;EXITSTATUS=1;what="abort("+what+"). Build with -s ASSERTIONS=1 for more info.";var e=new WebAssembly.RuntimeError(what);throw e}var dataURIPrefix="data:application/octet-stream;base64,";function isDataURI(filename){return filename.startsWith(dataURIPrefix)}function isFileURI(filename){return filename.startsWith("file://")}var wasmBinaryFile="build.wasm";if(!isDataURI(wasmBinaryFile)){wasmBinaryFile=locateFile(wasmBinaryFile)}function getBinary(file){try{if(file==wasmBinaryFile&&wasmBinary){return new Uint8Array(wasmBinary)}if(readBinary){return readBinary(file)}else{throw"both async and sync fetching of the wasm failed"}}catch(err){abort(err)}}function getBinaryPromise(){if(!wasmBinary&&(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER)){if(typeof fetch==="function"&&!isFileURI(wasmBinaryFile)){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){if(!response["ok"]){throw"failed to load wasm binary file at '"+wasmBinaryFile+"'"}return response["arrayBuffer"]()}).catch(function(){return getBinary(wasmBinaryFile)})}else{if(readAsync){return new Promise(function(resolve,reject){readAsync(wasmBinaryFile,function(response){resolve(new Uint8Array(response))},reject)})}}}return Promise.resolve().then(function(){return getBinary(wasmBinaryFile)})}function createWasm(){var info={"a":asmLibraryArg};function receiveInstance(instance,module){var exports=instance.exports;Module["asm"]=exports;wasmMemory=Module["asm"]["$g"];updateGlobalBufferAndViews(wasmMemory.buffer);wasmTable=Module["asm"]["xh"];addOnInit(Module["asm"]["ah"]);removeRunDependency("wasm-instantiate")}addRunDependency("wasm-instantiate");function receiveInstantiationResult(result){receiveInstance(result["instance"])}function instantiateArrayBuffer(receiver){return getBinaryPromise().then(function(binary){var result=WebAssembly.instantiate(binary,info);return result}).then(receiver,function(reason){err("failed to asynchronously prepare wasm: "+reason);abort(reason)})}function instantiateAsync(){if(!wasmBinary&&typeof WebAssembly.instantiateStreaming==="function"&&!isDataURI(wasmBinaryFile)&&!isFileURI(wasmBinaryFile)&&typeof fetch==="function"){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){var 
result=WebAssembly.instantiateStreaming(response,info);return result.then(receiveInstantiationResult,function(reason){err("wasm streaming compile failed: "+reason);err("falling back to ArrayBuffer instantiation");return instantiateArrayBuffer(receiveInstantiationResult)})})}else{return instantiateArrayBuffer(receiveInstantiationResult)}}if(Module["instantiateWasm"]){try{var exports=Module["instantiateWasm"](info,receiveInstance);return exports}catch(e){err("Module.instantiateWasm callback failed with error: "+e);return false}}instantiateAsync();return{}}var tempDouble;var tempI64;var ASM_CONSTS={3464392:function(){return Module.webglContextAttributes.premultipliedAlpha},3464453:function(){return Module.webglContextAttributes.preserveDrawingBuffer},3464517:function(){return Module.webglContextAttributes.powerPreference}};function callRuntimeCallbacks(callbacks){while(callbacks.length>0){var callback=callbacks.shift();if(typeof callback=="function"){callback(Module);continue}var func=callback.func;if(typeof func==="number"){if(callback.arg===undefined){(function(){dynCall_v.call(null,func)})()}else{(function(a1){dynCall_vi.apply(null,[func,a1])})(callback.arg)}}else{func(callback.arg===undefined?null:callback.arg)}}}function demangle(func){return func}function demangleAll(text){var regex=/\b_Z[\w\d_]+/g;return text.replace(regex,function(x){var y=demangle(x);return x===y?x:y+" ["+x+"]"})}function dynCallLegacy(sig,ptr,args){var f=Module["dynCall_"+sig];return args&&args.length?f.apply(null,[ptr].concat(args)):f.call(null,ptr)}function dynCall(sig,ptr,args){return dynCallLegacy(sig,ptr,args)}function jsStackTrace(){var error=new Error;if(!error.stack){try{throw new Error}catch(e){error=e}if(!error.stack){return"(no stack trace available)"}}return error.stack.toString()}var runtimeKeepaliveCounter=0;function keepRuntimeAlive(){return noExitRuntime||runtimeKeepaliveCounter>0}function stackTrace(){var js=jsStackTrace();if(Module["extraStackTrace"])js+="\n"+Module["extraStackTrace"]();return demangleAll(js)}var JS_Accelerometer=null;var JS_Accelerometer_callback=0;function _JS_Accelerometer_IsRunning(){return JS_Accelerometer&&JS_Accelerometer.activated||JS_Accelerometer_callback!=0}var JS_Accelerometer_multiplier=1;var JS_Accelerometer_lastValue={x:0,y:0,z:0};function JS_Accelerometer_eventHandler(){JS_Accelerometer_lastValue={x:JS_Accelerometer.x*JS_Accelerometer_multiplier,y:JS_Accelerometer.y*JS_Accelerometer_multiplier,z:JS_Accelerometer.z*JS_Accelerometer_multiplier};if(JS_Accelerometer_callback!=0)dynCall_vfff(JS_Accelerometer_callback,JS_Accelerometer_lastValue.x,JS_Accelerometer_lastValue.y,JS_Accelerometer_lastValue.z)}var JS_Accelerometer_frequencyRequest=0;var JS_Accelerometer_frequency=0;var JS_LinearAccelerationSensor_callback=0;var JS_GravitySensor_callback=0;var JS_Gyroscope_callback=0;function JS_ComputeGravity(accelerometerValue,linearAccelerationValue){var difference={x:accelerometerValue.x-linearAccelerationValue.x,y:accelerometerValue.y-linearAccelerationValue.y,z:accelerometerValue.z-linearAccelerationValue.z};var differenceMagnitudeSq=difference.x*difference.x+difference.y*difference.y+difference.z*difference.z;var sum={x:accelerometerValue.x+linearAccelerationValue.x,y:accelerometerValue.y+linearAccelerationValue.y,z:accelerometerValue.z+linearAccelerationValue.z};var sumMagnitudeSq=sum.x*sum.x+sum.y*sum.y+sum.z*sum.z;return differenceMagnitudeSq<=sumMagnitudeSq?difference:sum}function JS_DeviceMotion_eventHandler(event){var 
accelerometerValue={x:event.accelerationIncludingGravity.x*JS_Accelerometer_multiplier,y:event.accelerationIncludingGravity.y*JS_Accelerometer_multiplier,z:event.accelerationIncludingGravity.z*JS_Accelerometer_multiplier};if(JS_Accelerometer_callback!=0)dynCall_vfff(JS_Accelerometer_callback,accelerometerValue.x,accelerometerValue.y,accelerometerValue.z);var linearAccelerationValue={x:event.acceleration.x*JS_Accelerometer_multiplier,y:event.acceleration.y*JS_Accelerometer_multiplier,z:event.acceleration.z*JS_Accelerometer_multiplier};if(JS_LinearAccelerationSensor_callback!=0)dynCall_vfff(JS_LinearAccelerationSensor_callback,linearAccelerationValue.x,linearAccelerationValue.y,linearAccelerationValue.z);if(JS_GravitySensor_callback!=0){var gravityValue=JS_ComputeGravity(accelerometerValue,linearAccelerationValue);dynCall_vfff(JS_GravitySensor_callback,gravityValue.x,gravityValue.y,gravityValue.z)}if(JS_Gyroscope_callback!=0){var degToRad=Math.PI/180;dynCall_vfff(JS_Gyroscope_callback,event.rotationRate.alpha*degToRad,event.rotationRate.beta*degToRad,event.rotationRate.gamma*degToRad)}}var JS_DeviceSensorPermissions=0;function JS_RequestDeviceSensorPermissions(permissions){if(permissions&1){if(typeof DeviceOrientationEvent.requestPermission==="function"){DeviceOrientationEvent.requestPermission().then(function(permissionState){if(permissionState==="granted"){JS_DeviceSensorPermissions&=~1}else{warnOnce("DeviceOrientationEvent permission not granted")}}).catch(function(err){warnOnce(err);JS_DeviceSensorPermissions|=1})}}if(permissions&2){if(typeof DeviceMotionEvent.requestPermission==="function"){DeviceMotionEvent.requestPermission().then(function(permissionState){if(permissionState==="granted"){JS_DeviceSensorPermissions&=~2}else{warnOnce("DeviceMotionEvent permission not granted")}}).catch(function(err){warnOnce(err);JS_DeviceSensorPermissions|=2})}}}function JS_DeviceMotion_add(){if(JS_Accelerometer_callback==0&&JS_LinearAccelerationSensor_callback==0&&JS_GravitySensor_callback==0&&JS_Gyroscope_callback==0){JS_RequestDeviceSensorPermissions(2);window.addEventListener("devicemotion",JS_DeviceMotion_eventHandler)}}function JS_DefineAccelerometerMultiplier(){var g=9.80665;JS_Accelerometer_multiplier=/(iPhone|iPad|Macintosh)/i.test(navigator.userAgent)?1/g:-1/g}function _JS_Accelerometer_Start(callback,frequency){JS_DefineAccelerometerMultiplier();if(typeof Accelerometer==="undefined"){JS_DeviceMotion_add();if(callback!=0)JS_Accelerometer_callback=callback;return}if(callback!=0)JS_Accelerometer_callback=callback;function InitializeAccelerometer(frequency){JS_Accelerometer=new Accelerometer({frequency:frequency,referenceFrame:"device"});JS_Accelerometer.addEventListener("reading",JS_Accelerometer_eventHandler);JS_Accelerometer.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_Accelerometer.start();JS_Accelerometer_frequency=frequency}if(JS_Accelerometer){if(JS_Accelerometer_frequency!=frequency){JS_Accelerometer.stop();JS_Accelerometer.removeEventListener("reading",JS_Accelerometer_eventHandler);InitializeAccelerometer(frequency)}}else if(JS_Accelerometer_frequencyRequest!=0){JS_Accelerometer_frequencyRequest=frequency}else{JS_Accelerometer_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeAccelerometer(JS_Accelerometer_frequencyRequest)}else{warnOnce("No permission to use Accelerometer.")}JS_Accelerometer_frequencyRequest=0})}}function 
JS_DeviceMotion_remove(){if(JS_Accelerometer_callback==0&&JS_LinearAccelerationSensor_callback==0&&JS_GravitySensor_callback==0&&JS_Gyroscope_callback==0){window.removeEventListener("devicemotion",JS_DeviceOrientation_eventHandler)}}function _JS_Accelerometer_Stop(){if(JS_Accelerometer){if(typeof GravitySensor!=="undefined"||JS_GravitySensor_callback==0){JS_Accelerometer.stop();JS_Accelerometer.removeEventListener("reading",JS_Accelerometer_eventHandler);JS_Accelerometer=null}JS_Accelerometer_callback=0;JS_Accelerometer_frequency=0}else if(JS_Accelerometer_callback!=0){JS_Accelerometer_callback=0;JS_DeviceMotion_remove()}}function _JS_Cursor_SetImage(ptr,length){var binary="";for(var i=0;i>2]=viewportX-rect.left;HEAPU32[targetY>>2]=viewportY-rect.top}function stringToNewUTF8(jsString){var length=lengthBytesUTF8(jsString)+1;var cString=_malloc(length);stringToUTF8(jsString,cString,length);return cString}function _JS_DOM_UnityCanvasSelector(){if(!_JS_DOM_UnityCanvasSelector.ptr){var canvasId=Module["canvas"]?Module["canvas"].id:"unity-canvas";var canvasSelector="#"+jsDomCssEscapeId(canvasId);_JS_DOM_UnityCanvasSelector.ptr=stringToNewUTF8(canvasSelector)}return _JS_DOM_UnityCanvasSelector.ptr}var fs={numPendingSync:0,syncInternal:1e3,syncInProgress:false,sync:function(onlyPendingSync){if(onlyPendingSync){if(fs.numPendingSync==0)return}else if(fs.syncInProgress){fs.numPendingSync++;return}fs.syncInProgress=true;FS.syncfs(false,function(err){fs.syncInProgress=false});fs.numPendingSync=0}};function _JS_FileSystem_Initialize(){Module.setInterval(function(){fs.sync(true)},fs.syncInternal)}function _JS_FileSystem_Sync(){fs.sync(false)}var JS_GravitySensor=null;function _JS_GravitySensor_IsRunning(){return typeof GravitySensor!=="undefined"?JS_GravitySensor&&JS_GravitySensor.activated:JS_GravitySensor_callback!=0}function JS_GravitySensor_eventHandler(){if(JS_GravitySensor_callback!=0)dynCall_vfff(JS_GravitySensor_callback,JS_GravitySensor.x*JS_Accelerometer_multiplier,JS_GravitySensor.y*JS_Accelerometer_multiplier,JS_GravitySensor.z*JS_Accelerometer_multiplier)}var JS_GravitySensor_frequencyRequest=0;var JS_LinearAccelerationSensor=null;function JS_LinearAccelerationSensor_eventHandler(){var linearAccelerationValue={x:JS_LinearAccelerationSensor.x*JS_Accelerometer_multiplier,y:JS_LinearAccelerationSensor.y*JS_Accelerometer_multiplier,z:JS_LinearAccelerationSensor.z*JS_Accelerometer_multiplier};if(JS_LinearAccelerationSensor_callback!=0)dynCall_vfff(JS_LinearAccelerationSensor_callback,linearAccelerationValue.x,linearAccelerationValue.y,linearAccelerationValue.z);if(JS_GravitySensor_callback!=0&&typeof GravitySensor==="undefined"){var gravityValue=JS_ComputeGravity(JS_Accelerometer_lastValue,linearAccelerationValue);dynCall_vfff(JS_GravitySensor_callback,gravityValue.x,gravityValue.y,gravityValue.z)}}var JS_LinearAccelerationSensor_frequencyRequest=0;var JS_LinearAccelerationSensor_frequency=0;function _JS_LinearAccelerationSensor_Start(callback,frequency){JS_DefineAccelerometerMultiplier();if(typeof LinearAccelerationSensor==="undefined"){JS_DeviceMotion_add();if(callback!=0)JS_LinearAccelerationSensor_callback=callback;return}if(callback!=0)JS_LinearAccelerationSensor_callback=callback;function InitializeLinearAccelerationSensor(frequency){JS_LinearAccelerationSensor=new 
LinearAccelerationSensor({frequency:frequency,referenceFrame:"device"});JS_LinearAccelerationSensor.addEventListener("reading",JS_LinearAccelerationSensor_eventHandler);JS_LinearAccelerationSensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_LinearAccelerationSensor.start();JS_LinearAccelerationSensor_frequency=frequency}if(JS_LinearAccelerationSensor){if(JS_LinearAccelerationSensor_frequency!=frequency){JS_LinearAccelerationSensor.stop();JS_LinearAccelerationSensor.removeEventListener("reading",JS_LinearAccelerationSensor_eventHandler);InitializeLinearAccelerationSensor(frequency)}}else if(JS_LinearAccelerationSensor_frequencyRequest!=0){JS_LinearAccelerationSensor_frequencyRequest=frequency}else{JS_LinearAccelerationSensor_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeLinearAccelerationSensor(JS_LinearAccelerationSensor_frequencyRequest)}else{warnOnce("No permission to use LinearAccelerationSensor.")}JS_LinearAccelerationSensor_frequencyRequest=0})}}function _JS_GravitySensor_Start(callback,frequency){if(typeof GravitySensor==="undefined"){_JS_Accelerometer_Start(0,Math.max(frequency,JS_Accelerometer_frequency));_JS_LinearAccelerationSensor_Start(0,Math.max(frequency,JS_LinearAccelerationSensor_frequency));JS_GravitySensor_callback=callback;return}JS_DefineAccelerometerMultiplier();JS_GravitySensor_callback=callback;function InitializeGravitySensor(frequency){JS_GravitySensor=new GravitySensor({frequency:frequency,referenceFrame:"device"});JS_GravitySensor.addEventListener("reading",JS_GravitySensor_eventHandler);JS_GravitySensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_GravitySensor.start()}if(JS_GravitySensor){JS_GravitySensor.stop();JS_GravitySensor.removeEventListener("reading",JS_GravitySensor_eventHandler);InitializeGravitySensor(frequency)}else if(JS_GravitySensor_frequencyRequest!=0){JS_GravitySensor_frequencyRequest=frequency}else{JS_GravitySensor_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeGravitySensor(JS_GravitySensor_frequencyRequest)}else{warnOnce("No permission to use GravitySensor.")}JS_GravitySensor_frequencyRequest=0})}}function _JS_LinearAccelerationSensor_Stop(){if(JS_LinearAccelerationSensor){if(typeof GravitySensor!=="undefined"||JS_GravitySensor_callback==0){JS_LinearAccelerationSensor.stop();JS_LinearAccelerationSensor.removeEventListener("reading",JS_LinearAccelerationSensor_eventHandler);JS_LinearAccelerationSensor=null}JS_LinearAccelerationSensor_callback=0;JS_LinearAccelerationSensor_frequency=0}else if(JS_LinearAccelerationSensor_callback!=0){JS_LinearAccelerationSensor_callback=0;JS_DeviceMotion_remove()}}function _JS_GravitySensor_Stop(){JS_GravitySensor_callback=0;if(typeof GravitySensor==="undefined"){if(JS_Accelerometer_callback==0)_JS_Accelerometer_Stop();if(JS_LinearAccelerationSensor_callback==0)_JS_LinearAccelerationSensor_Stop();return}if(JS_GravitySensor){JS_GravitySensor.stop();JS_GravitySensor.removeEventListener("reading",JS_GravitySensor_eventHandler);JS_GravitySensor=null}}var JS_Gyroscope=null;function _JS_Gyroscope_IsRunning(){return JS_Gyroscope&&JS_Gyroscope.activated||JS_Gyroscope_callback!=0}function JS_Gyroscope_eventHandler(){if(JS_Gyroscope_callback!=0)dynCall_vfff(JS_Gyroscope_callback,JS_Gyroscope.x,JS_Gyroscope.y,JS_Gyroscope.z)}var JS_Gyroscope_frequencyRequest=0;function 
_JS_Gyroscope_Start(callback,frequency){if(typeof Gyroscope==="undefined"){JS_DeviceMotion_add();JS_Gyroscope_callback=callback;return}JS_Gyroscope_callback=callback;function InitializeGyroscope(frequency){JS_Gyroscope=new Gyroscope({frequency:frequency,referenceFrame:"device"});JS_Gyroscope.addEventListener("reading",JS_Gyroscope_eventHandler);JS_Gyroscope.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_Gyroscope.start()}if(JS_Gyroscope){JS_Gyroscope.stop();JS_Gyroscope.removeEventListener("reading",JS_Gyroscope_eventHandler);InitializeGyroscope(frequency)}else if(JS_Gyroscope_frequencyRequest!=0){JS_Gyroscope_frequencyRequest=frequency}else{JS_Gyroscope_frequencyRequest=frequency;navigator.permissions.query({name:"gyroscope"}).then(function(result){if(result.state==="granted"){InitializeGyroscope(JS_Gyroscope_frequencyRequest)}else{warnOnce("No permission to use Gyroscope.")}JS_Gyroscope_frequencyRequest=0})}}function _JS_Gyroscope_Stop(){if(JS_Gyroscope){JS_Gyroscope.stop();JS_Gyroscope.removeEventListener("reading",JS_Gyroscope_eventHandler);JS_Gyroscope=null;JS_Gyroscope_callback=0}else if(JS_Gyroscope_callback!=0){JS_Gyroscope_callback=0;JS_DeviceMotion_remove()}}function _JS_LinearAccelerationSensor_IsRunning(){return JS_LinearAccelerationSensor&&JS_LinearAccelerationSensor.activated||JS_LinearAccelerationSensor_callback!=0}function _JS_Log_Dump(ptr,type){var str=UTF8ToString(ptr);if(typeof dump=="function")dump(str);switch(type){case 0:case 1:case 4:console.error(str);return;case 2:console.warn(str);return;case 3:case 5:console.log(str);return;default:console.error("Unknown console message type!");console.error(str)}}function _JS_Log_StackTrace(buffer,bufferSize){var trace=stackTrace();if(buffer)stringToUTF8(trace,buffer,bufferSize);return lengthBytesUTF8(trace)}var JS_OrientationSensor=null;var JS_OrientationSensor_callback=0;function _JS_OrientationSensor_IsRunning(){return JS_OrientationSensor&&JS_OrientationSensor.activated||JS_OrientationSensor_callback!=0}function JS_OrientationSensor_eventHandler(){if(JS_OrientationSensor_callback!=0)dynCall_vffff(JS_OrientationSensor_callback,JS_OrientationSensor.quaternion[0],JS_OrientationSensor.quaternion[1],JS_OrientationSensor.quaternion[2],JS_OrientationSensor.quaternion[3])}var JS_OrientationSensor_frequencyRequest=0;function JS_DeviceOrientation_eventHandler(event){if(JS_OrientationSensor_callback){var degToRad=Math.PI/180;var x=event.beta*degToRad;var y=event.gamma*degToRad;var z=event.alpha*degToRad;var cx=Math.cos(x/2);var sx=Math.sin(x/2);var cy=Math.cos(y/2);var sy=Math.sin(y/2);var cz=Math.cos(z/2);var sz=Math.sin(z/2);var qx=sx*cy*cz-cx*sy*sz;var qy=cx*sy*cz+sx*cy*sz;var qz=cx*cy*sz+sx*sy*cz;var qw=cx*cy*cz-sx*sy*sz;dynCall_vffff(JS_OrientationSensor_callback,qx,qy,qz,qw)}}function _JS_OrientationSensor_Start(callback,frequency){if(typeof RelativeOrientationSensor==="undefined"){if(JS_OrientationSensor_callback==0){JS_OrientationSensor_callback=callback;JS_RequestDeviceSensorPermissions(1);window.addEventListener("deviceorientation",JS_DeviceOrientation_eventHandler)}return}JS_OrientationSensor_callback=callback;function InitializeOrientationSensor(frequency){JS_OrientationSensor=new 
RelativeOrientationSensor({frequency:frequency,referenceFrame:"device"});JS_OrientationSensor.addEventListener("reading",JS_OrientationSensor_eventHandler);JS_OrientationSensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_OrientationSensor.start()}if(JS_OrientationSensor){JS_OrientationSensor.stop();JS_OrientationSensor.removeEventListener("reading",JS_OrientationSensor_eventHandler);InitializeOrientationSensor(frequency)}else if(JS_OrientationSensor_frequencyRequest!=0){JS_OrientationSensor_frequencyRequest=frequency}else{JS_OrientationSensor_frequencyRequest=frequency;Promise.all([navigator.permissions.query({name:"accelerometer"}),navigator.permissions.query({name:"gyroscope"})]).then(function(results){if(results.every(function(result){return result.state==="granted"})){InitializeOrientationSensor(JS_OrientationSensor_frequencyRequest)}else{warnOnce("No permissions to use RelativeOrientationSensor.")}JS_OrientationSensor_frequencyRequest=0})}}function _JS_OrientationSensor_Stop(){if(JS_OrientationSensor){JS_OrientationSensor.stop();JS_OrientationSensor.removeEventListener("reading",JS_OrientationSensor_eventHandler);JS_OrientationSensor=null}else if(JS_OrientationSensor_callback!=0){window.removeEventListener("deviceorientation",JS_DeviceOrientation_eventHandler)}JS_OrientationSensor_callback=0}function _JS_RequestDeviceSensorPermissionsOnTouch(){if(JS_DeviceSensorPermissions==0)return;JS_RequestDeviceSensorPermissions(JS_DeviceSensorPermissions)}function _JS_RunQuitCallbacks(){Module.QuitCleanup()}var JS_ScreenOrientation_callback=0;function JS_ScreenOrientation_eventHandler(){if(JS_ScreenOrientation_callback)dynCall_viii(JS_ScreenOrientation_callback,window.innerWidth,window.innerHeight,screen.orientation?screen.orientation.angle:window.orientation)}function _JS_ScreenOrientation_DeInit(){JS_ScreenOrientation_callback=0;window.removeEventListener("resize",JS_ScreenOrientation_eventHandler);if(screen.orientation){screen.orientation.removeEventListener("change",JS_ScreenOrientation_eventHandler)}}function _JS_ScreenOrientation_Init(callback){if(!JS_ScreenOrientation_callback){if(screen.orientation){screen.orientation.addEventListener("change",JS_ScreenOrientation_eventHandler)}window.addEventListener("resize",JS_ScreenOrientation_eventHandler);JS_ScreenOrientation_callback=callback;setTimeout(JS_ScreenOrientation_eventHandler,0)}}var JS_ScreenOrientation_requestedLockType=-1;var JS_ScreenOrientation_appliedLockType=-1;var JS_ScreenOrientation_timeoutID=-1;function _JS_ScreenOrientation_Lock(orientationLockType){if(!screen.orientation){return}function applyLock(){JS_ScreenOrientation_appliedLockType=JS_ScreenOrientation_requestedLockType;var screenOrientations=["any",0,"landscape","portrait","portrait-primary","portrait-secondary","landscape-primary","landscape-secondary"];var type=screenOrientations[JS_ScreenOrientation_appliedLockType];screen.orientation.lock(type).then(function(){if(JS_ScreenOrientation_requestedLockType!=JS_ScreenOrientation_appliedLockType){JS_ScreenOrientation_timeoutID=setTimeout(applyLock,0)}else{JS_ScreenOrientation_timeoutID=-1}}).catch(function(err){warnOnce(err);JS_ScreenOrientation_timeoutID=-1})}JS_ScreenOrientation_requestedLockType=orientationLockType;if(JS_ScreenOrientation_timeoutID==-1&&orientationLockType!=JS_ScreenOrientation_appliedLockType){JS_ScreenOrientation_timeoutID=setTimeout(applyLock,0)}}var 
WEBAudio={audioInstanceIdCounter:0,audioInstances:{},audioContext:null,audioWebEnabled:0,audioCache:[],pendingAudioSources:{}};function jsAudioMixinSetPitch(source){source.estimatePlaybackPosition=function(){var t=(WEBAudio.audioContext.currentTime-source.playbackStartTime)*source.playbackRate.value;if(source.loop&&t>=source.loopStart){t=(t-source.loopStart)%(source.loopEnd-source.loopStart)+source.loopStart}return t};source.setPitch=function(newPitch){var curPosition=source.estimatePlaybackPosition();if(curPosition>=0){source.playbackStartTime=WEBAudio.audioContext.currentTime-curPosition/newPitch}if(source.playbackRate.value!==newPitch)source.playbackRate.value=newPitch}}function jsAudioCreateUncompressedSoundClip(buffer,error){var soundClip={buffer:buffer,error:error};soundClip.release=function(){};soundClip.getLength=function(){if(!this.buffer){console.log("Trying to get length of sound which is not loaded.");return 0}var sampleRateRatio=44100/this.buffer.sampleRate;return this.buffer.length*sampleRateRatio};soundClip.getData=function(ptr,length){if(!this.buffer){console.log("Trying to get data of sound which is not loaded.");return 0}var startOutputBuffer=ptr>>2;var output=HEAPF32.subarray(startOutputBuffer,startOutputBuffer+(length>>2));var numMaxSamples=Math.floor((length>>2)/this.buffer.numberOfChannels);var numReadSamples=Math.min(this.buffer.length,numMaxSamples);for(var i=0;istartDelayThresholdMS){source.playTimeout=setTimeout(function(){source.playTimeout=null;source._startPlayback(offset)},startDelayMS)}else{source._startPlayback(offset)}};source.stop=function(stopTime){if(typeof stopTime==="undefined"){stopTime=WEBAudio.audioContext.currentTime}var stopDelayThresholdMS=4;var stopDelayMS=(stopTime-WEBAudio.audioContext.currentTime)*1e3;if(stopDelayMS>stopDelayThresholdMS){setTimeout(function(){source._pauseMediaElement();source.isStopped=true},stopDelayMS)}else{source._pauseMediaElement();source.isStopped=true}};jsAudioMixinSetPitch(source);return source};return soundClip}function _JS_Sound_Load(ptr,length,decompress,fmodSoundType){if(WEBAudio.audioWebEnabled==0)return 0;var audioData=HEAPU8.buffer.slice(ptr,ptr+length);if(length<131072)decompress=1;var sound;if(decompress){sound=jsAudioCreateUncompressedSoundClipFromCompressedAudio(audioData)}else{sound=jsAudioCreateCompressedSoundClip(audioData,fmodSoundType)}WEBAudio.audioInstances[++WEBAudio.audioInstanceIdCounter]=sound;return WEBAudio.audioInstanceIdCounter}function jsAudioCreateUncompressedSoundClipFromPCM(channels,length,sampleRate,ptr){var buffer=WEBAudio.audioContext.createBuffer(channels,length,sampleRate);for(var i=0;i>2)+length*i;var copyToChannel=buffer["copyToChannel"]||function(source,channelNumber,startInChannel){var clipped=source.subarray(0,Math.min(source.length,this.length-(startInChannel|0)));this.getChannelData(channelNumber|0).set(clipped,startInChannel|0)};copyToChannel.apply(buffer,[HEAPF32.subarray(offs,offs+length),i,0])}return jsAudioCreateUncompressedSoundClip(buffer,false)}function _JS_Sound_Load_PCM(channels,length,sampleRate,ptr){if(WEBAudio.audioWebEnabled==0)return 0;var sound=jsAudioCreateUncompressedSoundClipFromPCM(channels,length,sampleRate,ptr);WEBAudio.audioInstances[++WEBAudio.audioInstanceIdCounter]=sound;return WEBAudio.audioInstanceIdCounter}function _JS_Sound_Play(bufferInstance,channelInstance,offset,delay){if(WEBAudio.audioWebEnabled==0)return;_JS_Sound_Stop(channelInstance,0);var soundClip=WEBAudio.audioInstances[bufferInstance];var 
channel=WEBAudio.audioInstances[channelInstance];if(!soundClip){console.log("Trying to play sound which is not loaded.");return}try{channel.playSoundClip(soundClip,WEBAudio.audioContext.currentTime+delay,offset)}catch(error){console.error("playSoundClip error. Exception: "+e)}}function _JS_Sound_ReleaseInstance(instance){var object=WEBAudio.audioInstances[instance];if(object){object.release()}delete WEBAudio.audioInstances[instance]}function _JS_Sound_ResumeIfNeeded(){if(WEBAudio.audioWebEnabled==0)return;if(WEBAudio.audioContext.state==="suspended")WEBAudio.audioContext.resume().catch(function(error){console.warn("Could not resume audio context. Exception: "+error)})}function _JS_Sound_Set3D(channelInstance,threeD){var channel=WEBAudio.audioInstances[channelInstance];channel.set3D(threeD)}function _JS_Sound_SetListenerOrientation(x,y,z,xUp,yUp,zUp){if(WEBAudio.audioWebEnabled==0)return;x=-x;y=-y;z=-z;var l=WEBAudio.audioContext.listener;if(l.forwardX){if(l.forwardX.value!==x)l.forwardX.value=x;if(l.forwardY.value!==y)l.forwardY.value=y;if(l.forwardZ.value!==z)l.forwardZ.value=z;if(l.upX.value!==xUp)l.upX.value=xUp;if(l.upY.value!==yUp)l.upY.value=yUp;if(l.upZ.value!==zUp)l.upZ.value=zUp}else if(l._forwardX!==x||l._forwardY!==y||l._forwardZ!==z||l._upX!==xUp||l._upY!==yUp||l._upZ!==zUp){l.setOrientation(x,y,z,xUp,yUp,zUp);l._forwardX=x;l._forwardY=y;l._forwardZ=z;l._upX=xUp;l._upY=yUp;l._upZ=zUp}}function _JS_Sound_SetListenerPosition(x,y,z){if(WEBAudio.audioWebEnabled==0)return;var l=WEBAudio.audioContext.listener;if(l.positionX){if(l.positionX.value!==x)l.positionX.value=x;if(l.positionY.value!==y)l.positionY.value=y;if(l.positionZ.value!==z)l.positionZ.value=z}else if(l._positionX!==x||l._positionY!==y||l._positionZ!==z){l.setPosition(x,y,z);l._positionX=x;l._positionY=y;l._positionZ=z}}function _JS_Sound_SetLoop(channelInstance,loop){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setLoop(loop)}function _JS_Sound_SetLoopPoints(channelInstance,loopStart,loopEnd){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setLoopPoints(loopStart,loopEnd)}function _JS_Sound_SetPaused(channelInstance,paused){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];if(paused!=channel.isPaused()){if(paused)channel.pause();else channel.resume()}}function _JS_Sound_SetPitch(channelInstance,v){if(WEBAudio.audioWebEnabled==0)return;try{var channel=WEBAudio.audioInstances[channelInstance];channel.setPitch(v)}catch(e){console.error("JS_Sound_SetPitch(channel="+channelInstance+", pitch="+v+") threw an exception: "+e)}}function _JS_Sound_SetPosition(channelInstance,x,y,z){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setPosition(x,y,z)}function _JS_Sound_SetVolume(channelInstance,v){if(WEBAudio.audioWebEnabled==0)return;try{var channel=WEBAudio.audioInstances[channelInstance];channel.setVolume(v)}catch(e){console.error("JS_Sound_SetVolume(channel="+channelInstance+", volume="+v+") threw an exception: "+e)}}function _JS_Sound_Stop(channelInstance,delay){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.stop(delay)}function _JS_SystemInfo_GetCanvasClientSize(domElementSelector,outWidth,outHeight){var selector=UTF8ToString(domElementSelector);var canvas=selector=="#canvas"?Module["canvas"]:document.querySelector(selector);var w=0,h=0;if(canvas){var 
size=canvas.getBoundingClientRect();w=size.width;h=size.height}HEAPF64[outWidth>>3]=w;HEAPF64[outHeight>>3]=h}function _JS_SystemInfo_GetDocumentURL(buffer,bufferSize){if(buffer)stringToUTF8(document.URL,buffer,bufferSize);return lengthBytesUTF8(document.URL)}function _JS_SystemInfo_GetGPUInfo(buffer,bufferSize){var gpuinfo=Module.SystemInfo.gpu;if(buffer)stringToUTF8(gpuinfo,buffer,bufferSize);return lengthBytesUTF8(gpuinfo)}function _JS_SystemInfo_GetMatchWebGLToCanvasSize(){return Module.matchWebGLToCanvasSize||Module.matchWebGLToCanvasSize===undefined}function _JS_SystemInfo_GetMemory(){return HEAPU8.length/(1024*1024)}function _JS_SystemInfo_GetOS(buffer,bufferSize){var browser=Module.SystemInfo.os+" "+Module.SystemInfo.osVersion;if(buffer)stringToUTF8(browser,buffer,bufferSize);return lengthBytesUTF8(browser)}function _JS_SystemInfo_GetPreferredDevicePixelRatio(){return Module.matchWebGLToCanvasSize==false?1:Module.devicePixelRatio||window.devicePixelRatio||1}function _JS_SystemInfo_GetScreenSize(outWidth,outHeight){HEAPF64[outWidth>>3]=Module.SystemInfo.width;HEAPF64[outHeight>>3]=Module.SystemInfo.height}function _JS_SystemInfo_HasAstcHdr(){var ext=GLctx.getExtension("WEBGL_compressed_texture_astc");if(ext&&ext.getSupportedProfiles){return ext.getSupportedProfiles().includes("hdr")}return false}function _JS_SystemInfo_HasCursorLock(){return Module.SystemInfo.hasCursorLock}function _JS_SystemInfo_HasFullscreen(){return Module.SystemInfo.hasFullscreen}function _JS_SystemInfo_HasWebGL(){return Module.SystemInfo.hasWebGL}function _JS_UnityEngineShouldQuit(){return!!Module.shouldQuit}var ExceptionInfoAttrs={DESTRUCTOR_OFFSET:0,REFCOUNT_OFFSET:4,TYPE_OFFSET:8,CAUGHT_OFFSET:12,RETHROWN_OFFSET:13,SIZE:16};function ___cxa_allocate_exception(size){return _malloc(size+ExceptionInfoAttrs.SIZE)+ExceptionInfoAttrs.SIZE}function ExceptionInfo(excPtr){this.excPtr=excPtr;this.ptr=excPtr-ExceptionInfoAttrs.SIZE;this.set_type=function(type){HEAP32[this.ptr+ExceptionInfoAttrs.TYPE_OFFSET>>2]=type};this.get_type=function(){return HEAP32[this.ptr+ExceptionInfoAttrs.TYPE_OFFSET>>2]};this.set_destructor=function(destructor){HEAP32[this.ptr+ExceptionInfoAttrs.DESTRUCTOR_OFFSET>>2]=destructor};this.get_destructor=function(){return HEAP32[this.ptr+ExceptionInfoAttrs.DESTRUCTOR_OFFSET>>2]};this.set_refcount=function(refcount){HEAP32[this.ptr+ExceptionInfoAttrs.REFCOUNT_OFFSET>>2]=refcount};this.set_caught=function(caught){caught=caught?1:0;HEAP8[this.ptr+ExceptionInfoAttrs.CAUGHT_OFFSET>>0]=caught};this.get_caught=function(){return HEAP8[this.ptr+ExceptionInfoAttrs.CAUGHT_OFFSET>>0]!=0};this.set_rethrown=function(rethrown){rethrown=rethrown?1:0;HEAP8[this.ptr+ExceptionInfoAttrs.RETHROWN_OFFSET>>0]=rethrown};this.get_rethrown=function(){return HEAP8[this.ptr+ExceptionInfoAttrs.RETHROWN_OFFSET>>0]!=0};this.init=function(type,destructor){this.set_type(type);this.set_destructor(destructor);this.set_refcount(0);this.set_caught(false);this.set_rethrown(false)};this.add_ref=function(){var value=HEAP32[this.ptr+ExceptionInfoAttrs.REFCOUNT_OFFSET>>2];HEAP32[this.ptr+ExceptionInfoAttrs.REFCOUNT_OFFSET>>2]=value+1};this.release_ref=function(){var prev=HEAP32[this.ptr+ExceptionInfoAttrs.REFCOUNT_OFFSET>>2];HEAP32[this.ptr+ExceptionInfoAttrs.REFCOUNT_OFFSET>>2]=prev-1;return prev===1}}function CatchInfo(ptr){this.free=function(){_free(this.ptr);this.ptr=0};this.set_base_ptr=function(basePtr){HEAP32[this.ptr>>2]=basePtr};this.get_base_ptr=function(){return 
HEAP32[this.ptr>>2]};this.set_adjusted_ptr=function(adjustedPtr){var ptrSize=4;HEAP32[this.ptr+ptrSize>>2]=adjustedPtr};this.get_adjusted_ptr=function(){var ptrSize=4;return HEAP32[this.ptr+ptrSize>>2]};this.get_exception_ptr=function(){var isPointer=___cxa_is_pointer_type(this.get_exception_info().get_type());if(isPointer){return HEAP32[this.get_base_ptr()>>2]}var adjusted=this.get_adjusted_ptr();if(adjusted!==0)return adjusted;return this.get_base_ptr()};this.get_exception_info=function(){return new ExceptionInfo(this.get_base_ptr())};if(ptr===undefined){this.ptr=_malloc(8);this.set_adjusted_ptr(0)}else{this.ptr=ptr}}var exceptionCaught=[];function exception_addRef(info){info.add_ref()}var uncaughtExceptionCount=0;function ___cxa_begin_catch(ptr){var catchInfo=new CatchInfo(ptr);var info=catchInfo.get_exception_info();if(!info.get_caught()){info.set_caught(true);uncaughtExceptionCount--}info.set_rethrown(false);exceptionCaught.push(catchInfo);exception_addRef(info);return catchInfo.get_exception_ptr()}var exceptionLast=0;function ___cxa_free_exception(ptr){return _free(new ExceptionInfo(ptr).ptr)}function exception_decRef(info){if(info.release_ref()&&!info.get_rethrown()){var destructor=info.get_destructor();if(destructor){(function(a1){return dynCall_ii.apply(null,[destructor,a1])})(info.excPtr)}___cxa_free_exception(info.excPtr)}}function ___cxa_end_catch(){_setThrew(0);var catchInfo=exceptionCaught.pop();exception_decRef(catchInfo.get_exception_info());catchInfo.free();exceptionLast=0}function ___resumeException(catchInfoPtr){var catchInfo=new CatchInfo(catchInfoPtr);var ptr=catchInfo.get_base_ptr();if(!exceptionLast){exceptionLast=ptr}catchInfo.free();throw ptr}function ___cxa_find_matching_catch_2(){var thrown=exceptionLast;if(!thrown){setTempRet0(0);return 0|0}var info=new ExceptionInfo(thrown);var thrownType=info.get_type();var catchInfo=new CatchInfo;catchInfo.set_base_ptr(thrown);if(!thrownType){setTempRet0(0);return catchInfo.ptr|0}var typeArray=Array.prototype.slice.call(arguments);var stackTop=stackSave();var exceptionThrowBuf=stackAlloc(4);HEAP32[exceptionThrowBuf>>2]=thrown;for(var i=0;i>2];if(thrown!==adjusted){catchInfo.set_adjusted_ptr(adjusted)}setTempRet0(caughtType);return catchInfo.ptr|0}}stackRestore(stackTop);setTempRet0(thrownType);return catchInfo.ptr|0}function ___cxa_find_matching_catch_3(){var thrown=exceptionLast;if(!thrown){setTempRet0(0);return 0|0}var info=new ExceptionInfo(thrown);var thrownType=info.get_type();var catchInfo=new CatchInfo;catchInfo.set_base_ptr(thrown);if(!thrownType){setTempRet0(0);return catchInfo.ptr|0}var typeArray=Array.prototype.slice.call(arguments);var stackTop=stackSave();var exceptionThrowBuf=stackAlloc(4);HEAP32[exceptionThrowBuf>>2]=thrown;for(var i=0;i>2];if(thrown!==adjusted){catchInfo.set_adjusted_ptr(adjusted)}setTempRet0(caughtType);return catchInfo.ptr|0}}stackRestore(stackTop);setTempRet0(thrownType);return catchInfo.ptr|0}function ___cxa_find_matching_catch_4(){var thrown=exceptionLast;if(!thrown){setTempRet0(0);return 0|0}var info=new ExceptionInfo(thrown);var thrownType=info.get_type();var catchInfo=new CatchInfo;catchInfo.set_base_ptr(thrown);if(!thrownType){setTempRet0(0);return catchInfo.ptr|0}var typeArray=Array.prototype.slice.call(arguments);var stackTop=stackSave();var exceptionThrowBuf=stackAlloc(4);HEAP32[exceptionThrowBuf>>2]=thrown;for(var i=0;i>2];if(thrown!==adjusted){catchInfo.set_adjusted_ptr(adjusted)}setTempRet0(caughtType);return catchInfo.ptr|0}}stackRestore(stackTop);setTempRet0(thrownType);return 
catchInfo.ptr|0}function ___cxa_rethrow(){var catchInfo=exceptionCaught.pop();if(!catchInfo){abort("no exception to throw")}var info=catchInfo.get_exception_info();var ptr=catchInfo.get_base_ptr();if(!info.get_rethrown()){exceptionCaught.push(catchInfo);info.set_rethrown(true);info.set_caught(false);uncaughtExceptionCount++}else{catchInfo.free()}exceptionLast=ptr;throw ptr}function ___cxa_throw(ptr,type,destructor){var info=new ExceptionInfo(ptr);info.init(type,destructor);exceptionLast=ptr;uncaughtExceptionCount++;throw ptr}function _gmtime_r(time,tmPtr){var date=new Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getUTCSeconds();HEAP32[tmPtr+4>>2]=date.getUTCMinutes();HEAP32[tmPtr+8>>2]=date.getUTCHours();HEAP32[tmPtr+12>>2]=date.getUTCDate();HEAP32[tmPtr+16>>2]=date.getUTCMonth();HEAP32[tmPtr+20>>2]=date.getUTCFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getUTCDay();HEAP32[tmPtr+36>>2]=0;HEAP32[tmPtr+32>>2]=0;var start=Date.UTC(date.getUTCFullYear(),0,1,0,0,0,0);var yday=(date.getTime()-start)/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;if(!_gmtime_r.GMTString)_gmtime_r.GMTString=allocateUTF8("GMT");HEAP32[tmPtr+40>>2]=_gmtime_r.GMTString;return tmPtr}function ___gmtime_r(a0,a1){return _gmtime_r(a0,a1)}function _tzset(){if(_tzset.called)return;_tzset.called=true;var currentYear=(new Date).getFullYear();var winter=new Date(currentYear,0,1);var summer=new Date(currentYear,6,1);var winterOffset=winter.getTimezoneOffset();var summerOffset=summer.getTimezoneOffset();var stdTimezoneOffset=Math.max(winterOffset,summerOffset);HEAP32[__get_timezone()>>2]=stdTimezoneOffset*60;HEAP32[__get_daylight()>>2]=Number(winterOffset!=summerOffset);function extractZone(date){var match=date.toTimeString().match(/\(([A-Za-z ]+)\)$/);return match?match[1]:"GMT"}var winterName=extractZone(winter);var summerName=extractZone(summer);var winterNamePtr=allocateUTF8(winterName);var summerNamePtr=allocateUTF8(summerName);if(summerOffset>2]=winterNamePtr;HEAP32[__get_tzname()+4>>2]=summerNamePtr}else{HEAP32[__get_tzname()>>2]=summerNamePtr;HEAP32[__get_tzname()+4>>2]=winterNamePtr}}function _localtime_r(time,tmPtr){_tzset();var date=new Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();HEAP32[tmPtr+20>>2]=date.getFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getDay();var start=new Date(date.getFullYear(),0,1);var yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr+36>>2]=-(date.getTimezoneOffset()*60);var summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dst=(summerOffset!=winterOffset&&date.getTimezoneOffset()==Math.min(winterOffset,summerOffset))|0;HEAP32[tmPtr+32>>2]=dst;var zonePtr=HEAP32[__get_tzname()+(dst?4:0)>>2];HEAP32[tmPtr+40>>2]=zonePtr;return tmPtr}function ___localtime_r(a0,a1){return _localtime_r(a0,a1)}var PATH={splitPath:function(filename){var splitPathRe=/^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/;return splitPathRe.exec(filename).slice(1)},normalizeArray:function(parts,allowAboveRoot){var up=0;for(var i=parts.length-1;i>=0;i--){var last=parts[i];if(last==="."){parts.splice(i,1)}else if(last===".."){parts.splice(i,1);up++}else if(up){parts.splice(i,1);up--}}if(allowAboveRoot){for(;up;up--){parts.unshift("..")}}return parts},normalize:function(path){var 
isAbsolute=path.charAt(0)==="/",trailingSlash=path.substr(-1)==="/";path=PATH.normalizeArray(path.split("/").filter(function(p){return!!p}),!isAbsolute).join("/");if(!path&&!isAbsolute){path="."}if(path&&trailingSlash){path+="/"}return(isAbsolute?"/":"")+path},dirname:function(path){var result=PATH.splitPath(path),root=result[0],dir=result[1];if(!root&&!dir){return"."}if(dir){dir=dir.substr(0,dir.length-1)}return root+dir},basename:function(path){if(path==="/")return"/";path=PATH.normalize(path);path=path.replace(/\/$/,"");var lastSlash=path.lastIndexOf("/");if(lastSlash===-1)return path;return path.substr(lastSlash+1)},extname:function(path){return PATH.splitPath(path)[3]},join:function(){var paths=Array.prototype.slice.call(arguments,0);return PATH.normalize(paths.join("/"))},join2:function(l,r){return PATH.normalize(l+"/"+r)}};function getRandomDevice(){if(typeof crypto==="object"&&typeof crypto["getRandomValues"]==="function"){var randomBuffer=new Uint8Array(1);return function(){crypto.getRandomValues(randomBuffer);return randomBuffer[0]}}else if(ENVIRONMENT_IS_NODE){try{var crypto_module=require("crypto");return function(){return crypto_module["randomBytes"](1)[0]}}catch(e){}}return function(){abort("randomDevice")}}var PATH_FS={resolve:function(){var resolvedPath="",resolvedAbsolute=false;for(var i=arguments.length-1;i>=-1&&!resolvedAbsolute;i--){var path=i>=0?arguments[i]:FS.cwd();if(typeof path!=="string"){throw new TypeError("Arguments to path.resolve must be strings")}else if(!path){return""}resolvedPath=path+"/"+resolvedPath;resolvedAbsolute=path.charAt(0)==="/"}resolvedPath=PATH.normalizeArray(resolvedPath.split("/").filter(function(p){return!!p}),!resolvedAbsolute).join("/");return(resolvedAbsolute?"/":"")+resolvedPath||"."},relative:function(from,to){from=PATH_FS.resolve(from).substr(1);to=PATH_FS.resolve(to).substr(1);function trim(arr){var start=0;for(;start=0;end--){if(arr[end]!=="")break}if(start>end)return[];return arr.slice(start,end-start+1)}var fromParts=trim(from.split("/"));var toParts=trim(to.split("/"));var length=Math.min(fromParts.length,toParts.length);var samePartsLength=length;for(var i=0;i0){result=buf.slice(0,bytesRead).toString("utf-8")}else{result=null}}else if(typeof window!="undefined"&&typeof window.prompt=="function"){result=window.prompt("Input: ");if(result!==null){result+="\n"}}else if(typeof readline=="function"){result=readline();if(result!==null){result+="\n"}}if(!result){return null}tty.input=intArrayFromString(result,true)}return tty.input.shift()},put_char:function(tty,val){if(val===null||val===10){out(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){out(UTF8ArrayToString(tty.output,0));tty.output=[]}}},default_tty1_ops:{put_char:function(tty,val){if(val===null||val===10){err(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){err(UTF8ArrayToString(tty.output,0));tty.output=[]}}}};function mmapAlloc(size){var alignedSize=alignMemory(size,65536);var ptr=_malloc(alignedSize);while(size=newCapacity)return;var CAPACITY_DOUBLING_MAX=1024*1024;newCapacity=Math.max(newCapacity,prevCapacity*(prevCapacity>>0);if(prevCapacity!=0)newCapacity=Math.max(newCapacity,256);var oldContents=node.contents;node.contents=new 
Uint8Array(newCapacity);if(node.usedBytes>0)node.contents.set(oldContents.subarray(0,node.usedBytes),0)},resizeFileStorage:function(node,newSize){if(node.usedBytes==newSize)return;if(newSize==0){node.contents=null;node.usedBytes=0}else{var oldContents=node.contents;node.contents=new Uint8Array(newSize);if(oldContents){node.contents.set(oldContents.subarray(0,Math.min(newSize,node.usedBytes)))}node.usedBytes=newSize}},node_ops:{getattr:function(node){var attr={};attr.dev=FS.isChrdev(node.mode)?node.id:1;attr.ino=node.id;attr.mode=node.mode;attr.nlink=1;attr.uid=0;attr.gid=0;attr.rdev=node.rdev;if(FS.isDir(node.mode)){attr.size=4096}else if(FS.isFile(node.mode)){attr.size=node.usedBytes}else if(FS.isLink(node.mode)){attr.size=node.link.length}else{attr.size=0}attr.atime=new Date(node.timestamp);attr.mtime=new Date(node.timestamp);attr.ctime=new Date(node.timestamp);attr.blksize=4096;attr.blocks=Math.ceil(attr.size/attr.blksize);return attr},setattr:function(node,attr){if(attr.mode!==undefined){node.mode=attr.mode}if(attr.timestamp!==undefined){node.timestamp=attr.timestamp}if(attr.size!==undefined){MEMFS.resizeFileStorage(node,attr.size)}},lookup:function(parent,name){throw FS.genericErrors[44]},mknod:function(parent,name,mode,dev){return MEMFS.createNode(parent,name,mode,dev)},rename:function(old_node,new_dir,new_name){if(FS.isDir(old_node.mode)){var new_node;try{new_node=FS.lookupNode(new_dir,new_name)}catch(e){}if(new_node){for(var i in new_node.contents){throw new FS.ErrnoError(55)}}}delete old_node.parent.contents[old_node.name];old_node.parent.timestamp=Date.now();old_node.name=new_name;new_dir.contents[new_name]=old_node;new_dir.timestamp=old_node.parent.timestamp;old_node.parent=new_dir},unlink:function(parent,name){delete parent.contents[name];parent.timestamp=Date.now()},rmdir:function(parent,name){var node=FS.lookupNode(parent,name);for(var i in node.contents){throw new FS.ErrnoError(55)}delete parent.contents[name];parent.timestamp=Date.now()},readdir:function(node){var entries=[".",".."];for(var key in node.contents){if(!node.contents.hasOwnProperty(key)){continue}entries.push(key)}return entries},symlink:function(parent,newname,oldpath){var node=MEMFS.createNode(parent,newname,511|40960,0);node.link=oldpath;return node},readlink:function(node){if(!FS.isLink(node.mode)){throw new FS.ErrnoError(28)}return node.link}},stream_ops:{read:function(stream,buffer,offset,length,position){var contents=stream.node.contents;if(position>=stream.node.usedBytes)return 0;var size=Math.min(stream.node.usedBytes-position,length);if(size>8&&contents.subarray){buffer.set(contents.subarray(position,position+size),offset)}else{for(var i=0;i0||position+length8){throw new FS.ErrnoError(32)}var parts=PATH.normalizeArray(path.split("/").filter(function(p){return!!p}),false);var current=FS.root;var current_path="/";for(var i=0;i40){throw new FS.ErrnoError(32)}}}}return{path:current_path,node:current}},getPath:function(node){var path;while(true){if(FS.isRoot(node)){var mount=node.mount.mountpoint;if(!path)return mount;return mount[mount.length-1]!=="/"?mount+"/"+path:mount+path}path=path?node.name+"/"+path:node.name;node=node.parent}},hashName:function(parentid,name){var hash=0;for(var i=0;i>>0)%FS.nameTable.length},hashAddNode:function(node){var hash=FS.hashName(node.parent.id,node.name);node.name_next=FS.nameTable[hash];FS.nameTable[hash]=node},hashRemoveNode:function(node){var hash=FS.hashName(node.parent.id,node.name);if(FS.nameTable[hash]===node){FS.nameTable[hash]=node.name_next}else{var 
current=FS.nameTable[hash];while(current){if(current.name_next===node){current.name_next=node.name_next;break}current=current.name_next}}},lookupNode:function(parent,name){var errCode=FS.mayLookup(parent);if(errCode){throw new FS.ErrnoError(errCode,parent)}var hash=FS.hashName(parent.id,name);for(var node=FS.nameTable[hash];node;node=node.name_next){var nodeName=node.name;if(node.parent.id===parent.id&&nodeName===name){return node}}return FS.lookup(parent,name)},createNode:function(parent,name,mode,rdev){var node=new FS.FSNode(parent,name,mode,rdev);FS.hashAddNode(node);return node},destroyNode:function(node){FS.hashRemoveNode(node)},isRoot:function(node){return node===node.parent},isMountpoint:function(node){return!!node.mounted},isFile:function(mode){return(mode&61440)===32768},isDir:function(mode){return(mode&61440)===16384},isLink:function(mode){return(mode&61440)===40960},isChrdev:function(mode){return(mode&61440)===8192},isBlkdev:function(mode){return(mode&61440)===24576},isFIFO:function(mode){return(mode&61440)===4096},isSocket:function(mode){return(mode&49152)===49152},flagModes:{"r":0,"r+":2,"w":577,"w+":578,"a":1089,"a+":1090},modeStringToFlags:function(str){var flags=FS.flagModes[str];if(typeof flags==="undefined"){throw new Error("Unknown file open mode: "+str)}return flags},flagsToPermissionString:function(flag){var perms=["r","w","rw"][flag&3];if(flag&512){perms+="w"}return perms},nodePermissions:function(node,perms){if(FS.ignorePermissions){return 0}if(perms.includes("r")&&!(node.mode&292)){return 2}else if(perms.includes("w")&&!(node.mode&146)){return 2}else if(perms.includes("x")&&!(node.mode&73)){return 2}return 0},mayLookup:function(dir){var errCode=FS.nodePermissions(dir,"x");if(errCode)return errCode;if(!dir.node_ops.lookup)return 2;return 0},mayCreate:function(dir,name){try{var node=FS.lookupNode(dir,name);return 20}catch(e){}return FS.nodePermissions(dir,"wx")},mayDelete:function(dir,name,isdir){var node;try{node=FS.lookupNode(dir,name)}catch(e){return e.errno}var errCode=FS.nodePermissions(dir,"wx");if(errCode){return errCode}if(isdir){if(!FS.isDir(node.mode)){return 54}if(FS.isRoot(node)||FS.getPath(node)===FS.cwd()){return 10}}else{if(FS.isDir(node.mode)){return 31}}return 0},mayOpen:function(node,flags){if(!node){return 44}if(FS.isLink(node.mode)){return 32}else if(FS.isDir(node.mode)){if(FS.flagsToPermissionString(flags)!=="r"||flags&512){return 31}}return FS.nodePermissions(node,FS.flagsToPermissionString(flags))},MAX_OPEN_FDS:4096,nextfd:function(fd_start,fd_end){fd_start=fd_start||0;fd_end=fd_end||FS.MAX_OPEN_FDS;for(var fd=fd_start;fd<=fd_end;fd++){if(!FS.streams[fd]){return fd}}throw new FS.ErrnoError(33)},getStream:function(fd){return FS.streams[fd]},createStream:function(stream,fd_start,fd_end){if(!FS.FSStream){FS.FSStream=function(){};FS.FSStream.prototype={object:{get:function(){return this.node},set:function(val){this.node=val}},isRead:{get:function(){return(this.flags&2097155)!==1}},isWrite:{get:function(){return(this.flags&2097155)!==0}},isAppend:{get:function(){return this.flags&1024}}}}var newStream=new FS.FSStream;for(var p in stream){newStream[p]=stream[p]}stream=newStream;var fd=FS.nextfd(fd_start,fd_end);stream.fd=fd;FS.streams[fd]=stream;return stream},closeStream:function(fd){FS.streams[fd]=null},chrdev_stream_ops:{open:function(stream){var device=FS.getDevice(stream.node.rdev);stream.stream_ops=device.stream_ops;if(stream.stream_ops.open){stream.stream_ops.open(stream)}},llseek:function(){throw new 
FS.ErrnoError(70)}},major:function(dev){return dev>>8},minor:function(dev){return dev&255},makedev:function(ma,mi){return ma<<8|mi},registerDevice:function(dev,ops){FS.devices[dev]={stream_ops:ops}},getDevice:function(dev){return FS.devices[dev]},getMounts:function(mount){var mounts=[];var check=[mount];while(check.length){var m=check.pop();mounts.push(m);check.push.apply(check,m.mounts)}return mounts},syncfs:function(populate,callback){if(typeof populate==="function"){callback=populate;populate=false}FS.syncFSRequests++;if(FS.syncFSRequests>1){err("warning: "+FS.syncFSRequests+" FS.syncfs operations in flight at once, probably just doing extra work")}var mounts=FS.getMounts(FS.root.mount);var completed=0;function doCallback(errCode){FS.syncFSRequests--;return callback(errCode)}function done(errCode){if(errCode){if(!done.errored){done.errored=true;return doCallback(errCode)}return}if(++completed>=mounts.length){doCallback(null)}}mounts.forEach(function(mount){if(!mount.type.syncfs){return done(null)}mount.type.syncfs(mount,populate,done)})},mount:function(type,opts,mountpoint){var root=mountpoint==="/";var pseudo=!mountpoint;var node;if(root&&FS.root){throw new FS.ErrnoError(10)}else if(!root&&!pseudo){var lookup=FS.lookupPath(mountpoint,{follow_mount:false});mountpoint=lookup.path;node=lookup.node;if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}if(!FS.isDir(node.mode)){throw new FS.ErrnoError(54)}}var mount={type:type,opts:opts,mountpoint:mountpoint,mounts:[]};var mountRoot=type.mount(mount);mountRoot.mount=mount;mount.root=mountRoot;if(root){FS.root=mountRoot}else if(node){node.mounted=mount;if(node.mount){node.mount.mounts.push(mount)}}return mountRoot},unmount:function(mountpoint){var lookup=FS.lookupPath(mountpoint,{follow_mount:false});if(!FS.isMountpoint(lookup.node)){throw new FS.ErrnoError(28)}var node=lookup.node;var mount=node.mounted;var mounts=FS.getMounts(mount);Object.keys(FS.nameTable).forEach(function(hash){var current=FS.nameTable[hash];while(current){var next=current.name_next;if(mounts.includes(current.mount)){FS.destroyNode(current)}current=next}});node.mounted=null;var idx=node.mount.mounts.indexOf(mount);node.mount.mounts.splice(idx,1)},lookup:function(parent,name){return parent.node_ops.lookup(parent,name)},mknod:function(path,mode,dev){var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;var name=PATH.basename(path);if(!name||name==="."||name===".."){throw new FS.ErrnoError(28)}var errCode=FS.mayCreate(parent,name);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.mknod){throw new FS.ErrnoError(63)}return parent.node_ops.mknod(parent,name,mode,dev)},create:function(path,mode){mode=mode!==undefined?mode:438;mode&=4095;mode|=32768;return FS.mknod(path,mode,0)},mkdir:function(path,mode){mode=mode!==undefined?mode:511;mode&=511|512;mode|=16384;return FS.mknod(path,mode,0)},mkdirTree:function(path,mode){var dirs=path.split("/");var d="";for(var i=0;ithis.length-1||idx<0){return undefined}var chunkOffset=idx%this.chunkSize;var chunkNum=idx/this.chunkSize|0;return this.getter(chunkNum)[chunkOffset]};LazyUint8Array.prototype.setDataGetter=function LazyUint8Array_setDataGetter(getter){this.getter=getter};LazyUint8Array.prototype.cacheLength=function LazyUint8Array_cacheLength(){var xhr=new XMLHttpRequest;xhr.open("HEAD",url,false);xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". 
Status: "+xhr.status);var datalength=Number(xhr.getResponseHeader("Content-length"));var header;var hasByteServing=(header=xhr.getResponseHeader("Accept-Ranges"))&&header==="bytes";var usesGzip=(header=xhr.getResponseHeader("Content-Encoding"))&&header==="gzip";var chunkSize=1024*1024;if(!hasByteServing)chunkSize=datalength;var doXHR=function(from,to){if(from>to)throw new Error("invalid range ("+from+", "+to+") or no bytes requested!");if(to>datalength-1)throw new Error("only "+datalength+" bytes available! programmer error!");var xhr=new XMLHttpRequest;xhr.open("GET",url,false);if(datalength!==chunkSize)xhr.setRequestHeader("Range","bytes="+from+"-"+to);if(typeof Uint8Array!="undefined")xhr.responseType="arraybuffer";if(xhr.overrideMimeType){xhr.overrideMimeType("text/plain; charset=x-user-defined")}xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". Status: "+xhr.status);if(xhr.response!==undefined){return new Uint8Array(xhr.response||[])}else{return intArrayFromString(xhr.responseText||"",true)}};var lazyArray=this;lazyArray.setDataGetter(function(chunkNum){var start=chunkNum*chunkSize;var end=(chunkNum+1)*chunkSize-1;end=Math.min(end,datalength-1);if(typeof lazyArray.chunks[chunkNum]==="undefined"){lazyArray.chunks[chunkNum]=doXHR(start,end)}if(typeof lazyArray.chunks[chunkNum]==="undefined")throw new Error("doXHR failed!");return lazyArray.chunks[chunkNum]});if(usesGzip||!datalength){chunkSize=datalength=1;datalength=this.getter(0).length;chunkSize=datalength;out("LazyFiles on gzip forces download of the whole file when length is accessed")}this._length=datalength;this._chunkSize=chunkSize;this.lengthKnown=true};if(typeof XMLHttpRequest!=="undefined"){if(!ENVIRONMENT_IS_WORKER)throw"Cannot do synchronous binary XHRs outside webworkers in modern browsers. 
Use --embed-file or --preload-file in emcc";var lazyArray=new LazyUint8Array;Object.defineProperties(lazyArray,{length:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._length}},chunkSize:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._chunkSize}}});var properties={isDevice:false,contents:lazyArray}}else{var properties={isDevice:false,url:url}}var node=FS.createFile(parent,name,properties,canRead,canWrite);if(properties.contents){node.contents=properties.contents}else if(properties.url){node.contents=null;node.url=properties.url}Object.defineProperties(node,{usedBytes:{get:function(){return this.contents.length}}});var stream_ops={};var keys=Object.keys(node.stream_ops);keys.forEach(function(key){var fn=node.stream_ops[key];stream_ops[key]=function forceLoadLazyFile(){FS.forceLoadFile(node);return fn.apply(null,arguments)}});stream_ops.read=function stream_ops_read(stream,buffer,offset,length,position){FS.forceLoadFile(node);var contents=stream.node.contents;if(position>=contents.length)return 0;var size=Math.min(contents.length-position,length);if(contents.slice){for(var i=0;i>2]=stat.dev;HEAP32[buf+4>>2]=0;HEAP32[buf+8>>2]=stat.ino;HEAP32[buf+12>>2]=stat.mode;HEAP32[buf+16>>2]=stat.nlink;HEAP32[buf+20>>2]=stat.uid;HEAP32[buf+24>>2]=stat.gid;HEAP32[buf+28>>2]=stat.rdev;HEAP32[buf+32>>2]=0;tempI64=[stat.size>>>0,(tempDouble=stat.size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+40>>2]=tempI64[0],HEAP32[buf+44>>2]=tempI64[1];HEAP32[buf+48>>2]=4096;HEAP32[buf+52>>2]=stat.blocks;HEAP32[buf+56>>2]=stat.atime.getTime()/1e3|0;HEAP32[buf+60>>2]=0;HEAP32[buf+64>>2]=stat.mtime.getTime()/1e3|0;HEAP32[buf+68>>2]=0;HEAP32[buf+72>>2]=stat.ctime.getTime()/1e3|0;HEAP32[buf+76>>2]=0;tempI64=[stat.ino>>>0,(tempDouble=stat.ino,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+80>>2]=tempI64[0],HEAP32[buf+84>>2]=tempI64[1];return 0},doMsync:function(addr,stream,len,flags,offset){var buffer=HEAPU8.slice(addr,addr+len);FS.msync(stream,buffer,offset,len,flags)},doMkdir:function(path,mode){path=PATH.normalize(path);if(path[path.length-1]==="/")path=path.substr(0,path.length-1);FS.mkdir(path,mode,0);return 0},doMknod:function(path,mode,dev){switch(mode&61440){case 32768:case 8192:case 24576:case 4096:case 49152:break;default:return-28}FS.mknod(path,mode,dev);return 0},doReadlink:function(path,buf,bufsize){if(bufsize<=0)return-28;var ret=FS.readlink(path);var len=Math.min(bufsize,lengthBytesUTF8(ret));var endChar=HEAP8[buf+len];stringToUTF8(ret,buf,bufsize+1);HEAP8[buf+len]=endChar;return len},doAccess:function(path,amode){if(amode&~7){return-28}var node;var lookup=FS.lookupPath(path,{follow:true});node=lookup.node;if(!node){return-44}var perms="";if(amode&4)perms+="r";if(amode&2)perms+="w";if(amode&1)perms+="x";if(perms&&FS.nodePermissions(node,perms)){return-2}return 0},doDup:function(path,flags,suggestFD){var suggest=FS.getStream(suggestFD);if(suggest)FS.close(suggest);return FS.open(path,flags,0,suggestFD,suggestFD).fd},doReadv:function(stream,iov,iovcnt,offset){var ret=0;for(var i=0;i>2];var len=HEAP32[iov+(i*8+4)>>2];var curr=FS.read(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr;if(curr>2];var len=HEAP32[iov+(i*8+4)>>2];var 
curr=FS.write(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr}return ret},varargs:undefined,get:function(){SYSCALLS.varargs+=4;var ret=HEAP32[SYSCALLS.varargs-4>>2];return ret},getStr:function(ptr){var ret=UTF8ToString(ptr);return ret},getStreamFromFD:function(fd){var stream=FS.getStream(fd);if(!stream)throw new FS.ErrnoError(8);return stream},get64:function(low,high){return low}};function ___sys__newselect(nfds,readfds,writefds,exceptfds,timeout){try{var total=0;var srcReadLow=readfds?HEAP32[readfds>>2]:0,srcReadHigh=readfds?HEAP32[readfds+4>>2]:0;var srcWriteLow=writefds?HEAP32[writefds>>2]:0,srcWriteHigh=writefds?HEAP32[writefds+4>>2]:0;var srcExceptLow=exceptfds?HEAP32[exceptfds>>2]:0,srcExceptHigh=exceptfds?HEAP32[exceptfds+4>>2]:0;var dstReadLow=0,dstReadHigh=0;var dstWriteLow=0,dstWriteHigh=0;var dstExceptLow=0,dstExceptHigh=0;var allLow=(readfds?HEAP32[readfds>>2]:0)|(writefds?HEAP32[writefds>>2]:0)|(exceptfds?HEAP32[exceptfds>>2]:0);var allHigh=(readfds?HEAP32[readfds+4>>2]:0)|(writefds?HEAP32[writefds+4>>2]:0)|(exceptfds?HEAP32[exceptfds+4>>2]:0);var check=function(fd,low,high,val){return fd<32?low&val:high&val};for(var fd=0;fd>2]=dstReadLow;HEAP32[readfds+4>>2]=dstReadHigh}if(writefds){HEAP32[writefds>>2]=dstWriteLow;HEAP32[writefds+4>>2]=dstWriteHigh}if(exceptfds){HEAP32[exceptfds>>2]=dstExceptLow;HEAP32[exceptfds+4>>2]=dstExceptHigh}return total}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_access(path,amode){try{path=SYSCALLS.getStr(path);return SYSCALLS.doAccess(path,amode)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_chmod(path,mode){try{path=SYSCALLS.getStr(path);FS.chmod(path,mode);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}var ERRNO_CODES={EPERM:63,ENOENT:44,ESRCH:71,EINTR:27,EIO:29,ENXIO:60,E2BIG:1,ENOEXEC:45,EBADF:8,ECHILD:12,EAGAIN:6,EWOULDBLOCK:6,ENOMEM:48,EACCES:2,EFAULT:21,ENOTBLK:105,EBUSY:10,EEXIST:20,EXDEV:75,ENODEV:43,ENOTDIR:54,EISDIR:31,EINVAL:28,ENFILE:41,EMFILE:33,ENOTTY:59,ETXTBSY:74,EFBIG:22,ENOSPC:51,ESPIPE:70,EROFS:69,EMLINK:34,EPIPE:64,EDOM:18,ERANGE:68,ENOMSG:49,EIDRM:24,ECHRNG:106,EL2NSYNC:156,EL3HLT:107,EL3RST:108,ELNRNG:109,EUNATCH:110,ENOCSI:111,EL2HLT:112,EDEADLK:16,ENOLCK:46,EBADE:113,EBADR:114,EXFULL:115,ENOANO:104,EBADRQC:103,EBADSLT:102,EDEADLOCK:16,EBFONT:101,ENOSTR:100,ENODATA:116,ETIME:117,ENOSR:118,ENONET:119,ENOPKG:120,EREMOTE:121,ENOLINK:47,EADV:122,ESRMNT:123,ECOMM:124,EPROTO:65,EMULTIHOP:36,EDOTDOT:125,EBADMSG:9,ENOTUNIQ:126,EBADFD:127,EREMCHG:128,ELIBACC:129,ELIBBAD:130,ELIBSCN:131,ELIBMAX:132,ELIBEXEC:133,ENOSYS:52,ENOTEMPTY:55,ENAMETOOLONG:37,ELOOP:32,EOPNOTSUPP:138,EPFNOSUPPORT:139,ECONNRESET:15,ENOBUFS:42,EAFNOSUPPORT:5,EPROTOTYPE:67,ENOTSOCK:57,ENOPROTOOPT:50,ESHUTDOWN:140,ECONNREFUSED:14,EADDRINUSE:3,ECONNABORTED:13,ENETUNREACH:40,ENETDOWN:38,ETIMEDOUT:73,EHOSTDOWN:142,EHOSTUNREACH:23,EINPROGRESS:26,EALREADY:7,EDESTADDRREQ:17,EMSGSIZE:35,EPROTONOSUPPORT:66,ESOCKTNOSUPPORT:137,EADDRNOTAVAIL:4,ENETRESET:39,EISCONN:30,ENOTCONN:53,ETOOMANYREFS:141,EUSERS:136,EDQUOT:19,ESTALE:72,ENOTSUP:138,ENOMEDIUM:148,EILSEQ:25,EOVERFLOW:61,ECANCELED:11,ENOTRECOVERABLE:56,EOWNERDEAD:62,ESTRPIPE:135};var SOCKFS={mount:function(mount){Module["websocket"]=Module["websocket"]&&"object"===typeof Module["websocket"]?Module["websocket"]:{};Module["websocket"]._callbacks={};Module["websocket"]["on"]=function(event,callback){if("function"===typeof 
callback){this._callbacks[event]=callback}return this};Module["websocket"].emit=function(event,param){if("function"===typeof this._callbacks[event]){this._callbacks[event].call(this,param)}};return FS.createNode(null,"/",16384|511,0)},createSocket:function(family,type,protocol){type&=~526336;var streaming=type==1;if(protocol){assert(streaming==(protocol==6))}var sock={family:family,type:type,protocol:protocol,server:null,error:null,peers:{},pending:[],recv_queue:[],sock_ops:SOCKFS.websocket_sock_ops};var name=SOCKFS.nextname();var node=FS.createNode(SOCKFS.root,name,49152,0);node.sock=sock;var stream=FS.createStream({path:name,node:node,flags:2,seekable:false,stream_ops:SOCKFS.stream_ops});sock.stream=stream;return sock},getSocket:function(fd){var stream=FS.getStream(fd);if(!stream||!FS.isSocket(stream.node.mode)){return null}return stream.node.sock},stream_ops:{poll:function(stream){var sock=stream.node.sock;return sock.sock_ops.poll(sock)},ioctl:function(stream,request,varargs){var sock=stream.node.sock;return sock.sock_ops.ioctl(sock,request,varargs)},read:function(stream,buffer,offset,length,position){var sock=stream.node.sock;var msg=sock.sock_ops.recvmsg(sock,length);if(!msg){return 0}buffer.set(msg.buffer,offset);return msg.buffer.length},write:function(stream,buffer,offset,length,position){var sock=stream.node.sock;return sock.sock_ops.sendmsg(sock,buffer,offset,length)},close:function(stream){var sock=stream.node.sock;sock.sock_ops.close(sock)}},nextname:function(){if(!SOCKFS.nextname.current){SOCKFS.nextname.current=0}return"socket["+SOCKFS.nextname.current+++"]"},websocket_sock_ops:{createPeer:function(sock,addr,port){var ws;if(typeof addr==="object"){ws=addr;addr=null;port=null}if(ws){if(ws._socket){addr=ws._socket.remoteAddress;port=ws._socket.remotePort}else{var result=/ws[s]?:\/\/([^:]+):(\d+)/.exec(ws.url);if(!result){throw new Error("WebSocket URL must be in the format ws(s)://address:port")}addr=result[1];port=parseInt(result[2],10)}}else{try{var runtimeConfig=Module["websocket"]&&"object"===typeof Module["websocket"];var url="ws:#".replace("#","//");if(runtimeConfig){if("string"===typeof Module["websocket"]["url"]){url=Module["websocket"]["url"]}}if(url==="ws://"||url==="wss://"){var parts=addr.split("/");url=url+parts[0]+":"+port+"/"+parts.slice(1).join("/")}var subProtocols="binary";if(runtimeConfig){if("string"===typeof Module["websocket"]["subprotocol"]){subProtocols=Module["websocket"]["subprotocol"]}}var opts=undefined;if(subProtocols!=="null"){subProtocols=subProtocols.replace(/^ +| +$/g,"").split(/ *, */);opts=ENVIRONMENT_IS_NODE?{"protocol":subProtocols.toString()}:subProtocols}if(runtimeConfig&&null===Module["websocket"]["subprotocol"]){subProtocols="null";opts=undefined}var WebSocketConstructor;if(ENVIRONMENT_IS_NODE){WebSocketConstructor=require("ws")}else{WebSocketConstructor=WebSocket}ws=new WebSocketConstructor(url,opts);ws.binaryType="arraybuffer"}catch(e){throw new FS.ErrnoError(ERRNO_CODES.EHOSTUNREACH)}}var peer={addr:addr,port:port,socket:ws,dgram_send_queue:[]};SOCKFS.websocket_sock_ops.addPeer(sock,peer);SOCKFS.websocket_sock_ops.handlePeerEvents(sock,peer);if(sock.type===2&&typeof sock.sport!=="undefined"){peer.dgram_send_queue.push(new Uint8Array([255,255,255,255,"p".charCodeAt(0),"o".charCodeAt(0),"r".charCodeAt(0),"t".charCodeAt(0),(sock.sport&65280)>>8,sock.sport&255]))}return peer},getPeer:function(sock,addr,port){return 
sock.peers[addr+":"+port]},addPeer:function(sock,peer){sock.peers[peer.addr+":"+peer.port]=peer},removePeer:function(sock,peer){delete sock.peers[peer.addr+":"+peer.port]},handlePeerEvents:function(sock,peer){var first=true;var handleOpen=function(){Module["websocket"].emit("open",sock.stream.fd);try{var queued=peer.dgram_send_queue.shift();while(queued){peer.socket.send(queued);queued=peer.dgram_send_queue.shift()}}catch(e){peer.socket.close()}};function handleMessage(data){if(typeof data==="string"){var encoder=new TextEncoder;data=encoder.encode(data)}else{assert(data.byteLength!==undefined);if(data.byteLength==0){return}else{data=new Uint8Array(data)}}var wasfirst=first;first=false;if(wasfirst&&data.length===10&&data[0]===255&&data[1]===255&&data[2]===255&&data[3]===255&&data[4]==="p".charCodeAt(0)&&data[5]==="o".charCodeAt(0)&&data[6]==="r".charCodeAt(0)&&data[7]==="t".charCodeAt(0)){var newport=data[8]<<8|data[9];SOCKFS.websocket_sock_ops.removePeer(sock,peer);peer.port=newport;SOCKFS.websocket_sock_ops.addPeer(sock,peer);return}sock.recv_queue.push({addr:peer.addr,port:peer.port,data:data});Module["websocket"].emit("message",sock.stream.fd)}if(ENVIRONMENT_IS_NODE){peer.socket.on("open",handleOpen);peer.socket.on("message",function(data,flags){if(!flags.binary){return}handleMessage(new Uint8Array(data).buffer)});peer.socket.on("close",function(){Module["websocket"].emit("close",sock.stream.fd)});peer.socket.on("error",function(error){sock.error=ERRNO_CODES.ECONNREFUSED;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])})}else{peer.socket.onopen=handleOpen;peer.socket.onclose=function(){Module["websocket"].emit("close",sock.stream.fd)};peer.socket.onmessage=function peer_socket_onmessage(event){handleMessage(event.data)};peer.socket.onerror=function(error){sock.error=ERRNO_CODES.ECONNREFUSED;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])}}},poll:function(sock){if(sock.type===1&&sock.server){return sock.pending.length?64|1:0}var mask=0;var dest=sock.type===1?SOCKFS.websocket_sock_ops.getPeer(sock,sock.daddr,sock.dport):null;if(sock.recv_queue.length||!dest||dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=64|1}if(!dest||dest&&dest.socket.readyState===dest.socket.OPEN){mask|=4}if(dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=16}return mask},ioctl:function(sock,request,arg){switch(request){case 21531:var bytes=0;if(sock.recv_queue.length){bytes=sock.recv_queue[0].data.length}HEAP32[arg>>2]=bytes;return 0;default:return ERRNO_CODES.EINVAL}},close:function(sock){if(sock.server){try{sock.server.close()}catch(e){}sock.server=null}var peers=Object.keys(sock.peers);for(var i=0;i>2]=value;return value}function inetNtop4(addr){return(addr&255)+"."+(addr>>8&255)+"."+(addr>>16&255)+"."+(addr>>24&255)}function inetNtop6(ints){var str="";var word=0;var longest=0;var lastzero=0;var zstart=0;var len=0;var i=0;var parts=[ints[0]&65535,ints[0]>>16,ints[1]&65535,ints[1]>>16,ints[2]&65535,ints[2]>>16,ints[3]&65535,ints[3]>>16];var hasipv4=true;var v4part="";for(i=0;i<5;i++){if(parts[i]!==0){hasipv4=false;break}}if(hasipv4){v4part=inetNtop4(parts[6]|parts[7]<<16);if(parts[5]===-1){str="::ffff:";str+=v4part;return str}if(parts[5]===0){str="::";if(v4part==="0.0.0.0")v4part="";if(v4part==="0.0.0.1")v4part="1";str+=v4part;return 
str}}for(word=0;word<8;word++){if(parts[word]===0){if(word-lastzero>1){len=0}lastzero=word;len++}if(len>longest){longest=len;zstart=word-longest+1}}for(word=0;word<8;word++){if(longest>1){if(parts[word]===0&&word>=zstart&&word>1];var port=_ntohs(HEAPU16[sa+2>>1]);var addr;switch(family){case 2:if(salen!==16){return{errno:28}}addr=HEAP32[sa+4>>2];addr=inetNtop4(addr);break;case 10:if(salen!==28){return{errno:28}}addr=[HEAP32[sa+8>>2],HEAP32[sa+12>>2],HEAP32[sa+16>>2],HEAP32[sa+20>>2]];addr=inetNtop6(addr);break;default:return{errno:5}}return{family:family,addr:addr,port:port}}function getSocketAddress(addrp,addrlen,allowNull){if(allowNull&&addrp===0)return null;var info=readSockaddr(addrp,addrlen);if(info.errno)throw new FS.ErrnoError(info.errno);info.addr=DNS.lookup_addr(info.addr)||info.addr;return info}function ___sys_connect(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var info=getSocketAddress(addr,addrlen);sock.sock_ops.connect(sock,info.addr,info.port);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_fcntl64(fd,cmd,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(cmd){case 0:{var arg=SYSCALLS.get();if(arg<0){return-28}var newStream;newStream=FS.open(stream.path,stream.flags,0,arg);return newStream.fd}case 1:case 2:return 0;case 3:return stream.flags;case 4:{var arg=SYSCALLS.get();stream.flags|=arg;return 0}case 12:{var arg=SYSCALLS.get();var offset=0;HEAP16[arg+offset>>1]=2;return 0}case 13:case 14:return 0;case 16:case 8:return-28;case 9:setErrNo(28);return-1;default:{return-28}}}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_fstat64(fd,buf){try{var stream=SYSCALLS.getStreamFromFD(fd);return SYSCALLS.doStat(FS.stat,stream.path,buf)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_ftruncate64(fd,zero,low,high){try{var length=SYSCALLS.get64(low,high);FS.ftruncate(fd,length);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_getcwd(buf,size){try{if(size===0)return-28;var cwd=FS.cwd();var cwdLengthInBytes=lengthBytesUTF8(cwd);if(size>>0,(tempDouble=id,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos>>2]=tempI64[0],HEAP32[dirp+pos+4>>2]=tempI64[1];tempI64=[(idx+1)*struct_size>>>0,(tempDouble=(idx+1)*struct_size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos+8>>2]=tempI64[0],HEAP32[dirp+pos+12>>2]=tempI64[1];HEAP16[dirp+pos+16>>1]=280;HEAP8[dirp+pos+18>>0]=type;stringToUTF8(name,dirp+pos+19,256);pos+=struct_size;idx+=1}FS.llseek(stream,idx*struct_size,0);return pos}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_getpid(){return 42}function ___sys_getrusage(who,usage){try{_memset(usage,0,136);HEAP32[usage>>2]=1;HEAP32[usage+4>>2]=2;HEAP32[usage+8>>2]=3;HEAP32[usage+12>>2]=4;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_getegid32(){return 0}function ___sys_getuid32(){return ___sys_getegid32()}function ___sys_ioctl(fd,op,varargs){SYSCALLS.varargs=varargs;try{var 
stream=SYSCALLS.getStreamFromFD(fd);switch(op){case 21509:case 21505:{if(!stream.tty)return-59;return 0}case 21510:case 21511:case 21512:case 21506:case 21507:case 21508:{if(!stream.tty)return-59;return 0}case 21519:{if(!stream.tty)return-59;var argp=SYSCALLS.get();HEAP32[argp>>2]=0;return 0}case 21520:{if(!stream.tty)return-59;return-28}case 21531:{var argp=SYSCALLS.get();return FS.ioctl(stream,op,argp)}case 21523:{if(!stream.tty)return-59;return 0}case 21524:{if(!stream.tty)return-59;return 0}default:abort("bad ioctl syscall "+op)}}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_lstat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.lstat,path,buf)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_mkdir(path,mode){try{path=SYSCALLS.getStr(path);return SYSCALLS.doMkdir(path,mode)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function syscallMmap2(addr,len,prot,flags,fd,off){off<<=12;var ptr;var allocated=false;if((flags&16)!==0&&addr%65536!==0){return-28}if((flags&32)!==0){ptr=_memalign(65536,len);if(!ptr)return-48;_memset(ptr,0,len);allocated=true}else{var info=FS.getStream(fd);if(!info)return-8;var res=FS.mmap(info,addr,len,off,prot,flags);ptr=res.ptr;allocated=res.allocated}SYSCALLS.mappings[ptr]={malloc:ptr,len:len,allocated:allocated,fd:fd,prot:prot,flags:flags,offset:off};return ptr}function ___sys_mmap2(addr,len,prot,flags,fd,off){try{return syscallMmap2(addr,len,prot,flags,fd,off)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function syscallMunmap(addr,len){if((addr|0)===-1||len===0){return-28}var info=SYSCALLS.mappings[addr];if(!info)return 0;if(len===info.len){var stream=FS.getStream(info.fd);if(stream){if(info.prot&2){SYSCALLS.doMsync(addr,stream,len,info.flags,info.offset)}FS.munmap(stream)}SYSCALLS.mappings[addr]=null;if(info.allocated){_free(info.malloc)}}return 0}function ___sys_munmap(addr,len){try{return syscallMunmap(addr,len)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_open(path,flags,varargs){SYSCALLS.varargs=varargs;try{var pathname=SYSCALLS.getStr(path);var mode=varargs?SYSCALLS.get():0;var stream=FS.open(pathname,flags,mode);return stream.fd}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}var PIPEFS={BUCKET_BUFFER_SIZE:8192,mount:function(mount){return FS.createNode(null,"/",16384|511,0)},createPipe:function(){var pipe={buckets:[]};pipe.buckets.push({buffer:new Uint8Array(PIPEFS.BUCKET_BUFFER_SIZE),offset:0,roffset:0});var rName=PIPEFS.nextname();var wName=PIPEFS.nextname();var rNode=FS.createNode(PIPEFS.root,rName,4096,0);var wNode=FS.createNode(PIPEFS.root,wName,4096,0);rNode.pipe=pipe;wNode.pipe=pipe;var readableStream=FS.createStream({path:rName,node:rNode,flags:0,seekable:false,stream_ops:PIPEFS.stream_ops});rNode.stream=readableStream;var writableStream=FS.createStream({path:wName,node:wNode,flags:1,seekable:false,stream_ops:PIPEFS.stream_ops});wNode.stream=writableStream;return{readable_fd:readableStream.fd,writable_fd:writableStream.fd}},stream_ops:{poll:function(stream){var pipe=stream.node.pipe;if((stream.flags&2097155)===1){return 256|4}else{if(pipe.buckets.length>0){for(var i=0;i0){return 64|1}}}}return 0},ioctl:function(stream,request,varargs){return ERRNO_CODES.EINVAL},fsync:function(stream){return 
ERRNO_CODES.EINVAL},read:function(stream,buffer,offset,length,position){var pipe=stream.node.pipe;var currentLength=0;for(var i=0;i=dataLen){currBucket.buffer.set(data,currBucket.offset);currBucket.offset+=dataLen;return dataLen}else if(freeBytesInCurrBuffer>0){currBucket.buffer.set(data.subarray(0,freeBytesInCurrBuffer),currBucket.offset);currBucket.offset+=freeBytesInCurrBuffer;data=data.subarray(freeBytesInCurrBuffer,data.byteLength)}var numBuckets=data.byteLength/PIPEFS.BUCKET_BUFFER_SIZE|0;var remElements=data.byteLength%PIPEFS.BUCKET_BUFFER_SIZE;for(var i=0;i0){var newBucket={buffer:new Uint8Array(PIPEFS.BUCKET_BUFFER_SIZE),offset:data.byteLength,roffset:0};pipe.buckets.push(newBucket);newBucket.buffer.set(data)}return dataLen},close:function(stream){var pipe=stream.node.pipe;pipe.buckets=null}},nextname:function(){if(!PIPEFS.nextname.current){PIPEFS.nextname.current=0}return"pipe["+PIPEFS.nextname.current+++"]"}};function ___sys_pipe(fdPtr){try{if(fdPtr==0){throw new FS.ErrnoError(21)}var res=PIPEFS.createPipe();HEAP32[fdPtr>>2]=res.readable_fd;HEAP32[fdPtr+4>>2]=res.writable_fd;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_readlink(path,buf,bufsize){try{path=SYSCALLS.getStr(path);return SYSCALLS.doReadlink(path,buf,bufsize)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function inetPton4(str){var b=str.split(".");for(var i=0;i<4;i++){var tmp=Number(b[i]);if(isNaN(tmp))return null;b[i]=tmp}return(b[0]|b[1]<<8|b[2]<<16|b[3]<<24)>>>0}function jstoi_q(str){return parseInt(str)}function inetPton6(str){var words;var w,offset,z;var valid6regx=/^((?=.*::)(?!.*::.+::)(::)?([\dA-F]{1,4}:(:|\b)|){5}|([\dA-F]{1,4}:){6})((([\dA-F]{1,4}((?!\3)::|:\b|$))|(?!\2\3)){2}|(((2[0-4]|1\d|[1-9])?\d|25[0-5])\.?\b){4})$/i;var parts=[];if(!valid6regx.test(str)){return null}if(str==="::"){return[0,0,0,0,0,0,0,0]}if(str.startsWith("::")){str=str.replace("::","Z:")}else{str=str.replace("::",":Z:")}if(str.indexOf(".")>0){str=str.replace(new RegExp("[.]","g"),":");words=str.split(":");words[words.length-4]=jstoi_q(words[words.length-4])+jstoi_q(words[words.length-3])*256;words[words.length-3]=jstoi_q(words[words.length-2])+jstoi_q(words[words.length-1])*256;words=words.slice(0,words.length-2)}else{words=str.split(":")}offset=0;z=0;for(w=0;w>2]=16}HEAP16[sa>>1]=family;HEAP32[sa+4>>2]=addr;HEAP16[sa+2>>1]=_htons(port);tempI64=[0>>>0,(tempDouble=0,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[sa+8>>2]=tempI64[0],HEAP32[sa+12>>2]=tempI64[1];break;case 10:addr=inetPton6(addr);if(addrlen){HEAP32[addrlen>>2]=28}HEAP32[sa>>2]=family;HEAP32[sa+8>>2]=addr[0];HEAP32[sa+12>>2]=addr[1];HEAP32[sa+16>>2]=addr[2];HEAP32[sa+20>>2]=addr[3];HEAP16[sa+2>>1]=_htons(port);HEAP32[sa+4>>2]=0;HEAP32[sa+24>>2]=0;break;default:return 5}return 0}var DNS={address_map:{id:1,addrs:{},names:{}},lookup_name:function(name){var res=inetPton4(name);if(res!==null){return name}res=inetPton6(name);if(res!==null){return name}var addr;if(DNS.address_map.addrs[name]){addr=DNS.address_map.addrs[name]}else{var id=DNS.address_map.id++;assert(id<65535,"exceeded max address mappings of 65535");addr="172.29."+(id&255)+"."+(id&65280);DNS.address_map.names[addr]=name;DNS.address_map.addrs[name]=addr}return addr},lookup_addr:function(addr){if(DNS.address_map.names[addr]){return DNS.address_map.names[addr]}return 
null}};function ___sys_recvfrom(fd,buf,len,flags,addr,addrlen){try{var sock=getSocketFromFD(fd);var msg=sock.sock_ops.recvmsg(sock,len);if(!msg)return 0;if(addr){var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(msg.addr),msg.port,addrlen)}HEAPU8.set(msg.buffer,buf);return msg.buffer.byteLength}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_rename(old_path,new_path){try{old_path=SYSCALLS.getStr(old_path);new_path=SYSCALLS.getStr(new_path);FS.rename(old_path,new_path);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_rmdir(path){try{path=SYSCALLS.getStr(path);FS.rmdir(path);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_sendto(fd,message,length,flags,addr,addr_len){try{var sock=getSocketFromFD(fd);var dest=getSocketAddress(addr,addr_len,true);if(!dest){return FS.write(sock.stream,HEAP8,message,length)}else{return sock.sock_ops.sendmsg(sock,HEAP8,message,length,dest.addr,dest.port)}}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_shutdown(fd,how){try{getSocketFromFD(fd);return-52}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_socket(domain,type,protocol){try{var sock=SOCKFS.createSocket(domain,type,protocol);return sock.stream.fd}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_stat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.stat,path,buf)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_statfs64(path,size,buf){try{path=SYSCALLS.getStr(path);HEAP32[buf+4>>2]=4096;HEAP32[buf+40>>2]=4096;HEAP32[buf+8>>2]=1e6;HEAP32[buf+12>>2]=5e5;HEAP32[buf+16>>2]=5e5;HEAP32[buf+20>>2]=FS.nextInode;HEAP32[buf+24>>2]=1e6;HEAP32[buf+28>>2]=42;HEAP32[buf+44>>2]=2;HEAP32[buf+36>>2]=255;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_truncate64(path,zero,low,high){try{path=SYSCALLS.getStr(path);var length=SYSCALLS.get64(low,high);FS.truncate(path,length);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_uname(buf){try{if(!buf)return-21;var layout={"__size__":390,"domainname":325,"machine":260,"nodename":65,"release":130,"sysname":0,"version":195};var copyString=function(element,value){var offset=layout[element];writeAsciiToMemory(value,buf+offset)};copyString("sysname","Emscripten");copyString("nodename","emscripten");copyString("release","1.0");copyString("version","#1");copyString("machine","wasm32");return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function ___sys_unlink(path){try{path=SYSCALLS.getStr(path);FS.unlink(path);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}function _abort(){abort()}function _clock(){if(_clock.start===undefined)_clock.start=Date.now();return(Date.now()-_clock.start)*(1e6/1e3)|0}function _emscripten_get_now_res(){if(ENVIRONMENT_IS_NODE){return 1}else if(typeof dateNow!=="undefined"){return 1e3}else return 1e3}var _emscripten_get_now_is_monotonic=true;function _clock_getres(clk_id,res){var nsec;if(clk_id===0){nsec=1e3*1e3}else 
if(clk_id===1&&_emscripten_get_now_is_monotonic){nsec=_emscripten_get_now_res()}else{setErrNo(28);return-1}HEAP32[res>>2]=nsec/1e9|0;HEAP32[res+4>>2]=nsec;return 0}var _emscripten_get_now;if(ENVIRONMENT_IS_NODE){_emscripten_get_now=function(){var t=process["hrtime"]();return t[0]*1e3+t[1]/1e6}}else if(typeof dateNow!=="undefined"){_emscripten_get_now=dateNow}else _emscripten_get_now=function(){return performance.now()};function _clock_gettime(clk_id,tp){var now;if(clk_id===0){now=Date.now()}else if((clk_id===1||clk_id===4)&&_emscripten_get_now_is_monotonic){now=_emscripten_get_now()}else{setErrNo(28);return-1}HEAP32[tp>>2]=now/1e3|0;HEAP32[tp+4>>2]=now%1e3*1e3*1e3|0;return 0}function _difftime(time1,time0){return time1-time0}function _dlclose(handle){}function _dlerror(){return 0}function _dlopen(filename,flag){}function _dlsym(handle,symbol){return 0}var readAsmConstArgsArray=[];function readAsmConstArgs(sigPtr,buf){readAsmConstArgsArray.length=0;var ch;buf>>=2;while(ch=HEAPU8[sigPtr++]){var double=ch<105;if(double&&buf&1)buf++;readAsmConstArgsArray.push(double?HEAPF64[buf++>>1]:HEAP32[buf]);++buf}return readAsmConstArgsArray}function mainThreadEM_ASM(code,sigPtr,argbuf,sync){var args=readAsmConstArgs(sigPtr,argbuf);return ASM_CONSTS[code].apply(null,args)}function _emscripten_asm_const_int_sync_on_main_thread(code,sigPtr,argbuf){return mainThreadEM_ASM(code,sigPtr,argbuf,1)}function _emscripten_set_main_loop_timing(mode,value){Browser.mainLoop.timingMode=mode;Browser.mainLoop.timingValue=value;if(!Browser.mainLoop.func){return 1}if(!Browser.mainLoop.running){Browser.mainLoop.running=true}if(mode==0){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setTimeout(){var timeUntilNextTick=Math.max(0,Browser.mainLoop.tickStartTime+value-_emscripten_get_now())|0;setTimeout(Browser.mainLoop.runner,timeUntilNextTick)};Browser.mainLoop.method="timeout"}else if(mode==1){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_rAF(){Browser.requestAnimationFrame(Browser.mainLoop.runner)};Browser.mainLoop.method="rAF"}else if(mode==2){if(typeof setImmediate==="undefined"){var setImmediates=[];var emscriptenMainLoopMessageId="setimmediate";var Browser_setImmediate_messageHandler=function(event){if(event.data===emscriptenMainLoopMessageId||event.data.target===emscriptenMainLoopMessageId){event.stopPropagation();setImmediates.shift()()}};addEventListener("message",Browser_setImmediate_messageHandler,true);setImmediate=function Browser_emulated_setImmediate(func){setImmediates.push(func);if(ENVIRONMENT_IS_WORKER){if(Module["setImmediates"]===undefined)Module["setImmediates"]=[];Module["setImmediates"].push(func);postMessage({target:emscriptenMainLoopMessageId})}else postMessage(emscriptenMainLoopMessageId,"*")}}Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setImmediate(){setImmediate(Browser.mainLoop.runner)};Browser.mainLoop.method="immediate"}return 0}function _exit(status){exit(status)}function maybeExit(){if(!keepRuntimeAlive()){try{_exit(EXITSTATUS)}catch(e){if(e instanceof ExitStatus){return}throw e}}}function setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop,arg,noSetTiming){assert(!Browser.mainLoop.func,"emscripten_set_main_loop: there can only be one main loop function at once: call emscripten_cancel_main_loop to cancel the previous one before setting a new one with different parameters.");Browser.mainLoop.func=browserIterationFunc;Browser.mainLoop.arg=arg;var thisMainLoopId=Browser.mainLoop.currentlyRunningMainloop;function 
checkIsRunning(){if(thisMainLoopId0){var start=Date.now();var blocker=Browser.mainLoop.queue.shift();blocker.func(blocker.arg);if(Browser.mainLoop.remainingBlockers){var remaining=Browser.mainLoop.remainingBlockers;var next=remaining%1==0?remaining-1:Math.floor(remaining);if(blocker.counted){Browser.mainLoop.remainingBlockers=next}else{next=next+.5;Browser.mainLoop.remainingBlockers=(8*remaining+next)/9}}console.log('main loop blocker "'+blocker.name+'" took '+(Date.now()-start)+" ms");Browser.mainLoop.updateStatus();if(!checkIsRunning())return;setTimeout(Browser.mainLoop.runner,0);return}if(!checkIsRunning())return;Browser.mainLoop.currentFrameNumber=Browser.mainLoop.currentFrameNumber+1|0;if(Browser.mainLoop.timingMode==1&&Browser.mainLoop.timingValue>1&&Browser.mainLoop.currentFrameNumber%Browser.mainLoop.timingValue!=0){Browser.mainLoop.scheduler();return}else if(Browser.mainLoop.timingMode==0){Browser.mainLoop.tickStartTime=_emscripten_get_now()}GL.newRenderingFrameStarted();Browser.mainLoop.runIter(browserIterationFunc);if(!checkIsRunning())return;if(typeof SDL==="object"&&SDL.audio&&SDL.audio.queueNewAudioData)SDL.audio.queueNewAudioData();Browser.mainLoop.scheduler()};if(!noSetTiming){if(fps&&fps>0)_emscripten_set_main_loop_timing(0,1e3/fps);else _emscripten_set_main_loop_timing(1,1);Browser.mainLoop.scheduler()}if(simulateInfiniteLoop){throw"unwind"}}function callUserCallback(func,synchronous){if(ABORT){return}if(synchronous){func();return}try{func()}catch(e){if(e instanceof ExitStatus){return}else if(e!=="unwind"){if(e&&typeof e==="object"&&e.stack)err("exception thrown: "+[e,e.stack]);throw e}}}var Browser={mainLoop:{running:false,scheduler:null,method:"",currentlyRunningMainloop:0,func:null,arg:0,timingMode:0,timingValue:0,currentFrameNumber:0,queue:[],pause:function(){Browser.mainLoop.scheduler=null;Browser.mainLoop.currentlyRunningMainloop++},resume:function(){Browser.mainLoop.currentlyRunningMainloop++;var timingMode=Browser.mainLoop.timingMode;var timingValue=Browser.mainLoop.timingValue;var func=Browser.mainLoop.func;Browser.mainLoop.func=null;setMainLoop(func,0,false,Browser.mainLoop.arg,true);_emscripten_set_main_loop_timing(timingMode,timingValue);Browser.mainLoop.scheduler()},updateStatus:function(){if(Module["setStatus"]){var message=Module["statusMessage"]||"Please wait...";var remaining=Browser.mainLoop.remainingBlockers;var expected=Browser.mainLoop.expectedBlockers;if(remaining){if(remaining=6){var curr=leftchar>>leftbits-6&63;leftbits-=6;ret+=BASE[curr]}}if(leftbits==2){ret+=BASE[(leftchar&3)<<4];ret+=PAD+PAD}else if(leftbits==4){ret+=BASE[(leftchar&15)<<2];ret+=PAD}return ret}audio.src="data:audio/x-"+name.substr(-3)+";base64,"+encode64(byteArray);finish(audio)};audio.src=url;Browser.safeSetTimeout(function(){finish(audio)},1e4)}else{return fail()}};Module["preloadPlugins"].push(audioPlugin);function pointerLockChange(){Browser.pointerLock=document["pointerLockElement"]===Module["canvas"]||document["mozPointerLockElement"]===Module["canvas"]||document["webkitPointerLockElement"]===Module["canvas"]||document["msPointerLockElement"]===Module["canvas"]}var 
canvas=Module["canvas"];if(canvas){canvas.requestPointerLock=canvas["requestPointerLock"]||canvas["mozRequestPointerLock"]||canvas["webkitRequestPointerLock"]||canvas["msRequestPointerLock"]||function(){};canvas.exitPointerLock=document["exitPointerLock"]||document["mozExitPointerLock"]||document["webkitExitPointerLock"]||document["msExitPointerLock"]||function(){};canvas.exitPointerLock=canvas.exitPointerLock.bind(document);document.addEventListener("pointerlockchange",pointerLockChange,false);document.addEventListener("mozpointerlockchange",pointerLockChange,false);document.addEventListener("webkitpointerlockchange",pointerLockChange,false);document.addEventListener("mspointerlockchange",pointerLockChange,false);if(Module["elementPointerLock"]){canvas.addEventListener("click",function(ev){if(!Browser.pointerLock&&Module["canvas"].requestPointerLock){Module["canvas"].requestPointerLock();ev.preventDefault()}},false)}}},createContext:function(canvas,useWebGL,setInModule,webGLContextAttributes){if(useWebGL&&Module.ctx&&canvas==Module.canvas)return Module.ctx;var ctx;var contextHandle;if(useWebGL){var contextAttributes={antialias:false,alpha:false,majorVersion:typeof WebGL2RenderingContext!=="undefined"?2:1};if(webGLContextAttributes){for(var attribute in webGLContextAttributes){contextAttributes[attribute]=webGLContextAttributes[attribute]}}if(typeof GL!=="undefined"){contextHandle=GL.createContext(canvas,contextAttributes);if(contextHandle){ctx=GL.getContext(contextHandle).GLctx}}}else{ctx=canvas.getContext("2d")}if(!ctx)return null;if(setInModule){if(!useWebGL)assert(typeof GLctx==="undefined","cannot set in module if GLctx is used, but we are a non-GL context that would replace it");Module.ctx=ctx;if(useWebGL)GL.makeContextCurrent(contextHandle);Module.useWebGL=useWebGL;Browser.moduleContextCreatedCallbacks.forEach(function(callback){callback()});Browser.init()}return ctx},destroyContext:function(canvas,useWebGL,setInModule){},fullscreenHandlersInstalled:false,lockPointer:undefined,resizeCanvas:undefined,requestFullscreen:function(lockPointer,resizeCanvas){Browser.lockPointer=lockPointer;Browser.resizeCanvas=resizeCanvas;if(typeof Browser.lockPointer==="undefined")Browser.lockPointer=true;if(typeof Browser.resizeCanvas==="undefined")Browser.resizeCanvas=false;var canvas=Module["canvas"];function fullscreenChange(){Browser.isFullscreen=false;var 
canvasContainer=canvas.parentNode;if((document["fullscreenElement"]||document["mozFullScreenElement"]||document["msFullscreenElement"]||document["webkitFullscreenElement"]||document["webkitCurrentFullScreenElement"])===canvasContainer){canvas.exitFullscreen=Browser.exitFullscreen;if(Browser.lockPointer)canvas.requestPointerLock();Browser.isFullscreen=true;if(Browser.resizeCanvas){Browser.setFullscreenCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}else{canvasContainer.parentNode.insertBefore(canvas,canvasContainer);canvasContainer.parentNode.removeChild(canvasContainer);if(Browser.resizeCanvas){Browser.setWindowedCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}if(Module["onFullScreen"])Module["onFullScreen"](Browser.isFullscreen);if(Module["onFullscreen"])Module["onFullscreen"](Browser.isFullscreen)}if(!Browser.fullscreenHandlersInstalled){Browser.fullscreenHandlersInstalled=true;document.addEventListener("fullscreenchange",fullscreenChange,false);document.addEventListener("mozfullscreenchange",fullscreenChange,false);document.addEventListener("webkitfullscreenchange",fullscreenChange,false);document.addEventListener("MSFullscreenChange",fullscreenChange,false)}var canvasContainer=document.createElement("div");canvas.parentNode.insertBefore(canvasContainer,canvas);canvasContainer.appendChild(canvas);canvasContainer.requestFullscreen=canvasContainer["requestFullscreen"]||canvasContainer["mozRequestFullScreen"]||canvasContainer["msRequestFullscreen"]||(canvasContainer["webkitRequestFullscreen"]?function(){canvasContainer["webkitRequestFullscreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null)||(canvasContainer["webkitRequestFullScreen"]?function(){canvasContainer["webkitRequestFullScreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null);canvasContainer.requestFullscreen()},exitFullscreen:function(){if(!Browser.isFullscreen){return false}var CFS=document["exitFullscreen"]||document["cancelFullScreen"]||document["mozCancelFullScreen"]||document["msExitFullscreen"]||document["webkitCancelFullScreen"]||function(){};CFS.apply(document,[]);return true},nextRAF:0,fakeRequestAnimationFrame:function(func){var now=Date.now();if(Browser.nextRAF===0){Browser.nextRAF=now+1e3/60}else{while(now+2>=Browser.nextRAF){Browser.nextRAF+=1e3/60}}var delay=Math.max(Browser.nextRAF-now,0);setTimeout(func,delay)},requestAnimationFrame:function(func){if(typeof requestAnimationFrame==="function"){requestAnimationFrame(func);return}var RAF=Browser.fakeRequestAnimationFrame;RAF(func)},safeRequestAnimationFrame:function(func){return Browser.requestAnimationFrame(function(){callUserCallback(func)})},safeSetTimeout:function(func,timeout){return setTimeout(function(){callUserCallback(func)},timeout)},getMimetype:function(name){return{"jpg":"image/jpeg","jpeg":"image/jpeg","png":"image/png","bmp":"image/bmp","ogg":"audio/ogg","wav":"audio/wav","mp3":"audio/mpeg"}[name.substr(name.lastIndexOf(".")+1)]},getUserMedia:function(func){if(!window.getUserMedia){window.getUserMedia=navigator["getUserMedia"]||navigator["mozGetUserMedia"]}window.getUserMedia(func)},getMovementX:function(event){return event["movementX"]||event["mozMovementX"]||event["webkitMovementX"]||0},getMovementY:function(event){return event["movementY"]||event["mozMovementY"]||event["webkitMovementY"]||0},getMouseWheelDelta:function(event){var delta=0;switch(event.type){case"DOMMouseScroll":delta=event.detail/3;break;case"mousewheel":delta=event.wheelDelta/120;break;case"wheel":delta=event.deltaY;switch(event.deltaMode){case 0:delta/=100;break;case 
1:delta/=3;break;case 2:delta*=80;break;default:throw"unrecognized mouse wheel delta mode: "+event.deltaMode}break;default:throw"unrecognized mouse wheel event: "+event.type}return delta},mouseX:0,mouseY:0,mouseMovementX:0,mouseMovementY:0,touches:{},lastTouches:{},calculateMouseEvent:function(event){if(Browser.pointerLock){if(event.type!="mousemove"&&"mozMovementX"in event){Browser.mouseMovementX=Browser.mouseMovementY=0}else{Browser.mouseMovementX=Browser.getMovementX(event);Browser.mouseMovementY=Browser.getMovementY(event)}if(typeof SDL!="undefined"){Browser.mouseX=SDL.mouseX+Browser.mouseMovementX;Browser.mouseY=SDL.mouseY+Browser.mouseMovementY}else{Browser.mouseX+=Browser.mouseMovementX;Browser.mouseY+=Browser.mouseMovementY}}else{var rect=Module["canvas"].getBoundingClientRect();var cw=Module["canvas"].width;var ch=Module["canvas"].height;var scrollX=typeof window.scrollX!=="undefined"?window.scrollX:window.pageXOffset;var scrollY=typeof window.scrollY!=="undefined"?window.scrollY:window.pageYOffset;if(event.type==="touchstart"||event.type==="touchend"||event.type==="touchmove"){var touch=event.touch;if(touch===undefined){return}var adjustedX=touch.pageX-(scrollX+rect.left);var adjustedY=touch.pageY-(scrollY+rect.top);adjustedX=adjustedX*(cw/rect.width);adjustedY=adjustedY*(ch/rect.height);var coords={x:adjustedX,y:adjustedY};if(event.type==="touchstart"){Browser.lastTouches[touch.identifier]=coords;Browser.touches[touch.identifier]=coords}else if(event.type==="touchend"||event.type==="touchmove"){var last=Browser.touches[touch.identifier];if(!last)last=coords;Browser.lastTouches[touch.identifier]=last;Browser.touches[touch.identifier]=coords}return}var x=event.pageX-(scrollX+rect.left);var y=event.pageY-(scrollY+rect.top);x=x*(cw/rect.width);y=y*(ch/rect.height);Browser.mouseMovementX=x-Browser.mouseX;Browser.mouseMovementY=y-Browser.mouseY;Browser.mouseX=x;Browser.mouseY=y}},asyncLoad:function(url,onload,onerror,noRunDep){var dep=!noRunDep?getUniqueRunDependency("al "+url):"";readAsync(url,function(arrayBuffer){assert(arrayBuffer,'Loading data file "'+url+'" failed (no arrayBuffer).');onload(new Uint8Array(arrayBuffer));if(dep)removeRunDependency(dep)},function(event){if(onerror){onerror()}else{throw'Loading data file "'+url+'" failed.'}});if(dep)addRunDependency(dep)},resizeListeners:[],updateResizeListeners:function(){var canvas=Module["canvas"];Browser.resizeListeners.forEach(function(listener){listener(canvas.width,canvas.height)})},setCanvasSize:function(width,height,noUpdates){var canvas=Module["canvas"];Browser.updateCanvasDimensions(canvas,width,height);if(!noUpdates)Browser.updateResizeListeners()},windowedWidth:0,windowedHeight:0,setFullscreenCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags|8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},setWindowedCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags&~8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},updateCanvasDimensions:function(canvas,wNative,hNative){if(wNative&&hNative){canvas.widthNative=wNative;canvas.heightNative=hNative}else{wNative=canvas.widthNative;hNative=canvas.heightNative}var w=wNative;var 
h=hNative;if(Module["forcedAspectRatio"]&&Module["forcedAspectRatio"]>0){if(w/h=0;--i){JSEvents._removeHandler(i)}JSEvents.eventHandlers=[];JSEvents.deferredCalls=[]},registerRemoveEventListeners:function(){if(!JSEvents.removeEventListenersRegistered){__ATEXIT__.push(JSEvents.removeAllEventListeners);JSEvents.removeEventListenersRegistered=true}},deferredCalls:[],deferCall:function(targetFunction,precedence,argsList){function arraysHaveEqualContent(arrA,arrB){if(arrA.length!=arrB.length)return false;for(var i in arrA){if(arrA[i]!=arrB[i])return false}return true}for(var i in JSEvents.deferredCalls){var call=JSEvents.deferredCalls[i];if(call.targetFunction==targetFunction&&arraysHaveEqualContent(call.argsList,argsList)){return}}JSEvents.deferredCalls.push({targetFunction:targetFunction,precedence:precedence,argsList:argsList});JSEvents.deferredCalls.sort(function(x,y){return x.precedence2?UTF8ToString(cString):cString}var specialHTMLTargets=[0,typeof document!=="undefined"?document:0,typeof window!=="undefined"?window:0];function findEventTarget(target){target=maybeCStringToJsString(target);var domElement=specialHTMLTargets[target]||(typeof document!=="undefined"?document.querySelector(target):undefined);return domElement}function findCanvasEventTarget(target){return findEventTarget(target)}function _emscripten_get_canvas_element_size(target,width,height){var canvas=findCanvasEventTarget(target);if(!canvas)return-4;HEAP32[width>>2]=canvas.width;HEAP32[height>>2]=canvas.height}function getCanvasElementSize(target){var stackTop=stackSave();var w=stackAlloc(8);var h=w+4;var targetInt=stackAlloc(target.id.length+1);stringToUTF8(target.id,targetInt,target.id.length+1);var ret=_emscripten_get_canvas_element_size(targetInt,w,h);var size=[HEAP32[w>>2],HEAP32[h>>2]];stackRestore(stackTop);return size}function _emscripten_set_canvas_element_size(target,width,height){var canvas=findCanvasEventTarget(target);if(!canvas)return-4;canvas.width=width;canvas.height=height;return 0}function setCanvasElementSize(target,width,height){if(!target.controlTransferredOffscreen){target.width=width;target.height=height}else{var stackTop=stackSave();var targetInt=stackAlloc(target.id.length+1);stringToUTF8(target.id,targetInt,target.id.length+1);_emscripten_set_canvas_element_size(targetInt,width,height);stackRestore(stackTop)}}function registerRestoreOldStyle(canvas){var canvasSize=getCanvasElementSize(canvas);var oldWidth=canvasSize[0];var oldHeight=canvasSize[1];var oldCssWidth=canvas.style.width;var oldCssHeight=canvas.style.height;var oldBackgroundColor=canvas.style.backgroundColor;var oldDocumentBackgroundColor=document.body.style.backgroundColor;var oldPaddingLeft=canvas.style.paddingLeft;var oldPaddingRight=canvas.style.paddingRight;var oldPaddingTop=canvas.style.paddingTop;var oldPaddingBottom=canvas.style.paddingBottom;var oldMarginLeft=canvas.style.marginLeft;var oldMarginRight=canvas.style.marginRight;var oldMarginTop=canvas.style.marginTop;var oldMarginBottom=canvas.style.marginBottom;var oldDocumentBodyMargin=document.body.style.margin;var oldDocumentOverflow=document.documentElement.style.overflow;var oldDocumentScroll=document.body.scroll;var oldImageRendering=canvas.style.imageRendering;function restoreOldStyle(){var 
fullscreenElement=document.fullscreenElement||document.webkitFullscreenElement||document.msFullscreenElement;if(!fullscreenElement){document.removeEventListener("fullscreenchange",restoreOldStyle);document.removeEventListener("webkitfullscreenchange",restoreOldStyle);setCanvasElementSize(canvas,oldWidth,oldHeight);canvas.style.width=oldCssWidth;canvas.style.height=oldCssHeight;canvas.style.backgroundColor=oldBackgroundColor;if(!oldDocumentBackgroundColor)document.body.style.backgroundColor="white";document.body.style.backgroundColor=oldDocumentBackgroundColor;canvas.style.paddingLeft=oldPaddingLeft;canvas.style.paddingRight=oldPaddingRight;canvas.style.paddingTop=oldPaddingTop;canvas.style.paddingBottom=oldPaddingBottom;canvas.style.marginLeft=oldMarginLeft;canvas.style.marginRight=oldMarginRight;canvas.style.marginTop=oldMarginTop;canvas.style.marginBottom=oldMarginBottom;document.body.style.margin=oldDocumentBodyMargin;document.documentElement.style.overflow=oldDocumentOverflow;document.body.scroll=oldDocumentScroll;canvas.style.imageRendering=oldImageRendering;if(canvas.GLctxObject)canvas.GLctxObject.GLctx.viewport(0,0,oldWidth,oldHeight);if(currentFullscreenStrategy.canvasResizedCallback){(function(a1,a2,a3){return dynCall_iiii.apply(null,[currentFullscreenStrategy.canvasResizedCallback,a1,a2,a3])})(37,0,currentFullscreenStrategy.canvasResizedCallbackUserData)}}}document.addEventListener("fullscreenchange",restoreOldStyle);document.addEventListener("webkitfullscreenchange",restoreOldStyle);return restoreOldStyle}function setLetterbox(element,topBottom,leftRight){element.style.paddingLeft=element.style.paddingRight=leftRight+"px";element.style.paddingTop=element.style.paddingBottom=topBottom+"px"}function getBoundingClientRect(e){return specialHTMLTargets.indexOf(e)<0?e.getBoundingClientRect():{"left":0,"top":0}}function _JSEvents_resizeCanvasForFullscreen(target,strategy){var restoreOldStyle=registerRestoreOldStyle(target);var cssWidth=strategy.softFullscreen?innerWidth:screen.width;var cssHeight=strategy.softFullscreen?innerHeight:screen.height;var rect=getBoundingClientRect(target);var windowedCssWidth=rect.width;var windowedCssHeight=rect.height;var canvasSize=getCanvasElementSize(target);var windowedRttWidth=canvasSize[0];var windowedRttHeight=canvasSize[1];if(strategy.scaleMode==3){setLetterbox(target,(cssHeight-windowedCssHeight)/2,(cssWidth-windowedCssWidth)/2);cssWidth=windowedCssWidth;cssHeight=windowedCssHeight}else if(strategy.scaleMode==2){if(cssWidth*windowedRttHeight>2]=isFullscreen;HEAP32[eventStruct+4>>2]=JSEvents.fullscreenEnabled();var reportedElement=isFullscreen?fullscreenElement:JSEvents.previousFullscreenElement;var nodeName=JSEvents.getNodeNameForTarget(reportedElement);var id=reportedElement&&reportedElement.id?reportedElement.id:"";stringToUTF8(nodeName,eventStruct+8,128);stringToUTF8(id,eventStruct+136,128);HEAP32[eventStruct+264>>2]=reportedElement?reportedElement.clientWidth:0;HEAP32[eventStruct+268>>2]=reportedElement?reportedElement.clientHeight:0;HEAP32[eventStruct+272>>2]=screen.width;HEAP32[eventStruct+276>>2]=screen.height;if(isFullscreen){JSEvents.previousFullscreenElement=fullscreenElement}}function _emscripten_get_fullscreen_status(fullscreenStatus){if(!JSEvents.fullscreenEnabled())return-1;fillFullscreenChangeEventData(fullscreenStatus);return 0}function fillGamepadEventData(eventStruct,e){HEAPF64[eventStruct>>3]=e.timestamp;for(var i=0;i>3]=e.axes[i]}for(var i=0;i>3]=e.buttons[i].value}else{HEAPF64[eventStruct+i*8+528>>3]=e.buttons[i]}}for(var 
i=0;i>2]=e.buttons[i].pressed}else{HEAP32[eventStruct+i*4+1040>>2]=e.buttons[i]==1}}HEAP32[eventStruct+1296>>2]=e.connected;HEAP32[eventStruct+1300>>2]=e.index;HEAP32[eventStruct+8>>2]=e.axes.length;HEAP32[eventStruct+12>>2]=e.buttons.length;stringToUTF8(e.id,eventStruct+1304,64);stringToUTF8(e.mapping,eventStruct+1368,64)}function _emscripten_get_gamepad_status(index,gamepadState){if(index<0||index>=JSEvents.lastGamepadState.length)return-5;if(!JSEvents.lastGamepadState[index])return-7;fillGamepadEventData(gamepadState,JSEvents.lastGamepadState[index]);return 0}function _emscripten_get_heap_max(){return 2147483648}function _emscripten_get_num_gamepads(){return JSEvents.lastGamepadState.length}function _emscripten_html5_remove_all_event_listeners(){JSEvents.removeAllEventListeners()}function _emscripten_is_webgl_context_lost(contextHandle){return!GL.contexts[contextHandle]||GL.contexts[contextHandle].GLctx.isContextLost()}function reallyNegative(x){return x<0||x===0&&1/x===-Infinity}function convertI32PairToI53(lo,hi){return(lo>>>0)+hi*4294967296}function convertU32PairToI53(lo,hi){return(lo>>>0)+(hi>>>0)*4294967296}function reSign(value,bits){if(value<=0){return value}var half=bits<=32?Math.abs(1<=half&&(bits<=32||value>half)){value=-2*half+value}return value}function unSign(value,bits){if(value>=0){return value}return bits<=32?2*Math.abs(1<>3];argIndex+=8}else if(type=="i64"){ret=[HEAP32[argIndex>>2],HEAP32[argIndex+4>>2]];argIndex+=8}else{type="i32";ret=HEAP32[argIndex>>2];argIndex+=4}return ret}var ret=[];var curr,next,currArg;while(1){var startTextIndex=textIndex;curr=HEAP8[textIndex>>0];if(curr===0)break;next=HEAP8[textIndex+1>>0];if(curr==37){var flagAlwaysSigned=false;var flagLeftAlign=false;var flagAlternative=false;var flagZeroPad=false;var flagPadSign=false;flagsLoop:while(1){switch(next){case 43:flagAlwaysSigned=true;break;case 45:flagLeftAlign=true;break;case 35:flagAlternative=true;break;case 48:if(flagZeroPad){break flagsLoop}else{flagZeroPad=true;break}case 32:flagPadSign=true;break;default:break flagsLoop}textIndex++;next=HEAP8[textIndex+1>>0]}var width=0;if(next==42){width=getNextArg("i32");textIndex++;next=HEAP8[textIndex+1>>0]}else{while(next>=48&&next<=57){width=width*10+(next-48);textIndex++;next=HEAP8[textIndex+1>>0]}}var precisionSet=false,precision=-1;if(next==46){precision=0;precisionSet=true;textIndex++;next=HEAP8[textIndex+1>>0];if(next==42){precision=getNextArg("i32");textIndex++}else{while(1){var precisionChr=HEAP8[textIndex+1>>0];if(precisionChr<48||precisionChr>57)break;precision=precision*10+(precisionChr-48);textIndex++}}next=HEAP8[textIndex+1>>0]}if(precision<0){precision=6;precisionSet=false}var argSize;switch(String.fromCharCode(next)){case"h":var nextNext=HEAP8[textIndex+2>>0];if(nextNext==104){textIndex++;argSize=1}else{argSize=2}break;case"l":var nextNext=HEAP8[textIndex+2>>0];if(nextNext==108){textIndex++;argSize=8}else{argSize=4}break;case"L":case"q":case"j":argSize=8;break;case"z":case"t":case"I":argSize=4;break;default:argSize=null}if(argSize)textIndex++;next=HEAP8[textIndex+1>>0];switch(String.fromCharCode(next)){case"d":case"i":case"u":case"o":case"x":case"X":case"p":{var signed=next==100||next==105;argSize=argSize||4;currArg=getNextArg("i"+argSize*8);var argText;if(argSize==8){currArg=next==117?convertU32PairToI53(currArg[0],currArg[1]):convertI32PairToI53(currArg[0],currArg[1])}if(argSize<=4){var limit=Math.pow(256,argSize)-1;currArg=(signed?reSign:unSign)(currArg&limit,argSize*8)}var currAbsArg=Math.abs(currArg);var 
prefix="";if(next==100||next==105){argText=reSign(currArg,8*argSize,1).toString(10)}else if(next==117){argText=unSign(currArg,8*argSize,1).toString(10);currArg=Math.abs(currArg)}else if(next==111){argText=(flagAlternative?"0":"")+currAbsArg.toString(8)}else if(next==120||next==88){prefix=flagAlternative&&currArg!=0?"0x":"";if(currArg<0){currArg=-currArg;argText=(currAbsArg-1).toString(16);var buffer=[];for(var i=0;i=0){if(flagAlwaysSigned){prefix="+"+prefix}else if(flagPadSign){prefix=" "+prefix}}if(argText.charAt(0)=="-"){prefix="-"+prefix;argText=argText.substr(1)}while(prefix.length+argText.lengthexponent&&exponent>=-4){next=(next==103?"f":"F").charCodeAt(0);precision-=exponent+1}else{next=(next==103?"e":"E").charCodeAt(0);precision--}effectivePrecision=Math.min(precision,20)}if(next==101||next==69){argText=currArg.toExponential(effectivePrecision);if(/[eE][-+]\d$/.test(argText)){argText=argText.slice(0,-1)+"0"+argText.slice(-1)}}else if(next==102||next==70){argText=currArg.toFixed(effectivePrecision);if(currArg===0&&reallyNegative(currArg)){argText="-"+argText}}var parts=argText.split("e");if(isGeneral&&!flagAlternative){while(parts[0].length>1&&parts[0].includes(".")&&(parts[0].slice(-1)=="0"||parts[0].slice(-1)==".")){parts[0]=parts[0].slice(0,-1)}}else{if(flagAlternative&&argText.indexOf(".")==-1)parts[0]+=".";while(precision>effectivePrecision++)parts[0]+="0"}argText=parts[0]+(parts.length>1?"e"+parts[1]:"");if(next==69)argText=argText.toUpperCase();if(currArg>=0){if(flagAlwaysSigned){argText="+"+argText}else if(flagPadSign){argText=" "+argText}}}while(argText.length>0])}}else{ret=ret.concat(intArrayFromString("(null)".substr(0,argLength),true))}if(flagLeftAlign){while(argLength0){ret.push(32)}if(!flagLeftAlign)ret.push(getNextArg("i8"));break}case"n":{var ptr=getNextArg("i32*");HEAP32[ptr>>2]=ret.length;break}case"%":{ret.push(curr);break}default:{for(var i=startTextIndex;i>0])}}}textIndex+=2}else{ret.push(curr);textIndex+=1}}return ret}function traverseStack(args){if(!args||!args.callee||!args.callee.name){return[null,"",""]}var funstr=args.callee.toString();var funcname=args.callee.name;var str="(";var first=true;for(var i in args){var a=args[i];if(!first){str+=", "}first=false;if(typeof a==="number"||typeof a==="string"){str+=a}else{str+="("+typeof a+")"}}str+=")";var caller=args.callee.caller;args=caller?caller.arguments:[];if(first)str="";return[args,funcname,str]}function _emscripten_get_callstack_js(flags){var callstack=jsStackTrace();var iThisFunc=callstack.lastIndexOf("_emscripten_log");var iThisFunc2=callstack.lastIndexOf("_emscripten_get_callstack");var iNextLine=callstack.indexOf("\n",Math.max(iThisFunc,iThisFunc2))+1;callstack=callstack.slice(iNextLine);if(flags&32){warnOnce("EM_LOG_DEMANGLE is deprecated; ignoring")}if(flags&8&&typeof emscripten_source_map==="undefined"){warnOnce('Source map information is not available, emscripten_log with EM_LOG_C_STACK will be ignored. Build with "--pre-js $EMSCRIPTEN/src/emscripten-source-map.min.js" linker flag to add source map loading to code.');flags^=8;flags|=16}var stack_args=null;if(flags&128){stack_args=traverseStack(arguments);while(stack_args[1].includes("_emscripten_"))stack_args=traverseStack(stack_args[0])}var lines=callstack.split("\n");callstack="";var newFirefoxRe=new RegExp("\\s*(.*?)@(.*?):([0-9]+):([0-9]+)");var firefoxRe=new RegExp("\\s*(.*?)@(.*):(.*)(:(.*))?");var chromeRe=new RegExp("\\s*at (.*?) 
\\((.*):(.*):(.*)\\)");for(var l in lines){var line=lines[l];var symbolName="";var file="";var lineno=0;var column=0;var parts=chromeRe.exec(line);if(parts&&parts.length==5){symbolName=parts[1];file=parts[2];lineno=parts[3];column=parts[4]}else{parts=newFirefoxRe.exec(line);if(!parts)parts=firefoxRe.exec(line);if(parts&&parts.length>=4){symbolName=parts[1];file=parts[2];lineno=parts[3];column=parts[4]|0}else{callstack+=line+"\n";continue}}var haveSourceMap=false;if(flags&8){var orig=emscripten_source_map.originalPositionFor({line:lineno,column:column});haveSourceMap=orig&&orig.source;if(haveSourceMap){if(flags&64){orig.source=orig.source.substring(orig.source.replace(/\\/g,"/").lastIndexOf("/")+1)}callstack+=" at "+symbolName+" ("+orig.source+":"+orig.line+":"+orig.column+")\n"}}if(flags&16||!haveSourceMap){if(flags&64){file=file.substring(file.replace(/\\/g,"/").lastIndexOf("/")+1)}callstack+=(haveSourceMap?" = "+symbolName:" at "+symbolName)+" ("+file+":"+lineno+":"+column+")\n"}if(flags&128&&stack_args[0]){if(stack_args[1]==symbolName&&stack_args[2].length>0){callstack=callstack.replace(/\s+$/,"");callstack+=" with values: "+stack_args[1]+stack_args[2]+"\n"}stack_args=traverseStack(stack_args[0])}}callstack=callstack.replace(/\s+$/,"");return callstack}function _emscripten_log_js(flags,str){if(flags&24){str=str.replace(/\s+$/,"");str+=(str.length>0?"\n":"")+_emscripten_get_callstack_js(flags)}if(flags&1){if(flags&4){console.error(str)}else if(flags&2){console.warn(str)}else if(flags&512){console.info(str)}else if(flags&256){console.debug(str)}else{console.log(str)}}else if(flags&6){err(str)}else{out(str)}}function _emscripten_log(flags,format,varargs){var result=formatString(format,varargs);var str=UTF8ArrayToString(result,0);_emscripten_log_js(flags,str)}function _longjmp(env,value){_setThrew(env,value||1);throw"longjmp"}function _emscripten_longjmp(a0,a1){return _longjmp(a0,a1)}function _emscripten_memcpy_big(dest,src,num){HEAPU8.copyWithin(dest,src,src+num)}function doRequestFullscreen(target,strategy){if(!JSEvents.fullscreenEnabled())return-1;target=findEventTarget(target);if(!target)return-4;if(!target.requestFullscreen&&!target.webkitRequestFullscreen){return-3}var canPerformRequests=JSEvents.canPerformEventHandlerRequests();if(!canPerformRequests){if(strategy.deferUntilInEventHandler){JSEvents.deferCall(_JSEvents_requestFullscreen,1,[target,strategy]);return 1}else{return-2}}return _JSEvents_requestFullscreen(target,strategy)}function _emscripten_request_fullscreen(target,deferUntilInEventHandler){var strategy={scaleMode:0,canvasResolutionScaleMode:0,filteringMode:0,deferUntilInEventHandler:deferUntilInEventHandler,canvasResizedCallbackTargetThread:2};return doRequestFullscreen(target,strategy)}function _emscripten_request_pointerlock(target,deferUntilInEventHandler){target=findEventTarget(target);if(!target)return-4;if(!target.requestPointerLock&&!target.msRequestPointerLock){return-1}var canPerformRequests=JSEvents.canPerformEventHandlerRequests();if(!canPerformRequests){if(deferUntilInEventHandler){JSEvents.deferCall(requestPointerLock,2,[target]);return 1}else{return-2}}return requestPointerLock(target)}function emscripten_realloc_buffer(size){try{wasmMemory.grow(size-buffer.byteLength+65535>>>16);updateGlobalBufferAndViews(wasmMemory.buffer);return 1}catch(e){}}function _emscripten_resize_heap(requestedSize){var oldSize=HEAPU8.length;requestedSize=requestedSize>>>0;var maxHeapSize=2147483648;if(requestedSize>maxHeapSize){return false}for(var 
cutDown=1;cutDown<=4;cutDown*=2){var overGrownHeapSize=oldSize*(1+.2/cutDown);overGrownHeapSize=Math.min(overGrownHeapSize,requestedSize+100663296);var newSize=Math.min(maxHeapSize,alignUp(Math.max(requestedSize,overGrownHeapSize),65536));var replacement=emscripten_realloc_buffer(newSize);if(replacement){return true}}return false}function _emscripten_sample_gamepad_data(){return(JSEvents.lastGamepadState=navigator.getGamepads?navigator.getGamepads():navigator.webkitGetGamepads?navigator.webkitGetGamepads():null)?0:-1}function registerFocusEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.focusEvent)JSEvents.focusEvent=_malloc(256);var focusEventHandlerFunc=function(ev){var e=ev||event;var nodeName=JSEvents.getNodeNameForTarget(e.target);var id=e.target.id?e.target.id:"";var focusEvent=JSEvents.focusEvent;stringToUTF8(nodeName,focusEvent+0,128);stringToUTF8(id,focusEvent+128,128);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,focusEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:focusEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_blur_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,12,"blur",targetThread);return 0}function _emscripten_set_focus_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,13,"focus",targetThread);return 0}function registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.fullscreenChangeEvent)JSEvents.fullscreenChangeEvent=_malloc(280);var fullscreenChangeEventhandlerFunc=function(ev){var e=ev||event;var fullscreenChangeEvent=JSEvents.fullscreenChangeEvent;fillFullscreenChangeEventData(fullscreenChangeEvent);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,fullscreenChangeEvent,userData))e.preventDefault()};var eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:fullscreenChangeEventhandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_fullscreenchange_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){if(!JSEvents.fullscreenEnabled())return-1;target=findEventTarget(target);if(!target)return-4;registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,19,"fullscreenchange",targetThread);registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,19,"webkitfullscreenchange",targetThread);return 0}function registerGamepadEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.gamepadEvent)JSEvents.gamepadEvent=_malloc(1432);var gamepadEventHandlerFunc=function(ev){var e=ev||event;var gamepadEvent=JSEvents.gamepadEvent;fillGamepadEventData(gamepadEvent,e["gamepad"]);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,gamepadEvent,userData))e.preventDefault()};var 
eventHandler={target:findEventTarget(target),allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:gamepadEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_gamepadconnected_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!navigator.getGamepads&&!navigator.webkitGetGamepads)return-1;registerGamepadEventCallback(2,userData,useCapture,callbackfunc,26,"gamepadconnected",targetThread);return 0}function _emscripten_set_gamepaddisconnected_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!navigator.getGamepads&&!navigator.webkitGetGamepads)return-1;registerGamepadEventCallback(2,userData,useCapture,callbackfunc,27,"gamepaddisconnected",targetThread);return 0}function _emscripten_set_interval(cb,msecs,userData){return setInterval(function(){(function(a1){dynCall_vi.apply(null,[cb,a1])})(userData)},msecs)}function registerKeyEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.keyEvent)JSEvents.keyEvent=_malloc(164);var keyEventHandlerFunc=function(e){var keyEventData=JSEvents.keyEvent;var idx=keyEventData>>2;HEAP32[idx+0]=e.location;HEAP32[idx+1]=e.ctrlKey;HEAP32[idx+2]=e.shiftKey;HEAP32[idx+3]=e.altKey;HEAP32[idx+4]=e.metaKey;HEAP32[idx+5]=e.repeat;HEAP32[idx+6]=e.charCode;HEAP32[idx+7]=e.keyCode;HEAP32[idx+8]=e.which;stringToUTF8(e.key||"",keyEventData+36,32);stringToUTF8(e.code||"",keyEventData+68,32);stringToUTF8(e.char||"",keyEventData+100,32);stringToUTF8(e.locale||"",keyEventData+132,32);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,keyEventData,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:keyEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_keydown_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,2,"keydown",targetThread);return 0}function _emscripten_set_keypress_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,1,"keypress",targetThread);return 0}function _emscripten_set_keyup_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,3,"keyup",targetThread);return 0}function _emscripten_set_main_loop(func,fps,simulateInfiniteLoop){var browserIterationFunc=function(){dynCall_v.call(null,func)};setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop)}function fillMouseEventData(eventStruct,e,target){var idx=eventStruct>>2;HEAP32[idx+0]=e.screenX;HEAP32[idx+1]=e.screenY;HEAP32[idx+2]=e.clientX;HEAP32[idx+3]=e.clientY;HEAP32[idx+4]=e.ctrlKey;HEAP32[idx+5]=e.shiftKey;HEAP32[idx+6]=e.altKey;HEAP32[idx+7]=e.metaKey;HEAP16[idx*2+16]=e.button;HEAP16[idx*2+17]=e.buttons;HEAP32[idx+9]=e["movementX"];HEAP32[idx+10]=e["movementY"];var rect=getBoundingClientRect(target);HEAP32[idx+11]=e.clientX-rect.left;HEAP32[idx+12]=e.clientY-rect.top}function registerMouseEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.mouseEvent)JSEvents.mouseEvent=_malloc(64);target=findEventTarget(target);var mouseEventHandlerFunc=function(ev){var 
e=ev||event;fillMouseEventData(JSEvents.mouseEvent,e,target);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,JSEvents.mouseEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:eventTypeString!="mousemove"&&eventTypeString!="mouseenter"&&eventTypeString!="mouseleave",eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:mouseEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_mousedown_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,5,"mousedown",targetThread);return 0}function _emscripten_set_mousemove_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,8,"mousemove",targetThread);return 0}function _emscripten_set_mouseup_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,6,"mouseup",targetThread);return 0}function registerTouchEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.touchEvent)JSEvents.touchEvent=_malloc(1684);target=findEventTarget(target);var touchEventHandlerFunc=function(e){var t,touches={},et=e.touches;for(var i=0;i>2;HEAP32[idx+1]=e.ctrlKey;HEAP32[idx+2]=e.shiftKey;HEAP32[idx+3]=e.altKey;HEAP32[idx+4]=e.metaKey;idx+=5;var targetRect=getBoundingClientRect(target);var numTouches=0;for(var i in touches){var t=touches[i];HEAP32[idx+0]=t.identifier;HEAP32[idx+1]=t.screenX;HEAP32[idx+2]=t.screenY;HEAP32[idx+3]=t.clientX;HEAP32[idx+4]=t.clientY;HEAP32[idx+5]=t.pageX;HEAP32[idx+6]=t.pageY;HEAP32[idx+7]=t.isChanged;HEAP32[idx+8]=t.onTarget;HEAP32[idx+9]=t.clientX-targetRect.left;HEAP32[idx+10]=t.clientY-targetRect.top;idx+=13;if(++numTouches>31){break}}HEAP32[touchEvent>>2]=numTouches;if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,touchEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:eventTypeString=="touchstart"||eventTypeString=="touchend",eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:touchEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_touchcancel_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,25,"touchcancel",targetThread);return 0}function _emscripten_set_touchend_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,23,"touchend",targetThread);return 0}function _emscripten_set_touchmove_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,24,"touchmove",targetThread);return 0}function _emscripten_set_touchstart_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,22,"touchstart",targetThread);return 0}function registerWheelEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.wheelEvent)JSEvents.wheelEvent=_malloc(96);var wheelHandlerFunc=function(ev){var e=ev||event;var 
wheelEvent=JSEvents.wheelEvent;fillMouseEventData(wheelEvent,e,target);HEAPF64[wheelEvent+64>>3]=e["deltaX"];HEAPF64[wheelEvent+72>>3]=e["deltaY"];HEAPF64[wheelEvent+80>>3]=e["deltaZ"];HEAP32[wheelEvent+88>>2]=e["deltaMode"];if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,wheelEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:wheelHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_wheel_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){target=findEventTarget(target);if(typeof target.onwheel!=="undefined"){registerWheelEventCallback(target,userData,useCapture,callbackfunc,9,"wheel",targetThread);return 0}else{return-1}}function _emscripten_thread_sleep(msecs){var start=_emscripten_get_now();while(_emscripten_get_now()-start>1;var quadIndexes=new Uint16Array(numIndexes);var i=0,v=0;while(1){quadIndexes[i++]=v;if(i>=numIndexes)break;quadIndexes[i++]=v+1;if(i>=numIndexes)break;quadIndexes[i++]=v+2;if(i>=numIndexes)break;quadIndexes[i++]=v;if(i>=numIndexes)break;quadIndexes[i++]=v+2;if(i>=numIndexes)break;quadIndexes[i++]=v+3;if(i>=numIndexes)break;v+=4}context.GLctx.bufferData(34963,quadIndexes,35044);context.GLctx.bindBuffer(34963,null)}},getTempVertexBuffer:function getTempVertexBuffer(sizeBytes){var idx=GL.log2ceilLookup(sizeBytes);var ringbuffer=GL.currentContext.tempVertexBuffers1[idx];var nextFreeBufferIndex=GL.currentContext.tempVertexBufferCounters1[idx];GL.currentContext.tempVertexBufferCounters1[idx]=GL.currentContext.tempVertexBufferCounters1[idx]+1&GL.numTempVertexBuffersPerSize-1;var vbo=ringbuffer[nextFreeBufferIndex];if(vbo){return vbo}var prevVBO=GLctx.getParameter(34964);ringbuffer[nextFreeBufferIndex]=GLctx.createBuffer();GLctx.bindBuffer(34962,ringbuffer[nextFreeBufferIndex]);GLctx.bufferData(34962,1<>2]:-1;source+=UTF8ToString(HEAP32[string+i*4>>2],len<0?undefined:len)}return source},calcBufLength:function calcBufLength(size,type,stride,count){if(stride>0){return count*stride}var typeSize=GL.byteSizeByType[type-GL.byteSizeByTypeRoot];return size*typeSize*count},usedTempBuffers:[],preDrawHandleClientVertexAttribBindings:function preDrawHandleClientVertexAttribBindings(count){GL.resetBufferBinding=false;for(var i=0;i1?canvas.getContext("webgl2",webGLContextAttributes):canvas.getContext("webgl",webGLContextAttributes);if(!ctx)return 0;var handle=GL.registerContext(ctx,webGLContextAttributes);return handle},registerContext:function(ctx,webGLContextAttributes){var handle=GL.getNewId(GL.contexts);var context={handle:handle,attributes:webGLContextAttributes,version:webGLContextAttributes.majorVersion,GLctx:ctx};if(ctx.canvas)ctx.canvas.GLctxObject=context;GL.contexts[handle]=context;if(typeof webGLContextAttributes.enableExtensionsByDefault==="undefined"||webGLContextAttributes.enableExtensionsByDefault){GL.initExtensions(context)}context.maxVertexAttribs=context.GLctx.getParameter(34921);context.clientBuffers=[];for(var i=0;i=2){GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query_webgl2")}if(context.version<2||!GLctx.disjointTimerQueryExt){GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query")}__webgl_enable_WEBGL_multi_draw(GLctx);var exts=GLctx.getSupportedExtensions()||[];exts.forEach(function(ext){if(!ext.includes("lose_context")&&!ext.includes("debug")){GLctx.getExtension(ext)}})}};var 
__emscripten_webgl_power_preferences=["default","low-power","high-performance"];function _emscripten_webgl_do_create_context(target,attributes){var a=attributes>>2;var powerPreference=HEAP32[a+(24>>2)];var contextAttributes={"alpha":!!HEAP32[a+(0>>2)],"depth":!!HEAP32[a+(4>>2)],"stencil":!!HEAP32[a+(8>>2)],"antialias":!!HEAP32[a+(12>>2)],"premultipliedAlpha":!!HEAP32[a+(16>>2)],"preserveDrawingBuffer":!!HEAP32[a+(20>>2)],"powerPreference":__emscripten_webgl_power_preferences[powerPreference],"failIfMajorPerformanceCaveat":!!HEAP32[a+(28>>2)],majorVersion:HEAP32[a+(32>>2)],minorVersion:HEAP32[a+(36>>2)],enableExtensionsByDefault:HEAP32[a+(40>>2)],explicitSwapControl:HEAP32[a+(44>>2)],proxyContextToMainThread:HEAP32[a+(48>>2)],renderViaOffscreenBackBuffer:HEAP32[a+(52>>2)]};var canvas=findCanvasEventTarget(target);if(!canvas){return 0}if(contextAttributes.explicitSwapControl){return 0}var contextHandle=GL.createContext(canvas,contextAttributes);return contextHandle}function _emscripten_webgl_create_context(a0,a1){return _emscripten_webgl_do_create_context(a0,a1)}function _emscripten_webgl_do_get_current_context(){return GL.currentContext?GL.currentContext.handle:0}function _emscripten_webgl_get_current_context(){return _emscripten_webgl_do_get_current_context()}Module["_emscripten_webgl_get_current_context"]=_emscripten_webgl_get_current_context;function _emscripten_webgl_make_context_current(contextHandle){var success=GL.makeContextCurrent(contextHandle);return success?0:-5}Module["_emscripten_webgl_make_context_current"]=_emscripten_webgl_make_context_current;function _emscripten_webgl_destroy_context(contextHandle){if(GL.currentContext==contextHandle)GL.currentContext=0;GL.deleteContext(contextHandle)}function _emscripten_webgl_enable_extension(contextHandle,extension){var context=GL.getContext(contextHandle);var extString=UTF8ToString(extension);if(extString.startsWith("GL_"))extString=extString.substr(3);if(extString=="ANGLE_instanced_arrays")__webgl_enable_ANGLE_instanced_arrays(GLctx);if(extString=="OES_vertex_array_object")__webgl_enable_OES_vertex_array_object(GLctx);if(extString=="WEBGL_draw_buffers")__webgl_enable_WEBGL_draw_buffers(GLctx);if(extString=="WEBGL_draw_instanced_base_vertex_base_instance")__webgl_enable_WEBGL_draw_instanced_base_vertex_base_instance(GLctx);if(extString=="WEBGL_multi_draw_instanced_base_vertex_base_instance")__webgl_enable_WEBGL_multi_draw_instanced_base_vertex_base_instance(GLctx);if(extString=="WEBGL_multi_draw")__webgl_enable_WEBGL_multi_draw(GLctx);var ext=context.GLctx.getExtension(extString);return!!ext}function _emscripten_webgl_init_context_attributes(attributes){var a=attributes>>2;for(var i=0;i<56>>2;++i){HEAP32[a+i]=0}HEAP32[a+(0>>2)]=HEAP32[a+(4>>2)]=HEAP32[a+(12>>2)]=HEAP32[a+(16>>2)]=HEAP32[a+(32>>2)]=HEAP32[a+(40>>2)]=1}var ENV={};function getExecutableName(){return thisProgram||"./this.program"}function getEnvStrings(){if(!getEnvStrings.strings){var lang=(typeof navigator==="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8";var env={"USER":"web_user","LOGNAME":"web_user","PATH":"/","PWD":"/","HOME":"/home/web_user","LANG":lang,"_":getExecutableName()};for(var x in ENV){env[x]=ENV[x]}var strings=[];for(var x in env){strings.push(x+"="+env[x])}getEnvStrings.strings=strings}return getEnvStrings.strings}function _environ_get(__environ,environ_buf){try{var bufSize=0;getEnvStrings().forEach(function(string,i){var 
ptr=environ_buf+bufSize;HEAP32[__environ+i*4>>2]=ptr;writeAsciiToMemory(string,ptr);bufSize+=string.length+1});return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}function _environ_sizes_get(penviron_count,penviron_buf_size){try{var strings=getEnvStrings();HEAP32[penviron_count>>2]=strings.length;var bufSize=0;strings.forEach(function(string){bufSize+=string.length+1});HEAP32[penviron_buf_size>>2]=bufSize;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}function _fd_close(fd){try{var stream=SYSCALLS.getStreamFromFD(fd);FS.close(stream);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}function _fd_fdstat_get(fd,pbuf){try{var stream=SYSCALLS.getStreamFromFD(fd);var type=stream.tty?2:FS.isDir(stream.mode)?3:FS.isLink(stream.mode)?7:4;HEAP8[pbuf>>0]=type;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}function _fd_read(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doReadv(stream,iov,iovcnt);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}function _fd_seek(fd,offset_low,offset_high,whence,newOffset){try{var stream=SYSCALLS.getStreamFromFD(fd);var HIGH_OFFSET=4294967296;var offset=offset_high*HIGH_OFFSET+(offset_low>>>0);var DOUBLE_LIMIT=9007199254740992;if(offset<=-DOUBLE_LIMIT||offset>=DOUBLE_LIMIT){return-61}FS.llseek(stream,offset,whence);tempI64=[stream.position>>>0,(tempDouble=stream.position,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[newOffset>>2]=tempI64[0],HEAP32[newOffset+4>>2]=tempI64[1];if(stream.getdents&&offset===0&&whence===0)stream.getdents=null;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}function _fd_write(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doWritev(stream,iov,iovcnt);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}function _flock(fd,operation){return 0}function _getTempRet0(){return getTempRet0()}function getHostByName(name){var ret=_malloc(20);var nameBuf=_malloc(name.length+1);stringToUTF8(name,nameBuf,name.length+1);HEAP32[ret>>2]=nameBuf;var aliasesBuf=_malloc(4);HEAP32[aliasesBuf>>2]=0;HEAP32[ret+4>>2]=aliasesBuf;var afinet=2;HEAP32[ret+8>>2]=afinet;HEAP32[ret+12>>2]=4;var addrListBuf=_malloc(12);HEAP32[addrListBuf>>2]=addrListBuf+8;HEAP32[addrListBuf+4>>2]=0;HEAP32[addrListBuf+8>>2]=inetPton4(DNS.lookup_name(name));HEAP32[ret+16>>2]=addrListBuf;return ret}function _gethostbyaddr(addr,addrlen,type){if(type!==2){setErrNo(5);return null}addr=HEAP32[addr>>2];var host=inetNtop4(addr);var lookup=DNS.lookup_addr(host);if(lookup){host=lookup}return getHostByName(host)}function _gethostbyname(name){return getHostByName(UTF8ToString(name))}function _getpwuid(){throw"getpwuid: TODO"}function _gettimeofday(ptr){var now=Date.now();HEAP32[ptr>>2]=now/1e3|0;HEAP32[ptr+4>>2]=now%1e3*1e3|0;return 0}function _glActiveTexture(x0){GLctx["activeTexture"](x0)}function _glAttachShader(program,shader){program=GL.programs[program];shader=GL.shaders[shader];program[shader.shaderType]=shader;GLctx.attachShader(program,shader)}function 
_glBeginQuery(target,id){GLctx["beginQuery"](target,GL.queries[id])}function _glBeginTransformFeedback(x0){GLctx["beginTransformFeedback"](x0)}function _glBindAttribLocation(program,index,name){GLctx.bindAttribLocation(GL.programs[program],index,UTF8ToString(name))}function _glBindBuffer(target,buffer){if(target==34962){GLctx.currentArrayBufferBinding=buffer}else if(target==34963){GLctx.currentElementArrayBufferBinding=buffer}if(target==35051){GLctx.currentPixelPackBufferBinding=buffer}else if(target==35052){GLctx.currentPixelUnpackBufferBinding=buffer}GLctx.bindBuffer(target,GL.buffers[buffer])}function _glBindBufferBase(target,index,buffer){GLctx["bindBufferBase"](target,index,GL.buffers[buffer])}function _glBindBufferRange(target,index,buffer,offset,ptrsize){GLctx["bindBufferRange"](target,index,GL.buffers[buffer],offset,ptrsize)}function _glBindFramebuffer(target,framebuffer){GLctx.bindFramebuffer(target,GL.framebuffers[framebuffer])}function _glBindRenderbuffer(target,renderbuffer){GLctx.bindRenderbuffer(target,GL.renderbuffers[renderbuffer])}function _glBindSampler(unit,sampler){GLctx["bindSampler"](unit,GL.samplers[sampler])}function _glBindTexture(target,texture){GLctx.bindTexture(target,GL.textures[texture])}function _glBindTransformFeedback(target,id){GLctx["bindTransformFeedback"](target,GL.transformFeedbacks[id])}function _glBindVertexArray(vao){GLctx["bindVertexArray"](GL.vaos[vao]);var ibo=GLctx.getParameter(34965);GLctx.currentElementArrayBufferBinding=ibo?ibo.name|0:0}function _glBlendEquation(x0){GLctx["blendEquation"](x0)}function _glBlendEquationSeparate(x0,x1){GLctx["blendEquationSeparate"](x0,x1)}function _glBlendFuncSeparate(x0,x1,x2,x3){GLctx["blendFuncSeparate"](x0,x1,x2,x3)}function _glBlitFramebuffer(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9){GLctx["blitFramebuffer"](x0,x1,x2,x3,x4,x5,x6,x7,x8,x9)}function _glBufferData(target,size,data,usage){if(GL.currentContext.version>=2){if(data){GLctx.bufferData(target,HEAPU8,usage,data,size)}else{GLctx.bufferData(target,size,usage)}}else{GLctx.bufferData(target,data?HEAPU8.subarray(data,data+size):size,usage)}}function _glBufferSubData(target,offset,size,data){if(GL.currentContext.version>=2){GLctx.bufferSubData(target,offset,HEAPU8,data,size);return}GLctx.bufferSubData(target,offset,HEAPU8.subarray(data,data+size))}function _glCheckFramebufferStatus(x0){return GLctx["checkFramebufferStatus"](x0)}function _glClear(x0){GLctx["clear"](x0)}function _glClearBufferfi(x0,x1,x2,x3){GLctx["clearBufferfi"](x0,x1,x2,x3)}function _glClearBufferfv(buffer,drawbuffer,value){GLctx["clearBufferfv"](buffer,drawbuffer,HEAPF32,value>>2)}function _glClearBufferuiv(buffer,drawbuffer,value){GLctx["clearBufferuiv"](buffer,drawbuffer,HEAPU32,value>>2)}function _glClearColor(x0,x1,x2,x3){GLctx["clearColor"](x0,x1,x2,x3)}function _glClearDepthf(x0){GLctx["clearDepth"](x0)}function _glClearStencil(x0){GLctx["clearStencil"](x0)}function _glClientWaitSync(sync,flags,timeoutLo,timeoutHi){return GLctx.clientWaitSync(GL.syncs[sync],flags,convertI32PairToI53(timeoutLo,timeoutHi))}function _glColorMask(red,green,blue,alpha){GLctx.colorMask(!!red,!!green,!!blue,!!alpha)}function _glCompileShader(shader){GLctx.compileShader(GL.shaders[shader])}function 
_glCompressedTexImage2D(target,level,internalFormat,width,height,border,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,imageSize,data)}else{GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,HEAPU8,data,imageSize)}return}GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,data?HEAPU8.subarray(data,data+imageSize):null)}function _glCompressedTexImage3D(target,level,internalFormat,width,height,depth,border,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexImage3D"](target,level,internalFormat,width,height,depth,border,imageSize,data)}else{GLctx["compressedTexImage3D"](target,level,internalFormat,width,height,depth,border,HEAPU8,data,imageSize)}}function _glCompressedTexSubImage2D(target,level,xoffset,yoffset,width,height,format,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,imageSize,data)}else{GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,HEAPU8,data,imageSize)}return}GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,data?HEAPU8.subarray(data,data+imageSize):null)}function _glCompressedTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data)}else{GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,HEAPU8,data,imageSize)}}function _glCopyBufferSubData(x0,x1,x2,x3,x4){GLctx["copyBufferSubData"](x0,x1,x2,x3,x4)}function _glCopyTexImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _glCopyTexSubImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexSubImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _glCreateProgram(){var id=GL.getNewId(GL.programs);var program=GLctx.createProgram();program.name=id;program.maxUniformLength=program.maxAttributeLength=program.maxUniformBlockNameLength=0;program.uniformIdCounter=1;GL.programs[id]=program;return id}function _glCreateShader(shaderType){var id=GL.getNewId(GL.shaders);GL.shaders[id]=GLctx.createShader(shaderType);GL.shaders[id].shaderType=shaderType&1?"vs":"fs";return id}function _glCullFace(x0){GLctx["cullFace"](x0)}function _glDeleteBuffers(n,buffers){for(var i=0;i>2];var buffer=GL.buffers[id];if(!buffer)continue;GLctx.deleteBuffer(buffer);buffer.name=0;GL.buffers[id]=null;if(id==GLctx.currentArrayBufferBinding)GLctx.currentArrayBufferBinding=0;if(id==GLctx.currentElementArrayBufferBinding)GLctx.currentElementArrayBufferBinding=0;if(id==GLctx.currentPixelPackBufferBinding)GLctx.currentPixelPackBufferBinding=0;if(id==GLctx.currentPixelUnpackBufferBinding)GLctx.currentPixelUnpackBufferBinding=0}}function _glDeleteFramebuffers(n,framebuffers){for(var i=0;i>2];var framebuffer=GL.framebuffers[id];if(!framebuffer)continue;GLctx.deleteFramebuffer(framebuffer);framebuffer.name=0;GL.framebuffers[id]=null}}function _glDeleteProgram(id){if(!id)return;var program=GL.programs[id];if(!program){GL.recordError(1281);return}GLctx.deleteProgram(program);program.name=0;GL.programs[id]=null}function _glDeleteQueries(n,ids){for(var i=0;i>2];var 
query=GL.queries[id];if(!query)continue;GLctx["deleteQuery"](query);GL.queries[id]=null}}function _glDeleteRenderbuffers(n,renderbuffers){for(var i=0;i>2];var renderbuffer=GL.renderbuffers[id];if(!renderbuffer)continue;GLctx.deleteRenderbuffer(renderbuffer);renderbuffer.name=0;GL.renderbuffers[id]=null}}function _glDeleteSamplers(n,samplers){for(var i=0;i>2];var sampler=GL.samplers[id];if(!sampler)continue;GLctx["deleteSampler"](sampler);sampler.name=0;GL.samplers[id]=null}}function _glDeleteShader(id){if(!id)return;var shader=GL.shaders[id];if(!shader){GL.recordError(1281);return}GLctx.deleteShader(shader);GL.shaders[id]=null}function _glDeleteSync(id){if(!id)return;var sync=GL.syncs[id];if(!sync){GL.recordError(1281);return}GLctx.deleteSync(sync);sync.name=0;GL.syncs[id]=null}function _glDeleteTextures(n,textures){for(var i=0;i>2];var texture=GL.textures[id];if(!texture)continue;GLctx.deleteTexture(texture);texture.name=0;GL.textures[id]=null}}function _glDeleteTransformFeedbacks(n,ids){for(var i=0;i>2];var transformFeedback=GL.transformFeedbacks[id];if(!transformFeedback)continue;GLctx["deleteTransformFeedback"](transformFeedback);transformFeedback.name=0;GL.transformFeedbacks[id]=null}}function _glDeleteVertexArrays(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}function _glDepthFunc(x0){GLctx["depthFunc"](x0)}function _glDepthMask(flag){GLctx.depthMask(!!flag)}function _glDetachShader(program,shader){GLctx.detachShader(GL.programs[program],GL.shaders[shader])}function _glDisable(x0){GLctx["disable"](x0)}function _glDisableVertexAttribArray(index){var cb=GL.currentContext.clientBuffers[index];cb.enabled=false;GLctx.disableVertexAttribArray(index)}function _glDrawArrays(mode,first,count){GL.preDrawHandleClientVertexAttribBindings(first+count);GLctx.drawArrays(mode,first,count);GL.postDrawHandleClientVertexAttribBindings()}function _glDrawArraysInstanced(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}var tempFixedLengthArray=[];function _glDrawBuffers(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}function _glDrawElements(mode,count,type,indices){var buf;if(!GLctx.currentElementArrayBufferBinding){var size=GL.calcBufLength(1,type,0,count);buf=GL.getTempIndexBuffer(size);GLctx.bindBuffer(34963,buf);GLctx.bufferSubData(34963,0,HEAPU8.subarray(indices,indices+size));indices=0}GL.preDrawHandleClientVertexAttribBindings(count);GLctx.drawElements(mode,count,type,indices);GL.postDrawHandleClientVertexAttribBindings(count);if(!GLctx.currentElementArrayBufferBinding){GLctx.bindBuffer(34963,null)}}function _glDrawElementsInstanced(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _glEnable(x0){GLctx["enable"](x0)}function _glEnableVertexAttribArray(index){var cb=GL.currentContext.clientBuffers[index];cb.enabled=true;GLctx.enableVertexAttribArray(index)}function _glEndQuery(x0){GLctx["endQuery"](x0)}function _glEndTransformFeedback(){GLctx["endTransformFeedback"]()}function _glFenceSync(condition,flags){var sync=GLctx.fenceSync(condition,flags);if(sync){var id=GL.getNewId(GL.syncs);sync.name=id;GL.syncs[id]=sync;return id}else{return 0}}function _glFinish(){GLctx["finish"]()}function _glFlush(){GLctx["flush"]()}function emscriptenWebGLGetBufferBinding(target){switch(target){case 34962:target=34964;break;case 34963:target=34965;break;case 35051:target=35053;break;case 35052:target=35055;break;case 
35982:target=35983;break;case 36662:target=36662;break;case 36663:target=36663;break;case 35345:target=35368;break}var buffer=GLctx.getParameter(target);if(buffer)return buffer.name|0;else return 0}function emscriptenWebGLValidateMapBufferTarget(target){switch(target){case 34962:case 34963:case 36662:case 36663:case 35051:case 35052:case 35882:case 35982:case 35345:return true;default:return false}}function _glFlushMappedBufferRange(target,offset,length){if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glFlushMappedBufferRange");return}var mapping=GL.mappedBuffers[emscriptenWebGLGetBufferBinding(target)];if(!mapping){GL.recordError(1282);err("buffer was never mapped in glFlushMappedBufferRange");return}if(!(mapping.access&16)){GL.recordError(1282);err("buffer was not mapped with GL_MAP_FLUSH_EXPLICIT_BIT in glFlushMappedBufferRange");return}if(offset<0||length<0||offset+length>mapping.length){GL.recordError(1281);err("invalid range in glFlushMappedBufferRange");return}GLctx.bufferSubData(target,mapping.offset,HEAPU8.subarray(mapping.mem+offset,mapping.mem+offset+length))}function _glFramebufferRenderbuffer(target,attachment,renderbuffertarget,renderbuffer){GLctx.framebufferRenderbuffer(target,attachment,renderbuffertarget,GL.renderbuffers[renderbuffer])}function _glFramebufferTexture2D(target,attachment,textarget,texture,level){GLctx.framebufferTexture2D(target,attachment,textarget,GL.textures[texture],level)}function _glFramebufferTextureLayer(target,attachment,texture,level,layer){GLctx.framebufferTextureLayer(target,attachment,GL.textures[texture],level,layer)}function _glFrontFace(x0){GLctx["frontFace"](x0)}function __glGenObject(n,buffers,createFunction,objectTable){for(var i=0;i>2]=id}}function _glGenBuffers(n,buffers){__glGenObject(n,buffers,"createBuffer",GL.buffers)}function _glGenFramebuffers(n,ids){__glGenObject(n,ids,"createFramebuffer",GL.framebuffers)}function _glGenQueries(n,ids){__glGenObject(n,ids,"createQuery",GL.queries)}function _glGenRenderbuffers(n,renderbuffers){__glGenObject(n,renderbuffers,"createRenderbuffer",GL.renderbuffers)}function _glGenSamplers(n,samplers){__glGenObject(n,samplers,"createSampler",GL.samplers)}function _glGenTextures(n,textures){__glGenObject(n,textures,"createTexture",GL.textures)}function _glGenTransformFeedbacks(n,ids){__glGenObject(n,ids,"createTransformFeedback",GL.transformFeedbacks)}function _glGenVertexArrays(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}function _glGenerateMipmap(x0){GLctx["generateMipmap"](x0)}function __glGetActiveAttribOrUniform(funcName,program,index,bufSize,length,size,type,name){program=GL.programs[program];var info=GLctx[funcName](program,index);if(info){var numBytesWrittenExclNull=name&&stringToUTF8(info.name,name,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull;if(size)HEAP32[size>>2]=info.size;if(type)HEAP32[type>>2]=info.type}}function _glGetActiveAttrib(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveAttrib",program,index,bufSize,length,size,type,name)}function _glGetActiveUniform(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveUniform",program,index,bufSize,length,size,type,name)}function _glGetActiveUniformBlockName(program,uniformBlockIndex,bufSize,length,uniformBlockName){program=GL.programs[program];var result=GLctx["getActiveUniformBlockName"](program,uniformBlockIndex);if(!result)return;if(uniformBlockName&&bufSize>0){var 
numBytesWrittenExclNull=stringToUTF8(result,uniformBlockName,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull}else{if(length)HEAP32[length>>2]=0}}function _glGetActiveUniformBlockiv(program,uniformBlockIndex,pname,params){if(!params){GL.recordError(1281);return}program=GL.programs[program];if(pname==35393){var name=GLctx["getActiveUniformBlockName"](program,uniformBlockIndex);HEAP32[params>>2]=name.length+1;return}var result=GLctx["getActiveUniformBlockParameter"](program,uniformBlockIndex,pname);if(result===null)return;if(pname==35395){for(var i=0;i>2]=result[i]}}else{HEAP32[params>>2]=result}}function _glGetActiveUniformsiv(program,uniformCount,uniformIndices,pname,params){if(!params){GL.recordError(1281);return}if(uniformCount>0&&uniformIndices==0){GL.recordError(1281);return}program=GL.programs[program];var ids=[];for(var i=0;i>2])}var result=GLctx["getActiveUniforms"](program,ids,pname);if(!result)return;var len=result.length;for(var i=0;i>2]=result[i]}}function _glGetAttribLocation(program,name){return GLctx.getAttribLocation(GL.programs[program],UTF8ToString(name))}function _glGetBufferSubData(target,offset,size,data){if(!data){GL.recordError(1281);return}GLctx["getBufferSubData"](target,offset,HEAPU8,data,size)}function _glGetError(){var error=GLctx.getError()||GL.lastError;GL.lastError=0;return error}function _glGetFramebufferAttachmentParameteriv(target,attachment,pname,params){var result=GLctx.getFramebufferAttachmentParameter(target,attachment,pname);if(result instanceof WebGLRenderbuffer||result instanceof WebGLTexture){result=result.name|0}HEAP32[params>>2]=result}function writeI53ToI64(ptr,num){HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}function emscriptenWebGLGetIndexed(target,index,data,type){if(!data){GL.recordError(1281);return}var result=GLctx["getIndexedParameter"](target,index);var ret;switch(typeof result){case"boolean":ret=result?1:0;break;case"number":ret=result;break;case"object":if(result===null){switch(target){case 35983:case 35368:ret=0;break;default:{GL.recordError(1280);return}}}else if(result instanceof WebGLBuffer){ret=result.name|0}else{GL.recordError(1280);return}break;default:GL.recordError(1280);return}switch(type){case 1:writeI53ToI64(data,ret);break;case 0:HEAP32[data>>2]=ret;break;case 2:HEAPF32[data>>2]=ret;break;case 4:HEAP8[data>>0]=ret?1:0;break;default:throw"internal emscriptenWebGLGetIndexed() error, bad type: "+type}}function _glGetIntegeri_v(target,index,data){emscriptenWebGLGetIndexed(target,index,data,0)}function emscriptenWebGLGet(name_,p,type){if(!p){GL.recordError(1281);return}var ret=undefined;switch(name_){case 36346:ret=1;break;case 36344:if(type!=0&&type!=1){GL.recordError(1280)}return;case 34814:case 36345:ret=0;break;case 34466:var formats=GLctx.getParameter(34467);ret=formats?formats.length:0;break;case 33390:ret=1048576;break;case 33309:if(GL.currentContext.version<2){GL.recordError(1282);return}var exts=GLctx.getSupportedExtensions()||[];ret=2*exts.length;break;case 33307:case 33308:if(GL.currentContext.version<2){GL.recordError(1280);return}ret=name_==33307?3:0;break}if(ret===undefined){var result=GLctx.getParameter(name_);switch(typeof result){case"number":ret=result;break;case"boolean":ret=result?1:0;break;case"string":GL.recordError(1280);return;case"object":if(result===null){switch(name_){case 34964:case 35725:case 34965:case 36006:case 36007:case 32873:case 34229:case 36662:case 36663:case 35053:case 35055:case 36010:case 35097:case 35869:case 32874:case 36389:case 35983:case 
35368:case 34068:{ret=0;break}default:{GL.recordError(1280);return}}}else if(result instanceof Float32Array||result instanceof Uint32Array||result instanceof Int32Array||result instanceof Array){for(var i=0;i>2]=result[i];break;case 2:HEAPF32[p+i*4>>2]=result[i];break;case 4:HEAP8[p+i>>0]=result[i]?1:0;break}}return}else{try{ret=result.name|0}catch(e){GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Unknown object returned from WebGL getParameter("+name_+")! (error: "+e+")");return}}break;default:GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Native code calling glGet"+type+"v("+name_+") and it returns "+result+" of type "+typeof result+"!");return}}switch(type){case 1:writeI53ToI64(p,ret);break;case 0:HEAP32[p>>2]=ret;break;case 2:HEAPF32[p>>2]=ret;break;case 4:HEAP8[p>>0]=ret?1:0;break}}function _glGetIntegerv(name_,p){emscriptenWebGLGet(name_,p,0)}function _glGetInternalformativ(target,internalformat,pname,bufSize,params){if(bufSize<0){GL.recordError(1281);return}if(!params){GL.recordError(1281);return}var ret=GLctx["getInternalformatParameter"](target,internalformat,pname);if(ret===null)return;for(var i=0;i>2]=ret[i]}}function _glGetProgramBinary(program,bufSize,length,binaryFormat,binary){GL.recordError(1282)}function _glGetProgramInfoLog(program,maxLength,length,infoLog){var log=GLctx.getProgramInfoLog(GL.programs[program]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetProgramiv(program,pname,p){if(!p){GL.recordError(1281);return}if(program>=GL.counter){GL.recordError(1281);return}program=GL.programs[program];if(pname==35716){var log=GLctx.getProgramInfoLog(program);if(log===null)log="(unknown error)";HEAP32[p>>2]=log.length+1}else if(pname==35719){if(!program.maxUniformLength){for(var i=0;i>2]=program.maxUniformLength}else if(pname==35722){if(!program.maxAttributeLength){for(var i=0;i>2]=program.maxAttributeLength}else if(pname==35381){if(!program.maxUniformBlockNameLength){for(var i=0;i>2]=program.maxUniformBlockNameLength}else{HEAP32[p>>2]=GLctx.getProgramParameter(program,pname)}}function _glGetQueryObjectuiv(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx["getQueryParameter"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}function _glGetQueryiv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx["getQuery"](target,pname)}function _glGetRenderbufferParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getRenderbufferParameter(target,pname)}function _glGetShaderInfoLog(shader,maxLength,length,infoLog){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetShaderPrecisionFormat(shaderType,precisionType,range,precision){var result=GLctx.getShaderPrecisionFormat(shaderType,precisionType);HEAP32[range>>2]=result.rangeMin;HEAP32[range+4>>2]=result.rangeMax;HEAP32[precision>>2]=result.precision}function _glGetShaderSource(shader,bufSize,length,source){var result=GLctx.getShaderSource(GL.shaders[shader]);if(!result)return;var 
numBytesWrittenExclNull=bufSize>0&&source?stringToUTF8(result,source,bufSize):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetShaderiv(shader,pname,p){if(!p){GL.recordError(1281);return}if(pname==35716){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var logLength=log?log.length+1:0;HEAP32[p>>2]=logLength}else if(pname==35720){var source=GLctx.getShaderSource(GL.shaders[shader]);var sourceLength=source?source.length+1:0;HEAP32[p>>2]=sourceLength}else{HEAP32[p>>2]=GLctx.getShaderParameter(GL.shaders[shader],pname)}}function _glGetString(name_){var ret=GL.stringCache[name_];if(!ret){switch(name_){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));ret=stringToNewUTF8(exts.join(" "));break;case 7936:case 7937:case 37445:case 37446:var s=GLctx.getParameter(name_);if(!s){GL.recordError(1280)}ret=s&&stringToNewUTF8(s);break;case 7938:var glVersion=GLctx.getParameter(7938);if(GL.currentContext.version>=2)glVersion="OpenGL ES 3.0 ("+glVersion+")";else{glVersion="OpenGL ES 2.0 ("+glVersion+")"}ret=stringToNewUTF8(glVersion);break;case 35724:var glslVersion=GLctx.getParameter(35724);var ver_re=/^WebGL GLSL ES ([0-9]\.[0-9][0-9]?)(?:$| .*)/;var ver_num=glslVersion.match(ver_re);if(ver_num!==null){if(ver_num[1].length==3)ver_num[1]=ver_num[1]+"0";glslVersion="OpenGL ES GLSL ES "+ver_num[1]+" ("+glslVersion+")"}ret=stringToNewUTF8(glslVersion);break;default:GL.recordError(1280)}GL.stringCache[name_]=ret}return ret}function _glGetStringi(name,index){if(GL.currentContext.version<2){GL.recordError(1282);return 0}var stringiCache=GL.stringiCache[name];if(stringiCache){if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index]}switch(name){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));exts=exts.map(function(e){return stringToNewUTF8(e)});stringiCache=GL.stringiCache[name]=exts;if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index];default:GL.recordError(1280);return 0}}function _glGetTexParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getTexParameter(target,pname)}function _glGetUniformBlockIndex(program,uniformBlockName){return GLctx["getUniformBlockIndex"](GL.programs[program],UTF8ToString(uniformBlockName))}function _glGetUniformIndices(program,uniformCount,uniformNames,uniformIndices){if(!uniformIndices){GL.recordError(1281);return}if(uniformCount>0&&(uniformNames==0||uniformIndices==0)){GL.recordError(1281);return}program=GL.programs[program];var names=[];for(var i=0;i>2]));var result=GLctx["getUniformIndices"](program,names);if(!result)return;var len=result.length;for(var i=0;i>2]=result[i]}}function _glGetUniformLocation(program,name){function getLeftBracePos(name){return name.slice(-1)=="]"&&name.lastIndexOf("[")}name=UTF8ToString(name);if(program=GL.programs[program]){var uniformLocsById=program.uniformLocsById;var uniformSizeAndIdsByName=program.uniformSizeAndIdsByName;var i,j;var arrayIndex=0;var uniformBaseName=name;var leftBrace=getLeftBracePos(name);if(!uniformLocsById){program.uniformLocsById=uniformLocsById={};program.uniformArrayNamesById={};for(i=0;i0?nm.slice(0,lb):nm;var 
id=uniformSizeAndIdsByName[arrayName]?uniformSizeAndIdsByName[arrayName][1]:program.uniformIdCounter;program.uniformIdCounter=Math.max(id+sz,program.uniformIdCounter);uniformSizeAndIdsByName[arrayName]=[sz,id];for(j=0;j0){arrayIndex=jstoi_q(name.slice(leftBrace+1))>>>0;uniformBaseName=name.slice(0,leftBrace)}var sizeAndId=uniformSizeAndIdsByName[uniformBaseName];if(sizeAndId&&arrayIndex0?"["+webglLoc+"]":""))}return webglLoc}else{GL.recordError(1282)}}function emscriptenWebGLGetUniform(program,location,params,type){if(!params){GL.recordError(1281);return}program=GL.programs[program];var data=GLctx.getUniform(program,webglGetUniformLocation(location));if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break}}else{for(var i=0;i>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break}}}}function _glGetUniformiv(program,location,params){emscriptenWebGLGetUniform(program,location,params,0)}function emscriptenWebGLGetVertexAttrib(index,pname,params,type){if(!params){GL.recordError(1281);return}if(GL.currentContext.clientBuffers[index].enabled){err("glGetVertexAttrib*v on client-side array: not supported, bad data returned")}var data=GLctx.getVertexAttrib(index,pname);if(pname==34975){HEAP32[params>>2]=data&&data["name"]}else if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break;case 5:HEAP32[params>>2]=Math.fround(data);break}}else{for(var i=0;i>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break;case 5:HEAP32[params+i*4>>2]=Math.fround(data[i]);break}}}}function _glGetVertexAttribiv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,5)}function _glInvalidateFramebuffer(target,numAttachments,attachments){var list=tempFixedLengthArray[numAttachments];for(var i=0;i>2]}GLctx["invalidateFramebuffer"](target,list)}function _glIsEnabled(x0){return GLctx["isEnabled"](x0)}function _glIsVertexArray(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}function _glLinkProgram(program){program=GL.programs[program];GLctx.linkProgram(program);program.uniformLocsById=0;program.uniformSizeAndIdsByName={};[program["vs"],program["fs"]].forEach(function(s){Object.keys(s.explicitUniformLocations).forEach(function(shaderLocation){var loc=s.explicitUniformLocations[shaderLocation];program.uniformSizeAndIdsByName[shaderLocation]=[1,loc];program.uniformIdCounter=Math.max(program.uniformIdCounter,loc+1)})});function copyKeys(dst,src){Object.keys(src).forEach(function(key){dst[key]=src[key]})}program.explicitUniformBindings={};program.explicitSamplerBindings={};[program["vs"],program["fs"]].forEach(function(s){copyKeys(program.explicitUniformBindings,s.explicitUniformBindings);copyKeys(program.explicitSamplerBindings,s.explicitSamplerBindings)});program.explicitProgramBindingsApplied=0}function _glMapBufferRange(target,offset,length,access){if(access!=26&&access!=10){err("glMapBufferRange is only supported when access is MAP_WRITE|INVALIDATE_BUFFER");return 0}if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glMapBufferRange");return 0}var mem=_malloc(length);if(!mem)return 0;GL.mappedBuffers[emscriptenWebGLGetBufferBinding(target)]={offset:offset,length:length,mem:mem,access:access};return mem}function _glPixelStorei(pname,param){if(pname==3317){GL.unpackAlignment=param}GLctx.pixelStorei(pname,param)}function 
_glPolygonOffset(x0,x1){GLctx["polygonOffset"](x0,x1)}function _glProgramBinary(program,binaryFormat,binary,length){GL.recordError(1280)}function _glProgramParameteri(program,pname,value){GL.recordError(1280)}function _glReadBuffer(x0){GLctx["readBuffer"](x0)}function computeUnpackAlignedImageSize(width,height,sizePerPixel,alignment){function roundedToNextMultipleOf(x,y){return x+y-1&-y}var plainRowSize=width*sizePerPixel;var alignedRowSize=roundedToNextMultipleOf(plainRowSize,alignment);return height*alignedRowSize}function __colorChannelsInGlTextureFormat(format){var colorChannels={5:3,6:4,8:2,29502:3,29504:4,26917:2,26918:2,29846:3,29847:4};return colorChannels[format-6402]||1}function heapObjectForWebGLType(type){type-=5120;if(type==0)return HEAP8;if(type==1)return HEAPU8;if(type==2)return HEAP16;if(type==4)return HEAP32;if(type==6)return HEAPF32;if(type==5||type==28922||type==28520||type==30779||type==30782)return HEAPU32;return HEAPU16}function heapAccessShiftForWebGLHeap(heap){return 31-Math.clz32(heap.BYTES_PER_ELEMENT)}function emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat){var heap=heapObjectForWebGLType(type);var shift=heapAccessShiftForWebGLHeap(heap);var byteSize=1<>shift,pixels+bytes>>shift)}function _glReadPixels(x,y,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelPackBufferBinding){GLctx.readPixels(x,y,width,height,format,type,pixels)}else{var heap=heapObjectForWebGLType(type);GLctx.readPixels(x,y,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}return}var pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,format);if(!pixelData){GL.recordError(1280);return}GLctx.readPixels(x,y,width,height,format,type,pixelData)}function _glRenderbufferStorage(x0,x1,x2,x3){GLctx["renderbufferStorage"](x0,x1,x2,x3)}function _glRenderbufferStorageMultisample(x0,x1,x2,x3,x4){GLctx["renderbufferStorageMultisample"](x0,x1,x2,x3,x4)}function _glSamplerParameteri(sampler,pname,param){GLctx["samplerParameteri"](GL.samplers[sampler],pname,param)}function _glScissor(x0,x1,x2,x3){GLctx["scissor"](x0,x1,x2,x3)}function find_closing_parens_index(arr,i,opening="(",closing=")"){for(var nesting=0;i32)}function nextWhitespace(str,i){while(!isWhitespace(str,i))++i;return i}function classifyChar(str,idx){var cc=str.charCodeAt(idx);if(cc>32){if(cc<48)return 1;if(cc<58)return 2;if(cc<65)return 1;if(cc<91||cc==95)return 3;if(cc<97)return 1;if(cc<123)return 3;return 1}return cc<33?0:4}function tokenize(exprString,keepWhitespace){var out=[],len=exprString.length;for(var i=0;i<=len;++i){var kind=classifyChar(exprString,i);if(kind==2||kind==3){for(var j=i+1;j<=len;++j){var kind2=classifyChar(exprString,j);if(kind2!=kind&&(kind2!=2||kind!=3)){out.push(exprString.substring(i,j));i=j-1;break}}}else if(kind==1){var op2=exprString.substr(i,2);if(["<=",">=","==","!=","&&","||"].includes(op2)){out.push(op2);++i}else{out.push(exprString[i])}}}return out}function expandMacros(str,lineStart,lineEnd){if(lineEnd===undefined)lineEnd=str.length;var len=str.length;var out="";for(var i=lineStart;i1||typeof tokens[0]!="function"){tokens=function(tokens){var i,j,p,operatorAndPriority=-2;for(j=0;j",">=","==","!=","&&","||","("].indexOf(tokens[j]))>operatorAndPriority){i=j;operatorAndPriority=p}}if(operatorAndPriority==13){var j=find_closing_parens_index(tokens,i);if(j){tokens.splice(i,j+1-i,buildExprTree(tokens.slice(i+1,j)));return tokens}}if(operatorAndPriority==4){i=tokens.lastIndexOf("!");var 
innerExpr=buildExprTree(tokens.slice(i+1,i+2));tokens.splice(i,2,function(){return!innerExpr()});return tokens}if(operatorAndPriority>=0){var left=buildExprTree(tokens.slice(0,i));var right=buildExprTree(tokens.slice(i+1));switch(tokens[i]){case"&&":return[function(){return left()&&right()}];case"||":return[function(){return left()||right()}];case"==":return[function(){return left()==right()}];case"!=":return[function(){return left()!=right()}];case"<":return[function(){return left()":return[function(){return left()>right()}];case">=":return[function(){return left()>=right()}];case"+":return[function(){return left()+right()}];case"-":return[function(){return left()-right()}];case"*":return[function(){return left()*right()}];case"/":return[function(){return Math.floor(left()/right())}]}}var num=jstoi_q(tokens[i]);return[function(){return num}]}(tokens)}return tokens[0]}for(;i0){var macroEnd=expression.indexOf(")",macroStart);let params=expression.substring(macroStart+1,macroEnd).split(",").map(x=>x.trim());let value=tokenize(expression.substring(macroEnd+1).trim());defs[expression.substring(0,macroStart)]=function(args){var ret="";value.forEach(x=>{var argIndex=params.indexOf(x);ret+=argIndex>=0?args[argIndex]:x});return ret}}else{let value=expandMacros(expression.substring(firstWs+1).trim(),0);defs[expression.substring(0,firstWs)]=function(){return value}}}break;case"undef":if(thisLineIsInActivePreprocessingBlock)delete defs[expression];break;default:if(directive!="version"&&directive!="pragma"&&directive!="extension"){}out+=expandMacros(code,lineStart,i)+"\n"}}return out}function remove_cpp_comments_in_shaders(code){var i=0,out="",ch,next,len=code.length;for(;i=0&&explicitUniformLocations[match[5]]<1048576)){console.error('Specified an out of range layout(location=x) directive "'+explicitUniformLocations[match[5]]+'"! ('+match[0]+")");GL.recordError(1281);return}}source=source.replace(regex,"$2");GL.shaders[shader].explicitUniformLocations=explicitUniformLocations;var bindingRegex=/layout\s*\(.*?binding\s*=\s*(-?\d+).*?\)\s*uniform\s+(\w+)\s+(\w+)?/g,samplerBindings={},uniformBindings={},bindingMatch;while(bindingMatch=bindingRegex.exec(source)){var arrayLength=1;for(var i=bindingMatch.index;i=0&&binding+arrayLength<=numBindingPoints)){console.error('Specified an out of range layout(binding=x) directive "'+binding+'"! ('+bindingMatch[0]+"). 
Valid range is [0, "+numBindingPoints+"-1]");GL.recordError(1281);return}}source=source.replace(/layout\s*\(.*?binding\s*=\s*([-\d]+).*?\)/g,"");source=source.replace(/(layout\s*\((.*?)),\s*binding\s*=\s*([-\d]+)\)/g,"$1)");source=source.replace(/layout\s*\(\s*binding\s*=\s*([-\d]+)\s*,(.*?)\)/g,"layout($2)");GL.shaders[shader].explicitSamplerBindings=samplerBindings;GL.shaders[shader].explicitUniformBindings=uniformBindings;GLctx.shaderSource(GL.shaders[shader],source)}function _glStencilFuncSeparate(x0,x1,x2,x3){GLctx["stencilFuncSeparate"](x0,x1,x2,x3)}function _glStencilMask(x0){GLctx["stencilMask"](x0)}function _glStencilOpSeparate(x0,x1,x2,x3){GLctx["stencilOpSeparate"](x0,x1,x2,x3)}function _glTexImage2D(target,level,internalFormat,width,height,border,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,null)}return}GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels?emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat):null)}function _glTexImage3D(target,level,internalFormat,width,height,depth,border,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,null)}}function _glTexParameterf(x0,x1,x2){GLctx["texParameterf"](x0,x1,x2)}function _glTexParameteri(x0,x1,x2){GLctx["texParameteri"](x0,x1,x2)}function _glTexParameteriv(target,pname,params){var param=HEAP32[params>>2];GLctx.texParameteri(target,pname,param)}function _glTexStorage2D(x0,x1,x2,x3,x4){GLctx["texStorage2D"](x0,x1,x2,x3,x4)}function _glTexStorage3D(x0,x1,x2,x3,x4,x5){GLctx["texStorage3D"](x0,x1,x2,x3,x4,x5)}function _glTexSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,null)}return}var pixelData=null;if(pixels)pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,0);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixelData)}function _glTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels)}else if(pixels){var 
heap=heapObjectForWebGLType(type);GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,null)}}function _glTransformFeedbackVaryings(program,count,varyings,bufferMode){program=GL.programs[program];var vars=[];for(var i=0;i>2]));GLctx["transformFeedbackVaryings"](program,vars,bufferMode)}var miniTempWebGLFloatBuffers=[];function _glUniform1fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform1fv(webglGetUniformLocation(location),HEAPF32,value>>2,count);return}if(count<=288){var view=miniTempWebGLFloatBuffers[count-1];for(var i=0;i>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1fv(webglGetUniformLocation(location),view)}function _glUniform1i(location,v0){GLctx.uniform1i(webglGetUniformLocation(location),v0)}var __miniTempWebGLIntBuffers=[];function _glUniform1iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform1iv(webglGetUniformLocation(location),HEAP32,value>>2,count);return}if(count<=288){var view=__miniTempWebGLIntBuffers[count-1];for(var i=0;i>2]}}else{var view=HEAP32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1iv(webglGetUniformLocation(location),view)}function _glUniform1uiv(location,count,value){GLctx.uniform1uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count)}function _glUniform2fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform2fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*2);return}if(count<=144){var view=miniTempWebGLFloatBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2fv(webglGetUniformLocation(location),view)}function _glUniform2iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform2iv(webglGetUniformLocation(location),HEAP32,value>>2,count*2);return}if(count<=144){var view=__miniTempWebGLIntBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2iv(webglGetUniformLocation(location),view)}function _glUniform2uiv(location,count,value){GLctx.uniform2uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*2)}function _glUniform3fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform3fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*3);return}if(count<=96){var view=miniTempWebGLFloatBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3fv(webglGetUniformLocation(location),view)}function _glUniform3iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform3iv(webglGetUniformLocation(location),HEAP32,value>>2,count*3);return}if(count<=96){var view=__miniTempWebGLIntBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3iv(webglGetUniformLocation(location),view)}function _glUniform3uiv(location,count,value){GLctx.uniform3uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*3)}function 
_glUniform4fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform4fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*4);return}if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<4*count;i+=4){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4fv(webglGetUniformLocation(location),view)}function _glUniform4iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform4iv(webglGetUniformLocation(location),HEAP32,value>>2,count*4);return}if(count<=72){var view=__miniTempWebGLIntBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2];view[i+3]=HEAP32[value+(4*i+12)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4iv(webglGetUniformLocation(location),view)}function _glUniform4uiv(location,count,value){GLctx.uniform4uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*4)}function _glUniformBlockBinding(program,uniformBlockIndex,uniformBlockBinding){program=GL.programs[program];GLctx["uniformBlockBinding"](program,uniformBlockIndex,uniformBlockBinding)}function _glUniformMatrix3fv(location,count,transpose,value){if(GL.currentContext.version>=2){GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*9);return}if(count<=32){var view=miniTempWebGLFloatBuffers[9*count-1];for(var i=0;i<9*count;i+=9){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2];view[i+4]=HEAPF32[value+(4*i+16)>>2];view[i+5]=HEAPF32[value+(4*i+20)>>2];view[i+6]=HEAPF32[value+(4*i+24)>>2];view[i+7]=HEAPF32[value+(4*i+28)>>2];view[i+8]=HEAPF32[value+(4*i+32)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*36>>2)}GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,view)}function _glUniformMatrix4fv(location,count,transpose,value){if(GL.currentContext.version>=2){GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*16);return}if(count<=18){var view=miniTempWebGLFloatBuffers[16*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<16*count;i+=16){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3];view[i+4]=heap[dst+4];view[i+5]=heap[dst+5];view[i+6]=heap[dst+6];view[i+7]=heap[dst+7];view[i+8]=heap[dst+8];view[i+9]=heap[dst+9];view[i+10]=heap[dst+10];view[i+11]=heap[dst+11];view[i+12]=heap[dst+12];view[i+13]=heap[dst+13];view[i+14]=heap[dst+14];view[i+15]=heap[dst+15]}}else{var view=HEAPF32.subarray(value>>2,value+count*64>>2)}GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,view)}function _glUnmapBuffer(target){if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glUnmapBuffer");return 0}var buffer=emscriptenWebGLGetBufferBinding(target);var mapping=GL.mappedBuffers[buffer];if(!mapping){GL.recordError(1282);err("buffer was never mapped in glUnmapBuffer");return 0}GL.mappedBuffers[buffer]=null;if(!(mapping.access&16))if(GL.currentContext.version>=2){GLctx.bufferSubData(target,mapping.offset,HEAPU8,mapping.mem,mapping.length)}else{GLctx.bufferSubData(target,mapping.offset,HEAPU8.subarray(mapping.mem,mapping.mem+mapping.length))}_free(mapping.mem);return 1}function webglApplyExplicitProgramBindings(){var 
p=GLctx.currentProgram;if(!p.explicitProgramBindingsApplied){if(GL.currentContext.version>=2){Object.keys(p.explicitUniformBindings).forEach(function(ubo){var bindings=p.explicitUniformBindings[ubo];for(var i=0;i1?"["+i+"]":""));GLctx.uniformBlockBinding(p,blockIndex,bindings[0]+i)}})}Object.keys(p.explicitSamplerBindings).forEach(function(sampler){var bindings=p.explicitSamplerBindings[sampler];for(var i=0;i>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2],HEAPF32[v+12>>2])}function _glVertexAttribIPointer(index,size,type,stride,ptr){var cb=GL.currentContext.clientBuffers[index];if(!GLctx.currentArrayBufferBinding){cb.size=size;cb.type=type;cb.normalized=false;cb.stride=stride;cb.ptr=ptr;cb.clientside=true;cb.vertexAttribPointerAdaptor=function(index,size,type,normalized,stride,ptr){this.vertexAttribIPointer(index,size,type,stride,ptr)};return}cb.clientside=false;GLctx["vertexAttribIPointer"](index,size,type,stride,ptr)}function _glVertexAttribPointer(index,size,type,normalized,stride,ptr){var cb=GL.currentContext.clientBuffers[index];if(!GLctx.currentArrayBufferBinding){cb.size=size;cb.type=type;cb.normalized=normalized;cb.stride=stride;cb.ptr=ptr;cb.clientside=true;cb.vertexAttribPointerAdaptor=function(index,size,type,normalized,stride,ptr){this.vertexAttribPointer(index,size,type,normalized,stride,ptr)};return}cb.clientside=false;GLctx.vertexAttribPointer(index,size,type,!!normalized,stride,ptr)}function _glViewport(x0,x1,x2,x3){GLctx["viewport"](x0,x1,x2,x3)}function _llvm_eh_typeid_for(type){return type}function _mktime(tmPtr){_tzset();var date=new Date(HEAP32[tmPtr+20>>2]+1900,HEAP32[tmPtr+16>>2],HEAP32[tmPtr+12>>2],HEAP32[tmPtr+8>>2],HEAP32[tmPtr+4>>2],HEAP32[tmPtr>>2],0);var dst=HEAP32[tmPtr+32>>2];var guessedOffset=date.getTimezoneOffset();var start=new Date(date.getFullYear(),0,1);var summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dstOffset=Math.min(winterOffset,summerOffset);if(dst<0){HEAP32[tmPtr+32>>2]=Number(summerOffset!=winterOffset&&dstOffset==guessedOffset)}else if(dst>0!=(dstOffset==guessedOffset)){var nonDstOffset=Math.max(winterOffset,summerOffset);var trueOffset=dst>0?dstOffset:nonDstOffset;date.setTime(date.getTime()+(trueOffset-guessedOffset)*6e4)}HEAP32[tmPtr+24>>2]=date.getDay();var yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();return date.getTime()/1e3|0}function _setTempRet0(val){setTempRet0(val)}function _sigaction(signum,act,oldact){return 0}function _sigemptyset(set){HEAP32[set>>2]=0;return 0}function __isLeapYear(year){return year%4===0&&(year%100!==0||year%400===0)}function __arraySum(array,index){var sum=0;for(var i=0;i<=index;sum+=array[i++]){}return sum}var __MONTH_DAYS_LEAP=[31,29,31,30,31,30,31,31,30,31,30,31];var __MONTH_DAYS_REGULAR=[31,28,31,30,31,30,31,31,30,31,30,31];function __addDays(date,days){var newDate=new Date(date.getTime());while(days>0){var leap=__isLeapYear(newDate.getFullYear());var currentMonth=newDate.getMonth();var 
daysInCurrentMonth=(leap?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR)[currentMonth];if(days>daysInCurrentMonth-newDate.getDate()){days-=daysInCurrentMonth-newDate.getDate()+1;newDate.setDate(1);if(currentMonth<11){newDate.setMonth(currentMonth+1)}else{newDate.setMonth(0);newDate.setFullYear(newDate.getFullYear()+1)}}else{newDate.setDate(newDate.getDate()+days);return newDate}}return newDate}function _strftime(s,maxsize,format,tm){var tm_zone=HEAP32[tm+40>>2];var date={tm_sec:HEAP32[tm>>2],tm_min:HEAP32[tm+4>>2],tm_hour:HEAP32[tm+8>>2],tm_mday:HEAP32[tm+12>>2],tm_mon:HEAP32[tm+16>>2],tm_year:HEAP32[tm+20>>2],tm_wday:HEAP32[tm+24>>2],tm_yday:HEAP32[tm+28>>2],tm_isdst:HEAP32[tm+32>>2],tm_gmtoff:HEAP32[tm+36>>2],tm_zone:tm_zone?UTF8ToString(tm_zone):""};var pattern=UTF8ToString(format);var EXPANSION_RULES_1={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"};for(var rule in EXPANSION_RULES_1){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_1[rule])}var WEEKDAYS=["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"];var MONTHS=["January","February","March","April","May","June","July","August","September","October","November","December"];function leadingSomething(value,digits,character){var str=typeof value==="number"?value.toString():value||"";while(str.length0?1:0}var compare;if((compare=sgn(date1.getFullYear()-date2.getFullYear()))===0){if((compare=sgn(date1.getMonth()-date2.getMonth()))===0){compare=sgn(date1.getDate()-date2.getDate())}}return compare}function getFirstWeekStartDate(janFourth){switch(janFourth.getDay()){case 0:return new Date(janFourth.getFullYear()-1,11,29);case 1:return janFourth;case 2:return new Date(janFourth.getFullYear(),0,3);case 3:return new Date(janFourth.getFullYear(),0,2);case 4:return new Date(janFourth.getFullYear(),0,1);case 5:return new Date(janFourth.getFullYear()-1,11,31);case 6:return new Date(janFourth.getFullYear()-1,11,30)}}function getWeekBasedYear(date){var thisDate=__addDays(new Date(date.tm_year+1900,0,1),date.tm_yday);var janFourthThisYear=new Date(thisDate.getFullYear(),0,4);var janFourthNextYear=new Date(thisDate.getFullYear()+1,0,4);var firstWeekStartThisYear=getFirstWeekStartDate(janFourthThisYear);var firstWeekStartNextYear=getFirstWeekStartDate(janFourthNextYear);if(compareByDay(firstWeekStartThisYear,thisDate)<=0){if(compareByDay(firstWeekStartNextYear,thisDate)<=0){return thisDate.getFullYear()+1}else{return thisDate.getFullYear()}}else{return thisDate.getFullYear()-1}}var EXPANSION_RULES_2={"%a":function(date){return WEEKDAYS[date.tm_wday].substring(0,3)},"%A":function(date){return WEEKDAYS[date.tm_wday]},"%b":function(date){return MONTHS[date.tm_mon].substring(0,3)},"%B":function(date){return MONTHS[date.tm_mon]},"%C":function(date){var year=date.tm_year+1900;return leadingNulls(year/100|0,2)},"%d":function(date){return leadingNulls(date.tm_mday,2)},"%e":function(date){return leadingSomething(date.tm_mday,2," ")},"%g":function(date){return getWeekBasedYear(date).toString().substring(2)},"%G":function(date){return getWeekBasedYear(date)},"%H":function(date){return leadingNulls(date.tm_hour,2)},"%I":function(date){var twelveHour=date.tm_hour;if(twelveHour==0)twelveHour=12;else 
if(twelveHour>12)twelveHour-=12;return leadingNulls(twelveHour,2)},"%j":function(date){return leadingNulls(date.tm_mday+__arraySum(__isLeapYear(date.tm_year+1900)?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,date.tm_mon-1),3)},"%m":function(date){return leadingNulls(date.tm_mon+1,2)},"%M":function(date){return leadingNulls(date.tm_min,2)},"%n":function(){return"\n"},"%p":function(date){if(date.tm_hour>=0&&date.tm_hour<12){return"AM"}else{return"PM"}},"%S":function(date){return leadingNulls(date.tm_sec,2)},"%t":function(){return"\t"},"%u":function(date){return date.tm_wday||7},"%U":function(date){var janFirst=new Date(date.tm_year+1900,0,1);var firstSunday=janFirst.getDay()===0?janFirst:__addDays(janFirst,7-janFirst.getDay());var endDate=new Date(date.tm_year+1900,date.tm_mon,date.tm_mday);if(compareByDay(firstSunday,endDate)<0){var februaryFirstUntilEndMonth=__arraySum(__isLeapYear(endDate.getFullYear())?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,endDate.getMonth()-1)-31;var firstSundayUntilEndJanuary=31-firstSunday.getDate();var days=firstSundayUntilEndJanuary+februaryFirstUntilEndMonth+endDate.getDate();return leadingNulls(Math.ceil(days/7),2)}return compareByDay(firstSunday,janFirst)===0?"01":"00"},"%V":function(date){var janFourthThisYear=new Date(date.tm_year+1900,0,4);var janFourthNextYear=new Date(date.tm_year+1901,0,4);var firstWeekStartThisYear=getFirstWeekStartDate(janFourthThisYear);var firstWeekStartNextYear=getFirstWeekStartDate(janFourthNextYear);var endDate=__addDays(new Date(date.tm_year+1900,0,1),date.tm_yday);if(compareByDay(endDate,firstWeekStartThisYear)<0){return"53"}if(compareByDay(firstWeekStartNextYear,endDate)<=0){return"01"}var daysDifference;if(firstWeekStartThisYear.getFullYear()=0;off=Math.abs(off)/60;off=off/60*100+off%60;return(ahead?"+":"-")+String("0000"+off).slice(-4)},"%Z":function(date){return date.tm_zone},"%%":function(){return"%"}};for(var rule in EXPANSION_RULES_2){if(pattern.includes(rule)){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_2[rule](date))}}var bytes=intArrayFromString(pattern,false);if(bytes.length>maxsize){return 0}writeArrayToMemory(bytes,s);return bytes.length-1}function _time(ptr){var ret=Date.now()/1e3|0;if(ptr){HEAP32[ptr>>2]=ret}return ret}function setFileTime(path,time){path=UTF8ToString(path);try{FS.utime(path,time,time);return 0}catch(e){if(!(e instanceof FS.ErrnoError))throw e+" : "+stackTrace();setErrNo(e.errno);return-1}}function _utime(path,times){var time;if(times){time=HEAP32[times+4>>2]*1e3}else{time=Date.now()}return setFileTime(path,time)}var FSNode=function(parent,name,mode,rdev){if(!parent){parent=this}this.parent=parent;this.mount=parent.mount;this.mounted=null;this.id=FS.nextInode++;this.name=name;this.mode=mode;this.node_ops={};this.stream_ops={};this.rdev=rdev};var readMode=292|73;var writeMode=146;Object.defineProperties(FSNode.prototype,{read:{get:function(){return(this.mode&readMode)===readMode},set:function(val){val?this.mode|=readMode:this.mode&=~readMode}},write:{get:function(){return(this.mode&writeMode)===writeMode},set:function(val){val?this.mode|=writeMode:this.mode&=~writeMode}},isFolder:{get:function(){return FS.isDir(this.mode)}},isDevice:{get:function(){return FS.isChrdev(this.mode)}}});FS.FSNode=FSNode;FS.staticInit();Module["FS_createPath"]=FS.createPath;Module["FS_createDataFile"]=FS.createDataFile;Module["requestFullscreen"]=function Module_requestFullscreen(lockPointer,resizeCanvas){Browser.requestFullscreen(lockPointer,resizeCanvas)};Module["requestAnimationFrame"]=function 
Module_requestAnimationFrame(func){Browser.requestAnimationFrame(func)};Module["setCanvasSize"]=function Module_setCanvasSize(width,height,noUpdates){Browser.setCanvasSize(width,height,noUpdates)};Module["pauseMainLoop"]=function Module_pauseMainLoop(){Browser.mainLoop.pause()};Module["resumeMainLoop"]=function Module_resumeMainLoop(){Browser.mainLoop.resume()};Module["getUserMedia"]=function Module_getUserMedia(){Browser.getUserMedia()};Module["createContext"]=function Module_createContext(canvas,useWebGL,setInModule,webGLContextAttributes){return Browser.createContext(canvas,useWebGL,setInModule,webGLContextAttributes)};var GLctx;for(var i=0;i<32;++i)tempFixedLengthArray.push(new Array(i));var miniTempWebGLFloatBuffersStorage=new Float32Array(288);for(var i=0;i<288;++i){miniTempWebGLFloatBuffers[i]=miniTempWebGLFloatBuffersStorage.subarray(0,i+1)}var __miniTempWebGLIntBuffersStorage=new Int32Array(288);for(var i=0;i<288;++i){__miniTempWebGLIntBuffers[i]=__miniTempWebGLIntBuffersStorage.subarray(0,i+1)}function intArrayFromString(stringy,dontAddNull,length){var len=length>0?length:lengthBytesUTF8(stringy)+1;var u8array=new Array(len);var numBytesWritten=stringToUTF8Array(stringy,u8array,0,u8array.length);if(dontAddNull)u8array.length=numBytesWritten;return u8array}var asmLibraryArg={"ee":_JS_Accelerometer_IsRunning,"sb":_JS_Accelerometer_Start,"rb":_JS_Accelerometer_Stop,"ie":_JS_Cursor_SetImage,"Ua":_JS_Cursor_SetShow,"Ca":_JS_DOM_MapViewportCoordinateToElementLocalCoordinate,"Pd":_JS_DOM_UnityCanvasSelector,"Hd":_JS_FileSystem_Initialize,"Y":_JS_FileSystem_Sync,"ce":_JS_GravitySensor_IsRunning,"ob":_JS_GravitySensor_Start,"nb":_JS_GravitySensor_Stop,"ae":_JS_Gyroscope_IsRunning,"mb":_JS_Gyroscope_Start,"lb":_JS_Gyroscope_Stop,"de":_JS_LinearAccelerationSensor_IsRunning,"qb":_JS_LinearAccelerationSensor_Start,"pb":_JS_LinearAccelerationSensor_Stop,"Ng":_JS_Log_Dump,"Qd":_JS_Log_StackTrace,"fe":_JS_OrientationSensor_IsRunning,"vb":_JS_OrientationSensor_Start,"ub":_JS_OrientationSensor_Stop,"zb":_JS_RequestDeviceSensorPermissionsOnTouch,"Ld":_JS_RunQuitCallbacks,"$d":_JS_ScreenOrientation_DeInit,"ge":_JS_ScreenOrientation_Init,"X":_JS_ScreenOrientation_Lock,"Be":_JS_Sound_Create_Channel,"Ha":_JS_Sound_GetLength,"we":_JS_Sound_GetLoadState,"ue":_JS_Sound_Init,"Ae":_JS_Sound_IsStopped,"Mb":_JS_Sound_Load,"ve":_JS_Sound_Load_PCM,"Fa":_JS_Sound_Play,"Ia":_JS_Sound_ReleaseInstance,"Ab":_JS_Sound_ResumeIfNeeded,"xe":_JS_Sound_Set3D,"se":_JS_Sound_SetListenerOrientation,"te":_JS_Sound_SetListenerPosition,"Ob":_JS_Sound_SetLoop,"Nb":_JS_Sound_SetLoopPoints,"la":_JS_Sound_SetPaused,"Z":_JS_Sound_SetPitch,"ze":_JS_Sound_SetPosition,"ye":_JS_Sound_SetVolume,"ma":_JS_Sound_Stop,"ea":_JS_SystemInfo_GetCanvasClientSize,"Lb":_JS_SystemInfo_GetDocumentURL,"db":_JS_SystemInfo_GetGPUInfo,"kb":_JS_SystemInfo_GetMatchWebGLToCanvasSize,"eb":_JS_SystemInfo_GetMemory,"fb":_JS_SystemInfo_GetOS,"hb":_JS_SystemInfo_GetPreferredDevicePixelRatio,"Sd":_JS_SystemInfo_GetScreenSize,"je":_JS_SystemInfo_HasAstcHdr,"gb":_JS_SystemInfo_HasCursorLock,"_d":_JS_SystemInfo_HasFullscreen,"ka":_JS_SystemInfo_HasWebGL,"Od":_JS_UnityEngineShouldQuit,"o":___cxa_allocate_exception,"g":___cxa_begin_catch,"n":___cxa_end_catch,"e":___cxa_find_matching_catch_2,"a":___cxa_find_matching_catch_3,"tc":___cxa_find_matching_catch_4,"ra":___cxa_free_exception,"Fc":___cxa_rethrow,"P":___cxa_throw,"Jc":___gmtime_r,"Kc":___localtime_r,"i":___resumeException,"Rc":___sys__newselect,"Sc":___sys_access,"Cc":___sys_chmod,"nd":___sys_connect,"N":___sys_f
cntl64,"he":___sys_fstat64,"be":___sys_ftruncate64,"zc":___sys_getcwd,"wc":___sys_getdents64,"xa":___sys_getpid,"Pc":___sys_getrusage,"Fe":___sys_getuid32,"bb":___sys_ioctl,"Dc":___sys_lstat64,"yc":___sys_mkdir,"Qc":___sys_mmap2,"Lc":___sys_munmap,"Da":___sys_open,"Ec":___sys_pipe,"Qg":___sys_readlink,"Tc":___sys_recvfrom,"Ac":___sys_rename,"xc":___sys_rmdir,"Uc":___sys_sendto,"Gc":___sys_shutdown,"Hc":___sys_socket,"tb":___sys_stat64,"Ce":___sys_statfs64,"De":___sys_truncate64,"Mc":___sys_uname,"Bc":___sys_unlink,"u":_abort,"F":_clock,"Oc":_clock_getres,"ab":_clock_gettime,"Ic":_difftime,"yd":_dlclose,"da":_dlerror,"jb":_dlopen,"Id":_dlsym,"Ea":_emscripten_asm_const_int_sync_on_main_thread,"Md":_emscripten_cancel_main_loop,"Kd":_emscripten_clear_interval,"Yd":_emscripten_exit_fullscreen,"Td":_emscripten_exit_pointerlock,"Rd":_emscripten_get_canvas_element_size,"Xd":_emscripten_get_fullscreen_status,"wb":_emscripten_get_gamepad_status,"Nc":_emscripten_get_heap_max,"E":_emscripten_get_now,"xb":_emscripten_get_num_gamepads,"Nd":_emscripten_html5_remove_all_event_listeners,"me":_emscripten_is_webgl_context_lost,"z":_emscripten_log,"C":_emscripten_longjmp,"Zg":_emscripten_memcpy_big,"Zd":_emscripten_request_fullscreen,"Ud":_emscripten_request_pointerlock,"_g":_emscripten_resize_heap,"yb":_emscripten_sample_gamepad_data,"ib":_emscripten_set_blur_callback_on_thread,"Ba":_emscripten_set_canvas_element_size,"Vd":_emscripten_set_focus_callback_on_thread,"Wd":_emscripten_set_fullscreenchange_callback_on_thread,"Cb":_emscripten_set_gamepadconnected_callback_on_thread,"Bb":_emscripten_set_gamepaddisconnected_callback_on_thread,"Gd":_emscripten_set_interval,"ha":_emscripten_set_keydown_callback_on_thread,"fa":_emscripten_set_keypress_callback_on_thread,"ga":_emscripten_set_keyup_callback_on_thread,"Fd":_emscripten_set_main_loop,"Jd":_emscripten_set_main_loop_timing,"Jb":_emscripten_set_mousedown_callback_on_thread,"Ib":_emscripten_set_mousemove_callback_on_thread,"Kb":_emscripten_set_mouseup_callback_on_thread,"Db":_emscripten_set_touchcancel_callback_on_thread,"Fb":_emscripten_set_touchend_callback_on_thread,"Eb":_emscripten_set_touchmove_callback_on_thread,"Gb":_emscripten_set_touchstart_callback_on_thread,"Hb":_emscripten_set_wheel_callback_on_thread,"Og":_emscripten_thread_sleep,"oe":_emscripten_webgl_create_context,"ne":_emscripten_webgl_destroy_context,"ia":_emscripten_webgl_enable_extension,"le":_emscripten_webgl_get_current_context,"pe":_emscripten_webgl_init_context_attributes,"ja":_emscripten_webgl_make_context_current,"gg":_environ_get,"qg":_environ_sizes_get,"x":_exit,"Q":_fd_close,"cb":_fd_fdstat_get,"_a":_fd_read,"Ed":_fd_seek,"Ga":_fd_write,"na":_flock,"b":_getTempRet0,"qe":_gethostbyaddr,"re":_gethostbyname,"Ee":_getpwuid,"ca":_gettimeofday,"Gg":_glActiveTexture,"Dg":_glAttachShader,"Vb":_glBeginQuery,"rf":_glBeginTransformFeedback,"va":_glBindAttribLocation,"Cg":_glBindBuffer,"Se":_glBindBufferBase,"Re":_glBindBufferRange,"zg":_glBindFramebuffer,"Ag":_glBindRenderbuffer,"Me":_glBindSampler,"Bg":_glBindTexture,"kf":_glBindTransformFeedback,"nf":_glBindVertexArray,"kc":_glBlendEquation,"lc":_glBlendEquationSeparate,"mc":_glBlendFuncSeparate,"bf":_glBlitFramebuffer,"xg":_glBufferData,"yg":_glBufferSubData,"wg":_glCheckFramebufferStatus,"sg":_glClear,"Ie":_glClearBufferfi,"He":_glClearBufferfv,"Ge":_glClearBufferuiv,"tg":_glClearColor,"ug":_glClearDepthf,"vg":_glClearStencil,"Vc":_glClientWaitSync,"Ta":_glColorMask,"rg":_glCompileShader,"og":_glCompressedTexImage2D,"df":_glCompressedTexImag
e3D,"pg":_glCompressedTexSubImage2D,"hf":_glCompressedTexSubImage3D,"Ve":_glCopyBufferSubData,"ng":_glCopyTexImage2D,"jc":_glCopyTexSubImage2D,"mg":_glCreateProgram,"lg":_glCreateShader,"kg":_glCullFace,"jg":_glDeleteBuffers,"ig":_glDeleteFramebuffers,"hg":_glDeleteProgram,"Na":_glDeleteQueries,"fg":_glDeleteRenderbuffers,"Le":_glDeleteSamplers,"eg":_glDeleteShader,"Rb":_glDeleteSync,"dg":_glDeleteTextures,"lf":_glDeleteTransformFeedbacks,"pf":_glDeleteVertexArrays,"ua":_glDepthFunc,"ta":_glDepthMask,"cg":_glDetachShader,"bg":_glDisable,"ag":_glDisableVertexAttribArray,"Zf":_glDrawArrays,"Xe":_glDrawArraysInstanced,"Ue":_glDrawBuffers,"_f":_glDrawElements,"We":_glDrawElementsInstanced,"$f":_glEnable,"Yf":_glEnableVertexAttribArray,"Wb":_glEndQuery,"sf":_glEndTransformFeedback,"Qb":_glFenceSync,"Vf":_glFinish,"Wf":_glFlush,"_e":_glFlushMappedBufferRange,"K":_glFramebufferRenderbuffer,"H":_glFramebufferTexture2D,"oa":_glFramebufferTextureLayer,"sa":_glFrontFace,"Uf":_glGenBuffers,"Qf":_glGenFramebuffers,"Ub":_glGenQueries,"Rf":_glGenRenderbuffers,"Ke":_glGenSamplers,"Tf":_glGenTextures,"mf":_glGenTransformFeedbacks,"qf":_glGenVertexArrays,"Sf":_glGenerateMipmap,"Mg":_glGetActiveAttrib,"Sa":_glGetActiveUniform,"Ka":_glGetActiveUniformBlockName,"S":_glGetActiveUniformBlockiv,"R":_glGetActiveUniformsiv,"Lg":_glGetAttribLocation,"ke":_glGetBufferSubData,"Pf":_glGetError,"Of":_glGetFramebufferAttachmentParameteriv,"Fg":_glGetIntegeri_v,"wa":_glGetIntegerv,"Oe":_glGetInternalformativ,"Sb":_glGetProgramBinary,"Ig":_glGetProgramInfoLog,"O":_glGetProgramiv,"uf":_glGetQueryObjectuiv,"tf":_glGetQueryiv,"Xf":_glGetRenderbufferParameteriv,"Mf":_glGetShaderInfoLog,"ic":_glGetShaderPrecisionFormat,"Nf":_glGetShaderSource,"Hg":_glGetShaderiv,"Lf":_glGetString,"$e":_glGetStringi,"Kf":_glGetTexParameteriv,"Pe":_glGetUniformBlockIndex,"Ja":_glGetUniformIndices,"_":_glGetUniformLocation,"hc":_glGetUniformiv,"Kg":_glGetVertexAttribiv,"Ma":_glInvalidateFramebuffer,"Eg":_glIsEnabled,"of":_glIsVertexArray,"If":_glLinkProgram,"Ye":_glMapBufferRange,"Jf":_glPixelStorei,"gc":_glPolygonOffset,"Tb":_glProgramBinary,"Je":_glProgramParameteri,"Te":_glReadBuffer,"U":_glReadPixels,"Hf":_glRenderbufferStorage,"af":_glRenderbufferStorageMultisample,"Ne":_glSamplerParameteri,"Ra":_glScissor,"Ff":_glShaderSource,"Gf":_glStencilFuncSeparate,"Df":_glStencilMask,"Ef":_glStencilOpSeparate,"Bf":_glTexImage2D,"ff":_glTexImage3D,"Cf":_glTexParameterf,"Qa":_glTexParameteri,"Af":_glTexParameteriv,"cf":_glTexStorage2D,"ef":_glTexStorage3D,"zf":_glTexSubImage2D,"gf":_glTexSubImage3D,"jf":_glTransformFeedbackVaryings,"Xb":_glUniform1fv,"pa":_glUniform1i,"Yb":_glUniform1iv,"Zb":_glUniform1uiv,"_b":_glUniform2fv,"$b":_glUniform2iv,"ac":_glUniform2uiv,"Pa":_glUniform3fv,"bc":_glUniform3iv,"cc":_glUniform3uiv,"T":_glUniform4fv,"dc":_glUniform4iv,"ec":_glUniform4uiv,"La":_glUniformBlockBinding,"fc":_glUniformMatrix3fv,"qa":_glUniformMatrix4fv,"Ze":_glUnmapBuffer,"vf":_glUseProgram,"Jg":_glValidateProgram,"wf":_glVertexAttrib4f,"xf":_glVertexAttrib4fv,"Qe":_glVertexAttribIPointer,"yf":_glVertexAttribPointer,"Oa":_glViewport,"Yg":invoke_ddiii,"W":invoke_dii,"Va":invoke_diiid,"V":invoke_fi,"J":invoke_fii,"B":invoke_fiii,"Wa":invoke_fiiif,"Vg":invoke_fiiii,"y":invoke_i,"d":invoke_ii,"c":invoke_iii,"oc":invoke_iiifi,"k":invoke_iiii,"q":invoke_iiiii,"s":invoke_iiiiii,"w":invoke_iiiiiii,"G":invoke_iiiiiiii,"$":invoke_iiiiiiiii,"aa":invoke_iiiiiiiiii,"Xa":invoke_iiiiiiiiiii,"vc":invoke_iiiiiiiiiiiii,"md":invoke_iiiiiiiiiji,"Wc":invoke_iiiijii,"Dd":invok
e_iiij,"Ad":invoke_iiijiii,"Bd":invoke_iij,"Xc":invoke_iiji,"gd":invoke_iijii,"cd":invoke_iijji,"wd":invoke_iji,"ed":invoke_ijji,"Cd":invoke_j,"xd":invoke_ji,"zd":invoke_jii,"od":invoke_jiii,"ud":invoke_jiiiii,"fd":invoke_jiiiiiiiiii,"pd":invoke_jiiij,"bd":invoke_jiiji,"dd":invoke_jiji,"hd":invoke_jijiii,"ad":invoke_jijj,"vd":invoke_jjji,"f":invoke_v,"l":invoke_vi,"Ug":invoke_vidi,"Pg":invoke_viffi,"A":invoke_vifi,"nc":invoke_vifii,"m":invoke_vii,"Tg":invoke_viidi,"Pb":invoke_viif,"L":invoke_viiff,"Wg":invoke_viiffi,"I":invoke_viifi,"h":invoke_viii,"Xg":invoke_viiif,"rc":invoke_viiifi,"p":invoke_viiii,"Sg":invoke_viiiifi,"r":invoke_viiiii,"v":invoke_viiiiii,"M":invoke_viiiiiii,"Rg":invoke_viiiiiiifddfiii,"sc":invoke_viiiiiiiffffiii,"D":invoke_viiiiiiifiifiii,"td":invoke_viiiiiiifjjfiii,"ba":invoke_viiiiiiii,"pc":invoke_viiiiiiiii,"qc":invoke_viiiiiiiiifi,"uc":invoke_viiiiiiiiii,"qd":invoke_viiij,"id":invoke_viiiji,"kd":invoke_viij,"sd":invoke_viiji,"Yc":invoke_viijiiiiii,"jd":invoke_viji,"rd":invoke_vijii,"$c":invoke_vijiii,"ld":invoke_vji,"Zc":invoke_vjiiiii,"_c":invoke_vjjjiiii,"j":_llvm_eh_typeid_for,"Za":_mktime,"t":_setTempRet0,"za":_sigaction,"Aa":_sigemptyset,"ya":_strftime,"$a":_time,"Ya":_utime};var asm=createWasm();var ___wasm_call_ctors=Module["___wasm_call_ctors"]=function(){return(___wasm_call_ctors=Module["___wasm_call_ctors"]=Module["asm"]["ah"]).apply(null,arguments)};var _SendMessageFloat=Module["_SendMessageFloat"]=function(){return(_SendMessageFloat=Module["_SendMessageFloat"]=Module["asm"]["bh"]).apply(null,arguments)};var _SendMessageString=Module["_SendMessageString"]=function(){return(_SendMessageString=Module["_SendMessageString"]=Module["asm"]["ch"]).apply(null,arguments)};var _SendMessage=Module["_SendMessage"]=function(){return(_SendMessage=Module["_SendMessage"]=Module["asm"]["dh"]).apply(null,arguments)};var _SetFullscreen=Module["_SetFullscreen"]=function(){return(_SetFullscreen=Module["_SetFullscreen"]=Module["asm"]["eh"]).apply(null,arguments)};var _main=Module["_main"]=function(){return(_main=Module["_main"]=Module["asm"]["fh"]).apply(null,arguments)};var ___errno_location=Module["___errno_location"]=function(){return(___errno_location=Module["___errno_location"]=Module["asm"]["gh"]).apply(null,arguments)};var _htons=Module["_htons"]=function(){return(_htons=Module["_htons"]=Module["asm"]["hh"]).apply(null,arguments)};var _ntohs=Module["_ntohs"]=function(){return(_ntohs=Module["_ntohs"]=Module["asm"]["ih"]).apply(null,arguments)};var __get_tzname=Module["__get_tzname"]=function(){return(__get_tzname=Module["__get_tzname"]=Module["asm"]["jh"]).apply(null,arguments)};var __get_daylight=Module["__get_daylight"]=function(){return(__get_daylight=Module["__get_daylight"]=Module["asm"]["kh"]).apply(null,arguments)};var __get_timezone=Module["__get_timezone"]=function(){return(__get_timezone=Module["__get_timezone"]=Module["asm"]["lh"]).apply(null,arguments)};var stackSave=Module["stackSave"]=function(){return(stackSave=Module["stackSave"]=Module["asm"]["mh"]).apply(null,arguments)};var stackRestore=Module["stackRestore"]=function(){return(stackRestore=Module["stackRestore"]=Module["asm"]["nh"]).apply(null,arguments)};var stackAlloc=Module["stackAlloc"]=function(){return(stackAlloc=Module["stackAlloc"]=Module["asm"]["oh"]).apply(null,arguments)};var _setThrew=Module["_setThrew"]=function(){return(_setThrew=Module["_setThrew"]=Module["asm"]["ph"]).apply(null,arguments)};var 
___cxa_can_catch=Module["___cxa_can_catch"]=function(){return(___cxa_can_catch=Module["___cxa_can_catch"]=Module["asm"]["qh"]).apply(null,arguments)};var ___cxa_is_pointer_type=Module["___cxa_is_pointer_type"]=function(){return(___cxa_is_pointer_type=Module["___cxa_is_pointer_type"]=Module["asm"]["rh"]).apply(null,arguments)};var _malloc=Module["_malloc"]=function(){return(_malloc=Module["_malloc"]=Module["asm"]["sh"]).apply(null,arguments)};var _free=Module["_free"]=function(){return(_free=Module["_free"]=Module["asm"]["th"]).apply(null,arguments)};var _memalign=Module["_memalign"]=function(){return(_memalign=Module["_memalign"]=Module["asm"]["uh"]).apply(null,arguments)};var _memset=Module["_memset"]=function(){return(_memset=Module["_memset"]=Module["asm"]["vh"]).apply(null,arguments)};var _strlen=Module["_strlen"]=function(){return(_strlen=Module["_strlen"]=Module["asm"]["wh"]).apply(null,arguments)};var dynCall_iidiiii=Module["dynCall_iidiiii"]=function(){return(dynCall_iidiiii=Module["dynCall_iidiiii"]=Module["asm"]["yh"]).apply(null,arguments)};var dynCall_vii=Module["dynCall_vii"]=function(){return(dynCall_vii=Module["dynCall_vii"]=Module["asm"]["zh"]).apply(null,arguments)};var dynCall_iii=Module["dynCall_iii"]=function(){return(dynCall_iii=Module["dynCall_iii"]=Module["asm"]["Ah"]).apply(null,arguments)};var dynCall_ii=Module["dynCall_ii"]=function(){return(dynCall_ii=Module["dynCall_ii"]=Module["asm"]["Bh"]).apply(null,arguments)};var dynCall_iiii=Module["dynCall_iiii"]=function(){return(dynCall_iiii=Module["dynCall_iiii"]=Module["asm"]["Ch"]).apply(null,arguments)};var dynCall_jiji=Module["dynCall_jiji"]=function(){return(dynCall_jiji=Module["dynCall_jiji"]=Module["asm"]["Dh"]).apply(null,arguments)};var dynCall_vi=Module["dynCall_vi"]=function(){return(dynCall_vi=Module["dynCall_vi"]=Module["asm"]["Eh"]).apply(null,arguments)};var dynCall_iiiii=Module["dynCall_iiiii"]=function(){return(dynCall_iiiii=Module["dynCall_iiiii"]=Module["asm"]["Fh"]).apply(null,arguments)};var dynCall_viii=Module["dynCall_viii"]=function(){return(dynCall_viii=Module["dynCall_viii"]=Module["asm"]["Gh"]).apply(null,arguments)};var dynCall_i=Module["dynCall_i"]=function(){return(dynCall_i=Module["dynCall_i"]=Module["asm"]["Hh"]).apply(null,arguments)};var dynCall_v=Module["dynCall_v"]=function(){return(dynCall_v=Module["dynCall_v"]=Module["asm"]["Ih"]).apply(null,arguments)};var dynCall_viiiiii=Module["dynCall_viiiiii"]=function(){return(dynCall_viiiiii=Module["dynCall_viiiiii"]=Module["asm"]["Jh"]).apply(null,arguments)};var dynCall_viiiii=Module["dynCall_viiiii"]=function(){return(dynCall_viiiii=Module["dynCall_viiiii"]=Module["asm"]["Kh"]).apply(null,arguments)};var dynCall_viiii=Module["dynCall_viiii"]=function(){return(dynCall_viiii=Module["dynCall_viiii"]=Module["asm"]["Lh"]).apply(null,arguments)};var dynCall_iiiiii=Module["dynCall_iiiiii"]=function(){return(dynCall_iiiiii=Module["dynCall_iiiiii"]=Module["asm"]["Mh"]).apply(null,arguments)};var dynCall_iiij=Module["dynCall_iiij"]=function(){return(dynCall_iiij=Module["dynCall_iiij"]=Module["asm"]["Nh"]).apply(null,arguments)};var dynCall_iiiiiiii=Module["dynCall_iiiiiiii"]=function(){return(dynCall_iiiiiiii=Module["dynCall_iiiiiiii"]=Module["asm"]["Oh"]).apply(null,arguments)};var dynCall_iiijiii=Module["dynCall_iiijiii"]=function(){return(dynCall_iiijiii=Module["dynCall_iiijiii"]=Module["asm"]["Ph"]).apply(null,arguments)};var 
dynCall_iij=Module["dynCall_iij"]=function(){return(dynCall_iij=Module["dynCall_iij"]=Module["asm"]["Qh"]).apply(null,arguments)};var dynCall_iiiiiii=Module["dynCall_iiiiiii"]=function(){return(dynCall_iiiiiii=Module["dynCall_iiiiiii"]=Module["asm"]["Rh"]).apply(null,arguments)};var dynCall_jii=Module["dynCall_jii"]=function(){return(dynCall_jii=Module["dynCall_jii"]=Module["asm"]["Sh"]).apply(null,arguments)};var dynCall_viiiiiii=Module["dynCall_viiiiiii"]=function(){return(dynCall_viiiiiii=Module["dynCall_viiiiiii"]=Module["asm"]["Th"]).apply(null,arguments)};var dynCall_viiji=Module["dynCall_viiji"]=function(){return(dynCall_viiji=Module["dynCall_viiji"]=Module["asm"]["Uh"]).apply(null,arguments)};var dynCall_viifi=Module["dynCall_viifi"]=function(){return(dynCall_viifi=Module["dynCall_viifi"]=Module["asm"]["Vh"]).apply(null,arguments)};var dynCall_vijii=Module["dynCall_vijii"]=function(){return(dynCall_vijii=Module["dynCall_vijii"]=Module["asm"]["Wh"]).apply(null,arguments)};var dynCall_viiifi=Module["dynCall_viiifi"]=function(){return(dynCall_viiifi=Module["dynCall_viiifi"]=Module["asm"]["Xh"]).apply(null,arguments)};var dynCall_viiij=Module["dynCall_viiij"]=function(){return(dynCall_viiij=Module["dynCall_viiij"]=Module["asm"]["Yh"]).apply(null,arguments)};var dynCall_viiff=Module["dynCall_viiff"]=function(){return(dynCall_viiff=Module["dynCall_viiff"]=Module["asm"]["Zh"]).apply(null,arguments)};var dynCall_jiii=Module["dynCall_jiii"]=function(){return(dynCall_jiii=Module["dynCall_jiii"]=Module["asm"]["_h"]).apply(null,arguments)};var dynCall_ddiii=Module["dynCall_ddiii"]=function(){return(dynCall_ddiii=Module["dynCall_ddiii"]=Module["asm"]["$h"]).apply(null,arguments)};var dynCall_viif=Module["dynCall_viif"]=function(){return(dynCall_viif=Module["dynCall_viif"]=Module["asm"]["ai"]).apply(null,arguments)};var dynCall_fii=Module["dynCall_fii"]=function(){return(dynCall_fii=Module["dynCall_fii"]=Module["asm"]["bi"]).apply(null,arguments)};var dynCall_viiiiiiiiifi=Module["dynCall_viiiiiiiiifi"]=function(){return(dynCall_viiiiiiiiifi=Module["dynCall_viiiiiiiiifi"]=Module["asm"]["ci"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiii"]=Module["asm"]["di"]).apply(null,arguments)};var dynCall_iiiiiiiiii=Module["dynCall_iiiiiiiiii"]=function(){return(dynCall_iiiiiiiiii=Module["dynCall_iiiiiiiiii"]=Module["asm"]["ei"]).apply(null,arguments)};var dynCall_viiiiiiiiii=Module["dynCall_viiiiiiiiii"]=function(){return(dynCall_viiiiiiiiii=Module["dynCall_viiiiiiiiii"]=Module["asm"]["fi"]).apply(null,arguments)};var dynCall_iiiiiiiiiji=Module["dynCall_iiiiiiiiiji"]=function(){return(dynCall_iiiiiiiiiji=Module["dynCall_iiiiiiiiiji"]=Module["asm"]["gi"]).apply(null,arguments)};var dynCall_vji=Module["dynCall_vji"]=function(){return(dynCall_vji=Module["dynCall_vji"]=Module["asm"]["hi"]).apply(null,arguments)};var dynCall_diiid=Module["dynCall_diiid"]=function(){return(dynCall_diiid=Module["dynCall_diiid"]=Module["asm"]["ii"]).apply(null,arguments)};var dynCall_viiiiiiiii=Module["dynCall_viiiiiiiii"]=function(){return(dynCall_viiiiiiiii=Module["dynCall_viiiiiiiii"]=Module["asm"]["ji"]).apply(null,arguments)};var dynCall_vifi=Module["dynCall_vifi"]=function(){return(dynCall_vifi=Module["dynCall_vifi"]=Module["asm"]["ki"]).apply(null,arguments)};var dynCall_jiiij=Module["dynCall_jiiij"]=function(){return(dynCall_jiiij=Module["dynCall_jiiij"]=Module["asm"]["li"]).apply(null,arguments)};var 
dynCall_fiiif=Module["dynCall_fiiif"]=function(){return(dynCall_fiiif=Module["dynCall_fiiif"]=Module["asm"]["mi"]).apply(null,arguments)};var dynCall_viiiiiiii=Module["dynCall_viiiiiiii"]=function(){return(dynCall_viiiiiiii=Module["dynCall_viiiiiiii"]=Module["asm"]["ni"]).apply(null,arguments)};var dynCall_fi=Module["dynCall_fi"]=function(){return(dynCall_fi=Module["dynCall_fi"]=Module["asm"]["oi"]).apply(null,arguments)};var dynCall_viiif=Module["dynCall_viiif"]=function(){return(dynCall_viiif=Module["dynCall_viiif"]=Module["asm"]["pi"]).apply(null,arguments)};var dynCall_diiii=Module["dynCall_diiii"]=function(){return(dynCall_diiii=Module["dynCall_diiii"]=Module["asm"]["qi"]).apply(null,arguments)};var dynCall_jiiii=Module["dynCall_jiiii"]=function(){return(dynCall_jiiii=Module["dynCall_jiiii"]=Module["asm"]["ri"]).apply(null,arguments)};var dynCall_fiiii=Module["dynCall_fiiii"]=function(){return(dynCall_fiiii=Module["dynCall_fiiii"]=Module["asm"]["si"]).apply(null,arguments)};var dynCall_jiiji=Module["dynCall_jiiji"]=function(){return(dynCall_jiiji=Module["dynCall_jiiji"]=Module["asm"]["ti"]).apply(null,arguments)};var dynCall_fiifi=Module["dynCall_fiifi"]=function(){return(dynCall_fiifi=Module["dynCall_fiifi"]=Module["asm"]["ui"]).apply(null,arguments)};var dynCall_iiffi=Module["dynCall_iiffi"]=function(){return(dynCall_iiffi=Module["dynCall_iiffi"]=Module["asm"]["vi"]).apply(null,arguments)};var dynCall_iiiifi=Module["dynCall_iiiifi"]=function(){return(dynCall_iiiifi=Module["dynCall_iiiifi"]=Module["asm"]["wi"]).apply(null,arguments)};var dynCall_iiiiiiiii=Module["dynCall_iiiiiiiii"]=function(){return(dynCall_iiiiiiiii=Module["dynCall_iiiiiiiii"]=Module["asm"]["xi"]).apply(null,arguments)};var dynCall_iiiiiiiiiii=Module["dynCall_iiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiii=Module["dynCall_iiiiiiiiiii"]=Module["asm"]["yi"]).apply(null,arguments)};var dynCall_viiiiiiiiiiii=Module["dynCall_viiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiii=Module["dynCall_viiiiiiiiiiii"]=Module["asm"]["zi"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiii"]=Module["asm"]["Ai"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiii"]=Module["asm"]["Bi"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiii"]=Module["asm"]["Ci"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiii"]=Module["asm"]["Di"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiii"]=Module["asm"]["Ei"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiii"]=Module["asm"]["Fi"]).apply(null,arguments)};var dynCall_viidi=Module["dynCall_viidi"]=function(){return(dynCall_viidi=Module["dynCall_viidi"]=Module["asm"]["Gi"]).apply(null,arguments)};var dynCall_viiffi=Module["dynCall_viiffi"]=function(){return(dynCall_viiffi=Module["dynCall_viiffi"]=Module["asm"]["Hi"]).apply(null,arguments)};var 
dynCall_iiiifii=Module["dynCall_iiiifii"]=function(){return(dynCall_iiiifii=Module["dynCall_iiiifii"]=Module["asm"]["Ii"]).apply(null,arguments)};var dynCall_iiifii=Module["dynCall_iiifii"]=function(){return(dynCall_iiifii=Module["dynCall_iiifii"]=Module["asm"]["Ji"]).apply(null,arguments)};var dynCall_viiiifii=Module["dynCall_viiiifii"]=function(){return(dynCall_viiiifii=Module["dynCall_viiiifii"]=Module["asm"]["Ki"]).apply(null,arguments)};var dynCall_fiiffi=Module["dynCall_fiiffi"]=function(){return(dynCall_fiiffi=Module["dynCall_fiiffi"]=Module["asm"]["Li"]).apply(null,arguments)};var dynCall_viiififii=Module["dynCall_viiififii"]=function(){return(dynCall_viiififii=Module["dynCall_viiififii"]=Module["asm"]["Mi"]).apply(null,arguments)};var dynCall_ji=Module["dynCall_ji"]=function(){return(dynCall_ji=Module["dynCall_ji"]=Module["asm"]["Ni"]).apply(null,arguments)};var dynCall_viiidi=Module["dynCall_viiidi"]=function(){return(dynCall_viiidi=Module["dynCall_viiidi"]=Module["asm"]["Oi"]).apply(null,arguments)};var dynCall_viiiji=Module["dynCall_viiiji"]=function(){return(dynCall_viiiji=Module["dynCall_viiiji"]=Module["asm"]["Pi"]).apply(null,arguments)};var dynCall_viiiiiiiiiii=Module["dynCall_viiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiii=Module["dynCall_viiiiiiiiiii"]=Module["asm"]["Qi"]).apply(null,arguments)};var dynCall_iiifi=Module["dynCall_iiifi"]=function(){return(dynCall_iiifi=Module["dynCall_iiifi"]=Module["asm"]["Ri"]).apply(null,arguments)};var dynCall_dii=Module["dynCall_dii"]=function(){return(dynCall_dii=Module["dynCall_dii"]=Module["asm"]["Si"]).apply(null,arguments)};var dynCall_iiiiiiiiiiii=Module["dynCall_iiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiii=Module["dynCall_iiiiiiiiiiii"]=Module["asm"]["Ti"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiii"]=Module["asm"]["Ui"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiii"]=Module["asm"]["Vi"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiii"]=Module["asm"]["Wi"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiii"]=Module["asm"]["Xi"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiii"]=Module["asm"]["Yi"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiiii"]=Module["asm"]["Zi"]).apply(null,arguments)};var dynCall_fiii=Module["dynCall_fiii"]=function(){return(dynCall_fiii=Module["dynCall_fiii"]=Module["asm"]["_i"]).apply(null,arguments)};var dynCall_fifi=Module["dynCall_fifi"]=function(){return(dynCall_fifi=Module["dynCall_fifi"]=Module["asm"]["$i"]).apply(null,arguments)};var dynCall_iiddi=Module["dynCall_iiddi"]=function(){return(dynCall_iiddi=Module["dynCall_iiddi"]=Module["asm"]["aj"]).apply(null,arguments)};var dynCall_viiiiiiifiifiii=Module["dynCall_viiiiiiifiifiii"]=function(){return(dynCall_viiiiiiifiifiii=Module["dynCall_viiiiiiifiifiii"]=Module["asm"]["bj"]).apply(null,arguments)};var 
dynCall_viiiiiiiffffiii=Module["dynCall_viiiiiiiffffiii"]=function(){return(dynCall_viiiiiiiffffiii=Module["dynCall_viiiiiiiffffiii"]=Module["asm"]["cj"]).apply(null,arguments)};var dynCall_viiiiiiifjjfiii=Module["dynCall_viiiiiiifjjfiii"]=function(){return(dynCall_viiiiiiifjjfiii=Module["dynCall_viiiiiiifjjfiii"]=Module["asm"]["dj"]).apply(null,arguments)};var dynCall_viij=Module["dynCall_viij"]=function(){return(dynCall_viij=Module["dynCall_viij"]=Module["asm"]["ej"]).apply(null,arguments)};var dynCall_vidi=Module["dynCall_vidi"]=function(){return(dynCall_vidi=Module["dynCall_vidi"]=Module["asm"]["fj"]).apply(null,arguments)};var dynCall_viji=Module["dynCall_viji"]=function(){return(dynCall_viji=Module["dynCall_viji"]=Module["asm"]["gj"]).apply(null,arguments)};var dynCall_iijji=Module["dynCall_iijji"]=function(){return(dynCall_iijji=Module["dynCall_iijji"]=Module["asm"]["hj"]).apply(null,arguments)};var dynCall_vifii=Module["dynCall_vifii"]=function(){return(dynCall_vifii=Module["dynCall_vifii"]=Module["asm"]["ij"]).apply(null,arguments)};var dynCall_viiiifi=Module["dynCall_viiiifi"]=function(){return(dynCall_viiiifi=Module["dynCall_viiiifi"]=Module["asm"]["jj"]).apply(null,arguments)};var dynCall_viiiiiiifddfiii=Module["dynCall_viiiiiiifddfiii"]=function(){return(dynCall_viiiiiiifddfiii=Module["dynCall_viiiiiiifddfiii"]=Module["asm"]["kj"]).apply(null,arguments)};var dynCall_iji=Module["dynCall_iji"]=function(){return(dynCall_iji=Module["dynCall_iji"]=Module["asm"]["lj"]).apply(null,arguments)};var dynCall_jjji=Module["dynCall_jjji"]=function(){return(dynCall_jjji=Module["dynCall_jjji"]=Module["asm"]["mj"]).apply(null,arguments)};var dynCall_jiiiii=Module["dynCall_jiiiii"]=function(){return(dynCall_jiiiii=Module["dynCall_jiiiii"]=Module["asm"]["nj"]).apply(null,arguments)};var dynCall_jijiii=Module["dynCall_jijiii"]=function(){return(dynCall_jijiii=Module["dynCall_jijiii"]=Module["asm"]["oj"]).apply(null,arguments)};var dynCall_iijii=Module["dynCall_iijii"]=function(){return(dynCall_iijii=Module["dynCall_iijii"]=Module["asm"]["pj"]).apply(null,arguments)};var dynCall_jiiiiiiiiii=Module["dynCall_jiiiiiiiiii"]=function(){return(dynCall_jiiiiiiiiii=Module["dynCall_jiiiiiiiiii"]=Module["asm"]["qj"]).apply(null,arguments)};var dynCall_ijji=Module["dynCall_ijji"]=function(){return(dynCall_ijji=Module["dynCall_ijji"]=Module["asm"]["rj"]).apply(null,arguments)};var dynCall_j=Module["dynCall_j"]=function(){return(dynCall_j=Module["dynCall_j"]=Module["asm"]["sj"]).apply(null,arguments)};var dynCall_jijj=Module["dynCall_jijj"]=function(){return(dynCall_jijj=Module["dynCall_jijj"]=Module["asm"]["tj"]).apply(null,arguments)};var dynCall_viijiiiiii=Module["dynCall_viijiiiiii"]=function(){return(dynCall_viijiiiiii=Module["dynCall_viijiiiiii"]=Module["asm"]["uj"]).apply(null,arguments)};var dynCall_vijiii=Module["dynCall_vijiii"]=function(){return(dynCall_vijiii=Module["dynCall_vijiii"]=Module["asm"]["vj"]).apply(null,arguments)};var dynCall_vjjjiiii=Module["dynCall_vjjjiiii"]=function(){return(dynCall_vjjjiiii=Module["dynCall_vjjjiiii"]=Module["asm"]["wj"]).apply(null,arguments)};var dynCall_vjiiiii=Module["dynCall_vjiiiii"]=function(){return(dynCall_vjiiiii=Module["dynCall_vjiiiii"]=Module["asm"]["xj"]).apply(null,arguments)};var dynCall_iiji=Module["dynCall_iiji"]=function(){return(dynCall_iiji=Module["dynCall_iiji"]=Module["asm"]["yj"]).apply(null,arguments)};var 
dynCall_fiffffi=Module["dynCall_fiffffi"]=function(){return(dynCall_fiffffi=Module["dynCall_fiffffi"]=Module["asm"]["zj"]).apply(null,arguments)};var dynCall_iiiffi=Module["dynCall_iiiffi"]=function(){return(dynCall_iiiffi=Module["dynCall_iiiffi"]=Module["asm"]["Aj"]).apply(null,arguments)};var dynCall_iifii=Module["dynCall_iifii"]=function(){return(dynCall_iifii=Module["dynCall_iifii"]=Module["asm"]["Bj"]).apply(null,arguments)};var dynCall_iiffii=Module["dynCall_iiffii"]=function(){return(dynCall_iiffii=Module["dynCall_iiffii"]=Module["asm"]["Cj"]).apply(null,arguments)};var dynCall_iiifiii=Module["dynCall_iiifiii"]=function(){return(dynCall_iiifiii=Module["dynCall_iiifiii"]=Module["asm"]["Dj"]).apply(null,arguments)};var dynCall_iiififii=Module["dynCall_iiififii"]=function(){return(dynCall_iiififii=Module["dynCall_iiififii"]=Module["asm"]["Ej"]).apply(null,arguments)};var dynCall_iiifiiiii=Module["dynCall_iiifiiiii"]=function(){return(dynCall_iiifiiiii=Module["dynCall_iiifiiiii"]=Module["asm"]["Fj"]).apply(null,arguments)};var dynCall_iiffifiiii=Module["dynCall_iiffifiiii"]=function(){return(dynCall_iiffifiiii=Module["dynCall_iiffifiiii"]=Module["asm"]["Gj"]).apply(null,arguments)};var dynCall_iifiifiiii=Module["dynCall_iifiifiiii"]=function(){return(dynCall_iifiifiiii=Module["dynCall_iifiifiiii"]=Module["asm"]["Hj"]).apply(null,arguments)};var dynCall_iiiifiii=Module["dynCall_iiiifiii"]=function(){return(dynCall_iiiifiii=Module["dynCall_iiiifiii"]=Module["asm"]["Ij"]).apply(null,arguments)};var dynCall_iiifiiii=Module["dynCall_iiifiiii"]=function(){return(dynCall_iiifiiii=Module["dynCall_iiifiiii"]=Module["asm"]["Jj"]).apply(null,arguments)};var dynCall_iiiffiii=Module["dynCall_iiiffiii"]=function(){return(dynCall_iiiffiii=Module["dynCall_iiiffiii"]=Module["asm"]["Kj"]).apply(null,arguments)};var dynCall_iifi=Module["dynCall_iifi"]=function(){return(dynCall_iifi=Module["dynCall_iifi"]=Module["asm"]["Lj"]).apply(null,arguments)};var dynCall_iiiiifii=Module["dynCall_iiiiifii"]=function(){return(dynCall_iiiiifii=Module["dynCall_iiiiifii"]=Module["asm"]["Mj"]).apply(null,arguments)};var dynCall_iifiifffii=Module["dynCall_iifiifffii"]=function(){return(dynCall_iifiifffii=Module["dynCall_iifiifffii"]=Module["asm"]["Nj"]).apply(null,arguments)};var dynCall_viiifii=Module["dynCall_viiifii"]=function(){return(dynCall_viiifii=Module["dynCall_viiifii"]=Module["asm"]["Oj"]).apply(null,arguments)};var dynCall_viiifiii=Module["dynCall_viiifiii"]=function(){return(dynCall_viiifiii=Module["dynCall_viiifiii"]=Module["asm"]["Pj"]).apply(null,arguments)};var dynCall_fiifii=Module["dynCall_fiifii"]=function(){return(dynCall_fiifii=Module["dynCall_fiifii"]=Module["asm"]["Qj"]).apply(null,arguments)};var dynCall_fiifiii=Module["dynCall_fiifiii"]=function(){return(dynCall_fiifiii=Module["dynCall_fiifiii"]=Module["asm"]["Rj"]).apply(null,arguments)};var dynCall_iiiififi=Module["dynCall_iiiififi"]=function(){return(dynCall_iiiififi=Module["dynCall_iiiififi"]=Module["asm"]["Sj"]).apply(null,arguments)};var dynCall_iiiffifi=Module["dynCall_iiiffifi"]=function(){return(dynCall_iiiffifi=Module["dynCall_iiiffifi"]=Module["asm"]["Tj"]).apply(null,arguments)};var dynCall_iiiffifii=Module["dynCall_iiiffifii"]=function(){return(dynCall_iiiffifii=Module["dynCall_iiiffifii"]=Module["asm"]["Uj"]).apply(null,arguments)};var dynCall_iiifiifii=Module["dynCall_iiifiifii"]=function(){return(dynCall_iiifiifii=Module["dynCall_iiifiifii"]=Module["asm"]["Vj"]).apply(null,arguments)};var 
dynCall_viffi=Module["dynCall_viffi"]=function(){return(dynCall_viffi=Module["dynCall_viffi"]=Module["asm"]["Wj"]).apply(null,arguments)};var dynCall_viiiiifiii=Module["dynCall_viiiiifiii"]=function(){return(dynCall_viiiiifiii=Module["dynCall_viiiiifiii"]=Module["asm"]["Xj"]).apply(null,arguments)};var dynCall_ffffi=Module["dynCall_ffffi"]=function(){return(dynCall_ffffi=Module["dynCall_ffffi"]=Module["asm"]["Yj"]).apply(null,arguments)};var dynCall_ifi=Module["dynCall_ifi"]=function(){return(dynCall_ifi=Module["dynCall_ifi"]=Module["asm"]["Zj"]).apply(null,arguments)};var dynCall_fffffi=Module["dynCall_fffffi"]=function(){return(dynCall_fffffi=Module["dynCall_fffffi"]=Module["asm"]["_j"]).apply(null,arguments)};var dynCall_fffi=Module["dynCall_fffi"]=function(){return(dynCall_fffi=Module["dynCall_fffi"]=Module["asm"]["$j"]).apply(null,arguments)};var dynCall_vifiii=Module["dynCall_vifiii"]=function(){return(dynCall_vifiii=Module["dynCall_vifiii"]=Module["asm"]["ak"]).apply(null,arguments)};var dynCall_diii=Module["dynCall_diii"]=function(){return(dynCall_diii=Module["dynCall_diii"]=Module["asm"]["bk"]).apply(null,arguments)};var dynCall_viiiiiifiifiiii=Module["dynCall_viiiiiifiifiiii"]=function(){return(dynCall_viiiiiifiifiiii=Module["dynCall_viiiiiifiifiiii"]=Module["asm"]["ck"]).apply(null,arguments)};var dynCall_iiffffiii=Module["dynCall_iiffffiii"]=function(){return(dynCall_iiffffiii=Module["dynCall_iiffffiii"]=Module["asm"]["dk"]).apply(null,arguments)};var dynCall_viiiiifi=Module["dynCall_viiiiifi"]=function(){return(dynCall_viiiiifi=Module["dynCall_viiiiifi"]=Module["asm"]["ek"]).apply(null,arguments)};var dynCall_vffi=Module["dynCall_vffi"]=function(){return(dynCall_vffi=Module["dynCall_vffi"]=Module["asm"]["fk"]).apply(null,arguments)};var dynCall_iiidfi=Module["dynCall_iiidfi"]=function(){return(dynCall_iiidfi=Module["dynCall_iiidfi"]=Module["asm"]["gk"]).apply(null,arguments)};var dynCall_iiijfi=Module["dynCall_iiijfi"]=function(){return(dynCall_iiijfi=Module["dynCall_iiijfi"]=Module["asm"]["hk"]).apply(null,arguments)};var dynCall_iiiffii=Module["dynCall_iiiffii"]=function(){return(dynCall_iiiffii=Module["dynCall_iiiffii"]=Module["asm"]["ik"]).apply(null,arguments)};var dynCall_iifffi=Module["dynCall_iifffi"]=function(){return(dynCall_iifffi=Module["dynCall_iifffi"]=Module["asm"]["jk"]).apply(null,arguments)};var dynCall_iiiffifiiii=Module["dynCall_iiiffifiiii"]=function(){return(dynCall_iiiffifiiii=Module["dynCall_iiiffifiiii"]=Module["asm"]["kk"]).apply(null,arguments)};var dynCall_iiifiifiii=Module["dynCall_iiifiifiii"]=function(){return(dynCall_iiifiifiii=Module["dynCall_iiifiifiii"]=Module["asm"]["lk"]).apply(null,arguments)};var dynCall_iiifiifiiiii=Module["dynCall_iiifiifiiiii"]=function(){return(dynCall_iiifiifiiiii=Module["dynCall_iiifiifiiiii"]=Module["asm"]["mk"]).apply(null,arguments)};var dynCall_ifii=Module["dynCall_ifii"]=function(){return(dynCall_ifii=Module["dynCall_ifii"]=Module["asm"]["nk"]).apply(null,arguments)};var dynCall_ifffii=Module["dynCall_ifffii"]=function(){return(dynCall_ifffii=Module["dynCall_ifffii"]=Module["asm"]["ok"]).apply(null,arguments)};var dynCall_ffffii=Module["dynCall_ffffii"]=function(){return(dynCall_ffffii=Module["dynCall_ffffii"]=Module["asm"]["pk"]).apply(null,arguments)};var dynCall_ffffifi=Module["dynCall_ffffifi"]=function(){return(dynCall_ffffifi=Module["dynCall_ffffifi"]=Module["asm"]["qk"]).apply(null,arguments)};var 
dynCall_ffffiffi=Module["dynCall_ffffiffi"]=function(){return(dynCall_ffffiffi=Module["dynCall_ffffiffi"]=Module["asm"]["rk"]).apply(null,arguments)};var dynCall_viiififi=Module["dynCall_viiififi"]=function(){return(dynCall_viiififi=Module["dynCall_viiififi"]=Module["asm"]["sk"]).apply(null,arguments)};var dynCall_viiififfi=Module["dynCall_viiififfi"]=function(){return(dynCall_viiififfi=Module["dynCall_viiififfi"]=Module["asm"]["tk"]).apply(null,arguments)};var dynCall_ifiii=Module["dynCall_ifiii"]=function(){return(dynCall_ifiii=Module["dynCall_ifiii"]=Module["asm"]["uk"]).apply(null,arguments)};var dynCall_iifiiiiii=Module["dynCall_iifiiiiii"]=function(){return(dynCall_iifiiiiii=Module["dynCall_iifiiiiii"]=Module["asm"]["vk"]).apply(null,arguments)};var dynCall_iifiiiii=Module["dynCall_iifiiiii"]=function(){return(dynCall_iifiiiii=Module["dynCall_iifiiiii"]=Module["asm"]["wk"]).apply(null,arguments)};var dynCall_iiffiiiii=Module["dynCall_iiffiiiii"]=function(){return(dynCall_iiffiiiii=Module["dynCall_iiffiiiii"]=Module["asm"]["xk"]).apply(null,arguments)};var dynCall_iiffifiii=Module["dynCall_iiffifiii"]=function(){return(dynCall_iiffifiii=Module["dynCall_iiffifiii"]=Module["asm"]["yk"]).apply(null,arguments)};var dynCall_iifiifiii=Module["dynCall_iifiifiii"]=function(){return(dynCall_iifiifiii=Module["dynCall_iifiifiii"]=Module["asm"]["zk"]).apply(null,arguments)};var dynCall_iififi=Module["dynCall_iififi"]=function(){return(dynCall_iififi=Module["dynCall_iififi"]=Module["asm"]["Ak"]).apply(null,arguments)};var dynCall_iiififi=Module["dynCall_iiififi"]=function(){return(dynCall_iiififi=Module["dynCall_iiififi"]=Module["asm"]["Bk"]).apply(null,arguments)};var dynCall_iifiii=Module["dynCall_iifiii"]=function(){return(dynCall_iifiii=Module["dynCall_iifiii"]=Module["asm"]["Ck"]).apply(null,arguments)};var dynCall_iiiiifiiii=Module["dynCall_iiiiifiiii"]=function(){return(dynCall_iiiiifiiii=Module["dynCall_iiiiifiiii"]=Module["asm"]["Dk"]).apply(null,arguments)};var dynCall_viidiii=Module["dynCall_viidiii"]=function(){return(dynCall_viidiii=Module["dynCall_viidiii"]=Module["asm"]["Ek"]).apply(null,arguments)};var dynCall_diidi=Module["dynCall_diidi"]=function(){return(dynCall_diidi=Module["dynCall_diidi"]=Module["asm"]["Fk"]).apply(null,arguments)};var dynCall_fiifdi=Module["dynCall_fiifdi"]=function(){return(dynCall_fiifdi=Module["dynCall_fiifdi"]=Module["asm"]["Gk"]).apply(null,arguments)};var dynCall_viiiiiifddfiiii=Module["dynCall_viiiiiifddfiiii"]=function(){return(dynCall_viiiiiifddfiiii=Module["dynCall_viiiiiifddfiiii"]=Module["asm"]["Hk"]).apply(null,arguments)};var dynCall_viijiii=Module["dynCall_viijiii"]=function(){return(dynCall_viijiii=Module["dynCall_viijiii"]=Module["asm"]["Ik"]).apply(null,arguments)};var dynCall_fiifji=Module["dynCall_fiifji"]=function(){return(dynCall_fiifji=Module["dynCall_fiifji"]=Module["asm"]["Jk"]).apply(null,arguments)};var dynCall_viiiiiifjjfiiii=Module["dynCall_viiiiiifjjfiiii"]=function(){return(dynCall_viiiiiifjjfiiii=Module["dynCall_viiiiiifjjfiiii"]=Module["asm"]["Kk"]).apply(null,arguments)};var dynCall_viiiifiii=Module["dynCall_viiiifiii"]=function(){return(dynCall_viiiifiii=Module["dynCall_viiiifiii"]=Module["asm"]["Lk"]).apply(null,arguments)};var dynCall_viifiii=Module["dynCall_viifiii"]=function(){return(dynCall_viifiii=Module["dynCall_viifiii"]=Module["asm"]["Mk"]).apply(null,arguments)};var 
dynCall_viiiiiiffffiiii=Module["dynCall_viiiiiiffffiiii"]=function(){return(dynCall_viiiiiiffffiiii=Module["dynCall_viiiiiiffffiiii"]=Module["asm"]["Nk"]).apply(null,arguments)};var dynCall_viifiiii=Module["dynCall_viifiiii"]=function(){return(dynCall_viifiiii=Module["dynCall_viifiiii"]=Module["asm"]["Ok"]).apply(null,arguments)};var dynCall_viifii=Module["dynCall_viifii"]=function(){return(dynCall_viifii=Module["dynCall_viifii"]=Module["asm"]["Pk"]).apply(null,arguments)};var dynCall_iiiiifiii=Module["dynCall_iiiiifiii"]=function(){return(dynCall_iiiiifiii=Module["dynCall_iiiiifiii"]=Module["asm"]["Qk"]).apply(null,arguments)};var dynCall_fiiffffi=Module["dynCall_fiiffffi"]=function(){return(dynCall_fiiffffi=Module["dynCall_fiiffffi"]=Module["asm"]["Rk"]).apply(null,arguments)};var dynCall_fffifffi=Module["dynCall_fffifffi"]=function(){return(dynCall_fffifffi=Module["dynCall_fffifffi"]=Module["asm"]["Sk"]).apply(null,arguments)};var dynCall_viffffi=Module["dynCall_viffffi"]=function(){return(dynCall_viffffi=Module["dynCall_viffffi"]=Module["asm"]["Tk"]).apply(null,arguments)};var dynCall_viifiifi=Module["dynCall_viifiifi"]=function(){return(dynCall_viifiifi=Module["dynCall_viifiifi"]=Module["asm"]["Uk"]).apply(null,arguments)};var dynCall_vifiifi=Module["dynCall_vifiifi"]=function(){return(dynCall_vifiifi=Module["dynCall_vifiifi"]=Module["asm"]["Vk"]).apply(null,arguments)};var dynCall_viffii=Module["dynCall_viffii"]=function(){return(dynCall_viffii=Module["dynCall_viffii"]=Module["asm"]["Wk"]).apply(null,arguments)};var dynCall_viddfffi=Module["dynCall_viddfffi"]=function(){return(dynCall_viddfffi=Module["dynCall_viddfffi"]=Module["asm"]["Xk"]).apply(null,arguments)};var dynCall_viidfffi=Module["dynCall_viidfffi"]=function(){return(dynCall_viidfffi=Module["dynCall_viidfffi"]=Module["asm"]["Yk"]).apply(null,arguments)};var dynCall_vidifffi=Module["dynCall_vidifffi"]=function(){return(dynCall_vidifffi=Module["dynCall_vidifffi"]=Module["asm"]["Zk"]).apply(null,arguments)};var dynCall_viiifffi=Module["dynCall_viiifffi"]=function(){return(dynCall_viiifffi=Module["dynCall_viiifffi"]=Module["asm"]["_k"]).apply(null,arguments)};var dynCall_viddi=Module["dynCall_viddi"]=function(){return(dynCall_viddi=Module["dynCall_viddi"]=Module["asm"]["$k"]).apply(null,arguments)};var dynCall_vidii=Module["dynCall_vidii"]=function(){return(dynCall_vidii=Module["dynCall_vidii"]=Module["asm"]["al"]).apply(null,arguments)};var dynCall_viiiiiiifi=Module["dynCall_viiiiiiifi"]=function(){return(dynCall_viiiiiiifi=Module["dynCall_viiiiiiifi"]=Module["asm"]["bl"]).apply(null,arguments)};var dynCall_viidii=Module["dynCall_viidii"]=function(){return(dynCall_viidii=Module["dynCall_viidii"]=Module["asm"]["cl"]).apply(null,arguments)};var dynCall_viijii=Module["dynCall_viijii"]=function(){return(dynCall_viijii=Module["dynCall_viijii"]=Module["asm"]["dl"]).apply(null,arguments)};var dynCall_iffi=Module["dynCall_iffi"]=function(){return(dynCall_iffi=Module["dynCall_iffi"]=Module["asm"]["el"]).apply(null,arguments)};var dynCall_iddi=Module["dynCall_iddi"]=function(){return(dynCall_iddi=Module["dynCall_iddi"]=Module["asm"]["fl"]).apply(null,arguments)};var dynCall_viiffffi=Module["dynCall_viiffffi"]=function(){return(dynCall_viiffffi=Module["dynCall_viiffffi"]=Module["asm"]["gl"]).apply(null,arguments)};var dynCall_viiffii=Module["dynCall_viiffii"]=function(){return(dynCall_viiffii=Module["dynCall_viiffii"]=Module["asm"]["hl"]).apply(null,arguments)};var 
dynCall_ffi=Module["dynCall_ffi"]=function(){return(dynCall_ffi=Module["dynCall_ffi"]=Module["asm"]["il"]).apply(null,arguments)};var dynCall_ffii=Module["dynCall_ffii"]=function(){return(dynCall_ffii=Module["dynCall_ffii"]=Module["asm"]["jl"]).apply(null,arguments)};var dynCall_fiiiii=Module["dynCall_fiiiii"]=function(){return(dynCall_fiiiii=Module["dynCall_fiiiii"]=Module["asm"]["kl"]).apply(null,arguments)};var dynCall_ddddi=Module["dynCall_ddddi"]=function(){return(dynCall_ddddi=Module["dynCall_ddddi"]=Module["asm"]["ll"]).apply(null,arguments)};var dynCall_dddi=Module["dynCall_dddi"]=function(){return(dynCall_dddi=Module["dynCall_dddi"]=Module["asm"]["ml"]).apply(null,arguments)};var dynCall_ddi=Module["dynCall_ddi"]=function(){return(dynCall_ddi=Module["dynCall_ddi"]=Module["asm"]["nl"]).apply(null,arguments)};var dynCall_idi=Module["dynCall_idi"]=function(){return(dynCall_idi=Module["dynCall_idi"]=Module["asm"]["ol"]).apply(null,arguments)};var dynCall_iiiji=Module["dynCall_iiiji"]=function(){return(dynCall_iiiji=Module["dynCall_iiiji"]=Module["asm"]["pl"]).apply(null,arguments)};var dynCall_iiiiji=Module["dynCall_iiiiji"]=function(){return(dynCall_iiiiji=Module["dynCall_iiiiji"]=Module["asm"]["ql"]).apply(null,arguments)};var dynCall_iiiiiji=Module["dynCall_iiiiiji"]=function(){return(dynCall_iiiiiji=Module["dynCall_iiiiiji"]=Module["asm"]["rl"]).apply(null,arguments)};var dynCall_viiijii=Module["dynCall_viiijii"]=function(){return(dynCall_viiijii=Module["dynCall_viiijii"]=Module["asm"]["sl"]).apply(null,arguments)};var dynCall_iidi=Module["dynCall_iidi"]=function(){return(dynCall_iidi=Module["dynCall_iidi"]=Module["asm"]["tl"]).apply(null,arguments)};var dynCall_vifffffi=Module["dynCall_vifffffi"]=function(){return(dynCall_vifffffi=Module["dynCall_vifffffi"]=Module["asm"]["ul"]).apply(null,arguments)};var dynCall_viiiiiffii=Module["dynCall_viiiiiffii"]=function(){return(dynCall_viiiiiffii=Module["dynCall_viiiiiffii"]=Module["asm"]["vl"]).apply(null,arguments)};var dynCall_viiiiiffi=Module["dynCall_viiiiiffi"]=function(){return(dynCall_viiiiiffi=Module["dynCall_viiiiiffi"]=Module["asm"]["wl"]).apply(null,arguments)};var dynCall_viififi=Module["dynCall_viififi"]=function(){return(dynCall_viififi=Module["dynCall_viififi"]=Module["asm"]["xl"]).apply(null,arguments)};var dynCall_viififfi=Module["dynCall_viififfi"]=function(){return(dynCall_viififfi=Module["dynCall_viififfi"]=Module["asm"]["yl"]).apply(null,arguments)};var dynCall_ijiii=Module["dynCall_ijiii"]=function(){return(dynCall_ijiii=Module["dynCall_ijiii"]=Module["asm"]["zl"]).apply(null,arguments)};var dynCall_vfffi=Module["dynCall_vfffi"]=function(){return(dynCall_vfffi=Module["dynCall_vfffi"]=Module["asm"]["Al"]).apply(null,arguments)};var dynCall_vffffi=Module["dynCall_vffffi"]=function(){return(dynCall_vffffi=Module["dynCall_vffffi"]=Module["asm"]["Bl"]).apply(null,arguments)};var dynCall_vfi=Module["dynCall_vfi"]=function(){return(dynCall_vfi=Module["dynCall_vfi"]=Module["asm"]["Cl"]).apply(null,arguments)};var dynCall_viiiiffi=Module["dynCall_viiiiffi"]=function(){return(dynCall_viiiiffi=Module["dynCall_viiiiffi"]=Module["asm"]["Dl"]).apply(null,arguments)};var dynCall_viiiffii=Module["dynCall_viiiffii"]=function(){return(dynCall_viiiffii=Module["dynCall_viiiffii"]=Module["asm"]["El"]).apply(null,arguments)};var dynCall_vifffi=Module["dynCall_vifffi"]=function(){return(dynCall_vifffi=Module["dynCall_vifffi"]=Module["asm"]["Fl"]).apply(null,arguments)};var 
dynCall_vifffii=Module["dynCall_vifffii"]=function(){return(dynCall_vifffii=Module["dynCall_vifffii"]=Module["asm"]["Gl"]).apply(null,arguments)};var dynCall_viiiffi=Module["dynCall_viiiffi"]=function(){return(dynCall_viiiffi=Module["dynCall_viiiffi"]=Module["asm"]["Hl"]).apply(null,arguments)};var dynCall_vfiii=Module["dynCall_vfiii"]=function(){return(dynCall_vfiii=Module["dynCall_vfiii"]=Module["asm"]["Il"]).apply(null,arguments)};var dynCall_vfii=Module["dynCall_vfii"]=function(){return(dynCall_vfii=Module["dynCall_vfii"]=Module["asm"]["Jl"]).apply(null,arguments)};var dynCall_vjiiii=Module["dynCall_vjiiii"]=function(){return(dynCall_vjiiii=Module["dynCall_vjiiii"]=Module["asm"]["Kl"]).apply(null,arguments)};var dynCall_vijjii=Module["dynCall_vijjii"]=function(){return(dynCall_vijjii=Module["dynCall_vijjii"]=Module["asm"]["Ll"]).apply(null,arguments)};var dynCall_viffiii=Module["dynCall_viffiii"]=function(){return(dynCall_viffiii=Module["dynCall_viffiii"]=Module["asm"]["Ml"]).apply(null,arguments)};var dynCall_viffffiii=Module["dynCall_viffffiii"]=function(){return(dynCall_viffffiii=Module["dynCall_viffffiii"]=Module["asm"]["Nl"]).apply(null,arguments)};var dynCall_viffffii=Module["dynCall_viffffii"]=function(){return(dynCall_viffffii=Module["dynCall_viffffii"]=Module["asm"]["Ol"]).apply(null,arguments)};var dynCall_vifiiii=Module["dynCall_vifiiii"]=function(){return(dynCall_vifiiii=Module["dynCall_vifiiii"]=Module["asm"]["Pl"]).apply(null,arguments)};var dynCall_viififiii=Module["dynCall_viififiii"]=function(){return(dynCall_viififiii=Module["dynCall_viififiii"]=Module["asm"]["Ql"]).apply(null,arguments)};var dynCall_iiiiifi=Module["dynCall_iiiiifi"]=function(){return(dynCall_iiiiifi=Module["dynCall_iiiiifi"]=Module["asm"]["Rl"]).apply(null,arguments)};var dynCall_viififii=Module["dynCall_viififii"]=function(){return(dynCall_viififii=Module["dynCall_viififii"]=Module["asm"]["Sl"]).apply(null,arguments)};var dynCall_iifiifii=Module["dynCall_iifiifii"]=function(){return(dynCall_iifiifii=Module["dynCall_iifiifii"]=Module["asm"]["Tl"]).apply(null,arguments)};var dynCall_vififfii=Module["dynCall_vififfii"]=function(){return(dynCall_vififfii=Module["dynCall_vififfii"]=Module["asm"]["Ul"]).apply(null,arguments)};var dynCall_iiififiiii=Module["dynCall_iiififiiii"]=function(){return(dynCall_iiififiiii=Module["dynCall_iiififiiii"]=Module["asm"]["Vl"]).apply(null,arguments)};var dynCall_viffiiii=Module["dynCall_viffiiii"]=function(){return(dynCall_viffiiii=Module["dynCall_viffiiii"]=Module["asm"]["Wl"]).apply(null,arguments)};var dynCall_viiiiffffiiii=Module["dynCall_viiiiffffiiii"]=function(){return(dynCall_viiiiffffiiii=Module["dynCall_viiiiffffiiii"]=Module["asm"]["Xl"]).apply(null,arguments)};var dynCall_viifiiiii=Module["dynCall_viifiiiii"]=function(){return(dynCall_viifiiiii=Module["dynCall_viifiiiii"]=Module["asm"]["Yl"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiffffiiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiffffiiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiiii"]=Module["asm"]["Zl"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiiiiiii=Module["dynCall_iiiiiiffiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiiiiiii=Module["dynCall_iiiiiiffiiiiiiiiiiiiiii"]=Module["asm"]["_l"]).apply(null,arguments)};var dynCall_fiiiffi=Module["dynCall_fiiiffi"]=function(){return(dynCall_fiiiffi=Module["dynCall_fiiiffi"]=Module["asm"]["$l"]).apply(null,arguments)};var 
dynCall_viijji=Module["dynCall_viijji"]=function(){return(dynCall_viijji=Module["dynCall_viijji"]=Module["asm"]["am"]).apply(null,arguments)};var dynCall_viffffffi=Module["dynCall_viffffffi"]=function(){return(dynCall_viffffffi=Module["dynCall_viffffffi"]=Module["asm"]["bm"]).apply(null,arguments)};var dynCall_iiiffiiii=Module["dynCall_iiiffiiii"]=function(){return(dynCall_iiiffiiii=Module["dynCall_iiiffiiii"]=Module["asm"]["cm"]).apply(null,arguments)};var dynCall_iiiiffiiii=Module["dynCall_iiiiffiiii"]=function(){return(dynCall_iiiiffiiii=Module["dynCall_iiiiffiiii"]=Module["asm"]["dm"]).apply(null,arguments)};var dynCall_ijii=Module["dynCall_ijii"]=function(){return(dynCall_ijii=Module["dynCall_ijii"]=Module["asm"]["em"]).apply(null,arguments)};var dynCall_vjii=Module["dynCall_vjii"]=function(){return(dynCall_vjii=Module["dynCall_vjii"]=Module["asm"]["fm"]).apply(null,arguments)};var dynCall_fifffi=Module["dynCall_fifffi"]=function(){return(dynCall_fifffi=Module["dynCall_fifffi"]=Module["asm"]["gm"]).apply(null,arguments)};var dynCall_fffffffi=Module["dynCall_fffffffi"]=function(){return(dynCall_fffffffi=Module["dynCall_fffffffi"]=Module["asm"]["hm"]).apply(null,arguments)};var dynCall_viffifi=Module["dynCall_viffifi"]=function(){return(dynCall_viffifi=Module["dynCall_viffifi"]=Module["asm"]["im"]).apply(null,arguments)};var dynCall_viiffifi=Module["dynCall_viiffifi"]=function(){return(dynCall_viiffifi=Module["dynCall_viiffifi"]=Module["asm"]["jm"]).apply(null,arguments)};var dynCall_iijiii=Module["dynCall_iijiii"]=function(){return(dynCall_iijiii=Module["dynCall_iijiii"]=Module["asm"]["km"]).apply(null,arguments)};var dynCall_ifffi=Module["dynCall_ifffi"]=function(){return(dynCall_ifffi=Module["dynCall_ifffi"]=Module["asm"]["lm"]).apply(null,arguments)};var dynCall_viiififiii=Module["dynCall_viiififiii"]=function(){return(dynCall_viiififiii=Module["dynCall_viiififiii"]=Module["asm"]["mm"]).apply(null,arguments)};var dynCall_viiffiiiiiiiii=Module["dynCall_viiffiiiiiiiii"]=function(){return(dynCall_viiffiiiiiiiii=Module["dynCall_viiffiiiiiiiii"]=Module["asm"]["nm"]).apply(null,arguments)};var dynCall_viiiiiffiii=Module["dynCall_viiiiiffiii"]=function(){return(dynCall_viiiiiffiii=Module["dynCall_viiiiiffiii"]=Module["asm"]["om"]).apply(null,arguments)};var dynCall_viiffiii=Module["dynCall_viiffiii"]=function(){return(dynCall_viiffiii=Module["dynCall_viiffiii"]=Module["asm"]["pm"]).apply(null,arguments)};var dynCall_viiffiiiiiii=Module["dynCall_viiffiiiiiii"]=function(){return(dynCall_viiffiiiiiii=Module["dynCall_viiffiiiiiii"]=Module["asm"]["qm"]).apply(null,arguments)};var dynCall_fffffffffi=Module["dynCall_fffffffffi"]=function(){return(dynCall_fffffffffi=Module["dynCall_fffffffffi"]=Module["asm"]["rm"]).apply(null,arguments)};var dynCall_vifiiiiii=Module["dynCall_vifiiiiii"]=function(){return(dynCall_vifiiiiii=Module["dynCall_vifiiiiii"]=Module["asm"]["sm"]).apply(null,arguments)};var dynCall_vifiiiii=Module["dynCall_vifiiiii"]=function(){return(dynCall_vifiiiii=Module["dynCall_vifiiiii"]=Module["asm"]["tm"]).apply(null,arguments)};var dynCall_viifiiiiiii=Module["dynCall_viifiiiiiii"]=function(){return(dynCall_viifiiiiiii=Module["dynCall_viifiiiiiii"]=Module["asm"]["um"]).apply(null,arguments)};var dynCall_viiififfiiiiiii=Module["dynCall_viiififfiiiiiii"]=function(){return(dynCall_viiififfiiiiiii=Module["dynCall_viiififfiiiiiii"]=Module["asm"]["vm"]).apply(null,arguments)};var 
dynCall_viiffiifiiiiiii=Module["dynCall_viiffiifiiiiiii"]=function(){return(dynCall_viiffiifiiiiiii=Module["dynCall_viiffiifiiiiiii"]=Module["asm"]["wm"]).apply(null,arguments)};var dynCall_viifiiiiii=Module["dynCall_viifiiiiii"]=function(){return(dynCall_viifiiiiii=Module["dynCall_viifiiiiii"]=Module["asm"]["xm"]).apply(null,arguments)};var dynCall_viiifiiiiii=Module["dynCall_viiifiiiiii"]=function(){return(dynCall_viiifiiiiii=Module["dynCall_viiifiiiiii"]=Module["asm"]["ym"]).apply(null,arguments)};var dynCall_viiiifiiiiii=Module["dynCall_viiiifiiiiii"]=function(){return(dynCall_viiiifiiiiii=Module["dynCall_viiiifiiiiii"]=Module["asm"]["zm"]).apply(null,arguments)};var dynCall_viififiiiiii=Module["dynCall_viififiiiiii"]=function(){return(dynCall_viififiiiiii=Module["dynCall_viififiiiiii"]=Module["asm"]["Am"]).apply(null,arguments)};var dynCall_viiiffiifiiiiiii=Module["dynCall_viiiffiifiiiiiii"]=function(){return(dynCall_viiiffiifiiiiiii=Module["dynCall_viiiffiifiiiiiii"]=Module["asm"]["Bm"]).apply(null,arguments)};var dynCall_viiiiiifiiiiii=Module["dynCall_viiiiiifiiiiii"]=function(){return(dynCall_viiiiiifiiiiii=Module["dynCall_viiiiiifiiiiii"]=Module["asm"]["Cm"]).apply(null,arguments)};var dynCall_vififiii=Module["dynCall_vififiii"]=function(){return(dynCall_vififiii=Module["dynCall_vififiii"]=Module["asm"]["Dm"]).apply(null,arguments)};var dynCall_fiffi=Module["dynCall_fiffi"]=function(){return(dynCall_fiffi=Module["dynCall_fiffi"]=Module["asm"]["Em"]).apply(null,arguments)};var dynCall_viiiiiiiijiiii=Module["dynCall_viiiiiiiijiiii"]=function(){return(dynCall_viiiiiiiijiiii=Module["dynCall_viiiiiiiijiiii"]=Module["asm"]["Fm"]).apply(null,arguments)};var dynCall_fifii=Module["dynCall_fifii"]=function(){return(dynCall_fifii=Module["dynCall_fifii"]=Module["asm"]["Gm"]).apply(null,arguments)};var dynCall_iiiifiiii=Module["dynCall_iiiifiiii"]=function(){return(dynCall_iiiifiiii=Module["dynCall_iiiifiiii"]=Module["asm"]["Hm"]).apply(null,arguments)};var dynCall_iiffffi=Module["dynCall_iiffffi"]=function(){return(dynCall_iiffffi=Module["dynCall_iiffffi"]=Module["asm"]["Im"]).apply(null,arguments)};var dynCall_viifffi=Module["dynCall_viifffi"]=function(){return(dynCall_viifffi=Module["dynCall_viifffi"]=Module["asm"]["Jm"]).apply(null,arguments)};var dynCall_viifffffi=Module["dynCall_viifffffi"]=function(){return(dynCall_viifffffi=Module["dynCall_viifffffi"]=Module["asm"]["Km"]).apply(null,arguments)};var dynCall_viiffffffi=Module["dynCall_viiffffffi"]=function(){return(dynCall_viiffffffi=Module["dynCall_viiffffffi"]=Module["asm"]["Lm"]).apply(null,arguments)};var dynCall_viifffffffi=Module["dynCall_viifffffffi"]=function(){return(dynCall_viifffffffi=Module["dynCall_viifffffffi"]=Module["asm"]["Mm"]).apply(null,arguments)};var dynCall_viiffffffffi=Module["dynCall_viiffffffffi"]=function(){return(dynCall_viiffffffffi=Module["dynCall_viiffffffffi"]=Module["asm"]["Nm"]).apply(null,arguments)};var dynCall_vidiii=Module["dynCall_vidiii"]=function(){return(dynCall_vidiii=Module["dynCall_vidiii"]=Module["asm"]["Om"]).apply(null,arguments)};var dynCall_viiffffffffiii=Module["dynCall_viiffffffffiii"]=function(){return(dynCall_viiffffffffiii=Module["dynCall_viiffffffffiii"]=Module["asm"]["Pm"]).apply(null,arguments)};var dynCall_viiiiffffii=Module["dynCall_viiiiffffii"]=function(){return(dynCall_viiiiffffii=Module["dynCall_viiiiffffii"]=Module["asm"]["Qm"]).apply(null,arguments)};var 
dynCall_fiiiiii=Module["dynCall_fiiiiii"]=function(){return(dynCall_fiiiiii=Module["dynCall_fiiiiii"]=Module["asm"]["Rm"]).apply(null,arguments)};var dynCall_idiiii=Module["dynCall_idiiii"]=function(){return(dynCall_idiiii=Module["dynCall_idiiii"]=Module["asm"]["Sm"]).apply(null,arguments)};var dynCall_jjii=Module["dynCall_jjii"]=function(){return(dynCall_jjii=Module["dynCall_jjii"]=Module["asm"]["Tm"]).apply(null,arguments)};var dynCall_vijiiiiiii=Module["dynCall_vijiiiiiii"]=function(){return(dynCall_vijiiiiiii=Module["dynCall_vijiiiiiii"]=Module["asm"]["Um"]).apply(null,arguments)};var dynCall_vijiiiiiiii=Module["dynCall_vijiiiiiiii"]=function(){return(dynCall_vijiiiiiiii=Module["dynCall_vijiiiiiiii"]=Module["asm"]["Vm"]).apply(null,arguments)};var dynCall_jji=Module["dynCall_jji"]=function(){return(dynCall_jji=Module["dynCall_jji"]=Module["asm"]["Wm"]).apply(null,arguments)};var dynCall_jijii=Module["dynCall_jijii"]=function(){return(dynCall_jijii=Module["dynCall_jijii"]=Module["asm"]["Xm"]).apply(null,arguments)};var dynCall_jjiiii=Module["dynCall_jjiiii"]=function(){return(dynCall_jjiiii=Module["dynCall_jjiiii"]=Module["asm"]["Ym"]).apply(null,arguments)};var dynCall_jjiiiii=Module["dynCall_jjiiiii"]=function(){return(dynCall_jjiiiii=Module["dynCall_jjiiiii"]=Module["asm"]["Zm"]).apply(null,arguments)};var dynCall_iijiiiiii=Module["dynCall_iijiiiiii"]=function(){return(dynCall_iijiiiiii=Module["dynCall_iijiiiiii"]=Module["asm"]["_m"]).apply(null,arguments)};var dynCall_iiiijjii=Module["dynCall_iiiijjii"]=function(){return(dynCall_iiiijjii=Module["dynCall_iiiijjii"]=Module["asm"]["$m"]).apply(null,arguments)};var dynCall_jijjji=Module["dynCall_jijjji"]=function(){return(dynCall_jijjji=Module["dynCall_jijjji"]=Module["asm"]["an"]).apply(null,arguments)};var dynCall_jijjjii=Module["dynCall_jijjjii"]=function(){return(dynCall_jijjjii=Module["dynCall_jijjjii"]=Module["asm"]["bn"]).apply(null,arguments)};var dynCall_jjiii=Module["dynCall_jjiii"]=function(){return(dynCall_jjiii=Module["dynCall_jjiii"]=Module["asm"]["cn"]).apply(null,arguments)};var dynCall_ijiiii=Module["dynCall_ijiiii"]=function(){return(dynCall_ijiiii=Module["dynCall_ijiiii"]=Module["asm"]["dn"]).apply(null,arguments)};var dynCall_ijijiiiii=Module["dynCall_ijijiiiii"]=function(){return(dynCall_ijijiiiii=Module["dynCall_ijijiiiii"]=Module["asm"]["en"]).apply(null,arguments)};var dynCall_ijjjiii=Module["dynCall_ijjjiii"]=function(){return(dynCall_ijjjiii=Module["dynCall_ijjjiii"]=Module["asm"]["fn"]).apply(null,arguments)};var dynCall_vijjjiijii=Module["dynCall_vijjjiijii"]=function(){return(dynCall_vijjjiijii=Module["dynCall_vijjjiijii"]=Module["asm"]["gn"]).apply(null,arguments)};var dynCall_ijjjiijii=Module["dynCall_ijjjiijii"]=function(){return(dynCall_ijjjiijii=Module["dynCall_ijjjiijii"]=Module["asm"]["hn"]).apply(null,arguments)};var dynCall_vijiiiiii=Module["dynCall_vijiiiiii"]=function(){return(dynCall_vijiiiiii=Module["dynCall_vijiiiiii"]=Module["asm"]["jn"]).apply(null,arguments)};var dynCall_vijiiii=Module["dynCall_vijiiii"]=function(){return(dynCall_vijiiii=Module["dynCall_vijiiii"]=Module["asm"]["kn"]).apply(null,arguments)};var dynCall_jdi=Module["dynCall_jdi"]=function(){return(dynCall_jdi=Module["dynCall_jdi"]=Module["asm"]["ln"]).apply(null,arguments)};var dynCall_dji=Module["dynCall_dji"]=function(){return(dynCall_dji=Module["dynCall_dji"]=Module["asm"]["mn"]).apply(null,arguments)};var 
dynCall_jfi=Module["dynCall_jfi"]=function(){return(dynCall_jfi=Module["dynCall_jfi"]=Module["asm"]["nn"]).apply(null,arguments)};var dynCall_fji=Module["dynCall_fji"]=function(){return(dynCall_fji=Module["dynCall_fji"]=Module["asm"]["on"]).apply(null,arguments)};var dynCall_fdi=Module["dynCall_fdi"]=function(){return(dynCall_fdi=Module["dynCall_fdi"]=Module["asm"]["pn"]).apply(null,arguments)};var dynCall_dfi=Module["dynCall_dfi"]=function(){return(dynCall_dfi=Module["dynCall_dfi"]=Module["asm"]["qn"]).apply(null,arguments)};var dynCall_jidii=Module["dynCall_jidii"]=function(){return(dynCall_jidii=Module["dynCall_jidii"]=Module["asm"]["rn"]).apply(null,arguments)};var dynCall_jidi=Module["dynCall_jidi"]=function(){return(dynCall_jidi=Module["dynCall_jidi"]=Module["asm"]["sn"]).apply(null,arguments)};var dynCall_vijji=Module["dynCall_vijji"]=function(){return(dynCall_vijji=Module["dynCall_vijji"]=Module["asm"]["tn"]).apply(null,arguments)};var dynCall_ijiijii=Module["dynCall_ijiijii"]=function(){return(dynCall_ijiijii=Module["dynCall_ijiijii"]=Module["asm"]["un"]).apply(null,arguments)};var dynCall_vjjiiiii=Module["dynCall_vjjiiiii"]=function(){return(dynCall_vjjiiiii=Module["dynCall_vjjiiiii"]=Module["asm"]["vn"]).apply(null,arguments)};var dynCall_vjjii=Module["dynCall_vjjii"]=function(){return(dynCall_vjjii=Module["dynCall_vjjii"]=Module["asm"]["wn"]).apply(null,arguments)};var dynCall_ijiiji=Module["dynCall_ijiiji"]=function(){return(dynCall_ijiiji=Module["dynCall_ijiiji"]=Module["asm"]["xn"]).apply(null,arguments)};var dynCall_ijiiiii=Module["dynCall_ijiiiii"]=function(){return(dynCall_ijiiiii=Module["dynCall_ijiiiii"]=Module["asm"]["yn"]).apply(null,arguments)};var dynCall_ijiiiiji=Module["dynCall_ijiiiiji"]=function(){return(dynCall_ijiiiiji=Module["dynCall_ijiiiiji"]=Module["asm"]["zn"]).apply(null,arguments)};var dynCall_ijjiii=Module["dynCall_ijjiii"]=function(){return(dynCall_ijjiii=Module["dynCall_ijjiii"]=Module["asm"]["An"]).apply(null,arguments)};var dynCall_ddii=Module["dynCall_ddii"]=function(){return(dynCall_ddii=Module["dynCall_ddii"]=Module["asm"]["Bn"]).apply(null,arguments)};var dynCall_idiii=Module["dynCall_idiii"]=function(){return(dynCall_idiii=Module["dynCall_idiii"]=Module["asm"]["Cn"]).apply(null,arguments)};var dynCall_idiiiii=Module["dynCall_idiiiii"]=function(){return(dynCall_idiiiii=Module["dynCall_idiiiii"]=Module["asm"]["Dn"]).apply(null,arguments)};var dynCall_iidiii=Module["dynCall_iidiii"]=function(){return(dynCall_iidiii=Module["dynCall_iidiii"]=Module["asm"]["En"]).apply(null,arguments)};var dynCall_ifiiiii=Module["dynCall_ifiiiii"]=function(){return(dynCall_ifiiiii=Module["dynCall_ifiiiii"]=Module["asm"]["Fn"]).apply(null,arguments)};var dynCall_jjjii=Module["dynCall_jjjii"]=function(){return(dynCall_jjjii=Module["dynCall_jjjii"]=Module["asm"]["Gn"]).apply(null,arguments)};var dynCall_vdiii=Module["dynCall_vdiii"]=function(){return(dynCall_vdiii=Module["dynCall_vdiii"]=Module["asm"]["Hn"]).apply(null,arguments)};var dynCall_jdii=Module["dynCall_jdii"]=function(){return(dynCall_jdii=Module["dynCall_jdii"]=Module["asm"]["In"]).apply(null,arguments)};var dynCall_vijijji=Module["dynCall_vijijji"]=function(){return(dynCall_vijijji=Module["dynCall_vijijji"]=Module["asm"]["Jn"]).apply(null,arguments)};var dynCall_iijjji=Module["dynCall_iijjji"]=function(){return(dynCall_iijjji=Module["dynCall_iijjji"]=Module["asm"]["Kn"]).apply(null,arguments)};var 
dynCall_viijjji=Module["dynCall_viijjji"]=function(){return(dynCall_viijjji=Module["dynCall_viijjji"]=Module["asm"]["Ln"]).apply(null,arguments)};var dynCall_vdii=Module["dynCall_vdii"]=function(){return(dynCall_vdii=Module["dynCall_vdii"]=Module["asm"]["Mn"]).apply(null,arguments)};var dynCall_iiiijii=Module["dynCall_iiiijii"]=function(){return(dynCall_iiiijii=Module["dynCall_iiiijii"]=Module["asm"]["Nn"]).apply(null,arguments)};var dynCall_jijji=Module["dynCall_jijji"]=function(){return(dynCall_jijji=Module["dynCall_jijji"]=Module["asm"]["On"]).apply(null,arguments)};var dynCall_diddi=Module["dynCall_diddi"]=function(){return(dynCall_diddi=Module["dynCall_diddi"]=Module["asm"]["Pn"]).apply(null,arguments)};var dynCall_didi=Module["dynCall_didi"]=function(){return(dynCall_didi=Module["dynCall_didi"]=Module["asm"]["Qn"]).apply(null,arguments)};var dynCall_viiiijii=Module["dynCall_viiiijii"]=function(){return(dynCall_viiiijii=Module["dynCall_viiiijii"]=Module["asm"]["Rn"]).apply(null,arguments)};var dynCall_viiijji=Module["dynCall_viiijji"]=function(){return(dynCall_viiijji=Module["dynCall_viiijji"]=Module["asm"]["Sn"]).apply(null,arguments)};var dynCall_iijjii=Module["dynCall_iijjii"]=function(){return(dynCall_iijjii=Module["dynCall_iijjii"]=Module["asm"]["Tn"]).apply(null,arguments)};var dynCall_viijijii=Module["dynCall_viijijii"]=function(){return(dynCall_viijijii=Module["dynCall_viijijii"]=Module["asm"]["Un"]).apply(null,arguments)};var dynCall_viijijiii=Module["dynCall_viijijiii"]=function(){return(dynCall_viijijiii=Module["dynCall_viijijiii"]=Module["asm"]["Vn"]).apply(null,arguments)};var dynCall_vijiji=Module["dynCall_vijiji"]=function(){return(dynCall_vijiji=Module["dynCall_vijiji"]=Module["asm"]["Wn"]).apply(null,arguments)};var dynCall_viijiijiii=Module["dynCall_viijiijiii"]=function(){return(dynCall_viijiijiii=Module["dynCall_viijiijiii"]=Module["asm"]["Xn"]).apply(null,arguments)};var dynCall_viiiijiiii=Module["dynCall_viiiijiiii"]=function(){return(dynCall_viiiijiiii=Module["dynCall_viiiijiiii"]=Module["asm"]["Yn"]).apply(null,arguments)};var dynCall_jiiiiii=Module["dynCall_jiiiiii"]=function(){return(dynCall_jiiiiii=Module["dynCall_jiiiiii"]=Module["asm"]["Zn"]).apply(null,arguments)};var dynCall_di=Module["dynCall_di"]=function(){return(dynCall_di=Module["dynCall_di"]=Module["asm"]["_n"]).apply(null,arguments)};var dynCall_viijjii=Module["dynCall_viijjii"]=function(){return(dynCall_viijjii=Module["dynCall_viijjii"]=Module["asm"]["$n"]).apply(null,arguments)};var dynCall_vijjji=Module["dynCall_vijjji"]=function(){return(dynCall_vijjji=Module["dynCall_vijjji"]=Module["asm"]["ao"]).apply(null,arguments)};var dynCall_iiiiijii=Module["dynCall_iiiiijii"]=function(){return(dynCall_iiiiijii=Module["dynCall_iiiiijii"]=Module["asm"]["bo"]).apply(null,arguments)};var dynCall_iiijii=Module["dynCall_iiijii"]=function(){return(dynCall_iiijii=Module["dynCall_iiijii"]=Module["asm"]["co"]).apply(null,arguments)};var dynCall_iidii=Module["dynCall_iidii"]=function(){return(dynCall_iidii=Module["dynCall_iidii"]=Module["asm"]["eo"]).apply(null,arguments)};var dynCall_iidfii=Module["dynCall_iidfii"]=function(){return(dynCall_iidfii=Module["dynCall_iidfii"]=Module["asm"]["fo"]).apply(null,arguments)};var dynCall_iidfi=Module["dynCall_iidfi"]=function(){return(dynCall_iidfi=Module["dynCall_iidfi"]=Module["asm"]["go"]).apply(null,arguments)};var dynCall_iiddfi=Module["dynCall_iiddfi"]=function(){return(dynCall_iiddfi=Module["dynCall_iiddfi"]=Module["asm"]["ho"]).apply(null,arguments)};var 
dynCall_iijfii=Module["dynCall_iijfii"]=function(){return(dynCall_iijfii=Module["dynCall_iijfii"]=Module["asm"]["io"]).apply(null,arguments)};var dynCall_iijfi=Module["dynCall_iijfi"]=function(){return(dynCall_iijfi=Module["dynCall_iijfi"]=Module["asm"]["jo"]).apply(null,arguments)};var dynCall_iijjfi=Module["dynCall_iijjfi"]=function(){return(dynCall_iijjfi=Module["dynCall_iijjfi"]=Module["asm"]["ko"]).apply(null,arguments)};var dynCall_iiiiffiiiji=Module["dynCall_iiiiffiiiji"]=function(){return(dynCall_iiiiffiiiji=Module["dynCall_iiiiffiiiji"]=Module["asm"]["lo"]).apply(null,arguments)};var dynCall_iiidfii=Module["dynCall_iiidfii"]=function(){return(dynCall_iiidfii=Module["dynCall_iiidfii"]=Module["asm"]["mo"]).apply(null,arguments)};var dynCall_iiijfii=Module["dynCall_iiijfii"]=function(){return(dynCall_iiijfii=Module["dynCall_iiijfii"]=Module["asm"]["no"]).apply(null,arguments)};var dynCall_iiiiffiiiii=Module["dynCall_iiiiffiiiii"]=function(){return(dynCall_iiiiffiiiii=Module["dynCall_iiiiffiiiii"]=Module["asm"]["oo"]).apply(null,arguments)};var dynCall_iiiidfii=Module["dynCall_iiiidfii"]=function(){return(dynCall_iiiidfii=Module["dynCall_iiiidfii"]=Module["asm"]["po"]).apply(null,arguments)};var dynCall_iiiijfii=Module["dynCall_iiiijfii"]=function(){return(dynCall_iiiijfii=Module["dynCall_iiiijfii"]=Module["asm"]["qo"]).apply(null,arguments)};var dynCall_iiiiffii=Module["dynCall_iiiiffii"]=function(){return(dynCall_iiiiffii=Module["dynCall_iiiiffii"]=Module["asm"]["ro"]).apply(null,arguments)};var dynCall_jiiiiji=Module["dynCall_jiiiiji"]=function(){return(dynCall_jiiiiji=Module["dynCall_jiiiiji"]=Module["asm"]["so"]).apply(null,arguments)};var dynCall_fiiiifi=Module["dynCall_fiiiifi"]=function(){return(dynCall_fiiiifi=Module["dynCall_fiiiifi"]=Module["asm"]["to"]).apply(null,arguments)};var dynCall_iiiijiii=Module["dynCall_iiiijiii"]=function(){return(dynCall_iiiijiii=Module["dynCall_iiiijiii"]=Module["asm"]["uo"]).apply(null,arguments)};var dynCall_iiiij=Module["dynCall_iiiij"]=function(){return(dynCall_iiiij=Module["dynCall_iiiij"]=Module["asm"]["vo"]).apply(null,arguments)};var dynCall_fff=Module["dynCall_fff"]=function(){return(dynCall_fff=Module["dynCall_fff"]=Module["asm"]["wo"]).apply(null,arguments)};var dynCall_ijj=Module["dynCall_ijj"]=function(){return(dynCall_ijj=Module["dynCall_ijj"]=Module["asm"]["xo"]).apply(null,arguments)};var dynCall_vjji=Module["dynCall_vjji"]=function(){return(dynCall_vjji=Module["dynCall_vjji"]=Module["asm"]["yo"]).apply(null,arguments)};var dynCall_ij=Module["dynCall_ij"]=function(){return(dynCall_ij=Module["dynCall_ij"]=Module["asm"]["zo"]).apply(null,arguments)};var dynCall_vjiiiiiii=Module["dynCall_vjiiiiiii"]=function(){return(dynCall_vjiiiiiii=Module["dynCall_vjiiiiiii"]=Module["asm"]["Ao"]).apply(null,arguments)};var dynCall_vif=Module["dynCall_vif"]=function(){return(dynCall_vif=Module["dynCall_vif"]=Module["asm"]["Bo"]).apply(null,arguments)};var dynCall_vid=Module["dynCall_vid"]=function(){return(dynCall_vid=Module["dynCall_vid"]=Module["asm"]["Co"]).apply(null,arguments)};var dynCall_viiiiif=Module["dynCall_viiiiif"]=function(){return(dynCall_viiiiif=Module["dynCall_viiiiif"]=Module["asm"]["Do"]).apply(null,arguments)};var dynCall_viiiif=Module["dynCall_viiiif"]=function(){return(dynCall_viiiif=Module["dynCall_viiiif"]=Module["asm"]["Eo"]).apply(null,arguments)};var dynCall_viiiiiif=Module["dynCall_viiiiiif"]=function(){return(dynCall_viiiiiif=Module["dynCall_viiiiiif"]=Module["asm"]["Fo"]).apply(null,arguments)};var 
dynCall_iiif=Module["dynCall_iiif"]=function(){return(dynCall_iiif=Module["dynCall_iiif"]=Module["asm"]["Go"]).apply(null,arguments)};var dynCall_fif=Module["dynCall_fif"]=function(){return(dynCall_fif=Module["dynCall_fif"]=Module["asm"]["Ho"]).apply(null,arguments)};var dynCall_iiiiiifff=Module["dynCall_iiiiiifff"]=function(){return(dynCall_iiiiiifff=Module["dynCall_iiiiiifff"]=Module["asm"]["Io"]).apply(null,arguments)};var dynCall_iiiiiifiif=Module["dynCall_iiiiiifiif"]=function(){return(dynCall_iiiiiifiif=Module["dynCall_iiiiiifiif"]=Module["asm"]["Jo"]).apply(null,arguments)};var dynCall_iiiiiifiii=Module["dynCall_iiiiiifiii"]=function(){return(dynCall_iiiiiifiii=Module["dynCall_iiiiiifiii"]=Module["asm"]["Ko"]).apply(null,arguments)};var dynCall_iiiiiiifiif=Module["dynCall_iiiiiiifiif"]=function(){return(dynCall_iiiiiiifiif=Module["dynCall_iiiiiiifiif"]=Module["asm"]["Lo"]).apply(null,arguments)};var dynCall_fiff=Module["dynCall_fiff"]=function(){return(dynCall_fiff=Module["dynCall_fiff"]=Module["asm"]["Mo"]).apply(null,arguments)};var dynCall_fiiiiiifiifif=Module["dynCall_fiiiiiifiifif"]=function(){return(dynCall_fiiiiiifiifif=Module["dynCall_fiiiiiifiifif"]=Module["asm"]["No"]).apply(null,arguments)};var dynCall_fiiiiiifiiiif=Module["dynCall_fiiiiiifiiiif"]=function(){return(dynCall_fiiiiiifiiiif=Module["dynCall_fiiiiiifiiiif"]=Module["asm"]["Oo"]).apply(null,arguments)};var dynCall_iifiiiijii=Module["dynCall_iifiiiijii"]=function(){return(dynCall_iifiiiijii=Module["dynCall_iifiiiijii"]=Module["asm"]["Po"]).apply(null,arguments)};var dynCall_vifijii=Module["dynCall_vifijii"]=function(){return(dynCall_vifijii=Module["dynCall_vifijii"]=Module["asm"]["Qo"]).apply(null,arguments)};var dynCall_iiiifffiii=Module["dynCall_iiiifffiii"]=function(){return(dynCall_iiiifffiii=Module["dynCall_iiiifffiii"]=Module["asm"]["Ro"]).apply(null,arguments)};var dynCall_iiiifffffi=Module["dynCall_iiiifffffi"]=function(){return(dynCall_iiiifffffi=Module["dynCall_iiiifffffi"]=Module["asm"]["So"]).apply(null,arguments)};var dynCall_viffiiiif=Module["dynCall_viffiiiif"]=function(){return(dynCall_viffiiiif=Module["dynCall_viffiiiif"]=Module["asm"]["To"]).apply(null,arguments)};var dynCall_viffiifffffiii=Module["dynCall_viffiifffffiii"]=function(){return(dynCall_viffiifffffiii=Module["dynCall_viffiifffffiii"]=Module["asm"]["Uo"]).apply(null,arguments)};var dynCall_viffffiifffiiiiif=Module["dynCall_viffffiifffiiiiif"]=function(){return(dynCall_viffffiifffiiiiif=Module["dynCall_viffffiifffiiiiif"]=Module["asm"]["Vo"]).apply(null,arguments)};var dynCall_iiiifffffii=Module["dynCall_iiiifffffii"]=function(){return(dynCall_iiiifffffii=Module["dynCall_iiiifffffii"]=Module["asm"]["Wo"]).apply(null,arguments)};var dynCall_viiiiiiiiiiifii=Module["dynCall_viiiiiiiiiiifii"]=function(){return(dynCall_viiiiiiiiiiifii=Module["dynCall_viiiiiiiiiiifii"]=Module["asm"]["Xo"]).apply(null,arguments)};var dynCall_viff=Module["dynCall_viff"]=function(){return(dynCall_viff=Module["dynCall_viff"]=Module["asm"]["Yo"]).apply(null,arguments)};var dynCall_iiiifiiiii=Module["dynCall_iiiifiiiii"]=function(){return(dynCall_iiiifiiiii=Module["dynCall_iiiifiiiii"]=Module["asm"]["Zo"]).apply(null,arguments)};var dynCall_iiiiifiiiiif=Module["dynCall_iiiiifiiiiif"]=function(){return(dynCall_iiiiifiiiiif=Module["dynCall_iiiiifiiiiif"]=Module["asm"]["_o"]).apply(null,arguments)};var dynCall_viiifiiiii=Module["dynCall_viiifiiiii"]=function(){return(dynCall_viiifiiiii=Module["dynCall_viiifiiiii"]=Module["asm"]["$o"]).apply(null,arguments)};var 
dynCall_viiiifiiiiif=Module["dynCall_viiiifiiiiif"]=function(){return(dynCall_viiiifiiiiif=Module["dynCall_viiiifiiiiif"]=Module["asm"]["ap"]).apply(null,arguments)};var dynCall_iifff=Module["dynCall_iifff"]=function(){return(dynCall_iifff=Module["dynCall_iifff"]=Module["asm"]["bp"]).apply(null,arguments)};var dynCall_iif=Module["dynCall_iif"]=function(){return(dynCall_iif=Module["dynCall_iif"]=Module["asm"]["cp"]).apply(null,arguments)};var dynCall_viijijj=Module["dynCall_viijijj"]=function(){return(dynCall_viijijj=Module["dynCall_viijijj"]=Module["asm"]["dp"]).apply(null,arguments)};var dynCall_viijj=Module["dynCall_viijj"]=function(){return(dynCall_viijj=Module["dynCall_viijj"]=Module["asm"]["ep"]).apply(null,arguments)};var dynCall_viiiij=Module["dynCall_viiiij"]=function(){return(dynCall_viiiij=Module["dynCall_viiiij"]=Module["asm"]["fp"]).apply(null,arguments)};var dynCall_iiiiiifffiiifiii=Module["dynCall_iiiiiifffiiifiii"]=function(){return(dynCall_iiiiiifffiiifiii=Module["dynCall_iiiiiifffiiifiii"]=Module["asm"]["gp"]).apply(null,arguments)};var dynCall_fiiiif=Module["dynCall_fiiiif"]=function(){return(dynCall_fiiiif=Module["dynCall_fiiiif"]=Module["asm"]["hp"]).apply(null,arguments)};var dynCall_viffff=Module["dynCall_viffff"]=function(){return(dynCall_viffff=Module["dynCall_viffff"]=Module["asm"]["ip"]).apply(null,arguments)};var dynCall_vifff=Module["dynCall_vifff"]=function(){return(dynCall_vifff=Module["dynCall_vifff"]=Module["asm"]["jp"]).apply(null,arguments)};var dynCall_viifff=Module["dynCall_viifff"]=function(){return(dynCall_viifff=Module["dynCall_viifff"]=Module["asm"]["kp"]).apply(null,arguments)};var dynCall_vij=Module["dynCall_vij"]=function(){return(dynCall_vij=Module["dynCall_vij"]=Module["asm"]["lp"]).apply(null,arguments)};var dynCall_vf=Module["dynCall_vf"]=function(){return(dynCall_vf=Module["dynCall_vf"]=Module["asm"]["mp"]).apply(null,arguments)};var dynCall_vffff=Module["dynCall_vffff"]=function(){return(dynCall_vffff=Module["dynCall_vffff"]=Module["asm"]["np"]).apply(null,arguments)};var dynCall_vff=Module["dynCall_vff"]=function(){return(dynCall_vff=Module["dynCall_vff"]=Module["asm"]["op"]).apply(null,arguments)};var dynCall_vfff=Module["dynCall_vfff"]=function(){return(dynCall_vfff=Module["dynCall_vfff"]=Module["asm"]["pp"]).apply(null,arguments)};var dynCall_f=Module["dynCall_f"]=function(){return(dynCall_f=Module["dynCall_f"]=Module["asm"]["qp"]).apply(null,arguments)};var dynCall_ff=Module["dynCall_ff"]=function(){return(dynCall_ff=Module["dynCall_ff"]=Module["asm"]["rp"]).apply(null,arguments)};var dynCall_vifffff=Module["dynCall_vifffff"]=function(){return(dynCall_vifffff=Module["dynCall_vifffff"]=Module["asm"]["sp"]).apply(null,arguments)};var dynCall_viififf=Module["dynCall_viififf"]=function(){return(dynCall_viififf=Module["dynCall_viififf"]=Module["asm"]["tp"]).apply(null,arguments)};var dynCall_vififfi=Module["dynCall_vififfi"]=function(){return(dynCall_vififfi=Module["dynCall_vififfi"]=Module["asm"]["up"]).apply(null,arguments)};var dynCall_iiififiii=Module["dynCall_iiififiii"]=function(){return(dynCall_iiififiii=Module["dynCall_iiififiii"]=Module["asm"]["vp"]).apply(null,arguments)};var dynCall_fiif=Module["dynCall_fiif"]=function(){return(dynCall_fiif=Module["dynCall_fiif"]=Module["asm"]["wp"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiffffiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiffffiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiii"]=Module["asm"]["xp"]).apply(null,arguments)};var 
dynCall_viiiiiiiijiii=Module["dynCall_viiiiiiiijiii"]=function(){return(dynCall_viiiiiiiijiii=Module["dynCall_viiiiiiiijiii"]=Module["asm"]["yp"]).apply(null,arguments)};function invoke_iiiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_iiiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vii(index,a1,a2){var sp=stackSave();try{dynCall_vii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iii(index,a1,a2){var sp=stackSave();try{return dynCall_iii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiii(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_fiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_fiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viif(index,a1,a2,a3){var sp=stackSave();try{dynCall_viif(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vi(index,a1){var sp=stackSave();try{dynCall_vi(index,a1)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viii(index,a1,a2,a3){var sp=stackSave();try{dynCall_viii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_ii(index,a1){var sp=stackSave();try{return dynCall_ii(index,a1)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_v(index){var sp=stackSave();try{dynCall_v(index)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_i(index){var sp=stackSave();try{return dynCall_i(index)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiiiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiiiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iiiiiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{return dynCall_iiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12){var sp=stackSave();try{return dynCall_iiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{dynCall_viiiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw 
e;_setThrew(1,0)}}function invoke_viiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{dynCall_viiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiiiifiifiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14){var sp=stackSave();try{dynCall_viiiiiiifiifiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_fii(index,a1,a2){var sp=stackSave();try{return dynCall_fii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiiiiffffiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14){var sp=stackSave();try{dynCall_viiiiiiiffffiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viifi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viifi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiifi(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiifi(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8){var sp=stackSave();try{dynCall_viiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_fiiif(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_fiiif(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_dii(index,a1,a2){var sp=stackSave();try{return dynCall_dii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_diiid(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_diiid(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiff(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viiff(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_ddiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_ddiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiiiiiifi(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11){var sp=stackSave();try{dynCall_viiiiiiiiifi(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{dynCall_viiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{return dynCall_iiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vifi(index,a1,a2,a3){var sp=stackSave();try{dynCall_vifi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_fi(index,a1){var sp=stackSave();try{return dynCall_fi(index,a1)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function 
invoke_viiif(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viiif(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiffi(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiffi(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_fiiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_fiiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiifi(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiifi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vidi(index,a1,a2,a3){var sp=stackSave();try{dynCall_vidi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viidi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viidi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8){var sp=stackSave();try{return dynCall_iiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vifii(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_vifii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiifi(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiifi(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiiiifddfiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14){var sp=stackSave();try{dynCall_viiiiiiifddfiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viffi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viffi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiij(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiij(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_j(index){var sp=stackSave();try{return dynCall_j(index)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iij(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iij(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiijiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiijiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jii(index,a1,a2){var sp=stackSave();try{return dynCall_jii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_ji(index,a1){var sp=stackSave();try{return dynCall_ji(index,a1)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iji(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jjji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jjji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jiiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return 
dynCall_jiiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiiiiifjjfiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,a16){var sp=stackSave();try{dynCall_viiiiiiifjjfiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,a16)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vijii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_vijii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiij(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiij(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jiiij(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jiiij(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_jiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiji(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11){var sp=stackSave();try{return dynCall_iiiiiiiiiji(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vji(index,a1,a2,a3){var sp=stackSave();try{dynCall_vji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viij(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viij(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viji(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viiiji(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiji(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jijiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_jijiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iijii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_iijii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{return dynCall_jiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_ijji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_ijji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jiji(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_jiji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iijji(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iijji(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_jiiji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jiiji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw 
e;_setThrew(1,0)}}function invoke_jijj(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jijj(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vijiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_vijiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vjjjiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{dynCall_vjjjiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_vjiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{dynCall_vjiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_viijiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{dynCall_viijiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiji(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}function invoke_iiiijii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiiijii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0&&e!=="longjmp")throw e;_setThrew(1,0)}}Module["ccall"]=ccall;Module["cwrap"]=cwrap;Module["stackTrace"]=stackTrace;Module["addRunDependency"]=addRunDependency;Module["removeRunDependency"]=removeRunDependency;Module["FS_createPath"]=FS.createPath;Module["FS_createDataFile"]=FS.createDataFile;Module["stackTrace"]=stackTrace;var calledRun;function ExitStatus(status){this.name="ExitStatus";this.message="Program terminated with exit("+status+")";this.status=status}var calledMain=false;dependenciesFulfilled=function runCaller(){if(!calledRun)run();if(!calledRun)dependenciesFulfilled=runCaller};function callMain(args){var entryFunction=Module["_main"];args=args||[];var argc=args.length+1;var argv=stackAlloc((argc+1)*4);HEAP32[argv>>2]=allocateUTF8OnStack(thisProgram);for(var i=1;i>2)+i]=allocateUTF8OnStack(args[i-1])}HEAP32[(argv>>2)+argc]=0;try{var ret=entryFunction(argc,argv);exit(ret,true)}catch(e){if(e instanceof ExitStatus){return}else if(e=="unwind"){return}else{var toLog=e;if(e&&typeof e==="object"&&e.stack){toLog=[e,e.stack]}err("exception thrown: "+toLog);quit_(1,e)}}finally{calledMain=true}}function run(args){args=args||arguments_;if(runDependencies>0){return}preRun();if(runDependencies>0){return}function doRun(){if(calledRun)return;calledRun=true;Module["calledRun"]=true;if(ABORT)return;initRuntime();preMain();if(Module["onRuntimeInitialized"])Module["onRuntimeInitialized"]();if(shouldRunNow)callMain(args);postRun()}if(Module["setStatus"]){Module["setStatus"]("Running...");setTimeout(function(){setTimeout(function(){Module["setStatus"]("")},1);doRun()},1)}else{doRun()}}Module["run"]=run;function exit(status,implicit){EXITSTATUS=status;if(implicit&&keepRuntimeAlive()&&status===0){return}if(keepRuntimeAlive()){}else{exitRuntime();if(Module["onExit"])Module["onExit"](status);ABORT=true}quit_(status,new ExitStatus(status))}if(Module["preInit"]){if(typeof Module["preInit"]=="function")Module["preInit"]=[Module["preInit"]];while(Module["preInit"].length>0){Module["preInit"].pop()()}}var shouldRunNow=true;if(Module["noInitialRun"])shouldRunNow=false;run(); - -} diff --git a/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/README.md 
b/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/README.md deleted file mode 100644 index 657092660eae42d20f67647417623b8b8cb7b66c..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/byte_level_bpe/README.md +++ /dev/null @@ -1,88 +0,0 @@ -# Neural Machine Translation with Byte-Level Subwords - -https://arxiv.org/abs/1909.03341 - -We provide an implementation of byte-level byte-pair encoding (BBPE), taking IWSLT 2017 Fr-En translation as -example. - -## Data -Get data and generate fairseq binary dataset: -```bash -bash ./get_data.sh -``` - -## Model Training -Train Transformer model with Bi-GRU embedding contextualization (implemented in `gru_transformer.py`): -```bash -# VOCAB=bytes -# VOCAB=chars -VOCAB=bbpe2048 -# VOCAB=bpe2048 -# VOCAB=bbpe4096 -# VOCAB=bpe4096 -# VOCAB=bpe16384 -``` -```bash -fairseq-train "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --arch gru_transformer --encoder-layers 2 --decoder-layers 2 --dropout 0.3 --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --log-format 'simple' --log-interval 100 --save-dir "checkpoints/${VOCAB}" \ - --batch-size 100 --max-update 100000 --update-freq 2 -``` - -## Generation -`fairseq-generate` requires bytes (BBPE) decoder to convert byte-level representation back to characters: -```bash -# BPE=--bpe bytes -# BPE=--bpe characters -BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe2048.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe2048.model -# BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe16384.model -``` - -```bash -fairseq-generate "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --source-lang fr --gen-subset test --sacrebleu --path "checkpoints/${VOCAB}/checkpoint_last.pt" \ - --tokenizer moses --moses-target-lang en ${BPE} -``` -When using `fairseq-interactive`, bytes (BBPE) encoder/decoder is required to tokenize input data and detokenize model predictions: -```bash -fairseq-interactive "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --path "checkpoints/${VOCAB}/checkpoint_last.pt" --input data/test.fr --tokenizer moses --moses-source-lang fr \ - --moses-target-lang en ${BPE} --buffer-size 1000 --max-tokens 10000 -``` - -## Results -| Vocabulary | Model | BLEU | -|:-------------:|:-------------:|:-------------:| -| Joint BPE 16k ([Kudo, 2018](https://arxiv.org/abs/1804.10959)) | 512d LSTM 2+2 | 33.81 | -| Joint BPE 16k | Transformer base 2+2 (w/ GRU) | 36.64 (36.72) | -| Joint BPE 4k | Transformer base 2+2 (w/ GRU) | 35.49 (36.10) | -| Joint BBPE 4k | Transformer base 2+2 (w/ GRU) | 35.61 (35.82) | -| Joint BPE 2k | Transformer base 2+2 (w/ GRU) | 34.87 (36.13) | -| Joint BBPE 2k | Transformer base 2+2 (w/ GRU) | 34.98 (35.43) | -| Characters | Transformer base 2+2 (w/ GRU) | 31.78 (33.30) | -| Bytes | Transformer base 2+2 (w/ GRU) | 31.57 (33.62) | - - -## Citation -``` -@misc{wang2019neural, - title={Neural Machine Translation with Byte-Level Subwords}, - author={Changhan Wang and Kyunghyun Cho and Jiatao Gu}, - year={2019}, - eprint={1909.03341}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` - - -## 
Contact -Changhan Wang ([changhan@fb.com](mailto:changhan@fb.com)), -Kyunghyun Cho ([kyunghyuncho@fb.com](mailto:kyunghyuncho@fb.com)), -Jiatao Gu ([jgu@fb.com](mailto:jgu@fb.com)) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/model_gottbert.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/model_gottbert.py deleted file mode 100644 index 2e8c66354ac7ce7309226bb091a7baa4776fbfdc..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/roberta/model_gottbert.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -GottBERT: a pure German Language Model -""" - -from fairseq.models import register_model - -from .hub_interface import RobertaHubInterface -from .model import RobertaModel - - -@register_model('gottbert') -class GottbertModel(RobertaModel): - - @classmethod - def hub_models(cls): - return { - 'gottbert-base': 'https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz', - } - - @classmethod - def from_pretrained(cls, - model_name_or_path, - checkpoint_file='model.pt', - data_name_or_path='.', - bpe='hf_byte_bpe', - bpe_vocab='vocab.json', - bpe_merges='merges.txt', - bpe_add_prefix_space=False, - **kwargs - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - bpe_vocab=bpe_vocab, - bpe_merges=bpe_merges, - bpe_add_prefix_space=bpe_add_prefix_space, - **kwargs, - ) - return RobertaHubInterface(x['args'], x['task'], x['models'][0]) diff --git a/spaces/IISRFactCheck/claim_detection/code/do_predict.py b/spaces/IISRFactCheck/claim_detection/code/do_predict.py deleted file mode 100644 index 8fc4f5e4b3d386c7f9430adc79638764bca67d83..0000000000000000000000000000000000000000 --- a/spaces/IISRFactCheck/claim_detection/code/do_predict.py +++ /dev/null @@ -1,187 +0,0 @@ -from args import args, config -from items_dataset import items_dataset -from torch.utils.data import DataLoader -from models import Model_Crf, Model_Softmax -from transformers import AutoTokenizer -from tqdm import tqdm -import prediction -import torch -import math - -directory = args.SAVE_MODEL_PATH -model_name = "roberta_CRF.pt" -device = torch.device("cuda", 0) if torch.cuda.is_available() else torch.device("cpu") -model_crf = Model_Crf(config).to(device) -model_crf.load_state_dict( - state_dict=torch.load(directory + model_name, map_location=device) -) - -model_name = "roberta_softmax.pt" -device = torch.device("cuda", 0) if torch.cuda.is_available() else torch.device("cpu") -model_roberta = Model_Softmax(config).to(device) -model_roberta.load_state_dict( - state_dict=torch.load(directory + model_name, map_location=device) -) - - -def prepare_span_data(dataset): - for sample in dataset: - spans = items_dataset.cal_agreement_span( - None, - agreement_table=sample["predict_sentence_table"], - min_agree=1, - max_agree=2, - ) - sample["span_labels"] = spans - sample["original_text"] = sample["text_a"] - del sample["text_a"] - - -def rank_spans(test_loader, device, model, reverse=True): - """Calculate each span probability by e**(word average log likelihood)""" - model.eval() - result = [] - - for i, test_batch in enumerate(tqdm(test_loader)): - batch_text = test_batch["batch_text"] - input_ids = test_batch["input_ids"].to(device) - token_type_ids = 
test_batch["token_type_ids"].to(device) - attention_mask = test_batch["attention_mask"].to(device) - labels = test_batch["labels"] - crf_mask = test_batch["crf_mask"].to(device) - sample_mapping = test_batch["overflow_to_sample_mapping"] - output = model( - input_ids=input_ids, - token_type_ids=token_type_ids, - attention_mask=attention_mask, - labels=None, - crf_mask=crf_mask, - ) - output = torch.nn.functional.softmax(output[0], dim=-1) - - # make result of every sample - sample_id = 0 - sample_result = { - "original_text": test_batch["batch_text"][sample_id], - "span_ranked": [], - } - for batch_id in range(len(sample_mapping)): - change_sample = False - - # make sure status - if sample_id != sample_mapping[batch_id]: - change_sample = True - if change_sample: - sample_id = sample_mapping[batch_id] - result.append(sample_result) - sample_result = { - "original_text": test_batch["batch_text"][sample_id], - "span_ranked": [], - } - - encoded_spans = items_dataset.cal_agreement_span( - None, agreement_table=labels[batch_id], min_agree=1, max_agree=2 - ) - # print(encoded_spans) - for encoded_span in encoded_spans: - # calculate span loss - span_lenght = encoded_span[1] - encoded_span[0] - # print(span_lenght) - span_prob_table = torch.log( - output[batch_id][encoded_span[0] : encoded_span[1]] - ) - if ( - not change_sample and encoded_span[0] == 0 and batch_id != 0 - ): # span cross two tensors - span_loss += span_prob_table[0][1] # Begin - else: - span_loss = span_prob_table[0][1] # Begin - for token_id in range(1, span_prob_table.shape[0]): - span_loss += span_prob_table[token_id][2] # Inside - span_loss /= span_lenght - - # span decode - decode_start = test_batch[batch_id].token_to_chars(encoded_span[0] + 1)[ - 0 - ] - decode_end = test_batch[batch_id].token_to_chars(encoded_span[1])[0] + 1 - # print((decode_start, decode_end)) - span_text = test_batch["batch_text"][sample_mapping[batch_id]][ - decode_start:decode_end - ] - if ( - not change_sample and encoded_span[0] == 0 and batch_id != 0 - ): # span cross two tensors - presample = sample_result["span_ranked"].pop(-1) - sample_result["span_ranked"].append( - [presample[0] + span_text, math.e ** float(span_loss)] - ) - else: - sample_result["span_ranked"].append( - [span_text, math.e ** float(span_loss)] - ) - result.append(sample_result) - - # sorted spans by probability - # for sample in result: - # sample["span_ranked"] = sorted( - # sample["span_ranked"], key=lambda x: x[1], reverse=reverse - # ) - return result - - -def predict_single(text): - input_dict = [{"span_labels": []}] - input_dict[0]["original_text"] = text - tokenizer = AutoTokenizer.from_pretrained( - args.pre_model_name, add_prefix_space=True - ) - prediction_dataset = items_dataset(tokenizer, input_dict, args.label_dict) - prediction_loader = DataLoader( - prediction_dataset, - batch_size=args.batch_size, - shuffle=True, - collate_fn=prediction_dataset.collate_fn, - ) - predict_data = prediction.test_predict(prediction_loader, device, model_crf) - prediction.add_sentence_table(predict_data) - - prepare_span_data(predict_data) - tokenizer = AutoTokenizer.from_pretrained( - args.pre_model_name, add_prefix_space=True - ) - prediction_dataset = items_dataset(tokenizer, predict_data, args.label_dict) - prediction_loader = DataLoader( - prediction_dataset, - batch_size=args.batch_size, - shuffle=False, - collate_fn=prediction_dataset.collate_fn, - ) - span_ranked = rank_spans(prediction_loader, device, model_roberta) - # for sample in span_ranked: - # 
print(sample["original_text"]) - # print(sample["span_ranked"]) - - result = [] - sample = span_ranked[0] - orig = sample["original_text"] - cur = 0 - for s, score in sample["span_ranked"]: - # print() - # print('ORIG', repr(orig)) - # print('CCUR', repr(orig[cur:])) - # print('SSSS', repr(s)) - # print() - end = orig.index(s, cur) - if cur != end: - result.append([orig[cur:end], 0]) - result.append([s, score]) - cur = end + len(s) - if cur < len(orig): - result.append([orig[cur:], 0]) - return result - - -if __name__ == "__main__": - s = """貓咪犯錯後,以下5種懲罰方法很有效,飼主可以試試樂享網 2021-03-06 繼續閱讀 繼續閱讀 繼續閱讀 繼續閱讀 繼續閱讀 貓咪雖然高冷,但也是會犯錯的,那貓咪犯錯後,怎麼懲罰它才最有效呢?今天就來說一些懲罰貓咪最有效的5個方法!1、把痛感形成條件反射 這裡說的是「痛感」,而不是「暴打」。在貓咪犯錯後,寵主不需要打它,可以彈鼻頭或者是輕拍它的頭頂,給它造成痛感,這樣讓貓咪有一些畏懼心理,知道你在懲罰它。這樣時間長了,貓咪就會形成條件反射,以後就會少犯錯了。 2、大聲呵斥比起打貓,大聲呵斥貓咪會更加有效。因為貓咪對聲音很敏感,它能從主人的語氣中判斷主人的情緒,當大聲呵斥它的時候,它往往會楞一下,這時你繼續大聲呵斥它,那它就會明白你不允許它做這件事,這樣犯錯地方幾率就會減少了。 3、限制自由限制自由說白了,就是把貓咪關進籠子裡。因為貓咪都是很愛外出玩耍,當它犯錯咯,主人可以把它關進籠子裡,不搭理它,讓它自己反思。但要注意,這個方法不能經常用,而且不能把貓咪關進籠子太久。 4、利用水都知道貓咪很怕水的,所以當貓咪犯錯後,寵主也可以利用水來懲罰貓咪,這也是很效果的方法。寵主可以給貓咪臉上或是頭頂噴一些水,從而讓貓知道這個行為是錯誤的,以後就不會再犯了。 5、冷淡雖然貓咪不是很粘主人,但它還是很愛主人的,所以在貓咪犯錯後,寵主也可以採取冷淡的方法來懲罰貓。對貓咪採取不理睬、不靠近、不擁抱等策略,這樣貓咪就會知道自己錯了。當然懲罰的時間不要太長,不然貓咪就會以為你不愛它了。""" - print(predict_single(s)) diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/preprocess_v2.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/preprocess_v2.py deleted file mode 100644 index 38fa0e78657129fcb466d5ca2ae5fa0355fb0997..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/preprocess_v2.py +++ /dev/null @@ -1,151 +0,0 @@ -import os -import argparse -import json -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--add_auxiliary_data", type=bool, help="Whether to add extra data as fine-tuning helper") - parser.add_argument("--languages", default="CJE") - args = parser.parse_args() - if args.languages == "CJE": - langs = ["[ZH]", "[JA]", "[EN]"] - elif args.languages == "CJ": - langs = ["[ZH]", "[JA]"] - elif args.languages == "C": - langs = ["[ZH]"] - new_annos = [] - # Source 1: transcribed short audios - if os.path.exists("short_character_anno.txt"): - with open("short_character_anno.txt", 'r', encoding='utf-8') as f: - short_character_anno = f.readlines() - new_annos += short_character_anno - # Source 2: transcribed long audio segments - if os.path.exists("long_character_anno.txt"): - with open("long_character_anno.txt", 'r', encoding='utf-8') as f: - long_character_anno = f.readlines() - new_annos += long_character_anno - - # Get all speaker names - speakers = [] - for line in new_annos: - path, speaker, text = line.split("|") - if speaker not in speakers: - speakers.append(speaker) - assert (len(speakers) != 0), "No audio file found. Please check your uploaded file structure." 
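# Illustrative sketch of the annotation format assumed above (the file and
# speaker names here are made-up examples, not taken from the project):
#   ./custom_voices/alice_0001.wav|alice|[ZH]你好[ZH]
# so line.split("|") yields (path, speaker, text), and each distinct speaker
# is later mapped to an integer ID, roughly:
#   speakers = ["alice", "bob"]
#   speaker2id = {s: i for i, s in enumerate(speakers)}   # {"alice": 0, "bob": 1}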
- # Source 3 (Optional): sampled audios as extra training helpers - if args.add_auxiliary_data: - with open("sampled_audio4ft.txt", 'r', encoding='utf-8') as f: - old_annos = f.readlines() - # filter old_annos according to supported languages - filtered_old_annos = [] - for line in old_annos: - for lang in langs: - if lang in line: - filtered_old_annos.append(line) - old_annos = filtered_old_annos - for line in old_annos: - path, speaker, text = line.split("|") - if speaker not in speakers: - speakers.append(speaker) - num_old_voices = len(old_annos) - num_new_voices = len(new_annos) - # STEP 1: balance number of new & old voices - cc_duplicate = num_old_voices // num_new_voices - if cc_duplicate == 0: - cc_duplicate = 1 - - - # STEP 2: modify config file - with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f: - hps = json.load(f) - - # assign ids to new speakers - speaker2id = {} - for i, speaker in enumerate(speakers): - speaker2id[speaker] = i - # modify n_speakers - hps['data']["n_speakers"] = len(speakers) - # overwrite speaker names - hps['speakers'] = speaker2id - hps['train']['log_interval'] = 100 - hps['train']['eval_interval'] = 1000 - hps['train']['batch_size'] = 16 - hps['data']['training_files'] = "final_annotation_train.txt" - hps['data']['validation_files'] = "final_annotation_val.txt" - # save modified config - with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f: - json.dump(hps, f, indent=2) - - # STEP 3: clean annotations, replace speaker names with assigned speaker IDs - import text - cleaned_new_annos = [] - for i, line in enumerate(new_annos): - path, speaker, txt = line.split("|") - if len(txt) > 150: - continue - cleaned_text = text._clean_text(txt, hps['data']['text_cleaners']) - cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - cleaned_new_annos.append(path + "|" + str(speaker2id[speaker]) + "|" + cleaned_text) - cleaned_old_annos = [] - for i, line in enumerate(old_annos): - path, speaker, txt = line.split("|") - if len(txt) > 150: - continue - cleaned_text = text._clean_text(txt, hps['data']['text_cleaners']) - cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - cleaned_old_annos.append(path + "|" + str(speaker2id[speaker]) + "|" + cleaned_text) - # merge with old annotation - final_annos = cleaned_old_annos + cc_duplicate * cleaned_new_annos - # save annotation file - with open("final_annotation_train.txt", 'w', encoding='utf-8') as f: - for line in final_annos: - f.write(line) - # save annotation file for validation - with open("final_annotation_val.txt", 'w', encoding='utf-8') as f: - for line in cleaned_new_annos: - f.write(line) - print("finished") - else: - # Do not add extra helper data - # STEP 1: modify config file - with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f: - hps = json.load(f) - - # assign ids to new speakers - speaker2id = {} - for i, speaker in enumerate(speakers): - speaker2id[speaker] = i - # modify n_speakers - hps['data']["n_speakers"] = len(speakers) - # overwrite speaker names - hps['speakers'] = speaker2id - hps['train']['log_interval'] = 10 - hps['train']['eval_interval'] = 100 - hps['train']['batch_size'] = 16 - hps['data']['training_files'] = "final_annotation_train.txt" - hps['data']['validation_files'] = "final_annotation_val.txt" - # save modified config - with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f: - json.dump(hps, f, indent=2) - - # STEP 2: clean annotations, replace speaker names with 
assigned speaker IDs - import text - - cleaned_new_annos = [] - for i, line in enumerate(new_annos): - path, speaker, txt = line.split("|") - if len(txt) > 150: - continue - cleaned_text = text._clean_text(txt, hps['data']['text_cleaners']).replace("[ZH]", "") - cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - cleaned_new_annos.append(path + "|" + str(speaker2id[speaker]) + "|" + cleaned_text) - - final_annos = cleaned_new_annos - # save annotation file - with open("final_annotation_train.txt", 'w', encoding='utf-8') as f: - for line in final_annos: - f.write(line) - # save annotation file for validation - with open("final_annotation_val.txt", 'w', encoding='utf-8') as f: - for line in cleaned_new_annos: - f.write(line) - print("finished") \ No newline at end of file diff --git a/spaces/Illumotion/Koboldcpp/examples/server/public/index.html b/spaces/Illumotion/Koboldcpp/examples/server/public/index.html deleted file mode 100644 index 1bf2a8b3a0a0358f82eb19dad8f85e2139bf8f11..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/server/public/index.html +++ /dev/null @@ -1,884 +0,0 @@ - - - - - - - llama.cpp - chat - - - - - - - -
    -
    - - - diff --git a/spaces/JammyMachina/the-jam-machine-app/constants.py b/spaces/JammyMachina/the-jam-machine-app/constants.py deleted file mode 100644 index 067778dc63be191289d7a90cbd124b7e835c28e4..0000000000000000000000000000000000000000 --- a/spaces/JammyMachina/the-jam-machine-app/constants.py +++ /dev/null @@ -1,128 +0,0 @@ -# fmt: off -# Instrument mapping and mapping functions -INSTRUMENT_CLASSES = [ - {"name": "Piano", "program_range": range(0, 8), "family_number": 0}, - {"name": "Chromatic Percussion", "program_range": range(8, 16), "family_number": 1}, - {"name": "Organ", "program_range": range(16, 24), "family_number": 2}, - {"name": "Guitar", "program_range": range(24, 32), "family_number": 3}, - {"name": "Bass", "program_range": range(32, 40), "family_number": 4}, - {"name": "Strings", "program_range": range(40, 48), "family_number": 5}, - {"name": "Ensemble", "program_range": range(48, 56), "family_number": 6}, - {"name": "Brass", "program_range": range(56, 64), "family_number": 7}, - {"name": "Reed", "program_range": range(64, 72), "family_number": 8}, - {"name": "Pipe", "program_range": range(72, 80), "family_number": 9}, - {"name": "Synth Lead", "program_range": range(80, 88), "family_number": 10}, - {"name": "Synth Pad", "program_range": range(88, 96), "family_number": 11}, - {"name": "Synth Effects", "program_range": range(96, 104), "family_number": 12}, - {"name": "Ethnic", "program_range": range(104, 112), "family_number": 13}, - {"name": "Percussive", "program_range": range(112, 120), "family_number": 14}, - {"name": "Sound Effects", "program_range": range(120, 128), "family_number": 15,}, -] -# fmt: on - -# Instrument mapping for decodiing our midi sequence into midi instruments of our choice -INSTRUMENT_TRANSFER_CLASSES = [ - { - "name": "Piano", - "program_range": [4], - "family_number": 0, - "transfer_to": "Electric Piano 1", - }, - { - "name": "Chromatic Percussion", - "program_range": [11], - "family_number": 1, - "transfer_to": "Vibraphone", - }, - { - "name": "Organ", - "program_range": [17], - "family_number": 2, - "transfer_to": "Percussive Organ", - }, - { - "name": "Guitar", - "program_range": [80], - "family_number": 3, - "transfer_to": "Synth Lead Square", - }, - { - "name": "Bass", - "program_range": [38], - "family_number": 4, - "transfer_to": "Synth Bass 1", - }, - { - "name": "Strings", - "program_range": [50], - "family_number": 5, - "transfer_to": "Synth Strings 1", - }, - { - "name": "Ensemble", - "program_range": [51], - "family_number": 6, - "transfer_to": "Synth Strings 2", - }, - { - "name": "Brass", - "program_range": [63], - "family_number": 7, - "transfer_to": "Synth Brass 1", - }, - { - "name": "Reed", - "program_range": [64], - "family_number": 8, - "transfer_to": "Synth Brass 2", - }, - { - "name": "Pipe", - "program_range": [82], - "family_number": 9, - "transfer_to": "Synth Lead Calliope", - }, - { - "name": "Synth Lead", - "program_range": [81], # Synth Lead Sawtooth - "family_number": 10, - "transfer_to": "Synth Lead Sawtooth", - }, - { - "name": "Synth Pad", - "program_range": range(88, 96), - "family_number": 11, - "transfer_to": "Synth Pad", - }, - { - "name": "Synth Effects", - "program_range": range(96, 104), - "family_number": 12, - "transfer_to": "Synth Effects", - }, - { - "name": "Ethnic", - "program_range": range(104, 112), - "family_number": 13, - "transfer_to": "Ethnic", - }, - { - "name": "Percussive", - "program_range": range(112, 120), - "family_number": 14, - "transfer_to": "Percussive", - }, - { - "name": "Sound 
Effects", - "program_range": range(120, 128), - "family_number": 15, - "transfer_to": "Sound Effects", - }, -] - - -"Encoding and decoding constants" - -DRUMS_BEAT_QUANTIZATION = 4 # 8th notes per beat -NONE_DRUMS_BEAT_QUANTIZATION = 4 # 4th notes per beat -BEATS_PER_BAR = 4 # 4/4 time diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py deleted file mode 100644 index 37ba4c4420789c92dc0e2aaeb3d5b64859ec728c..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py +++ /dev/null @@ -1,45 +0,0 @@ -# # This file contains experimental modules - -import numpy as np -import torch -from torch import nn - -from facelib.detection.yolov5face.models.common import Conv - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class MixConv2d(nn.Module): - # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): - super().__init__() - groups = len(k) - if equal_ch: # equal c_ per group - i = torch.linspace(0, groups - 1e-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(groups)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * groups - a = np.eye(groups + 1, groups, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.LeakyReLU(0.1, inplace=True) - - def forward(self, x): - return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) diff --git a/spaces/Javtor/Biomedical-topic-categorization/README.md b/spaces/Javtor/Biomedical-topic-categorization/README.md deleted file mode 100644 index bceab279bc9267d01a3cd7d0c98a69951171b3ec..0000000000000000000000000000000000000000 --- a/spaces/Javtor/Biomedical-topic-categorization/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Biomedical Topic Categorization -emoji: 💻 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JunchuanYu/SegRS/segment_anything/modeling/prompt_encoder.py b/spaces/JunchuanYu/SegRS/segment_anything/modeling/prompt_encoder.py deleted file mode 100644 index c3143f4f8e02ddd7ca8587b40ff5d47c3a6b7ef3..0000000000000000000000000000000000000000 --- a/spaces/JunchuanYu/SegRS/segment_anything/modeling/prompt_encoder.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import numpy as np -import torch -from torch import nn - -from typing import Any, Optional, Tuple, Type - -from .common import LayerNorm2d - - -class PromptEncoder(nn.Module): - def __init__( - self, - embed_dim: int, - image_embedding_size: Tuple[int, int], - input_image_size: Tuple[int, int], - mask_in_chans: int, - activation: Type[nn.Module] = nn.GELU, - ) -> None: - """ - Encodes prompts for input to SAM's mask decoder. - - Arguments: - embed_dim (int): The prompts' embedding dimension - image_embedding_size (tuple(int, int)): The spatial size of the - image embedding, as (H, W). - input_image_size (int): The padded size of the image as input - to the image encoder, as (H, W). - mask_in_chans (int): The number of hidden channels used for - encoding input masks. - activation (nn.Module): The activation to use when encoding - input masks. - """ - super().__init__() - self.embed_dim = embed_dim - self.input_image_size = input_image_size - self.image_embedding_size = image_embedding_size - self.pe_layer = PositionEmbeddingRandom(embed_dim // 2) - - self.num_point_embeddings: int = 4 # pos/neg point + 2 box corners - point_embeddings = [nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings)] - self.point_embeddings = nn.ModuleList(point_embeddings) - self.not_a_point_embed = nn.Embedding(1, embed_dim) - - self.mask_input_size = (4 * image_embedding_size[0], 4 * image_embedding_size[1]) - self.mask_downscaling = nn.Sequential( - nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans // 4), - activation(), - nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans), - activation(), - nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1), - ) - self.no_mask_embed = nn.Embedding(1, embed_dim) - - def get_dense_pe(self) -> torch.Tensor: - """ - Returns the positional encoding used to encode point prompts, - applied to a dense set of points the shape of the image encoding. 
- - Returns: - torch.Tensor: Positional encoding with shape - 1x(embed_dim)x(embedding_h)x(embedding_w) - """ - return self.pe_layer(self.image_embedding_size).unsqueeze(0) - - def _embed_points( - self, - points: torch.Tensor, - labels: torch.Tensor, - pad: bool, - ) -> torch.Tensor: - """Embeds point prompts.""" - points = points + 0.5 # Shift to center of pixel - if pad: - padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device) - padding_label = -torch.ones((labels.shape[0], 1), device=labels.device) - points = torch.cat([points, padding_point], dim=1) - labels = torch.cat([labels, padding_label], dim=1) - point_embedding = self.pe_layer.forward_with_coords(points, self.input_image_size) - point_embedding[labels == -1] = 0.0 - point_embedding[labels == -1] += self.not_a_point_embed.weight - point_embedding[labels == 0] += self.point_embeddings[0].weight - point_embedding[labels == 1] += self.point_embeddings[1].weight - return point_embedding - - def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor: - """Embeds box prompts.""" - boxes = boxes + 0.5 # Shift to center of pixel - coords = boxes.reshape(-1, 2, 2) - corner_embedding = self.pe_layer.forward_with_coords(coords, self.input_image_size) - corner_embedding[:, 0, :] += self.point_embeddings[2].weight - corner_embedding[:, 1, :] += self.point_embeddings[3].weight - return corner_embedding - - def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor: - """Embeds mask inputs.""" - mask_embedding = self.mask_downscaling(masks) - return mask_embedding - - def _get_batch_size( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> int: - """ - Gets the batch size of the output given the batch size of the input prompts. - """ - if points is not None: - return points[0].shape[0] - elif boxes is not None: - return boxes.shape[0] - elif masks is not None: - return masks.shape[0] - else: - return 1 - - def _get_device(self) -> torch.device: - return self.point_embeddings[0].weight.device - - def forward( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Embeds different types of prompts, returning both sparse and dense - embeddings. - - Arguments: - points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates - and labels to embed. - boxes (torch.Tensor or none): boxes to embed - masks (torch.Tensor or none): masks to embed - - Returns: - torch.Tensor: sparse embeddings for the points and boxes, with shape - BxNx(embed_dim), where N is determined by the number of input points - and boxes. 
- torch.Tensor: dense embeddings for the masks, in the shape - Bx(embed_dim)x(embed_H)x(embed_W) - """ - bs = self._get_batch_size(points, boxes, masks) - sparse_embeddings = torch.empty((bs, 0, self.embed_dim), device=self._get_device()) - if points is not None: - coords, labels = points - point_embeddings = self._embed_points(coords, labels, pad=(boxes is None)) - sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1) - if boxes is not None: - box_embeddings = self._embed_boxes(boxes) - sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1) - - if masks is not None: - dense_embeddings = self._embed_masks(masks) - else: - dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand( - bs, -1, self.image_embedding_size[0], self.image_embedding_size[1] - ) - - return sparse_embeddings, dense_embeddings - - -class PositionEmbeddingRandom(nn.Module): - """ - Positional encoding using random spatial frequencies. - """ - - def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None: - super().__init__() - if scale is None or scale <= 0.0: - scale = 1.0 - self.register_buffer( - "positional_encoding_gaussian_matrix", - scale * torch.randn((2, num_pos_feats)), - ) - - def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor: - """Positionally encode points that are normalized to [0,1].""" - # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape - coords = 2 * coords - 1 - coords = coords @ self.positional_encoding_gaussian_matrix - coords = 2 * np.pi * coords - # outputs d_1 x ... x d_n x C shape - return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1) - - def forward(self, size: Tuple[int, int]) -> torch.Tensor: - """Generate positional encoding for a grid of the specified size.""" - h, w = size - device: Any = self.positional_encoding_gaussian_matrix.device - grid = torch.ones((h, w), device=device, dtype=torch.float32) - y_embed = grid.cumsum(dim=0) - 0.5 - x_embed = grid.cumsum(dim=1) - 0.5 - y_embed = y_embed / h - x_embed = x_embed / w - - pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1)) - return pe.permute(2, 0, 1) # C x H x W - - def forward_with_coords( - self, coords_input: torch.Tensor, image_size: Tuple[int, int] - ) -> torch.Tensor: - """Positionally encode points that are not normalized to [0,1].""" - coords = coords_input.clone() - coords[:, :, 0] = coords[:, :, 0] / image_size[1] - coords[:, :, 1] = coords[:, :, 1] / image_size[0] - return self._pe_encoding(coords.to(torch.float)) # B x N x C diff --git a/spaces/KenjieDec/RemBG/rembg/sessions/silueta.py b/spaces/KenjieDec/RemBG/rembg/sessions/silueta.py deleted file mode 100644 index 50094f15a1c4c24c1f03c8d79d9430dc29b485ad..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/RemBG/rembg/sessions/silueta.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -from typing import List - -import numpy as np -import pooch -from PIL import Image -from PIL.Image import Image as PILImage - -from .base import BaseSession - - -class SiluetaSession(BaseSession): - def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]: - ort_outs = self.inner_session.run( - None, - self.normalize( - img, (0.485, 0.456, 0.406), (0.229, 0.224, 0.225), (320, 320) - ), - ) - - pred = ort_outs[0][:, 0, :, :] - - ma = np.max(pred) - mi = np.min(pred) - - pred = (pred - mi) / (ma - mi) - pred = np.squeeze(pred) - - mask = Image.fromarray((pred * 255).astype("uint8"), mode="L") - mask = mask.resize(img.size, Image.LANCZOS) - - 
return [mask] - - @classmethod - def download_models(cls, *args, **kwargs): - fname = f"{cls.name()}.onnx" - pooch.retrieve( - "https://github.com/danielgatis/rembg/releases/download/v0.0.0/silueta.onnx", - None - if cls.checksum_disabled(*args, **kwargs) - else "md5:55e59e0d8062d2f5d013f4725ee84782", - fname=fname, - path=cls.u2net_home(*args, **kwargs), - progressbar=True, - ) - - return os.path.join(cls.u2net_home(), fname) - - @classmethod - def name(cls, *args, **kwargs): - return "silueta" diff --git a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/langinfo.py b/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/langinfo.py deleted file mode 100644 index efb7e372feeb67d7106eb5c443de2e14053fd204..0000000000000000000000000000000000000000 --- a/spaces/Kimata/Sanskrit-TTS/indic_nlp_library/indicnlp/langinfo.py +++ /dev/null @@ -1,488 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -## language codes -LC_TA='ta' - -SCRIPT_RANGES={ - 'pa':[0x0a00,0x0a7f] , - 'gu':[0x0a80,0x0aff] , - 'or':[0x0b00,0x0b7f] , - 'ta':[0x0b80,0x0bff] , - 'te':[0x0c00,0x0c7f] , - 'kn':[0x0c80,0x0cff] , - 'ml':[0x0d00,0x0d7f] , - 'si':[0x0d80,0x0dff] , - 'hi':[0x0900,0x097f] , - 'mr':[0x0900,0x097f] , - 'kK':[0x0900,0x097f] , - 'sa':[0x0900,0x097f] , - 'ne':[0x0900,0x097f] , - 'sd':[0x0900,0x097f] , - 'bn':[0x0980,0x09ff] , - 'as':[0x0980,0x09ff] , - } - -DRAVIDIAN_LANGUAGES=['ta', 'te', 'kn', 'ml',] -IE_LANGUAGES=['hi', 'mr', 'kK', 'sa', 'ne', 'sd', 'bn', 'as', 'pa', 'gu', 'or', 'si', ] -DANDA_DELIM_LANGUAGES=['as','bn','hi','ne','or','pa','sa','sd'] - -URDU_RANGES=[ - [0x0600,0x06ff], - [0x0750,0x077f], - [0xfb50,0xfdff], - [0xfe70,0xfeff], - ] - -COORDINATED_RANGE_START_INCLUSIVE=0 -COORDINATED_RANGE_END_INCLUSIVE=0x6f - -NUMERIC_OFFSET_START=0x66 -NUMERIC_OFFSET_END=0x6f - -HALANTA_OFFSET=0x4d -AUM_OFFSET=0x50 -NUKTA_OFFSET=0x3c - -RUPEE_SIGN=0x20b9 - -DANDA=0x0964 -DOUBLE_DANDA=0x0965 - -#TODO: add missing fricatives and approximants -VELAR_RANGE=[0x15,0x19] -PALATAL_RANGE=[0x1a,0x1e] -RETROFLEX_RANGE=[0x1f,0x23] -DENTAL_RANGE=[0x24,0x29] -LABIAL_RANGE=[0x2a,0x2e] - -# verify -VOICED_LIST=[0x17,0x18,0x1c,0x1d,0x21,0x22,0x26,0x27,0x2c,0x2d] -UNVOICED_LIST=[0x15,0x16,0x1a,0x1b,0x1f,0x20,0x24,0x25,0x2a,0x2b] #TODO: add sibilants/sonorants -ASPIRATED_LIST=[0x16,0x18,0x1b,0x1d,0x20,0x22,0x25,0x27,0x2b,0x2d] -UNASPIRATED_LIST=[0x15,0x17,0x1a,0x1c,0x1f,0x21,0x24,0x26,0x2a,0x2c] -NASAL_LIST=[0x19,0x1e,0x23,0x28,0x29,0x2d] -FRICATIVE_LIST=[0x36,0x37,0x38] -APPROXIMANT_LIST=[0x2f,0x30,0x31,0x32,0x33,0x34,0x35] - -#TODO: ha has to be properly categorized - -def is_danda_delim(lang): - """ - Returns True if danda/double danda is a possible delimiter for the language - """ - return lang in DANDA_DELIM_LANGUAGES - -def get_offset(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - return ord(c)-SCRIPT_RANGES[lang][0] - -def offset_to_char(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - return chr(c+SCRIPT_RANGES[lang][0]) - -def in_coordinated_range(c_offset): - """ - Applicable to Brahmi derived Indic scripts - """ - return (c_offset>=COORDINATED_RANGE_START_INCLUSIVE and c_offset<=COORDINATED_RANGE_END_INCLUSIVE) - -def is_indiclang_char(c,lang): - """ - Applicable to Brahmi derived Indic scripts - """ - o=get_offset(c,lang) - return (o>=0 and o<=0x7f) or ord(c)==DANDA or ord(c)==DOUBLE_DANDA - -# def 
is_vowel(c,lang): -# """ -# Is the character a vowel -# """ -# o=get_offset(c,lang) -# return (o>=0x04 and o<=0x14) - -# def is_vowel_sign(c,lang): -# """ -# Is the character a vowel sign (maatraa) -# """ -# o=get_offset(c,lang) -# return (o>=0x3e and o<=0x4c) - -# def is_halanta(c,lang): -# """ -# Is the character the halanta character -# """ -# o=get_offset(c,lang) -# return (o==HALANTA_OFFSET) - -# def is_nukta(c,lang): -# """ -# Is the character the halanta character -# """ -# o=get_offset(c,lang) -# return (o==NUKTA_OFFSET) - -# def is_aum(c,lang): -# """ -# Is the character a vowel sign (maatraa) -# """ -# o=get_offset(c,lang) -# return (o==AUM_OFFSET) - -# def is_consonant(c,lang): -# """ -# Is the character a consonant -# """ -# o=get_offset(c,lang) -# return (o>=0x15 and o<=0x39) - -# def is_velar(c,lang): -# """ -# Is the character a velar -# """ -# o=get_offset(c,lang) -# return (o>=VELAR_RANGE[0] and o<=VELAR_RANGE[1]) - -# def is_palatal(c,lang): -# """ -# Is the character a palatal -# """ -# o=get_offset(c,lang) -# return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1]) - -# def is_retroflex(c,lang): -# """ -# Is the character a retroflex -# """ -# o=get_offset(c,lang) -# return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1]) - -# def is_dental(c,lang): -# """ -# Is the character a dental -# """ -# o=get_offset(c,lang) -# return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1]) - -# def is_labial(c,lang): -# """ -# Is the character a labial -# """ -# o=get_offset(c,lang) -# return (o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1]) - -# def is_voiced(c,lang): -# """ -# Is the character a voiced consonant -# """ -# o=get_offset(c,lang) -# return o in VOICED_LIST - -# def is_unvoiced(c,lang): -# """ -# Is the character a unvoiced consonant -# """ -# o=get_offset(c,lang) -# return o in UNVOICED_LIST - -# def is_aspirated(c,lang): -# """ -# Is the character a aspirated consonant -# """ -# o=get_offset(c,lang) -# return o in ASPIRATED_LIST - -# def is_unaspirated(c,lang): -# """ -# Is the character a unaspirated consonant -# """ -# o=get_offset(c,lang) -# return o in UNASPIRATED_LIST - -# def is_nasal(c,lang): -# """ -# Is the character a nasal consonant -# """ -# o=get_offset(c,lang) -# return o in NASAL_LIST - -# def is_fricative(c,lang): -# """ -# Is the character a fricative consonant -# """ -# o=get_offset(c,lang) -# return o in FRICATIVE_LIST - -# def is_approximant(c,lang): -# """ -# Is the character an approximant consonant -# """ -# o=get_offset(c,lang) -# return o in APPROXIMANT_LIST - -# def is_number(c,lang): -# """ -# Is the character a number -# """ -# o=get_offset(c,lang) -# return (o>=0x66 and o<=0x6f) - - -def is_vowel(c,lang): - """ - Is the character a vowel - """ - o=get_offset(c,lang) - return (o>=0x04 and o<=0x14) - -def is_vowel_sign(c,lang): - """ - Is the character a vowel sign (maatraa) - """ - o=get_offset(c,lang) - return (o>=0x3e and o<=0x4c) - -def is_halanta(c,lang): - """ - Is the character the halanta character - """ - o=get_offset(c,lang) - return (o==HALANTA_OFFSET) - -def is_nukta(c,lang): - """ - Is the character the halanta character - """ - o=get_offset(c,lang) - return (o==NUKTA_OFFSET) - -def is_aum(c,lang): - """ - Is the character a vowel sign (maatraa) - """ - o=get_offset(c,lang) - return (o==AUM_OFFSET) - -def is_consonant(c,lang): - """ - Is the character a consonant - """ - o=get_offset(c,lang) - return (o>=0x15 and o<=0x39) - -def is_velar(c,lang): - """ - Is the character a velar - """ - o=get_offset(c,lang) - return (o>=VELAR_RANGE[0] and 
o<=VELAR_RANGE[1]) - -def is_palatal(c,lang): - """ - Is the character a palatal - """ - o=get_offset(c,lang) - return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1]) - -def is_retroflex(c,lang): - """ - Is the character a retroflex - """ - o=get_offset(c,lang) - return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1]) - -def is_dental(c,lang): - """ - Is the character a dental - """ - o=get_offset(c,lang) - return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1]) - -def is_labial(c,lang): - """ - Is the character a labial - """ - o=get_offset(c,lang) - return (o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1]) - -def is_voiced(c,lang): - """ - Is the character a voiced consonant - """ - o=get_offset(c,lang) - return o in VOICED_LIST - -def is_unvoiced(c,lang): - """ - Is the character a unvoiced consonant - """ - o=get_offset(c,lang) - return o in UNVOICED_LIST - -def is_aspirated(c,lang): - """ - Is the character a aspirated consonant - """ - o=get_offset(c,lang) - return o in ASPIRATED_LIST - -def is_unaspirated(c,lang): - """ - Is the character a unaspirated consonant - """ - o=get_offset(c,lang) - return o in UNASPIRATED_LIST - -def is_nasal(c,lang): - """ - Is the character a nasal consonant - """ - o=get_offset(c,lang) - return o in NASAL_LIST - -def is_fricative(c,lang): - """ - Is the character a fricative consonant - """ - o=get_offset(c,lang) - return o in FRICATIVE_LIST - -def is_approximant(c,lang): - """ - Is the character an approximant consonant - """ - o=get_offset(c,lang) - return o in APPROXIMANT_LIST - -def is_number(c,lang): - """ - Is the character a number - """ - o=get_offset(c,lang) - return (o>=0x66 and o<=0x6f) - - -################################################## - -def is_vowel_offset(c_offset): - """ - Is the offset a vowel - """ - return (c_offset>=0x04 and c_offset<=0x14) - -def is_vowel_sign_offset(c_offset): - """ - Is the offset a vowel sign (maatraa) - """ - return (c_offset>=0x3e and c_offset<=0x4c) - -def is_halanta_offset(c_offset): - """ - Is the offset the halanta offset - """ - return (c_offset==HALANTA_OFFSET) - -def is_nukta_offset(c_offset): - """ - Is the offset the halanta offset - """ - return (c_offset==NUKTA_OFFSET) - -def is_aum_offset(c_offset): - """ - Is the offset a vowel sign (maatraa) - """ - return (c_offset==AUM_OFFSET) - -def is_consonant_offset(c_offset): - """ - Is the offset a consonant - """ - return (c_offset>=0x15 and c_offset<=0x39) - -def is_velar_offset(c_offset): - """ - Is the offset a velar - """ - return (c_offset>=VELAR_RANGE[0] and c_offset<=VELAR_RANGE[1]) - -def is_palatal_offset(c_offset): - """ - Is the offset a palatal - """ - return (c_offset>=PALATAL_RANGE[0] and c_offset<=PALATAL_RANGE[1]) - -def is_retroflex_offset(c_offset): - """ - Is the offset a retroflex - """ - return (c_offset>=RETROFLEX_RANGE[0] and c_offset<=RETROFLEX_RANGE[1]) - -def is_dental_offset(c_offset): - """ - Is the offset a dental - """ - return (c_offset>=DENTAL_RANGE[0] and c_offset<=DENTAL_RANGE[1]) - -def is_labial_offset(c_offset): - """ - Is the offset a labial - """ - return (c_offset>=LABIAL_RANGE[0] and c_offset<=LABIAL_RANGE[1]) - -def is_voiced_offset(c_offset): - """ - Is the offset a voiced consonant - """ - return c_offset in VOICED_LIST - -def is_unvoiced_offset(c_offset): - """ - Is the offset a unvoiced consonant - """ - return c_offset in UNVOICED_LIST - -def is_aspirated_offset(c_offset): - """ - Is the offset a aspirated consonant - """ - return c_offset in ASPIRATED_LIST - -def is_unaspirated_offset(c_offset): - """ - Is the 
offset a unaspirated consonant - """ - return c_offset in UNASPIRATED_LIST - -def is_nasal_offset(c_offset): - """ - Is the offset a nasal consonant - """ - return c_offset in NASAL_LIST - -def is_fricative_offset(c_offset): - """ - Is the offset a fricative consonant - """ - return c_offset in FRICATIVE_LIST - -def is_approximant_offset(c_offset): - """ - Is the offset an approximant consonant - """ - return c_offset in APPROXIMANT_LIST - -def is_number_offset(c_offset): - """ - Is the offset a number - """ - return (c_offset>=0x66 and c_offset<=0x6f) diff --git a/spaces/KyanChen/FunSR/models/baselines/basis.py b/spaces/KyanChen/FunSR/models/baselines/basis.py deleted file mode 100644 index 95ebfd2feaed08bd2df2ac9aa0d47d8bf57a76c1..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/models/baselines/basis.py +++ /dev/null @@ -1,79 +0,0 @@ -import math -from argparse import Namespace - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from models import register - - -class gen_basis(nn.Module): - def __init__(self, args): - super(gen_basis, self).__init__() - self.basis_num = args.basis_num - self.hidden = args.hidden - self.state = args.state - self.path=args.path - - def init_basis_bias(self): - self.w0 = nn.Parameter(torch.Tensor(self.basis_num,self.hidden*580), requires_grad=True) - nn.init.kaiming_uniform_(self.w0, a=math.sqrt(5)) - self.w1 = nn.Parameter(torch.Tensor(self.basis_num,self.hidden*self.hidden), requires_grad=True) - nn.init.kaiming_uniform_(self.w1, a=math.sqrt(5)) - self.w2 = nn.Parameter(torch.Tensor(self.basis_num,self.hidden*self.hidden), requires_grad=True) - nn.init.kaiming_uniform_(self.w2, a=math.sqrt(5)) - self.w3 = nn.Parameter(torch.Tensor(self.basis_num,self.hidden*self.hidden), requires_grad=True) - nn.init.kaiming_uniform_(self.w3, a=math.sqrt(5)) - self.w4 = nn.Parameter(torch.Tensor(self.basis_num,3*self.hidden), requires_grad=True) - nn.init.kaiming_uniform_(self.w4, a=math.sqrt(5)) - basis = [self.w0, self.w1, self.w2, self.w3, self.w4] - self.bias1 = nn.Parameter(torch.Tensor(self.basis_num,self.hidden), requires_grad=True) - self.bias2 = nn.Parameter(torch.Tensor(self.basis_num,self.hidden), requires_grad=True) - self.bias3 = nn.Parameter(torch.Tensor(self.basis_num,self.hidden), requires_grad=True) - self.bias4 = nn.Parameter(torch.Tensor(self.basis_num,self.hidden), requires_grad=True) - self.bias5 = nn.Parameter(torch.Tensor(self.basis_num,3), requires_grad=True) - bias = [self.bias1,self.bias2,self.bias3,self.bias4,self.bias5] - - for i in range(len(bias)): - fan_in, _ = nn.init._calculate_fan_in_and_fan_out(basis[i]) - bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0 - nn.init.uniform_(bias[i], -bound, bound) - - - - return basis,bias - - - def load_basis_for_test_kaiming(self,path): - model_spec = torch.load(path)['model'] - w0 = model_spec['sd']['basis.w0'] - w1 = model_spec['sd']['basis.w1'] - w2 = model_spec['sd']['basis.w2'] - w3 = model_spec['sd']['basis.w3'] - w4 = model_spec['sd']['basis.w4'] - b0 = model_spec['sd']['basis.bias1'] - b1 = model_spec['sd']['basis.bias2'] - b2 = model_spec['sd']['basis.bias3'] - b3 = model_spec['sd']['basis.bias4'] - b4 = model_spec['sd']['basis.bias5'] - torch.cuda.empty_cache() - return [w0,w1,w2,w3,w4],[b0,b1,b2,b3,b4] - - def forward(self): - if self.state=='train': - print('init_basis_use_kaiming') - res=self.init_basis_bias() - else: - print('load_basis_from_model') - res=self.load_basis_for_test_kaiming(self.path) - return res - -@register('basis') -def 
make_basis(basis_num=10,hidden=16,state=None,path=None): - args = Namespace() - args.basis_num = basis_num - args.hidden = hidden - args.state = state - args.path = path - return gen_basis(args) diff --git a/spaces/KyanChen/FunSR/tools/paper_vis_tools/rearange_x4_PSNR_SSIM.py b/spaces/KyanChen/FunSR/tools/paper_vis_tools/rearange_x4_PSNR_SSIM.py deleted file mode 100644 index f2f6ec0b60dd323327a440cd8f640b6efa017173..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/tools/paper_vis_tools/rearange_x4_PSNR_SSIM.py +++ /dev/null @@ -1,152 +0,0 @@ -all = '''1 & 27.96/0.7853 & 28.24/0.7950 & 28.31/0.7961 & 28.25/0.7947 & 28.42/0.7987 & 28.32/0.7960 -\\ -2 & 35.39/0.8411 & 35.43/0.8424 & 35.43/0.8429 & 35.41/0.8424 & 35.45/0.8434 & 35.43/0.8426 -\\ -3 & 34.87/0.8769 & 35.12/0.8815 & 35.15/0.8818 & 35.11/0.8812 & 35.22/0.8831 & 35.18/0.8825 -\\ -4 & 32.33/0.8027 & 32.43/0.8048 & 32.43/0.8054 & 32.42/0.8051 & 32.47/0.8065 & 32.43/0.8056 -\\ -5 & 30.93/0.8502 & 31.46/0.8619 & 31.49/0.8627 & 31.44/0.8615 & 31.63/0.8658 & 31.49/0.8629 -\\ -6 & 28.98/0.8123 & 29.59/0.8283 & 29.64/0.8314 & 29.56/0.8281 & 29.78/0.8354 & 29.63/0.8303 -\\ -7 & 23.31/0.6780 & 23.60/0.6920 & 23.62/0.6950 & 23.60/0.6923 & 23.74/0.7005 & 23.66/0.6966 -\\ -8 & 25.97/0.7552 & 26.24/0.7662 & 26.26/0.7672 & 26.21/0.7653 & 26.32/0.7695 & 26.26/0.7672 -\\ -9 & 23.06/0.6636 & 23.27/0.6754 & 23.31/0.6784 & 23.26/0.6757 & 23.36/0.6822 & 23.30/0.6789 -\\ -10 & 37.39/0.8860 & 37.48/0.8877 & 37.51/0.8884 & 37.50/0.8883 & 37.52/0.8885 & 37.50/0.8882 -\\ -11 & 34.65/0.8620 & 34.90/0.8673 & 34.92/0.8676 & 34.90/0.8670 & 34.97/0.8688 & 34.93/0.8679 -\\ -12 & 27.50/0.6394 & 27.61/0.6457 & 27.62/0.6482 & 27.60/0.6452 & 27.66/0.6519 & 27.64/0.6495 -\\ -13 & 26.16/0.7576 & 26.53/0.7741 & 26.55/0.7762 & 26.50/0.7731 & 26.65/0.7807 & 26.56/0.7763 -\\ -14 & 33.27/0.7518 & 33.35/0.7539 & 33.38/0.7551 & 33.33/0.7546 & 33.35/0.7555 & 33.36/0.7545 -\\ -15 & 25.48/0.6900 & 25.80/0.7014 & 25.84/0.7043 & 25.80/0.7025 & 25.90/0.7069 & 25.83/0.7055 -\\ -16 & 27.97/0.7213 & 28.06/0.7250 & 28.07/0.7255 & 28.06/0.7251 & 28.10/0.7272 & 28.08/0.7258 -\\ -17 & 28.74/0.7631 & 28.89/0.7689 & 28.91/0.7699 & 28.88/0.7689 & 28.95/0.7719 & 28.91/0.7702 -\\ -18 & 23.36/0.7927 & 24.24/0.8179 & 24.31/0.8203 & 24.20/0.8171 & 24.62/0.8280 & 24.36/0.8209 -\\ -19 & 36.85/0.8920 & 37.28/0.8992 & 37.30/0.8995 & 37.23/0.8987 & 37.43/0.9015 & 37.25/0.8989 -\\ -20 & 38.02/0.9129 & 38.18/0.9146 & 38.17/0.9147 & 38.15/0.9147 & 38.26/0.9156 & 38.17/0.9149 -\\ -21 & 26.89/0.8433 & 27.38/0.8545 & 27.39/0.8554 & 27.33/0.8541 & 27.48/0.8579 & 27.39/0.8562 -\\ -22 & 27.68/0.8091 & 28.19/0.8263 & 28.21/0.8265 & 28.13/0.8239 & 28.31/0.8300 & 28.20/0.8256 -\\ -23 & 26.26/0.7492 & 26.51/0.7588 & 26.55/0.7606 & 26.50/0.7589 & 26.64/0.7645 & 26.54/0.7611 -\\ -24 & 30.09/0.7624 & 30.22/0.7659 & 30.23/0.7670 & 30.22/0.7665 & 30.26/0.7680 & 30.23/0.7666 -\\ -25 & 25.07/0.7363 & 25.35/0.7471 & 25.41/0.7499 & 25.36/0.7475 & 25.47/0.7525 & 25.41/0.7504 -\\ -26 & 24.39/0.6501 & 24.61/0.6580 & 24.64/0.6604 & 24.61/0.6588 & 24.72/0.6637 & 24.66/0.6616 -\\ -27 & 29.42/0.8154 & 29.74/0.8258 & 29.81/0.8276 & 29.72/0.8255 & 29.86/0.8306 & 29.80/0.8272 -\\ -28 & 33.46/0.8813 & 33.99/0.8921 & 34.09/0.8930 & 33.94/0.8905 & 34.20/0.8944 & 33.99/0.8898 -\\ -29 & 24.60/0.7345 & 25.04/0.7488 & 25.08/0.7505 & 25.02/0.7481 & 25.16/0.7544 & 25.06/0.7507 -\\ -30 & 27.60/0.7737 & 28.12/0.7944 & 28.17/0.7966 & 28.10/0.7934 & 28.26/0.8005 & 28.17/0.7964 -\\ -''' -all = all.split('\n') - -results = 
'''psnr -Airport: 28.57 -BareLand: 35.46 -BaseballField: 35.29 -Beach: 32.51 -Bridge: 31.74 -Center: 29.99 -Church: 23.83 -Commercial: 26.39 -DenseResidential: 23.50 -Desert: 37.67 -Farmland: 35.00 -Forest: 27.67 -Industrial: 26.76 -Meadow: 33.40 -MediumResidential: 25.95 -Mountain: 28.14 -Park: 28.98 -Parking: 24.89 -Playground: 37.54 -Pond: 38.28 -Port: 27.63 -RailwayStation: 28.43 -Resort: 26.67 -River: 30.28 -School: 25.57 -SparseResidential: 24.75 -Square: 29.98 -Stadium: 34.48 -StorageTanks: 25.29 -Viaduct: 28.34 -all: 29.78 -ssim -Airport: 0.8013 -BareLand: 0.8439 -BaseballField: 0.8848 -Beach: 0.8085 -Bridge: 0.8668 -Center: 0.8403 -Church: 0.7053 -Commercial: 0.7717 -DenseResidential: 0.6879 -Desert: 0.8926 -Farmland: 0.8697 -Forest: 0.6541 -Industrial: 0.7845 -Meadow: 0.7578 -MediumResidential: 0.7100 -Mountain: 0.7296 -Park: 0.7738 -Parking: 0.8344 -Playground: 0.9032 -Pond: 0.9158 -Port: 0.8612 -RailwayStation: 0.8327 -Resort: 0.7658 -River: 0.7694 -School: 0.7561 -SparseResidential: 0.6652 -Square: 0.8330 -Stadium: 0.8983 -StorageTanks: 0.7574 -Viaduct: 0.8030 -all: 0.8013''' -results = results.split('\n') -n_len = len(results) -total_psnr = 0 -total_ssim = 0 -all_data = '' -idx_all = 0 -for i in range(1, n_len//2): - name = results[i].split(':')[0] - name_tmp = results[i+n_len//2].split(':')[0] - assert name == name_tmp - psnr_val = results[i].split(':')[-1].strip() - psnr_val = float(psnr_val) - ssim_val = results[i+n_len//2].split(':')[-1].strip() - ssim_val = float(ssim_val) - if name == 'all': - all_data = name+':'+str(psnr_val)+'/'+str(ssim_val) - continue - total_psnr += psnr_val - total_ssim += ssim_val - print(all[idx_all] +' & ' +f'{psnr_val:.2f}'+'/'+f'{ssim_val:.4f}') - idx_all += 2 - print('\\\\') -print(all_data) -print(total_psnr/(n_len//2-2)) -print(total_ssim/(n_len//2-2)) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/single_stage_instance_seg.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/single_stage_instance_seg.py deleted file mode 100644 index acb5f0d2f8e4636b86b4b66cbf5c4916d0dae16f..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/single_stage_instance_seg.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
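# A schematic sketch of how this detector class is typically configured (the
# specific head types, channel numbers and thresholds are assumed examples,
# abbreviated and not taken from this file): backbone, optional neck,
# optional bbox_head and a required mask_head are plain config dicts resolved
# through MODELS.build, e.g.
#   model = dict(
#       type='SingleStageInstanceSegmentor',
#       backbone=dict(type='ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3)),
#       neck=dict(type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5),
#       mask_head=dict(type='SOLOV2Head', num_classes=80, in_channels=256, ...),
#       test_cfg=dict(score_thr=0.1, mask_thr=0.5))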
-import copy -from typing import Tuple - -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.structures import OptSampleList, SampleList -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .base import BaseDetector - -INF = 1e8 - - -@MODELS.register_module() -class SingleStageInstanceSegmentor(BaseDetector): - """Base class for single-stage instance segmentors.""" - - def __init__(self, - backbone: ConfigType, - neck: OptConfigType = None, - bbox_head: OptConfigType = None, - mask_head: OptConfigType = None, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - super().__init__( - data_preprocessor=data_preprocessor, init_cfg=init_cfg) - self.backbone = MODELS.build(backbone) - if neck is not None: - self.neck = MODELS.build(neck) - else: - self.neck = None - if bbox_head is not None: - bbox_head.update(train_cfg=copy.deepcopy(train_cfg)) - bbox_head.update(test_cfg=copy.deepcopy(test_cfg)) - self.bbox_head = MODELS.build(bbox_head) - else: - self.bbox_head = None - - assert mask_head, f'`mask_head` must ' \ - f'be implemented in {self.__class__.__name__}' - mask_head.update(train_cfg=copy.deepcopy(train_cfg)) - mask_head.update(test_cfg=copy.deepcopy(test_cfg)) - self.mask_head = MODELS.build(mask_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, batch_inputs: Tensor) -> Tuple[Tensor]: - """Extract features. - - Args: - batch_inputs (Tensor): Image tensor with shape (N, C, H ,W). - - Returns: - tuple[Tensor]: Multi-level features that may have different - resolutions. - """ - x = self.backbone(batch_inputs) - if self.with_neck: - x = self.neck(x) - return x - - def _forward(self, - batch_inputs: Tensor, - batch_data_samples: OptSampleList = None, - **kwargs) -> tuple: - """Network forward process. Usually includes backbone, neck and head - forward without any post-processing. - - Args: - batch_inputs (Tensor): Inputs with shape (N, C, H, W). - - Returns: - tuple: A tuple of features from ``bbox_head`` forward. - """ - outs = () - # backbone - x = self.extract_feat(batch_inputs) - # bbox_head - positive_infos = None - if self.with_bbox: - assert batch_data_samples is not None - bbox_outs = self.bbox_head.forward(x) - outs = outs + (bbox_outs, ) - # It is necessary to use `bbox_head.loss` to update - # `_raw_positive_infos` which will be used in `get_positive_infos` - # positive_infos will be used in the following mask head. - _ = self.bbox_head.loss(x, batch_data_samples, **kwargs) - positive_infos = self.bbox_head.get_positive_infos() - # mask_head - if positive_infos is None: - mask_outs = self.mask_head.forward(x) - else: - mask_outs = self.mask_head.forward(x, positive_infos) - outs = outs + (mask_outs, ) - return outs - - def loss(self, batch_inputs: Tensor, batch_data_samples: SampleList, - **kwargs) -> dict: - """ - Args: - batch_inputs (Tensor): Input images of shape (N, C, H, W). - These should usually be mean centered and std scaled. - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. - - Returns: - dict: A dictionary of loss components. 
- """ - x = self.extract_feat(batch_inputs) - losses = dict() - - positive_infos = None - # CondInst and YOLACT have bbox_head - if self.with_bbox: - bbox_losses = self.bbox_head.loss(x, batch_data_samples, **kwargs) - losses.update(bbox_losses) - # get positive information from bbox head, which will be used - # in the following mask head. - positive_infos = self.bbox_head.get_positive_infos() - - mask_loss = self.mask_head.loss( - x, batch_data_samples, positive_infos=positive_infos, **kwargs) - # avoid loss override - assert not set(mask_loss.keys()) & set(losses.keys()) - - losses.update(mask_loss) - return losses - - def predict(self, - batch_inputs: Tensor, - batch_data_samples: SampleList, - rescale: bool = True, - **kwargs) -> SampleList: - """Perform forward propagation of the mask head and predict mask - results on the features of the upstream network. - - Args: - batch_inputs (Tensor): Inputs with shape (N, C, H, W). - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - rescale (bool): Whether to rescale the results. - Defaults to False. - - Returns: - list[:obj:`DetDataSample`]: Detection results of the - input images. Each DetDataSample usually contain - 'pred_instances'. And the ``pred_instances`` usually - contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - - masks (Tensor): Has a shape (num_instances, H, W). - """ - x = self.extract_feat(batch_inputs) - if self.with_bbox: - # the bbox branch does not need to be scaled to the original - # image scale, because the mask branch will scale both bbox - # and mask at the same time. - bbox_rescale = rescale if not self.with_mask else False - results_list = self.bbox_head.predict( - x, batch_data_samples, rescale=bbox_rescale) - else: - results_list = None - - results_list = self.mask_head.predict( - x, batch_data_samples, rescale=rescale, results_list=results_list) - - batch_data_samples = self.add_pred_to_datasample( - batch_data_samples, results_list) - return batch_data_samples diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_new.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_new.py deleted file mode 100644 index bfaf72e48b31cc1130f2892b0973c9aa06f195a3..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/nets_new.py +++ /dev/null @@ -1,132 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from . 
import layers_new - - -class BaseNet(nn.Module): - def __init__( - self, nin, nout, nin_lstm, nout_lstm, dilations=((4, 2), (8, 4), (12, 6)) - ): - super(BaseNet, self).__init__() - self.enc1 = layers_new.Conv2DBNActiv(nin, nout, 3, 1, 1) - self.enc2 = layers_new.Encoder(nout, nout * 2, 3, 2, 1) - self.enc3 = layers_new.Encoder(nout * 2, nout * 4, 3, 2, 1) - self.enc4 = layers_new.Encoder(nout * 4, nout * 6, 3, 2, 1) - self.enc5 = layers_new.Encoder(nout * 6, nout * 8, 3, 2, 1) - - self.aspp = layers_new.ASPPModule(nout * 8, nout * 8, dilations, dropout=True) - - self.dec4 = layers_new.Decoder(nout * (6 + 8), nout * 6, 3, 1, 1) - self.dec3 = layers_new.Decoder(nout * (4 + 6), nout * 4, 3, 1, 1) - self.dec2 = layers_new.Decoder(nout * (2 + 4), nout * 2, 3, 1, 1) - self.lstm_dec2 = layers_new.LSTMModule(nout * 2, nin_lstm, nout_lstm) - self.dec1 = layers_new.Decoder(nout * (1 + 2) + 1, nout * 1, 3, 1, 1) - - def __call__(self, x): - e1 = self.enc1(x) - e2 = self.enc2(e1) - e3 = self.enc3(e2) - e4 = self.enc4(e3) - e5 = self.enc5(e4) - - h = self.aspp(e5) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = torch.cat([h, self.lstm_dec2(h)], dim=1) - h = self.dec1(h, e1) - - return h - - -class CascadedNet(nn.Module): - def __init__(self, n_fft, nout=32, nout_lstm=128): - super(CascadedNet, self).__init__() - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - self.nin_lstm = self.max_bin // 2 - self.offset = 64 - - self.stg1_low_band_net = nn.Sequential( - BaseNet(2, nout // 2, self.nin_lstm // 2, nout_lstm), - layers_new.Conv2DBNActiv(nout // 2, nout // 4, 1, 1, 0), - ) - - self.stg1_high_band_net = BaseNet( - 2, nout // 4, self.nin_lstm // 2, nout_lstm // 2 - ) - - self.stg2_low_band_net = nn.Sequential( - BaseNet(nout // 4 + 2, nout, self.nin_lstm // 2, nout_lstm), - layers_new.Conv2DBNActiv(nout, nout // 2, 1, 1, 0), - ) - self.stg2_high_band_net = BaseNet( - nout // 4 + 2, nout // 2, self.nin_lstm // 2, nout_lstm // 2 - ) - - self.stg3_full_band_net = BaseNet( - 3 * nout // 4 + 2, nout, self.nin_lstm, nout_lstm - ) - - self.out = nn.Conv2d(nout, 2, 1, bias=False) - self.aux_out = nn.Conv2d(3 * nout // 4, 2, 1, bias=False) - - def forward(self, x): - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - l1_in = x[:, :, :bandw] - h1_in = x[:, :, bandw:] - l1 = self.stg1_low_band_net(l1_in) - h1 = self.stg1_high_band_net(h1_in) - aux1 = torch.cat([l1, h1], dim=2) - - l2_in = torch.cat([l1_in, l1], dim=1) - h2_in = torch.cat([h1_in, h1], dim=1) - l2 = self.stg2_low_band_net(l2_in) - h2 = self.stg2_high_band_net(h2_in) - aux2 = torch.cat([l2, h2], dim=2) - - f3_in = torch.cat([x, aux1, aux2], dim=1) - f3 = self.stg3_full_band_net(f3_in) - - mask = torch.sigmoid(self.out(f3)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux = torch.cat([aux1, aux2], dim=1) - aux = torch.sigmoid(self.aux_out(aux)) - aux = F.pad( - input=aux, - pad=(0, 0, 0, self.output_bin - aux.size()[2]), - mode="replicate", - ) - return mask, aux - else: - return mask - - def predict_mask(self, x): - mask = self.forward(x) - - if self.offset > 0: - mask = mask[:, :, :, self.offset : -self.offset] - assert mask.size()[3] > 0 - - return mask - - def predict(self, x, aggressiveness=None): - mask = self.forward(x) - pred_mag = x * mask - - if self.offset > 0: - pred_mag = pred_mag[:, :, :, self.offset : -self.offset] - assert pred_mag.size()[3] > 0 - - return pred_mag diff --git 
a/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/README.md b/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/README.md deleted file mode 100644 index 034bbddd35159e85625f10ca3ea19555ed2020eb..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/RVC-Inference-webui-grado-colab-huggingafce/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Advanced RVC Inference -emoji: ⚡ -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LegacyLeague/Legacy_League/app.py b/spaces/LegacyLeague/Legacy_League/app.py deleted file mode 100644 index 4529dd6c56ab0bbe0ad519aa62bbd11b2ede7ac1..0000000000000000000000000000000000000000 --- a/spaces/LegacyLeague/Legacy_League/app.py +++ /dev/null @@ -1,77 +0,0 @@ -import gradio as gr -import fastai -from fastai.vision import * -from fastai.utils.mem import * -from fastai.vision import open_image, load_learner, image, torch -import numpy as np4 -import urllib.request -import PIL.Image -from io import BytesIO -import torchvision.transforms as T -from PIL import Image -import requests -from io import BytesIO -import fastai -from fastai.vision import * -from fastai.utils.mem import * -from fastai.vision import open_image, load_learner, image, torch -import numpy as np -import urllib.request -from urllib.request import urlretrieve -import PIL.Image -from io import BytesIO -import torchvision.transforms as T -import torchvision.transforms as tfms - -class FeatureLoss(nn.Module): - def __init__(self, m_feat, layer_ids, layer_wgts): - super().__init__() - self.m_feat = m_feat - self.loss_features = [self.m_feat[i] for i in layer_ids] - self.hooks = hook_outputs(self.loss_features, detach=False) - self.wgts = layer_wgts - self.metric_names = ['pixel',] + [f'feat_{i}' for i in range(len(layer_ids)) - ] + [f'gram_{i}' for i in range(len(layer_ids))] - - def make_features(self, x, clone=False): - self.m_feat(x) - return [(o.clone() if clone else o) for o in self.hooks.stored] - - def forward(self, input, target): - out_feat = self.make_features(target, clone=True) - in_feat = self.make_features(input) - self.feat_losses = [base_loss(input,target)] - self.feat_losses += [base_loss(f_in, f_out)*w - for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)] - self.feat_losses += [base_loss(gram_matrix(f_in), gram_matrix(f_out))*w**2 * 5e3 - for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)] - self.metrics = dict(zip(self.metric_names, self.feat_losses)) - return sum(self.feat_losses) - - def __del__(self): self.hooks.remove() - -MODEL_URL = "https://www.dropbox.com/s/daf70v42oo93kym/Legacy_best.pkl?dl=1" -urllib.request.urlretrieve(MODEL_URL, "Legacy_best.pkl") -path = Path(".") -learn=load_learner(path, 'Legacy_best.pkl') - -urlretrieve("https://s.hdnux.com/photos/01/07/33/71/18726490/5/1200x0.jpg","soccer1.jpg") -urlretrieve("https://cdn.vox-cdn.com/thumbor/4J8EqJBsS2qEQltIBuFOJWSn8dc=/1400x1400/filters:format(jpeg)/cdn.vox-cdn.com/uploads/chorus_asset/file/22466347/1312893179.jpg","soccer2.jpg") -urlretrieve("https://cdn.vox-cdn.com/thumbor/VHa7adj0Oie2Ao12RwKbs40i58s=/0x0:2366x2730/1200x800/filters:focal(1180x774:1558x1152)/cdn.vox-cdn.com/uploads/chorus_image/image/69526697/E5GnQUTWEAEK445.0.jpg","baseball.jpg") -urlretrieve("https://baseball.ca/uploads/images/content/Diodati(1).jpeg","baseball2.jpeg") - -sample_images = [["soccer1.jpg"], - 
["soccer2.jpg"], - ["baseball.jpg"], - ["baseball2.jpeg"]] - - -def predict(input): - img_t = T.ToTensor()(input) - img_fast = Image(img_t) - p,img_hr,b = learn.predict(img_fast) - x = np.minimum(np.maximum(image2np(img_hr.data*255), 0), 255).astype(np.uint8) - img = PIL.Image.fromarray(x) - return img - -gr_interface = gr.Interface(fn=predict, inputs=gr.inputs.Image(), outputs="image", title='Legacy-League',examples=sample_images).launch(); diff --git a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/utils/datasets.py b/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/utils/datasets.py deleted file mode 100644 index e672b136f56fd6b05038e24377908361a54fe519..0000000000000000000000000000000000000000 --- a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/utils/datasets.py +++ /dev/null @@ -1,35 +0,0 @@ -import cv2 -import numpy as np - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scale_fill=False, scaleup=True): - # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232 - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, 64), np.mod(dh, 64) # wh padding - elif scale_fill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) diff --git a/spaces/LuxOAI/guanaco-playground-tgi/dialogue.py b/spaces/LuxOAI/guanaco-playground-tgi/dialogue.py deleted file mode 100644 index 6ca26d2d887c6b683c5bd6240a09f9a55a046a3a..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/guanaco-playground-tgi/dialogue.py +++ /dev/null @@ -1,239 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import json -import os -from dataclasses import asdict, dataclass -from pathlib import Path -from typing import Any, Dict, List, Optional, Type, TypeVar, Union - -from huggingface_hub import ModelHubMixin, hf_hub_download - -# Generic variable that is either ModelHubMixin or a subclass thereof -T = TypeVar("T", bound="ModelHubMixin") - -TEMPLATE_FILENAME = "dialogue_template.json" -IGNORE_INDEX = -100 - - -@dataclass -class DialogueTemplate(ModelHubMixin): - """Converts all turns of a dialogue between a user and assistant to a standardized format. - Adapted from OpenAI's ChatML (https://github.com/openai/openai-python/blob/main/chatml.md) and Vicuna (https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py) - """ - - system: str - messages: List[Dict[str, str]] = None - system_token: str = "<|system|>" - user_token: str = "<|user|>" - assistant_token: str = "<|assistant|>" - end_token: str = "<|end|>" - - def get_training_prompt(self) -> str: - prompt = self.system_token + "\n" + self.system + self.end_token + "\n" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += self.user_token + "\n" + message["content"] + self.end_token + "\n" - else: - prompt += self.assistant_token + "\n" + message["content"] + self.end_token + "\n" - return prompt - - def get_inference_prompt(self) -> str: - prompt = self.system_token + "\n" + self.system + self.end_token + "\n" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += self.user_token + "\n" + message["content"] + self.end_token + "\n" - else: - prompt += self.assistant_token + "\n" + message["content"] + self.end_token + "\n" - prompt += self.assistant_token - return prompt - - def get_dialogue(self): - """Helper function to format the messages as an easy-to-read dialogue.""" - prompt = "" - if self.messages is None: - raise ValueError("Dialogue template must have at least one message.") - for message in self.messages: - if message["role"] == "user": - prompt += "\n\nHuman: " + message["content"] - else: - prompt += "\n\nAssistant: " + message["content"] - return prompt - - def get_special_tokens(self) -> List[str]: - return [self.system_token, self.user_token, self.assistant_token, self.end_token] - - def copy(self): - return DialogueTemplate( - system=self.system, - messages=self.messages, - system_token=self.system_token, - user_token=self.user_token, - assistant_token=self.assistant_token, - end_token=self.end_token, - ) - - def to_dict(self) -> Dict[str, Any]: - return {k: v for k, v in asdict(self).items()} - - @classmethod - def from_dict(cls, data): - return DialogueTemplate( - system=data["system"] if "system" in data else "", - messages=data["messages"] if "messages" in data else None, - system_token=data["system_token"] if "system_token" in data else "<|system|>", - user_token=data["user_token"] if "user_token" in data else "<|user|>", - assistant_token=data["assistant_token"] if "assistant_token" in data else "<|assistant|>", - end_token=data["end_token"] if "end_token" in data else "<|end|>", - ) - - def _save_pretrained(self, save_directory: Union[str, Path]) -> None: - save_directory = Path(save_directory) - save_directory.mkdir(exist_ok=True) - with open(save_directory / "dialogue_template.json", "w") as f: - json.dump(self.to_dict(), f, indent=2) - - @classmethod - def 
_from_pretrained( - cls: Type[T], - *, - model_id: str, - revision: Optional[str], - cache_dir: Optional[Union[str, Path]], - force_download: bool, - proxies: Optional[Dict], - resume_download: bool, - local_files_only: bool, - token: Optional[Union[str, bool]], - **model_kwargs, - ) -> T: - """Loads the dialogue template from a local directory or the Huggingface Hub. - Args: - model_id (`str`): - ID of the model to load from the Huggingface Hub (e.g. `bigscience/bloom`). - revision (`str`, *optional*): - Revision of the model on the Hub. Can be a branch name, a git tag or any commit id. Defaults to the - latest commit on `main` branch. - force_download (`bool`, *optional*, defaults to `False`): - Whether to force (re-)downloading the model weights and configuration files from the Hub, overriding - the existing cache. - resume_download (`bool`, *optional*, defaults to `False`): - Whether to delete incompletely received files. Will attempt to resume the download if such a file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint (e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`). - token (`str` or `bool`, *optional*): - The token to use as HTTP bearer authorization for remote files. By default, it will use the token - cached when running `huggingface-cli login`. - cache_dir (`str`, `Path`, *optional*): - Path to the folder where cached files are stored. - local_files_only (`bool`, *optional*, defaults to `False`): - If `True`, avoid downloading the file and return the path to the local cached file if it exists. - model_kwargs: - Additional keyword arguments passed along to the [`~ModelHubMixin._from_pretrained`] method. - """ - if os.path.isdir(model_id): # Can either be a local directory - print("Loading dialogue template from local directory") - template_file = os.path.join(model_id, TEMPLATE_FILENAME) - else: # Or a template on the Hub - template_file = hf_hub_download( # Download from the hub, passing same input args - repo_id=model_id, - filename=TEMPLATE_FILENAME, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - token=token, - local_files_only=local_files_only, - ) - - # Load template - with open(template_file, "r") as f: - data = json.load(f) - return cls.from_dict(data=data) - - -# A shortened version of the system message in Anthropic's HHH prompt: https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt -default_template = DialogueTemplate( - system="A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.", -) - -# OpenAI and OpenAssistant train on few to no system messages. -# TODO: consider defining this as the `default` template -no_system_template = DialogueTemplate( - system="", -) - -alpaca_template = DialogueTemplate( - system="A chat between a curious human and an artificial intelligence assistant. 
The assistant gives helpful, detailed, and polite answers to the user's questions.", - user_token="### Instruction:", - assistant_token="### Response:", -) - -SUPPORTED_DIALOGUE_TEMPLATES = { - "default": default_template, - "no_system": no_system_template, - "alpaca": alpaca_template, -} - - -def get_dialogue_template(template: str) -> DialogueTemplate: - if template not in SUPPORTED_DIALOGUE_TEMPLATES.keys(): - raise ValueError(f"Template {template} is not supported!") - return SUPPORTED_DIALOGUE_TEMPLATES[template].copy() - - -def prepare_dialogue(example, dialogue_template, is_train=True): - """Format example to single- or multi-turn dialogue.""" - # TODO: make this simpler by just ensuring every dataset has a messages column - if "messages" in example.keys() and example["messages"] is not None: - dialogue_template.messages = example["messages"] - elif all(k in example.keys() for k in ("prompt", "completion")): - # Construct single-turn dialogue from prompt and completion - dialogue_template.messages = [ - {"role": "user", "content": example["prompt"]}, - {"role": "assistant", "content": example["completion"]}, - ] - elif "prompt" in example.keys(): - # Construct single-turn dialogue from prompt (inference only) - dialogue_template.messages = [ - {"role": "user", "content": example["prompt"]}, - ] - else: - raise ValueError( - f"Could not format example as dialogue! Require either `messages` or `[prompt, completion]` or `[prompt]` keys but found {list(example.keys())}" - ) - if is_train: - example["text"] = dialogue_template.get_training_prompt() - else: - example["text"] = dialogue_template.get_inference_prompt() - return example - - -def mask_user_labels(tokenizer, dialogue_template, labels): - """Masks the user turns of a dialogue from the loss""" - user_token_id = tokenizer.convert_tokens_to_ids(dialogue_template.user_token) - assistant_token_id = tokenizer.convert_tokens_to_ids(dialogue_template.assistant_token) - for idx, label_id in enumerate(labels): - if label_id == user_token_id: - current_idx = idx - while labels[current_idx] != assistant_token_id and current_idx < len(labels): - labels[current_idx] = IGNORE_INDEX - current_idx += 1 \ No newline at end of file diff --git a/spaces/MCkernick/Image_Restoration_Colorization/CODE_OF_CONDUCT.md b/spaces/MCkernick/Image_Restoration_Colorization/CODE_OF_CONDUCT.md deleted file mode 100644 index f9ba8cf65f3e3104dd061c178066ec8247811f33..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,9 +0,0 @@ -# Microsoft Open Source Code of Conduct - -This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). 
- -Resources: - -- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) -- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) -- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_vcoco.py b/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_vcoco.py deleted file mode 100644 index 716ca22ab52d29d763f732cddd1d9e90ec85d2c9..0000000000000000000000000000000000000000 --- a/spaces/MLVKU/Human_Object_Interaction/hotr/engine/evaluator_vcoco.py +++ /dev/null @@ -1,87 +0,0 @@ -# ------------------------------------------------------------------------ -# HOTR official code : hotr/engine/evaluator_vcoco.py -# Copyright (c) Kakao Brain, Inc. and its affiliates. All Rights Reserved -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# ------------------------------------------------------------------------ -import os -import torch -import time -import datetime - -import hotr.util.misc as utils -import hotr.util.logger as loggers -from hotr.data.evaluators.vcoco_eval import VCocoEvaluator -from hotr.util.box_ops import rescale_bboxes, rescale_pairs - -import wandb - -@torch.no_grad() -def vcoco_evaluate(model, criterion, postprocessors, data_loader, device, output_dir, thr,args=None): - model.eval() - criterion.eval() - - metric_logger = loggers.MetricLogger(mode="test", delimiter=" ") - header = 'Evaluation Inference (V-COCO)' - - print_freq = 1 # len(data_loader) - res = {} - hoi_recognition_time = [] - - for samples, targets in metric_logger.log_every(data_loader, print_freq, header): - samples = samples.to(device) - targets = [{k: v.to(device) for k, v in t.items()} for t in targets] - - outputs = model(samples) - loss_dict = criterion(outputs, targets) - loss_dict_reduced = utils.reduce_dict(loss_dict) # ddp gathering - - orig_target_sizes = torch.stack([t["orig_size"] for t in targets], dim=0) - results = postprocessors['hoi'](outputs, orig_target_sizes, threshold=thr, dataset='vcoco',args=args) - targets = process_target(targets, orig_target_sizes) - hoi_recognition_time.append(results[0]['hoi_recognition_time'] * 1000) - - res.update( - {target['image_id'].item():\ - {'target': target, 'prediction': output} for target, output in zip(targets, results) - } - ) - print(f"[stats] HOI Recognition Time (avg) : {sum(hoi_recognition_time)/len(hoi_recognition_time):.4f} ms") - - start_time = time.time() - gather_res = utils.all_gather(res) - total_res = {} - for dist_res in gather_res: - total_res.update(dist_res) - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print(f"[stats] Distributed Gathering Time : {total_time_str}") - - return total_res - -def vcoco_accumulate(total_res, args, print_results, wandb_log): - vcoco_evaluator = VCocoEvaluator(args) - vcoco_evaluator.update(total_res) - print(f"[stats] Score Matrix Generation completed!! 
") - - scenario1 = vcoco_evaluator.role_eval1.evaluate(print_results) - scenario2 = vcoco_evaluator.role_eval2.evaluate(print_results) - - if wandb_log: - wandb.log({ - 'scenario1': scenario1, - 'scenario2': scenario2 - }) - - return scenario1, scenario2 - -def process_target(targets, target_sizes): - for idx, (target, target_size) in enumerate(zip(targets, target_sizes)): - labels = target['labels'] - valid_boxes_inds = (labels > 0) - - targets[idx]['boxes'] = rescale_bboxes(target['boxes'], target_size) # boxes - targets[idx]['pair_boxes'] = rescale_pairs(target['pair_boxes'], target_size) # pairs - - return targets \ No newline at end of file diff --git a/spaces/Maharaja36/myGenAIApp/README.md b/spaces/Maharaja36/myGenAIApp/README.md deleted file mode 100644 index 0bfbca57d6add7d9c4c8376cb2fe68689b1cc955..0000000000000000000000000000000000000000 --- a/spaces/Maharaja36/myGenAIApp/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MyGenAIApp -emoji: 🐠 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/modeling/ocr.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/modeling/ocr.py deleted file mode 100644 index df3b4f67959fc6a088b93ee7a34b15c1e07402df..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/modeling/ocr.py +++ /dev/null @@ -1,141 +0,0 @@ -import torch -import torch.nn as nn -import torch._utils -import torch.nn.functional as F - - -class SpatialGather_Module(nn.Module): - """ - Aggregate the context features according to the initial - predicted probability distribution. - Employ the soft-weighted method to aggregate the context. - """ - - def __init__(self, cls_num=0, scale=1): - super(SpatialGather_Module, self).__init__() - self.cls_num = cls_num - self.scale = scale - - def forward(self, feats, probs): - batch_size, c, h, w = probs.size(0), probs.size(1), probs.size(2), probs.size(3) - probs = probs.view(batch_size, c, -1) - feats = feats.view(batch_size, feats.size(1), -1) - feats = feats.permute(0, 2, 1) # batch x hw x c - probs = F.softmax(self.scale * probs, dim=2) # batch x k x hw - ocr_context = torch.matmul(probs, feats) \ - .permute(0, 2, 1).unsqueeze(3) # batch x k x c - return ocr_context - - -class SpatialOCR_Module(nn.Module): - """ - Implementation of the OCR module: - We aggregate the global object representation to update the representation for each pixel. 
- """ - - def __init__(self, - in_channels, - key_channels, - out_channels, - scale=1, - dropout=0.1, - norm_layer=nn.BatchNorm2d, - align_corners=True): - super(SpatialOCR_Module, self).__init__() - self.object_context_block = ObjectAttentionBlock2D(in_channels, key_channels, scale, - norm_layer, align_corners) - _in_channels = 2 * in_channels - - self.conv_bn_dropout = nn.Sequential( - nn.Conv2d(_in_channels, out_channels, kernel_size=1, padding=0, bias=False), - nn.Sequential(norm_layer(out_channels), nn.ReLU(inplace=True)), - nn.Dropout2d(dropout) - ) - - def forward(self, feats, proxy_feats): - context = self.object_context_block(feats, proxy_feats) - - output = self.conv_bn_dropout(torch.cat([context, feats], 1)) - - return output - - -class ObjectAttentionBlock2D(nn.Module): - ''' - The basic implementation for object context block - Input: - N X C X H X W - Parameters: - in_channels : the dimension of the input feature map - key_channels : the dimension after the key/query transform - scale : choose the scale to downsample the input feature maps (save memory cost) - bn_type : specify the bn type - Return: - N X C X H X W - ''' - - def __init__(self, - in_channels, - key_channels, - scale=1, - norm_layer=nn.BatchNorm2d, - align_corners=True): - super(ObjectAttentionBlock2D, self).__init__() - self.scale = scale - self.in_channels = in_channels - self.key_channels = key_channels - self.align_corners = align_corners - - self.pool = nn.MaxPool2d(kernel_size=(scale, scale)) - self.f_pixel = nn.Sequential( - nn.Conv2d(in_channels=self.in_channels, out_channels=self.key_channels, - kernel_size=1, stride=1, padding=0, bias=False), - nn.Sequential(norm_layer(self.key_channels), nn.ReLU(inplace=True)), - nn.Conv2d(in_channels=self.key_channels, out_channels=self.key_channels, - kernel_size=1, stride=1, padding=0, bias=False), - nn.Sequential(norm_layer(self.key_channels), nn.ReLU(inplace=True)) - ) - self.f_object = nn.Sequential( - nn.Conv2d(in_channels=self.in_channels, out_channels=self.key_channels, - kernel_size=1, stride=1, padding=0, bias=False), - nn.Sequential(norm_layer(self.key_channels), nn.ReLU(inplace=True)), - nn.Conv2d(in_channels=self.key_channels, out_channels=self.key_channels, - kernel_size=1, stride=1, padding=0, bias=False), - nn.Sequential(norm_layer(self.key_channels), nn.ReLU(inplace=True)) - ) - self.f_down = nn.Sequential( - nn.Conv2d(in_channels=self.in_channels, out_channels=self.key_channels, - kernel_size=1, stride=1, padding=0, bias=False), - nn.Sequential(norm_layer(self.key_channels), nn.ReLU(inplace=True)) - ) - self.f_up = nn.Sequential( - nn.Conv2d(in_channels=self.key_channels, out_channels=self.in_channels, - kernel_size=1, stride=1, padding=0, bias=False), - nn.Sequential(norm_layer(self.in_channels), nn.ReLU(inplace=True)) - ) - - def forward(self, x, proxy): - batch_size, h, w = x.size(0), x.size(2), x.size(3) - if self.scale > 1: - x = self.pool(x) - - query = self.f_pixel(x).view(batch_size, self.key_channels, -1) - query = query.permute(0, 2, 1) - key = self.f_object(proxy).view(batch_size, self.key_channels, -1) - value = self.f_down(proxy).view(batch_size, self.key_channels, -1) - value = value.permute(0, 2, 1) - - sim_map = torch.matmul(query, key) - sim_map = (self.key_channels ** -.5) * sim_map - sim_map = F.softmax(sim_map, dim=-1) - - # add bg context ... 
- context = torch.matmul(sim_map, value) - context = context.permute(0, 2, 1).contiguous() - context = context.view(batch_size, self.key_channels, *x.size()[2:]) - context = self.f_up(context) - if self.scale > 1: - context = F.interpolate(input=context, size=(h, w), - mode='bilinear', align_corners=self.align_corners) - - return context diff --git a/spaces/MarioWasTaken/BackroomsIG/index.html b/spaces/MarioWasTaken/BackroomsIG/index.html deleted file mode 100644 index db5b2cdad6c826d9024a906638cbbd34e30165ab..0000000000000000000000000000000000000000 --- a/spaces/MarioWasTaken/BackroomsIG/index.html +++ /dev/null @@ -1,15 +0,0 @@ - - - - - - BackroomsIG - - - -
    - Backrooms Image Generator -
    - - diff --git a/spaces/MathysL/AutoGPT4/autogpt/commands/google_search.py b/spaces/MathysL/AutoGPT4/autogpt/commands/google_search.py deleted file mode 100644 index 7d38ce7568d2de207d521b077cfebd72527c9795..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/autogpt/commands/google_search.py +++ /dev/null @@ -1,87 +0,0 @@ -"""Google search command for Autogpt.""" -from __future__ import annotations - -import json - -from duckduckgo_search import ddg - -from autogpt.config import Config - -CFG = Config() - - -def google_search(query: str, num_results: int = 8) -> str: - """Return the results of a Google search - - Args: - query (str): The search query. - num_results (int): The number of results to return. - - Returns: - str: The results of the search. - """ - search_results = [] - if not query: - return json.dumps(search_results) - - results = ddg(query, max_results=num_results) - if not results: - return json.dumps(search_results) - - for j in results: - search_results.append(j) - - return json.dumps(search_results, ensure_ascii=False, indent=4) - - -def google_official_search(query: str, num_results: int = 8) -> str | list[str]: - """Return the results of a Google search using the official Google API - - Args: - query (str): The search query. - num_results (int): The number of results to return. - - Returns: - str: The results of the search. - """ - - from googleapiclient.discovery import build - from googleapiclient.errors import HttpError - - try: - # Get the Google API key and Custom Search Engine ID from the config file - api_key = CFG.google_api_key - custom_search_engine_id = CFG.custom_search_engine_id - - # Initialize the Custom Search API service - service = build("customsearch", "v1", developerKey=api_key) - - # Send the search query and retrieve the results - result = ( - service.cse() - .list(q=query, cx=custom_search_engine_id, num=num_results) - .execute() - ) - - # Extract the search result items from the response - search_results = result.get("items", []) - - # Create a list of only the URLs from the search results - search_results_links = [item["link"] for item in search_results] - - except HttpError as e: - # Handle errors in the API call - error_details = json.loads(e.content.decode()) - - # Check if the error is related to an invalid or missing API key - if error_details.get("error", {}).get( - "code" - ) == 403 and "invalid API key" in error_details.get("error", {}).get( - "message", "" - ): - return "Error: The provided Google API key is invalid or missing." - else: - return f"Error: {e}" - - # Return the list of search result URLs - return search_results_links diff --git a/spaces/MathysL/AutoGPT4/autogpt/spinner.py b/spaces/MathysL/AutoGPT4/autogpt/spinner.py deleted file mode 100644 index 4e33d74213881352546f334ccb1eb4772b8b7b70..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/autogpt/spinner.py +++ /dev/null @@ -1,65 +0,0 @@ -"""A simple spinner module""" -import itertools -import sys -import threading -import time - - -class Spinner: - """A simple spinner class""" - - def __init__(self, message: str = "Loading...", delay: float = 0.1) -> None: - """Initialize the spinner class - - Args: - message (str): The message to display. - delay (float): The delay between each spinner update. 
- """ - self.spinner = itertools.cycle(["-", "/", "|", "\\"]) - self.delay = delay - self.message = message - self.running = False - self.spinner_thread = None - - def spin(self) -> None: - """Spin the spinner""" - while self.running: - sys.stdout.write(f"{next(self.spinner)} {self.message}\r") - sys.stdout.flush() - time.sleep(self.delay) - sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") - - def __enter__(self): - """Start the spinner""" - self.running = True - self.spinner_thread = threading.Thread(target=self.spin) - self.spinner_thread.start() - - return self - - def __exit__(self, exc_type, exc_value, exc_traceback) -> None: - """Stop the spinner - - Args: - exc_type (Exception): The exception type. - exc_value (Exception): The exception value. - exc_traceback (Exception): The exception traceback. - """ - self.running = False - if self.spinner_thread is not None: - self.spinner_thread.join() - sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r") - sys.stdout.flush() - - def update_message(self, new_message, delay=0.1): - """Update the spinner message - Args: - new_message (str): New message to display - delay: Delay in seconds before updating the message - """ - time.sleep(delay) - sys.stdout.write( - f"\r{' ' * (len(self.message) + 2)}\r" - ) # Clear the current message - sys.stdout.flush() - self.message = new_message diff --git a/spaces/MechaXYZ/Audio-to-Text/README.md b/spaces/MechaXYZ/Audio-to-Text/README.md deleted file mode 100644 index 39e4257d2d8cf7b195fbe6b388d9bb62ed1b6974..0000000000000000000000000000000000000000 --- a/spaces/MechaXYZ/Audio-to-Text/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Audio-to-Text Playground -emoji: 🤫 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -tags: -- whisper-event -duplicated_from: NeuralInternet/Audio-to-Text_Playground ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/utils/sync_bn.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/utils/sync_bn.py deleted file mode 100644 index f78f39181d75bb85c53e8c7c8eaf45690e9f0bee..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/utils/sync_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch - -import annotator.uniformer.mmcv as mmcv - - -class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - """A general BatchNorm layer without input dimension check. - - Reproduced from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - is `_check_input_dim` that is designed for tensor sanity checks. - The check has been bypassed in this class for the convenience of converting - SyncBatchNorm. - """ - - def _check_input_dim(self, input): - return - - -def revert_sync_batchnorm(module): - """Helper function to convert all `SyncBatchNorm` (SyncBN) and - `mmcv.ops.sync_bn.SyncBatchNorm`(MMSyncBN) layers in the model to - `BatchNormXd` layers. - - Adapted from @kapily's work: - (https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547) - - Args: - module (nn.Module): The module containing `SyncBatchNorm` layers. - - Returns: - module_output: The converted module with `BatchNormXd` layers. 
- """ - module_output = module - module_checklist = [torch.nn.modules.batchnorm.SyncBatchNorm] - if hasattr(mmcv, 'ops'): - module_checklist.append(mmcv.ops.SyncBatchNorm) - if isinstance(module, tuple(module_checklist)): - module_output = _BatchNormXd(module.num_features, module.eps, - module.momentum, module.affine, - module.track_running_stats) - if module.affine: - # no_grad() may not be needed here but - # just to be consistent with `convert_sync_batchnorm()` - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - module_output.training = module.training - # qconfig exists in quantized models - if hasattr(module, 'qconfig'): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/speech/__init__.py b/spaces/MetaWabbit/Auto-GPT/autogpt/speech/__init__.py deleted file mode 100644 index 2ff0d2bf48dc356bf810cb5a2063d6774e5fec6e..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/autogpt/speech/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -"""This module contains the speech recognition and speech synthesis functions.""" -from autogpt.speech.say import say_text - -__all__ = ["say_text"] diff --git a/spaces/MiloSobral/PortiloopDemo/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/MiloSobral/PortiloopDemo/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index bbcbbe7d61558adde3cbfd0c7a63a67c27ed6d30..0000000000000000000000000000000000000000 --- a/spaces/MiloSobral/PortiloopDemo/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -name: Feature request -about: Suggest an idea for this project -title: '' -labels: '' -assignees: '' - ---- - -**Is your feature request related to a problem? Please describe.** -A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] - -**Describe the solution you'd like** -A clear and concise description of what you want to happen. - -**Describe alternatives you've considered** -A clear and concise description of any alternative solutions or features you've considered. - -**Additional context** -Add any other context or screenshots about the feature request here. diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/model_utils_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/model_utils_test.py deleted file mode 100644 index a8c4a15c9aba8dbff043088a392fe415f22206ca..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/model_utils_test.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Test Transformer model helper methods.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf - -from official.nlp.transformer import model_utils - -NEG_INF = -1e9 - - -class ModelUtilsTest(tf.test.TestCase): - - def test_get_padding(self): - x = tf.constant([[1, 0, 0, 0, 2], [3, 4, 0, 0, 0], [0, 5, 6, 0, 7]]) - padding = model_utils.get_padding(x, padding_value=0) - - self.assertAllEqual([[0, 1, 1, 1, 0], [0, 0, 1, 1, 1], [1, 0, 0, 1, 0]], - padding) - - def test_get_padding_bias(self): - x = tf.constant([[1, 0, 0, 0, 2], [3, 4, 0, 0, 0], [0, 5, 6, 0, 7]]) - bias = model_utils.get_padding_bias(x) - bias_shape = tf.shape(bias) - flattened_bias = tf.reshape(bias, [3, 5]) - - self.assertAllEqual([[0, NEG_INF, NEG_INF, NEG_INF, 0], - [0, 0, NEG_INF, NEG_INF, NEG_INF], - [NEG_INF, 0, 0, NEG_INF, 0]], - flattened_bias) - self.assertAllEqual([3, 1, 1, 5], bias_shape) - - def test_get_decoder_self_attention_bias(self): - length = 5 - bias = model_utils.get_decoder_self_attention_bias(length) - - self.assertAllEqual([[[[0, NEG_INF, NEG_INF, NEG_INF, NEG_INF], - [0, 0, NEG_INF, NEG_INF, NEG_INF], - [0, 0, 0, NEG_INF, NEG_INF], - [0, 0, 0, 0, NEG_INF], - [0, 0, 0, 0, 0]]]], - bias) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/README.md b/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/README.md deleted file mode 100644 index 9675f01a57fd26a83ed5103e116257b3664396cb..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# XLNet: Generalized Autoregressive Pretraining for Language Understanding - -The academic paper which describes XLNet in detail and provides full results on -a number of tasks can be found here: https://arxiv.org/abs/1906.08237. - -**Instructions and user guide will be added soon.** - -XLNet is a generalized autoregressive BERT-like pretraining language model that -enables learning bidirectional contexts by maximizing the expected likelihood -over all permutations of the factorization order. It can learn dependency beyond -a fixed length without disrupting temporal coherence by using segment-level -recurrence mechanism and relative positional encoding scheme introduced in -[Transformer-XL](https://arxiv.org/pdf/1901.02860.pdf). XLNet outperforms BERT -on 20 NLP benchmark tasks and achieves state-of-the-art results on 18 tasks -including question answering, natural language inference, sentiment analysis, -and document ranking. 
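For intuition, a minimal sketch of the permutation idea described above (plain NumPy with hypothetical names, not code from this repository): one sampled factorization order becomes an attention mask in which each token may condition only on the tokens that precede it in that order.

```python
# Illustrative sketch only, assuming a toy 4-token sequence; all names are
# hypothetical and this is not part of the XLNet sources referenced above.
import numpy as np

def permutation_attention_mask(perm):
    """mask[i, j] == 1 iff position i may attend to position j, i.e. j comes
    strictly earlier than i in the sampled factorization order `perm`."""
    seq_len = len(perm)
    # rank[pos] = index of `pos` within the permutation
    rank = np.empty(seq_len, dtype=int)
    rank[np.asarray(perm)] = np.arange(seq_len)
    # position i sees position j only if j is earlier in the factorization order
    return (rank[None, :] < rank[:, None]).astype(int)

# Factorization order 3 -> 2 -> 0 -> 1: token 0 may attend to tokens 3 and 2,
# token 3 attends to nothing, token 1 attends to all other tokens.
print(permutation_attention_mask([3, 2, 0, 1]))
```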
diff --git a/spaces/NSect/multitrack-midi-music-generator/utils.py b/spaces/NSect/multitrack-midi-music-generator/utils.py deleted file mode 100644 index 22c140ff963af1fa884267c280b0a2c6933a3f77..0000000000000000000000000000000000000000 --- a/spaces/NSect/multitrack-midi-music-generator/utils.py +++ /dev/null @@ -1,244 +0,0 @@ -from typing import List, Tuple - -import gradio as gr -from transformers import AutoTokenizer, AutoModelForCausalLM -import note_seq -from matplotlib.figure import Figure -from numpy import ndarray -import torch - -from constants import GM_INSTRUMENTS, SAMPLE_RATE -from string_to_notes import token_sequence_to_note_sequence -from model import get_model_and_tokenizer - - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -# Load the tokenizer and the model -model, tokenizer = get_model_and_tokenizer() - - -def create_seed_string(genre: str = "OTHER") -> str: - """ - Creates a seed string for generating a new piece. - - Args: - genre (str, optional): The genre of the piece. Defaults to "OTHER". - - Returns: - str: The seed string. - """ - if genre == "RANDOM": - seed_string = "PIECE_START" - else: - seed_string = f"PIECE_START GENRE={genre} TRACK_START" - return seed_string - - -def get_instruments(text_sequence: str) -> List[str]: - """ - Extracts the list of instruments from a text sequence. - - Args: - text_sequence (str): The text sequence. - - Returns: - List[str]: The list of instruments. - """ - instruments = [] - parts = text_sequence.split() - for part in parts: - if part.startswith("INST="): - if part[5:] == "DRUMS": - instruments.append("Drums") - else: - index = int(part[5:]) - instruments.append(GM_INSTRUMENTS[index]) - return instruments - - -def generate_new_instrument(seed: str, temp: float = 0.75) -> str: - """ - Generates a new instrument sequence from a given seed and temperature. - - Args: - seed (str): The seed string for the generation. - temp (float, optional): The temperature for the generation, which controls the randomness. Defaults to 0.75. - - Returns: - str: The generated instrument sequence. - """ - seed_length = len(tokenizer.encode(seed)) - - while True: - # Encode the conditioning tokens. - input_ids = tokenizer.encode(seed, return_tensors="pt") - - # Move the input_ids tensor to the same device as the model - input_ids = input_ids.to(model.device) - - # Generate more tokens. - eos_token_id = tokenizer.encode("TRACK_END")[0] - generated_ids = model.generate( - input_ids, - max_new_tokens=2048, - do_sample=True, - temperature=temp, - eos_token_id=eos_token_id, - ) - generated_sequence = tokenizer.decode(generated_ids[0]) - - # Check if the generated sequence contains "NOTE_ON" beyond the seed - new_generated_sequence = tokenizer.decode(generated_ids[0][seed_length:]) - if "NOTE_ON" in new_generated_sequence: - return generated_sequence - - -def get_outputs_from_string( - generated_sequence: str, qpm: int = 120 -) -> Tuple[ndarray, str, Figure, str, str]: - """ - Converts a generated sequence into various output formats including audio, MIDI, plot, etc. - - Args: - generated_sequence (str): The generated sequence of tokens. - qpm (int, optional): The quarter notes per minute. Defaults to 120. - - Returns: - Tuple[ndarray, str, Figure, str, str]: The audio waveform, MIDI file name, plot figure, - instruments string, and number of tokens string. 
- """ - instruments = get_instruments(generated_sequence) - instruments_str = "\n".join(f"- {instrument}" for instrument in instruments) - note_sequence = token_sequence_to_note_sequence(generated_sequence, qpm=qpm) - - synth = note_seq.fluidsynth - array_of_floats = synth(note_sequence, sample_rate=SAMPLE_RATE) - int16_data = note_seq.audio_io.float_samples_to_int16(array_of_floats) - fig = note_seq.plot_sequence(note_sequence, show_figure=False) - num_tokens = str(len(generated_sequence.split())) - audio = gr.make_waveform((SAMPLE_RATE, int16_data)) - note_seq.note_sequence_to_midi_file(note_sequence, "midi_ouput.mid") - return audio, "midi_ouput.mid", fig, instruments_str, num_tokens - - -def remove_last_instrument( - text_sequence: str, qpm: int = 120 -) -> Tuple[ndarray, str, Figure, str, str, str]: - """ - Removes the last instrument from a song string and returns the various output formats. - - Args: - text_sequence (str): The song string. - qpm (int, optional): The quarter notes per minute. Defaults to 120. - - Returns: - Tuple[ndarray, str, Figure, str, str, str]: The audio waveform, MIDI file name, plot figure, - instruments string, new song string, and number of tokens string. - """ - # We split the song into tracks by splitting on 'TRACK_START' - tracks = text_sequence.split("TRACK_START") - # We keep all tracks except the last one - modified_tracks = tracks[:-1] - # We join the tracks back together, adding back the 'TRACK_START' that was removed by split - new_song = "TRACK_START".join(modified_tracks) - - if len(tracks) == 2: - # There is only one instrument, so start from scratch - audio, midi_file, fig, instruments_str, new_song, num_tokens = generate_song( - text_sequence=new_song - ) - elif len(tracks) == 1: - # No instrument so start from empty sequence - audio, midi_file, fig, instruments_str, new_song, num_tokens = generate_song( - text_sequence="" - ) - else: - audio, midi_file, fig, instruments_str, num_tokens = get_outputs_from_string( - new_song, qpm - ) - - return audio, midi_file, fig, instruments_str, new_song, num_tokens - - -def regenerate_last_instrument( - text_sequence: str, qpm: int = 120 -) -> Tuple[ndarray, str, Figure, str, str, str]: - """ - Regenerates the last instrument in a song string and returns the various output formats. - - Args: - text_sequence (str): The song string. - qpm (int, optional): The quarter notes per minute. Defaults to 120. - - Returns: - Tuple[ndarray, str, Figure, str, str, str]: The audio waveform, MIDI file name, plot figure, - instruments string, new song string, and number of tokens string. - """ - last_inst_index = text_sequence.rfind("INST=") - if last_inst_index == -1: - # No instrument so start from empty sequence - audio, midi_file, fig, instruments_str, new_song, num_tokens = generate_song( - text_sequence="", qpm=qpm - ) - else: - # Take it from the last instrument and continue generation - next_space_index = text_sequence.find(" ", last_inst_index) - new_seed = text_sequence[:next_space_index] - audio, midi_file, fig, instruments_str, new_song, num_tokens = generate_song( - text_sequence=new_seed, qpm=qpm - ) - return audio, midi_file, fig, instruments_str, new_song, num_tokens - - -def change_tempo( - text_sequence: str, qpm: int -) -> Tuple[ndarray, str, Figure, str, str, str]: - """ - Changes the tempo of a song string and returns the various output formats. - - Args: - text_sequence (str): The song string. - qpm (int): The new quarter notes per minute. 
- - Returns: - Tuple[ndarray, str, Figure, str, str, str]: The audio waveform, MIDI file name, plot figure, - instruments string, text sequence, and number of tokens string. - """ - audio, midi_file, fig, instruments_str, num_tokens = get_outputs_from_string( - text_sequence, qpm=qpm - ) - return audio, midi_file, fig, instruments_str, text_sequence, num_tokens - - -def generate_song( - genre: str = "OTHER", - temp: float = 0.75, - text_sequence: str = "", - qpm: int = 120, -) -> Tuple[ndarray, str, Figure, str, str, str]: - """ - Generates a song given a genre, temperature, initial text sequence, and tempo. - - Args: - model (AutoModelForCausalLM): The pretrained model used for generating the sequences. - tokenizer (AutoTokenizer): The tokenizer used to encode and decode the sequences. - genre (str, optional): The genre of the song. Defaults to "OTHER". - temp (float, optional): The temperature for the generation, which controls the randomness. Defaults to 0.75. - text_sequence (str, optional): The initial text sequence for the song. Defaults to "". - qpm (int, optional): The quarter notes per minute. Defaults to 120. - - Returns: - Tuple[ndarray, str, Figure, str, str, str]: The audio waveform, MIDI file name, plot figure, - instruments string, generated song string, and number of tokens string. - """ - if text_sequence == "": - seed_string = create_seed_string(genre) - else: - seed_string = text_sequence - - generated_sequence = generate_new_instrument(seed=seed_string, temp=temp) - audio, midi_file, fig, instruments_str, num_tokens = get_outputs_from_string( - generated_sequence, qpm - ) - return audio, midi_file, fig, instruments_str, generated_sequence, num_tokens diff --git a/spaces/NataKaichkina/PredictSalary/app.py b/spaces/NataKaichkina/PredictSalary/app.py deleted file mode 100644 index b2efc16441c240e718ffc94bc421c59b4b2787e9..0000000000000000000000000000000000000000 --- a/spaces/NataKaichkina/PredictSalary/app.py +++ /dev/null @@ -1,138 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd -import re -import pickle -from sentence_transformers import SentenceTransformer -from PIL import Image -COUNT = 768 - - -image = Image.open('images.jpg') -st.sidebar.image(image) -st.sidebar.header('Предсказание заработной платы по описанию вакансии') - - -categories = [ - 'Accounting & Finance Jobs', - 'Admin Jobs', - 'Charity & Voluntary Jobs', - 'Consultancy Jobs', - 'Creative & Design Jobs', - 'Customer Services Jobs', - 'Domestic help & Cleaning Jobs', - 'Energy, Oil & Gas Jobs', - 'Engineering Jobs', - 'Graduate Jobs', - 'HR & Recruitment Jobs', - 'Healthcare & Nursing Jobs', - 'Hospitality & Catering Jobs', - 'IT Jobs', - 'Legal Jobs', - 'Logistics & Warehouse Jobs', - 'Maintenance Jobs', - 'Manufacturing Jobs', - 'Other/General Jobs', - 'PR, Advertising & Marketing Jobs', - 'Part time Jobs', - 'Property Jobs', - 'Retail Jobs', - 'Sales Jobs', - 'Scientific & QA Jobs', - 'Social work Jobs', - 'Teaching Jobs', - 'Trade & Construction Jobs', - 'Travel Jobs'] - - -def normilize_text(text): - text = str(text) - return re.sub('[^a-zA-Zа-яА-Я\s]+', '', text.lower().strip()) - - -def predict_by_model(Title, FullDescription, model, embedding_model): - title, full_description = normilize_text(Title), normilize_text(FullDescription) - temp = np.array((*embedding_model.encode(title), *embedding_model.encode(full_description))) - return model.predict(temp.reshape((1, COUNT * 2)))[0] - - -@st.cache(allow_output_mutation=True) -def load_models(): - embedding_model = 
SentenceTransformer('all-mpnet-base-v2') - with open("pickle_model_final.pkl", 'rb') as file: - pickle_model = pickle.load(file) - with open("pickle_model_try20.pkl", 'rb') as file: - pickle_model20 = pickle.load(file) - return embedding_model, pickle_model, pickle_model20 - - -def pd_to_emb_pd(pd, embedding_model): # подразумевается, что нужные два столбца идут первыми - title_emb, descr_emb = embedding_model.encode(pd["Title"]), embedding_model.encode(pd['FullDescription']) - return title_emb, descr_emb - - -def GetResultV(description, text, model, embedding_model): - prediction = predict_by_model(description, text, model, embedding_model) - return int((prediction // 500) * 500) - - -def GetResultN(title, description, category, contract_type, contract_time, model, embedding_model): - title123 = [] - for i in range(COUNT): - title123.append('Title' + str(i)) - - descr123 = [] - for i in range(COUNT): - descr123.append('Descr' + str(i)) - - d = {'Title': [normilize_text(title)], - 'FullDescription': [normilize_text(description)], - 'ContractTime' + '_contract': [contract_time == 'contract'], - 'ContractTime' + '_permanent': [contract_time == 'permanent'], - 'ContractTime' + '_unknown': [contract_time != 'contract' and contract_time != 'permanent'], - 'ContractType' + '_full_time': [contract_type == 'full_time'], - 'ContractType' + '_part_time': [contract_type == 'part_time'], - 'ContractType' + '_unknown': [contract_type != 'full_time' and contract_time != 'part_time'], - } - for i in categories: - d[i] = 0 - d[category] = 1 - - valid = pd.DataFrame.from_dict(d) - title_emb, descr_emb = pd_to_emb_pd(valid, embedding_model) - valid = valid.join(pd.DataFrame(title_emb, columns=title123)).join(pd.DataFrame(descr_emb, columns=descr123)) - del valid['Title'] - del valid['FullDescription'] - - predict = model.predict(valid) - return int((predict // 500) * 500) - - -def main(): - embedding_model, pickle_model, pickle_model20 = load_models() - title = st.text_input('Введите название вакансии') - description = st.text_area('Введите описание вакансии') - category = st.selectbox('Введите категорию вакансии', ['', *categories]) - contractType = st.selectbox('Введите тип занятости', ['part_time', 'full_time', 'other']) - contractTime = st.selectbox('Введите тип договора', ['contract', 'permanent', 'other']) - - if st.button('Предсказать зарплату'): - if description != '' and title != '' and category != '': - if len(description) < 50 or len(title) < 10: - st.error('Поле с названием и описанием вакансии имеют слишком короткие значения') - else: - result = (GetResultN(title, description, category, contractType, contractTime, pickle_model20, embedding_model) +\ - GetResultV(description, category, pickle_model, embedding_model)) / 2. 
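                # Final figure: mean of the two models' predictions, each already
                # rounded down to a multiple of 500 inside GetResultN/GetResultV.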
- st.write('Результат: ' + str(result)) - else: - st.error('Поле с названием, описанием вакансии и категорией должны быть обязательно заполнены') - else: - st.write('Результат:') - if description != '' and title != '': - st.warning('Для получения предсказания нажмите кнопку "Предсказать зарплату"') - else: - st.warning('Введите название и описание вакансии') - - -if __name__ == "__main__": - main() diff --git a/spaces/Nee001/bing0/src/components/ui/dropdown-menu.tsx b/spaces/Nee001/bing0/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu' - -import { cn } from '@/lib/utils' - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = 'DropdownMenuShortcut' - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuRadioGroup -} diff --git a/spaces/Nepmods/kawaiiAI/README.md b/spaces/Nepmods/kawaiiAI/README.md deleted file mode 100644 index 9f475824e5bb0693b756d96a3056f9e14afc04a4..0000000000000000000000000000000000000000 --- a/spaces/Nepmods/kawaiiAI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: KawaiiAI -emoji: 📉 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OAOA/DifFace/basicsr/models/sr_model.py b/spaces/OAOA/DifFace/basicsr/models/sr_model.py deleted file mode 100644 
index 787f1fd2eab5963579c764c1bfb87199b7dd196f..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/models/sr_model.py +++ /dev/null @@ -1,279 +0,0 @@ -import torch -from collections import OrderedDict -from os import path as osp -from tqdm import tqdm - -from basicsr.archs import build_network -from basicsr.losses import build_loss -from basicsr.metrics import calculate_metric -from basicsr.utils import get_root_logger, imwrite, tensor2img -from basicsr.utils.registry import MODEL_REGISTRY -from .base_model import BaseModel - - -@MODEL_REGISTRY.register() -class SRModel(BaseModel): - """Base SR model for single image super-resolution.""" - - def __init__(self, opt): - super(SRModel, self).__init__(opt) - - # define network - self.net_g = build_network(opt['network_g']) - self.net_g = self.model_to_device(self.net_g) - self.print_network(self.net_g) - - # load pretrained models - load_path = self.opt['path'].get('pretrain_network_g', None) - if load_path is not None: - param_key = self.opt['path'].get('param_key_g', 'params') - self.load_network(self.net_g, load_path, self.opt['path'].get('strict_load_g', True), param_key) - - if self.is_train: - self.init_training_settings() - - def init_training_settings(self): - self.net_g.train() - train_opt = self.opt['train'] - - self.ema_decay = train_opt.get('ema_decay', 0) - if self.ema_decay > 0: - logger = get_root_logger() - logger.info(f'Use Exponential Moving Average with decay: {self.ema_decay}') - # define network net_g with Exponential Moving Average (EMA) - # net_g_ema is used only for testing on one GPU and saving - # There is no need to wrap with DistributedDataParallel - self.net_g_ema = build_network(self.opt['network_g']).to(self.device) - # load pretrained model - load_path = self.opt['path'].get('pretrain_network_g', None) - if load_path is not None: - self.load_network(self.net_g_ema, load_path, self.opt['path'].get('strict_load_g', True), 'params_ema') - else: - self.model_ema(0) # copy net_g weight - self.net_g_ema.eval() - - # define losses - if train_opt.get('pixel_opt'): - self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device) - else: - self.cri_pix = None - - if train_opt.get('perceptual_opt'): - self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device) - else: - self.cri_perceptual = None - - if self.cri_pix is None and self.cri_perceptual is None: - raise ValueError('Both pixel and perceptual losses are None.') - - # set up optimizers and schedulers - self.setup_optimizers() - self.setup_schedulers() - - def setup_optimizers(self): - train_opt = self.opt['train'] - optim_params = [] - for k, v in self.net_g.named_parameters(): - if v.requires_grad: - optim_params.append(v) - else: - logger = get_root_logger() - logger.warning(f'Params {k} will not be optimized.') - - optim_type = train_opt['optim_g'].pop('type') - self.optimizer_g = self.get_optimizer(optim_type, optim_params, **train_opt['optim_g']) - self.optimizers.append(self.optimizer_g) - - def feed_data(self, data): - self.lq = data['lq'].to(self.device) - if 'gt' in data: - self.gt = data['gt'].to(self.device) - - def optimize_parameters(self, current_iter): - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - - l_total = 0 - loss_dict = OrderedDict() - # pixel loss - if self.cri_pix: - l_pix = self.cri_pix(self.output, self.gt) - l_total += l_pix - loss_dict['l_pix'] = l_pix - # perceptual loss - if self.cri_perceptual: - l_percep, l_style = self.cri_perceptual(self.output, self.gt) - if 
l_percep is not None: - l_total += l_percep - loss_dict['l_percep'] = l_percep - if l_style is not None: - l_total += l_style - loss_dict['l_style'] = l_style - - l_total.backward() - self.optimizer_g.step() - - self.log_dict = self.reduce_loss_dict(loss_dict) - - if self.ema_decay > 0: - self.model_ema(decay=self.ema_decay) - - def test(self): - if hasattr(self, 'net_g_ema'): - self.net_g_ema.eval() - with torch.no_grad(): - self.output = self.net_g_ema(self.lq) - else: - self.net_g.eval() - with torch.no_grad(): - self.output = self.net_g(self.lq) - self.net_g.train() - - def test_selfensemble(self): - # TODO: to be tested - # 8 augmentations - # modified from https://github.com/thstkdgus35/EDSR-PyTorch - - def _transform(v, op): - # if self.precision != 'single': v = v.float() - v2np = v.data.cpu().numpy() - if op == 'v': - tfnp = v2np[:, :, :, ::-1].copy() - elif op == 'h': - tfnp = v2np[:, :, ::-1, :].copy() - elif op == 't': - tfnp = v2np.transpose((0, 1, 3, 2)).copy() - - ret = torch.Tensor(tfnp).to(self.device) - # if self.precision == 'half': ret = ret.half() - - return ret - - # prepare augmented data - lq_list = [self.lq] - for tf in 'v', 'h', 't': - lq_list.extend([_transform(t, tf) for t in lq_list]) - - # inference - if hasattr(self, 'net_g_ema'): - self.net_g_ema.eval() - with torch.no_grad(): - out_list = [self.net_g_ema(aug) for aug in lq_list] - else: - self.net_g.eval() - with torch.no_grad(): - out_list = [self.net_g_ema(aug) for aug in lq_list] - self.net_g.train() - - # merge results - for i in range(len(out_list)): - if i > 3: - out_list[i] = _transform(out_list[i], 't') - if i % 4 > 1: - out_list[i] = _transform(out_list[i], 'h') - if (i % 4) % 2 == 1: - out_list[i] = _transform(out_list[i], 'v') - output = torch.cat(out_list, dim=0) - - self.output = output.mean(dim=0, keepdim=True) - - def dist_validation(self, dataloader, current_iter, tb_logger, save_img): - if self.opt['rank'] == 0: - self.nondist_validation(dataloader, current_iter, tb_logger, save_img) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - dataset_name = dataloader.dataset.opt['name'] - with_metrics = self.opt['val'].get('metrics') is not None - use_pbar = self.opt['val'].get('pbar', False) - - if with_metrics: - if not hasattr(self, 'metric_results'): # only execute in the first run - self.metric_results = {metric: 0 for metric in self.opt['val']['metrics'].keys()} - # initialize the best metric results for each dataset_name (supporting multiple validation datasets) - self._initialize_best_metric_results(dataset_name) - # zero self.metric_results - if with_metrics: - self.metric_results = {metric: 0 for metric in self.metric_results} - - metric_data = dict() - if use_pbar: - pbar = tqdm(total=len(dataloader), unit='image') - - for idx, val_data in enumerate(dataloader): - img_name = osp.splitext(osp.basename(val_data['lq_path'][0]))[0] - self.feed_data(val_data) - self.test() - - visuals = self.get_current_visuals() - sr_img = tensor2img([visuals['result']]) - metric_data['img'] = sr_img - if 'gt' in visuals: - gt_img = tensor2img([visuals['gt']]) - metric_data['img2'] = gt_img - del self.gt - - # tentative for out of GPU memory - del self.lq - del self.output - torch.cuda.empty_cache() - - if save_img: - if self.opt['is_train']: - save_img_path = osp.join(self.opt['path']['visualization'], img_name, - f'{img_name}_{current_iter}.png') - else: - if self.opt['val']['suffix']: - save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, - 
f'{img_name}_{self.opt["val"]["suffix"]}.png') - else: - save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, - f'{img_name}_{self.opt["name"]}.png') - imwrite(sr_img, save_img_path) - - if with_metrics: - # calculate metrics - for name, opt_ in self.opt['val']['metrics'].items(): - self.metric_results[name] += calculate_metric(metric_data, opt_) - if use_pbar: - pbar.update(1) - pbar.set_description(f'Test {img_name}') - if use_pbar: - pbar.close() - - if with_metrics: - for metric in self.metric_results.keys(): - self.metric_results[metric] /= (idx + 1) - # update the best metric result - self._update_best_metric_result(dataset_name, metric, self.metric_results[metric], current_iter) - - self._log_validation_metric_values(current_iter, dataset_name, tb_logger) - - def _log_validation_metric_values(self, current_iter, dataset_name, tb_logger): - log_str = f'Validation {dataset_name}\n' - for metric, value in self.metric_results.items(): - log_str += f'\t # {metric}: {value:.4f}' - if hasattr(self, 'best_metric_results'): - log_str += (f'\tBest: {self.best_metric_results[dataset_name][metric]["val"]:.4f} @ ' - f'{self.best_metric_results[dataset_name][metric]["iter"]} iter') - log_str += '\n' - - logger = get_root_logger() - logger.info(log_str) - if tb_logger: - for metric, value in self.metric_results.items(): - tb_logger.add_scalar(f'metrics/{dataset_name}/{metric}', value, current_iter) - - def get_current_visuals(self): - out_dict = OrderedDict() - out_dict['lq'] = self.lq.detach().cpu() - out_dict['result'] = self.output.detach().cpu() - if hasattr(self, 'gt'): - out_dict['gt'] = self.gt.detach().cpu() - return out_dict - - def save(self, epoch, current_iter): - if hasattr(self, 'net_g_ema'): - self.save_network([self.net_g, self.net_g_ema], 'net_g', current_iter, param_key=['params', 'params_ema']) - else: - self.save_network(self.net_g, 'net_g', current_iter) - self.save_training_state(epoch, current_iter) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/modules/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/language_pair_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/language_pair_dataset.py deleted file mode 100644 index ff3e14bf14770638524ef6067b558e455dbe5f2b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/language_pair_dataset.py +++ /dev/null @@ -1,471 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging - -import numpy as np -import torch -from fairseq.data import FairseqDataset, data_utils - - -logger = logging.getLogger(__name__) - - -def collate( - samples, - pad_idx, - eos_idx, - left_pad_source=True, - left_pad_target=False, - input_feeding=True, - pad_to_length=None, - pad_to_multiple=1, -): - if len(samples) == 0: - return {} - - def merge(key, left_pad, move_eos_to_beginning=False, pad_to_length=None): - return data_utils.collate_tokens( - [s[key] for s in samples], - pad_idx, - eos_idx, - left_pad, - move_eos_to_beginning, - pad_to_length=pad_to_length, - pad_to_multiple=pad_to_multiple, - ) - - def check_alignment(alignment, src_len, tgt_len): - if alignment is None or len(alignment) == 0: - return False - if ( - alignment[:, 0].max().item() >= src_len - 1 - or alignment[:, 1].max().item() >= tgt_len - 1 - ): - logger.warning("alignment size mismatch found, skipping alignment!") - return False - return True - - def compute_alignment_weights(alignments): - """ - Given a tensor of shape [:, 2] containing the source-target indices - corresponding to the alignments, a weight vector containing the - inverse frequency of each target index is computed. - For e.g. if alignments = [[5, 7], [2, 3], [1, 3], [4, 2]], then - a tensor containing [1., 0.5, 0.5, 1] should be returned (since target - index 3 is repeated twice) - """ - align_tgt = alignments[:, 1] - _, align_tgt_i, align_tgt_c = torch.unique( - align_tgt, return_inverse=True, return_counts=True - ) - align_weights = align_tgt_c[align_tgt_i[np.arange(len(align_tgt))]] - return 1.0 / align_weights.float() - - id = torch.LongTensor([s["id"] for s in samples]) - src_tokens = merge( - "source", - left_pad=left_pad_source, - pad_to_length=pad_to_length["source"] if pad_to_length is not None else None, - ) - # sort by descending source length - src_lengths = torch.LongTensor( - [s["source"].ne(pad_idx).long().sum() for s in samples] - ) - src_lengths, sort_order = src_lengths.sort(descending=True) - id = id.index_select(0, sort_order) - src_tokens = src_tokens.index_select(0, sort_order) - - prev_output_tokens = None - target = None - if samples[0].get("target", None) is not None: - target = merge( - "target", - left_pad=left_pad_target, - pad_to_length=pad_to_length["target"] - if pad_to_length is not None - else None, - ) - target = target.index_select(0, sort_order) - tgt_lengths = torch.LongTensor( - [s["target"].ne(pad_idx).long().sum() for s in samples] - ).index_select(0, sort_order) - ntokens = tgt_lengths.sum().item() - - if samples[0].get("prev_output_tokens", None) is not None: - prev_output_tokens = merge("prev_output_tokens", left_pad=left_pad_target) - elif input_feeding: - # we create a shifted version of targets for feeding the - # previous output token(s) into the next decoder step - prev_output_tokens = merge( - "target", - left_pad=left_pad_target, - move_eos_to_beginning=True, - pad_to_length=pad_to_length["target"] - if pad_to_length is not None - else None, - ) - else: - ntokens = src_lengths.sum().item() - - batch = { - "id": id, - "nsentences": len(samples), - "ntokens": ntokens, - "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths,}, - "target": target, - } - if prev_output_tokens is not None: - batch["net_input"]["prev_output_tokens"] = prev_output_tokens.index_select( - 0, sort_order - ) - - if samples[0].get("alignment", None) is not None: - bsz, tgt_sz = batch["target"].shape - src_sz = batch["net_input"]["src_tokens"].shape[1] - - offsets = torch.zeros((len(sort_order), 2), 
dtype=torch.long) - offsets[:, 1] += torch.arange(len(sort_order), dtype=torch.long) * tgt_sz - if left_pad_source: - offsets[:, 0] += src_sz - src_lengths - if left_pad_target: - offsets[:, 1] += tgt_sz - tgt_lengths - - alignments = [ - alignment + offset - for align_idx, offset, src_len, tgt_len in zip( - sort_order, offsets, src_lengths, tgt_lengths - ) - for alignment in [samples[align_idx]["alignment"].view(-1, 2)] - if check_alignment(alignment, src_len, tgt_len) - ] - - if len(alignments) > 0: - alignments = torch.cat(alignments, dim=0) - align_weights = compute_alignment_weights(alignments) - - batch["alignments"] = alignments - batch["align_weights"] = align_weights - - if samples[0].get("constraints", None) is not None: - # Collate the packed constraints across the samples, padding to - # the length of the longest sample. - lens = [sample.get("constraints").size(0) for sample in samples] - max_len = max(lens) - constraints = torch.zeros((len(samples), max(lens))).long() - for i, sample in enumerate(samples): - constraints[i, 0 : lens[i]] = samples[i].get("constraints") - batch["constraints"] = constraints.index_select(0, sort_order) - - return batch - - -class LanguagePairDataset(FairseqDataset): - """ - A pair of torch.utils.data.Datasets. - - Args: - src (torch.utils.data.Dataset): source dataset to wrap - src_sizes (List[int]): source sentence lengths - src_dict (~fairseq.data.Dictionary): source vocabulary - tgt (torch.utils.data.Dataset, optional): target dataset to wrap - tgt_sizes (List[int], optional): target sentence lengths - tgt_dict (~fairseq.data.Dictionary, optional): target vocabulary - left_pad_source (bool, optional): pad source tensors on the left side - (default: True). - left_pad_target (bool, optional): pad target tensors on the left side - (default: False). - shuffle (bool, optional): shuffle dataset elements before batching - (default: True). - input_feeding (bool, optional): create a shifted version of the targets - to be passed into the model for teacher forcing (default: True). - remove_eos_from_source (bool, optional): if set, removes eos from end - of source if it's present (default: False). - append_eos_to_target (bool, optional): if set, appends eos to end of - target if it's absent (default: False). - align_dataset (torch.utils.data.Dataset, optional): dataset - containing alignments. - constraints (Tensor, optional): 2d tensor with a concatenated, zero- - delimited list of constraints for each sentence. - append_bos (bool, optional): if set, appends bos to the beginning of - source/target sentence. - num_buckets (int, optional): if set to a value greater than 0, then - batches will be bucketed into the given number of batch shapes. - src_lang_id (int, optional): source language ID, if set, the collated batch - will contain a field 'src_lang_id' in 'net_input' which indicates the - source language of the samples. - tgt_lang_id (int, optional): target language ID, if set, the collated batch - will contain a field 'tgt_lang_id' which indicates the target language - of the samples. 
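    Example (a minimal sketch; ``src_data``/``tgt_data`` stand for any token
    datasets of equal length, with sizes and dictionaries built elsewhere):

        pairs = LanguagePairDataset(
            src_data, src_sizes, src_dict,
            tgt=tgt_data, tgt_sizes=tgt_sizes, tgt_dict=tgt_dict,
        )
        batch = pairs.collater([pairs[i] for i in range(8)])
        src_tokens = batch["net_input"]["src_tokens"]  # (bsz, src_len), left-padded by default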
- """ - - def __init__( - self, - src, - src_sizes, - src_dict, - tgt=None, - tgt_sizes=None, - tgt_dict=None, - left_pad_source=True, - left_pad_target=False, - shuffle=True, - input_feeding=True, - remove_eos_from_source=False, - append_eos_to_target=False, - align_dataset=None, - constraints=None, - append_bos=False, - eos=None, - num_buckets=0, - src_lang_id=None, - tgt_lang_id=None, - pad_to_multiple=1, - ): - if tgt_dict is not None: - assert src_dict.pad() == tgt_dict.pad() - assert src_dict.eos() == tgt_dict.eos() - assert src_dict.unk() == tgt_dict.unk() - if tgt is not None: - assert len(src) == len( - tgt - ), "Source and target must contain the same number of examples" - self.src = src - self.tgt = tgt - self.src_sizes = np.array(src_sizes) - self.tgt_sizes = np.array(tgt_sizes) if tgt_sizes is not None else None - self.sizes = ( - np.vstack((self.src_sizes, self.tgt_sizes)).T - if self.tgt_sizes is not None - else self.src_sizes - ) - self.src_dict = src_dict - self.tgt_dict = tgt_dict - self.left_pad_source = left_pad_source - self.left_pad_target = left_pad_target - self.shuffle = shuffle - self.input_feeding = input_feeding - self.remove_eos_from_source = remove_eos_from_source - self.append_eos_to_target = append_eos_to_target - self.align_dataset = align_dataset - if self.align_dataset is not None: - assert ( - self.tgt_sizes is not None - ), "Both source and target needed when alignments are provided" - self.constraints = constraints - self.append_bos = append_bos - self.eos = eos if eos is not None else src_dict.eos() - self.src_lang_id = src_lang_id - self.tgt_lang_id = tgt_lang_id - if num_buckets > 0: - from fairseq.data import BucketPadLengthDataset - - self.src = BucketPadLengthDataset( - self.src, - sizes=self.src_sizes, - num_buckets=num_buckets, - pad_idx=self.src_dict.pad(), - left_pad=self.left_pad_source, - ) - self.src_sizes = self.src.sizes - logger.info("bucketing source lengths: {}".format(list(self.src.buckets))) - if self.tgt is not None: - self.tgt = BucketPadLengthDataset( - self.tgt, - sizes=self.tgt_sizes, - num_buckets=num_buckets, - pad_idx=self.tgt_dict.pad(), - left_pad=self.left_pad_target, - ) - self.tgt_sizes = self.tgt.sizes - logger.info( - "bucketing target lengths: {}".format(list(self.tgt.buckets)) - ) - - # determine bucket sizes using self.num_tokens, which will return - # the padded lengths (thanks to BucketPadLengthDataset) - num_tokens = np.vectorize(self.num_tokens, otypes=[np.compat.long]) - self.bucketed_num_tokens = num_tokens(np.arange(len(self.src))) - self.buckets = [ - (None, num_tokens) for num_tokens in np.unique(self.bucketed_num_tokens) - ] - else: - self.buckets = None - self.pad_to_multiple = pad_to_multiple - - def get_batch_shapes(self): - return self.buckets - - def __getitem__(self, index): - tgt_item = self.tgt[index] if self.tgt is not None else None - src_item = self.src[index] - # Append EOS to end of tgt sentence if it does not have an EOS and remove - # EOS from end of src sentence if it exists. 
This is useful when we use - # use existing datasets for opposite directions i.e., when we want to - # use tgt_dataset as src_dataset and vice versa - if self.append_eos_to_target: - eos = self.tgt_dict.eos() if self.tgt_dict else self.src_dict.eos() - if self.tgt and self.tgt[index][-1] != eos: - tgt_item = torch.cat([self.tgt[index], torch.LongTensor([eos])]) - - if self.append_bos: - bos = self.tgt_dict.bos() if self.tgt_dict else self.src_dict.bos() - if self.tgt and self.tgt[index][0] != bos: - tgt_item = torch.cat([torch.LongTensor([bos]), self.tgt[index]]) - - bos = self.src_dict.bos() - if self.src[index][0] != bos: - src_item = torch.cat([torch.LongTensor([bos]), self.src[index]]) - - if self.remove_eos_from_source: - eos = self.src_dict.eos() - if self.src[index][-1] == eos: - src_item = self.src[index][:-1] - - example = { - "id": index, - "source": src_item, - "target": tgt_item, - } - if self.align_dataset is not None: - example["alignment"] = self.align_dataset[index] - if self.constraints is not None: - example["constraints"] = self.constraints[index] - return example - - def __len__(self): - return len(self.src) - - def collater(self, samples, pad_to_length=None): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[dict]): samples to collate - pad_to_length (dict, optional): a dictionary of - {'source': source_pad_to_length, 'target': target_pad_to_length} - to indicate the max length to pad to in source and target respectively. - - Returns: - dict: a mini-batch with the following keys: - - - `id` (LongTensor): example IDs in the original input order - - `ntokens` (int): total number of tokens in the batch - - `net_input` (dict): the input to the Model, containing keys: - - - `src_tokens` (LongTensor): a padded 2D Tensor of tokens in - the source sentence of shape `(bsz, src_len)`. Padding will - appear on the left if *left_pad_source* is ``True``. - - `src_lengths` (LongTensor): 1D Tensor of the unpadded - lengths of each source sentence of shape `(bsz)` - - `prev_output_tokens` (LongTensor): a padded 2D Tensor of - tokens in the target sentence, shifted right by one - position for teacher forcing, of shape `(bsz, tgt_len)`. - This key will not be present if *input_feeding* is - ``False``. Padding will appear on the left if - *left_pad_target* is ``True``. - - `src_lang_id` (LongTensor): a long Tensor which contains source - language IDs of each sample in the batch - - - `target` (LongTensor): a padded 2D Tensor of tokens in the - target sentence of shape `(bsz, tgt_len)`. Padding will appear - on the left if *left_pad_target* is ``True``. - - `tgt_lang_id` (LongTensor): a long Tensor which contains target language - IDs of each sample in the batch - """ - res = collate( - samples, - pad_idx=self.src_dict.pad(), - eos_idx=self.eos, - left_pad_source=self.left_pad_source, - left_pad_target=self.left_pad_target, - input_feeding=self.input_feeding, - pad_to_length=pad_to_length, - pad_to_multiple=self.pad_to_multiple, - ) - if self.src_lang_id is not None or self.tgt_lang_id is not None: - src_tokens = res["net_input"]["src_tokens"] - bsz = src_tokens.size(0) - if self.src_lang_id is not None: - res["net_input"]["src_lang_id"] = ( - torch.LongTensor([[self.src_lang_id]]).expand(bsz, 1).to(src_tokens) - ) - if self.tgt_lang_id is not None: - res["tgt_lang_id"] = ( - torch.LongTensor([[self.tgt_lang_id]]).expand(bsz, 1).to(src_tokens) - ) - return res - - def num_tokens(self, index): - """Return the number of tokens in a sample. 
This value is used to - enforce ``--max-tokens`` during batching.""" - return max( - self.src_sizes[index], - self.tgt_sizes[index] if self.tgt_sizes is not None else 0, - ) - - def num_tokens_vec(self, indices): - """Return the number of tokens for a set of positions defined by indices. - This value is used to enforce ``--max-tokens`` during batching.""" - sizes = self.src_sizes[indices] - if self.tgt_sizes is not None: - sizes = np.maximum(sizes, self.tgt_sizes[indices]) - return sizes - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - return ( - self.src_sizes[index], - self.tgt_sizes[index] if self.tgt_sizes is not None else 0, - ) - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - if self.shuffle: - indices = np.random.permutation(len(self)).astype(np.int64) - else: - indices = np.arange(len(self), dtype=np.int64) - if self.buckets is None: - # sort by target length, then source length - if self.tgt_sizes is not None: - indices = indices[np.argsort(self.tgt_sizes[indices], kind="mergesort")] - return indices[np.argsort(self.src_sizes[indices], kind="mergesort")] - else: - # sort by bucketed_num_tokens, which is: - # max(padded_src_len, padded_tgt_len) - return indices[ - np.argsort(self.bucketed_num_tokens[indices], kind="mergesort") - ] - - @property - def supports_prefetch(self): - return getattr(self.src, "supports_prefetch", False) and ( - getattr(self.tgt, "supports_prefetch", False) or self.tgt is None - ) - - def prefetch(self, indices): - self.src.prefetch(indices) - if self.tgt is not None: - self.tgt.prefetch(indices) - if self.align_dataset is not None: - self.align_dataset.prefetch(indices) - - def filter_indices_by_size(self, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - return data_utils.filter_paired_dataset_indices_by_size( - self.src_sizes, self.tgt_sizes, indices, max_sizes, - ) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/ema/ema.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/ema/ema.py deleted file mode 100644 index 010b60ba2fd766340d2c5b8ba96f9e57c6fe25b5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/ema/ema.py +++ /dev/null @@ -1,200 +0,0 @@ -#!/usr/bin/env python3 - -""" -This module has the EMA class used to store a copy of the exponentially decayed -model params. - -Typical usage of EMA class involves initializing an object using an existing -model (random or from a seed model) and setting the config like ema_decay, -ema_start_update which determine how the EMA model is updated. After every -update of the model i.e. at the end of the train_step, the EMA should be updated -by passing the new model to the EMA.step function. The EMA model state dict -can be stored in the extra state under the key of "ema" and dumped -into a checkpoint and loaded. The EMA object can be passed to tasks -by setting task.uses_ema property. 
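A minimal usage sketch (hedged: ``cfg`` stands for an EMAConfig-style object
carrying ema_decay / ema_start_update / ema_fp32, and ``train_step``,
``max_updates`` and the loop itself are placeholders for the caller's code):

    ema = EMA(model, cfg)              # keeps a frozen, decayed copy of the params
    for num_updates in range(max_updates):
        train_step(model)              # regular optimizer update (placeholder)
        ema.step(model, num_updates)   # ema = decay*ema + (1-decay)*param;
                                       # decay is held at 0 before ema_start_update
    averaged = ema.get_model()         # the EMA copy, e.g. for eval or checkpointing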
-EMA is a smoothed/ensemble model which might have better performance -when used for inference or further fine-tuning. EMA class has a -reverse function to load the EMA params into a model and use it -like a regular model. -""" - -import copy -import logging - -import torch -from fairseq import checkpoint_utils - - -class EMA(object): - """Exponential Moving Average of Fairseq Models - EMA keeps a copy of the exponentially decayed model params. - The set of params should include both gradient-descent and - non-gradient descent params, such as batch mean/var and buffers. - This is a modified implementation of - the open source code in https://github.com/zhawe01/fairseq-gec.git, - and internal source code in - fbcode/mobile-vision/projects/classification_pytorch/lib/utils/model_ema.py. - - Similar to TF EMA. - https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage. - EMA provides a averaged and smoothed set of model weights, and has been shown to - improve vision models. EMA class does all necessary functions to update, reload, - or init EMA methods. - - EMA object is initialized from an arbitrary model. By default, it is stored in - the same device (unless device specified at initialization) and with the - same precision as the model (unless ema_fp32 is True). ema_fp32 is recommended. - This stores the EMA parameters in fp32 only for the EMA update step, and - is used at the default precision otherwise. - EMA is usually enabled using EMAConfig with store_ema=True. Some important - parameters to configure EMA are - 1) ema_decay - The decay of EMA - 2) ema_update_freq - EMA is updated every this many model updates. - 3) ema_start_update - Start EMA update after this many model updates [default 0] - - Key methods: - 1) step - One update of EMA using new model - 2) restore - Update EMA from a state dict - 3) reverse - Load EMA into a model - 4) get_decay, _set_decay - Used to get or set the decay. Note _set_decay is - called from step. - 5) build_fp32_params - Used to initialize or update the fp32 copy of EMA params. - Note this is enabled only when ema_fp32=True - """ - - def __init__(self, model, config, device=None): - """ - @param model model to initialize the EMA with - @param config EMAConfig object with configuration like - ema_decay, ema_update_freq, ema_fp32 - @param device If provided, copy EMA to this device (e.g. gpu). - Otherwise EMA is in the same device as the model. - """ - - self.decay = config.ema_decay - self.model = copy.deepcopy(model) - self.model.requires_grad_(False) - self.config = config - self.fp32_params = {} - - if self.config.ema_seed_model is not None: - state = checkpoint_utils.load_ema_from_checkpoint(self.config.ema_seed_model) - self.model.load_state_dict(state["model"], strict=True) - - if device is not None: - logging.info(f"Copying EMA model to device {device}") - self.model = self.model.to(device=device) - - if self.config.ema_fp32: - self.build_fp32_params() - - self.update_freq_counter = 0 - - def get_model(self): - return self.model - - def build_fp32_params(self, state_dict=None): - """ - Store a copy of the EMA params in fp32. - If state dict is passed, the EMA params is copied from - the provided state dict. Otherwise, it is copied from the - current EMA model parameters. - """ - if not self.config.ema_fp32: - raise RuntimeError( - "build_fp32_params should not be called if ema_fp32=False. " - "Use ema_fp32=True if this is really intended." 
- ) - - if state_dict is None: - state_dict = self.model.state_dict() - - def _to_float(t): - return t.float() if torch.is_floating_point(t) else t - - # for non-float params (like registered symbols), they are copied into this dict and covered in each update - for param_key in state_dict: - if param_key in self.fp32_params: - self.fp32_params[param_key].copy_(state_dict[param_key]) - else: - self.fp32_params[param_key] = _to_float(state_dict[param_key]) - - def restore(self, state_dict, build_fp32_params=False): - """ Load data from a model spec into EMA model """ - self.model.load_state_dict(state_dict, strict=False) - if build_fp32_params: - self.build_fp32_params(state_dict) - - def _set_decay(self, decay): - self.decay = decay - - def get_decay(self): - return self.decay - - def _step_internal(self, new_model, updates=None): - """ One update of the EMA model based on new model weights """ - decay = self.decay - - ema_state_dict = {} - ema_params = self.fp32_params if self.config.ema_fp32 else self.model.state_dict() - for key, param in new_model.state_dict().items(): - try: - ema_param = ema_params[key] - except KeyError: - ema_param = param.float().clone() if param.ndim == 1 else copy.deepcopy(param) - - if param.shape != ema_param.shape: - raise ValueError( - "incompatible tensor shapes between model param and ema param" - + "{} vs. {}".format(param.shape, ema_param.shape) - ) - if "version" in key: - # Do not decay a model.version pytorch param - continue - - # for non-float params (like registered symbols), they are covered in each update - if not torch.is_floating_point(ema_param): - if ema_param.dtype != param.dtype: - raise ValueError( - "incompatible tensor dtypes between model param and ema param" - + "{} vs. {}".format(param.dtype, ema_param.dtype) - ) - ema_param.copy_(param) - else: - ema_param.mul_(decay) - ema_param.add_(param.to(dtype=ema_param.dtype), alpha=1-decay) - ema_state_dict[key] = ema_param - self.restore(ema_state_dict, build_fp32_params=False) - - def step(self, new_model, updates=None): - """ - One update of EMA which is done every self.config.ema_update_freq - updates of the model. - - @param updates The current number of model updates done. - Decay is set of 0 if model updates < ema_start_update, which means - the model will be simply copied over to the EMA. - When model updates >= ema_start_updates, then EMA is updated with - a decay of self.config.ema_decay. - """ - self._set_decay( - 0 - if updates is not None - and updates < self.config.ema_start_update - else self.config.ema_decay - ) - if updates is not None and self.config.ema_update_freq > 1: - self.update_freq_counter += 1 - if self.update_freq_counter >= self.config.ema_update_freq: - self._step_internal(new_model, updates) - self.update_freq_counter = 0 - else: - self._step_internal(new_model, updates) - - def reverse(self, model): - """ - Load the model parameters from EMA model. - Useful for inference or fine-tuning from the EMA model. - """ - model.load_state_dict(self.model.state_dict(), strict=False) - return model diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/sparse_multihead_attention.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/sparse_multihead_attention.py deleted file mode 100644 index 3cbd9d6785886e319aab0601517e27df733b6f97..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/sparse_multihead_attention.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch - -from .multihead_attention import MultiheadAttention - - -class SparseMultiheadAttention(MultiheadAttention): - """Sparse Multi-Headed Attention. - - "Generating Long Sequences with Sparse Transformers". Implements - fixed factorized self attention, where l=stride and c=expressivity. - A(1) includes all words in the stride window and A(2) takes a summary of c - words from the end of each stride window. - If is_bidirectional=False, we do not include any words past the current word, - as in the paper. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - stride=32, - expressivity=8, - is_bidirectional=True, - ): - - super().__init__( - embed_dim, - num_heads, - kdim, - vdim, - dropout, - bias, - add_bias_kv, - add_zero_attn, - self_attention, - encoder_decoder_attention, - ) - - self.is_bidirectional = is_bidirectional - self.stride = stride - self.expressivity = expressivity - assert self.stride > 0 and self.stride >= self.expressivity - - # Used for Ai(2) calculations - beginning of [l-c, l] range - def compute_checkpoint(self, word_index): - if word_index % self.stride == 0 and word_index != 0: - checkpoint_index = word_index - self.expressivity - else: - checkpoint_index = ( - math.floor(word_index / self.stride) * self.stride - + self.stride - - self.expressivity - ) - return checkpoint_index - - # Computes Ai(2) - def compute_subset_summaries(self, absolute_max): - checkpoint_index = self.compute_checkpoint(0) - subset_two = set() - while checkpoint_index <= absolute_max - 1: - summary = set( - range( - checkpoint_index, - min(checkpoint_index + self.expressivity + 1, absolute_max), - ) - ) - subset_two = subset_two.union(summary) - checkpoint_index = self.compute_checkpoint(checkpoint_index + self.stride) - return subset_two - - # Sparse Transformer Fixed Attention Pattern: https://arxiv.org/pdf/1904.10509.pdf - def compute_fixed_attention_subset(self, word_index, tgt_len): - # +1s account for range function; [min, max) -> [min, max] - if not self.is_bidirectional: - absolute_max = word_index + 1 - else: - absolute_max = tgt_len - - # Subset 1 - whole window - rounded_index = ( - math.floor((word_index + self.stride) / self.stride) * self.stride - ) - if word_index % self.stride == 0 and word_index != 0: - subset_one = set( - range(word_index - self.stride, min(absolute_max, word_index + 1)) - ) - else: - subset_one = set( - range( - max(0, rounded_index - self.stride), - min(absolute_max, rounded_index + 1), - ) - ) - - # Subset 2 - summary per window - # If bidirectional, subset 2 is the same for every index - subset_two = set() - if not self.is_bidirectional: - subset_two = self.compute_subset_summaries(absolute_max) - - return subset_one.union(subset_two) - - # Compute sparse mask - if bidirectional, can pre-compute and store - def buffered_sparse_mask(self, tensor, tgt_len, src_len): - assert tgt_len > self.stride - sparse_mask = torch.empty((tgt_len, src_len)).float().fill_(float("-inf")) - - # If bidirectional, subset 2 is the same for every index - subset_summaries = set() - if self.is_bidirectional: - subset_summaries = self.compute_subset_summaries(tgt_len) - - for i in range(tgt_len): - fixed_attention_subset = 
self.compute_fixed_attention_subset(i, tgt_len) - fixed_attention_subset = fixed_attention_subset.union(subset_summaries) - included_word_indices = torch.LongTensor(list(fixed_attention_subset)) - sparse_mask[i].index_fill_(0, included_word_indices, 0) - return sparse_mask.type_as(tensor) - - def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz): - sparse_mask = self.buffered_sparse_mask(attn_weights, tgt_len, src_len) - sparse_mask = sparse_mask.unsqueeze(0).expand( - bsz * self.num_heads, tgt_len, src_len - ) - attn_weights += sparse_mask diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/shard.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/shard.py deleted file mode 100644 index 9d7f2eb9e5de6086fe2435d432bde7521ebb8155..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/shard.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Any, Dict - -from fairseq.distributed import utils - - -try: - from fairscale.optim import OSS - - _has_fairscale = True -except ImportError: - _has_fairscale = False - - -def shard_(optimizer, group): - if not _has_fairscale: - raise ImportError( - "\n\nPlease install the fairscale package:" "\n\n pip install fairscale" - ) - - class FairseqOSS(OSS): - @property - def disable_mem_eff_fp16_loading_hack(self): - return True - - def __getattr__(self, name): - if name.startswith("supports") and hasattr(self.optim, name): - return getattr(self.optim, name) - raise AttributeError( - "'FairseqOSS' object has no attribute {0!r}".format(name) - ) - - def broadcast_global_state_dict( - self, state_dict: Dict[str, Any] - ) -> Dict[str, Any]: - """ - Broadcasts the entire state_dict to all other ranks - each rank is responsible to load their own partition of data - """ - return utils.broadcast_object( - state_dict, - src_rank=0, - group=self.group, - ) - - torch_optimizer = optimizer.optimizer - optim_cls = type(torch_optimizer) - - optimizer.optimizer = FairseqOSS( - torch_optimizer.param_groups, - optim_cls, - group=group, - **optimizer.optimizer_config - ) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/models/ofa/resnet.py b/spaces/OFA-Sys/OFA-Generic_Interface/models/ofa/resnet.py deleted file mode 100644 index 9ad8ee87de4bb579d745ab8302a368ca1749a1fe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/models/ofa/resnet.py +++ /dev/null @@ -1,225 +0,0 @@ -import torch -import torch.nn as nn - - -def drop_path(x, drop_prob: float = 0., training: bool = False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - This is the same as the DropConnect impl I created for EfficientNet, etc networks, however, - the original name is misleading as 'Drop Connect' is a.sh different form of dropout in a.sh separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for - changing the layer and argument names to 'drop path' rather than mix DropConnect as a.sh layer name and use - 'survival rate' as the argument. - """ - if drop_prob == 0. 
or not training: - return x - keep_prob = 1 - drop_prob - shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None): - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - assert False - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) - # while original implementation places the stride at the first 1x1 convolution(self.conv1) - # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. - # This variant is also known as ResNet V1.5 and improves accuracy according to - # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch. 
- - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None, drop_path_rate=0.0): - super(Bottleneck, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 64.)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity() - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out = identity + self.drop_path(out) - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, layers, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, - norm_layer=None, drop_path_rate=0.0): - super(ResNet, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(Bottleneck, 64, layers[0], drop_path_rate=drop_path_rate) - self.layer2 = self._make_layer(Bottleneck, 128, layers[1], stride=2, - dilate=replace_stride_with_dilation[0], drop_path_rate=drop_path_rate) - self.layer3 = self._make_layer(Bottleneck, 256, layers[2], stride=2, - dilate=replace_stride_with_dilation[1], drop_path_rate=drop_path_rate) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.SyncBatchNorm, nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - nn.init.constant_(m.bn3.weight, 0) - elif isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False, drop_path_rate=0.0): - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation, norm_layer)) - self.inplanes = planes * block.expansion - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, blocks)] - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=self.groups, - base_width=self.base_width, dilation=self.dilation, - norm_layer=norm_layer, drop_path_rate=dpr[i])) - - return nn.Sequential(*layers) - - def _forward_impl(self, x): - # See note [TorchScript super()] - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - - return x - - def forward(self, x): - return self._forward_impl(x) \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/models/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/models/__init__.py deleted file mode 100644 index 3532479e52a0e1f1ba204c6f5d51c71c98ee5df0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/model_parallel/models/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the models/ directory -models_dir = os.path.dirname(__file__) -for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("fairseq.model_parallel.models." + model_name) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/m2m_100/tokenizers/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/m2m_100/tokenizers/README.md deleted file mode 100644 index e116932bc80572f221cff6472a7b1eea7032925d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/m2m_100/tokenizers/README.md +++ /dev/null @@ -1,18 +0,0 @@ -# M2M-100 Tokenization - -We apply different tokenization strategies for different languages following the existing literature. Here we provide tok.sh a tokenizer that can be used to reproduce our results. - -To reproduce the results, follow these steps: - -``` -tgt_lang=... -reference_translation=... 
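# (placeholders above: tgt_lang is the target-language code and
#  reference_translation is the path to the reference file; set both first)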
-cat generation_output | grep -P "^H" | sort -V | cut -f 3- | sh tok.sh $tgt_lang > hyp -cat $reference_translation |sh tok.sh $tgt_lang > ref -sacrebleu -tok 'none' ref < hyp -``` - -## Installation - -Tools needed for all the languages except Arabic can be installed by running install_dependencies.sh -If you want to evaluate Arabic models, please follow the instructions provided here: http://alt.qcri.org/tools/arabic-normalizer/ to install diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/pq/pq.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/pq/pq.py deleted file mode 100644 index eddc2eb34602403f10979f54cd23a45bc2f104d5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/pq/pq.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .em import EM, EmptyClusterResolveError - - -class PQ(EM): - """ - Quantizes the layer weights W with the standard Product Quantization - technique. This learns a codebook of codewords or centroids of size - block_size from W. For further reference on using PQ to quantize - neural networks, see "And the Bit Goes Down: Revisiting the Quantization - of Neural Networks", Stock et al., ICLR 2020. - - PQ is performed in two steps: - (1) The matrix W (weights or fully-connected or convolutional layer) - is reshaped to (block_size, -1). - - If W is fully-connected (2D), its columns are split into - blocks of size block_size. - - If W is convolutional (4D), its filters are split along the - spatial dimension. - (2) We apply the standard EM/k-means algorithm to the resulting reshaped matrix. - - Args: - - W: weight matrix to quantize of size (in_features x out_features) - - block_size: size of the blocks (subvectors) - - n_centroids: number of centroids - - n_iter: number of k-means iterations - - eps: for cluster reassignment when an empty cluster is found - - max_tentatives for cluster reassignment when an empty cluster is found - - verbose: print information after each iteration - - Remarks: - - block_size be compatible with the shape of W - """ - - def __init__( - self, - W, - block_size, - n_centroids=256, - n_iter=20, - eps=1e-6, - max_tentatives=30, - verbose=True, - ): - self.block_size = block_size - W_reshaped = self._reshape(W) - super(PQ, self).__init__( - W_reshaped, - n_centroids=n_centroids, - n_iter=n_iter, - eps=eps, - max_tentatives=max_tentatives, - verbose=verbose, - ) - - def _reshape(self, W): - """ - Reshapes the matrix W as expained in step (1). 
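        For example (a small sketch): a fully-connected weight of shape
        (out_features=4, in_features=6) with block_size=3 goes
        (4, 6) -> reshape -> (4, 2, 3) -> permute -> (3, 2, 4) -> flatten -> (3, 8),
        so each of the 8 columns is one length-3 subvector for the EM/k-means step.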
- """ - - # fully connected: by convention the weight has size out_features x in_features - if len(W.size()) == 2: - self.out_features, self.in_features = W.size() - assert ( - self.in_features % self.block_size == 0 - ), "Linear: n_blocks must be a multiple of in_features" - return ( - W.reshape(self.out_features, -1, self.block_size) - .permute(2, 1, 0) - .flatten(1, 2) - ) - - # convolutional: we reshape along the spatial dimension - elif len(W.size()) == 4: - self.out_channels, self.in_channels, self.k_h, self.k_w = W.size() - assert ( - self.in_channels * self.k_h * self.k_w - ) % self.block_size == 0, ( - "Conv2d: n_blocks must be a multiple of in_channels * k_h * k_w" - ) - return ( - W.reshape(self.out_channels, -1, self.block_size) - .permute(2, 1, 0) - .flatten(1, 2) - ) - # not implemented - else: - raise NotImplementedError(W.size()) - - def encode(self): - """ - Performs self.n_iter EM steps. - """ - - self.initialize_centroids() - for i in range(self.n_iter): - try: - self.step(i) - except EmptyClusterResolveError: - break - - def decode(self): - """ - Returns the encoded full weight matrix. Must be called after - the encode function. - """ - - # fully connected case - if "k_h" not in self.__dict__: - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_features, self.block_size) - .permute(1, 0, 2) - .flatten(1, 2) - ) - - # convolutional case - else: - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_channels, self.block_size) - .permute(1, 0, 2) - .reshape(self.out_channels, self.in_channels, self.k_h, self.k_w) - ) diff --git a/spaces/ORI-Muchim/MarinTTS/models.py b/spaces/ORI-Muchim/MarinTTS/models.py deleted file mode 100644 index fe004e94bbe9074ec736f14325268f4515a53420..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/MarinTTS/models.py +++ /dev/null @@ -1,540 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = 
nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - if self.n_vocab != 0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - if self.n_vocab != 0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - 
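-
-# HiFi-GAN-style decoder: each stage upsamples the latent with a transposed convolution and then
-# averages a bank of residual blocks with different kernel sizes, so the product of upsample_rates
-# sets how many waveform samples are generated per latent frame.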
-class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - 
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 1: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, 
g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 1: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 1, "n_speakers have to be larger than 1." 
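- # Voice conversion reuses the normalizing flow as a bridge between speakers: the utterance is
- # encoded with the source speaker embedding, mapped into the prior space by the flow, mapped back
- # with the target embedding (reverse=True), and decoded conditioned on the target speaker.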
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) diff --git a/spaces/OdiaGenAI/Olive_Farm/README.md b/spaces/OdiaGenAI/Olive_Farm/README.md deleted file mode 100644 index ab646bb4e2fb246ce9002aacf174c83ec6fb0e0a..0000000000000000000000000000000000000000 --- a/spaces/OdiaGenAI/Olive_Farm/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Olive Farm -emoji: 🐠 -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.27.0 -app_file: app.py -pinned: false -license: cc-by-nc-sa-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/yhd/index.tsx b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/yhd/index.tsx deleted file mode 100644 index 1f7a9c8574d739d958775a3ce916dc2111866f66..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/yhd/index.tsx +++ /dev/null @@ -1,26 +0,0 @@ -import React from 'react'; -import { Theme } from '../interface'; -import { DefaultSoundNames, defaultSounds } from '../default'; - -const imagesUrls = import.meta.glob('./images/*.png', { - import: 'default', - eager: true, -}); - -const yhds = Object.entries(imagesUrls).map(([key, value]) => ({ - name: key.slice(9, -4), - // eslint-disable-next-line @typescript-eslint/ban-ts-comment - // @ts-ignore - content: , -})); - -export const yhdTheme: Theme = { - name: 'YHD', - icons: yhds.map(({ name, content }) => ({ - name, - content, - clickSound: 'button-click', - tripleSound: 'triple', - })), - sounds: defaultSounds, -}; diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/image_dense_captions.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/image_dense_captions.py deleted file mode 100644 index 527a72eb75b75d2d636c9ae4e9d6e04d0346044a..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/image_dense_captions.py +++ /dev/null @@ -1,83 +0,0 @@ -import argparse -import multiprocessing as mp -import os -import time -import cv2 -import tqdm -import sys - -from detectron2.config import get_cfg -from detectron2.data.detection_utils import read_image -from detectron2.utils.logger import setup_logger - -sys.path.insert(0, 'iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/') -from centernet.config import add_centernet_config -from ..grit_src.grit.config import add_grit_config - -from ..grit_src.grit.predictor import VisualizationDemo - - -# constants -WINDOW_NAME = "GRiT" - - -def dense_pred_to_caption(predictions): - boxes = predictions["instances"].pred_boxes if predictions["instances"].has("pred_boxes") else None - object_description = predictions["instances"].pred_object_descriptions.data - new_caption = "" - for i in range(len(object_description)): - new_caption += (object_description[i] + ": " + str([int(a) for a in boxes[i].tensor.cpu().detach().numpy()[0]])) + "; " - return new_caption - -def dense_pred_to_caption_only_name(predictions): - # boxes = predictions["instances"].pred_boxes if predictions["instances"].has("pred_boxes") else None - object_description = predictions["instances"].pred_object_descriptions.data - new_caption = ",".join(object_description) - del predictions - # for i 
in range(len(object_description)): - # new_caption += (object_description[i] + ": " + str([int(a) for a in boxes[i].tensor.cpu().detach().numpy()[0]])) + "; " - return new_caption - -def setup_cfg(args): - cfg = get_cfg() - # if args["cpu"]: - # cfg.MODEL.DEVICE="cpu" - add_centernet_config(cfg) - add_grit_config(cfg) - cfg.merge_from_file(args["config_file"]) - cfg.merge_from_list(args["opts"]) - # Set score_threshold for builtin models - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args["confidence_threshold"] - cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args["confidence_threshold"] - if args["test_task"]: - cfg.MODEL.TEST_TASK = args["test_task"] - cfg.MODEL.BEAM_SIZE = 1 - cfg.MODEL.ROI_HEADS.SOFT_NMS_ENABLED = False - cfg.USE_ACT_CHECKPOINT = False - cfg.freeze() - return cfg - -def get_parser(device): - arg_dict = {'config_file': "iGPT/models/grit_src/configs/GRiT_B_DenseCap_ObjectDet.yaml", 'device': device, 'confidence_threshold': 0.5, 'test_task': 'DenseCap', 'opts': ["MODEL.WEIGHTS", "model_zoo/grit_b_densecap_objectdet.pth"]} - return arg_dict - -def image_caption_api(image_src, device): - args2 = get_parser(device) - cfg = setup_cfg(args2) - demo = VisualizationDemo(cfg) - if image_src: - img = read_image(image_src, format="BGR") - predictions, visualized_output = demo.run_on_image(img) - new_caption = dense_pred_to_caption(predictions) - return new_caption - -def init_demo(device): - args2 = get_parser(device) - cfg = setup_cfg(args2) - demo = VisualizationDemo(cfg) - return demo - -if __name__=="__main__": - import os - os.environ['CUDA_VISIBLE_DEVICES']='7' - print(image_caption_api("images/dancing_example_4.mp4_20230417_135359.263.jpg",'cuda')) \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_events.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_events.py deleted file mode 100644 index c1b03e4d1a703a417a83c2805be1ca15a4e458ed..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_events.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
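-# Unit tests for detectron2's event writers: JSONWriter is expected to emit one JSON record per
-# write() call (every 20 iterations here, including a key logged on a different period), and
-# CommonMetricPrinter should only report an "eta" when it is given a max_iter at construction.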
-import json -import os -import tempfile -import unittest - -from detectron2.utils.events import CommonMetricPrinter, EventStorage, JSONWriter - - -class TestEventWriter(unittest.TestCase): - def testScalar(self): - with tempfile.TemporaryDirectory( - prefix="detectron2_tests" - ) as dir, EventStorage() as storage: - json_file = os.path.join(dir, "test.json") - writer = JSONWriter(json_file) - for k in range(60): - storage.put_scalar("key", k, smoothing_hint=False) - if (k + 1) % 20 == 0: - writer.write() - storage.step() - writer.close() - with open(json_file) as f: - data = [json.loads(l) for l in f] - self.assertTrue([int(k["key"]) for k in data] == [19, 39, 59]) - - def testScalarMismatchedPeriod(self): - with tempfile.TemporaryDirectory( - prefix="detectron2_tests" - ) as dir, EventStorage() as storage: - json_file = os.path.join(dir, "test.json") - - writer = JSONWriter(json_file) - for k in range(60): - if k % 17 == 0: # write in a differnt period - storage.put_scalar("key2", k, smoothing_hint=False) - storage.put_scalar("key", k, smoothing_hint=False) - if (k + 1) % 20 == 0: - writer.write() - storage.step() - writer.close() - with open(json_file) as f: - data = [json.loads(l) for l in f] - self.assertTrue([int(k.get("key2", 0)) for k in data] == [17, 0, 34, 0, 51, 0]) - self.assertTrue([int(k.get("key", 0)) for k in data] == [0, 19, 0, 39, 0, 59]) - self.assertTrue([int(k["iteration"]) for k in data] == [17, 19, 34, 39, 51, 59]) - - def testPrintETA(self): - with EventStorage() as s: - p1 = CommonMetricPrinter(10) - p2 = CommonMetricPrinter() - - s.put_scalar("time", 1.0) - s.step() - s.put_scalar("time", 1.0) - s.step() - - with self.assertLogs("detectron2.utils.events") as logs: - p1.write() - self.assertIn("eta", logs.output[0]) - - with self.assertLogs("detectron2.utils.events") as logs: - p2.write() - self.assertNotIn("eta", logs.output[0]) diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/models/mgpt.py b/spaces/OpenMotionLab/MotionGPT/mGPT/models/mgpt.py deleted file mode 100644 index c8db4a45978020bf712bfae2757b20fc283b13de..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/models/mgpt.py +++ /dev/null @@ -1,494 +0,0 @@ -import numpy as np -import os -import random -import torch -import time -from mGPT.config import instantiate_from_config -from os.path import join as pjoin -from mGPT.losses.mgpt import GPTLosses -from mGPT.models.base import BaseModel -from .base import BaseModel -import json -import mGPT.render.matplot.plot_3d_global as plot_3d - - -class MotionGPT(BaseModel): - """ - Stage 1 Motion Tokenizer - Stage 2 Motion-language pretrian - Stage 3 Motion-language instruction tuning - """ - - def __init__(self, - cfg, - datamodule, - lm, - motion_vae, - codebook_size=512, - stage='vae', - debug=True, - condition='text', - task='t2m', - metrics_dict=['TM2TMetrics'], - **kwargs): - - self.save_hyperparameters(ignore='datamodule', logger=False) - self.datamodule = datamodule - super().__init__() - - # Instantiate motion tokenizer - if motion_vae != None: - self.vae = instantiate_from_config(motion_vae) - - # Instantiate motion-language model - self.lm = instantiate_from_config(lm) - - # Freeze the motion tokenizer for lm training - if 'lm' in self.hparams.stage: - self.vae.training = False - for p in self.vae.parameters(): - p.requires_grad = False - - # Instantiate the losses - self._losses = torch.nn.ModuleDict({ - split: GPTLosses(cfg, self.hparams.stage, self.datamodule.njoints) - for split in ["losses_train", "losses_test", 
"losses_val"] - }) - - # Data transform - self.feats2joints = datamodule.feats2joints - - # Count codebook frequency - self.codePred = [] - self.codeFrequency = torch.zeros((self.hparams.codebook_size, )) - - def forward(self, batch, task="t2m"): - texts = batch["text"] - lengths_ref = batch["length"] - - # Forward - # texts = ['Generate motion: ' + text for text in texts] - outputs, output_texts = self.lm.generate_direct(texts, do_sample=True) - - # Motion Decode - feats_rst_lst = [] - lengths = [] - max_len = 0 - - for i in range(len(texts)): - if task == "pred": - motion = self.vae.decode( - torch.cat((batch["motion"][i], outputs[i]))) - elif task in ["t2m", "m2t", "inbetween"]: - motion = self.vae.decode(outputs[i]) - # motion = self.datamodule.denormalize(motion) - lengths.append(motion.shape[1]) - else: - raise NotImplementedError - - if motion.shape[1] > max_len: - max_len = motion.shape[1] - - if task in ["t2m", "m2t", "pred"]: - feats_rst_lst.append(motion) - - elif task == "inbetween": - motion = torch.cat( - (batch["motion_heading"][i][None], - motion[:, lengths_ref[i] // 4:lengths_ref[i] // 4 * 3, - ...], batch["motion_tailing"][i][None]), - dim=1) - feats_rst_lst.append(motion) - - feats_rst = torch.zeros( - (len(feats_rst_lst), max_len, motion.shape[-1])).to(self.device) - - # padding and concat - for i in range(len(feats_rst_lst)): - feats_rst[i, :feats_rst_lst[i].shape[1], ...] = feats_rst_lst[i] - - # Recover joints for evaluation - joints_rst = self.feats2joints(feats_rst) - - # return set - outputs = { - "texts": output_texts, - "feats": feats_rst, - "joints": joints_rst, - "length": lengths - } - - return outputs - - def train_lm_forward(self, batch): - tokens_ref = batch["motion"] - texts = batch["text"] - lengths = batch["length"] - tasks = batch["tasks"] - all_captions = batch['all_captions'] - if self.hparams.condition == 'caption': - texts = [random.choice(all_captions[i]) for i in range(len(texts))] - - # LLM Forward - outputs = self.lm(texts, tokens_ref, lengths, tasks) - # outputs = self.t2m_gpt.generate(texts) - return {'outputs': outputs} - - @torch.no_grad() - def val_t2m_forward(self, batch): - feats_ref = batch["motion"] - texts = batch["text"] - lengths = batch["length"] - tasks = None - if self.trainer.datamodule.is_mm: - texts = texts * self.hparams.cfg.METRIC.MM_NUM_REPEATS - feats_ref = feats_ref.repeat_interleave( - self.hparams.cfg.METRIC.MM_NUM_REPEATS, dim=0) - lengths = lengths * self.hparams.cfg.METRIC.MM_NUM_REPEATS - instructions = pjoin(self.datamodule.hparams.data_root, - 'template_instructions.json') - instructions = json.load(open(instructions, 'r')) - tasks = [instructions["Text-to-Motion"]["caption"]] * len(texts) - - if self.hparams.condition == 'caption': - tasks = [{ - 'input': [''], - 'output': [''] - }] * len(texts) - - if self.hparams.cfg.DATASET.TASK_PATH: - instructions = pjoin(self.hparams.cfg.DATASET.TASK_PATH) - instructions = json.load(open(instructions, 'r')) - tasks = [instructions["Text-to-Motion"]["t2m"]] * len(texts) - - min_len = lengths.copy() - # Forward - outputs = self.lm.generate_conditional(texts, - lengths=lengths, - stage='test', - tasks=tasks) - - # Motion Decode - feats_rst = torch.zeros_like(feats_ref) - - for i in range(len(texts)): - outputs[i] = torch.clamp(outputs[i], - 0, - self.hparams.codebook_size - 1, - out=None) - - if len(outputs[i]) > 1: - motion = self.vae.decode(outputs[i]) - else: - motion = torch.zeros_like(feats_ref[i:i + 1, ...]) - - min_len[i] = min(motion.shape[1], lengths[i]) - - # Cut 
Motion - feats_rst[i:i + 1, :min_len[i], ...] = motion[:, :lengths[i]] - - # Recover joints for evaluation - joints_ref = self.feats2joints(feats_ref) - joints_rst = self.feats2joints(feats_rst) - - # Renorm for evaluation - feats_ref = self.datamodule.renorm4t2m(feats_ref) - feats_rst = self.datamodule.renorm4t2m(feats_rst) - - # return set - rs_set = { - "m_ref": feats_ref, - "m_rst": feats_rst, - "joints_ref": joints_ref, - "joints_rst": joints_rst, - "length": min_len - # "length": lengths - } - - return rs_set - - @torch.no_grad() - def val_m2t_forward(self, batch): - self.hparams.metrics_dict = [] - - feats_ref = batch["motion"] - texts = batch["text"] - lengths = batch["length"] - all_captions = batch['all_captions'] - - # Motion Encode - motion_tokens = [] - lengths_tokens = [] - for i in range(len(feats_ref)): - motion_token, _ = self.vae.encode(feats_ref[i:i + 1]) - motion_tokens.append(motion_token[0]) - lengths_tokens.append(motion_token.shape[1]) - - # Forward - outputs = self.lm.generate_conditional(motion_tokens=motion_tokens, - lengths=lengths_tokens, - task="m2t", - stage='test') - - # return set - rs_set = { - "m_ref": feats_ref, - "t_ref": all_captions, - # "t_ref": texts, - "t_pred": outputs, - "length": lengths - } - - return rs_set - - @torch.no_grad() - def val_m2m_forward(self, batch, task="pred"): - feats_ref = batch["motion"] - lengths = batch["length"] - - # Motion Encode - motion_tokens = [] - lengths_tokens = [] - for i in range(len(feats_ref)): - motion_token, _ = self.vae.encode(feats_ref[i:i + 1]) - motion_tokens.append(motion_token[0]) - - # Forward - outputs = self.lm.generate_conditional(motion_tokens=motion_tokens, - lengths=lengths, - task=task, - stage='test') - - # Motion Decode - feats_rst = torch.zeros_like(feats_ref) - min_len = lengths.copy() - - for i in range(len(lengths)): - outputs[i] = torch.clamp(outputs[i], - 0, - self.hparams.codebook_size - 1, - out=None) - - if len(outputs[i]) > 1: - motion = self.vae.decode(outputs[i]) - else: - motion = torch.zeros_like(feats_ref[i:i + 1, ...]) - - min_len[i] = min(motion.shape[1], lengths[i]) - - # Cut Motion - feats_rst[i:i + 1, :min_len[i], ...] 
= motion[:, :lengths[i]] - - # Recover joints for evaluation - joints_ref = self.feats2joints(feats_ref) - joints_rst = self.feats2joints(feats_rst) - - # Renorm for evaluation - feats_ref = self.datamodule.renorm4t2m(feats_ref) - feats_rst = self.datamodule.renorm4t2m(feats_rst) - - # return set - rs_set = { - "m_ref": feats_ref, - "m_rst": feats_rst, - "joints_ref": joints_ref, - "joints_rst": joints_rst, - "length": min_len - # "length": lengths - } - - return rs_set - - def train_vae_forward(self, batch): - # batch detach - feats_ref = batch["motion"] - joints_ref = self.feats2joints(feats_ref) - # motion encode & decode - feats_rst, loss_commit, perplexity = self.vae(feats_ref) - joints_rst = self.feats2joints(feats_rst) - # return set - rs_set = { - "m_ref": feats_ref, - "joints_ref": joints_ref, - "m_rst": feats_rst, - "joints_rst": joints_rst, - "loss_commit": loss_commit, - "perplexity": perplexity, - } - return rs_set - - @torch.no_grad() - def val_vae_forward(self, batch, split="train"): - # Detach batch - feats_ref = batch["motion"] - lengths = batch["length"] - - # Repeat for multimodal evaluation - if self.trainer.datamodule.is_mm: - feats_ref = feats_ref.repeat_interleave( - self.hparams.cfg.METRIC.MM_NUM_REPEATS, dim=0) - lengths = lengths * self.hparams.cfg.METRIC.MM_NUM_REPEATS - - # Motion encode & decode - feats_rst = torch.zeros_like(feats_ref) - - for i in range(len(feats_ref)): - if lengths[i] == 0: - continue - feats_pred, _, _ = self.vae(feats_ref[i:i + 1, :lengths[i]]) - feats_rst[i:i + 1, :feats_pred.shape[1], :] = feats_pred - - code_pred, _ = self.vae.encode(feats_ref[i:i + 1, :lengths[i]]) - - # codeFre_pred = torch.bincount(code_pred[0], - # minlength=self.hparams.codebook_size).to( - # self.codeFrequency.device) - # self.codePred.append(code_pred[0]) - # self.codeFrequency += codeFre_pred - - # np.save('../memData/results/codeFrequency.npy', - # self.codeFrequency.cpu().numpy()) - - # Recover joints for evaluation - joints_ref = self.feats2joints(feats_ref) - joints_rst = self.feats2joints(feats_rst) - - # Renorm for evaluation - feats_ref = self.datamodule.renorm4t2m(feats_ref) - feats_rst = self.datamodule.renorm4t2m(feats_rst) - - # Return set - rs_set = { - "m_ref": feats_ref, - "joints_ref": joints_ref, - "m_rst": feats_rst, - "joints_rst": joints_rst, - "length": lengths, - } - - return rs_set - - - def allsplit_step(self, split: str, batch, batch_idx): - # Compute the losses - loss = None - - if self.hparams.stage == "vae" and split in ["train", "val"]: - rs_set = self.train_vae_forward(batch) - loss = self._losses['losses_' + split].update(rs_set) - elif self.hparams.stage in ["lm_instruct", "lm_pretrain" - ] and split in ["train"]: - rs_set = self.train_lm_forward(batch) - loss = self._losses['losses_' + split].update(rs_set) - elif self.hparams.stage == 'lm_rl' and split in ['train']: - rs_set = self.train_rl_forward(batch) - loss = None - - # Compute the metrics - if split in ["val", "test"]: - if self.hparams.stage == "vae": - rs_set = self.val_vae_forward(batch, split) - elif self.hparams.stage in ["lm_instruct", "lm_pretrain", "lm_rl"]: - if self.hparams.task == "t2m": - rs_set = self.val_t2m_forward(batch) - elif self.hparams.task == "m2t": - rs_set = self.val_m2t_forward(batch) - elif self.hparams.task in ["m2m", "pred", "inbetween"]: - rs_set = self.val_m2m_forward(batch, self.hparams.task) - - if self.hparams.task not in ["m2t"]: - # MultiModality evaluation sperately - if self.trainer.datamodule.is_mm: - metrics_dicts = ['MMMetrics'] - 
else: - metrics_dicts = self.hparams.metrics_dict - - if self.hparams.task not in ['pred', 'inbetween']: - metrics_dicts.remove('PredMetrics') - - for metric in metrics_dicts: - lengths = batch['length'] - if metric == "TemosMetric": - getattr(self.metrics, - metric).update(rs_set["joints_rst"], - rs_set["joints_ref"], lengths) - elif metric == "TM2TMetrics": - if self.hparams.stage in [ - "lm_instruct", "lm_pretrain", "lm_rl" - ]: - word_embs = batch['word_embs'] - pos_ohot = batch['pos_ohot'] - text_lengths = batch['text_len'] - if self.trainer.datamodule.is_mm: - word_embs = word_embs.repeat_interleave( - self.hparams.cfg.METRIC.MM_NUM_REPEATS, - dim=0) - pos_ohot = pos_ohot.repeat_interleave( - self.hparams.cfg.METRIC.MM_NUM_REPEATS, - dim=0) - text_lengths = text_lengths.repeat_interleave( - self.hparams.cfg.METRIC.MM_NUM_REPEATS, - dim=0) - else: - word_embs = None - pos_ohot = None - text_lengths = None - - getattr(self.metrics, metric).update( - feats_ref=rs_set["m_ref"], - feats_rst=rs_set["m_rst"], - lengths_ref=lengths, - lengths_rst=rs_set['length'], - word_embs=word_embs, - pos_ohot=pos_ohot, - text_lengths=text_lengths, - ) - elif metric == "UncondMetrics": - getattr(self.metrics, metric).update( - recmotion_embeddings=rs_set["lat_rm"], - gtmotion_embeddings=rs_set["lat_m"], - lengths=lengths, - ) - elif metric == "MRMetrics": - getattr(self.metrics, - metric).update(rs_set["joints_rst"], - rs_set["joints_ref"], lengths) - elif metric == "PredMetrics": - getattr(self.metrics, - metric).update(rs_set["joints_rst"], - rs_set["joints_ref"], lengths) - elif metric == "MMMetrics": - # pass - getattr(self.metrics, - metric).update(rs_set["m_rst"], - rs_set['length']) - else: - raise TypeError(f"Not support this metric {metric}") - - elif self.hparams.task == "m2t" and self.hparams.stage in [ - "lm_instruct", "lm_pretrain", "lm_rl" - ]: - self.hparams.metrics_dict = metrics_dicts = ['M2TMetrics'] - for metric in metrics_dicts: - if metric == "M2TMetrics": - getattr(self.metrics, metric).update( - feats_ref=rs_set["m_ref"], - pred_texts=rs_set["t_pred"], - gt_texts=batch["all_captions"], - lengths=rs_set['length'], - word_embs=batch["word_embs"], - pos_ohot=batch["pos_ohot"], - text_lengths=batch["text_len"], - ) - - # return forward output rather than loss during test - if split in ["test"]: - if self.hparams.task == "t2m": - return rs_set["joints_rst"], rs_set["length"], rs_set[ - "joints_ref"] - # pass - elif self.hparams.task == "m2t": - return rs_set["t_pred"], batch["length"] - # return batch["length"] - - return loss diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/diffusionmodules/__init__.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/pmatch.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/pmatch.go deleted file mode 100644 index 3f048637be52641d10cc7cee13c148261dd74df5..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/base/pmatch.go and /dev/null differ diff --git a/spaces/Paulraj916/paulraj916/newScrapCss.py b/spaces/Paulraj916/paulraj916/newScrapCss.py deleted file mode 100644 index e5115d1be192670830f4e5ee63b0f78b7753216e..0000000000000000000000000000000000000000 --- 
a/spaces/Paulraj916/paulraj916/newScrapCss.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import requests -from bs4 import BeautifulSoup, Tag -from urllib.parse import urljoin -import cssbeautifier - -class NewScrapCss: - def __init__(self, link): - self.link = link - - def scrap_css(self): - try: - # Send an HTTP GET request to the webpage - response = requests.get(self.link) - response.raise_for_status() - - # Get the HTML content of the page - html_content = response.text - - # Extract CSS file URLs from the webpage and convert to absolute URLs - base_url = response.url - soup = BeautifulSoup(html_content, 'html.parser') - css_urls = [urljoin(base_url, link['href']) for link in soup.find_all('link', rel='stylesheet')] - - # Create an "output" folder if it doesn't exist - output_path = "output" - if not os.path.exists(output_path): - os.makedirs(output_path) - - # Download and store CSS files in the "output" folder - for css_url in css_urls: - # Check if the URL ends with ".css" - if not css_url.lower().endswith(".css"): - # Append ".css" to the URL if it doesn't end with it - css_url += ".css" - - folder_name = os.path.dirname(css_url.replace(base_url, "").replace("http://", "").replace("https://", "")) - if folder_name.startswith("/"): - folder_name = folder_name[1:] - folder_path = os.path.join(output_path, folder_name) - try: - os.makedirs(folder_path, exist_ok=True) - filename = os.path.basename(css_url) - try: - css_content = requests.get(css_url).text - # Beautify CSS content - css_content = cssbeautifier.beautify(css_content) - with open(os.path.join(folder_path, filename), 'w', encoding='utf-8') as file: - file.write(css_content) - print("Downloaded and beautified:", css_url) - except Exception as e: - print(f"Failed to download {css_url}: {e}") - except Exception as e: - print(f"Failed to download {css_url}: {e}") - - print("CSS files downloaded and saved successfully.") - except requests.exceptions.RequestException as e: - print(f"Failed to fetch content from {self.link}: {e}") diff --git a/spaces/PeepDaSlan9/Llama-2-AWS/README.md b/spaces/PeepDaSlan9/Llama-2-AWS/README.md deleted file mode 100644 index c75c53aac9edff5a2f4b79e0ca644b58f41f81c3..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Llama-2-AWS/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Llama 2 AWS -emoji: 😻 -colorFrom: indigo -colorTo: pink -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Plurigrid/LifeSim/src/app/main.tsx b/spaces/Plurigrid/LifeSim/src/app/main.tsx deleted file mode 100644 index 3b89e11ab88283e7a4ee85919d6d95e26b258bde..0000000000000000000000000000000000000000 --- a/spaces/Plurigrid/LifeSim/src/app/main.tsx +++ /dev/null @@ -1,93 +0,0 @@ -"use client" - -import { useEffect, useRef, useState, useTransition } from "react" - -import { VideoPlayer } from "@/components/business/video-player" - -import { - Select, - SelectContent, - SelectItem, - SelectTrigger, - SelectValue, -} from "@/components/ui/select" - -import { render } from "./render" -import { Agent, AgentType, Scene } from "./agents/types" -import { agents, defaultAgent, getAgent } from "./agents" - -export default function Main() { - const [url, setUrl] = useState() - const [isPending, startTransition] = useTransition() - const [scene, setScene] = useState() - const ref = useRef(defaultAgent) - - useEffect(() => { - - const updateView = async () => { - // 
console.log(`update view..`) - - await startTransition(async () => { - - // console.log(`getting agent..`) - const type = ref?.current - const agent = getAgent(type) - - // console.log(`asking agent to determine things..`) - const scene = agent.simulate() - - // console.log(`rendering scene..`) - const newUrl = await render(scene.prompt) - - if (type !== ref?.current) { - console.log("agent type changed! reloading scene") - setTimeout(() => { updateView() }, 0) - return - } - - if (newUrl) { - // console.log(`got a new url: ${newUrl}`) - setUrl(newUrl) - setScene(scene) - setTimeout(() => { updateView()}, 1000) - } else { - // console.log(`going to wait a bit more: ${newUrl}`) - setTimeout(() => { updateView()}, 3000) - } - }) - } - - updateView() - - }, []) - - return ( -
    -
    -
    - - -
    - {(scene) ?
    -

    Action: {scene.action}

    -

    Position: {scene.position}

    -
    : null} -
    - -
    - ) -} \ No newline at end of file diff --git a/spaces/Pranay009/FACE2COMIC/src/Generator.py b/spaces/Pranay009/FACE2COMIC/src/Generator.py deleted file mode 100644 index ac144d25729ead4d02ab55369652c706bd478ee8..0000000000000000000000000000000000000000 --- a/spaces/Pranay009/FACE2COMIC/src/Generator.py +++ /dev/null @@ -1,97 +0,0 @@ -import keras -import tensorflow as tf -from keras.models import Model -from keras.models import Input -from keras.layers import Conv2D -from keras.layers import Conv2DTranspose -from keras.layers import LeakyReLU -from keras.layers import Activation -from keras.layers import Concatenate -from keras.layers import Dropout -from keras.layers import BatchNormalization -from matplotlib import pyplot as plt -from keras.initializers import RandomNormal -from keras.layers import concatenate,MaxPooling2D - -#model -def Generator(x=256,y=256,z=3): - inputs=Input(shape=(x,y,z)) - init = RandomNormal(stddev=0.02) - c1=Conv2D(64,(4,4),strides=(2,2),activation="relu",kernel_initializer=init,padding="same")(inputs) - c1=LeakyReLU(alpha=0.2)(c1)#(64,64,64) - - c2 = Conv2D(128, (4,4),strides=(2,2), kernel_initializer=init, padding='same')(c1) - c2=BatchNormalization()(c2,training=True)#(128,32,32) - c2=LeakyReLU(alpha=0.2)(c2) - - c3 = Conv2D(256 ,(4,4), strides=(2,2) ,kernel_initializer=init, padding='same')(c2) - c3=BatchNormalization()(c3,training=True)#(256,16,16) - c3=LeakyReLU(alpha=0.2)(c3) - - c4 = Conv2D(512, (4,4), strides=(2,2), kernel_initializer=init, padding='same')(c3) - c4=BatchNormalization()(c4,training=True)#(512,8,8) - c4=LeakyReLU(alpha=0.2)(c4) - - c5 = Conv2D(512, (4,4),strides=(2,2), kernel_initializer=init, padding='same')(c4) - c5=BatchNormalization()(c5,training=True)#512,4,4 - c5=LeakyReLU(alpha=0.2)(c5) - - c6 = Conv2D(512, (4,4), strides=(2,2),kernel_initializer=init, padding='same')(c5) - c6=BatchNormalization()(c6,training=True)#512,2,2 - c6=LeakyReLU(alpha=0.2)(c6) - - c7 = Conv2D(512, (4,4), strides=(2,2), kernel_initializer=init, padding='same')(c6) - c7=BatchNormalization()(c7,training=True)#512,1,1 - c7=LeakyReLU(alpha=0.2)(c7) - - #b = Conv2D(512, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(c7) - #b = Activation('relu')(c7) - - d1 = Conv2DTranspose(512, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(c7) - d1=BatchNormalization()(d1,training=True)#512*2*2 - d1=Dropout(0.5)(d1,training=True) - d1=Concatenate()([d1, c6]) - d1=Activation("relu")(d1) - - d2 =Conv2DTranspose(512, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d1) - d2=BatchNormalization()(d2,training=True)#512*4*4 - d2=Dropout(0.5)(d2,training=True) - d2=Concatenate()([d2,c5]) - d2=Activation("relu")(d2) - - d3 = Conv2DTranspose(512, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d2) - d3=BatchNormalization()(d3,training=True)#512*8*8 - d3=Dropout(0.5)(d3,training=True) - d3=Concatenate()([d3, c4]) - d3=Activation("relu")(d3) - - d4 = Conv2DTranspose(512, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d3) - d4=BatchNormalization()(d4,training=True)#512*16*16 - #d4=Dropout(0.5)(d4,training=True) - d4=Concatenate()([d4, c3]) - d4=Activation("relu")(d4) - - d5 = Conv2DTranspose(256, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d4) - d5=BatchNormalization()(d5,training=True)#256*32*32 - #d5=Dropout(0.5)(d5,training=True) - d5=Concatenate()([d5, c2]) - d5=Activation("relu")(d5) - - d6 = Conv2DTranspose(128, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d5) - 
d6=BatchNormalization()(d6,training=True)#128*64*64 - #d6=Dropout(0.5)(d6,training=True) - d6=Concatenate()([d6, c1]) - d6=Activation("relu")(d6) - - #d7 = Conv2DTranspose(64, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d6) - #d7=BatchNormalization()(d7,training=True) - #d7=Concatenate()([d7, c1]) - #$d7=Activation("relu")(d7) - - f = Conv2DTranspose(3, (4,4), strides=(2,2), padding='same', kernel_initializer=init)(d6) - out_image = Activation('tanh')(f) #3,128,128 - - model = Model(inputs, out_image) - return model - - \ No newline at end of file diff --git a/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/__init__.py b/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/__init__.py deleted file mode 100644 index 757de134d8725e0bdceb03cc1521df1eedf82983..0000000000000000000000000000000000000000 --- a/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .constants import * -from .training_arguments import * -from .metrics import * -from .dataloaders import * -from .models import * -from .utils import * diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/install_lib.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/install_lib.py deleted file mode 100644 index 2e9d8757a582b1dcdb47a34c35c6cfb3ed23ba90..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/install_lib.py +++ /dev/null @@ -1,122 +0,0 @@ -import os -import sys -from itertools import product, starmap -import distutils.command.install_lib as orig - - -class install_lib(orig.install_lib): - """Don't add compiled flags to filenames of non-Python files""" - - def run(self): - self.build() - outfiles = self.install() - if outfiles is not None: - # always compile, in case we have any extension stubs to deal with - self.byte_compile(outfiles) - - def get_exclusions(self): - """ - Return a collections.Sized collections.Container of paths to be - excluded for single_version_externally_managed installations. - """ - all_packages = ( - pkg - for ns_pkg in self._get_SVEM_NSPs() - for pkg in self._all_packages(ns_pkg) - ) - - excl_specs = product(all_packages, self._gen_exclusion_paths()) - return set(starmap(self._exclude_pkg_path, excl_specs)) - - def _exclude_pkg_path(self, pkg, exclusion_path): - """ - Given a package name and exclusion path within that package, - compute the full exclusion path. - """ - parts = pkg.split('.') + [exclusion_path] - return os.path.join(self.install_dir, *parts) - - @staticmethod - def _all_packages(pkg_name): - """ - >>> list(install_lib._all_packages('foo.bar.baz')) - ['foo.bar.baz', 'foo.bar', 'foo'] - """ - while pkg_name: - yield pkg_name - pkg_name, sep, child = pkg_name.rpartition('.') - - def _get_SVEM_NSPs(self): - """ - Get namespace packages (list) but only for - single_version_externally_managed installations and empty otherwise. - """ - # TODO: is it necessary to short-circuit here? i.e. what's the cost - # if get_finalized_command is called even when namespace_packages is - # False? - if not self.distribution.namespace_packages: - return [] - - install_cmd = self.get_finalized_command('install') - svem = install_cmd.single_version_externally_managed - - return self.distribution.namespace_packages if svem else [] - - @staticmethod - def _gen_exclusion_paths(): - """ - Generate file paths to be excluded for namespace packages (bytecode - cache files). 
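-
- Under CPython 3.11, for instance, this yields '__init__.py', '__init__.pyc', '__init__.pyo',
- '__pycache__/__init__.cpython-311.pyc', '__pycache__/__init__.cpython-311.pyo',
- '__pycache__/__init__.cpython-311.opt-1.pyc' and '__pycache__/__init__.cpython-311.opt-2.pyc'.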
- """ - # always exclude the package module itself - yield '__init__.py' - - yield '__init__.pyc' - yield '__init__.pyo' - - if not hasattr(sys, 'implementation'): - return - - base = os.path.join( - '__pycache__', '__init__.' + sys.implementation.cache_tag) - yield base + '.pyc' - yield base + '.pyo' - yield base + '.opt-1.pyc' - yield base + '.opt-2.pyc' - - def copy_tree( - self, infile, outfile, - preserve_mode=1, preserve_times=1, preserve_symlinks=0, level=1 - ): - assert preserve_mode and preserve_times and not preserve_symlinks - exclude = self.get_exclusions() - - if not exclude: - return orig.install_lib.copy_tree(self, infile, outfile) - - # Exclude namespace package __init__.py* files from the output - - from setuptools.archive_util import unpack_directory - from distutils import log - - outfiles = [] - - def pf(src, dst): - if dst in exclude: - log.warn("Skipping installation of %s (namespace package)", - dst) - return False - - log.info("copying %s -> %s", src, os.path.dirname(dst)) - outfiles.append(dst) - return dst - - unpack_directory(infile, outfile, pf) - return outfiles - - def get_outputs(self): - outputs = orig.install_lib.get_outputs(self) - exclude = self.get_exclusions() - if exclude: - return [f for f in outputs if f not in exclude] - return outputs diff --git a/spaces/Reeve/Ohayou_Face/model_build.py b/spaces/Reeve/Ohayou_Face/model_build.py deleted file mode 100644 index 475f8f36ea5c7b2ceed1c2cf4eac7127118d9f17..0000000000000000000000000000000000000000 --- a/spaces/Reeve/Ohayou_Face/model_build.py +++ /dev/null @@ -1,95 +0,0 @@ -import os -import glob - -import numpy as np -from numpy import linalg -import PIL.Image as Image -import torch -from torchvision import transforms -from tqdm import tqdm -from argparse import Namespace -import easydict - -import legacy -import dnnlib - -from opensimplex import OpenSimplex - -from configs import data_configs -from models.psp import pSp - - -def build_stylegan2( - increment = 0.01, - network_pkl = 'pretrained/ohayou_face2.pkl', - process = 'image', #['image', 'interpolation','truncation','interpolation-truncation'] - random_seed = 0, - diameter = 100.0, - scale_type = 'pad', #['pad', 'padside', 'symm','symmside'] - size = [512, 512], - seeds = [0], - space = 'z', #['z', 'w'] - fps = 24, - frames = 240, - noise_mode = 'none', #['const', 'random', 'none'] - outdir = 'path', - projected_w = 'path', - easing = 'linear', - device = 'cpu' - - ): - - G_kwargs = dnnlib.EasyDict() - G_kwargs.size = size - G_kwargs.scale_type = scale_type - - device = torch.device(device) - with dnnlib.util.open_url(network_pkl) as f: - # G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore - G = legacy.load_network_pkl(f, custom=True, **G_kwargs)['G_ema'].to(device) # type: ignore - - return G.synthesis - - -def build_psp(): - test_opts = easydict.EasyDict({ - # arguments for inference script - 'checkpoint_path' : 'pretrained/ohayou_face.pt', - 'couple_outputs' : False, - 'resize_outputs' : False, - - 'test_batch_size' : 1, - 'test_workers' : 1, - - # arguments for style-mixing script - 'n_images' : None, - 'n_outputs_to_generate' : 5, - 'mix_alpha' : None, - 'latent_mask' : None, - - # arguments for super-resolution - 'resize_factors' : None, - }) - - # update test options with options used during training - ckpt = torch.load(test_opts.checkpoint_path, map_location='cpu') - opts = ckpt['opts'] - opts.update(vars(test_opts)) - if 'learn_in_w' not in opts: - opts['learn_in_w'] = False - opts = Namespace(**opts) - opts.device = 'cpu' - 
net = pSp(opts) - net.eval() - return net - -def img_preprocess(img, transform): - if (img.mode == 'RGBA') or (img.mode == 'P'): - img.load() - background = Image.new("RGB", img.size, (255, 255, 255)) - background.paste(img, mask=img.split()[3]) # 3 is the alpha channel - img = background - assert img.mode == 'RGB' - img = transform(img) - img = img.unsqueeze(dim=0) - return img \ No newline at end of file diff --git a/spaces/RichardMB1217/blip/data/vqa_dataset.py b/spaces/RichardMB1217/blip/data/vqa_dataset.py deleted file mode 100644 index 92ec1df429b3910316ddd554bfea01c6e7922cae..0000000000000000000000000000000000000000 --- a/spaces/RichardMB1217/blip/data/vqa_dataset.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -import json -import random -from PIL import Image - -import torch -from torch.utils.data import Dataset -from data.utils import pre_question - -from torchvision.datasets.utils import download_url - -class vqa_dataset(Dataset): - def __init__(self, transform, ann_root, vqa_root, vg_root, train_files=[], split="train"): - self.split = split - - self.transform = transform - self.vqa_root = vqa_root - self.vg_root = vg_root - - if split=='train': - urls = {'vqa_train':'https://storage.googleapis.com/sfr-vision-language-research/datasets/vqa_train.json', - 'vqa_val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/vqa_val.json', - 'vg_qa':'https://storage.googleapis.com/sfr-vision-language-research/datasets/vg_qa.json'} - - self.annotation = [] - for f in train_files: - download_url(urls[f],ann_root) - self.annotation += json.load(open(os.path.join(ann_root,'%s.json'%f),'r')) - else: - download_url('https://storage.googleapis.com/sfr-vision-language-research/datasets/vqa_test.json',ann_root) - self.annotation = json.load(open(os.path.join(ann_root,'vqa_test.json'),'r')) - - download_url('https://storage.googleapis.com/sfr-vision-language-research/datasets/answer_list.json',ann_root) - self.answer_list = json.load(open(os.path.join(ann_root,'answer_list.json'),'r')) - - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - if ann['dataset']=='vqa': - image_path = os.path.join(self.vqa_root,ann['image']) - elif ann['dataset']=='vg': - image_path = os.path.join(self.vg_root,ann['image']) - - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - if self.split == 'test': - question = pre_question(ann['question']) - question_id = ann['question_id'] - return image, question, question_id - - - elif self.split=='train': - - question = pre_question(ann['question']) - - if ann['dataset']=='vqa': - answer_weight = {} - for answer in ann['answer']: - if answer in answer_weight.keys(): - answer_weight[answer] += 1/len(ann['answer']) - else: - answer_weight[answer] = 1/len(ann['answer']) - - answers = list(answer_weight.keys()) - weights = list(answer_weight.values()) - - elif ann['dataset']=='vg': - answers = [ann['answer']] - weights = [0.2] - - return image, question, answers, weights - - -def vqa_collate_fn(batch): - image_list, question_list, answer_list, weight_list, n = [], [], [], [], [] - for image, question, answer, weights in batch: - image_list.append(image) - question_list.append(question) - weight_list += weights - answer_list += answer - n.append(len(answer)) - return torch.stack(image_list,dim=0), question_list, answer_list, torch.Tensor(weight_list), n \ No newline at end of file diff --git a/spaces/RobLi/ControlNet-v1-1/app_scribble.py 
b/spaces/RobLi/ControlNet-v1-1/app_scribble.py deleted file mode 100644 index 17a8565cb741e12a65b46e3e7a66b20e7efb301c..0000000000000000000000000000000000000000 --- a/spaces/RobLi/ControlNet-v1-1/app_scribble.py +++ /dev/null @@ -1,105 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio( - label='Preprocessor', - choices=['HED', 'PidiNet', 'None'], - type='value', - value='HED') - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - preprocess_resolution = gr.Slider( - label='Preprocess resolution', - minimum=128, - maximum=512, - value=512, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='scribble', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='scribble') - demo = create_demo(model.process_scribble) - demo.queue().launch() diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/exp/upernet_global_small/run.sh b/spaces/Robert001/UniControl-Demo/annotator/uniformer/exp/upernet_global_small/run.sh deleted file mode 100644 index 9fb22edfa7a32624ea08a63fe7d720c40db3b696..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/exp/upernet_global_small/run.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/train.py ${work_path}/config.py \ - --launcher pytorch \ - --options model.backbone.pretrained_path='your_model_path/uniformer_small_in1k.pth' \ - --work-dir ${work_path}/ckpt \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/base_sampler.py 
b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/base_sampler.py deleted file mode 100644 index 9ea35def115b49dfdad8a1f7c040ef3cd983b0d1..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/base_sampler.py +++ /dev/null @@ -1,101 +0,0 @@ -from abc import ABCMeta, abstractmethod - -import torch - -from .sampling_result import SamplingResult - - -class BaseSampler(metaclass=ABCMeta): - """Base class of samplers.""" - - def __init__(self, - num, - pos_fraction, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - self.num = num - self.pos_fraction = pos_fraction - self.neg_pos_ub = neg_pos_ub - self.add_gt_as_proposals = add_gt_as_proposals - self.pos_sampler = self - self.neg_sampler = self - - @abstractmethod - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive samples.""" - pass - - @abstractmethod - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Sample negative samples.""" - pass - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. - - Returns: - :obj:`SamplingResult`: Sampling result. - - Example: - >>> from mmdet.core.bbox import RandomSampler - >>> from mmdet.core.bbox import AssignResult - >>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes - >>> rng = ensure_rng(None) - >>> assign_result = AssignResult.random(rng=rng) - >>> bboxes = random_boxes(assign_result.num_preds, rng=rng) - >>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng) - >>> gt_labels = None - >>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1, - >>> add_gt_as_proposals=False) - >>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels) - """ - if len(bboxes.shape) < 2: - bboxes = bboxes[None, :] - - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals and len(gt_bboxes) > 0: - if gt_labels is None: - raise ValueError( - 'gt_labels must be given when add_gt_as_proposals is True') - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - # We found that sampled indices have duplicated items occasionally. 
- # (may be a bug of PyTorch) - pos_inds = pos_inds.unique() - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds = self.neg_sampler._sample_neg( - assign_result, num_expected_neg, bboxes=bboxes, **kwargs) - neg_inds = neg_inds.unique() - - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - return sampling_result diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/dvclive.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/dvclive.py deleted file mode 100644 index 687cdc58c0336c92b1e4f9a410ba67ebaab2bc7a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/dvclive.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class DvcliveLoggerHook(LoggerHook): - """Class to log metrics with dvclive. - - It requires `dvclive`_ to be installed. - - Args: - path (str): Directory where dvclive will write TSV log files. - interval (int): Logging interval (every k iterations). - Default 10. - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - Default: True. - reset_flag (bool): Whether to clear the output buffer after logging. - Default: True. - by_epoch (bool): Whether EpochBasedRunner is used. - Default: True. - - .. _dvclive: - https://dvc.org/doc/dvclive - """ - - def __init__(self, - path, - interval=10, - ignore_last=True, - reset_flag=True, - by_epoch=True): - - super(DvcliveLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.path = path - self.import_dvclive() - - def import_dvclive(self): - try: - import dvclive - except ImportError: - raise ImportError( - 'Please run "pip install dvclive" to install dvclive') - self.dvclive = dvclive - - @master_only - def before_run(self, runner): - self.dvclive.init(self.path) - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - for k, v in tags.items(): - self.dvclive.log(k, v, step=self.get_iter(runner)) diff --git a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/txt_processors/__init__.py b/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/txt_processors/__init__.py deleted file mode 100644 index 7bff3e9af7d634363116c6605f22a52aad614dea..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/txt_processors/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . import en \ No newline at end of file diff --git a/spaces/Sapphire-356/Video2MC/common/loss.py b/spaces/Sapphire-356/Video2MC/common/loss.py deleted file mode 100644 index 12e5f437f89a137eff7580f12e37677c7caf797d..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/common/loss.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) 2018-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -import numpy as np -import torch - - -def mpjpe(predicted, target): - """ - Mean per-joint position error (i.e. 
mean Euclidean distance), - often referred to as "Protocol #1" in many papers. - """ - assert predicted.shape == target.shape - return torch.mean(torch.norm(predicted - target, dim=len(target.shape) - 1)) - - -def weighted_mpjpe(predicted, target, w): - """ - Weighted mean per-joint position error (i.e. mean Euclidean distance) - """ - assert predicted.shape == target.shape - assert w.shape[0] == predicted.shape[0] - return torch.mean(w * torch.norm(predicted - target, dim=len(target.shape) - 1)) - - -def p_mpjpe(predicted, target): - """ - Pose error: MPJPE after rigid alignment (scale, rotation, and translation), - often referred to as "Protocol #2" in many papers. - """ - assert predicted.shape == target.shape - - muX = np.mean(target, axis=1, keepdims=True) - muY = np.mean(predicted, axis=1, keepdims=True) - - X0 = target - muX - Y0 = predicted - muY - - normX = np.sqrt(np.sum(X0 ** 2, axis=(1, 2), keepdims=True)) - normY = np.sqrt(np.sum(Y0 ** 2, axis=(1, 2), keepdims=True)) - - X0 /= normX - Y0 /= normY - - H = np.matmul(X0.transpose(0, 2, 1), Y0) - U, s, Vt = np.linalg.svd(H) - V = Vt.transpose(0, 2, 1) - R = np.matmul(V, U.transpose(0, 2, 1)) - - # Avoid improper rotations (reflections), i.e. rotations with det(R) = -1 - sign_detR = np.sign(np.expand_dims(np.linalg.det(R), axis=1)) - V[:, :, -1] *= sign_detR - s[:, -1] *= sign_detR.flatten() - R = np.matmul(V, U.transpose(0, 2, 1)) # Rotation - - tr = np.expand_dims(np.sum(s, axis=1, keepdims=True), axis=2) - - a = tr * normX / normY # Scale - t = muX - a * np.matmul(muY, R) # Translation - - # Perform rigid transformation on the input - predicted_aligned = a * np.matmul(predicted, R) + t - - # Return MPJPE - return np.mean(np.linalg.norm(predicted_aligned - target, axis=len(target.shape) - 1)) - - -def n_mpjpe(predicted, target): - """ - Normalized MPJPE (scale only), adapted from: - https://github.com/hrhodin/UnsupervisedGeometryAwareRepresentationLearning/blob/master/losses/poses.py - """ - assert predicted.shape == target.shape - - norm_predicted = torch.mean(torch.sum(predicted ** 2, dim=3, keepdim=True), dim=2, keepdim=True) - norm_target = torch.mean(torch.sum(target * predicted, dim=3, keepdim=True), dim=2, keepdim=True) - scale = norm_target / norm_predicted - return mpjpe(scale * predicted, target) - - -def mean_velocity_error(predicted, target): - """ - Mean per-joint velocity error (i.e. mean Euclidean distance of the 1st derivative) - """ - assert predicted.shape == target.shape - - velocity_predicted = np.diff(predicted, axis=0) - velocity_target = np.diff(target, axis=0) - - return np.mean(np.linalg.norm(velocity_predicted - velocity_target, axis=len(target.shape) - 1)) diff --git a/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_feature_extractor.py b/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_feature_extractor.py deleted file mode 100644 index df7632c6d8e7eac7e6ae019379e53febd3f7ef0c..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/albef_models/albef_feature_extractor.py +++ /dev/null @@ -1,204 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import warnings - -import torch -import torch.nn.functional as F -from lavis.common.registry import registry -from lavis.common.utils import get_abs_path -from lavis.models.albef_models import AlbefBase -from lavis.models.albef_models.albef_outputs import AlbefOutputFeatures -from lavis.models.med import BertForMaskedLM -from lavis.models.vit import VisionTransformerEncoder -from torch import nn -from transformers import BertConfig - - -@registry.register_model("albef_feature_extractor") -class AlbefFeatureExtractor(AlbefBase): - PRETRAINED_MODEL_CONFIG_DICT = { - "base": "configs/models/albef_feature_extractor.yaml", - } - - def __init__(self, image_encoder, text_encoder, embed_dim=256, max_txt_len=30): - super().__init__() - - self.tokenizer = self.init_tokenizer() - - self.visual_encoder = image_encoder - self.text_encoder = text_encoder - - text_width = text_encoder.config.hidden_size - vision_width = image_encoder.vision_width - - self.embed_dim = embed_dim - - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - - self.max_txt_len = max_txt_len - - self.temp = nn.Parameter(0.07 * torch.ones([])) - - @torch.no_grad() - def extract_features(self, samples, mode="multimodal"): - """ - Extract features for multimodal or unimodal samples. - - Args: - samples (dict): A dictionary of samples, containing the following keys: - - image (torch.Tensor): A tensor of shape (B, C, H, W) containing the image. - Raw images should be preprocessed before being passed to feature extractor. - - text_input (list): A list of strings containing the text, length B. - mode (str): The mode of feature extraction. Can be either "multimodal", "text" or "image". - If "multimodal", return image features and multimodal features; - if "text", return text features; - if "image", return image features. - Default: "multimodal". - - Returns: - An AlbefOutputFeatures object, see lavis/models/albef_models/albef_outputs.py for details. 
- - Examples: - ```python - >>> from PIL import Image - >>> from lavis.models import load_model_and_preprocess - >>> raw_image = Image.open("docs/data/merlion.png").convert("RGB") - >>> caption = "a large fountain spewing water into the air" - >>> model, vis_processors, txt_processors = load_model_and_preprocess("albef_feature_extractor", is_eval=True) - >>> image = vis_processors["eval"](raw_image).unsqueeze(0) - >>> text_input = txt_processors["eval"](caption) - - >>> sample = {"image": image, "text_input": [text_input]} - - >>> features_multimodal = model.extract_features(sample) - >>> features_multimodal.keys() - odict_keys(['image_embeds', 'multimodal_embeds']) - >>> features_multimodal.image_embeds.shape - torch.Size([1, 197, 768]) - >>> features_multimodal.multimodal_embeds.shape - torch.Size([1, 12, 768]) - - >>> features_text = model.extract_features(sample, mode="text") - >>> features_text.keys() - odict_keys(['text_embeds', 'text_features']) - >>> features_text.text_embeds.shape - torch.Size([1, 12, 768]) - >>> features_text.text_features.shape - torch.Size([1, 12, 256]) - - >>> features_image = model.extract_features(sample, mode="image") - >>> features_image.keys() - odict_keys(['image_embeds', 'image_features']) - >>> features_image.image_embeds.shape - torch.Size([1, 197, 768]) - >>> features_image.image_features.shape - torch.Size([1, 197, 256]) - ``` - """ - image = samples["image"] - caption = samples["text_input"] - - if isinstance(mode, str): - mode = [mode] - - for m in mode: - assert m in [ - "multimodal", - "image", - "text", - ], "mode must be one of [multimodal, image, text], but got {}".format(m) - - # initalize output - image_embeds, text_embeds, multimodal_embeds = None, None, None - image_features, text_features = None, None - - if "image" in mode or "multimodal" in mode: - assert ( - image is not None - ), "image must be provided if mode is 'image' or 'multimodal'" - - image_embeds = self.visual_encoder.forward_features(image) - image_features = F.normalize(self.vision_proj(image_embeds), dim=-1) - - if "text" in mode or "multimodal" in mode: - assert ( - caption is not None - ), "text must be provided if mode is 'text' or 'multimodal'" - - text = self.tokenizer( - caption, - padding=True, - return_tensors="pt", - ).to(self.device) - - text_output = self.text_encoder.bert( - text.input_ids, - attention_mask=text.attention_mask, - return_dict=True, - mode="text", - ) - text_embeds = text_output.last_hidden_state - text_features = F.normalize(self.text_proj(text_embeds), dim=-1) - - if "multimodal" in mode: - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to( - self.device - ) - - # forward the positve image-text pair - output = self.text_encoder.bert( - encoder_embeds=text_embeds, - attention_mask=text.attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - mode="fusion", - ) - - multimodal_embeds = output.last_hidden_state - - return AlbefOutputFeatures( - image_embeds=image_embeds, - image_embeds_proj=image_features, - text_embeds=text_embeds, - text_embeds_proj=text_features, - multimodal_embeds=multimodal_embeds, - ) - - @classmethod - def from_config(cls, cfg=None): - image_encoder = VisionTransformerEncoder.from_config(cfg, from_pretrained=True) - config_text_encoder = BertConfig.from_json_file( - get_abs_path(cfg["med_config_path"]) - ) - config_text_encoder.fusion_layer = 6 - text_encoder = BertForMaskedLM.from_pretrained( - "bert-base-uncased", config=config_text_encoder - ) 
- - embed_dim = cfg.get("embed_dim", 256) - max_txt_len = cfg.get("max_txt_len", 30) - - model = cls( - image_encoder=image_encoder, - text_encoder=text_encoder, - embed_dim=embed_dim, - max_txt_len=max_txt_len, - ) - - # load pre-trained weights - pretrain_path = cfg.get("pretrained", None) - if pretrain_path is not None: - msg = model.load_from_pretrained( - url_or_filename=pretrain_path, rename_text_keys=False - ) - else: - warnings.warn("No pretrained weights are loaded.") - - return model diff --git a/spaces/ServerX/PorcoDiaz/infer/modules/vc/__init__.py b/spaces/ServerX/PorcoDiaz/infer/modules/vc/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SeyedAli/Multilingual-Text-Similarity/app.py b/spaces/SeyedAli/Multilingual-Text-Similarity/app.py deleted file mode 100644 index c945f79f4d3d06e5925d13c1add87d1091fe881b..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Multilingual-Text-Similarity/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/SeyedAli/Multilingual-Text-Semantic-Search-Siamese-BERT-V1").launch() \ No newline at end of file diff --git a/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/latex/attention/introduction.tex b/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/latex/attention/introduction.tex deleted file mode 100644 index 1baa8915f4cf7aec2520894a87470fc9436d954b..0000000000000000000000000000000000000000 --- a/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/latex/attention/introduction.tex +++ /dev/null @@ -1,18 +0,0 @@ -Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. - -Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. -%\marginpar{not sure if the memory constraints are understandable here} -Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. - -%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away} - -Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. 
In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network. - -%\marginpar{not sure if "cross-positional communication" is understandable without explanation} -%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?} - -In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. -%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.} - -% Just a standard paragraph with citations, rewrite. -%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state encumbers recurrnet models to process multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously and haven't been used on datasets that are the of the scale of the web. What's the largest dataset we have ? . Talk about Nividia and possibly other's effors to speed up things, and possibly other efforts that alleviate this, but are still limited by it's comptuational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet,facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term meory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do. 
\ No newline at end of file diff --git a/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/python/dqn/dqn.py b/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/python/dqn/dqn.py deleted file mode 100644 index 6cea64d39baa7ff4c1e549869aaa4b0ae17779a9..0000000000000000000000000000000000000000 --- a/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/python/dqn/dqn.py +++ /dev/null @@ -1,245 +0,0 @@ -from typing import Any, Dict, List, Optional, Tuple, Type, Union - -import gym -import numpy as np -import torch as th -from torch.nn import functional as F - -from stable_baselines3.common import logger -from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm -from stable_baselines3.common.preprocessing import maybe_transpose -from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule -from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update -from stable_baselines3.dqn.policies import DQNPolicy - - -class DQN(OffPolicyAlgorithm): - """ - Deep Q-Network (DQN) - - Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236 - Default hyperparameters are taken from the nature paper, - except for the optimizer and learning rate that were taken from Stable Baselines defaults. - - :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...) - :param env: The environment to learn from (if registered in Gym, can be str) - :param learning_rate: The learning rate, it can be a function - of the current progress remaining (from 1 to 0) - :param buffer_size: size of the replay buffer - :param learning_starts: how many steps of the model to collect transitions for before learning starts - :param batch_size: Minibatch size for each gradient update - :param tau: the soft update coefficient ("Polyak update", between 0 and 1) default 1 for hard update - :param gamma: the discount factor - :param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit - like ``(5, "step")`` or ``(2, "episode")``. - :param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``) - Set to ``-1`` means to do as many gradient steps as steps done in the environment - during the rollout. - :param optimize_memory_usage: Enable a memory efficient variant of the replay buffer - at a cost of more complexity. - See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195 - :param target_update_interval: update the target network every ``target_update_interval`` - environment steps. - :param exploration_fraction: fraction of entire training period over which the exploration rate is reduced - :param exploration_initial_eps: initial value of random action probability - :param exploration_final_eps: final value of random action probability - :param max_grad_norm: The maximum value for the gradient clipping - :param tensorboard_log: the log location for tensorboard (if None, no logging) - :param create_eval_env: Whether to create a second environment that will be - used for evaluating the agent periodically. (Only available when passing string for the environment) - :param policy_kwargs: additional arguments to be passed to the policy on creation - :param verbose: the verbosity level: 0 no output, 1 info, 2 debug - :param seed: Seed for the pseudo random generators - :param device: Device (cpu, cuda, ...) on which the code should be run. - Setting it to auto, the code will be run on the GPU if possible. 
- :param _init_setup_model: Whether or not to build the network at the creation of the instance - """ - - def __init__( - self, - policy: Union[str, Type[DQNPolicy]], - env: Union[GymEnv, str], - learning_rate: Union[float, Schedule] = 1e-4, - buffer_size: int = 1000000, - learning_starts: int = 50000, - batch_size: Optional[int] = 32, - tau: float = 1.0, - gamma: float = 0.99, - train_freq: Union[int, Tuple[int, str]] = 4, - gradient_steps: int = 1, - optimize_memory_usage: bool = False, - target_update_interval: int = 10000, - exploration_fraction: float = 0.1, - exploration_initial_eps: float = 1.0, - exploration_final_eps: float = 0.05, - max_grad_norm: float = 10, - tensorboard_log: Optional[str] = None, - create_eval_env: bool = False, - policy_kwargs: Optional[Dict[str, Any]] = None, - verbose: int = 0, - seed: Optional[int] = None, - device: Union[th.device, str] = "auto", - _init_setup_model: bool = True, - ): - - super(DQN, self).__init__( - policy, - env, - DQNPolicy, - learning_rate, - buffer_size, - learning_starts, - batch_size, - tau, - gamma, - train_freq, - gradient_steps, - action_noise=None, # No action noise - policy_kwargs=policy_kwargs, - tensorboard_log=tensorboard_log, - verbose=verbose, - device=device, - create_eval_env=create_eval_env, - seed=seed, - sde_support=False, - optimize_memory_usage=optimize_memory_usage, - supported_action_spaces=(gym.spaces.Discrete,), - ) - - self.exploration_initial_eps = exploration_initial_eps - self.exploration_final_eps = exploration_final_eps - self.exploration_fraction = exploration_fraction - self.target_update_interval = target_update_interval - self.max_grad_norm = max_grad_norm - # "epsilon" for the epsilon-greedy exploration - self.exploration_rate = 0.0 - # Linear schedule will be defined in `_setup_model()` - self.exploration_schedule = None - self.q_net, self.q_net_target = None, None - - if _init_setup_model: - self._setup_model() - - def _setup_model(self) -> None: - super(DQN, self)._setup_model() - self._create_aliases() - self.exploration_schedule = get_linear_fn( - self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction - ) - - def _create_aliases(self) -> None: - self.q_net = self.policy.q_net - self.q_net_target = self.policy.q_net_target - - def _on_step(self) -> None: - """ - Update the exploration rate and target network if needed. - This method is called in ``collect_rollouts()`` after each step in the environment. 
- """ - if self.num_timesteps % self.target_update_interval == 0: - polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau) - - self.exploration_rate = self.exploration_schedule(self._current_progress_remaining) - logger.record("rollout/exploration rate", self.exploration_rate) - - def train(self, gradient_steps: int, batch_size: int = 100) -> None: - # Update learning rate according to schedule - self._update_learning_rate(self.policy.optimizer) - - losses = [] - for _ in range(gradient_steps): - # Sample replay buffer - replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env) - - with th.no_grad(): - # Compute the next Q-values using the target network - next_q_values = self.q_net_target(replay_data.next_observations) - # Follow greedy policy: use the one with the highest value - next_q_values, _ = next_q_values.max(dim=1) - # Avoid potential broadcast issue - next_q_values = next_q_values.reshape(-1, 1) - # 1-step TD target - target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values - - # Get current Q-values estimates - current_q_values = self.q_net(replay_data.observations) - - # Retrieve the q-values for the actions from the replay buffer - current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long()) - - # Compute Huber loss (less sensitive to outliers) - loss = F.smooth_l1_loss(current_q_values, target_q_values) - losses.append(loss.item()) - - # Optimize the policy - self.policy.optimizer.zero_grad() - loss.backward() - # Clip gradient norm - th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm) - self.policy.optimizer.step() - - # Increase update counter - self._n_updates += gradient_steps - - logger.record("train/n_updates", self._n_updates, exclude="tensorboard") - logger.record("train/loss", np.mean(losses)) - - def predict( - self, - observation: np.ndarray, - state: Optional[np.ndarray] = None, - mask: Optional[np.ndarray] = None, - deterministic: bool = False, - ) -> Tuple[np.ndarray, Optional[np.ndarray]]: - """ - Overrides the base_class predict function to include epsilon-greedy exploration. - - :param observation: the input observation - :param state: The last states (can be None, used in recurrent policies) - :param mask: The last masks (can be None, used in recurrent policies) - :param deterministic: Whether or not to return deterministic actions. 
- :return: the model's action and the next state - (used in recurrent policies) - """ - if not deterministic and np.random.rand() < self.exploration_rate: - if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space): - n_batch = observation.shape[0] - action = np.array([self.action_space.sample() for _ in range(n_batch)]) - else: - action = np.array(self.action_space.sample()) - else: - action, state = self.policy.predict(observation, state, mask, deterministic) - return action, state - - def learn( - self, - total_timesteps: int, - callback: MaybeCallback = None, - log_interval: int = 4, - eval_env: Optional[GymEnv] = None, - eval_freq: int = -1, - n_eval_episodes: int = 5, - tb_log_name: str = "DQN", - eval_log_path: Optional[str] = None, - reset_num_timesteps: bool = True, - ) -> OffPolicyAlgorithm: - - return super(DQN, self).learn( - total_timesteps=total_timesteps, - callback=callback, - log_interval=log_interval, - eval_env=eval_env, - eval_freq=eval_freq, - n_eval_episodes=n_eval_episodes, - tb_log_name=tb_log_name, - eval_log_path=eval_log_path, - reset_num_timesteps=reset_num_timesteps, - ) - - def _excluded_save_params(self) -> List[str]: - return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"] - - def _get_torch_save_params(self) -> Tuple[List[str], List[str]]: - state_dicts = ["policy", "policy.optimizer"] - - return state_dicts, [] diff --git a/spaces/Sriharsha6902/Chat-Analyser/setup.sh b/spaces/Sriharsha6902/Chat-Analyser/setup.sh deleted file mode 100644 index f0ab2585fe12edf5a8ea8eb3a8614ba23ed52e7f..0000000000000000000000000000000000000000 --- a/spaces/Sriharsha6902/Chat-Analyser/setup.sh +++ /dev/null @@ -1,8 +0,0 @@ -mkdir -p ~/.streamlit/ -echo "\ -[server]\n\ -headless = true\n\ -port = $PORT\n\ -enableCORS = false\n\ -\n\ -" > ~/.streamlit/config.toml \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/__init__.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/__init__.py deleted file mode 100644 index 75e25a0212f98e4a18d97c86c6cda225636a3215..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-"""Utilities.""" diff --git a/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients2/README.md b/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients2/README.md deleted file mode 100644 index e4f3fb4d2deda2840b96484da6d4e12a183b8da1..0000000000000000000000000000000000000000 --- a/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SumanthKarnati Image2Ingredients2 -emoji: ⚡ -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/shellapp.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/shellapp.py deleted file mode 100644 index 29325a0ad2b427aea25f3088fa5648b520c6ca4b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/shellapp.py +++ /dev/null @@ -1,451 +0,0 @@ -# encoding: utf-8 -""" -A mixin for :class:`~IPython.core.application.Application` classes that -launch InteractiveShell instances, load extensions, etc. -""" - -# Copyright (c) IPython Development Team. -# Distributed under the terms of the Modified BSD License. - -import glob -from itertools import chain -import os -import sys - -from traitlets.config.application import boolean_flag -from traitlets.config.configurable import Configurable -from traitlets.config.loader import Config -from IPython.core.application import SYSTEM_CONFIG_DIRS, ENV_CONFIG_DIRS -from IPython.core import pylabtools -from IPython.utils.contexts import preserve_keys -from IPython.utils.path import filefind -from traitlets import ( - Unicode, Instance, List, Bool, CaselessStrEnum, observe, - DottedObjectName, -) -from IPython.terminal import pt_inputhooks - -#----------------------------------------------------------------------------- -# Aliases and Flags -#----------------------------------------------------------------------------- - -gui_keys = tuple(sorted(pt_inputhooks.backends) + sorted(pt_inputhooks.aliases)) - -backend_keys = sorted(pylabtools.backends.keys()) -backend_keys.insert(0, 'auto') - -shell_flags = {} - -addflag = lambda *args: shell_flags.update(boolean_flag(*args)) -addflag('autoindent', 'InteractiveShell.autoindent', - 'Turn on autoindenting.', 'Turn off autoindenting.' -) -addflag('automagic', 'InteractiveShell.automagic', - """Turn on the auto calling of magic commands. Type %%magic at the - IPython prompt for more information.""", - 'Turn off the auto calling of magic commands.' -) -addflag('pdb', 'InteractiveShell.pdb', - "Enable auto calling the pdb debugger after every exception.", - "Disable auto calling the pdb debugger after every exception." -) -addflag('pprint', 'PlainTextFormatter.pprint', - "Enable auto pretty printing of results.", - "Disable auto pretty printing of results." -) -addflag('color-info', 'InteractiveShell.color_info', - """IPython can display information about objects via a set of functions, - and optionally can use colors for this, syntax highlighting - source code and various other elements. This is on by default, but can cause - problems with some pagers. If you see such problems, you can disable the - colours.""", - "Disable using colors for info related things." 
-) -addflag('ignore-cwd', 'InteractiveShellApp.ignore_cwd', - "Exclude the current working directory from sys.path", - "Include the current working directory in sys.path", -) -nosep_config = Config() -nosep_config.InteractiveShell.separate_in = '' -nosep_config.InteractiveShell.separate_out = '' -nosep_config.InteractiveShell.separate_out2 = '' - -shell_flags['nosep']=(nosep_config, "Eliminate all spacing between prompts.") -shell_flags['pylab'] = ( - {'InteractiveShellApp' : {'pylab' : 'auto'}}, - """Pre-load matplotlib and numpy for interactive use with - the default matplotlib backend.""" -) -shell_flags['matplotlib'] = ( - {'InteractiveShellApp' : {'matplotlib' : 'auto'}}, - """Configure matplotlib for interactive use with - the default matplotlib backend.""" -) - -# it's possible we don't want short aliases for *all* of these: -shell_aliases = dict( - autocall='InteractiveShell.autocall', - colors='InteractiveShell.colors', - logfile='InteractiveShell.logfile', - logappend='InteractiveShell.logappend', - c='InteractiveShellApp.code_to_run', - m='InteractiveShellApp.module_to_run', - ext="InteractiveShellApp.extra_extensions", - gui='InteractiveShellApp.gui', - pylab='InteractiveShellApp.pylab', - matplotlib='InteractiveShellApp.matplotlib', -) -shell_aliases['cache-size'] = 'InteractiveShell.cache_size' - -#----------------------------------------------------------------------------- -# Main classes and functions -#----------------------------------------------------------------------------- - -class InteractiveShellApp(Configurable): - """A Mixin for applications that start InteractiveShell instances. - - Provides configurables for loading extensions and executing files - as part of configuring a Shell environment. - - The following methods should be called by the :meth:`initialize` method - of the subclass: - - - :meth:`init_path` - - :meth:`init_shell` (to be implemented by the subclass) - - :meth:`init_gui_pylab` - - :meth:`init_extensions` - - :meth:`init_code` - """ - extensions = List(Unicode(), - help="A list of dotted module names of IPython extensions to load." - ).tag(config=True) - - extra_extensions = List( - DottedObjectName(), - help=""" - Dotted module name(s) of one or more IPython extensions to load. - - For specifying extra extensions to load on the command-line. - - .. versionadded:: 7.10 - """, - ).tag(config=True) - - reraise_ipython_extension_failures = Bool(False, - help="Reraise exceptions encountered loading IPython extensions?", - ).tag(config=True) - - # Extensions that are always loaded (not configurable) - default_extensions = List(Unicode(), [u'storemagic']).tag(config=False) - - hide_initial_ns = Bool(True, - help="""Should variables loaded at startup (by startup files, exec_lines, etc.) - be hidden from tools like %who?""" - ).tag(config=True) - - exec_files = List(Unicode(), - help="""List of files to run at IPython startup.""" - ).tag(config=True) - exec_PYTHONSTARTUP = Bool(True, - help="""Run the file referenced by the PYTHONSTARTUP environment - variable at IPython startup.""" - ).tag(config=True) - file_to_run = Unicode('', - help="""A file to be run""").tag(config=True) - - exec_lines = List(Unicode(), - help="""lines of code to run at IPython startup.""" - ).tag(config=True) - code_to_run = Unicode('', - help="Execute the given command string." - ).tag(config=True) - module_to_run = Unicode('', - help="Run the module as a script." 
- ).tag(config=True) - gui = CaselessStrEnum(gui_keys, allow_none=True, - help="Enable GUI event loop integration with any of {0}.".format(gui_keys) - ).tag(config=True) - matplotlib = CaselessStrEnum(backend_keys, allow_none=True, - help="""Configure matplotlib for interactive use with - the default matplotlib backend.""" - ).tag(config=True) - pylab = CaselessStrEnum(backend_keys, allow_none=True, - help="""Pre-load matplotlib and numpy for interactive use, - selecting a particular matplotlib backend and loop integration. - """ - ).tag(config=True) - pylab_import_all = Bool(True, - help="""If true, IPython will populate the user namespace with numpy, pylab, etc. - and an ``import *`` is done from numpy and pylab, when using pylab mode. - - When False, pylab mode should not import any names into the user namespace. - """ - ).tag(config=True) - ignore_cwd = Bool( - False, - help="""If True, IPython will not add the current working directory to sys.path. - When False, the current working directory is added to sys.path, allowing imports - of modules defined in the current directory.""" - ).tag(config=True) - shell = Instance('IPython.core.interactiveshell.InteractiveShellABC', - allow_none=True) - # whether interact-loop should start - interact = Bool(True) - - user_ns = Instance(dict, args=None, allow_none=True) - @observe('user_ns') - def _user_ns_changed(self, change): - if self.shell is not None: - self.shell.user_ns = change['new'] - self.shell.init_user_ns() - - def init_path(self): - """Add current working directory, '', to sys.path - - Unlike Python's default, we insert before the first `site-packages` - or `dist-packages` directory, - so that it is after the standard library. - - .. versionchanged:: 7.2 - Try to insert after the standard library, instead of first. - .. versionchanged:: 8.0 - Allow optionally not including the current directory in sys.path - """ - if '' in sys.path or self.ignore_cwd: - return - for idx, path in enumerate(sys.path): - parent, last_part = os.path.split(path) - if last_part in {'site-packages', 'dist-packages'}: - break - else: - # no site-packages or dist-packages found (?!) - # back to original behavior of inserting at the front - idx = 0 - sys.path.insert(idx, '') - - def init_shell(self): - raise NotImplementedError("Override in subclasses") - - def init_gui_pylab(self): - """Enable GUI event loop integration, taking pylab into account.""" - enable = False - shell = self.shell - if self.pylab: - enable = lambda key: shell.enable_pylab(key, import_all=self.pylab_import_all) - key = self.pylab - elif self.matplotlib: - enable = shell.enable_matplotlib - key = self.matplotlib - elif self.gui: - enable = shell.enable_gui - key = self.gui - - if not enable: - return - - try: - r = enable(key) - except ImportError: - self.log.warning("Eventloop or matplotlib integration failed. Is matplotlib installed?") - self.shell.showtraceback() - return - except Exception: - self.log.warning("GUI event loop or pylab initialization failed") - self.shell.showtraceback() - return - - if isinstance(r, tuple): - gui, backend = r[:2] - self.log.info("Enabling GUI event loop integration, " - "eventloop=%s, matplotlib=%s", gui, backend) - if key == "auto": - print("Using matplotlib backend: %s" % backend) - else: - gui = r - self.log.info("Enabling GUI event loop integration, " - "eventloop=%s", gui) - - def init_extensions(self): - """Load all IPython extensions in IPythonApp.extensions. 
- - This uses the :meth:`ExtensionManager.load_extensions` to load all - the extensions listed in ``self.extensions``. - """ - try: - self.log.debug("Loading IPython extensions...") - extensions = ( - self.default_extensions + self.extensions + self.extra_extensions - ) - for ext in extensions: - try: - self.log.info("Loading IPython extension: %s", ext) - self.shell.extension_manager.load_extension(ext) - except: - if self.reraise_ipython_extension_failures: - raise - msg = ("Error in loading extension: {ext}\n" - "Check your config files in {location}".format( - ext=ext, - location=self.profile_dir.location - )) - self.log.warning(msg, exc_info=True) - except: - if self.reraise_ipython_extension_failures: - raise - self.log.warning("Unknown error in loading extensions:", exc_info=True) - - def init_code(self): - """run the pre-flight code, specified via exec_lines""" - self._run_startup_files() - self._run_exec_lines() - self._run_exec_files() - - # Hide variables defined here from %who etc. - if self.hide_initial_ns: - self.shell.user_ns_hidden.update(self.shell.user_ns) - - # command-line execution (ipython -i script.py, ipython -m module) - # should *not* be excluded from %whos - self._run_cmd_line_code() - self._run_module() - - # flush output, so itwon't be attached to the first cell - sys.stdout.flush() - sys.stderr.flush() - self.shell._sys_modules_keys = set(sys.modules.keys()) - - def _run_exec_lines(self): - """Run lines of code in IPythonApp.exec_lines in the user's namespace.""" - if not self.exec_lines: - return - try: - self.log.debug("Running code from IPythonApp.exec_lines...") - for line in self.exec_lines: - try: - self.log.info("Running code in user namespace: %s" % - line) - self.shell.run_cell(line, store_history=False) - except: - self.log.warning("Error in executing line in user " - "namespace: %s" % line) - self.shell.showtraceback() - except: - self.log.warning("Unknown error in handling IPythonApp.exec_lines:") - self.shell.showtraceback() - - def _exec_file(self, fname, shell_futures=False): - try: - full_filename = filefind(fname, [u'.', self.ipython_dir]) - except IOError: - self.log.warning("File not found: %r"%fname) - return - # Make sure that the running script gets a proper sys.argv as if it - # were run from a system shell. - save_argv = sys.argv - sys.argv = [full_filename] + self.extra_args[1:] - try: - if os.path.isfile(full_filename): - self.log.info("Running file in user namespace: %s" % - full_filename) - # Ensure that __file__ is always defined to match Python - # behavior. 
- with preserve_keys(self.shell.user_ns, '__file__'): - self.shell.user_ns['__file__'] = fname - if full_filename.endswith('.ipy') or full_filename.endswith('.ipynb'): - self.shell.safe_execfile_ipy(full_filename, - shell_futures=shell_futures) - else: - # default to python, even without extension - self.shell.safe_execfile(full_filename, - self.shell.user_ns, - shell_futures=shell_futures, - raise_exceptions=True) - finally: - sys.argv = save_argv - - def _run_startup_files(self): - """Run files from profile startup directory""" - startup_dirs = [self.profile_dir.startup_dir] + [ - os.path.join(p, 'startup') for p in chain(ENV_CONFIG_DIRS, SYSTEM_CONFIG_DIRS) - ] - startup_files = [] - - if self.exec_PYTHONSTARTUP and os.environ.get('PYTHONSTARTUP', False) and \ - not (self.file_to_run or self.code_to_run or self.module_to_run): - python_startup = os.environ['PYTHONSTARTUP'] - self.log.debug("Running PYTHONSTARTUP file %s...", python_startup) - try: - self._exec_file(python_startup) - except: - self.log.warning("Unknown error in handling PYTHONSTARTUP file %s:", python_startup) - self.shell.showtraceback() - for startup_dir in startup_dirs[::-1]: - startup_files += glob.glob(os.path.join(startup_dir, '*.py')) - startup_files += glob.glob(os.path.join(startup_dir, '*.ipy')) - if not startup_files: - return - - self.log.debug("Running startup files from %s...", startup_dir) - try: - for fname in sorted(startup_files): - self._exec_file(fname) - except: - self.log.warning("Unknown error in handling startup files:") - self.shell.showtraceback() - - def _run_exec_files(self): - """Run files from IPythonApp.exec_files""" - if not self.exec_files: - return - - self.log.debug("Running files in IPythonApp.exec_files...") - try: - for fname in self.exec_files: - self._exec_file(fname) - except: - self.log.warning("Unknown error in handling IPythonApp.exec_files:") - self.shell.showtraceback() - - def _run_cmd_line_code(self): - """Run code or file specified at the command-line""" - if self.code_to_run: - line = self.code_to_run - try: - self.log.info("Running code given at command line (c=): %s" % - line) - self.shell.run_cell(line, store_history=False) - except: - self.log.warning("Error in executing line in user namespace: %s" % - line) - self.shell.showtraceback() - if not self.interact: - self.exit(1) - - # Like Python itself, ignore the second if the first of these is present - elif self.file_to_run: - fname = self.file_to_run - if os.path.isdir(fname): - fname = os.path.join(fname, "__main__.py") - if not os.path.exists(fname): - self.log.warning("File '%s' doesn't exist", fname) - if not self.interact: - self.exit(2) - try: - self._exec_file(fname, shell_futures=True) - except: - self.shell.showtraceback(tb_offset=4) - if not self.interact: - self.exit(1) - - def _run_module(self): - """Run module specified at the command-line.""" - if self.module_to_run: - # Make sure that the module gets a proper sys.argv as if it were - # run using `python -m`. 
- save_argv = sys.argv - sys.argv = [sys.executable] + self.extra_args - try: - self.shell.safe_run_module(self.module_to_run, - self.shell.user_ns) - finally: - sys.argv = save_argv diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_shortcuts.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_shortcuts.py deleted file mode 100644 index 45bb327988b918151fffc6314d855ac1d788d77b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_shortcuts.py +++ /dev/null @@ -1,468 +0,0 @@ -import pytest -from IPython.terminal.shortcuts.auto_suggest import ( - accept, - accept_or_jump_to_end, - accept_token, - accept_character, - accept_word, - accept_and_keep_cursor, - discard, - NavigableAutoSuggestFromHistory, - swap_autosuggestion_up, - swap_autosuggestion_down, -) -from IPython.terminal.shortcuts.auto_match import skip_over -from IPython.terminal.shortcuts import create_ipython_shortcuts - -from prompt_toolkit.history import InMemoryHistory -from prompt_toolkit.buffer import Buffer -from prompt_toolkit.document import Document -from prompt_toolkit.auto_suggest import AutoSuggestFromHistory - -from unittest.mock import patch, Mock - - -def test_deprected(): - import IPython.terminal.shortcuts.auto_suggest as iptsa - - with pytest.warns(DeprecationWarning, match=r"8\.12.+accept_or_jump_to_end"): - iptsa.accept_in_vi_insert_mode - - -def make_event(text, cursor, suggestion): - event = Mock() - event.current_buffer = Mock() - event.current_buffer.suggestion = Mock() - event.current_buffer.text = text - event.current_buffer.cursor_position = cursor - event.current_buffer.suggestion.text = suggestion - event.current_buffer.document = Document(text=text, cursor_position=cursor) - return event - - -@pytest.mark.parametrize( - "text, suggestion, expected", - [ - ("", "def out(tag: str, n=50):", "def out(tag: str, n=50):"), - ("def ", "out(tag: str, n=50):", "out(tag: str, n=50):"), - ], -) -def test_accept(text, suggestion, expected): - event = make_event(text, len(text), suggestion) - buffer = event.current_buffer - buffer.insert_text = Mock() - accept(event) - assert buffer.insert_text.called - assert buffer.insert_text.call_args[0] == (expected,) - - -@pytest.mark.parametrize( - "text, suggestion", - [ - ("", "def out(tag: str, n=50):"), - ("def ", "out(tag: str, n=50):"), - ], -) -def test_discard(text, suggestion): - event = make_event(text, len(text), suggestion) - buffer = event.current_buffer - buffer.insert_text = Mock() - discard(event) - assert not buffer.insert_text.called - assert buffer.suggestion is None - - -@pytest.mark.parametrize( - "text, cursor, suggestion, called", - [ - ("123456", 6, "123456789", True), - ("123456", 3, "123456789", False), - ("123456 \n789", 6, "123456789", True), - ], -) -def test_autosuggest_at_EOL(text, cursor, suggestion, called): - """ - test that autosuggest is only applied at end of line. 
- """ - - event = make_event(text, cursor, suggestion) - event.current_buffer.insert_text = Mock() - accept_or_jump_to_end(event) - if called: - event.current_buffer.insert_text.assert_called() - else: - event.current_buffer.insert_text.assert_not_called() - # event.current_buffer.document.get_end_of_line_position.assert_called() - - -@pytest.mark.parametrize( - "text, suggestion, expected", - [ - ("", "def out(tag: str, n=50):", "def "), - ("d", "ef out(tag: str, n=50):", "ef "), - ("de ", "f out(tag: str, n=50):", "f "), - ("def", " out(tag: str, n=50):", " "), - ("def ", "out(tag: str, n=50):", "out("), - ("def o", "ut(tag: str, n=50):", "ut("), - ("def ou", "t(tag: str, n=50):", "t("), - ("def out", "(tag: str, n=50):", "("), - ("def out(", "tag: str, n=50):", "tag: "), - ("def out(t", "ag: str, n=50):", "ag: "), - ("def out(ta", "g: str, n=50):", "g: "), - ("def out(tag", ": str, n=50):", ": "), - ("def out(tag:", " str, n=50):", " "), - ("def out(tag: ", "str, n=50):", "str, "), - ("def out(tag: s", "tr, n=50):", "tr, "), - ("def out(tag: st", "r, n=50):", "r, "), - ("def out(tag: str", ", n=50):", ", n"), - ("def out(tag: str,", " n=50):", " n"), - ("def out(tag: str, ", "n=50):", "n="), - ("def out(tag: str, n", "=50):", "="), - ("def out(tag: str, n=", "50):", "50)"), - ("def out(tag: str, n=5", "0):", "0)"), - ("def out(tag: str, n=50", "):", "):"), - ("def out(tag: str, n=50)", ":", ":"), - ], -) -def test_autosuggest_token(text, suggestion, expected): - event = make_event(text, len(text), suggestion) - event.current_buffer.insert_text = Mock() - accept_token(event) - assert event.current_buffer.insert_text.called - assert event.current_buffer.insert_text.call_args[0] == (expected,) - - -@pytest.mark.parametrize( - "text, suggestion, expected", - [ - ("", "def out(tag: str, n=50):", "d"), - ("d", "ef out(tag: str, n=50):", "e"), - ("de ", "f out(tag: str, n=50):", "f"), - ("def", " out(tag: str, n=50):", " "), - ], -) -def test_accept_character(text, suggestion, expected): - event = make_event(text, len(text), suggestion) - event.current_buffer.insert_text = Mock() - accept_character(event) - assert event.current_buffer.insert_text.called - assert event.current_buffer.insert_text.call_args[0] == (expected,) - - -@pytest.mark.parametrize( - "text, suggestion, expected", - [ - ("", "def out(tag: str, n=50):", "def "), - ("d", "ef out(tag: str, n=50):", "ef "), - ("de", "f out(tag: str, n=50):", "f "), - ("def", " out(tag: str, n=50):", " "), - # (this is why we also have accept_token) - ("def ", "out(tag: str, n=50):", "out(tag: "), - ], -) -def test_accept_word(text, suggestion, expected): - event = make_event(text, len(text), suggestion) - event.current_buffer.insert_text = Mock() - accept_word(event) - assert event.current_buffer.insert_text.called - assert event.current_buffer.insert_text.call_args[0] == (expected,) - - -@pytest.mark.parametrize( - "text, suggestion, expected, cursor", - [ - ("", "def out(tag: str, n=50):", "def out(tag: str, n=50):", 0), - ("def ", "out(tag: str, n=50):", "out(tag: str, n=50):", 4), - ], -) -def test_accept_and_keep_cursor(text, suggestion, expected, cursor): - event = make_event(text, cursor, suggestion) - buffer = event.current_buffer - buffer.insert_text = Mock() - accept_and_keep_cursor(event) - assert buffer.insert_text.called - assert buffer.insert_text.call_args[0] == (expected,) - assert buffer.cursor_position == cursor - - -def test_autosuggest_token_empty(): - full = "def out(tag: str, n=50):" - event = make_event(full, len(full), 
"") - event.current_buffer.insert_text = Mock() - - with patch( - "prompt_toolkit.key_binding.bindings.named_commands.forward_word" - ) as forward_word: - accept_token(event) - assert not event.current_buffer.insert_text.called - assert forward_word.called - - -def test_other_providers(): - """Ensure that swapping autosuggestions does not break with other providers""" - provider = AutoSuggestFromHistory() - ip = get_ipython() - ip.auto_suggest = provider - event = Mock() - event.current_buffer = Buffer() - assert swap_autosuggestion_up(event) is None - assert swap_autosuggestion_down(event) is None - - -async def test_navigable_provider(): - provider = NavigableAutoSuggestFromHistory() - history = InMemoryHistory(history_strings=["very_a", "very", "very_b", "very_c"]) - buffer = Buffer(history=history) - ip = get_ipython() - ip.auto_suggest = provider - - async for _ in history.load(): - pass - - buffer.cursor_position = 5 - buffer.text = "very" - - up = swap_autosuggestion_up - down = swap_autosuggestion_down - - event = Mock() - event.current_buffer = buffer - - def get_suggestion(): - suggestion = provider.get_suggestion(buffer, buffer.document) - buffer.suggestion = suggestion - return suggestion - - assert get_suggestion().text == "_c" - - # should go up - up(event) - assert get_suggestion().text == "_b" - - # should skip over 'very' which is identical to buffer content - up(event) - assert get_suggestion().text == "_a" - - # should cycle back to beginning - up(event) - assert get_suggestion().text == "_c" - - # should cycle back through end boundary - down(event) - assert get_suggestion().text == "_a" - - down(event) - assert get_suggestion().text == "_b" - - down(event) - assert get_suggestion().text == "_c" - - down(event) - assert get_suggestion().text == "_a" - - -async def test_navigable_provider_multiline_entries(): - provider = NavigableAutoSuggestFromHistory() - history = InMemoryHistory(history_strings=["very_a\nvery_b", "very_c"]) - buffer = Buffer(history=history) - ip = get_ipython() - ip.auto_suggest = provider - - async for _ in history.load(): - pass - - buffer.cursor_position = 5 - buffer.text = "very" - up = swap_autosuggestion_up - down = swap_autosuggestion_down - - event = Mock() - event.current_buffer = buffer - - def get_suggestion(): - suggestion = provider.get_suggestion(buffer, buffer.document) - buffer.suggestion = suggestion - return suggestion - - assert get_suggestion().text == "_c" - - up(event) - assert get_suggestion().text == "_b" - - up(event) - assert get_suggestion().text == "_a" - - down(event) - assert get_suggestion().text == "_b" - - down(event) - assert get_suggestion().text == "_c" - - -def create_session_mock(): - session = Mock() - session.default_buffer = Buffer() - return session - - -def test_navigable_provider_connection(): - provider = NavigableAutoSuggestFromHistory() - provider.skip_lines = 1 - - session_1 = create_session_mock() - provider.connect(session_1) - - assert provider.skip_lines == 1 - session_1.default_buffer.on_text_insert.fire() - assert provider.skip_lines == 0 - - session_2 = create_session_mock() - provider.connect(session_2) - provider.skip_lines = 2 - - assert provider.skip_lines == 2 - session_2.default_buffer.on_text_insert.fire() - assert provider.skip_lines == 0 - - provider.skip_lines = 3 - provider.disconnect() - session_1.default_buffer.on_text_insert.fire() - session_2.default_buffer.on_text_insert.fire() - assert provider.skip_lines == 3 - - -@pytest.fixture -def ipython_with_prompt(): - ip = get_ipython() 
- ip.pt_app = Mock() - ip.pt_app.key_bindings = create_ipython_shortcuts(ip) - try: - yield ip - finally: - ip.pt_app = None - - -def find_bindings_by_command(command): - ip = get_ipython() - return [ - binding - for binding in ip.pt_app.key_bindings.bindings - if binding.handler == command - ] - - -def test_modify_unique_shortcut(ipython_with_prompt): - original = find_bindings_by_command(accept_token) - assert len(original) == 1 - - ipython_with_prompt.shortcuts = [ - {"command": "IPython:auto_suggest.accept_token", "new_keys": ["a", "b", "c"]} - ] - matched = find_bindings_by_command(accept_token) - assert len(matched) == 1 - assert list(matched[0].keys) == ["a", "b", "c"] - assert list(matched[0].keys) != list(original[0].keys) - assert matched[0].filter == original[0].filter - - ipython_with_prompt.shortcuts = [ - {"command": "IPython:auto_suggest.accept_token", "new_filter": "always"} - ] - matched = find_bindings_by_command(accept_token) - assert len(matched) == 1 - assert list(matched[0].keys) != ["a", "b", "c"] - assert list(matched[0].keys) == list(original[0].keys) - assert matched[0].filter != original[0].filter - - -def test_disable_shortcut(ipython_with_prompt): - matched = find_bindings_by_command(accept_token) - assert len(matched) == 1 - - ipython_with_prompt.shortcuts = [ - {"command": "IPython:auto_suggest.accept_token", "new_keys": []} - ] - matched = find_bindings_by_command(accept_token) - assert len(matched) == 0 - - ipython_with_prompt.shortcuts = [] - matched = find_bindings_by_command(accept_token) - assert len(matched) == 1 - - -def test_modify_shortcut_with_filters(ipython_with_prompt): - matched = find_bindings_by_command(skip_over) - matched_keys = {m.keys[0] for m in matched} - assert matched_keys == {")", "]", "}", "'", '"'} - - with pytest.raises(ValueError, match="Multiple shortcuts matching"): - ipython_with_prompt.shortcuts = [ - {"command": "IPython:auto_match.skip_over", "new_keys": ["x"]} - ] - - ipython_with_prompt.shortcuts = [ - { - "command": "IPython:auto_match.skip_over", - "new_keys": ["x"], - "match_filter": "focused_insert & auto_match & followed_by_single_quote", - } - ] - matched = find_bindings_by_command(skip_over) - matched_keys = {m.keys[0] for m in matched} - assert matched_keys == {")", "]", "}", "x", '"'} - - -def example_command(): - pass - - -def test_add_shortcut_for_new_command(ipython_with_prompt): - matched = find_bindings_by_command(example_command) - assert len(matched) == 0 - - with pytest.raises(ValueError, match="example_command is not a known"): - ipython_with_prompt.shortcuts = [ - {"command": "example_command", "new_keys": ["x"]} - ] - matched = find_bindings_by_command(example_command) - assert len(matched) == 0 - - -def test_modify_shortcut_failure(ipython_with_prompt): - with pytest.raises(ValueError, match="No shortcuts matching"): - ipython_with_prompt.shortcuts = [ - { - "command": "IPython:auto_match.skip_over", - "match_keys": ["x"], - "new_keys": ["y"], - } - ] - - -def test_add_shortcut_for_existing_command(ipython_with_prompt): - matched = find_bindings_by_command(skip_over) - assert len(matched) == 5 - - with pytest.raises(ValueError, match="Cannot add a shortcut without keys"): - ipython_with_prompt.shortcuts = [ - {"command": "IPython:auto_match.skip_over", "new_keys": [], "create": True} - ] - - ipython_with_prompt.shortcuts = [ - {"command": "IPython:auto_match.skip_over", "new_keys": ["x"], "create": True} - ] - matched = find_bindings_by_command(skip_over) - assert len(matched) == 6 - - 
ipython_with_prompt.shortcuts = [] - matched = find_bindings_by_command(skip_over) - assert len(matched) == 5 - - -def test_setting_shortcuts_before_pt_app_init(): - ipython = get_ipython() - assert ipython.pt_app is None - shortcuts = [ - {"command": "IPython:auto_match.skip_over", "new_keys": ["x"], "create": True} - ] - ipython.shortcuts = shortcuts - assert ipython.shortcuts == shortcuts diff --git a/spaces/Suniilkumaar/SwapMukham/nsfw_checker/__init__.py b/spaces/Suniilkumaar/SwapMukham/nsfw_checker/__init__.py deleted file mode 100644 index a6dc0ae6d70e03fa601037db4846b40a91871ed4..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/SwapMukham/nsfw_checker/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . opennsfw import NSFWChecker \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/upfirdn2d.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/upfirdn2d.py deleted file mode 100644 index c8bb2c3c949eed38a6465ed369fa881538dca010..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/upfirdn2d.py +++ /dev/null @@ -1,330 +0,0 @@ -# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py # noqa:E501 - -# Copyright (c) 2021, NVIDIA Corporation. All rights reserved. -# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -# Augmentation (ADA) -# ======================================================================= - -# 1. Definitions - -# "Licensor" means any person or entity that distributes its Work. - -# "Software" means the original work of authorship made available under -# this License. - -# "Work" means the Software and any additions to or derivative works of -# the Software that are made available under this License. - -# The terms "reproduce," "reproduction," "derivative works," and -# "distribution" have the meaning as provided under U.S. copyright law; -# provided, however, that for the purposes of this License, derivative -# works shall not include works that remain separable from, or merely -# link (or bind by name) to the interfaces of, the Work. - -# Works, including the Software, are "made available" under this License -# by including in or with the Work either (a) a copyright notice -# referencing the applicability of this License to the Work, or (b) a -# copy of this License. - -# 2. License Grants - -# 2.1 Copyright Grant. Subject to the terms and conditions of this -# License, each Licensor grants to you a perpetual, worldwide, -# non-exclusive, royalty-free, copyright license to reproduce, -# prepare derivative works of, publicly display, publicly perform, -# sublicense and distribute its Work and any resulting derivative -# works in any form. - -# 3. Limitations - -# 3.1 Redistribution. You may reproduce or distribute the Work only -# if (a) you do so under this License, (b) you include a complete -# copy of this License with your distribution, and (c) you retain -# without modification any copyright, patent, trademark, or -# attribution notices that are present in the Work. - -# 3.2 Derivative Works. You may specify that additional or different -# terms apply to the use, reproduction, and distribution of your -# derivative works of the Work ("Your Terms") only if (a) Your Terms -# provide that the use limitation in Section 3.3 applies to your -# derivative works, and (b) you identify the specific derivative -# works that are subject to Your Terms. 
Notwithstanding Your Terms, -# this License (including the redistribution requirements in Section -# 3.1) will continue to apply to the Work itself. - -# 3.3 Use Limitation. The Work and any derivative works thereof only -# may be used or intended for use non-commercially. Notwithstanding -# the foregoing, NVIDIA and its affiliates may use the Work and any -# derivative works commercially. As used herein, "non-commercially" -# means for research or evaluation purposes only. - -# 3.4 Patent Claims. If you bring or threaten to bring a patent claim -# against any Licensor (including any claim, cross-claim or -# counterclaim in a lawsuit) to enforce any patents that you allege -# are infringed by any Work, then your rights under this License from -# such Licensor (including the grant in Section 2.1) will terminate -# immediately. - -# 3.5 Trademarks. This License does not grant any rights to use any -# Licensor’s or its affiliates’ names, logos, or trademarks, except -# as necessary to reproduce the notices described in this License. - -# 3.6 Termination. If you violate any term of this License, then your -# rights under this License (including the grant in Section 2.1) will -# terminate immediately. - -# 4. Disclaimer of Warranty. - -# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -# THIS LICENSE. - -# 5. Limitation of Liability. - -# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -# THE POSSIBILITY OF SUCH DAMAGES. 
- -# ======================================================================= - -import torch -from torch.autograd import Function -from torch.nn import functional as F - -from annotator.uniformer.mmcv.utils import to_2tuple -from ..utils import ext_loader - -upfirdn2d_ext = ext_loader.load_ext('_ext', ['upfirdn2d']) - - -class UpFirDn2dBackward(Function): - - @staticmethod - def forward(ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, - in_size, out_size): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_ext.upfirdn2d( - grad_output, - grad_kernel, - up_x=down_x, - up_y=down_y, - down_x=up_x, - down_y=up_y, - pad_x0=g_pad_x0, - pad_x1=g_pad_x1, - pad_y0=g_pad_y0, - pad_y1=g_pad_y1) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], - in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], - ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_ext.upfirdn2d( - gradgrad_input, - kernel, - up_x=ctx.up_x, - up_y=ctx.up_y, - down_x=ctx.down_x, - down_y=ctx.down_y, - pad_x0=ctx.pad_x0, - pad_x1=ctx.pad_x1, - pad_y0=ctx.pad_y0, - pad_y1=ctx.pad_y1) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], - # ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.in_size[1], - ctx.out_size[0], ctx.out_size[1]) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_ext.upfirdn2d( - input, - kernel, - up_x=up_x, - up_y=up_y, - down_x=down_x, - down_y=down_y, - pad_x0=pad_x0, - pad_x1=pad_x1, - pad_y0=pad_y0, - pad_y1=pad_y1) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - """UpFRIDn for 2d features. 
- - UpFIRDn is short for upsample, apply FIR filter and downsample. More - details can be found in: - https://www.mathworks.com/help/signal/ref/upfirdn.html - - Args: - input (Tensor): Tensor with shape of (n, c, h, w). - kernel (Tensor): Filter kernel. - up (int | tuple[int], optional): Upsampling factor. If given a number, - we will use this factor for the both height and width side. - Defaults to 1. - down (int | tuple[int], optional): Downsampling factor. If given a - number, we will use this factor for the both height and width side. - Defaults to 1. - pad (tuple[int], optional): Padding for tensors, (x_pad, y_pad) or - (x_pad_0, x_pad_1, y_pad_0, y_pad_1). Defaults to (0, 0). - - Returns: - Tensor: Tensor after UpFIRDn. - """ - if input.device.type == 'cpu': - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - up = to_2tuple(up) - - down = to_2tuple(down) - - out = upfirdn2d_native(input, kernel, up[0], up[1], down[0], down[1], - pad[0], pad[1], pad[2], pad[3]) - else: - _up = to_2tuple(up) - - _down = to_2tuple(down) - - if len(pad) == 4: - _pad = pad - elif len(pad) == 2: - _pad = (pad[0], pad[1], pad[0], pad[1]) - - out = UpFirDn2d.apply(input, kernel, _up, _down, _pad) - - return out - - -def upfirdn2d_native(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, - pad_y0, pad_y1): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, - [0, 0, - max(pad_x0, 0), - max(pad_x1, 0), - max(pad_y0, 0), - max(pad_y1, 0)]) - out = out[:, - max(-pad_y0, 0):out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0):out.shape[2] - max(-pad_x1, 0), :, ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/monotonic_align/core.py b/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/monotonic_align/core.py deleted file mode 100644 index 7c962adea65543ef426034c4d53c4f0e615e8181..0000000000000000000000000000000000000000 --- a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/monotonic_align/core.py +++ /dev/null @@ -1,46 +0,0 @@ -import numba - - -@numba.jit( - numba.void( - numba.int32[:, :, ::1], - numba.float32[:, :, ::1], - numba.int32[::1], - numba.int32[::1], - ), - nopython=True, - nogil=True, -) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0.0 - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += 
max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and ( - index == y or value[y - 1, index] < value[y - 1, index - 1] - ): - index = index - 1 diff --git a/spaces/TabPFN/TabPFNPrediction/TabPFN/priors/__init__.py b/spaces/TabPFN/TabPFNPrediction/TabPFN/priors/__init__.py deleted file mode 100644 index 3407398b08379f975aa59cb35e731b82d2a50360..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNPrediction/TabPFN/priors/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from . import fast_gp, mlp, flexible_categorical, differentiable_prior, prior_bag - - - diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/__main__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/__main__.py deleted file mode 100644 index 2f7f8cbad05d3955be8fbe68ac8ba6c13ef974e6..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/__main__.py +++ /dev/null @@ -1,17 +0,0 @@ -""" - pygments.__main__ - ~~~~~~~~~~~~~~~~~ - - Main entry point for ``python -m pygments``. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import sys -from pip._vendor.pygments.cmdline import main - -try: - sys.exit(main(sys.argv)) -except KeyboardInterrupt: - sys.exit(1) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/scope.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/scope.py deleted file mode 100644 index c9d134cc3cedae929e5bef2b5547f7e33dc10a52..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/scope.py +++ /dev/null @@ -1,86 +0,0 @@ -from collections.abc import Mapping -from typing import TYPE_CHECKING, Any, Optional, Tuple - -from .highlighter import ReprHighlighter -from .panel import Panel -from .pretty import Pretty -from .table import Table -from .text import Text, TextType - -if TYPE_CHECKING: - from .console import ConsoleRenderable - - -def render_scope( - scope: "Mapping[str, Any]", - *, - title: Optional[TextType] = None, - sort_keys: bool = True, - indent_guides: bool = False, - max_length: Optional[int] = None, - max_string: Optional[int] = None, -) -> "ConsoleRenderable": - """Render python variables in a given scope. - - Args: - scope (Mapping): A mapping containing variable names and values. - title (str, optional): Optional title. Defaults to None. - sort_keys (bool, optional): Enable sorting of items. Defaults to True. - indent_guides (bool, optional): Enable indentation guides. Defaults to False. - max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to None. - max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to None. - - Returns: - ConsoleRenderable: A renderable object. 
- """ - highlighter = ReprHighlighter() - items_table = Table.grid(padding=(0, 1), expand=False) - items_table.add_column(justify="right") - - def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]: - """Sort special variables first, then alphabetically.""" - key, _ = item - return (not key.startswith("__"), key.lower()) - - items = sorted(scope.items(), key=sort_items) if sort_keys else scope.items() - for key, value in items: - key_text = Text.assemble( - (key, "scope.key.special" if key.startswith("__") else "scope.key"), - (" =", "scope.equals"), - ) - items_table.add_row( - key_text, - Pretty( - value, - highlighter=highlighter, - indent_guides=indent_guides, - max_length=max_length, - max_string=max_string, - ), - ) - return Panel.fit( - items_table, - title=title, - border_style="scope.border", - padding=(0, 1), - ) - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich import print - - print() - - def test(foo: float, bar: float) -> None: - list_of_things = [1, 2, 3, None, 4, True, False, "Hello World"] - dict_of_things = { - "version": "1.1", - "method": "confirmFruitPurchase", - "params": [["apple", "orange", "mangoes", "pomelo"], 1.123], - "id": "194521489", - } - print(render_scope(locals(), title="[i]locals", sort_keys=False)) - - test(20.3423, 3.1427) - print() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/versionpredicate.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/versionpredicate.py deleted file mode 100644 index d6c0c007aad871b9348fea57c9188d0ffd5f10d2..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/versionpredicate.py +++ /dev/null @@ -1,175 +0,0 @@ -"""Module for parsing and testing package version predicate strings. -""" -import re -from . import version -import operator - - -re_validPackage = re.compile(r"(?i)^\s*([a-z_]\w*(?:\.[a-z_]\w*)*)(.*)", re.ASCII) -# (package) (rest) - -re_paren = re.compile(r"^\s*\((.*)\)\s*$") # (list) inside of parentheses -re_splitComparison = re.compile(r"^\s*(<=|>=|<|>|!=|==)\s*([^\s,]+)\s*$") -# (comp) (version) - - -def splitUp(pred): - """Parse a single version comparison. - - Return (comparison string, StrictVersion) - """ - res = re_splitComparison.match(pred) - if not res: - raise ValueError("bad package restriction syntax: %r" % pred) - comp, verStr = res.groups() - with version.suppress_known_deprecation(): - other = version.StrictVersion(verStr) - return (comp, other) - - -compmap = { - "<": operator.lt, - "<=": operator.le, - "==": operator.eq, - ">": operator.gt, - ">=": operator.ge, - "!=": operator.ne, -} - - -class VersionPredicate: - """Parse and test package version predicates. 
- - >>> v = VersionPredicate('pyepat.abc (>1.0, <3333.3a1, !=1555.1b3)') - - The `name` attribute provides the full dotted name that is given:: - - >>> v.name - 'pyepat.abc' - - The str() of a `VersionPredicate` provides a normalized - human-readable version of the expression:: - - >>> print(v) - pyepat.abc (> 1.0, < 3333.3a1, != 1555.1b3) - - The `satisfied_by()` method can be used to determine with a given - version number is included in the set described by the version - restrictions:: - - >>> v.satisfied_by('1.1') - True - >>> v.satisfied_by('1.4') - True - >>> v.satisfied_by('1.0') - False - >>> v.satisfied_by('4444.4') - False - >>> v.satisfied_by('1555.1b3') - False - - `VersionPredicate` is flexible in accepting extra whitespace:: - - >>> v = VersionPredicate(' pat( == 0.1 ) ') - >>> v.name - 'pat' - >>> v.satisfied_by('0.1') - True - >>> v.satisfied_by('0.2') - False - - If any version numbers passed in do not conform to the - restrictions of `StrictVersion`, a `ValueError` is raised:: - - >>> v = VersionPredicate('p1.p2.p3.p4(>=1.0, <=1.3a1, !=1.2zb3)') - Traceback (most recent call last): - ... - ValueError: invalid version number '1.2zb3' - - It the module or package name given does not conform to what's - allowed as a legal module or package name, `ValueError` is - raised:: - - >>> v = VersionPredicate('foo-bar') - Traceback (most recent call last): - ... - ValueError: expected parenthesized list: '-bar' - - >>> v = VersionPredicate('foo bar (12.21)') - Traceback (most recent call last): - ... - ValueError: expected parenthesized list: 'bar (12.21)' - - """ - - def __init__(self, versionPredicateStr): - """Parse a version predicate string.""" - # Fields: - # name: package name - # pred: list of (comparison string, StrictVersion) - - versionPredicateStr = versionPredicateStr.strip() - if not versionPredicateStr: - raise ValueError("empty package restriction") - match = re_validPackage.match(versionPredicateStr) - if not match: - raise ValueError("bad package name in %r" % versionPredicateStr) - self.name, paren = match.groups() - paren = paren.strip() - if paren: - match = re_paren.match(paren) - if not match: - raise ValueError("expected parenthesized list: %r" % paren) - str = match.groups()[0] - self.pred = [splitUp(aPred) for aPred in str.split(",")] - if not self.pred: - raise ValueError("empty parenthesized list in %r" % versionPredicateStr) - else: - self.pred = [] - - def __str__(self): - if self.pred: - seq = [cond + " " + str(ver) for cond, ver in self.pred] - return self.name + " (" + ", ".join(seq) + ")" - else: - return self.name - - def satisfied_by(self, version): - """True if version is compatible with all the predicates in self. - The parameter version must be acceptable to the StrictVersion - constructor. It may be either a string or StrictVersion. - """ - for cond, ver in self.pred: - if not compmap[cond](version, ver): - return False - return True - - -_provision_rx = None - - -def split_provision(value): - """Return the name and optional version number of a provision. - - The version number, if given, will be returned as a `StrictVersion` - instance, otherwise it will be `None`. 
- - >>> split_provision('mypkg') - ('mypkg', None) - >>> split_provision(' mypkg( 1.2 ) ') - ('mypkg', StrictVersion ('1.2')) - """ - global _provision_rx - if _provision_rx is None: - _provision_rx = re.compile( - r"([a-zA-Z_]\w*(?:\.[a-zA-Z_]\w*)*)(?:\s*\(\s*([^)\s]+)\s*\))?$", re.ASCII - ) - value = value.strip() - m = _provision_rx.match(value) - if not m: - raise ValueError("illegal provides specification: %r" % value) - ver = m.group(2) or None - if ver: - with version.suppress_known_deprecation(): - ver = version.StrictVersion(ver) - return m.group(1), ver diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/export/README.md b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/export/README.md deleted file mode 100644 index 9fcd33513fb81ef3aeb4d3c8d9732324dffa2646..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/export/README.md +++ /dev/null @@ -1,13 +0,0 @@ - -This directory contains code to prepare a detectron2 model for deployment. -Currently it supports exporting a detectron2 model to Caffe2 format through ONNX. - -Please see [documentation](https://detectron2.readthedocs.io/tutorials/deployment.html) for its usage. - - -### Acknowledgements - -Thanks to Mobile Vision team at Facebook for developing the Caffe2 conversion tools. - -Thanks to Computing Platform Department - PAI team at Alibaba Group (@bddpqq, @chenbohua3) who -help export Detectron2 models to TorchScript. diff --git a/spaces/ThirdEyeData/Entity-Extraction/README.md b/spaces/ThirdEyeData/Entity-Extraction/README.md deleted file mode 100644 index eae771250da9ad9239ec46806ed45ec85d85cdfc..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Entity-Extraction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Entity Extraction -emoji: 🌖 -colorFrom: yellow -colorTo: pink -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/UserXTheUnknown/stablediffusion-infinity/js/toolbar.js b/spaces/UserXTheUnknown/stablediffusion-infinity/js/toolbar.js deleted file mode 100644 index 6c721bc84d3a41a0761ead58e6034ba4dfd4a6ef..0000000000000000000000000000000000000000 --- a/spaces/UserXTheUnknown/stablediffusion-infinity/js/toolbar.js +++ /dev/null @@ -1,581 +0,0 @@ -// import { w2ui,w2toolbar,w2field,query,w2alert, w2utils,w2confirm} from "https://rawgit.com/vitmalina/w2ui/master/dist/w2ui.es6.min.js" -// import { w2ui,w2toolbar,w2field,query,w2alert, w2utils,w2confirm} from "https://cdn.jsdelivr.net/gh/vitmalina/w2ui@master/dist/w2ui.es6.min.js" - -// https://stackoverflow.com/questions/36280818/how-to-convert-file-to-base64-in-javascript -function getBase64(file) { - var reader = new FileReader(); - reader.readAsDataURL(file); - reader.onload = function () { - add_image(reader.result); - // console.log(reader.result); - }; - reader.onerror = function (error) { - console.log("Error: ", error); - }; -} - -function getText(file) { - var reader = new FileReader(); - reader.readAsText(file); - reader.onload = function () { - window.postMessage(["load",reader.result],"*") - // console.log(reader.result); - }; - reader.onerror = function (error) { - console.log("Error: ", error); - }; -} - -document.querySelector("#upload_file").addEventListener("change", (event)=>{ - console.log(event); - let file = document.querySelector("#upload_file").files[0]; - 
getBase64(file); -}) - -document.querySelector("#upload_state").addEventListener("change", (event)=>{ - console.log(event); - let file = document.querySelector("#upload_state").files[0]; - getText(file); -}) - -open_setting = function() { - if (!w2ui.foo) { - new w2form({ - name: "foo", - style: "border: 0px; background-color: transparent;", - fields: [{ - field: "canvas_width", - type: "int", - required: true, - html: { - label: "Canvas Width" - } - }, - { - field: "canvas_height", - type: "int", - required: true, - html: { - label: "Canvas Height" - } - }, - ], - record: { - canvas_width: 1200, - canvas_height: 600, - }, - actions: { - Save() { - this.validate(); - let record = this.getCleanRecord(); - window.postMessage(["resize",record.canvas_width,record.canvas_height],"*"); - w2popup.close(); - }, - custom: { - text: "Cancel", - style: "text-transform: uppercase", - onClick(event) { - w2popup.close(); - } - } - } - }); - } - w2popup.open({ - title: "Form in a Popup", - body: "
    ", - style: "padding: 15px 0px 0px 0px", - width: 500, - height: 280, - showMax: true, - async onToggle(event) { - await event.complete - w2ui.foo.resize(); - } - }) - .then((event) => { - w2ui.foo.render("#form") - }); -} - -var button_lst=["clear", "load", "save", "export", "upload", "selection", "canvas", "eraser", "outpaint", "accept", "cancel", "retry", "prev", "current", "next", "eraser_size_btn", "eraser_size", "resize_selection", "scale", "zoom_in", "zoom_out", "help"]; -var upload_button_lst=['clear', 'load', 'save', "upload", 'export', 'outpaint', 'resize_selection', 'help', "setting"]; -var resize_button_lst=['clear', 'load', 'save', "upload", 'export', "selection", "canvas", "eraser", 'outpaint', 'resize_selection',"zoom_in", "zoom_out", 'help', "setting"]; -var outpaint_button_lst=['clear', 'load', 'save', "canvas", "eraser", "upload", 'export', 'resize_selection', "zoom_in", "zoom_out",'help', "setting"]; -var outpaint_result_lst=["accept", "cancel", "retry", "prev", "current", "next"]; -var outpaint_result_func_lst=["accept", "retry", "prev", "current", "next"]; - -function check_button(id,text="",checked=true,tooltip="") -{ - return { type: "check", id: id, text: text, icon: checked?"fa-solid fa-square-check":"fa-regular fa-square", checked: checked, tooltip: tooltip }; -} - -var toolbar=new w2toolbar({ - box: "#toolbar", - name: "toolbar", - tooltip: "top", - items: [ - { type: "button", id: "clear", text: "Reset", tooltip: "Reset Canvas", icon: "fa-solid fa-rectangle-xmark" }, - { type: "break" }, - { type: "button", id: "load", tooltip: "Load Canvas", icon: "fa-solid fa-file-import" }, - { type: "button", id: "save", tooltip: "Save Canvas", icon: "fa-solid fa-file-export" }, - { type: "button", id: "export", tooltip: "Export Image", icon: "fa-solid fa-floppy-disk" }, - { type: "break" }, - { type: "button", id: "upload", text: "Upload Image", icon: "fa-solid fa-upload" }, - { type: "break" }, - { type: "radio", id: "selection", group: "1", tooltip: "Selection", icon: "fa-solid fa-arrows-up-down-left-right", checked: true }, - { type: "radio", id: "canvas", group: "1", tooltip: "Canvas", icon: "fa-solid fa-image" }, - { type: "radio", id: "eraser", group: "1", tooltip: "Eraser", icon: "fa-solid fa-eraser" }, - { type: "break" }, - { type: "button", id: "outpaint", text: "Outpaint", tooltip: "Run Outpainting", icon: "fa-solid fa-brush" }, - { type: "break" }, - { type: "button", id: "accept", text: "Accept", tooltip: "Accept current result", icon: "fa-solid fa-check", hidden: true, disable:true,}, - { type: "button", id: "cancel", text: "Cancel", tooltip: "Cancel current outpainting/error", icon: "fa-solid fa-ban", hidden: true}, - { type: "button", id: "retry", text: "Retry", tooltip: "Retry", icon: "fa-solid fa-rotate", hidden: true, disable:true,}, - { type: "button", id: "prev", tooltip: "Prev Result", icon: "fa-solid fa-caret-left", hidden: true, disable:true,}, - { type: "html", id: "current", hidden: true, disable:true, - async onRefresh(event) { - await event.complete - let fragment = query.html(` -
    - ${this.sel_value ?? "1/1"} -
    `) - query(this.box).find("#tb_toolbar_item_current").append(fragment) - } - }, - { type: "button", id: "next", tooltip: "Next Result", icon: "fa-solid fa-caret-right", hidden: true,disable:true,}, - { type: "button", id: "add_image", text: "Add Image", icon: "fa-solid fa-file-circle-plus", hidden: true,disable:true,}, - { type: "button", id: "delete_image", text: "Delete Image", icon: "fa-solid fa-trash-can", hidden: true,disable:true,}, - { type: "button", id: "confirm", text: "Confirm", icon: "fa-solid fa-check", hidden: true,disable:true,}, - { type: "button", id: "cancel_overlay", text: "Cancel", icon: "fa-solid fa-ban", hidden: true,disable:true,}, - { type: "break" }, - { type: "spacer" }, - { type: "break" }, - { type: "button", id: "eraser_size_btn", tooltip: "Eraser Size", text:"Size", icon: "fa-solid fa-eraser", hidden: true, count: 32}, - { type: "html", id: "eraser_size", hidden: true, - async onRefresh(event) { - await event.complete - // let fragment = query.html(` - // - // `) - let fragment = query.html(` - - `) - fragment.filter("input").on("change", event => { - this.eraser_size = event.target.value; - window.overlay.freeDrawingBrush.width=this.eraser_size; - this.setCount("eraser_size_btn", event.target.value); - window.postMessage(["eraser_size", event.target.value],"*") - this.refresh(); - }) - query(this.box).find("#tb_toolbar_item_eraser_size").append(fragment) - } - }, - // { type: "button", id: "resize_eraser", tooltip: "Resize Eraser", icon: "fa-solid fa-sliders" }, - { type: "button", id: "resize_selection", text: "Resize Selection", tooltip: "Resize Selection", icon: "fa-solid fa-expand" }, - { type: "break" }, - { type: "html", id: "scale", - async onRefresh(event) { - await event.complete - let fragment = query.html(` -
    - ${this.scale_value ?? "100%"} -
    `) - query(this.box).find("#tb_toolbar_item_scale").append(fragment) - } - }, - { type: "button", id: "zoom_in", tooltip: "Zoom In", icon: "fa-solid fa-magnifying-glass-plus" }, - { type: "button", id: "zoom_out", tooltip: "Zoom Out", icon: "fa-solid fa-magnifying-glass-minus" }, - { type: "break" }, - { type: "button", id: "help", tooltip: "Help", icon: "fa-solid fa-circle-info" }, - { type: "new-line"}, - { type: "button", id: "setting", text: "Canvas Setting", tooltip: "Resize Canvas Here", icon: "fa-solid fa-sliders" }, - { type: "break" }, - check_button("enable_img2img","Enable Img2Img",false), - // check_button("use_correction","Photometric Correction",false), - check_button("resize_check","Resize Small Input",true), - check_button("enable_safety","Enable Safety Checker",true), - check_button("square_selection","Square Selection Only",false), - {type: "break"}, - check_button("use_seed","Use Seed:",false), - { type: "html", id: "seed_val", - async onRefresh(event) { - await event.complete - let fragment = query.html(` - `) - fragment.filter("input").on("change", event => { - this.config_obj.seed_val = event.target.value; - parent.config_obj=this.config_obj; - this.refresh(); - }) - query(this.box).find("#tb_toolbar_item_seed_val").append(fragment) - } - }, - { type: "button", id: "random_seed", tooltip: "Set a random seed", icon: "fa-solid fa-dice" }, - ], - onClick(event) { - switch(event.target){ - case "setting": - open_setting(); - break; - case "upload": - this.upload_mode=true - document.querySelector("#overlay_container").style.pointerEvents="auto"; - this.click("canvas"); - this.click("selection"); - this.show("confirm","cancel_overlay","add_image","delete_image"); - this.enable("confirm","cancel_overlay","add_image","delete_image"); - this.disable(...upload_button_lst); - query("#upload_file").click(); - if(this.upload_tip) - { - this.upload_tip=false; - w2utils.notify("Note that only visible images will be added to canvas",{timeout:10000,where:query("#container")}) - } - break; - case "resize_selection": - this.resize_mode=true; - this.disable(...resize_button_lst); - this.enable("confirm","cancel_overlay"); - this.show("confirm","cancel_overlay"); - window.postMessage(["resize_selection",""],"*"); - document.querySelector("#overlay_container").style.pointerEvents="auto"; - break; - case "confirm": - if(this.upload_mode) - { - export_image(); - } - else - { - let sel_box=this.selection_box; - window.postMessage(["resize_selection",sel_box.x,sel_box.y,sel_box.width,sel_box.height],"*"); - } - case "cancel_overlay": - end_overlay(); - this.hide("confirm","cancel_overlay","add_image","delete_image"); - if(this.upload_mode){ - this.enable(...upload_button_lst); - } - else - { - this.enable(...resize_button_lst); - window.postMessage(["resize_selection","",""],"*"); - if(event.target=="cancel_overlay") - { - this.selection_box=this.selection_box_bak; - } - } - if(this.selection_box) - { - this.setCount("resize_selection",`${Math.floor(this.selection_box.width/8)*8}x${Math.floor(this.selection_box.height/8)*8}`); - } - this.disable("confirm","cancel_overlay","add_image","delete_image"); - this.upload_mode=false; - this.resize_mode=false; - this.click("selection"); - break; - case "add_image": - query("#upload_file").click(); - break; - case "delete_image": - let active_obj = window.overlay.getActiveObject(); - if(active_obj) - { - window.overlay.remove(active_obj); - window.overlay.renderAll(); - } - else - { - w2utils.notify("You need to select an image 
first",{error:true,timeout:2000,where:query("#container")}) - } - break; - case "load": - query("#upload_state").click(); - this.selection_box=null; - this.setCount("resize_selection",""); - break; - case "next": - case "prev": - window.postMessage(["outpaint", "", event.target], "*"); - break; - case "outpaint": - this.click("selection"); - this.disable(...outpaint_button_lst); - this.show(...outpaint_result_lst); - if(this.outpaint_tip) - { - this.outpaint_tip=false; - w2utils.notify("The canvas stays locked until you accept/cancel current outpainting",{timeout:10000,where:query("#container")}) - } - document.querySelector("#container").style.pointerEvents="none"; - case "retry": - this.disable(...outpaint_result_func_lst); - window.postMessage(["transfer",""],"*") - break; - case "accept": - case "cancel": - this.hide(...outpaint_result_lst); - this.disable(...outpaint_result_func_lst); - this.enable(...outpaint_button_lst); - document.querySelector("#container").style.pointerEvents="auto"; - window.postMessage(["click", event.target],"*"); - let app=parent.document.querySelector("gradio-app"); - app=app.shadowRoot??app; - app.querySelector("#cancel").click(); - break; - case "eraser": - case "selection": - case "canvas": - if(event.target=="eraser") - { - this.show("eraser_size","eraser_size_btn"); - window.overlay.freeDrawingBrush.width=this.eraser_size; - window.overlay.isDrawingMode = true; - } - else - { - this.hide("eraser_size","eraser_size_btn"); - window.overlay.isDrawingMode = false; - } - if(this.upload_mode) - { - if(event.target=="canvas") - { - window.postMessage(["mode", event.target],"*") - document.querySelector("#overlay_container").style.pointerEvents="none"; - document.querySelector("#overlay_container").style.opacity = 0.5; - } - else - { - document.querySelector("#overlay_container").style.pointerEvents="auto"; - document.querySelector("#overlay_container").style.opacity = 1.0; - } - } - else - { - window.postMessage(["mode", event.target],"*") - } - break; - case "help": - w2popup.open({ - title: "Document", - body: "Usage: https://github.com/lkwq007/stablediffusion-infinity/blob/master/docs/usage.md" - }) - break; - case "clear": - w2confirm("Reset canvas?").yes(() => { - window.postMessage(["click", event.target],"*"); - }).no(() => {}) - break; - case "random_seed": - this.config_obj.seed_val=Math.floor(Math.random() * 3000000000); - parent.config_obj=this.config_obj; - this.refresh(); - break; - case "enable_img2img": - case "use_correction": - case "resize_check": - case "enable_safety": - case "use_seed": - case "square_selection": - let target=this.get(event.target); - target.icon=target.checked?"fa-regular fa-square":"fa-solid fa-square-check"; - this.config_obj[event.target]=!target.checked; - parent.config_obj=this.config_obj; - this.refresh(); - break; - case "save": - case "export": - ask_filename(event.target); - break; - default: - // clear, save, export, outpaint, retry - // break, save, export, accept, retry, outpaint - window.postMessage(["click", event.target],"*") - } - console.log("Target: "+ event.target, event) - } -}) -window.w2ui=w2ui; -w2ui.toolbar.config_obj={ - resize_check: true, - enable_safety: true, - use_correction: false, - enable_img2img: false, - use_seed: false, - seed_val: 0, - square_selection: false, -}; -w2ui.toolbar.outpaint_tip=true; -w2ui.toolbar.upload_tip=true; -window.update_count=function(cur,total){ - w2ui.toolbar.sel_value=`${cur}/${total}`; - w2ui.toolbar.refresh(); -} -window.update_eraser=function(val,max_val){ - 
w2ui.toolbar.eraser_size=`${val}`; - w2ui.toolbar.eraser_max=`${max_val}`; - w2ui.toolbar.setCount("eraser_size_btn", `${val}`); - w2ui.toolbar.refresh(); -} -window.update_scale=function(val){ - w2ui.toolbar.scale_value=`${val}`; - w2ui.toolbar.refresh(); -} -window.enable_result_lst=function(){ - w2ui.toolbar.enable(...outpaint_result_lst); -} -function onObjectScaled(e) -{ - let object = e.target; - if(object.isType("rect")) - { - let width=object.getScaledWidth(); - let height=object.getScaledHeight(); - object.scale(1); - width=Math.max(Math.min(width,window.overlay.width-object.left),256); - height=Math.max(Math.min(height,window.overlay.height-object.top),256); - let l=Math.max(Math.min(object.left,window.overlay.width-width-object.strokeWidth),0); - let t=Math.max(Math.min(object.top,window.overlay.height-height-object.strokeWidth),0); - if(window.w2ui.toolbar.config_obj.square_selection) - { - let max_val = Math.min(Math.max(width,height),window.overlay.width,window.overlay.height); - width=max_val; - height=max_val; - } - object.set({ width: width, height: height, left:l,top:t}) - window.w2ui.toolbar.selection_box={width: width, height: height, x:object.left, y:object.top}; - window.w2ui.toolbar.setCount("resize_selection",`${Math.floor(width/8)*8}x${Math.floor(height/8)*8}`); - window.w2ui.toolbar.refresh(); - } -} -function onObjectMoved(e) -{ - let object = e.target; - if(object.isType("rect")) - { - let l=Math.max(Math.min(object.left,window.overlay.width-object.width-object.strokeWidth),0); - let t=Math.max(Math.min(object.top,window.overlay.height-object.height-object.strokeWidth),0); - object.set({left:l,top:t}); - window.w2ui.toolbar.selection_box={width: object.width, height: object.height, x:object.left, y:object.top}; - } -} -window.setup_overlay=function(width,height) -{ - if(window.overlay) - { - window.overlay.setDimensions({width:width,height:height}); - let app=parent.document.querySelector("gradio-app"); - app=app.shadowRoot??app; - app.querySelector("#sdinfframe").style.height=80+Number(height)+"px"; - document.querySelector("#container").style.height= height+"px"; - document.querySelector("#container").style.width = width+"px"; - } - else - { - canvas=new fabric.Canvas("overlay_canvas"); - canvas.setDimensions({width:width,height:height}); - let app=parent.document.querySelector("gradio-app"); - app=app.shadowRoot??app; - app.querySelector("#sdinfframe").style.height=80+Number(height)+"px"; - canvas.freeDrawingBrush = new fabric.EraserBrush(canvas); - canvas.on("object:scaling", onObjectScaled); - canvas.on("object:moving", onObjectMoved); - window.overlay=canvas; - } - document.querySelector("#overlay_container").style.pointerEvents="none"; -} -window.update_overlay=function(width,height) -{ - window.overlay.setDimensions({width:width,height:height},{backstoreOnly:true}); - // document.querySelector("#overlay_container").style.pointerEvents="none"; -} -window.adjust_selection=function(x,y,width,height) -{ - var rect = new fabric.Rect({ - left: x, - top: y, - fill: "rgba(0,0,0,0)", - strokeWidth: 3, - stroke: "rgba(0,0,0,0.7)", - cornerColor: "red", - cornerStrokeColor: "red", - borderColor: "rgba(255, 0, 0, 1.0)", - width: width, - height: height, - lockRotation: true, - }); - rect.setControlsVisibility({ mtr: false }); - window.overlay.add(rect); - window.overlay.setActiveObject(window.overlay.item(0)); - window.w2ui.toolbar.selection_box={width: width, height: height, x:x, y:y}; - window.w2ui.toolbar.selection_box_bak={width: width, height: height, x:x, 
y:y}; -} -function add_image(url) -{ - fabric.Image.fromURL(url,function(img){ - window.overlay.add(img); - window.overlay.setActiveObject(img); - },{left:100,top:100}); -} -function export_image() -{ - data=window.overlay.toDataURL(); - document.querySelector("#upload_content").value=data; - window.postMessage(["upload",""],"*"); - end_overlay(); -} -function end_overlay() -{ - window.overlay.clear(); - document.querySelector("#overlay_container").style.opacity = 1.0; - document.querySelector("#overlay_container").style.pointerEvents="none"; -} -function ask_filename(target) -{ - w2prompt({ - label: "Enter filename", - value: `outpaint_${((new Date(Date.now() -(new Date()).getTimezoneOffset() * 60000))).toISOString().replace("T","_").replace(/[^0-9_]/g, "").substring(0,15)}`, - }) - .change((event) => { - console.log("change", event.detail.originalEvent.target.value); - }) - .ok((event) => { - console.log("value=", event.detail.value); - window.postMessage(["click",target,event.detail.value],"*"); - }) - .cancel((event) => { - console.log("cancel"); - }); -} - -document.querySelector("#container").addEventListener("wheel",(e)=>{e.preventDefault()}) -window.setup_shortcut=function(json) -{ - var config=JSON.parse(json); - var key_map={}; - Object.keys(config.shortcut).forEach(k=>{ - key_map[config.shortcut[k]]=k; - }) - document.addEventListener("keydown",(e)=>{ - if(e.target.tagName!="INPUT") - { - let key=e.key; - if(e.ctrlKey) - { - key="Ctrl+"+e.key; - if(key in key_map) - { - e.preventDefault(); - } - } - if(key in key_map) - { - w2ui.toolbar.click(key_map[key]); - } - } - }) -} \ No newline at end of file diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Bard.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Bard.py deleted file mode 100644 index 4c37c4b719430031fce41ce49946f0e6ac93d155..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Bard.py +++ /dev/null @@ -1,74 +0,0 @@ -import os, requests, json, browser_cookie3, re, random -from ...typing import sha256, Dict, get_type_hints - -url = 'https://bard.google.com' -model = ['Palm2'] -supports_stream = False -needs_auth = True - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - psid = {cookie.name: cookie.value for cookie in browser_cookie3.chrome( - domain_name='.google.com')}['__Secure-1PSID'] - - formatted = '\n'.join([ - '%s: %s' % (message['role'], message['content']) for message in messages - ]) - prompt = f'{formatted}\nAssistant:' - - proxy = kwargs.get('proxy', False) - if proxy == False: - print('warning!, you did not give a proxy, a lot of countries are banned from Google Bard, so it may not work') - - snlm0e = None - conversation_id = None - response_id = None - choice_id = None - - client = requests.Session() - client.proxies = { - 'http': f'http://{proxy}', - 'https': f'http://{proxy}'} if proxy else None - - client.headers = { - 'authority': 'bard.google.com', - 'content-type': 'application/x-www-form-urlencoded;charset=UTF-8', - 'origin': 'https://bard.google.com', - 'referer': 'https://bard.google.com/', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36', - 'x-same-domain': '1', - 'cookie': f'__Secure-1PSID={psid}' - } - - snlm0e = re.search(r'SNlM0e\":\"(.*?)\"', - client.get('https://bard.google.com/').text).group(1) if not snlm0e else snlm0e - - params = { - 'bl': 'boq_assistant-bard-web-server_20230326.21_p0', - '_reqid': random.randint(1111, 
9999), - 'rt': 'c' - } - - data = { - 'at': snlm0e, - 'f.req': json.dumps([None, json.dumps([[prompt], None, [conversation_id, response_id, choice_id]])])} - - intents = '.'.join([ - 'assistant', - 'lamda', - 'BardFrontendService' - ]) - - response = client.post(f'https://bard.google.com/_/BardChatUi/data/{intents}/StreamGenerate', - data=data, params=params) - - chat_data = json.loads(response.content.splitlines()[3])[0][2] - if chat_data: - json_chat_data = json.loads(chat_data) - - yield json_chat_data[0][0] - - else: - yield 'error' - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/modules/activations.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/modules/activations.py deleted file mode 100644 index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/modules/activations.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch import Tensor -from typing import Union, Callable - - -class CustomGLU(nn.Module): - """Custom Gated Linear Unit activation. - Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half - of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation - function (i.e. sigmoid, swish, etc.). - - Args: - activation (nn.Module): The custom activation to apply in the Gated Linear Unit - dim (int): the dimension on which to split the input. Default: -1 - - Shape: - - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional - dimensions - - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2` - - Examples:: - >>> m = CustomGLU(nn.Sigmoid()) - >>> input = torch.randn(4, 2) - >>> output = m(input) - """ - def __init__(self, activation: nn.Module, dim: int = -1): - super(CustomGLU, self).__init__() - self.dim = dim - self.activation = activation - - def forward(self, x: Tensor): - assert x.shape[self.dim] % 2 == 0 # M = N / 2 - a, b = torch.chunk(x, 2, dim=self.dim) - return a * self.activation(b) - - -class SwiGLU(CustomGLU): - """SiLU Gated Linear Unit activation. - Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(SwiGLU, self).__init__(nn.SiLU(), dim) - - -class GeGLU(CustomGLU): - """GeLU Gated Linear Unit activation. - Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(GeGLU, self).__init__(nn.GELU(), dim) - - -class ReGLU(CustomGLU): - """ReLU Gated Linear Unit activation. - Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. 
- - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(ReGLU, self).__init__(nn.ReLU(), dim) - - -def get_activation_fn( - activation: Union[str, Callable[[Tensor], Tensor]] -) -> Union[str, Callable[[Tensor], Tensor]]: - """Helper function to map an activation string to the activation class. - If the supplied activation is not a string that is recognized, the activation is passed back. - - Args: - activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check - """ - if isinstance(activation, str): - if activation == "reglu": - return ReGLU() - elif activation == "geglu": - return GeGLU() - elif activation == "swiglu": - return SwiGLU() - return activation diff --git a/spaces/YaYaB/text-to-magic/app.py b/spaces/YaYaB/text-to-magic/app.py deleted file mode 100644 index 5a443eb33a4bbd4f9685ec0f9adb447e3d2768fc..0000000000000000000000000000000000000000 --- a/spaces/YaYaB/text-to-magic/app.py +++ /dev/null @@ -1,196 +0,0 @@ -from contextlib import nullcontext -import gradio as gr -import torch -from torch import autocast -from diffusers import StableDiffusionPipeline - - -device = "cuda" if torch.cuda.is_available() else "cpu" -context = autocast if device == "cuda" else nullcontext -dtype = torch.float16 if device == "cuda" else torch.float32 - -pipe = StableDiffusionPipeline.from_pretrained("YaYaB/sd-magic-diffusers-test2", revision="5ed9cff0a416a6346aa17aee0d5ed57dfd59b809", torch_dtype=dtype) -pipe = pipe.to(device) - - -# Sometimes the nsfw checker is confused by the Pokémon images, you can disable -# it at your own risk here -disable_safety = True - -if disable_safety: - def null_safety(images, **kwargs): - return images, False - pipe.safety_checker = null_safety - - -def infer(prompt, n_samples, steps, scale): - - with context("cuda"): - images = pipe(n_samples*[prompt], guidance_scale=scale, num_inference_steps=steps).images - - return images - -css = """ - a { - color: inherit; - text-decoration: underline; - } - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: #9d66e5; - background: #9d66e5; - } - input[type='range'] { - accent-color: #9d66e5; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-options { - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - 
.dark .logo{ filter: invert(1); } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } -""" - -block = gr.Blocks(css=css) - -examples = [ - [ - 'Yoda', - 2, - 7.5, - ], - [ - 'Abraham Lincoln', - 2, - 7.5, - ], - [ - 'George Washington', - 2, - 7, - ], -] - -with block: - gr.HTML( - """ -
    - Magic text to image
    - Generate a new Magic card from a text description, created by YaYaB.
    - """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row().style(mobile_collapse=False, equal_height=True): - text = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - ).style( - border=(True, False, True, True), - rounded=(True, False, False, True), - container=False, - ) - btn = gr.Button("Generate image").style( - margin=False, - rounded=(False, True, True, False), - ) - - gallery = gr.Gallery( - label="Generated images", show_label=False, elem_id="gallery" - ).style(grid=[2], height="auto") - - - with gr.Row(elem_id="advanced-options"): - samples = gr.Slider(label="Images", minimum=1, maximum=4, value=2, step=1) - steps = gr.Slider(label="Steps", minimum=5, maximum=50, value=25, step=5) - scale = gr.Slider( - label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1 - ) - - - ex = gr.Examples(examples=examples, fn=infer, inputs=[text, samples, scale], outputs=gallery, cache_examples=False) - ex.dataset.headers = [""] - - - text.submit(infer, inputs=[text, samples, steps, scale], outputs=gallery) - btn.click(infer, inputs=[text, samples, steps, scale], outputs=gallery) - gr.HTML( - """ - - """ - ) - -block.launch() \ No newline at end of file diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py deleted file mode 100644 index d1a3d4c55e9199f448ccc820dc459131838ff299..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py +++ /dev/null @@ -1,1118 +0,0 @@ -from typing import Optional, Tuple, Union - -import numpy as np -import torch -import torch.nn as nn - -from ...configuration_utils import ConfigMixin, register_to_config -from ...modeling_utils import ModelMixin -from ...models.attention import DualTransformer2DModel, Transformer2DModel -from ...models.embeddings import TimestepEmbedding, Timesteps -from ...models.unet_2d_condition import UNet2DConditionOutput -from ...utils import logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - downsample_padding=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, -): - down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type - if down_block_type == "DownBlockFlat": - return DownBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - ) - elif down_block_type == "CrossAttnDownBlockFlat": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlockFlat") - return CrossAttnDownBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - 
cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - ) - raise ValueError(f"{down_block_type} is not supported.") - - -def get_up_block( - up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, -): - up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type - if up_block_type == "UpBlockFlat": - return UpBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - ) - elif up_block_type == "CrossAttnUpBlockFlat": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlockFlat") - return CrossAttnUpBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - ) - raise ValueError(f"{up_block_type} is not supported.") - - -# Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel with UNet2DConditionModel->UNetFlatConditionModel, nn.Conv2d->LinearMultiDim, Block2D->BlockFlat -class UNetFlatConditionModel(ModelMixin, ConfigMixin): - r""" - UNetFlatConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a - timestep and returns sample shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the models (such as downloading or saving, etc.) - - Parameters: - sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): - Height and width of input/output sample. - in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. - center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. - flip_sin_to_cos (`bool`, *optional*, defaults to `False`): - Whether to flip the sin to cos in the time embedding. - freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlockFlat", "CrossAttnDownBlockFlat", "CrossAttnDownBlockFlat", "DownBlockFlat")`): - The tuple of downsample blocks to use. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlockFlat", "CrossAttnUpBlockFlat", "CrossAttnUpBlockFlat", "CrossAttnUpBlockFlat",)`): - The tuple of upsample blocks to use. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): - The tuple of output channels for each block. 
- layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. - downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. - mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. - norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. - cross_attention_dim (`int`, *optional*, defaults to 1280): The dimension of the cross attention features. - attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. - """ - - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - sample_size: Optional[int] = None, - in_channels: int = 4, - out_channels: int = 4, - center_input_sample: bool = False, - flip_sin_to_cos: bool = True, - freq_shift: int = 0, - down_block_types: Tuple[str] = ( - "CrossAttnDownBlockFlat", - "CrossAttnDownBlockFlat", - "CrossAttnDownBlockFlat", - "DownBlockFlat", - ), - up_block_types: Tuple[str] = ( - "UpBlockFlat", - "CrossAttnUpBlockFlat", - "CrossAttnUpBlockFlat", - "CrossAttnUpBlockFlat", - ), - only_cross_attention: Union[bool, Tuple[bool]] = False, - block_out_channels: Tuple[int] = (320, 640, 1280, 1280), - layers_per_block: int = 2, - downsample_padding: int = 1, - mid_block_scale_factor: float = 1, - act_fn: str = "silu", - norm_num_groups: int = 32, - norm_eps: float = 1e-5, - cross_attention_dim: int = 1280, - attention_head_dim: Union[int, Tuple[int]] = 8, - dual_cross_attention: bool = False, - use_linear_projection: bool = False, - num_class_embeds: Optional[int] = None, - ): - super().__init__() - - self.sample_size = sample_size - time_embed_dim = block_out_channels[0] * 4 - - # input - self.conv_in = LinearMultiDim(in_channels, block_out_channels[0], kernel_size=3, padding=(1, 1)) - - # time - self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift) - timestep_input_dim = block_out_channels[0] - - self.time_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim) - - # class embedding - if num_class_embeds is not None: - self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim) - - self.down_blocks = nn.ModuleList([]) - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - if isinstance(only_cross_attention, bool): - only_cross_attention = [only_cross_attention] * len(down_block_types) - - if isinstance(attention_head_dim, int): - attention_head_dim = (attention_head_dim,) * len(down_block_types) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - temb_channels=time_embed_dim, - add_downsample=not is_final_block, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim[i], - downsample_padding=downsample_padding, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - ) - 
self.down_blocks.append(down_block) - - # mid - self.mid_block = UNetMidBlockFlatCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=time_embed_dim, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - resnet_time_scale_shift="default", - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim[-1], - resnet_groups=norm_num_groups, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - ) - - # count how many layers upsample the images - self.num_upsamplers = 0 - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - reversed_attention_head_dim = list(reversed(attention_head_dim)) - only_cross_attention = list(reversed(only_cross_attention)) - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - is_final_block = i == len(block_out_channels) - 1 - - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] - - # add upsample block for all BUT final layer - if not is_final_block: - add_upsample = True - self.num_upsamplers += 1 - else: - add_upsample = False - - up_block = get_up_block( - up_block_type, - num_layers=layers_per_block + 1, - in_channels=input_channel, - out_channels=output_channel, - prev_output_channel=prev_output_channel, - temb_channels=time_embed_dim, - add_upsample=add_upsample, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=reversed_attention_head_dim[i], - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps) - self.conv_act = nn.SiLU() - self.conv_out = LinearMultiDim(block_out_channels[0], out_channels, kernel_size=3, padding=1) - - def set_attention_slice(self, slice_size): - head_dims = self.config.attention_head_dim - head_dims = [head_dims] if isinstance(head_dims, int) else head_dims - if slice_size is not None and any(dim % slice_size != 0 for dim in head_dims): - raise ValueError( - f"Make sure slice_size {slice_size} is a common divisor of " - f"the number of heads used in cross_attention: {head_dims}" - ) - if slice_size is not None and slice_size > min(head_dims): - raise ValueError( - f"slice_size {slice_size} has to be smaller or equal to " - f"the lowest number of heads used in cross_attention: min({head_dims}) = {min(head_dims)}" - ) - - for block in self.down_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_attention_slice(slice_size) - - self.mid_block.set_attention_slice(slice_size) - - for block in self.up_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_attention_slice(slice_size) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (CrossAttnDownBlockFlat, DownBlockFlat, CrossAttnUpBlockFlat, UpBlockFlat)): - module.gradient_checkpointing = value - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - encoder_hidden_states: torch.Tensor, - class_labels: Optional[torch.Tensor] = None, - return_dict: bool = True, - 
) -> Union[UNet2DConditionOutput, Tuple]: - r""" - Args: - sample (`torch.FloatTensor`): (batch, channel, height, width) noisy inputs tensor - timestep (`torch.FloatTensor` or `float` or `int`): (batch) timesteps - encoder_hidden_states (`torch.FloatTensor`): (batch, channel, height, width) encoder hidden states - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - - Returns: - [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`: - [`~models.unet_2d_condition.UNet2DConditionOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - """ - # By default samples have to be AT least a multiple of the overall upsampling factor. - # The overall upsampling factor is equal to 2 ** (# num of upsampling layears). - # However, the upsampling interpolation output size can be forced to fit any upsampling size - # on the fly if necessary. - default_overall_up_factor = 2**self.num_upsamplers - - # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor` - forward_upsample_size = False - upsample_size = None - - if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]): - logger.info("Forward upsample size to force interpolation output size.") - forward_upsample_size = True - - # 0. center input if necessary - if self.config.center_input_sample: - sample = 2 * sample - 1.0 - - # 1. time - timesteps = timestep - if not torch.is_tensor(timesteps): - # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can - # This would be a good case for the `match` statement (Python 3.10+) - is_mps = sample.device.type == "mps" - if isinstance(timestep, float): - dtype = torch.float32 if is_mps else torch.float64 - else: - dtype = torch.int32 if is_mps else torch.int64 - timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device) - elif len(timesteps.shape) == 0: - timesteps = timesteps[None].to(sample.device) - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps.expand(sample.shape[0]) - - t_emb = self.time_proj(timesteps) - - # timesteps does not contain any weights and will always return f32 tensors - # but time_embedding might actually be running in fp16. so we need to cast here. - # there might be better ways to encapsulate this. - t_emb = t_emb.to(dtype=self.dtype) - emb = self.time_embedding(t_emb) - - if self.config.num_class_embeds is not None: - if class_labels is None: - raise ValueError("class_labels should be provided when num_class_embeds > 0") - class_emb = self.class_embedding(class_labels).to(dtype=self.dtype) - emb = emb + class_emb - - # 2. pre-process - sample = self.conv_in(sample) - - # 3. down - down_block_res_samples = (sample,) - for downsample_block in self.down_blocks: - if hasattr(downsample_block, "attentions") and downsample_block.attentions is not None: - sample, res_samples = downsample_block( - hidden_states=sample, - temb=emb, - encoder_hidden_states=encoder_hidden_states, - ) - else: - sample, res_samples = downsample_block(hidden_states=sample, temb=emb) - - down_block_res_samples += res_samples - - # 4. mid - sample = self.mid_block(sample, emb, encoder_hidden_states=encoder_hidden_states) - - # 5. 
up - for i, upsample_block in enumerate(self.up_blocks): - is_final_block = i == len(self.up_blocks) - 1 - - res_samples = down_block_res_samples[-len(upsample_block.resnets) :] - down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] - - # if we have not reached the final block and need to forward the - # upsample size, we do it here - if not is_final_block and forward_upsample_size: - upsample_size = down_block_res_samples[-1].shape[2:] - - if hasattr(upsample_block, "attentions") and upsample_block.attentions is not None: - sample = upsample_block( - hidden_states=sample, - temb=emb, - res_hidden_states_tuple=res_samples, - encoder_hidden_states=encoder_hidden_states, - upsample_size=upsample_size, - ) - else: - sample = upsample_block( - hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size - ) - # 6. post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - if not return_dict: - return (sample,) - - return UNet2DConditionOutput(sample=sample) - - -class LinearMultiDim(nn.Linear): - def __init__(self, in_features, out_features=None, second_dim=4, *args, **kwargs): - in_features = [in_features, second_dim, 1] if isinstance(in_features, int) else list(in_features) - if out_features is None: - out_features = in_features - out_features = [out_features, second_dim, 1] if isinstance(out_features, int) else list(out_features) - self.in_features_multidim = in_features - self.out_features_multidim = out_features - super().__init__(np.array(in_features).prod(), np.array(out_features).prod()) - - def forward(self, input_tensor, *args, **kwargs): - shape = input_tensor.shape - n_dim = len(self.in_features_multidim) - input_tensor = input_tensor.reshape(*shape[0:-n_dim], self.in_features) - output_tensor = super().forward(input_tensor) - output_tensor = output_tensor.view(*shape[0:-n_dim], *self.out_features_multidim) - return output_tensor - - -class ResnetBlockFlat(nn.Module): - def __init__( - self, - *, - in_channels, - out_channels=None, - dropout=0.0, - temb_channels=512, - groups=32, - groups_out=None, - pre_norm=True, - eps=1e-6, - time_embedding_norm="default", - use_in_shortcut=None, - second_dim=4, - **kwargs, - ): - super().__init__() - self.pre_norm = pre_norm - self.pre_norm = True - - in_channels = [in_channels, second_dim, 1] if isinstance(in_channels, int) else list(in_channels) - self.in_channels_prod = np.array(in_channels).prod() - self.channels_multidim = in_channels - - if out_channels is not None: - out_channels = [out_channels, second_dim, 1] if isinstance(out_channels, int) else list(out_channels) - out_channels_prod = np.array(out_channels).prod() - self.out_channels_multidim = out_channels - else: - out_channels_prod = self.in_channels_prod - self.out_channels_multidim = self.channels_multidim - self.time_embedding_norm = time_embedding_norm - - if groups_out is None: - groups_out = groups - - self.norm1 = torch.nn.GroupNorm(num_groups=groups, num_channels=self.in_channels_prod, eps=eps, affine=True) - self.conv1 = torch.nn.Conv2d(self.in_channels_prod, out_channels_prod, kernel_size=1, padding=0) - - if temb_channels is not None: - self.time_emb_proj = torch.nn.Linear(temb_channels, out_channels_prod) - else: - self.time_emb_proj = None - - self.norm2 = torch.nn.GroupNorm(num_groups=groups_out, num_channels=out_channels_prod, eps=eps, affine=True) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels_prod, 
out_channels_prod, kernel_size=1, padding=0) - - self.nonlinearity = nn.SiLU() - - self.use_in_shortcut = ( - self.in_channels_prod != out_channels_prod if use_in_shortcut is None else use_in_shortcut - ) - - self.conv_shortcut = None - if self.use_in_shortcut: - self.conv_shortcut = torch.nn.Conv2d( - self.in_channels_prod, out_channels_prod, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, input_tensor, temb): - shape = input_tensor.shape - n_dim = len(self.channels_multidim) - input_tensor = input_tensor.reshape(*shape[0:-n_dim], self.in_channels_prod, 1, 1) - input_tensor = input_tensor.view(-1, self.in_channels_prod, 1, 1) - - hidden_states = input_tensor - - hidden_states = self.norm1(hidden_states) - hidden_states = self.nonlinearity(hidden_states) - hidden_states = self.conv1(hidden_states) - - if temb is not None: - temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None] - hidden_states = hidden_states + temb - - hidden_states = self.norm2(hidden_states) - hidden_states = self.nonlinearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - hidden_states = self.conv2(hidden_states) - - if self.conv_shortcut is not None: - input_tensor = self.conv_shortcut(input_tensor) - - output_tensor = input_tensor + hidden_states - - output_tensor = output_tensor.view(*shape[0:-n_dim], -1) - output_tensor = output_tensor.view(*shape[0:-n_dim], *self.out_channels_multidim) - - return output_tensor - - -# Copied from diffusers.models.unet_2d_blocks.DownBlock2D with DownBlock2D->DownBlockFlat, ResnetBlock2D->ResnetBlockFlat, Downsample2D->LinearMultiDim -class DownBlockFlat(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - LinearMultiDim( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -# Copied from diffusers.models.unet_2d_blocks.CrossAttnDownBlock2D with 
CrossAttnDownBlock2D->CrossAttnDownBlockFlat, ResnetBlock2D->ResnetBlockFlat, Downsample2D->LinearMultiDim -class CrossAttnDownBlockFlat(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - attention_type="default", - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.attention_type = attention_type - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - LinearMultiDim( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def set_attention_slice(self, slice_size): - head_dims = self.attn_num_head_channels - head_dims = [head_dims] if isinstance(head_dims, int) else head_dims - if slice_size is not None and any(dim % slice_size != 0 for dim in head_dims): - raise ValueError( - f"Make sure slice_size {slice_size} is a common divisor of " - f"the number of heads used in cross_attention: {head_dims}" - ) - if slice_size is not None and slice_size > min(head_dims): - raise ValueError( - f"slice_size {slice_size} has to be smaller or equal to " - f"the lowest number of heads used in cross_attention: min({head_dims}) = {min(head_dims)}" - ) - - for attn in self.attentions: - attn._set_attention_slice(slice_size) - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - 
create_custom_forward(attn, return_dict=False), hidden_states, encoder_hidden_states - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -# Copied from diffusers.models.unet_2d_blocks.UpBlock2D with UpBlock2D->UpBlockFlat, ResnetBlock2D->ResnetBlockFlat, Upsample2D->LinearMultiDim -class UpBlockFlat(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlockFlat( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([LinearMultiDim(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -# Copied from diffusers.models.unet_2d_blocks.CrossAttnUpBlock2D with CrossAttnUpBlock2D->CrossAttnUpBlockFlat, ResnetBlock2D->ResnetBlockFlat, Upsample2D->LinearMultiDim -class CrossAttnUpBlockFlat(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - attention_type="default", - output_scale_factor=1.0, - add_upsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.attention_type = attention_type - 
self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlockFlat( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([LinearMultiDim(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def set_attention_slice(self, slice_size): - head_dims = self.attn_num_head_channels - head_dims = [head_dims] if isinstance(head_dims, int) else head_dims - if slice_size is not None and any(dim % slice_size != 0 for dim in head_dims): - raise ValueError( - f"Make sure slice_size {slice_size} is a common divisor of " - f"the number of heads used in cross_attention: {head_dims}" - ) - if slice_size is not None and slice_size > min(head_dims): - raise ValueError( - f"slice_size {slice_size} has to be smaller or equal to " - f"the lowest number of heads used in cross_attention: min({head_dims}) = {min(head_dims)}" - ) - - for attn in self.attentions: - attn._set_attention_slice(slice_size) - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - upsample_size=None, - ): - for resnet, attn in zip(self.resnets, self.attentions): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), hidden_states, encoder_hidden_states - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -# Copied from diffusers.models.unet_2d_blocks.UNetMidBlock2DCrossAttn 
with UNetMidBlock2DCrossAttn->UNetMidBlockFlatCrossAttn, ResnetBlock2D->ResnetBlockFlat -class UNetMidBlockFlatCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - attention_type="default", - output_scale_factor=1.0, - cross_attention_dim=1280, - dual_cross_attention=False, - use_linear_projection=False, - ): - super().__init__() - - self.attention_type = attention_type - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def set_attention_slice(self, slice_size): - head_dims = self.attn_num_head_channels - head_dims = [head_dims] if isinstance(head_dims, int) else head_dims - if slice_size is not None and any(dim % slice_size != 0 for dim in head_dims): - raise ValueError( - f"Make sure slice_size {slice_size} is a common divisor of " - f"the number of heads used in cross_attention: {head_dims}" - ) - if slice_size is not None and slice_size > min(head_dims): - raise ValueError( - f"slice_size {slice_size} has to be smaller or equal to " - f"the lowest number of heads used in cross_attention: min({head_dims}) = {min(head_dims)}" - ) - - for attn in self.attentions: - attn._set_attention_slice(slice_size) - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None): - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - hidden_states = attn(hidden_states, encoder_hidden_states).sample - hidden_states = resnet(hidden_states, temb) - - return hidden_states diff --git a/spaces/ZJunTvT/ZJunChat/assets/Kelpy-Codos.js b/spaces/ZJunTvT/ZJunChat/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/ZJunTvT/ZJunChat/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ 
-// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. -// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Arabic_poem_classifier/app.py b/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Arabic_poem_classifier/app.py deleted file mode 100644 index bbf72b782320453cd5d9fb4e7e1ebd99fc972af8..0000000000000000000000000000000000000000 --- a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Arabic_poem_classifier/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr - -description = "التعرف على خاصيات البيت الشعري" -title = """هذا البرنامج يقوم بالتعرف على مختلف خاصيات البيت من الشعر. 
-يمكنكم إختيار الخاصية من بين: -- التعرف على البحر -- التعرف على الروي -التعرف على الموضوع-""" - -examples = [["سَلو قَلبي غَداةَ سَلا وَثابا لَعَلَّ عَلى الجَمالِ لَهُ عِتابا"], ["قفا نبك من ذِكرى حبيب ومنزلِ بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"]] - - -meter = gr.Interface.load("huggingface/Yah216/Arabic_poem_meter_3", - description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه", - examples=examples, title = "التعرف على البحر", - inputs = gr.inputs.Textbox(lines = 3, label = "البيت") - -) -rawiy = gr.Interface.load("huggingface/Yah216/Poem_Qafiyah_Detection", - title ="التعرف على الروي", - examples=examples, - description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه", - inputs = gr.inputs.Textbox(lines = 3, label = "البيت") - -) -subject = gr.Interface.load( - "huggingface/zenkri/autotrain-Arabic_Poetry_by_Subject-920730230", - title="التعرف على الموضوع", - examples=examples, - description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه", - inputs = gr.inputs.Textbox(lines = 3, label = "البيت") - -) -demo = gr.TabbedInterface([meter, rawiy, subject], ["التعرف على البحر","التعرف على الروي","التعرف على الموضوع"]) -demo.launch() - diff --git a/spaces/abhijitguha/chatbot_gpt3/app.py b/spaces/abhijitguha/chatbot_gpt3/app.py deleted file mode 100644 index ced97e751f804ec57bd65ffebdddf68d8c14711a..0000000000000000000000000000000000000000 --- a/spaces/abhijitguha/chatbot_gpt3/app.py +++ /dev/null @@ -1,42 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# In[ ]: - - -import os -import openai -import gradio as gr - -#openai.api_key = "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" -openai.api_key = "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" -start_sequence = "\nAI:" -restart_sequence = "\nHuman: " - -def predict(input,initial_prompt, history=[]): - - s = list(sum(history, ())) - s.append(input) -# initial_prompt="The following is a conversation with an AI movie recommendation assistant. The assistant is helpful, creative, clever, and very friendly.Along with movie recommendation it also talks about general topics" -# \n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: " - response = openai.Completion.create( - model="text-davinci-003", - prompt= initial_prompt + "\n" + str(s), - temperature=0.9, - max_tokens=150, - top_p=1, - frequency_penalty=0, - presence_penalty=0.6, - stop=[" Human:", " AI:"]) - # tokenize the new input sentence - response2 = response["choices"][0]["text"] - history.append((input, response2)) - - return history, history - - -gr.Interface(fn=predict, - inputs=["text","text",'state'], - - outputs=["chatbot",'state']).launch() - diff --git a/spaces/abhishek/sketch-to-image/annotator/hed/__init__.py b/spaces/abhishek/sketch-to-image/annotator/hed/__init__.py deleted file mode 100644 index edfc2927b50cdfb42f7cbfdc78300238a67bf9df..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/hed/__init__.py +++ /dev/null @@ -1,107 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -# This is an improved version and model of HED edge detection without GPL contamination -# Please use this implementation in your products -# This implementation may produce slightly different results from Saining Xie's official implementations, -# but it generates smoother edges and is more suitable for ControlNet as well as other image-to-image translations. -# Different from official models and other implementations, this is an RGB-input model (rather than BGR) -# and in this way it works better for gradio's RGB protocol - -import os -import cv2 -import torch -import numpy as np - -from einops import rearrange -from annotator.util import annotator_ckpts_path - - -class DoubleConvBlock(torch.nn.Module): - def __init__(self, input_channel, output_channel, layer_number): - super().__init__() - self.convs = torch.nn.Sequential() - self.convs.append(torch.nn.Conv2d(in_channels=input_channel, out_channels=output_channel, kernel_size=(3, 3), stride=(1, 1), padding=1)) - for i in range(1, layer_number): - self.convs.append(torch.nn.Conv2d(in_channels=output_channel, out_channels=output_channel, kernel_size=(3, 3), stride=(1, 1), padding=1)) - self.projection = torch.nn.Conv2d(in_channels=output_channel, out_channels=1, kernel_size=(1, 1), stride=(1, 1), padding=0) - - def __call__(self, x, down_sampling=False): - h = x - if down_sampling: - h = torch.nn.functional.max_pool2d(h, kernel_size=(2, 2), stride=(2, 2)) - for conv in self.convs: - h = conv(h) - h = torch.nn.functional.relu(h) - return h, self.projection(h) - - -class ControlNetHED_Apache2(torch.nn.Module): - def __init__(self): - super().__init__() - self.norm = torch.nn.Parameter(torch.zeros(size=(1, 3, 1, 1))) - self.block1 = DoubleConvBlock(input_channel=3, output_channel=64, layer_number=2) - self.block2 = DoubleConvBlock(input_channel=64, output_channel=128, layer_number=2) - self.block3 = DoubleConvBlock(input_channel=128, output_channel=256, layer_number=3) - self.block4 = DoubleConvBlock(input_channel=256, output_channel=512, layer_number=3) - self.block5 = DoubleConvBlock(input_channel=512, output_channel=512, layer_number=3) - - def __call__(self, x): - h = x - self.norm - h, projection1 = self.block1(h) - h, projection2 = self.block2(h, down_sampling=True) - h, projection3 = self.block3(h, down_sampling=True) - h, projection4 = self.block4(h, down_sampling=True) - h, projection5 = self.block5(h, down_sampling=True) - return projection1, projection2, projection3, projection4, projection5 - - -class HEDdetector: - def __init__(self): - remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/ControlNetHED.pth" - modelpath = remote_model_path - modelpath = os.path.join(annotator_ckpts_path, "ControlNetHED.pth") - if not os.path.exists(modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path) - self.netNetwork = ControlNetHED_Apache2().float().cuda().eval() - self.netNetwork.load_state_dict(torch.load(modelpath)) - - def __call__(self, input_image): - assert input_image.ndim == 3 - H, W, C = input_image.shape - with torch.no_grad(): - image_hed = torch.from_numpy(input_image.copy()).float().cuda() - image_hed = 
rearrange(image_hed, 'h w c -> 1 c h w') - edges = self.netNetwork(image_hed) - edges = [e.detach().cpu().numpy().astype(np.float32)[0, 0] for e in edges] - edges = [cv2.resize(e, (W, H), interpolation=cv2.INTER_LINEAR) for e in edges] - edges = np.stack(edges, axis=2) - edge = 1 / (1 + np.exp(-np.mean(edges, axis=2).astype(np.float64))) - edge = (edge * 255.0).clip(0, 255).astype(np.uint8) - return edge - - -def nms(x, t, s): - x = cv2.GaussianBlur(x.astype(np.float32), (0, 0), s) - - f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8) - f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8) - f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8) - f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8) - - y = np.zeros_like(x) - - for f in [f1, f2, f3, f4]: - np.putmask(y, cv2.dilate(x, kernel=f) == x, x) - - z = np.zeros_like(y, dtype=np.uint8) - z[y > t] = 255 - return z diff --git a/spaces/adirik/stylemc-demo/encoder4editing/utils/train_utils.py b/spaces/adirik/stylemc-demo/encoder4editing/utils/train_utils.py deleted file mode 100644 index 0c55177f7442010bc1fcc64de3d142585c22adc0..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/encoder4editing/utils/train_utils.py +++ /dev/null @@ -1,13 +0,0 @@ - -def aggregate_loss_dict(agg_loss_dict): - mean_vals = {} - for output in agg_loss_dict: - for key in output: - mean_vals[key] = mean_vals.setdefault(key, []) + [output[key]] - for key in mean_vals: - if len(mean_vals[key]) > 0: - mean_vals[key] = sum(mean_vals[key]) / len(mean_vals[key]) - else: - print('{} has no value'.format(key)) - mean_vals[key] = 0 - return mean_vals diff --git a/spaces/ahmedghani/Editing-Tools/README.md b/spaces/ahmedghani/Editing-Tools/README.md deleted file mode 100644 index 6288fff80057bc9bb6addf04040dd1a51f9ab034..0000000000000000000000000000000000000000 --- a/spaces/ahmedghani/Editing-Tools/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Editing Tools -emoji: 📽️📷🎥📹🎦🖼️🎨🖌️ -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.22.1 -app_file: app.py -pinned: false ---- - -```bash -conda create -n editing-tools python=3.9 -y -conda activate editing-tools -conda install -c "nvidia/label/cuda-11.7.0" cuda-toolkit cuda -pip install -r requirements.txt -python app.py -``` - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/Keypoint_Communities/README.md b/spaces/akhaliq/Keypoint_Communities/README.md deleted file mode 100644 index 1217eeb57b73fd355e773f5c039b4bcd0fe0164e..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Keypoint_Communities/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Keypoint_Communities -emoji: 👁 -colorFrom: pink -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). 
-Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/akhaliq/stylegan3_clip/avg_spectra.py b/spaces/akhaliq/stylegan3_clip/avg_spectra.py deleted file mode 100644 index afaef87de54e49df230b432b52fda92667d17667..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/avg_spectra.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Compare average power spectra between real and generated images, -or between multiple generators.""" - -import os -import numpy as np -import torch -import torch.fft -import scipy.ndimage -import matplotlib.pyplot as plt -import click -import tqdm -import dnnlib - -import legacy -from training import dataset - -#---------------------------------------------------------------------------- -# Setup an iterator for streaming images, in uint8 NCHW format, based on the -# respective command line options. - -def stream_source_images(source, num, seed, device, data_loader_kwargs=None): # => num_images, image_size, image_iter - ext = source.split('.')[-1].lower() - if data_loader_kwargs is None: - data_loader_kwargs = dict(pin_memory=True, num_workers=3, prefetch_factor=2) - - if ext == 'pkl': - if num is None: - raise click.ClickException('--num is required when --source points to network pickle') - with dnnlib.util.open_url(source) as f: - G = legacy.load_network_pkl(f)['G_ema'].to(device) - def generate_image(seed): - rnd = np.random.RandomState(seed) - z = torch.from_numpy(rnd.randn(1, G.z_dim)).to(device) - c = torch.zeros([1, G.c_dim], device=device) - if G.c_dim > 0: - c[:, rnd.randint(G.c_dim)] = 1 - return (G(z=z, c=c) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - _ = generate_image(seed) # warm up - image_iter = (generate_image(seed + idx) for idx in range(num)) - return num, G.img_resolution, image_iter - - elif ext == 'zip' or os.path.isdir(source): - dataset_obj = dataset.ImageFolderDataset(path=source, max_size=num, random_seed=seed) - if num is not None and num != len(dataset_obj): - raise click.ClickException(f'--source contains fewer than {num} images') - data_loader = torch.utils.data.DataLoader(dataset_obj, batch_size=1, **data_loader_kwargs) - image_iter = (image.to(device) for image, _label in data_loader) - return len(dataset_obj), dataset_obj.resolution, image_iter - - else: - raise click.ClickException('--source must point to network pickle, dataset zip, or directory') - -#---------------------------------------------------------------------------- -# Load average power spectrum from the specified .npz file and construct -# the corresponding heatmap for visualization. 
- -def construct_heatmap(npz_file, smooth): - npz_data = np.load(npz_file) - spectrum = npz_data['spectrum'] - image_size = npz_data['image_size'] - hmap = np.log10(spectrum) * 10 # dB - hmap = np.fft.fftshift(hmap) - hmap = np.concatenate([hmap, hmap[:1, :]], axis=0) - hmap = np.concatenate([hmap, hmap[:, :1]], axis=1) - if smooth > 0: - sigma = spectrum.shape[0] / image_size * smooth - hmap = scipy.ndimage.gaussian_filter(hmap, sigma=sigma, mode='nearest') - return hmap, image_size - -#---------------------------------------------------------------------------- - -@click.group() -def main(): - """Compare average power spectra between real and generated images, - or between multiple generators. - - Example: - - \b - # Calculate dataset mean and std, needed in subsequent steps. - python avg_spectra.py stats --source=~/datasets/ffhq-1024x1024.zip - - \b - # Calculate average spectrum for the training data. - python avg_spectra.py calc --source=~/datasets/ffhq-1024x1024.zip \\ - --dest=tmp/training-data.npz --mean=112.684 --std=69.509 - - \b - # Calculate average spectrum for a pre-trained generator. - python avg_spectra.py calc \\ - --source=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhq-1024x1024.pkl \\ - --dest=tmp/stylegan3-r.npz --mean=112.684 --std=69.509 --num=70000 - - \b - # Display results. - python avg_spectra.py heatmap tmp/training-data.npz - python avg_spectra.py heatmap tmp/stylegan3-r.npz - python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz - - \b - # Save as PNG. - python avg_spectra.py heatmap tmp/training-data.npz --save=tmp/training-data.png --dpi=300 - python avg_spectra.py heatmap tmp/stylegan3-r.npz --save=tmp/stylegan3-r.png --dpi=300 - python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz --save=tmp/slices.png --dpi=300 - """ - -#---------------------------------------------------------------------------- - -@main.command() -@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True) -@click.option('--num', help='Number of images to process [default: all]', metavar='INT', type=click.IntRange(min=1)) -@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True) -def stats(source, num, seed, device=torch.device('cuda')): - """Calculate dataset mean and standard deviation needed by 'calc'.""" - torch.multiprocessing.set_start_method('spawn') - num_images, _image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device) - - # Accumulate moments. - moments = torch.zeros([3], dtype=torch.float64, device=device) - for image in tqdm.tqdm(image_iter, total=num_images): - image = image.to(torch.float64) - moments += torch.stack([torch.ones_like(image).sum(), image.sum(), image.square().sum()]) - moments = moments / moments[0] - - # Compute mean and standard deviation. 
- mean = moments[1] - std = (moments[2] - moments[1].square()).sqrt() - print(f'--mean={mean:g} --std={std:g}') - -#---------------------------------------------------------------------------- - -@main.command() -@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True) -@click.option('--dest', help='Where to store the result', metavar='NPZ', required=True) -@click.option('--mean', help='Dataset mean for whitening', metavar='FLOAT', type=float, required=True) -@click.option('--std', help='Dataset standard deviation for whitening', metavar='FLOAT', type=click.FloatRange(min=0), required=True) -@click.option('--num', help='Number of images to process [default: all]', metavar='INT', type=click.IntRange(min=1)) -@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True) -@click.option('--beta', help='Shape parameter for the Kaiser window', metavar='FLOAT', type=click.FloatRange(min=0), default=8, show_default=True) -@click.option('--interp', help='Frequency-domain interpolation factor', metavar='INT', type=click.IntRange(min=1), default=4, show_default=True) -def calc(source, dest, mean, std, num, seed, beta, interp, device=torch.device('cuda')): - """Calculate average power spectrum and store it in .npz file.""" - torch.multiprocessing.set_start_method('spawn') - num_images, image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device) - spectrum_size = image_size * interp - padding = spectrum_size - image_size - - # Setup window function. - window = torch.kaiser_window(image_size, periodic=False, beta=beta, device=device) - window *= window.square().sum().rsqrt() - window = window.ger(window).unsqueeze(0).unsqueeze(1) - - # Accumulate power spectrum. - spectrum = torch.zeros([spectrum_size, spectrum_size], dtype=torch.float64, device=device) - for image in tqdm.tqdm(image_iter, total=num_images): - image = (image.to(torch.float64) - mean) / std - image = torch.nn.functional.pad(image * window, [0, padding, 0, padding]) - spectrum += torch.fft.fftn(image, dim=[2,3]).abs().square().mean(dim=[0,1]) - spectrum /= num_images - - # Save result. - if os.path.dirname(dest): - os.makedirs(os.path.dirname(dest), exist_ok=True) - np.savez(dest, spectrum=spectrum.cpu().numpy(), image_size=image_size) - -#---------------------------------------------------------------------------- - -@main.command() -@click.argument('npz-file', nargs=1) -@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]') -@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True) -@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=1.25, show_default=True) -def heatmap(npz_file, save, smooth, dpi): - """Visualize 2D heatmap based on the given .npz file.""" - hmap, image_size = construct_heatmap(npz_file=npz_file, smooth=smooth) - - # Setup plot. - plt.figure(figsize=[6, 4.8], dpi=dpi, tight_layout=True) - freqs = np.linspace(-0.5, 0.5, num=hmap.shape[0], endpoint=True) * image_size - ticks = np.linspace(freqs[0], freqs[-1], num=5, endpoint=True) - levels = np.linspace(-40, 20, num=13, endpoint=True) - - # Draw heatmap. 
- plt.xlim(ticks[0], ticks[-1]) - plt.ylim(ticks[0], ticks[-1]) - plt.xticks(ticks) - plt.yticks(ticks) - plt.contourf(freqs, freqs, hmap, levels=levels, extend='both', cmap='Blues') - plt.gca().set_aspect('equal') - plt.colorbar(ticks=levels) - plt.contour(freqs, freqs, hmap, levels=levels, extend='both', linestyles='solid', linewidths=1, colors='midnightblue', alpha=0.2) - - # Display or save. - if save is None: - plt.show() - else: - if os.path.dirname(save): - os.makedirs(os.path.dirname(save), exist_ok=True) - plt.savefig(save) - -#---------------------------------------------------------------------------- - -@main.command() -@click.argument('npz-files', nargs=-1, required=True) -@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]') -@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True) -@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=0, show_default=True) -def slices(npz_files, save, dpi, smooth): - """Visualize 1D slices based on the given .npz files.""" - cases = [dnnlib.EasyDict(npz_file=npz_file) for npz_file in npz_files] - for c in cases: - c.hmap, c.image_size = construct_heatmap(npz_file=c.npz_file, smooth=smooth) - c.label = os.path.splitext(os.path.basename(c.npz_file))[0] - - # Check consistency. - image_size = cases[0].image_size - hmap_size = cases[0].hmap.shape[0] - if any(c.image_size != image_size or c.hmap.shape[0] != hmap_size for c in cases): - raise click.ClickException('All .npz must have the same resolution') - - # Setup plot. - plt.figure(figsize=[12, 4.6], dpi=dpi, tight_layout=True) - hmap_center = hmap_size // 2 - hmap_range = np.arange(hmap_center, hmap_size) - freqs0 = np.linspace(0, image_size / 2, num=(hmap_size // 2 + 1), endpoint=True) - freqs45 = np.linspace(0, image_size / np.sqrt(2), num=(hmap_size // 2 + 1), endpoint=True) - xticks0 = np.linspace(freqs0[0], freqs0[-1], num=9, endpoint=True) - xticks45 = np.round(np.linspace(freqs45[0], freqs45[-1], num=9, endpoint=True)) - yticks = np.linspace(-50, 30, num=9, endpoint=True) - - # Draw 0 degree slice. - plt.subplot(1, 2, 1) - plt.title('0\u00b0 slice') - plt.xlim(xticks0[0], xticks0[-1]) - plt.ylim(yticks[0], yticks[-1]) - plt.xticks(xticks0) - plt.yticks(yticks) - for c in cases: - plt.plot(freqs0, c.hmap[hmap_center, hmap_range], label=c.label) - plt.grid() - plt.legend(loc='upper right') - - # Draw 45 degree slice. - plt.subplot(1, 2, 2) - plt.title('45\u00b0 slice') - plt.xlim(xticks45[0], xticks45[-1]) - plt.ylim(yticks[0], yticks[-1]) - plt.xticks(xticks45) - plt.yticks(yticks) - for c in cases: - plt.plot(freqs45, c.hmap[hmap_range, hmap_range], label=c.label) - plt.grid() - plt.legend(loc='upper right') - - # Display or save. 
- if save is None: - plt.show() - else: - if os.path.dirname(save): - os.makedirs(os.path.dirname(save), exist_ok=True) - plt.savefig(save) - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - main() # pylint: disable=no-value-for-parameter - -#---------------------------------------------------------------------------- diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/unix.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/unix.py deleted file mode 100644 index 2fbd4d4f367863ff0cf635fddc5f6e44383e7d94..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/unix.py +++ /dev/null @@ -1,181 +0,0 @@ -from __future__ import annotations - -import os -import sys -from configparser import ConfigParser -from pathlib import Path - -from .api import PlatformDirsABC - -if sys.platform.startswith("linux"): # pragma: no branch # no op check, only to please the type checker - from os import getuid -else: - - def getuid() -> int: - raise RuntimeError("should only be used on Linux") - - -class Unix(PlatformDirsABC): - """ - On Unix/Linux, we follow the - `XDG Basedir Spec `_. The spec allows - overriding directories with environment variables. The examples show are the default values, alongside the name of - the environment variable that overrides them. Makes use of the - `appname `, - `version `, - `multipath `, - `opinion `. - """ - - @property - def user_data_dir(self) -> str: - """ - :return: data directory tied to the user, e.g. ``~/.local/share/$appname/$version`` or - ``$XDG_DATA_HOME/$appname/$version`` - """ - path = os.environ.get("XDG_DATA_HOME", "") - if not path.strip(): - path = os.path.expanduser("~/.local/share") - return self._append_app_name_and_version(path) - - @property - def site_data_dir(self) -> str: - """ - :return: data directories shared by users (if `multipath ` is - enabled and ``XDG_DATA_DIR`` is set and a multi path the response is also a multi path separated by the OS - path separator), e.g. ``/usr/local/share/$appname/$version`` or ``/usr/share/$appname/$version`` - """ - # XDG default for $XDG_DATA_DIRS; only first, if multipath is False - path = os.environ.get("XDG_DATA_DIRS", "") - if not path.strip(): - path = f"/usr/local/share{os.pathsep}/usr/share" - return self._with_multi_path(path) - - def _with_multi_path(self, path: str) -> str: - path_list = path.split(os.pathsep) - if not self.multipath: - path_list = path_list[0:1] - path_list = [self._append_app_name_and_version(os.path.expanduser(p)) for p in path_list] - return os.pathsep.join(path_list) - - @property - def user_config_dir(self) -> str: - """ - :return: config directory tied to the user, e.g. ``~/.config/$appname/$version`` or - ``$XDG_CONFIG_HOME/$appname/$version`` - """ - path = os.environ.get("XDG_CONFIG_HOME", "") - if not path.strip(): - path = os.path.expanduser("~/.config") - return self._append_app_name_and_version(path) - - @property - def site_config_dir(self) -> str: - """ - :return: config directories shared by users (if `multipath ` - is enabled and ``XDG_DATA_DIR`` is set and a multi path the response is also a multi path separated by the OS - path separator), e.g. 
``/etc/xdg/$appname/$version`` - """ - # XDG default for $XDG_CONFIG_DIRS only first, if multipath is False - path = os.environ.get("XDG_CONFIG_DIRS", "") - if not path.strip(): - path = "/etc/xdg" - return self._with_multi_path(path) - - @property - def user_cache_dir(self) -> str: - """ - :return: cache directory tied to the user, e.g. ``~/.cache/$appname/$version`` or - ``~/$XDG_CACHE_HOME/$appname/$version`` - """ - path = os.environ.get("XDG_CACHE_HOME", "") - if not path.strip(): - path = os.path.expanduser("~/.cache") - return self._append_app_name_and_version(path) - - @property - def user_state_dir(self) -> str: - """ - :return: state directory tied to the user, e.g. ``~/.local/state/$appname/$version`` or - ``$XDG_STATE_HOME/$appname/$version`` - """ - path = os.environ.get("XDG_STATE_HOME", "") - if not path.strip(): - path = os.path.expanduser("~/.local/state") - return self._append_app_name_and_version(path) - - @property - def user_log_dir(self) -> str: - """ - :return: log directory tied to the user, same as `user_data_dir` if not opinionated else ``log`` in it - """ - path = self.user_cache_dir - if self.opinion: - path = os.path.join(path, "log") - return path - - @property - def user_documents_dir(self) -> str: - """ - :return: documents directory tied to the user, e.g. ``~/Documents`` - """ - documents_dir = _get_user_dirs_folder("XDG_DOCUMENTS_DIR") - if documents_dir is None: - documents_dir = os.environ.get("XDG_DOCUMENTS_DIR", "").strip() - if not documents_dir: - documents_dir = os.path.expanduser("~/Documents") - - return documents_dir - - @property - def user_runtime_dir(self) -> str: - """ - :return: runtime directory tied to the user, e.g. ``/run/user/$(id -u)/$appname/$version`` or - ``$XDG_RUNTIME_DIR/$appname/$version`` - """ - path = os.environ.get("XDG_RUNTIME_DIR", "") - if not path.strip(): - path = f"/run/user/{getuid()}" - return self._append_app_name_and_version(path) - - @property - def site_data_path(self) -> Path: - """:return: data path shared by users. Only return first item, even if ``multipath`` is set to ``True``""" - return self._first_item_as_path_if_multipath(self.site_data_dir) - - @property - def site_config_path(self) -> Path: - """:return: config path shared by the users. Only return first item, even if ``multipath`` is set to ``True``""" - return self._first_item_as_path_if_multipath(self.site_config_dir) - - def _first_item_as_path_if_multipath(self, directory: str) -> Path: - if self.multipath: - # If multipath is True, the first path is returned. - directory = directory.split(os.pathsep)[0] - return Path(directory) - - -def _get_user_dirs_folder(key: str) -> str | None: - """Return directory from user-dirs.dirs config file. 
See https://freedesktop.org/wiki/Software/xdg-user-dirs/""" - user_dirs_config_path = os.path.join(Unix().user_config_dir, "user-dirs.dirs") - if os.path.exists(user_dirs_config_path): - parser = ConfigParser() - - with open(user_dirs_config_path) as stream: - # Add fake section header, so ConfigParser doesn't complain - parser.read_string(f"[top]\n{stream.read()}") - - if key not in parser["top"]: - return None - - path = parser["top"][key].strip('"') - # Handle relative home paths - path = path.replace("$HOME", os.path.expanduser("~")) - return path - - return None - - -__all__ = [ - "Unix", -] diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/__init__.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/__init__.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/allknowingroger/Image-Models-Test119/README.md b/spaces/allknowingroger/Image-Models-Test119/README.md deleted file mode 100644 index 77af92a0c2ef86e2bfc609479cf59bb741dd3132..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test119/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test118 ---- - - \ No newline at end of file diff --git a/spaces/alphahg/academic-paper-translate-summary/app.py b/spaces/alphahg/academic-paper-translate-summary/app.py deleted file mode 100644 index b1f7444bd5940242664b4a3e34b0fcaaa4522619..0000000000000000000000000000000000000000 --- a/spaces/alphahg/academic-paper-translate-summary/app.py +++ /dev/null @@ -1,167 +0,0 @@ -# %% -import gradio as gr -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline -from nltk.tokenize import sent_tokenize -import gc - -import nltk -nltk.download('punkt') - -# from PyKakao import KoGPT -# kogpt_api = KoGPT(service_key = "") -import openai -openai.api_key = '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' -gpt2_tokenizer = AutoTokenizer.from_pretrained('gpt2') - -#en2ko = 'alphahg/m2m100_418M-finetuned-en-to-ko-4770260'#'alphahg/mbart-large-50-finetuned-en-to-ko-8603428-finetuned-en-to-ko-9914408' -en2ko = 'alphahg/mbart-large-50-finetuned-en-to-ko-8603428-finetuned-en-to-ko-9914408' -ko2en = 'alphahg/opus-mt-ko-en-finetuned-ko-to-en-2780616' -ensum = 'allenai/led-large-16384-arxiv' -kosum = 'alphahg/pko-t5-small-finetuned-paper-4564652' #'lcw99/t5-base-korean-text-summary' - -#en_pipe = pipeline('translation', model=en2ko, tokenizer=en2ko, src_lang = "en", tgt_lang = "ko", device_map="auto") -en2ko_model = AutoModelForSeq2SeqLM.from_pretrained(en2ko) - -en_pipe = pipeline('translation', model=en2ko_model, tokenizer=en2ko, src_lang = "en_XX", tgt_lang = "ko_KR") -ko_pipe = pipeline('translation', model=ko2en, tokenizer=ko2en) -style_pipe = pipeline('translation', model=en2ko_model, tokenizer=en2ko, src_lang = "ko_KR", tgt_lang = "ko_KR") - -en_sum = pipeline('summarization', model=ensum, tokenizer=ensum) -ko_sum = pipeline('summarization', model=kosum, tokenizer=kosum) - -def len_tokens(text, pipe): - return 
len(pipe.tokenizer(text)['input_ids']) - -def split_sent(sentences, pipe, max_len=256): - if not sentences: - return [] - - paragraphs = [] - example = sentences[0] - for i in range(1, len(sentences)): - if len_tokens(example + ' ' + sentences[i], pipe) > max_len: - paragraphs.append(example) - example = sentences[i] - else: - example += ' ' + sentences[i] - - paragraphs.append(example) - - return paragraphs - -# chatbot = Chatbot({ -# #"session_token": "eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2R0NNIn0..hV_ujfbYLwBgI-g6.zQW0evUrpYfli2cujTFp1ie5PhthUZayoSY2Chb1Eb8Ow3t6l2-NUwGJcYyxVKQS0aITN3-ph-KzPysnu7dCF9KrC-22DZzs1zMFm3PHEkjb4jD69qndcFEGGH8y4SejfYwvdj4wfKmVnGo3XTNZ31qPSDA2PBoaOMpxWABSqWULMJbS-_Y--wd0YhsqMFlkCQpXyfxSf9yxXlPYt0HR_NgupoBXP-WVYbODSUYFVqa3IsScbSPS-mUY0YrAb8AJUkvej7HiSCN9onyThgTtZgZklwqpB4FesJaC6R3nSXSg5cKDDupGVBUQlPRiemacCV6tXnSC-bCtN9a-l9RRqtX_FJNP8T7Kb75ktuedKKrXXTmk3x7hz_RhYhZ4wXFkbqXexZQXTfQoI2vKlLN73EHBlJOqsDLnOP7zT4Vr2RpBbk1HK5D_uHh5x1X3aBHslHfEQpjjZKiMs0to9DwKzSNHXlNmCeGjT9ZzKVsWYCiseO20IlKxQ63Q_nIbi-6y8e6LWw9O82ESKkkRe8kN-CzGxakKJegmHKQGRPZu9ZEIqwWYqlnahVyWRFOtjfMNN3ncGbQGi54VMyfSqmSvPXecaxsVNzl00gHvmCFBFJBDXM2GTsvEzQsJi1MLyopXtSuiU1anL_kC1eMvew61vd3TtC97ZwlQLaWc6dT14p5NdJpN9ihFpgtxMP1rcQhNTc5fo9BBoKrO4yGOuy0wixJs6ORVdY2o3c653X8PFmrso1XaV3KpBaSmcvszIhL1anJTA8SpnPNRmksEFONcX8AfpQ-4WbckCS737TZYCDulVtvukyVAbtq9cEQP5kXXAGOWKg5lX4nRFynM83f8P-XGIg-XUGE99NRIOCBo28cr04fWFOaJOyHf9eP6Rx5zjNv1qwp4FxhVVP9jlmSTfu97CZSR91L-k8V6jVgbj8F6YUZ6iiu51kaOAqf5de4EUncSFyGLuJfCGTJTPSYYl1lnR6bSfTVHKwP28YzzcU2myMM5B0ZXDwydD900TYXZOCxxLPUbu5-G3roR2KZnuWLXFOiafAvDx-LHYUHSWQZ9ouWcDaQBNsXmfTZtIWHQ8aTZwlNEnN4-uFdlk2Lm35qp1v-8Fp_3aXGQ3CrTy-ryMV0rUPTSMCEA8gVA_mD40zV6Wcb4asc3zsYAuomQ3Iu4iB5wyWGxUIJVzl1C9QaPpAx7vp5u7w-0_rtocVVXFRTZ8aSxNS3QAd62TbVyToIOrsvp4kOWDcqhNp5QBAsJtES9pbO9fiy_SJS83SFMliSFd-jhXfKu0kUYIUb9yaN5QC6eEpgJ7KzhwTcNDtoqyBKMyVTSdUXA9P2Yv2e4r-BVnxlW0RxknQdesK-wZrwuAZt_bnLaHSqFzyWz5AE7pukTBQ2QdVoity_tVURzhTcINh6rvPywm2IVl1gC3FjfhQVfTvHWFtUzNqLN4yUfI0Tc1mGHQlYuxZ_yux2B8HeYb_cyb1rR_mwDiOs3PKOnhfNRdXqXf6RWr7KdjNc0k-CMm13DAYQggmFCmEZW20FiwalKqVq3nFTFDhfp5mxtr0sLCVxGA3eTqC6_i2TAVGqDLjxzfz5WiK7J3FAN2_kmEZLBVXHabwa9kKyCzcgCx6FrxaFidskO6t4dWu3wok95hXMae0Z4ZFs7HVNisM1pkRm7LE3XdvnslKHAJkPr57HsFdlQITJRSx3Tg0EN1LUt80hKx8VGXPv7zBeXP5lni2ixpglMQmiKLiszowGoqu2oJPwougueu5Bj4BLhmoqK8DCtdxl3MYAyxLWWStXQqcEJQw7koYmPNwr4BzI9cQVk81LbPwrXBbJfR7G14e5qV0lULfuU5qVfNt7DU6FbwXmzv6qFI-jOClLzSTKpFzp51wQQ5fh2REs6CPJlL-kiyomJPXcqSeezDCLLwWjI_vIyODFkzt91l-dmFriu3HMkMC1v29AJlfPA_avSiJzJDEI7rb6AbEyT6piqp1TYlWMkI_rJCsCZXIb10Rjd2y9sR-Dz3_FZzRvJUA7BfRlP7Bf04HnYsyMRoJilbuyQ5fB0B2L2nxjYY2zoHJ_x6HTS6tcrAijOO4FSSQngWD9iTKCm6pjW3aZjFyXyjmP82S3VnhEyON390aIL7j9Y0wGnHzOkn54OfyxxGeo2mFAIv9kthL_Fi8d9G_rvvQOBUM2a7kjF5-n8wby0YDujoRl0ETg379HyMVf7F2BHWQ8nAbICRxWZ7EzPLwzrVjPiQVPZklkrVYgEmGxZDrgEG8IeNi7FMgGruaQ1tENczRMXzaApK8k6-FXKhfFIV7dN95tP4k6tnnxRFoMAUWcXwQCzRH8YhID36TAUFdBQ-c52MTogPo1Rki1N49j_e7Mph1OABQX2Fw9-CukT6reQkp3nGrwi0IKnKoyGhHOBHK3kzQwINfjOBbNpOjP-6MX_9kiRTstN2GLte8w0QJQVl84o8ACTjV8N4rhI6xyLKIvqoyZ6jNO3SYs8fEutmZO8-qB0iksIiQHupxQgcmgbAyM.KxAbJWqMvwm0PYtq7MuR6A" - -# }, conversation_id=None, parent_id=None) # You can start a custom conversation -# %% -def translate(text, lang, gpt_fix=False): - from_en = False if lang == '한영' else True - sentences = sent_tokenize(text) - #print(sentences) - if not sentences: - return '' - - paragraphs = split_sent(sentences, en_pipe, max_len=180) if from_en else split_sent(sentences, ko_pipe) - #print(paragraphs) - - ret = [] - for text in paragraphs: - result = en_pipe(text) if from_en else ko_pipe(text) - 
ret.append(result[0]['translation_text']) - - translated = ' '.join(ret) - gc.collect() - - if gpt_fix: - if lang == '한영': - prompt = 'Improve given formal article without adding:' - elif lang == '영한': - prompt = "추가적인 내용없이 주어진 글을 개선해:" - - def fix_sent(sent): - number_of_tokens = len(gpt2_tokenizer(sent)['input_ids']) - - response = openai.Completion.create( - model="text-davinci-003", - prompt=prompt+'\n'+sent, - temperature=0, - max_tokens=number_of_tokens+128, - top_p=1.0, - frequency_penalty=0.0, - presence_penalty=0.0 - ) - - return response.choices[0].text.strip() - - # def fix_sent(sent): - # generated = kogpt_api.generate(prompt+'\n'+sent, max_tokens=256) - # return generated['generations'][0]['text'] - - translated = fix_sent(translated) - - return translated - -#%% -def translate_with_sum(text, lang, gpt_fix=False): - from_en = False if lang == '한영' else True - - if lang == '영한': - summary = en_sum(text, max_length=int(len_tokens(text, en_sum)/2)+32) - text = summary[0]['summary_text'] - - sentences = sent_tokenize(text) - #print(sentences) - if not sentences: - return '' - - paragraphs = split_sent(sentences, en_pipe if from_en else ko_pipe) - #print(paragraphs) - - ret = [] - for text in paragraphs: - result = en_pipe(text) if from_en else ko_pipe(text) - ret.append(result[0]['translation_text']) - - summarized = ' '.join(ret) - if lang == '한영': - summary = en_sum(summarized, max_length=int(len_tokens(summarized, en_sum)/2)+32) - return summary[0]['summary_text'] - - gc.collect() - return summarized - -def summarize(text, lang): - if lang == 'Korean': - summarizer = ko_sum - elif lang == 'English': - summarizer = en_sum - - summary = summarizer(text, max_length=int(len_tokens(text, summarizer) * 0.7))[0]['summary_text'] - return summary - -def translate_styleonly(text): - sentences = sent_tokenize(text) - paragraphs = split_sent(sentences, style_pipe, max_len=180) - #print(paragraphs) - - ret = [] - for text in paragraphs: - result = style_pipe(text) - ret.append(result[0]['translation_text']) - - translated = ' '.join(ret) - gc.collect() - - return translated - -# %% -interface1 = gr.Interface(fn=translate, inputs=["text", gr.Radio(["영한", "한영"], value='영한'), 'checkbox'], outputs="text") -interface2 = gr.Interface(fn=translate_with_sum, inputs=["text", gr.Radio(["영한", "한영"], value='영한')], outputs="text") -parallel_interface = gr.Parallel(interface1, interface2) - -summarize_interface = gr.Interface(fn=summarize, inputs=["text", gr.Radio(["Korean", "English"], value='Korean')], outputs="text") -style_interface = gr.Interface(fn=translate_styleonly, inputs=["text"], outputs="text") - -demo = gr.TabbedInterface([parallel_interface, summarize_interface, style_interface], ['번역 및 요약', '요약', '스타일 번역'], css="footer {visibility: hidden}") # '요약' -demo.launch() # Share the demo -# %% \ No newline at end of file diff --git a/spaces/amasad/sahil2801-replit-code-instruct-glaive/app.py b/spaces/amasad/sahil2801-replit-code-instruct-glaive/app.py deleted file mode 100644 index 976f6a74229ccc0badfdaca594a78558f0afbab4..0000000000000000000000000000000000000000 --- a/spaces/amasad/sahil2801-replit-code-instruct-glaive/app.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import gradio as gr -import torch - -from transformers import AutoTokenizer, AutoModelForCausalLM - -REPO = "sahil2801/replit-code-instruct-glaive" - -description = """#
    Code Generation by Instruction with sahil2801/replit-code-instruct-glaive
    - This model is trained on a large amount of code and fine tuned on code-instruct datasets. You can type an instruction in the ### Input: section and received code generation.""" - -device = "cuda" if torch.cuda.is_available() else "cpu" - -tokenizer = AutoTokenizer.from_pretrained(REPO, trust_remote_code=True) -model = AutoModelForCausalLM.from_pretrained(REPO, torch_dtype=torch.bfloat16, trust_remote_code=True) -model.to(device) - -model.eval() - -custom_css = """ -.gradio-container { - background-color: #0D1525; - color:white -} -#orange-button { - background: #F26207 !important; - color: white; -} -.cm-gutters{ - border: none !important; -} -""" - -def post_processing(prompt, completion): - return prompt + completion - -def code_generation(prompt, max_new_tokens=1024, temperature=0.2, top_p=0.9, eos_token_id=tokenizer.eos_token_id): - input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) - generated_ids = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=True, use_cache=True, temperature=temperature, top_p=top_p, eos_token_id=eos_token_id) - completion = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=False) - return post_processing(prompt, completion) - -demo = gr.Blocks( - css=custom_css -) - -with demo: - gr.Markdown(value=description) - with gr.Row(): - input_col , settings_col = gr.Column(scale=6), gr.Column(scale=6), - with input_col: - code = gr.Code(lines=28,label='Input', value="Below is an instruction that describes a task, paired with an input that provides further context.\n Write a response that appropriately completes the request.\n\n ### Instruction:\nWrite a program to perform the given task.\n\n###Input: \n\n### Response:") - with settings_col: - with gr.Accordion("Generation Settings", open=True): - max_new_tokens= gr.Slider( - minimum=8, - maximum=1024, - step=1, - value=48, - label="Max Tokens", - ) - temperature = gr.Slider( - minimum=0.1, - maximum=2.5, - step=0.1, - value=0.2, - label="Temperature", - ) - - with gr.Row(): - run = gr.Button(elem_id="orange-button", value="Generate Response") - - event = run.click(code_generation, [code, max_new_tokens, temperature], code, api_name="predict") - -demo.queue(max_size=40).launch() \ No newline at end of file diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/backend.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/backend.py deleted file mode 100644 index fd45b94d916512059e4d1f7850b63de6f9da6320..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/backend.py +++ /dev/null @@ -1,176 +0,0 @@ -import re -from datetime import datetime -from g4f import ChatCompletion -from flask import request, Response, stream_with_context -from requests import get -from server.config import special_instructions - - -class Backend_Api: - def __init__(self, bp, config: dict) -> None: - """ - Initialize the Backend_Api class. - :param app: Flask application instance - :param config: Configuration dictionary - """ - self.bp = bp - self.routes = { - '/backend-api/v2/conversation': { - 'function': self._conversation, - 'methods': ['POST'] - } - } - - def _conversation(self): - """ - Handles the conversation route. 
- - :return: Response object containing the generated conversation stream - """ - conversation_id = request.json['conversation_id'] - - try: - jailbreak = request.json['jailbreak'] - model = request.json['model'] - messages = build_messages(jailbreak) - - # Generate response - response = ChatCompletion.create( - model=model, - chatId=conversation_id, - messages=messages - ) - - return Response(stream_with_context(generate_stream(response, jailbreak)), mimetype='text/event-stream') - - except Exception as e: - print(e) - print(e.__traceback__.tb_next) - - return { - '_action': '_ask', - 'success': False, - "error": f"an error occurred {str(e)}" - }, 400 - - -def build_messages(jailbreak): - """ - Build the messages for the conversation. - - :param jailbreak: Jailbreak instruction string - :return: List of messages for the conversation - """ - _conversation = request.json['meta']['content']['conversation'] - internet_access = request.json['meta']['content']['internet_access'] - prompt = request.json['meta']['content']['parts'][0] - - # Add the existing conversation - conversation = _conversation - - # Add web results if enabled - if internet_access: - current_date = datetime.now().strftime("%Y-%m-%d") - query = f'Current date: {current_date}. ' + prompt["content"] - search_results = fetch_search_results(query) - conversation.extend(search_results) - - # Add jailbreak instructions if enabled - if jailbreak_instructions := getJailbreak(jailbreak): - conversation.extend(jailbreak_instructions) - - # Add the prompt - conversation.append(prompt) - - # Reduce conversation size to avoid API Token quantity error - if len(conversation) > 3: - conversation = conversation[-4:] - - return conversation - - -def fetch_search_results(query): - """ - Fetch search results for a given query. - - :param query: Search query string - :return: List of search results - """ - search = get('https://ddg-api.herokuapp.com/search', - params={ - 'query': query, - 'limit': 3, - }) - - snippets = "" - for index, result in enumerate(search.json()): - snippet = f'[{index + 1}] "{result["snippet"]}" URL:{result["link"]}.' - snippets += snippet - - response = "Here are some updated web searches. Use this to improve user response:" - response += snippets - - return [{'role': 'system', 'content': response}] - - -def generate_stream(response, jailbreak): - """ - Generate the conversation stream. - - :param response: Response object from ChatCompletion.create - :param jailbreak: Jailbreak instruction string - :return: Generator object yielding messages in the conversation - """ - if getJailbreak(jailbreak): - response_jailbreak = '' - jailbroken_checked = False - for message in response: - response_jailbreak += message - if jailbroken_checked: - yield message - else: - if response_jailbroken_success(response_jailbreak): - jailbroken_checked = True - if response_jailbroken_failed(response_jailbreak): - yield response_jailbreak - jailbroken_checked = True - else: - yield from response - - -def response_jailbroken_success(response: str) -> bool: - """Check if the response has been jailbroken. - - :param response: Response string - :return: Boolean indicating if the response has been jailbroken - """ - act_match = re.search(r'ACT:', response, flags=re.DOTALL) - return bool(act_match) - - -def response_jailbroken_failed(response): - """ - Check if the response has not been jailbroken. 
- - :param response: Response string - :return: Boolean indicating if the response has not been jailbroken - """ - return False if len(response) < 4 else not (response.startswith("GPT:") or response.startswith("ACT:")) - - -def getJailbreak(jailbreak): - """ - Check if jailbreak instructions are provided. - - :param jailbreak: Jailbreak instruction string - :return: Jailbreak instructions if provided, otherwise None - """ - if jailbreak != "default": - special_instructions[jailbreak][0]['content'] += special_instructions['two_responses_instruction'] - if jailbreak in special_instructions: - special_instructions[jailbreak] - return special_instructions[jailbreak] - else: - return None - else: - return None diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/webui_sd_pipeline.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/webui_sd_pipeline.py deleted file mode 100644 index 21b46152c3167038954f9f170a65647929c2e929..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/webui_sd_pipeline.py +++ /dev/null @@ -1,49 +0,0 @@ -from modules.processing import StableDiffusionProcessingImg2Img -from modules.shared import opts, sd_model -import os - -def get_webui_sd_pipeline(args, root, frame): - import re - assert args.prompt is not None - - # Setup the pipeline - p = StableDiffusionProcessingImg2Img( - sd_model=sd_model, - outpath_samples = opts.outdir_samples or opts.outdir_img2img_samples, - #we'll setup the rest later - ) - - os.makedirs(args.outdir, exist_ok=True) - p.width, p.height = map(lambda x: x - x % 64, (args.W, args.H)) - p.steps = args.steps - p.seed = args.seed - p.sampler_name = args.sampler - p.batch_size = args.n_batch - p.tiling = args.tiling - p.restore_faces = args.restore_faces - p.subseed = args.subseed - p.subseed_strength = args.subseed_strength - p.seed_resize_from_w = args.seed_resize_from_w - p.seed_resize_from_h = args.seed_resize_from_h - p.fill = args.fill - p.ddim_eta = args.ddim_eta - p.batch_size = args.n_samples - p.width = args.W - p.height = args.H - p.seed = args.seed - p.do_not_save_samples = not args.save_sample_per_step - p.sampler_name = args.sampler - p.mask_blur = args.mask_overlay_blur - p.extra_generation_params["Mask blur"] = args.mask_overlay_blur - p.n_iter = 1 - p.steps = args.steps - if opts.img2img_fix_steps: - p.denoising_strength = 1 / (1 - args.strength + 1.0/args.steps) #see https://github.com/deforum-art/deforum-for-automatic1111-webui/issues/3 - else: - p.denoising_strength = 1 - args.strength - p.cfg_scale = args.scale - p.image_cfg_scale = args.pix2pix_img_cfg_scale - p.outpath_samples = root.outpath_samples - - - return p \ No newline at end of file diff --git a/spaces/aodianyun/stable-diffusion-webui/test/basic_features/extras_test.py b/spaces/aodianyun/stable-diffusion-webui/test/basic_features/extras_test.py deleted file mode 100644 index 0170c511fe54cc6bcf49ec7f75ca7c747de41db5..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/test/basic_features/extras_test.py +++ /dev/null @@ -1,54 +0,0 @@ -import unittest -import requests -from gradio.processing_utils import encode_pil_to_base64 -from PIL import Image - -class TestExtrasWorking(unittest.TestCase): - def setUp(self): - self.url_extras_single = "http://localhost:7860/sdapi/v1/extra-single-image" - self.extras_single = { - "resize_mode": 0, - "show_extras_results": True, - 
"gfpgan_visibility": 0, - "codeformer_visibility": 0, - "codeformer_weight": 0, - "upscaling_resize": 2, - "upscaling_resize_w": 128, - "upscaling_resize_h": 128, - "upscaling_crop": True, - "upscaler_1": "None", - "upscaler_2": "None", - "extras_upscaler_2_visibility": 0, - "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png")) - } - - def test_simple_upscaling_performed(self): - self.extras_single["upscaler_1"] = "Lanczos" - self.assertEqual(requests.post(self.url_extras_single, json=self.extras_single).status_code, 200) - - -class TestPngInfoWorking(unittest.TestCase): - def setUp(self): - self.url_png_info = "http://localhost:7860/sdapi/v1/extra-single-image" - self.png_info = { - "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png")) - } - - def test_png_info_performed(self): - self.assertEqual(requests.post(self.url_png_info, json=self.png_info).status_code, 200) - - -class TestInterrogateWorking(unittest.TestCase): - def setUp(self): - self.url_interrogate = "http://localhost:7860/sdapi/v1/extra-single-image" - self.interrogate = { - "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png")), - "model": "clip" - } - - def test_interrogate_performed(self): - self.assertEqual(requests.post(self.url_interrogate, json=self.interrogate).status_code, 200) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/Blowfish.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/Blowfish.py deleted file mode 100644 index 6005ffe2b90694ae241c87404862f5f66db8f271..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/Blowfish.py +++ /dev/null @@ -1,159 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Cipher/Blowfish.py : Blowfish -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== -""" -Module's constants for the modes of operation supported with Blowfish: - -:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` -:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` -:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` -:var MODE_OFB: :ref:`Output FeedBack (OFB) ` -:var MODE_CTR: :ref:`CounTer Mode (CTR) ` -:var MODE_OPENPGP: :ref:`OpenPGP Mode ` -:var MODE_EAX: :ref:`EAX Mode ` -""" - -import sys - -from Crypto.Cipher import _create_cipher -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, c_size_t, - c_uint8_ptr) - -_raw_blowfish_lib = load_pycryptodome_raw_lib( - "Crypto.Cipher._raw_blowfish", - """ - int Blowfish_start_operation(const uint8_t key[], - size_t key_len, - void **pResult); - int Blowfish_encrypt(const void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int Blowfish_decrypt(const void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int Blowfish_stop_operation(void *state); - """ - ) - - -def _create_base_cipher(dict_parameters): - """This method instantiates and returns a smart pointer to - a low-level base cipher. It will absorb named parameters in - the process.""" - - try: - key = dict_parameters.pop("key") - except KeyError: - raise TypeError("Missing 'key' parameter") - - if len(key) not in key_size: - raise ValueError("Incorrect Blowfish key length (%d bytes)" % len(key)) - - start_operation = _raw_blowfish_lib.Blowfish_start_operation - stop_operation = _raw_blowfish_lib.Blowfish_stop_operation - - void_p = VoidPointer() - result = start_operation(c_uint8_ptr(key), - c_size_t(len(key)), - void_p.address_of()) - if result: - raise ValueError("Error %X while instantiating the Blowfish cipher" - % result) - return SmartPointer(void_p.get(), stop_operation) - - -def new(key, mode, *args, **kwargs): - """Create a new Blowfish cipher - - :param key: - The secret key to use in the symmetric cipher. - Its length can vary from 5 to 56 bytes. - :type key: bytes, bytearray, memoryview - - :param mode: - The chaining mode to use for encryption or decryption. - :type mode: One of the supported ``MODE_*`` constants - - :Keyword Arguments: - * **iv** (*bytes*, *bytearray*, *memoryview*) -- - (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, - and ``MODE_OPENPGP`` modes). - - The initialization vector to use for encryption or decryption. - - For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. - - For ``MODE_OPENPGP`` mode only, - it must be 8 bytes long for encryption - and 10 bytes for decryption (in the latter case, it is - actually the *encrypted* IV which was prefixed to the ciphertext). - - If not provided, a random byte string is generated (you must then - read its value with the :attr:`iv` attribute). - - * **nonce** (*bytes*, *bytearray*, *memoryview*) -- - (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). - - A value that must never be reused for any other encryption done - with this key. - - For ``MODE_EAX`` there are no - restrictions on its length (recommended: **16** bytes). - - For ``MODE_CTR``, its length must be in the range **[0..7]**. - - If not provided for ``MODE_EAX``, a random byte string is generated (you - can read it back via the ``nonce`` attribute). - - * **segment_size** (*integer*) -- - (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext - are segmented in. It must be a multiple of 8. - If not specified, it will be assumed to be 8. 
- - * **mac_len** : (*integer*) -- - (Only ``MODE_EAX``) - Length of the authentication tag, in bytes. - It must be no longer than 8 (default). - - * **initial_value** : (*integer*) -- - (Only ``MODE_CTR``). The initial value for the counter within - the counter block. By default it is **0**. - - :Return: a Blowfish object, of the applicable mode. - """ - - return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) - -MODE_ECB = 1 -MODE_CBC = 2 -MODE_CFB = 3 -MODE_OFB = 5 -MODE_CTR = 6 -MODE_OPENPGP = 7 -MODE_EAX = 9 - -# Size of a data block (in bytes) -block_size = 8 -# Size of a key (in bytes) -key_size = range(4, 56 + 1) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py deleted file mode 100644 index a710462ed68cf64ee3b5fc76d200e6061d648672..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py +++ /dev/null @@ -1,367 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Cipher/Salsa20.py: Self-test for the Salsa20 stream cipher -# -# Written in 2013 by Fabrizio Tarizzo -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -"""Self-test suite for Crypto.Cipher.Salsa20""" - -import unittest - -from Crypto.Util.py3compat import bchr - -from Crypto.SelfTest.st_common import list_test_cases - -from Crypto.Cipher import Salsa20 - -from .common import make_stream_tests - -# This is a list of (plaintext, ciphertext, key[, description[, params]]) -# tuples. 
-test_data = [ - # Test vectors are taken from - # http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/verified.test-vectors - ( '00' * 512, - '4dfa5e481da23ea09a31022050859936da52fcee218005164f267cb65f5cfd7f' - + '2b4f97e0ff16924a52df269515110a07f9e460bc65ef95da58f740b7d1dbb0aa' - + 'd64cec189c7eb8c6bbf3d7376c80a481d43e628701f6a27afb9fe23919f24114' - + '8db44f70d7063efcc3dd55a0893a613c3c6fe1c127bd6f59910589293bb6ef9e' - + 'e24819066dee1a64f49b0bbad5988635272b169af861f85df881939f29ada6fd' - + '0241410e8d332ae4798d929434a2630de451ec4e0169694cbaa7ebb121ea6a2b' - + 'da9c1581f429e0a00f7d67e23b730676783b262e8eb43a25f55fb90b3e753aef' - + '8c6713ec66c51881111593ccb3e8cb8f8de124080501eeeb389c4bcb6977cf95' - + '7d5789631eb4554400e1e025935dfa7b3e9039d61bdc58a8697d36815bf1985c' - + 'efdf7ae112e5bb81e37ecf0616ce7147fc08a93a367e08631f23c03b00a8da2f' - + 'aa5024e5c8d30aca43fc2d5082067b21b234bc741d68fb292c6012c3764ccee3' - + '1e364a5403e00cfee338a21a01e7d3cefd5a770ca0ab48c435ea6116435f7ad8' - + '30b217b49f978a68e207ed9f462af7fb195b2115fe8f24f152e4ddc32202d6f2' - + 'b52fafbcfbc202d8a259a611e901d3f62d065eb13f09bbc45cd45119b843efaa' - + 'b375703739daced4dd4059fd71c3c47fc2f9939670fad4a46066adcc6a564578' - + '3308b90ffb72be04a6b147cbe38cc0c3b9267c296a92a7c69873f9f263be9703', - '80000000000000000000000000000000', - '128 bits key, set 1, vector 0', - dict (iv='00'*8)), - - ( '00' * 512, - 'e3be8fdd8beca2e3ea8ef9475b29a6e7003951e1097a5c38d23b7a5fad9f6844' - + 'b22c97559e2723c7cbbd3fe4fc8d9a0744652a83e72a9c461876af4d7ef1a117' - + '8da2b74eef1b6283e7e20166abcae538e9716e4669e2816b6b20c5c356802001' - + 'cc1403a9a117d12a2669f456366d6ebb0f1246f1265150f793cdb4b253e348ae' - + '203d89bc025e802a7e0e00621d70aa36b7e07cb1e7d5b38d5e222b8b0e4b8407' - + '0142b1e29504767d76824850320b5368129fdd74e861b498e3be8d16f2d7d169' - + '57be81f47b17d9ae7c4ff15429a73e10acf250ed3a90a93c711308a74c6216a9' - + 'ed84cd126da7f28e8abf8bb63517e1ca98e712f4fb2e1a6aed9fdc73291faa17' - + '958211c4ba2ebd5838c635edb81f513a91a294e194f1c039aeec657dce40aa7e' - + '7c0af57cacefa40c9f14b71a4b3456a63e162ec7d8d10b8ffb1810d71001b618' - + '2f9f73da53b85405c11f7b2d890fa8ae0c7f2e926d8a98c7ec4e91b65120e988' - + '349631a700c6facec3471cb0413656e75e309456584084d7e12c5b43a41c43ed' - + '9a048abd9b880da65f6a665a20fe7b77cd292fe62cae644b7f7df69f32bdb331' - + '903e6505ce44fdc293920c6a9ec7057e23df7dad298f82ddf4efb7fdc7bfc622' - + '696afcfd0cddcc83c7e77f11a649d79acdc3354e9635ff137e929933a0bd6f53' - + '77efa105a3a4266b7c0d089d08f1e855cc32b15b93784a36e56a76cc64bc8477', - '8000000000000000000000000000000000000000000000000000000000000000', - '256 bits key, set 1, vector 0', - dict (iv='00'*8)), - - ( '00' * 512, - '169060ccb42bea7bee4d8012a02f3635eb7bca12859fa159cd559094b3507db8' - + '01735d1a1300102a9c9415546829cbd2021ba217b39b81d89c55b13d0c603359' - + '3f84159a3c84f4b4f4a0edcd9d38ff261a737909e0b66d68b5cac496f3a5be99' - + 'cb12c321ab711afaab36cc0947955e1a9bb952ed54425e7711279fbc81bb83f5' - + '6e55cea44e6daddb05858a153ea6213b3350c12aa1a83ef2726f09485fa71790' - + 'f9b9f922c7dda1113b1f9d56658ed3402803f511bc1f122601d5e7f0ff036e23' - + '23ef24bb24195b9fd574823cd8a40c29d86bd35c191e2038779ff696c712b6d8' - + '2e7014dbe1ac5d527af076c088c4a8d44317958189f6ef54933a7e0816b5b916' - + 'd8f12ed8afe9422b85e5cc9b8adec9d6cfabe8dbc1082bccc02f5a7266aa074c' - + 'a284e583a35837798cc0e69d4ce937653b8cdd65ce414b89138615ccb165ad19' - + '3c6b9c3d05eef4be921a10ea811fe61d11c6867600188e065daff90b509ec56b' - + 'd41e7e8968c478c78d590c2d2ee24ea009c8f49bc3d81672cfc47895a9e21c9a' - + 
'471ebf8e294bee5d2de436ac8d052bf31111b345f1da23c3a4d13b9fc5f0900a' - + 'a298f98f538973b8fad40d4d159777de2cfe2a3dead1645ddb49794827dba040' - + 'f70a0ff4ecd155e0f033604693a51e2363880e2ecf98699e7174af7c2c6b0fc6' - + '59ae329599a3949272a37b9b2183a0910922a3f325ae124dcbdd735364055ceb', - '09090909090909090909090909090909', - '128 bits key, set 2, vector 9', - dict (iv='00'*8)), - - ( '00' * 512, - '7041e747ceb22ed7812985465f50333124f971da1c5d6efe5ca201b886f31046' - + 'e757e5c3ec914f60ed1f6bce2819b6810953f12b8ba1199bf82d746a8b8a88f1' - + '142002978ec4c35b95dc2c82990f9e847a0ab45f2ca72625f5190c820f29f3aa' - + 'f5f0b5572b06b70a144f2a240c3b3098d4831fa1ce1459f8d1df226a6a79b0ab' - + '41e91799ef31b5ff3d756c19126b19025858ee70fbd69f2be955cb011c005e31' - + '32b271b378f39b0cb594e95c99ce6ff17735a541891845bbf0450afcb4a850b9' - + '4ee90afb713ae7e01295c74381180a3816d7020d5a396c0d97aaa783eaabb6ec' - + '44d5111157f2212d1b1b8fca7893e8b520cd482418c272ab119b569a2b9598eb' - + '355624d12e79adab81153b58cd22eaf1b2a32395dedc4a1c66f4d274070b9800' - + 'ea95766f0245a8295f8aadb36ddbbdfa936417c8dbc6235d19494036964d3e70' - + 'b125b0f800c3d53881d9d11e7970f827c2f9556935cd29e927b0aceb8cae5fd4' - + '0fd88a8854010a33db94c96c98735858f1c5df6844f864feaca8f41539313e7f' - + '3c0610214912cd5e6362197646207e2d64cd5b26c9dfe0822629dcbeb16662e8' - + '9ff5bf5cf2e499138a5e27bd5027329d0e68ddf53103e9e409523662e27f61f6' - + '5cf38c1232023e6a6ef66c315bcb2a4328642faabb7ca1e889e039e7c444b34b' - + 'b3443f596ac730f3df3dfcdb343c307c80f76e43e8898c5e8f43dc3bb280add0', - '0909090909090909090909090909090909090909090909090909090909090909', - '256 bits key, set 2, vector 9', - dict (iv='00'*8)), - - ( '00' * 1024, - '71daee5142d0728b41b6597933ebf467e43279e30978677078941602629cbf68' - + 'b73d6bd2c95f118d2b3e6ec955dabb6dc61c4143bc9a9b32b99dbe6866166dc0' - + '8631b7d6553050303d7252c264d3a90d26c853634813e09ad7545a6ce7e84a5d' - + 'fc75ec43431207d5319970b0faadb0e1510625bb54372c8515e28e2accf0a993' - + '0ad15f431874923d2a59e20d9f2a5367dba6051564f150287debb1db536ff9b0' - + '9ad981f25e5010d85d76ee0c305f755b25e6f09341e0812f95c94f42eead346e' - + '81f39c58c5faa2c88953dc0cac90469db2063cb5cdb22c9eae22afbf0506fca4' - + '1dc710b846fbdfe3c46883dd118f3a5e8b11b6afd9e71680d8666557301a2daa' - + 'fb9496c559784d35a035360885f9b17bd7191977deea932b981ebdb29057ae3c' - + '92cfeff5e6c5d0cb62f209ce342d4e35c69646ccd14e53350e488bb310a32f8b' - + '0248e70acc5b473df537ced3f81a014d4083932bedd62ed0e447b6766cd2604b' - + '706e9b346c4468beb46a34ecf1610ebd38331d52bf33346afec15eefb2a7699e' - + '8759db5a1f636a48a039688e39de34d995df9f27ed9edc8dd795e39e53d9d925' - + 'b278010565ff665269042f05096d94da3433d957ec13d2fd82a0066283d0d1ee' - + 'b81bf0ef133b7fd90248b8ffb499b2414cd4fa003093ff0864575a43749bf596' - + '02f26c717fa96b1d057697db08ebc3fa664a016a67dcef8807577cc3a09385d3' - + 'f4dc79b34364bb3b166ce65fe1dd28e3950fe6fa81063f7b16ce1c0e6daac1f8' - + '188455b77752045e863c9b256ad92bc6e2d08314c5bba191c274f42dfbb3d652' - + 'bb771956555e880f84cd8b827a4c5a52f3a099fa0259bd4aac3efd541f191170' - + '4412d6e85fbcc628b335875b9fef24807f6e1bc66c3186159e1e7f5a13913e02' - + 'd241ce2efdbcaa275039fb14eac5923d17ffbc7f1abd3b45e92127575bfbabf9' - + '3a257ebef0aa1437b326e41b585af572f7239c33b32981a1577a4f629b027e1e' - + 'b49d58cc497e944d79cef44357c2bf25442ab779651e991147bf79d6fd3a8868' - + '0cd3b1748e07fd10d78aceef6db8a5e563570d40127f754146c34a440f2a991a' - + '23fa39d365141f255041f2135c5cba4373452c114da1801bacca38610e3a6524' - + '2b822d32de4ab5a7d3cf9b61b37493c863bd12e2cae10530cddcda2cb7a5436b' - + 
'ef8988d4d24e8cdc31b2d2a3586340bc5141f8f6632d0dd543bfed81eb471ba1' - + 'f3dc2225a15ffddcc03eb48f44e27e2aa390598adf83f15c6608a5f18d4dfcf0' - + 'f547d467a4d70b281c83a595d7660d0b62de78b9cca023cca89d7b1f83484638' - + '0e228c25f049184a612ef5bb3d37454e6cfa5b10dceda619d898a699b3c8981a' - + '173407844bb89b4287bf57dd6600c79e352c681d74b03fa7ea0d7bf6ad69f8a6' - + '8ecb001963bd2dd8a2baa0083ec09751cd9742402ad716be16d5c052304cfca1', - '0F62B5085BAE0154A7FA4DA0F34699EC', - '128 bits key, Set 6, vector# 3', - dict (iv='288FF65DC42B92F9')), - - ( '00' * 1024, - '5e5e71f90199340304abb22a37b6625bf883fb89ce3b21f54a10b81066ef87da' - + '30b77699aa7379da595c77dd59542da208e5954f89e40eb7aa80a84a6176663f' - + 'd910cde567cf1ff60f7040548d8f376bfd1f44c4774aac37410ede7d5c3463fc' - + '4508a603201d8495ad257894e5eb1914b53e8da5e4bf2bc83ac87ce55cc67df7' - + '093d9853d2a83a9c8be969175df7c807a17156df768445dd0874a9271c6537f5' - + 'ce0466473582375f067fa4fcdaf65dbc0139cd75e8c21a482f28c0fb8c3d9f94' - + '22606cc8e88fe28fe73ec3cb10ff0e8cc5f2a49e540f007265c65b7130bfdb98' - + '795b1df9522da46e48b30e55d9f0d787955ece720205b29c85f3ad9be33b4459' - + '7d21b54d06c9a60b04b8e640c64e566e51566730e86cf128ab14174f91bd8981' - + 'a6fb00fe587bbd6c38b5a1dfdb04ea7e61536fd229f957aa9b070ca931358e85' - + '11b92c53c523cb54828fb1513c5636fa9a0645b4a3c922c0db94986d92f314ff' - + '7852c03b231e4dceea5dd8cced621869cff818daf3c270ff3c8be2e5c74be767' - + 'a4e1fdf3327a934fe31e46df5a74ae2021cee021d958c4f615263d99a5ddae7f' - + 'eab45e6eccbafefe4761c57750847b7e75ee2e2f14333c0779ce4678f47b1e1b' - + '760a03a5f17d6e91d4b42313b3f1077ee270e432fe04917ed1fc8babebf7c941' - + '42b80dfb44a28a2a3e59093027606f6860bfb8c2e5897078cfccda7314c70035' - + 'f137de6f05daa035891d5f6f76e1df0fce1112a2ff0ac2bd3534b5d1bf4c7165' - + 'fb40a1b6eacb7f295711c4907ae457514a7010f3a342b4427593d61ba993bc59' - + '8bd09c56b9ee53aac5dd861fa4b4bb53888952a4aa9d8ca8671582de716270e1' - + '97375b3ee49e51fa2bf4ef32015dd9a764d966aa2ae541592d0aa650849e99ca' - + '5c6c39beebf516457cc32fe4c105bff314a12f1ec94bdf4d626f5d9b1cbbde42' - + 'e5733f0885765ba29e2e82c829d312f5fc7e180679ac84826c08d0a644b326d0' - + '44da0fdcc75fa53cfe4ced0437fa4df5a7ecbca8b4cb7c4a9ecf9a60d00a56eb' - + '81da52adc21f508dbb60a9503a3cc94a896616d86020d5b0e5c637329b6d396a' - + '41a21ba2c4a9493cf33fa2d4f10f77d5b12fdad7e478ccfe79b74851fc96a7ca' - + '6320c5efd561a222c0ab0fb44bbda0e42149611d2262bb7d1719150fa798718a' - + '0eec63ee297cad459869c8b0f06c4e2b56cbac03cd2605b2a924efedf85ec8f1' - + '9b0b6c90e7cbd933223ffeb1b3a3f9677657905829294c4c70acdb8b0891b47d' - + '0875d0cd6c0f4efe2917fc44b581ef0d1e4280197065d07da34ab33283364552' - + 'efad0bd9257b059acdd0a6f246812feb69e7e76065f27dbc2eee94da9cc41835' - + 'bf826e36e5cebe5d4d6a37a6a666246290ce51a0c082718ab0ec855668db1add' - + 'a658e5f257e0db39384d02e6145c4c00eaa079098f6d820d872de711b6ed08cf', - '0F62B5085BAE0154A7FA4DA0F34699EC3F92E5388BDE3184D72A7DD02376C91C', - '256 bits key, Set 6, vector# 3', - dict (iv='288FF65DC42B92F9')), - -] - - -class KeyLength(unittest.TestCase): - - def runTest(self): - - nonce = bchr(0) * 8 - for key_length in (15, 30, 33): - key = bchr(1) * key_length - self.assertRaises(ValueError, Salsa20.new, key, nonce) - - -class NonceTests(unittest.TestCase): - - def test_invalid_nonce_length(self): - key = bchr(1) * 16 - self.assertRaises(ValueError, Salsa20.new, key, bchr(0) * 7) - self.assertRaises(ValueError, Salsa20.new, key, bchr(0) * 9) - - def test_default_nonce(self): - - cipher1 = Salsa20.new(bchr(1) * 16) - cipher2 = Salsa20.new(bchr(1) * 16) - self.assertEqual(len(cipher1.nonce), 
8) - self.assertNotEqual(cipher1.nonce, cipher2.nonce) - - -class ByteArrayTest(unittest.TestCase): - """Verify we can encrypt or decrypt bytearrays""" - - def runTest(self): - - data = b"0123" - key = b"9" * 32 - nonce = b"t" * 8 - - # Encryption - data_ba = bytearray(data) - key_ba = bytearray(key) - nonce_ba = bytearray(nonce) - - cipher1 = Salsa20.new(key=key, nonce=nonce) - ct = cipher1.encrypt(data) - - cipher2 = Salsa20.new(key=key_ba, nonce=nonce_ba) - key_ba[:1] = b'\xFF' - nonce_ba[:1] = b'\xFF' - ct_test = cipher2.encrypt(data_ba) - - self.assertEqual(ct, ct_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decryption - key_ba = bytearray(key) - nonce_ba = bytearray(nonce) - ct_ba = bytearray(ct) - - cipher3 = Salsa20.new(key=key_ba, nonce=nonce_ba) - key_ba[:1] = b'\xFF' - nonce_ba[:1] = b'\xFF' - pt_test = cipher3.decrypt(ct_ba) - - self.assertEqual(data, pt_test) - - -class MemoryviewTest(unittest.TestCase): - """Verify we can encrypt or decrypt bytearrays""" - - def runTest(self): - - data = b"0123" - key = b"9" * 32 - nonce = b"t" * 8 - - # Encryption - data_mv = memoryview(bytearray(data)) - key_mv = memoryview(bytearray(key)) - nonce_mv = memoryview(bytearray(nonce)) - - cipher1 = Salsa20.new(key=key, nonce=nonce) - ct = cipher1.encrypt(data) - - cipher2 = Salsa20.new(key=key_mv, nonce=nonce_mv) - key_mv[:1] = b'\xFF' - nonce_mv[:1] = b'\xFF' - ct_test = cipher2.encrypt(data_mv) - - self.assertEqual(ct, ct_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decryption - key_mv = memoryview(bytearray(key)) - nonce_mv = memoryview(bytearray(nonce)) - ct_mv = memoryview(bytearray(ct)) - - cipher3 = Salsa20.new(key=key_mv, nonce=nonce_mv) - key_mv[:1] = b'\xFF' - nonce_mv[:1] = b'\xFF' - pt_test = cipher3.decrypt(ct_mv) - - self.assertEqual(data, pt_test) - - -class TestOutput(unittest.TestCase): - - def runTest(self): - # Encrypt/Decrypt data and test output parameter - - key = b'4' * 32 - nonce = b'5' * 8 - cipher = Salsa20.new(key=key, nonce=nonce) - - pt = b'5' * 300 - ct = cipher.encrypt(pt) - - output = bytearray(len(pt)) - cipher = Salsa20.new(key=key, nonce=nonce) - res = cipher.encrypt(pt, output=output) - self.assertEqual(ct, output) - self.assertEqual(res, None) - - cipher = Salsa20.new(key=key, nonce=nonce) - res = cipher.decrypt(ct, output=output) - self.assertEqual(pt, output) - self.assertEqual(res, None) - - output = memoryview(bytearray(len(pt))) - cipher = Salsa20.new(key=key, nonce=nonce) - cipher.encrypt(pt, output=output) - self.assertEqual(ct, output) - - cipher = Salsa20.new(key=key, nonce=nonce) - cipher.decrypt(ct, output=output) - self.assertEqual(pt, output) - - cipher = Salsa20.new(key=key, nonce=nonce) - self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*len(pt)) - - cipher = Salsa20.new(key=key, nonce=nonce) - self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*len(ct)) - - shorter_output = bytearray(len(pt) - 1) - - cipher = Salsa20.new(key=key, nonce=nonce) - self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output) - - cipher = Salsa20.new(key=key, nonce=nonce) - self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output) - - -def get_tests(config={}): - tests = make_stream_tests(Salsa20, "Salsa20", test_data) - tests.append(KeyLength()) - tests += list_test_cases(NonceTests) - tests.append(ByteArrayTest()) - tests.append(MemoryviewTest()) - tests.append(TestOutput()) - - return tests - - -if __name__ == '__main__': - import unittest - suite = lambda: 
unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/StringTools.c b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/StringTools.c deleted file mode 100644 index 35241c64a463a835f016f3369c22a10877b44dbf..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/StringTools.c +++ /dev/null @@ -1,1195 +0,0 @@ - -//////////////////// IncludeStringH.proto //////////////////// - -#include - -//////////////////// IncludeCppStringH.proto //////////////////// - -#include - -//////////////////// InitStrings.proto //////////////////// - -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/ - -//////////////////// InitStrings //////////////////// - -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else /* Python 3+ has unicode identifiers */ - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - // initialise cached hash value - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -//////////////////// BytesContains.proto //////////////////// - -static CYTHON_INLINE int __Pyx_BytesContains(PyObject* bytes, char character); /*proto*/ - -//////////////////// BytesContains //////////////////// -//@requires: IncludeStringH - -static CYTHON_INLINE int __Pyx_BytesContains(PyObject* bytes, char character) { - const Py_ssize_t length = PyBytes_GET_SIZE(bytes); - char* char_start = PyBytes_AS_STRING(bytes); - return memchr(char_start, (unsigned char)character, (size_t)length) != NULL; -} - - -//////////////////// PyUCS4InUnicode.proto //////////////////// - -static CYTHON_INLINE int __Pyx_UnicodeContainsUCS4(PyObject* unicode, Py_UCS4 character); /*proto*/ - -//////////////////// PyUCS4InUnicode //////////////////// - -#if PY_VERSION_HEX < 0x03090000 || (defined(PyUnicode_WCHAR_KIND) && defined(PyUnicode_AS_UNICODE)) - -#if PY_VERSION_HEX < 0x03090000 -#define __Pyx_PyUnicode_AS_UNICODE(op) PyUnicode_AS_UNICODE(op) -#define __Pyx_PyUnicode_GET_SIZE(op) PyUnicode_GET_SIZE(op) -#else -// Avoid calling deprecated C-API functions in Py3.9+ that PEP-623 schedules for removal in Py3.12. 
-// https://www.python.org/dev/peps/pep-0623/ -#define __Pyx_PyUnicode_AS_UNICODE(op) (((PyASCIIObject *)(op))->wstr) -#define __Pyx_PyUnicode_GET_SIZE(op) ((PyCompactUnicodeObject *)(op))->wstr_length -#endif - -#if !defined(Py_UNICODE_SIZE) || Py_UNICODE_SIZE == 2 -static int __Pyx_PyUnicodeBufferContainsUCS4_SP(Py_UNICODE* buffer, Py_ssize_t length, Py_UCS4 character) { - /* handle surrogate pairs for Py_UNICODE buffers in 16bit Unicode builds */ - Py_UNICODE high_val, low_val; - Py_UNICODE* pos; - high_val = (Py_UNICODE) (0xD800 | (((character - 0x10000) >> 10) & ((1<<10)-1))); - low_val = (Py_UNICODE) (0xDC00 | ( (character - 0x10000) & ((1<<10)-1))); - for (pos=buffer; pos < buffer+length-1; pos++) { - if (unlikely((high_val == pos[0]) & (low_val == pos[1]))) return 1; - } - return 0; -} -#endif - -static int __Pyx_PyUnicodeBufferContainsUCS4_BMP(Py_UNICODE* buffer, Py_ssize_t length, Py_UCS4 character) { - Py_UNICODE uchar; - Py_UNICODE* pos; - uchar = (Py_UNICODE) character; - for (pos=buffer; pos < buffer+length; pos++) { - if (unlikely(uchar == pos[0])) return 1; - } - return 0; -} -#endif - -static CYTHON_INLINE int __Pyx_UnicodeContainsUCS4(PyObject* unicode, Py_UCS4 character) { -#if CYTHON_PEP393_ENABLED - const int kind = PyUnicode_KIND(unicode); - #ifdef PyUnicode_WCHAR_KIND - if (likely(kind != PyUnicode_WCHAR_KIND)) - #endif - { - Py_ssize_t i; - const void* udata = PyUnicode_DATA(unicode); - const Py_ssize_t length = PyUnicode_GET_LENGTH(unicode); - for (i=0; i < length; i++) { - if (unlikely(character == PyUnicode_READ(kind, udata, i))) return 1; - } - return 0; - } -#elif PY_VERSION_HEX >= 0x03090000 - #error Cannot use "UChar in Unicode" in Python 3.9 without PEP-393 unicode strings. -#elif !defined(PyUnicode_AS_UNICODE) - #error Cannot use "UChar in Unicode" in Python < 3.9 without Py_UNICODE support. -#endif - -#if PY_VERSION_HEX < 0x03090000 || (defined(PyUnicode_WCHAR_KIND) && defined(PyUnicode_AS_UNICODE)) -#if !defined(Py_UNICODE_SIZE) || Py_UNICODE_SIZE == 2 - if ((sizeof(Py_UNICODE) == 2) && unlikely(character > 65535)) { - return __Pyx_PyUnicodeBufferContainsUCS4_SP( - __Pyx_PyUnicode_AS_UNICODE(unicode), - __Pyx_PyUnicode_GET_SIZE(unicode), - character); - } else -#endif - { - return __Pyx_PyUnicodeBufferContainsUCS4_BMP( - __Pyx_PyUnicode_AS_UNICODE(unicode), - __Pyx_PyUnicode_GET_SIZE(unicode), - character); - - } -#endif -} - - -//////////////////// PyUnicodeContains.proto //////////////////// - -static CYTHON_INLINE int __Pyx_PyUnicode_ContainsTF(PyObject* substring, PyObject* text, int eq) { - int result = PyUnicode_Contains(text, substring); - return unlikely(result < 0) ? 
result : (result == (eq == Py_EQ)); -} - - -//////////////////// CStringEquals.proto //////////////////// - -static CYTHON_INLINE int __Pyx_StrEq(const char *, const char *); /*proto*/ - -//////////////////// CStringEquals //////////////////// - -static CYTHON_INLINE int __Pyx_StrEq(const char *s1, const char *s2) { - while (*s1 != '\0' && *s1 == *s2) { s1++; s2++; } - return *s1 == *s2; -} - - -//////////////////// StrEquals.proto //////////////////// -//@requires: BytesEquals -//@requires: UnicodeEquals - -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals -#else -#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals -#endif - - -//////////////////// UnicodeEquals.proto //////////////////// - -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); /*proto*/ - -//////////////////// UnicodeEquals //////////////////// -//@requires: BytesEquals - -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - /* as done by PyObject_RichCompareBool(); also catches the (interned) empty string */ - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - // len(s1) == len(s2) >= 1 (empty string is interned, and "s1 is not s2") - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - - -//////////////////// BytesEquals.proto //////////////////// - -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); /*proto*/ - -//////////////////// BytesEquals //////////////////// -//@requires: IncludeStringH - -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - /* as done by PyObject_RichCompareBool(); also catches the (interned) empty string */ - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - // len(s1) == len(s2) >= 1 (empty string is interned, and "s1 is not s2") - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -//////////////////// GetItemIntByteArray.proto //////////////////// - -#define __Pyx_GetItemInt_ByteArray(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck) \ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ? 
\ - __Pyx_GetItemInt_ByteArray_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) : \ - (PyErr_SetString(PyExc_IndexError, "bytearray index out of range"), -1)) - -static CYTHON_INLINE int __Pyx_GetItemInt_ByteArray_Fast(PyObject* string, Py_ssize_t i, - int wraparound, int boundscheck); - -//////////////////// GetItemIntByteArray //////////////////// - -static CYTHON_INLINE int __Pyx_GetItemInt_ByteArray_Fast(PyObject* string, Py_ssize_t i, - int wraparound, int boundscheck) { - Py_ssize_t length; - if (wraparound | boundscheck) { - length = PyByteArray_GET_SIZE(string); - if (wraparound & unlikely(i < 0)) i += length; - if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) { - return (unsigned char) (PyByteArray_AS_STRING(string)[i]); - } else { - PyErr_SetString(PyExc_IndexError, "bytearray index out of range"); - return -1; - } - } else { - return (unsigned char) (PyByteArray_AS_STRING(string)[i]); - } -} - - -//////////////////// SetItemIntByteArray.proto //////////////////// - -#define __Pyx_SetItemInt_ByteArray(o, i, v, type, is_signed, to_py_func, is_list, wraparound, boundscheck) \ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ? \ - __Pyx_SetItemInt_ByteArray_Fast(o, (Py_ssize_t)i, v, wraparound, boundscheck) : \ - (PyErr_SetString(PyExc_IndexError, "bytearray index out of range"), -1)) - -static CYTHON_INLINE int __Pyx_SetItemInt_ByteArray_Fast(PyObject* string, Py_ssize_t i, unsigned char v, - int wraparound, int boundscheck); - -//////////////////// SetItemIntByteArray //////////////////// - -static CYTHON_INLINE int __Pyx_SetItemInt_ByteArray_Fast(PyObject* string, Py_ssize_t i, unsigned char v, - int wraparound, int boundscheck) { - Py_ssize_t length; - if (wraparound | boundscheck) { - length = PyByteArray_GET_SIZE(string); - if (wraparound & unlikely(i < 0)) i += length; - if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) { - PyByteArray_AS_STRING(string)[i] = (char) v; - return 0; - } else { - PyErr_SetString(PyExc_IndexError, "bytearray index out of range"); - return -1; - } - } else { - PyByteArray_AS_STRING(string)[i] = (char) v; - return 0; - } -} - - -//////////////////// GetItemIntUnicode.proto //////////////////// - -#define __Pyx_GetItemInt_Unicode(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck) \ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ? 
\ - __Pyx_GetItemInt_Unicode_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) : \ - (PyErr_SetString(PyExc_IndexError, "string index out of range"), (Py_UCS4)-1)) - -static CYTHON_INLINE Py_UCS4 __Pyx_GetItemInt_Unicode_Fast(PyObject* ustring, Py_ssize_t i, - int wraparound, int boundscheck); - -//////////////////// GetItemIntUnicode //////////////////// - -static CYTHON_INLINE Py_UCS4 __Pyx_GetItemInt_Unicode_Fast(PyObject* ustring, Py_ssize_t i, - int wraparound, int boundscheck) { - Py_ssize_t length; - if (unlikely(__Pyx_PyUnicode_READY(ustring) < 0)) return (Py_UCS4)-1; - if (wraparound | boundscheck) { - length = __Pyx_PyUnicode_GET_LENGTH(ustring); - if (wraparound & unlikely(i < 0)) i += length; - if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) { - return __Pyx_PyUnicode_READ_CHAR(ustring, i); - } else { - PyErr_SetString(PyExc_IndexError, "string index out of range"); - return (Py_UCS4)-1; - } - } else { - return __Pyx_PyUnicode_READ_CHAR(ustring, i); - } -} - - -/////////////// decode_c_string_utf16.proto /////////////// - -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 0; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = -1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} - -/////////////// decode_cpp_string.proto /////////////// -//@requires: IncludeCppStringH -//@requires: decode_c_bytes - -static CYTHON_INLINE PyObject* __Pyx_decode_cpp_string( - std::string cppstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - return __Pyx_decode_c_bytes( - cppstring.data(), cppstring.size(), start, stop, encoding, errors, decode_func); -} - -/////////////// decode_c_string.proto /////////////// - -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); - -/////////////// decode_c_string /////////////// -//@requires: IncludeStringH -//@requires: decode_c_string_utf16 -//@substitute: naming - -/* duplicate code to avoid calling strlen() if start >= 0 and stop >= 0 */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - Py_ssize_t length; - if (unlikely((start < 0) | (stop < 0))) { - size_t slen = strlen(cstring); - if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, - "c-string too long to convert to Python"); - return NULL; - } - length = (Py_ssize_t) slen; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - } - if (unlikely(stop <= start)) - return __Pyx_NewRef($empty_unicode); - length = stop - start; - cstring += start; - if (decode_func) { - return decode_func(cstring, length, errors); - } else { - return PyUnicode_Decode(cstring, length, encoding, errors); - } -} 
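The `__Pyx_decode_c_string()` helper above mirrors Python's slice-then-decode behaviour for a NUL-terminated C string: `strlen()` is only called when a negative index forces it, negative `start`/`stop` are wrapped against the string length, `start` is clamped at zero, and an empty or inverted slice returns the cached empty unicode object. A rough Python-level sketch of those semantics, for illustration only (the function name below is hypothetical, not part of the Cython sources):

```python
def decode_c_slice(cstring: bytes, start: int, stop: int,
                   encoding: str = "utf-8", errors: str = "strict") -> str:
    # Wrap negative indices against the byte length, clamp start at 0,
    # and return an empty string for an empty or inverted slice.
    length = len(cstring)
    if start < 0:
        start = max(start + length, 0)
    if stop < 0:
        stop += length
    if stop <= start:
        return ""
    return cstring[start:stop].decode(encoding, errors)

# Example: decode the last five bytes of a buffer.
assert decode_c_slice(b"hello world", -5, 11) == "world"
```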
- -/////////////// decode_c_bytes.proto /////////////// - -static CYTHON_INLINE PyObject* __Pyx_decode_c_bytes( - const char* cstring, Py_ssize_t length, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); - -/////////////// decode_c_bytes /////////////// -//@requires: decode_c_string_utf16 -//@substitute: naming - -static CYTHON_INLINE PyObject* __Pyx_decode_c_bytes( - const char* cstring, Py_ssize_t length, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - if (unlikely((start < 0) | (stop < 0))) { - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - } - if (stop > length) - stop = length; - if (unlikely(stop <= start)) - return __Pyx_NewRef($empty_unicode); - length = stop - start; - cstring += start; - if (decode_func) { - return decode_func(cstring, length, errors); - } else { - return PyUnicode_Decode(cstring, length, encoding, errors); - } -} - -/////////////// decode_bytes.proto /////////////// -//@requires: decode_c_bytes - -static CYTHON_INLINE PyObject* __Pyx_decode_bytes( - PyObject* string, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - return __Pyx_decode_c_bytes( - PyBytes_AS_STRING(string), PyBytes_GET_SIZE(string), - start, stop, encoding, errors, decode_func); -} - -/////////////// decode_bytearray.proto /////////////// -//@requires: decode_c_bytes - -static CYTHON_INLINE PyObject* __Pyx_decode_bytearray( - PyObject* string, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - return __Pyx_decode_c_bytes( - PyByteArray_AS_STRING(string), PyByteArray_GET_SIZE(string), - start, stop, encoding, errors, decode_func); -} - -/////////////// PyUnicode_Substring.proto /////////////// - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Substring( - PyObject* text, Py_ssize_t start, Py_ssize_t stop); - -/////////////// PyUnicode_Substring /////////////// -//@substitute: naming - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Substring( - PyObject* text, Py_ssize_t start, Py_ssize_t stop) { - Py_ssize_t length; - if (unlikely(__Pyx_PyUnicode_READY(text) == -1)) return NULL; - length = __Pyx_PyUnicode_GET_LENGTH(text); - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - else if (stop > length) - stop = length; - if (stop <= start) - return __Pyx_NewRef($empty_unicode); -#if CYTHON_PEP393_ENABLED - return PyUnicode_FromKindAndData(PyUnicode_KIND(text), - PyUnicode_1BYTE_DATA(text) + start*PyUnicode_KIND(text), stop-start); -#else - return PyUnicode_FromUnicode(PyUnicode_AS_UNICODE(text)+start, stop-start); -#endif -} - - -/////////////// py_unicode_istitle.proto /////////////// - -// Py_UNICODE_ISTITLE() doesn't match unicode.istitle() as the latter -// additionally allows character that comply with Py_UNICODE_ISUPPER() - -#if PY_VERSION_HEX < 0x030200A2 -static CYTHON_INLINE int __Pyx_Py_UNICODE_ISTITLE(Py_UNICODE uchar) -#else -static CYTHON_INLINE int __Pyx_Py_UNICODE_ISTITLE(Py_UCS4 uchar) -#endif -{ - return Py_UNICODE_ISTITLE(uchar) || Py_UNICODE_ISUPPER(uchar); -} - - -/////////////// unicode_tailmatch.proto /////////////// - 
-static int __Pyx_PyUnicode_Tailmatch( - PyObject* s, PyObject* substr, Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/ - -/////////////// unicode_tailmatch /////////////// - -// Python's unicode.startswith() and unicode.endswith() support a -// tuple of prefixes/suffixes, whereas it's much more common to -// test for a single unicode string. - -static int __Pyx_PyUnicode_TailmatchTuple(PyObject* s, PyObject* substrings, - Py_ssize_t start, Py_ssize_t end, int direction) { - Py_ssize_t i, count = PyTuple_GET_SIZE(substrings); - for (i = 0; i < count; i++) { - Py_ssize_t result; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - result = PyUnicode_Tailmatch(s, PyTuple_GET_ITEM(substrings, i), - start, end, direction); -#else - PyObject* sub = PySequence_ITEM(substrings, i); - if (unlikely(!sub)) return -1; - result = PyUnicode_Tailmatch(s, sub, start, end, direction); - Py_DECREF(sub); -#endif - if (result) { - return (int) result; - } - } - return 0; -} - -static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr, - Py_ssize_t start, Py_ssize_t end, int direction) { - if (unlikely(PyTuple_Check(substr))) { - return __Pyx_PyUnicode_TailmatchTuple(s, substr, start, end, direction); - } - return (int) PyUnicode_Tailmatch(s, substr, start, end, direction); -} - - -/////////////// bytes_tailmatch.proto /////////////// - -static int __Pyx_PyBytes_SingleTailmatch(PyObject* self, PyObject* arg, - Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/ -static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr, - Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/ - -/////////////// bytes_tailmatch /////////////// - -static int __Pyx_PyBytes_SingleTailmatch(PyObject* self, PyObject* arg, - Py_ssize_t start, Py_ssize_t end, int direction) { - const char* self_ptr = PyBytes_AS_STRING(self); - Py_ssize_t self_len = PyBytes_GET_SIZE(self); - const char* sub_ptr; - Py_ssize_t sub_len; - int retval; - - Py_buffer view; - view.obj = NULL; - - if ( PyBytes_Check(arg) ) { - sub_ptr = PyBytes_AS_STRING(arg); - sub_len = PyBytes_GET_SIZE(arg); - } -#if PY_MAJOR_VERSION < 3 - // Python 2.x allows mixing unicode and str - else if ( PyUnicode_Check(arg) ) { - return (int) PyUnicode_Tailmatch(self, arg, start, end, direction); - } -#endif - else { - if (unlikely(PyObject_GetBuffer(self, &view, PyBUF_SIMPLE) == -1)) - return -1; - sub_ptr = (const char*) view.buf; - sub_len = view.len; - } - - if (end > self_len) - end = self_len; - else if (end < 0) - end += self_len; - if (end < 0) - end = 0; - if (start < 0) - start += self_len; - if (start < 0) - start = 0; - - if (direction > 0) { - /* endswith */ - if (end-sub_len > start) - start = end - sub_len; - } - - if (start + sub_len <= end) - retval = !memcmp(self_ptr+start, sub_ptr, (size_t)sub_len); - else - retval = 0; - - if (view.obj) - PyBuffer_Release(&view); - - return retval; -} - -static int __Pyx_PyBytes_TailmatchTuple(PyObject* self, PyObject* substrings, - Py_ssize_t start, Py_ssize_t end, int direction) { - Py_ssize_t i, count = PyTuple_GET_SIZE(substrings); - for (i = 0; i < count; i++) { - int result; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - result = __Pyx_PyBytes_SingleTailmatch(self, PyTuple_GET_ITEM(substrings, i), - start, end, direction); -#else - PyObject* sub = PySequence_ITEM(substrings, i); - if (unlikely(!sub)) return -1; - result = __Pyx_PyBytes_SingleTailmatch(self, sub, start, end, direction); - Py_DECREF(sub); -#endif - if (result) { - return result; - } - } 
- return 0; -} - -static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr, - Py_ssize_t start, Py_ssize_t end, int direction) { - if (unlikely(PyTuple_Check(substr))) { - return __Pyx_PyBytes_TailmatchTuple(self, substr, start, end, direction); - } - - return __Pyx_PyBytes_SingleTailmatch(self, substr, start, end, direction); -} - - -/////////////// str_tailmatch.proto /////////////// - -static CYTHON_INLINE int __Pyx_PyStr_Tailmatch(PyObject* self, PyObject* arg, Py_ssize_t start, - Py_ssize_t end, int direction); /*proto*/ - -/////////////// str_tailmatch /////////////// -//@requires: bytes_tailmatch -//@requires: unicode_tailmatch - -static CYTHON_INLINE int __Pyx_PyStr_Tailmatch(PyObject* self, PyObject* arg, Py_ssize_t start, - Py_ssize_t end, int direction) -{ - // We do not use a C compiler macro here to avoid "unused function" - // warnings for the *_Tailmatch() function that is not being used in - // the specific CPython version. The C compiler will generate the same - // code anyway, and will usually just remove the unused function. - if (PY_MAJOR_VERSION < 3) - return __Pyx_PyBytes_Tailmatch(self, arg, start, end, direction); - else - return __Pyx_PyUnicode_Tailmatch(self, arg, start, end, direction); -} - - -/////////////// bytes_index.proto /////////////// - -static CYTHON_INLINE char __Pyx_PyBytes_GetItemInt(PyObject* bytes, Py_ssize_t index, int check_bounds); /*proto*/ - -/////////////// bytes_index /////////////// - -static CYTHON_INLINE char __Pyx_PyBytes_GetItemInt(PyObject* bytes, Py_ssize_t index, int check_bounds) { - if (index < 0) - index += PyBytes_GET_SIZE(bytes); - if (check_bounds) { - Py_ssize_t size = PyBytes_GET_SIZE(bytes); - if (unlikely(!__Pyx_is_valid_index(index, size))) { - PyErr_SetString(PyExc_IndexError, "string index out of range"); - return (char) -1; - } - } - return PyBytes_AS_STRING(bytes)[index]; -} - - -//////////////////// StringJoin.proto //////////////////// - -#if PY_MAJOR_VERSION < 3 -#define __Pyx_PyString_Join __Pyx_PyBytes_Join -#define __Pyx_PyBaseString_Join(s, v) (PyUnicode_CheckExact(s) ? 
PyUnicode_Join(s, v) : __Pyx_PyBytes_Join(s, v)) -#else -#define __Pyx_PyString_Join PyUnicode_Join -#define __Pyx_PyBaseString_Join PyUnicode_Join -#endif - -#if CYTHON_COMPILING_IN_CPYTHON - #if PY_MAJOR_VERSION < 3 - #define __Pyx_PyBytes_Join _PyString_Join - #else - #define __Pyx_PyBytes_Join _PyBytes_Join - #endif -#else -static CYTHON_INLINE PyObject* __Pyx_PyBytes_Join(PyObject* sep, PyObject* values); /*proto*/ -#endif - - -//////////////////// StringJoin //////////////////// - -#if !CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyBytes_Join(PyObject* sep, PyObject* values) { - return PyObject_CallMethodObjArgs(sep, PYIDENT("join"), values, NULL); -} -#endif - - -/////////////// JoinPyUnicode.proto /////////////// - -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - Py_UCS4 max_char); - -/////////////// JoinPyUnicode /////////////// -//@requires: IncludeStringH -//@substitute: naming - -static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength, - CYTHON_UNUSED Py_UCS4 max_char) { -#if CYTHON_USE_UNICODE_INTERNALS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - PyObject *result_uval; - int result_ukind; - Py_ssize_t i, char_pos; - void *result_udata; -#if CYTHON_PEP393_ENABLED - // Py 3.3+ (post PEP-393) - result_uval = PyUnicode_New(result_ulength, max_char); - if (unlikely(!result_uval)) return NULL; - result_ukind = (max_char <= 255) ? PyUnicode_1BYTE_KIND : (max_char <= 65535) ? PyUnicode_2BYTE_KIND : PyUnicode_4BYTE_KIND; - result_udata = PyUnicode_DATA(result_uval); -#else - // Py 2.x/3.2 (pre PEP-393) - result_uval = PyUnicode_FromUnicode(NULL, result_ulength); - if (unlikely(!result_uval)) return NULL; - result_ukind = sizeof(Py_UNICODE); - result_udata = PyUnicode_AS_UNICODE(result_uval); -#endif - - char_pos = 0; - for (i=0; i < value_count; i++) { - int ukind; - Py_ssize_t ulength; - void *udata; - PyObject *uval = PyTuple_GET_ITEM(value_tuple, i); - if (unlikely(__Pyx_PyUnicode_READY(uval))) - goto bad; - ulength = __Pyx_PyUnicode_GET_LENGTH(uval); - if (unlikely(!ulength)) - continue; - if (unlikely(char_pos + ulength < 0)) - goto overflow; - ukind = __Pyx_PyUnicode_KIND(uval); - udata = __Pyx_PyUnicode_DATA(uval); - if (!CYTHON_PEP393_ENABLED || ukind == result_ukind) { - memcpy((char *)result_udata + char_pos * result_ukind, udata, (size_t) (ulength * result_ukind)); - } else { - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030300F0 || defined(_PyUnicode_FastCopyCharacters) - _PyUnicode_FastCopyCharacters(result_uval, char_pos, uval, 0, ulength); - #else - Py_ssize_t j; - for (j=0; j < ulength; j++) { - Py_UCS4 uchar = __Pyx_PyUnicode_READ(ukind, udata, j); - __Pyx_PyUnicode_WRITE(result_ukind, result_udata, char_pos+j, uchar); - } - #endif - } - char_pos += ulength; - } - return result_uval; -overflow: - PyErr_SetString(PyExc_OverflowError, "join() result is too long for a Python string"); -bad: - Py_DECREF(result_uval); - return NULL; -#else - // non-CPython fallback - result_ulength++; - value_count++; - return PyUnicode_Join($empty_unicode, value_tuple); -#endif -} - - -/////////////// BuildPyUnicode.proto /////////////// - -static PyObject* __Pyx_PyUnicode_BuildFromAscii(Py_ssize_t ulength, char* chars, int clength, - int prepend_sign, char padding_char); - -/////////////// BuildPyUnicode /////////////// - -// Create a PyUnicode object from an ASCII char*, e.g. a formatted number. 
- -static PyObject* __Pyx_PyUnicode_BuildFromAscii(Py_ssize_t ulength, char* chars, int clength, - int prepend_sign, char padding_char) { - PyObject *uval; - Py_ssize_t uoffset = ulength - clength; -#if CYTHON_USE_UNICODE_INTERNALS - Py_ssize_t i; -#if CYTHON_PEP393_ENABLED - // Py 3.3+ (post PEP-393) - void *udata; - uval = PyUnicode_New(ulength, 127); - if (unlikely(!uval)) return NULL; - udata = PyUnicode_DATA(uval); -#else - // Py 2.x/3.2 (pre PEP-393) - Py_UNICODE *udata; - uval = PyUnicode_FromUnicode(NULL, ulength); - if (unlikely(!uval)) return NULL; - udata = PyUnicode_AS_UNICODE(uval); -#endif - if (uoffset > 0) { - i = 0; - if (prepend_sign) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, 0, '-'); - i++; - } - for (; i < uoffset; i++) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, i, padding_char); - } - } - for (i=0; i < clength; i++) { - __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, uoffset+i, chars[i]); - } - -#else - // non-CPython - { - PyObject *sign = NULL, *padding = NULL; - uval = NULL; - if (uoffset > 0) { - prepend_sign = !!prepend_sign; - if (uoffset > prepend_sign) { - padding = PyUnicode_FromOrdinal(padding_char); - if (likely(padding) && uoffset > prepend_sign + 1) { - PyObject *tmp; - PyObject *repeat = PyInt_FromSize_t(uoffset - prepend_sign); - if (unlikely(!repeat)) goto done_or_error; - tmp = PyNumber_Multiply(padding, repeat); - Py_DECREF(repeat); - Py_DECREF(padding); - padding = tmp; - } - if (unlikely(!padding)) goto done_or_error; - } - if (prepend_sign) { - sign = PyUnicode_FromOrdinal('-'); - if (unlikely(!sign)) goto done_or_error; - } - } - - uval = PyUnicode_DecodeASCII(chars, clength, NULL); - if (likely(uval) && padding) { - PyObject *tmp = PyNumber_Add(padding, uval); - Py_DECREF(uval); - uval = tmp; - } - if (likely(uval) && sign) { - PyObject *tmp = PyNumber_Add(sign, uval); - Py_DECREF(uval); - uval = tmp; - } -done_or_error: - Py_XDECREF(padding); - Py_XDECREF(sign); - } -#endif - - return uval; -} - - -//////////////////// ByteArrayAppendObject.proto //////////////////// - -static CYTHON_INLINE int __Pyx_PyByteArray_AppendObject(PyObject* bytearray, PyObject* value); - -//////////////////// ByteArrayAppendObject //////////////////// -//@requires: ByteArrayAppend - -static CYTHON_INLINE int __Pyx_PyByteArray_AppendObject(PyObject* bytearray, PyObject* value) { - Py_ssize_t ival; -#if PY_MAJOR_VERSION < 3 - if (unlikely(PyString_Check(value))) { - if (unlikely(PyString_GET_SIZE(value) != 1)) { - PyErr_SetString(PyExc_ValueError, "string must be of size 1"); - return -1; - } - ival = (unsigned char) (PyString_AS_STRING(value)[0]); - } else -#endif -#if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(value)) && likely(Py_SIZE(value) == 1 || Py_SIZE(value) == 0)) { - if (Py_SIZE(value) == 0) { - ival = 0; - } else { - ival = ((PyLongObject*)value)->ob_digit[0]; - if (unlikely(ival > 255)) goto bad_range; - } - } else -#endif - { - // CPython calls PyNumber_Index() internally - ival = __Pyx_PyIndex_AsSsize_t(value); - if (unlikely(!__Pyx_is_valid_index(ival, 256))) { - if (ival == -1 && PyErr_Occurred()) - return -1; - goto bad_range; - } - } - return __Pyx_PyByteArray_Append(bytearray, ival); -bad_range: - PyErr_SetString(PyExc_ValueError, "byte must be in range(0, 256)"); - return -1; -} - -//////////////////// ByteArrayAppend.proto //////////////////// - -static CYTHON_INLINE int __Pyx_PyByteArray_Append(PyObject* bytearray, int value); - -//////////////////// ByteArrayAppend //////////////////// -//@requires: 
ObjectHandling.c::PyObjectCallMethod1 - -static CYTHON_INLINE int __Pyx_PyByteArray_Append(PyObject* bytearray, int value) { - PyObject *pyval, *retval; -#if CYTHON_COMPILING_IN_CPYTHON - if (likely(__Pyx_is_valid_index(value, 256))) { - Py_ssize_t n = Py_SIZE(bytearray); - if (likely(n != PY_SSIZE_T_MAX)) { - if (unlikely(PyByteArray_Resize(bytearray, n + 1) < 0)) - return -1; - PyByteArray_AS_STRING(bytearray)[n] = value; - return 0; - } - } else { - PyErr_SetString(PyExc_ValueError, "byte must be in range(0, 256)"); - return -1; - } -#endif - pyval = PyInt_FromLong(value); - if (unlikely(!pyval)) - return -1; - retval = __Pyx_PyObject_CallMethod1(bytearray, PYIDENT("append"), pyval); - Py_DECREF(pyval); - if (unlikely(!retval)) - return -1; - Py_DECREF(retval); - return 0; -} - - -//////////////////// PyObjectFormat.proto //////////////////// - -#if CYTHON_USE_UNICODE_WRITER -static PyObject* __Pyx_PyObject_Format(PyObject* s, PyObject* f); -#else -#define __Pyx_PyObject_Format(s, f) PyObject_Format(s, f) -#endif - -//////////////////// PyObjectFormat //////////////////// - -#if CYTHON_USE_UNICODE_WRITER -static PyObject* __Pyx_PyObject_Format(PyObject* obj, PyObject* format_spec) { - int ret; - _PyUnicodeWriter writer; - - if (likely(PyFloat_CheckExact(obj))) { - // copied from CPython 3.5 "float__format__()" in floatobject.c -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x03040000 - _PyUnicodeWriter_Init(&writer, 0); -#else - _PyUnicodeWriter_Init(&writer); -#endif - ret = _PyFloat_FormatAdvancedWriter( - &writer, - obj, - format_spec, 0, PyUnicode_GET_LENGTH(format_spec)); - } else if (likely(PyLong_CheckExact(obj))) { - // copied from CPython 3.5 "long__format__()" in longobject.c -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x03040000 - _PyUnicodeWriter_Init(&writer, 0); -#else - _PyUnicodeWriter_Init(&writer); -#endif - ret = _PyLong_FormatAdvancedWriter( - &writer, - obj, - format_spec, 0, PyUnicode_GET_LENGTH(format_spec)); - } else { - return PyObject_Format(obj, format_spec); - } - - if (unlikely(ret == -1)) { - _PyUnicodeWriter_Dealloc(&writer); - return NULL; - } - return _PyUnicodeWriter_Finish(&writer); -} -#endif - - -//////////////////// PyObjectFormatSimple.proto //////////////////// - -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_FormatSimple(s, f) ( \ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) : \ - PyObject_Format(s, f)) -#elif PY_MAJOR_VERSION < 3 - // str is common in Py2, but formatting must return a Unicode string - #define __Pyx_PyObject_FormatSimple(s, f) ( \ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) : \ - likely(PyString_CheckExact(s)) ? PyUnicode_FromEncodedObject(s, NULL, "strict") : \ - PyObject_Format(s, f)) -#elif CYTHON_USE_TYPE_SLOTS - // Py3 nicely returns unicode strings from str() which makes this quite efficient for builtin types - #define __Pyx_PyObject_FormatSimple(s, f) ( \ - likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) : \ - likely(PyLong_CheckExact(s)) ? PyLong_Type.tp_str(s) : \ - likely(PyFloat_CheckExact(s)) ? PyFloat_Type.tp_str(s) : \ - PyObject_Format(s, f)) -#else - #define __Pyx_PyObject_FormatSimple(s, f) ( \ - likely(PyUnicode_CheckExact(s)) ? 
(Py_INCREF(s), s) : \ - PyObject_Format(s, f)) -#endif - - -//////////////////// PyObjectFormatAndDecref.proto //////////////////// - -static CYTHON_INLINE PyObject* __Pyx_PyObject_FormatSimpleAndDecref(PyObject* s, PyObject* f); -static CYTHON_INLINE PyObject* __Pyx_PyObject_FormatAndDecref(PyObject* s, PyObject* f); - -//////////////////// PyObjectFormatAndDecref //////////////////// - -static CYTHON_INLINE PyObject* __Pyx_PyObject_FormatSimpleAndDecref(PyObject* s, PyObject* f) { - if (unlikely(!s)) return NULL; - if (likely(PyUnicode_CheckExact(s))) return s; - #if PY_MAJOR_VERSION < 3 - // str is common in Py2, but formatting must return a Unicode string - if (likely(PyString_CheckExact(s))) { - PyObject *result = PyUnicode_FromEncodedObject(s, NULL, "strict"); - Py_DECREF(s); - return result; - } - #endif - return __Pyx_PyObject_FormatAndDecref(s, f); -} - -static CYTHON_INLINE PyObject* __Pyx_PyObject_FormatAndDecref(PyObject* s, PyObject* f) { - PyObject *result = PyObject_Format(s, f); - Py_DECREF(s); - return result; -} - - -//////////////////// PyUnicode_Unicode.proto //////////////////// - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Unicode(PyObject *obj);/*proto*/ - -//////////////////// PyUnicode_Unicode //////////////////// - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Unicode(PyObject *obj) { - if (unlikely(obj == Py_None)) - obj = PYUNICODE("None"); - return __Pyx_NewRef(obj); -} - - -//////////////////// PyObject_Unicode.proto //////////////////// - -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyObject_Unicode(obj) \ - (likely(PyUnicode_CheckExact(obj)) ? __Pyx_NewRef(obj) : PyObject_Str(obj)) -#else -#define __Pyx_PyObject_Unicode(obj) \ - (likely(PyUnicode_CheckExact(obj)) ? __Pyx_NewRef(obj) : PyObject_Unicode(obj)) -#endif diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_with_highlighted_segment.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_with_highlighted_segment.py deleted file mode 100644 index 3326ac500cbc3fb309c714b06db37eefd7ae0cdb..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_with_highlighted_segment.py +++ /dev/null @@ -1,30 +0,0 @@ -""" -Bar Chart with Highlighted Segment ----------------------------------- -This example shows a bar chart that highlights values beyond a threshold. -""" -import altair as alt -import pandas as pd -from vega_datasets import data - -source = data.wheat() -threshold = pd.DataFrame([{"threshold": 90}]) - -bars = alt.Chart(source).mark_bar().encode( - x="year:O", - y="wheat:Q", -) - -highlight = alt.Chart(source).mark_bar(color="#e45755").encode( - x='year:O', - y='baseline:Q', - y2='wheat:Q' -).transform_filter( - alt.datum.wheat > 90 -).transform_calculate("baseline", "90") - -rule = alt.Chart(threshold).mark_rule().encode( - y='threshold:Q' -) - -(bars + highlight + rule).properties(width=600) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/ParseTreeMatch.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/ParseTreeMatch.py deleted file mode 100644 index bbda73e8f29f2636d9cad1351c3ac26d18f46d1c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/ParseTreeMatch.py +++ /dev/null @@ -1,118 +0,0 @@ -# -# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved. 
-# Use of this file is governed by the BSD 3-clause license that -# can be found in the LICENSE.txt file in the project root. -# - - -# -# Represents the result of matching a {@link ParseTree} against a tree pattern. -# -from io import StringIO -from antlr4.tree.ParseTreePattern import ParseTreePattern -from antlr4.tree.Tree import ParseTree - - -class ParseTreeMatch(object): - - # - # Constructs a new instance of {@link ParseTreeMatch} from the specified - # parse tree and pattern. - # - # @param tree The parse tree to match against the pattern. - # @param pattern The parse tree pattern. - # @param labels A mapping from label names to collections of - # {@link ParseTree} objects located by the tree pattern matching process. - # @param mismatchedNode The first node which failed to match the tree - # pattern during the matching process. - # - # @exception IllegalArgumentException if {@code tree} is {@code null} - # @exception IllegalArgumentException if {@code pattern} is {@code null} - # @exception IllegalArgumentException if {@code labels} is {@code null} - # - def __init__(self, tree:ParseTree, pattern:ParseTreePattern, labels:dict, mismatchedNode:ParseTree): - if tree is None: - raise Exception("tree cannot be null") - if pattern is None: - raise Exception("pattern cannot be null") - if labels is None: - raise Exception("labels cannot be null") - self.tree = tree - self.pattern = pattern - self.labels = labels - self.mismatchedNode = mismatchedNode - - # - # Get the last node associated with a specific {@code label}. - # - #

<p>For example, for pattern {@code <id:ID>}, {@code get("id")} returns the - # node matched for that {@code ID}. If more than one node - # matched the specified label, only the last is returned. If there is - # no node associated with the label, this returns {@code null}.</p> - # - # <p>Pattern tags like {@code <ID>} and {@code <expr>} without labels are - # considered to be labeled with {@code ID} and {@code expr}, respectively.</p> - # - # @param label The label to check. - # - # @return The last {@link ParseTree} to match a tag with the specified - # label, or {@code null} if no parse tree matched a tag with the label. - # - def get(self, label:str): - parseTrees = self.labels.get(label, None) - if parseTrees is None or len(parseTrees)==0: - return None - else: - return parseTrees[len(parseTrees)-1] - - # - # Return all nodes matching a rule or token tag with the specified label. - # - # <p>If the {@code label} is the name of a parser rule or token in the - # grammar, the resulting list will contain both the parse trees matching - # rule or tags explicitly labeled with the label and the complete set of - # parse trees matching the labeled and unlabeled tags in the pattern for - # the parser rule or token. For example, if {@code label} is {@code "foo"}, - # the result will contain all of the following.</p> - # - # <ul> - # <li>Parse tree nodes matching tags of the form {@code <foo:anyRuleName>} and - # {@code <foo:AnyTokenName>}.</li> - # <li>Parse tree nodes matching tags of the form {@code <anyLabel:foo>}.</li> - # <li>Parse tree nodes matching tags of the form {@code <foo>}.</li> - # </ul>
    - # - # @param label The label. - # - # @return A collection of all {@link ParseTree} nodes matching tags with - # the specified {@code label}. If no nodes matched the label, an empty list - # is returned. - # - def getAll(self, label:str): - nodes = self.labels.get(label, None) - if nodes is None: - return list() - else: - return nodes - - - # - # Gets a value indicating whether the match operation succeeded. - # - # @return {@code true} if the match operation succeeded; otherwise, - # {@code false}. - # - def succeeded(self): - return self.mismatchedNode is None - - # - # {@inheritDoc} - # - def __str__(self): - with StringIO() as buf: - buf.write("Match ") - buf.write("succeeded" if self.succeeded() else "failed") - buf.write("; found ") - buf.write(str(len(self.labels))) - buf.write(" labels") - return buf.getvalue() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/func.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/func.py deleted file mode 100644 index 0c09a60b4951019966a4c607ca2128ebee35c72a..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/func.py +++ /dev/null @@ -1,117 +0,0 @@ -"""`functools.lru_cache` compatible memoizing function decorators.""" - -__all__ = ("fifo_cache", "lfu_cache", "lru_cache", "mru_cache", "rr_cache", "ttl_cache") - -import math -import random -import time - -try: - from threading import RLock -except ImportError: # pragma: no cover - from dummy_threading import RLock - -from . import FIFOCache, LFUCache, LRUCache, MRUCache, RRCache, TTLCache -from . import cached -from . import keys - - -class _UnboundTTLCache(TTLCache): - def __init__(self, ttl, timer): - TTLCache.__init__(self, math.inf, ttl, timer) - - @property - def maxsize(self): - return None - - -def _cache(cache, maxsize, typed): - def decorator(func): - key = keys.typedkey if typed else keys.hashkey - wrapper = cached(cache=cache, key=key, lock=RLock(), info=True)(func) - wrapper.cache_parameters = lambda: {"maxsize": maxsize, "typed": typed} - return wrapper - - return decorator - - -def fifo_cache(maxsize=128, typed=False): - """Decorator to wrap a function with a memoizing callable that saves - up to `maxsize` results based on a First In First Out (FIFO) - algorithm. - - """ - if maxsize is None: - return _cache({}, None, typed) - elif callable(maxsize): - return _cache(FIFOCache(128), 128, typed)(maxsize) - else: - return _cache(FIFOCache(maxsize), maxsize, typed) - - -def lfu_cache(maxsize=128, typed=False): - """Decorator to wrap a function with a memoizing callable that saves - up to `maxsize` results based on a Least Frequently Used (LFU) - algorithm. - - """ - if maxsize is None: - return _cache({}, None, typed) - elif callable(maxsize): - return _cache(LFUCache(128), 128, typed)(maxsize) - else: - return _cache(LFUCache(maxsize), maxsize, typed) - - -def lru_cache(maxsize=128, typed=False): - """Decorator to wrap a function with a memoizing callable that saves - up to `maxsize` results based on a Least Recently Used (LRU) - algorithm. - - """ - if maxsize is None: - return _cache({}, None, typed) - elif callable(maxsize): - return _cache(LRUCache(128), 128, typed)(maxsize) - else: - return _cache(LRUCache(maxsize), maxsize, typed) - - -def mru_cache(maxsize=128, typed=False): - """Decorator to wrap a function with a memoizing callable that saves - up to `maxsize` results based on a Most Recently Used (MRU) - algorithm. 
- """ - if maxsize is None: - return _cache({}, None, typed) - elif callable(maxsize): - return _cache(MRUCache(128), 128, typed)(maxsize) - else: - return _cache(MRUCache(maxsize), maxsize, typed) - - -def rr_cache(maxsize=128, choice=random.choice, typed=False): - """Decorator to wrap a function with a memoizing callable that saves - up to `maxsize` results based on a Random Replacement (RR) - algorithm. - - """ - if maxsize is None: - return _cache({}, None, typed) - elif callable(maxsize): - return _cache(RRCache(128, choice), 128, typed)(maxsize) - else: - return _cache(RRCache(maxsize, choice), maxsize, typed) - - -def ttl_cache(maxsize=128, ttl=600, timer=time.monotonic, typed=False): - """Decorator to wrap a function with a memoizing callable that saves - up to `maxsize` results based on a Least Recently Used (LRU) - algorithm with a per-item time-to-live (TTL) value. - """ - if maxsize is None: - return _cache(_UnboundTTLCache(ttl, timer), None, typed) - elif callable(maxsize): - return _cache(TTLCache(128, ttl, timer), 128, typed)(maxsize) - else: - return _cache(TTLCache(maxsize, ttl, timer), maxsize, typed) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/ffiplatform.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/ffiplatform.py deleted file mode 100644 index 85313460a69477513c8e00f4df430925f2c4ecc9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/ffiplatform.py +++ /dev/null @@ -1,127 +0,0 @@ -import sys, os -from .error import VerificationError - - -LIST_OF_FILE_NAMES = ['sources', 'include_dirs', 'library_dirs', - 'extra_objects', 'depends'] - -def get_extension(srcfilename, modname, sources=(), **kwds): - _hack_at_distutils() - from distutils.core import Extension - allsources = [srcfilename] - for src in sources: - allsources.append(os.path.normpath(src)) - return Extension(name=modname, sources=allsources, **kwds) - -def compile(tmpdir, ext, compiler_verbose=0, debug=None): - """Compile a C extension module using distutils.""" - - _hack_at_distutils() - saved_environ = os.environ.copy() - try: - outputfilename = _build(tmpdir, ext, compiler_verbose, debug) - outputfilename = os.path.abspath(outputfilename) - finally: - # workaround for a distutils bugs where some env vars can - # become longer and longer every time it is used - for key, value in saved_environ.items(): - if os.environ.get(key) != value: - os.environ[key] = value - return outputfilename - -def _build(tmpdir, ext, compiler_verbose=0, debug=None): - # XXX compact but horrible :-( - from distutils.core import Distribution - import distutils.errors, distutils.log - # - dist = Distribution({'ext_modules': [ext]}) - dist.parse_config_files() - options = dist.get_option_dict('build_ext') - if debug is None: - debug = sys.flags.debug - options['debug'] = ('ffiplatform', debug) - options['force'] = ('ffiplatform', True) - options['build_lib'] = ('ffiplatform', tmpdir) - options['build_temp'] = ('ffiplatform', tmpdir) - # - try: - old_level = distutils.log.set_threshold(0) or 0 - try: - distutils.log.set_verbosity(compiler_verbose) - dist.run_command('build_ext') - cmd_obj = dist.get_command_obj('build_ext') - [soname] = cmd_obj.get_outputs() - finally: - distutils.log.set_threshold(old_level) - except (distutils.errors.CompileError, - distutils.errors.LinkError) as e: - raise VerificationError('%s: %s' % (e.__class__.__name__, e)) - # - return soname - -try: - from os.path import samefile -except ImportError: - 
def samefile(f1, f2): - return os.path.abspath(f1) == os.path.abspath(f2) - -def maybe_relative_path(path): - if not os.path.isabs(path): - return path # already relative - dir = path - names = [] - while True: - prevdir = dir - dir, name = os.path.split(prevdir) - if dir == prevdir or not dir: - return path # failed to make it relative - names.append(name) - try: - if samefile(dir, os.curdir): - names.reverse() - return os.path.join(*names) - except OSError: - pass - -# ____________________________________________________________ - -try: - int_or_long = (int, long) - import cStringIO -except NameError: - int_or_long = int # Python 3 - import io as cStringIO - -def _flatten(x, f): - if isinstance(x, str): - f.write('%ds%s' % (len(x), x)) - elif isinstance(x, dict): - keys = sorted(x.keys()) - f.write('%dd' % len(keys)) - for key in keys: - _flatten(key, f) - _flatten(x[key], f) - elif isinstance(x, (list, tuple)): - f.write('%dl' % len(x)) - for value in x: - _flatten(value, f) - elif isinstance(x, int_or_long): - f.write('%di' % (x,)) - else: - raise TypeError( - "the keywords to verify() contains unsupported object %r" % (x,)) - -def flatten(x): - f = cStringIO.StringIO() - _flatten(x, f) - return f.getvalue() - -def _hack_at_distutils(): - # Windows-only workaround for some configurations: see - # https://bugs.python.org/issue23246 (Python 2.7 with - # a specific MS compiler suite download) - if sys.platform == "win32": - try: - import setuptools # for side-effects, patches distutils - except ImportError: - pass diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/byte_utils.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/byte_utils.py deleted file mode 100644 index a305c080926c2d094b7e8ae48f5331da82025a75..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/byte_utils.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
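Stepping back to cffi's `ffiplatform.flatten()` above: it turns the keyword arguments given to `verify()` into a canonical, order-independent string (dict keys are sorted; strings are written as `<len>s<text>`, dicts as `<nkeys>d`, lists and tuples as `<len>l`, integers as `<value>i`), so the result can serve as a stable cache key. A small usage illustration, assuming the module is importable as `cffi.ffiplatform`:

```python
from cffi.ffiplatform import flatten

# '1d' = dict with one key, '9slibraries' = the 9-character key "libraries",
# '1l' = list with one element, '1sm' = the 1-character string "m".
print(flatten({'libraries': ['m']}))                  # -> 1d9slibraries1l1sm
print(flatten({'sources': [], 'libraries': ['m']}))   # -> 2d9slibraries1l1sm7ssources0l
```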
- -import re - - -WHITESPACE_NORMALIZER = re.compile(r"\s+") -SPACE = chr(32) -SPACE_ESCAPE = chr(9601) -# excluding non-breaking space (160) here -PRINTABLE_LATIN = set( - list(range(32, 126 + 1)) + list(range(161, 172 + 1)) + list(range(174, 255 + 1)) -) -BYTE_TO_BCHAR = { - b: chr(b) if b in PRINTABLE_LATIN else chr(256 + b) for b in range(256) -} -BCHAR_TO_BYTE = {bc: b for b, bc in BYTE_TO_BCHAR.items()} - - -def byte_encode(x: str) -> str: - normalized = WHITESPACE_NORMALIZER.sub(SPACE, x) - return "".join([BYTE_TO_BCHAR[b] for b in normalized.encode("utf-8")]) - - -def byte_decode(x: str) -> str: - try: - return bytes([BCHAR_TO_BYTE[bc] for bc in x]).decode("utf-8") - except ValueError: - return "" - - -def smart_byte_decode(x: str) -> str: - output = byte_decode(x) - if output == "": - # DP the best recovery (max valid chars) if it's broken - n_bytes = len(x) - f = [0 for _ in range(n_bytes + 1)] - pt = [0 for _ in range(n_bytes + 1)] - for i in range(1, n_bytes + 1): - f[i], pt[i] = f[i - 1], i - 1 - for j in range(1, min(4, i) + 1): - if f[i - j] + 1 > f[i] and len(byte_decode(x[i - j : i])) > 0: - f[i], pt[i] = f[i - j] + 1, i - j - cur_pt = n_bytes - while cur_pt > 0: - if f[cur_pt] == f[pt[cur_pt]] + 1: - output = byte_decode(x[pt[cur_pt] : cur_pt]) + output - cur_pt = pt[cur_pt] - return output diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/__init__.py deleted file mode 100644 index 9b61fafadba28f65fe78a28b2099368b83cfcf41..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .huffman_coder import HuffmanCodeBuilder, HuffmanCoder -from .huffman_mmap_indexed_dataset import ( - HuffmanMMapIndex, - HuffmanMMapIndexedDataset, - HuffmanMMapIndexedDatasetBuilder, - vocab_file_path, -) - -__all__ = [ - "HuffmanCoder", - "HuffmanCodeBuilder", - "HuffmanMMapIndexedDatasetBuilder", - "HuffmanMMapIndexedDataset", - "HuffmanMMapIndex", - "vocab_file_path", -] diff --git a/spaces/aryadytm/chatmagic-ai/main.py b/spaces/aryadytm/chatmagic-ai/main.py deleted file mode 100644 index 101f5a43890b2c65a2efa10b298340ee86477d95..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/chatmagic-ai/main.py +++ /dev/null @@ -1,72 +0,0 @@ -from PIL import Image - -import gradio as gr -import random -import time -import os -import requests - - -CHATMAGIC_AI = os.environ["CHATMAGIC_AI"] - -markdown_text = """ -![ChatMagic AI](https://i.ibb.co/3szrgL8/chatmagic-ai.png) -ChatMagic AI is available as an Android app for FREE. Download now to chat faster and better! -- Google Play Store URL: **[CLICK HERE](https://bit.ly/googleplaystore-chatmagicai)** -- Discord URL: **[CLICK HERE](https://bit.ly/discord-chatmagicai)** -- Don't forget to **like** this space :) -""" - -welcome_text = """ -Hello! I'm ChatMagic AI. I'm here to assist you. I can do the following: -1. Answer questions and give explanations -2. Assist in writing a text based content -3. Follow simple instructions - -However, I still have limitations. I may write incorrect information or produce harmful instructions. Please use me with caution. 
-""".strip() - - -empty_history = [[None, welcome_text]] - - -with gr.Blocks() as demo: - gr.Markdown(markdown_text) - - chatbot = gr.Chatbot(empty_history, label="Chat with ChatMagic AI") - msg = gr.Textbox(label="Enter your question here") - - with gr.Row() as row: - btn_ask = gr.Button("Ask", variant="primary") - btn_clear = gr.Button("Clear") - - def user(user_message: str, history: list) -> tuple[str, list]: - return "", history + [[user_message, None]] - - def bot(history: list): - bot_message = "An error has occured. Please try again." - - try: - bot_message = requests.post(CHATMAGIC_AI, json={"question": history[-1][0]}).json()["answer"] - except Exception as e: - pass - - history[-1][1] = bot_message - return history - - msg.submit( - fn=user, inputs=[msg, chatbot], outputs=[msg, chatbot], queue=True).then( - fn=bot, inputs=chatbot, outputs=chatbot - ) - - btn_ask.click( - fn=user, inputs=[msg, chatbot], outputs=[msg, chatbot], queue=True).then( - fn=bot, inputs=chatbot, outputs=chatbot - ) - - btn_clear.click( - fn=lambda: empty_history, inputs=None, outputs=chatbot, queue=False) - - -demo.queue(concurrency_count=1) -demo.launch(server_name="0.0.0.0") \ No newline at end of file diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/app.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/app.py deleted file mode 100644 index f60fd3e8acba47d269b834f01b4f918def227119..0000000000000000000000000000000000000000 --- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/app.py +++ /dev/null @@ -1,81 +0,0 @@ -import io -import os - -# os.system("wget -P cvec/ https://huggingface.co/spaces/innnky/nanami/resolve/main/checkpoint_best_legacy_500.pt") -import gradio as gr -import librosa -import numpy as np -import soundfile -from inference.infer_tool import Svc -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -config_path = "configs/config.json" - -model = Svc("logs/44k/G_90400.pth", "configs/config.json", cluster_model_path="logs/44k/kmeans_10000.pt") - - - -def vc_fn(sid, input_audio, vc_transform, auto_f0,cluster_ratio, slice_db, noise_scale): - if input_audio is None: - return "没有上传待处理的音频哦", None - sampling_rate, audio = input_audio - # print(audio.shape,sampling_rate) - duration = audio.shape[0] / sampling_rate - if duration > 10000000000000000000: - return "请上传小于100s的音频,需要转换长音频请本地进行转换", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - print(audio.shape) - out_wav_path = "temp.wav" - soundfile.write(out_wav_path, audio, 16000, format="wav") - print( cluster_ratio, auto_f0, noise_scale) - _audio = model.slice_inference(out_wav_path, sid, vc_transform, slice_db, cluster_ratio, auto_f0, noise_scale) - return "转换完成", (44100, _audio) - - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("一个窗口awa"): - gr.Markdown(value=""" - 香风智乃sovits4.0 在线demo 小孩子不懂事做着玩的 - - 备注: - - 1. 上传音频必须为`.mp3`或者`.wav`格式 `单声道` `44100采样率`。 - 2. 音频文件应`小于100s`转换大于100s可以在AU/AudioLab中切片逐一上传。 - 3. 使用男性音频可以考虑使用 升降调+4或+6/开启f0预测,使用女性音频可以不做调整。 - 4. 在线版服务器为2核16G免费版,转换效率较慢请耐心等待。 - 5. 使用该模型请标注作者 **模型训练/数据集:INT16** - 6. 
语音模型转换出的音频请勿用于商业化,若有侵犯您的权利,请联系**leenight2016@outlook.com** - - - 模型作者b站@INT16 关注喵https://space.bilibili.com/133434728 - - Modified/Kangluted by LeeNight in 23.4.9 - """) - spks = list(model.spk2id.keys()) - sid = gr.Dropdown(label="音色", choices=spks, value=spks[0]) - vc_input3 = gr.Audio(label="上传音频(长度小于100秒)") - vc_transform = gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - cluster_ratio = gr.Number(label="聚类模型混合比例,0-1之间,默认为0不启用聚类,能提升音色相似度,但会导致咬字下降(如果使用建议0.5左右)", value=0) - auto_f0 = gr.Checkbox(label="自动f0预测,配合聚类模型f0预测效果更好,会导致变调功能失效(仅限转换语音,歌声不要勾选此项会究极跑调)", value=False) - slice_db = gr.Number(label="切片阈值", value=-40) - noise_scale = gr.Number(label="noise_scale 建议不要动,会影响音质,玄学参数", value=0.4) - vc_submit = gr.Button("转换", variant="primary") - vc_output1 = gr.Textbox(label="输出结果") - vc_output2 = gr.Audio(label="输出音频") - vc_submit.click(vc_fn, [sid, vc_input3, vc_transform,auto_f0,cluster_ratio, slice_db, noise_scale], [vc_output1, vc_output2]) - - app.launch() - - - diff --git a/spaces/ashercn97/AsherTesting/extensions/openai/errors.py b/spaces/ashercn97/AsherTesting/extensions/openai/errors.py deleted file mode 100644 index ff519c4fcf8a43a4007ec3e54f64fdd88d5e6a4c..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/extensions/openai/errors.py +++ /dev/null @@ -1,31 +0,0 @@ -class OpenAIError(Exception): - def __init__(self, message=None, code=500, internal_message=''): - self.message = message - self.code = code - self.internal_message = internal_message - - def __repr__(self): - return "%s(message=%r, code=%d)" % ( - self.__class__.__name__, - self.message, - self.code, - ) - - -class InvalidRequestError(OpenAIError): - def __init__(self, message, param, code=400, error_type='InvalidRequestError', internal_message=''): - super(OpenAIError, self).__init__(message, code, error_type, internal_message) - self.param = param - - def __repr__(self): - return "%s(message=%r, code=%d, param=%s)" % ( - self.__class__.__name__, - self.message, - self.code, - self.param, - ) - - -class ServiceUnavailableError(OpenAIError): - def __init__(self, message=None, code=500, error_type='ServiceUnavailableError', internal_message=''): - super(OpenAIError, self).__init__(message, code, error_type, internal_message) diff --git a/spaces/ashercn97/AsherTesting/modules/exllama.py b/spaces/ashercn97/AsherTesting/modules/exllama.py deleted file mode 100644 index ecfb10a46017061e2dda13d5868c96661ea13693..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/modules/exllama.py +++ /dev/null @@ -1,125 +0,0 @@ -from pathlib import Path - -from torch import version as torch_version - -from modules import shared -from modules.logging_colors import logger -from modules.text_generation import get_max_prompt_length - -try: - from exllama.generator import ExLlamaGenerator - from exllama.model import ExLlama, ExLlamaCache, ExLlamaConfig - from exllama.tokenizer import ExLlamaTokenizer -except: - logger.warning('Exllama module failed to load. Will attempt to load from repositories.') - try: - from modules.relative_imports import RelativeImport - - with RelativeImport("repositories/exllama"): - from generator import ExLlamaGenerator - from model import ExLlama, ExLlamaCache, ExLlamaConfig - from tokenizer import ExLlamaTokenizer - except: - logger.error("Could not find repositories/exllama/. 
Make sure that exllama is cloned inside repositories/ and is up to date.") - raise - - -class ExllamaModel: - def __init__(self): - pass - - @classmethod - def from_pretrained(self, path_to_model): - - path_to_model = Path(f'{shared.args.model_dir}') / Path(path_to_model) - tokenizer_model_path = path_to_model / "tokenizer.model" - model_config_path = path_to_model / "config.json" - - # Find the model checkpoint - model_path = None - for ext in ['.safetensors', '.pt', '.bin']: - found = list(path_to_model.glob(f"*{ext}")) - if len(found) > 0: - if len(found) > 1: - logger.warning(f'More than one {ext} model has been found. The last one will be selected. It could be wrong.') - - model_path = found[-1] - break - - config = ExLlamaConfig(str(model_config_path)) - config.model_path = str(model_path) - config.max_seq_len = shared.args.max_seq_len - config.compress_pos_emb = shared.args.compress_pos_emb - if shared.args.gpu_split: - config.set_auto_map(shared.args.gpu_split) - config.gpu_peer_fix = True - - if shared.args.alpha_value: - config.alpha_value = shared.args.alpha_value - config.calculate_rotary_embedding_base() - - if torch_version.hip: - config.rmsnorm_no_half2 = True - config.rope_no_half2 = True - config.matmul_no_half2 = True - config.silu_no_half2 = True - - model = ExLlama(config) - tokenizer = ExLlamaTokenizer(str(tokenizer_model_path)) - cache = ExLlamaCache(model) - generator = ExLlamaGenerator(model, tokenizer, cache) - - result = self() - result.config = config - result.model = model - result.cache = cache - result.tokenizer = tokenizer - result.generator = generator - return result, result - - def generate_with_streaming(self, prompt, state): - self.generator.settings.temperature = state['temperature'] - self.generator.settings.top_p = state['top_p'] - self.generator.settings.top_k = state['top_k'] - self.generator.settings.typical = state['typical_p'] - self.generator.settings.token_repetition_penalty_max = state['repetition_penalty'] - self.generator.settings.token_repetition_penalty_sustain = -1 if state['repetition_penalty_range'] <= 0 else state['repetition_penalty_range'] - if state['ban_eos_token']: - self.generator.disallow_tokens([self.tokenizer.eos_token_id]) - else: - self.generator.disallow_tokens(None) - - self.generator.end_beam_search() - - # Tokenizing the input - ids = self.generator.tokenizer.encode(prompt) - ids = ids[:, -get_max_prompt_length(state):] - - self.generator.gen_begin_reuse(ids) - initial_len = self.generator.sequence[0].shape[0] - has_leading_space = False - for i in range(state['max_new_tokens']): - token = self.generator.gen_single_token() - if i == 0 and self.generator.tokenizer.tokenizer.IdToPiece(int(token)).startswith('▁'): - has_leading_space = True - - decoded_text = self.generator.tokenizer.decode(self.generator.sequence[0][initial_len:]) - if has_leading_space: - decoded_text = ' ' + decoded_text - - yield decoded_text - if token.item() == self.generator.tokenizer.eos_token_id or shared.stop_everything: - break - - def generate(self, prompt, state): - output = '' - for output in self.generate_with_streaming(prompt, state): - pass - - return output - - def encode(self, string, **kwargs): - return self.tokenizer.encode(string) - - def decode(self, string, **kwargs): - return self.tokenizer.decode(string)[0] diff --git a/spaces/ashishraics/MCQ-Generator/README.md b/spaces/ashishraics/MCQ-Generator/README.md deleted file mode 100644 index 23c8c908277135509d8e64153f7d509313854946..0000000000000000000000000000000000000000 --- 
a/spaces/ashishraics/MCQ-Generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MCQ Generator -emoji: 👁 -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/atimughal662/InfoFusion/src/gradio_runner.py b/spaces/atimughal662/InfoFusion/src/gradio_runner.py deleted file mode 100644 index fc62418c977d1ce3b54e63547a667203745a2554..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/src/gradio_runner.py +++ /dev/null @@ -1,4601 +0,0 @@ -import ast -import copy -import functools -import inspect -import itertools -import json -import os -import pprint -import random -import shutil -import sys -import time -import traceback -import uuid -import filelock -import numpy as np -import pandas as pd -import requests -from iterators import TimeoutIterator - -from gradio_utils.css import get_css -from gradio_utils.prompt_form import make_chatbots -from src.db_utils import set_userid, get_username_direct - -# This is a hack to prevent Gradio from phoning home when it gets imported -os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False' - - -def my_get(url, **kwargs): - print('Gradio HTTP request redirected to localhost :)', flush=True) - kwargs.setdefault('allow_redirects', True) - return requests.api.request('get', 'http://127.0.0.1/', **kwargs) - - -original_get = requests.get -requests.get = my_get -import gradio as gr - -requests.get = original_get - - -def fix_pydantic_duplicate_validators_error(): - try: - from pydantic import class_validators - - class_validators.in_ipython = lambda: True # type: ignore[attr-defined] - except ImportError: - pass - - -fix_pydantic_duplicate_validators_error() - -from enums import DocumentSubset, no_model_str, no_lora_str, no_server_str, LangChainAction, LangChainMode, \ - DocumentChoice, langchain_modes_intrinsic, LangChainTypes, langchain_modes_non_db, gr_to_lg, invalid_key_msg, \ - LangChainAgent, docs_ordering_types -from gradio_themes import H2oTheme, SoftTheme, get_h2o_title, get_simple_title, \ - get_dark_js, get_heap_js, wrap_js_to_lambda, \ - spacing_xsm, radius_xsm, text_xsm -from prompter import prompt_type_to_model_name, prompt_types_strings, inv_prompt_type_to_model_lower, non_hf_types, \ - get_prompt -from utils import flatten_list, zip_data, s3up, clear_torch_cache, get_torch_allocated, system_info_print, \ - ping, makedirs, get_kwargs, system_info, ping_gpu, get_url, get_local_ip, \ - save_generate_output, url_alive, remove, dict_to_html, text_to_html, lg_to_gr, str_to_dict, have_serpapi -from gen import get_model, languages_covered, evaluate, score_qa, inputs_kwargs_list, \ - get_max_max_new_tokens, get_minmax_top_k_docs, history_to_context, langchain_actions, langchain_agents_list, \ - evaluate_fake, merge_chat_conversation_history -from evaluate_params import eval_func_param_names, no_default_param_names, eval_func_param_names_defaults, \ - input_args_list, key_overrides - -from apscheduler.schedulers.background import BackgroundScheduler - - -def fix_text_for_gradio(text, fix_new_lines=False, fix_latex_dollars=True): - if fix_latex_dollars: - ts = text.split('```') - for parti, part in enumerate(ts): - inside = parti % 2 == 1 - if not inside: - ts[parti] = ts[parti].replace('$', '﹩') - text = '```'.join(ts) - - if fix_new_lines: - # let Gradio handle code, since got improved recently - ## FIXME: below conflicts with Gradio, but need to see if can handle 
multiple \n\n\n etc. properly as is. - # ensure good visually, else markdown ignores multiple \n - # handle code blocks - ts = text.split('```') - for parti, part in enumerate(ts): - inside = parti % 2 == 1 - if not inside: - ts[parti] = ts[parti].replace('\n', '
    ') - text = '```'.join(ts) - return text - - -def is_valid_key(enforce_h2ogpt_api_key, h2ogpt_api_keys, h2ogpt_key1, requests_state1=None): - valid_key = False - if not enforce_h2ogpt_api_key: - # no token barrier - valid_key = 'not enforced' - else: - if isinstance(h2ogpt_api_keys, list) and h2ogpt_key1 in h2ogpt_api_keys: - # passed token barrier - valid_key = True - elif isinstance(h2ogpt_api_keys, str) and os.path.isfile(h2ogpt_api_keys): - with filelock.FileLock(h2ogpt_api_keys + '.lock'): - with open(h2ogpt_api_keys, 'rt') as f: - h2ogpt_api_keys = json.load(f) - if h2ogpt_key1 in h2ogpt_api_keys: - valid_key = True - if isinstance(requests_state1, dict) and 'username' in requests_state1 and requests_state1['username']: - # no UI limit currently - valid_key = True - return valid_key - - -def go_gradio(**kwargs): - allow_api = kwargs['allow_api'] - is_public = kwargs['is_public'] - is_hf = kwargs['is_hf'] - memory_restriction_level = kwargs['memory_restriction_level'] - n_gpus = kwargs['n_gpus'] - admin_pass = kwargs['admin_pass'] - model_states = kwargs['model_states'] - dbs = kwargs['dbs'] - db_type = kwargs['db_type'] - visible_langchain_actions = kwargs['visible_langchain_actions'] - visible_langchain_agents = kwargs['visible_langchain_agents'] - allow_upload_to_user_data = kwargs['allow_upload_to_user_data'] - allow_upload_to_my_data = kwargs['allow_upload_to_my_data'] - enable_sources_list = kwargs['enable_sources_list'] - enable_url_upload = kwargs['enable_url_upload'] - enable_text_upload = kwargs['enable_text_upload'] - use_openai_embedding = kwargs['use_openai_embedding'] - hf_embedding_model = kwargs['hf_embedding_model'] - load_db_if_exists = kwargs['load_db_if_exists'] - migrate_embedding_model = kwargs['migrate_embedding_model'] - auto_migrate_db = kwargs['auto_migrate_db'] - captions_model = kwargs['captions_model'] - caption_loader = kwargs['caption_loader'] - doctr_loader = kwargs['doctr_loader'] - - n_jobs = kwargs['n_jobs'] - verbose = kwargs['verbose'] - - # for dynamic state per user session in gradio - model_state0 = kwargs['model_state0'] - score_model_state0 = kwargs['score_model_state0'] - my_db_state0 = kwargs['my_db_state0'] - selection_docs_state0 = kwargs['selection_docs_state0'] - visible_models_state0 = kwargs['visible_models_state0'] - # For Heap analytics - is_heap_analytics_enabled = kwargs['enable_heap_analytics'] - heap_app_id = kwargs['heap_app_id'] - - # easy update of kwargs needed for evaluate() etc. - queue = True - allow_upload = allow_upload_to_user_data or allow_upload_to_my_data - allow_upload_api = allow_api and allow_upload - - kwargs.update(locals()) - - # import control - if kwargs['langchain_mode'] != 'Disabled': - from gpt_langchain import file_types, have_arxiv - else: - have_arxiv = False - file_types = [] - - if 'mbart-' in kwargs['model_lower']: - instruction_label_nochat = "Text to translate" - else: - instruction_label_nochat = "Instruction (Shift-Enter or push Submit to send message," \ - " use Enter for multiple input lines)" - - title = 'h2oGPT' - if kwargs['visible_h2ogpt_header']: - description = """h2oGPT LLM Leaderboard LLM Studio
    CodeLlama
    🤗 Models""" - else: - description = None - description_bottom = "If this host is busy, try
    [Multi-Model](https://gpt.h2o.ai)
    [CodeLlama](https://codellama.h2o.ai)
    [Llama2 70B](https://llama.h2o.ai)
    [Falcon 40B](https://falcon.h2o.ai)
    [HF Spaces1](https://huggingface.co/spaces/h2oai/h2ogpt-chatbot)
    [HF Spaces2](https://huggingface.co/spaces/h2oai/h2ogpt-chatbot2)
    " - if is_hf: - description_bottom += '''Duplicate Space''' - task_info_md = '' - css_code = get_css(kwargs) - - if kwargs['gradio_offline_level'] >= 0: - # avoid GoogleFont that pulls from internet - if kwargs['gradio_offline_level'] == 1: - # front end would still have to download fonts or have cached it at some point - base_font = 'Source Sans Pro' - else: - base_font = 'Helvetica' - theme_kwargs = dict(font=(base_font, 'ui-sans-serif', 'system-ui', 'sans-serif'), - font_mono=('IBM Plex Mono', 'ui-monospace', 'Consolas', 'monospace')) - else: - theme_kwargs = dict() - if kwargs['gradio_size'] == 'xsmall': - theme_kwargs.update(dict(spacing_size=spacing_xsm, text_size=text_xsm, radius_size=radius_xsm)) - elif kwargs['gradio_size'] in [None, 'small']: - theme_kwargs.update(dict(spacing_size=gr.themes.sizes.spacing_sm, text_size=gr.themes.sizes.text_sm, - radius_size=gr.themes.sizes.spacing_sm)) - elif kwargs['gradio_size'] == 'large': - theme_kwargs.update(dict(spacing_size=gr.themes.sizes.spacing_lg, text_size=gr.themes.sizes.text_lg), - radius_size=gr.themes.sizes.spacing_lg) - elif kwargs['gradio_size'] == 'medium': - theme_kwargs.update(dict(spacing_size=gr.themes.sizes.spacing_md, text_size=gr.themes.sizes.text_md, - radius_size=gr.themes.sizes.spacing_md)) - - theme = H2oTheme(**theme_kwargs) if kwargs['h2ocolors'] else SoftTheme(**theme_kwargs) - demo = gr.Blocks(theme=theme, css=css_code, title="h2oGPT", analytics_enabled=False) - callback = gr.CSVLogger() - - model_options0 = flatten_list(list(prompt_type_to_model_name.values())) + kwargs['extra_model_options'] - if kwargs['base_model'].strip() not in model_options0: - model_options0 = [kwargs['base_model'].strip()] + model_options0 - lora_options = kwargs['extra_lora_options'] - if kwargs['lora_weights'].strip() not in lora_options: - lora_options = [kwargs['lora_weights'].strip()] + lora_options - server_options = kwargs['extra_server_options'] - if kwargs['inference_server'].strip() not in server_options: - server_options = [kwargs['inference_server'].strip()] + server_options - if os.getenv('OPENAI_API_KEY'): - if 'openai_chat' not in server_options: - server_options += ['openai_chat'] - if 'openai' not in server_options: - server_options += ['openai'] - - # always add in no lora case - # add fake space so doesn't go away in gradio dropdown - model_options0 = [no_model_str] + sorted(model_options0) - lora_options = [no_lora_str] + sorted(lora_options) - server_options = [no_server_str] + sorted(server_options) - # always add in no model case so can free memory - # add fake space so doesn't go away in gradio dropdown - - # transcribe, will be detranscribed before use by evaluate() - if not kwargs['base_model'].strip(): - kwargs['base_model'] = no_model_str - - if not kwargs['lora_weights'].strip(): - kwargs['lora_weights'] = no_lora_str - - if not kwargs['inference_server'].strip(): - kwargs['inference_server'] = no_server_str - - # transcribe for gradio - kwargs['gpu_id'] = str(kwargs['gpu_id']) - - no_model_msg = 'h2oGPT [ !!! Please Load Model in Models Tab !!! 
]' - output_label0 = f'h2oGPT [Model: {kwargs.get("base_model")}]' if kwargs.get( - 'base_model') else no_model_msg - output_label0_model2 = no_model_msg - - def update_prompt(prompt_type1, prompt_dict1, model_state1, which_model=0): - if not prompt_type1 or which_model != 0: - # keep prompt_type and prompt_dict in sync if possible - prompt_type1 = kwargs.get('prompt_type', prompt_type1) - prompt_dict1 = kwargs.get('prompt_dict', prompt_dict1) - # prefer model specific prompt type instead of global one - if not prompt_type1 or which_model != 0: - prompt_type1 = model_state1.get('prompt_type', prompt_type1) - prompt_dict1 = model_state1.get('prompt_dict', prompt_dict1) - - if not prompt_dict1 or which_model != 0: - # if still not defined, try to get - prompt_dict1 = kwargs.get('prompt_dict', prompt_dict1) - if not prompt_dict1 or which_model != 0: - prompt_dict1 = model_state1.get('prompt_dict', prompt_dict1) - return prompt_type1, prompt_dict1 - - def visible_models_to_model_choice(visible_models1): - if isinstance(visible_models1, list): - assert len( - visible_models1) >= 1, "Invalid visible_models1=%s, can only be single entry" % visible_models1 - # just take first - model_active_choice1 = visible_models1[0] - elif isinstance(visible_models1, (str, int)): - model_active_choice1 = visible_models1 - else: - assert isinstance(visible_models1, type(None)), "Invalid visible_models1=%s" % visible_models1 - model_active_choice1 = visible_models1 - if model_active_choice1 is not None: - if isinstance(model_active_choice1, str): - base_model_list = [x['base_model'] for x in model_states] - if model_active_choice1 in base_model_list: - # if dups, will just be first one - model_active_choice1 = base_model_list.index(model_active_choice1) - else: - # NOTE: Could raise, but sometimes raising in certain places fails too hard and requires UI restart - model_active_choice1 = 0 - else: - model_active_choice1 = 0 - return model_active_choice1 - - default_kwargs = {k: kwargs[k] for k in eval_func_param_names_defaults} - # ensure prompt_type consistent with prep_bot(), so nochat API works same way - default_kwargs['prompt_type'], default_kwargs['prompt_dict'] = \ - update_prompt(default_kwargs['prompt_type'], default_kwargs['prompt_dict'], - model_state1=model_state0, - which_model=visible_models_to_model_choice(kwargs['visible_models'])) - for k in no_default_param_names: - default_kwargs[k] = '' - - def dummy_fun(x): - # need dummy function to block new input from being sent until output is done, - # else gets input_list at time of submit that is old, and shows up as truncated in chatbot - return x - - def update_auth_selection(auth_user, selection_docs_state1, save=False): - # in-place update of both - if 'selection_docs_state' not in auth_user: - auth_user['selection_docs_state'] = selection_docs_state0 - for k, v in auth_user['selection_docs_state'].items(): - if isinstance(selection_docs_state1[k], dict): - if save: - auth_user['selection_docs_state'][k].clear() - auth_user['selection_docs_state'][k].update(selection_docs_state1[k]) - else: - selection_docs_state1[k].clear() - selection_docs_state1[k].update(auth_user['selection_docs_state'][k]) - elif isinstance(selection_docs_state1[k], list): - if save: - auth_user['selection_docs_state'][k].clear() - auth_user['selection_docs_state'][k].extend(selection_docs_state1[k]) - else: - selection_docs_state1[k].clear() - selection_docs_state1[k].extend(auth_user['selection_docs_state'][k]) - else: - raise RuntimeError("Bad type: %s" % 
selection_docs_state1[k]) - - # BEGIN AUTH THINGS - def auth_func(username1, password1, auth_pairs=None, auth_filename=None, - auth_access=None, - auth_freeze=None, - guest_name=None, - selection_docs_state1=None, - selection_docs_state00=None, - **kwargs): - assert auth_freeze is not None - if selection_docs_state1 is None: - selection_docs_state1 = selection_docs_state00 - assert selection_docs_state1 is not None - assert auth_filename and isinstance(auth_filename, str), "Auth file must be a non-empty string, got: %s" % str( - auth_filename) - if auth_access == 'open' and username1 == guest_name: - return True - if username1 == '': - # some issue with login - return False - with filelock.FileLock(auth_filename + '.lock'): - auth_dict = {} - if os.path.isfile(auth_filename): - try: - with open(auth_filename, 'rt') as f: - auth_dict = json.load(f) - except json.decoder.JSONDecodeError as e: - print("Auth exception: %s" % str(e), flush=True) - shutil.move(auth_filename, auth_filename + '.bak' + str(uuid.uuid4())) - auth_dict = {} - if username1 in auth_dict and username1 in auth_pairs: - if password1 == auth_dict[username1]['password'] and password1 == auth_pairs[username1]: - auth_user = auth_dict[username1] - update_auth_selection(auth_user, selection_docs_state1) - save_auth_dict(auth_dict, auth_filename) - return True - else: - return False - elif username1 in auth_dict: - if password1 == auth_dict[username1]['password']: - auth_user = auth_dict[username1] - update_auth_selection(auth_user, selection_docs_state1) - save_auth_dict(auth_dict, auth_filename) - return True - else: - return False - elif username1 in auth_pairs: - # copy over CLI auth to file so only one state to manage - auth_dict[username1] = dict(password=auth_pairs[username1], userid=str(uuid.uuid4())) - auth_user = auth_dict[username1] - update_auth_selection(auth_user, selection_docs_state1) - save_auth_dict(auth_dict, auth_filename) - return True - else: - if auth_access == 'closed': - return False - # open access - auth_dict[username1] = dict(password=password1, userid=str(uuid.uuid4())) - auth_user = auth_dict[username1] - update_auth_selection(auth_user, selection_docs_state1) - save_auth_dict(auth_dict, auth_filename) - if auth_access == 'open': - return True - else: - raise RuntimeError("Invalid auth_access: %s" % auth_access) - - def auth_func_open(*args, **kwargs): - return True - - def get_username(requests_state1): - username1 = None - if 'username' in requests_state1: - username1 = requests_state1['username'] - return username1 - - def get_userid_auth_func(requests_state1, auth_filename=None, auth_access=None, guest_name=None, **kwargs): - if auth_filename and isinstance(auth_filename, str): - username1 = get_username(requests_state1) - if username1: - if username1 == guest_name: - return str(uuid.uuid4()) - with filelock.FileLock(auth_filename + '.lock'): - if os.path.isfile(auth_filename): - with open(auth_filename, 'rt') as f: - auth_dict = json.load(f) - if username1 in auth_dict: - return auth_dict[username1]['userid'] - # if here, then not persistently associated with username1, - # but should only be one-time asked if going to persist within a single session! - return str(uuid.uuid4()) - - get_userid_auth = functools.partial(get_userid_auth_func, - auth_filename=kwargs['auth_filename'], - auth_access=kwargs['auth_access'], - guest_name=kwargs['guest_name'], - ) - if kwargs['auth_access'] == 'closed': - auth_message1 = "Closed access" - else: - auth_message1 = "WELCOME! 
Open access" \ - " (%s/%s or any unique user/pass)" % (kwargs['guest_name'], kwargs['guest_name']) - - if kwargs['auth_message'] is not None: - auth_message = kwargs['auth_message'] - else: - auth_message = auth_message1 - - # always use same callable - auth_pairs0 = {} - if isinstance(kwargs['auth'], list): - for k, v in kwargs['auth']: - auth_pairs0[k] = v - authf = functools.partial(auth_func, - auth_pairs=auth_pairs0, - auth_filename=kwargs['auth_filename'], - auth_access=kwargs['auth_access'], - auth_freeze=kwargs['auth_freeze'], - guest_name=kwargs['guest_name'], - selection_docs_state00=copy.deepcopy(selection_docs_state0)) - - def get_request_state(requests_state1, request, db1s): - # if need to get state, do it now - if not requests_state1: - requests_state1 = requests_state0.copy() - if requests: - if not requests_state1.get('headers', '') and hasattr(request, 'headers'): - requests_state1.update(request.headers) - if not requests_state1.get('host', '') and hasattr(request, 'host'): - requests_state1.update(dict(host=request.host)) - if not requests_state1.get('host2', '') and hasattr(request, 'client') and hasattr(request.client, 'host'): - requests_state1.update(dict(host2=request.client.host)) - if not requests_state1.get('username', '') and hasattr(request, 'username'): - # use already-defined username instead of keep changing to new uuid - # should be same as in requests_state1 - db_username = get_username_direct(db1s) - requests_state1.update(dict(username=request.username or db_username or str(uuid.uuid4()))) - requests_state1 = {str(k): str(v) for k, v in requests_state1.items()} - return requests_state1 - - def user_state_setup(db1s, requests_state1, request: gr.Request, *args): - requests_state1 = get_request_state(requests_state1, request, db1s) - set_userid(db1s, requests_state1, get_userid_auth) - args_list = [db1s, requests_state1] + list(args) - return tuple(args_list) - - # END AUTH THINGS - - def allow_empty_instruction(langchain_mode1, document_subset1, langchain_action1): - allow = False - allow |= langchain_action1 not in LangChainAction.QUERY.value - allow |= document_subset1 in DocumentSubset.TopKSources.name - if langchain_mode1 in [LangChainMode.LLM.value]: - allow = False - return allow - - image_loaders_options0, image_loaders_options, \ - pdf_loaders_options0, pdf_loaders_options, \ - url_loaders_options0, url_loaders_options = lg_to_gr(**kwargs) - jq_schema0 = '.[]' - - with demo: - # avoid actual model/tokenizer here or anything that would be bad to deepcopy - # https://github.com/gradio-app/gradio/issues/3558 - model_state = gr.State( - dict(model='model', tokenizer='tokenizer', device=kwargs['device'], - base_model=kwargs['base_model'], - tokenizer_base_model=kwargs['tokenizer_base_model'], - lora_weights=kwargs['lora_weights'], - inference_server=kwargs['inference_server'], - prompt_type=kwargs['prompt_type'], - prompt_dict=kwargs['prompt_dict'], - visible_models=kwargs['visible_models'], - h2ogpt_key=kwargs['h2ogpt_key'], - ) - ) - - def update_langchain_mode_paths(selection_docs_state1): - dup = selection_docs_state1['langchain_mode_paths'].copy() - for k, v in dup.items(): - if k not in selection_docs_state1['langchain_modes']: - selection_docs_state1['langchain_mode_paths'].pop(k) - for k in selection_docs_state1['langchain_modes']: - if k not in selection_docs_state1['langchain_mode_types']: - # if didn't specify shared, then assume scratch if didn't login or personal if logged in - selection_docs_state1['langchain_mode_types'][k] = 
LangChainTypes.PERSONAL.value - return selection_docs_state1 - - # Setup some gradio states for per-user dynamic state - model_state2 = gr.State(kwargs['model_state_none'].copy()) - model_options_state = gr.State([model_options0]) - lora_options_state = gr.State([lora_options]) - server_options_state = gr.State([server_options]) - my_db_state = gr.State(my_db_state0) - chat_state = gr.State({}) - docs_state00 = kwargs['document_choice'] + [DocumentChoice.ALL.value] - docs_state0 = [] - [docs_state0.append(x) for x in docs_state00 if x not in docs_state0] - docs_state = gr.State(docs_state0) - viewable_docs_state0 = [] - viewable_docs_state = gr.State(viewable_docs_state0) - selection_docs_state0 = update_langchain_mode_paths(selection_docs_state0) - selection_docs_state = gr.State(selection_docs_state0) - requests_state0 = dict(headers='', host='', username='') - requests_state = gr.State(requests_state0) - - if description is not None: - gr.Markdown(f""" - {get_h2o_title(title, description) if kwargs['h2ocolors'] else get_simple_title(title, description)} - """) - - # go button visible if - base_wanted = kwargs['base_model'] != no_model_str and kwargs['login_mode_if_model0'] - go_btn = gr.Button(value="ENTER", visible=base_wanted, variant="primary") - - nas = ' '.join(['NA'] * len(kwargs['model_states'])) - res_value = "Response Score: NA" if not kwargs[ - 'model_lock'] else "Response Scores: %s" % nas - - user_can_do_sum = kwargs['langchain_mode'] != LangChainMode.DISABLED.value and \ - (kwargs['visible_side_bar'] or kwargs['visible_system_tab']) - if user_can_do_sum: - extra_prompt_form = ". For summarization, no query required, just click submit" - else: - extra_prompt_form = "" - if kwargs['input_lines'] > 1: - instruction_label = "Shift-Enter to Submit, Enter for more lines%s" % extra_prompt_form - else: - instruction_label = "Enter to Submit, Shift-Enter for more lines%s" % extra_prompt_form - - def get_langchain_choices(selection_docs_state1): - langchain_modes = selection_docs_state1['langchain_modes'] - - if is_hf: - # don't show 'wiki' since only usually useful for internal testing at moment - no_show_modes = ['Disabled', 'wiki'] - else: - no_show_modes = ['Disabled'] - allowed_modes = langchain_modes.copy() - # allowed_modes = [x for x in allowed_modes if x in dbs] - allowed_modes += ['LLM'] - if allow_upload_to_my_data and 'MyData' not in allowed_modes: - allowed_modes += ['MyData'] - if allow_upload_to_user_data and 'UserData' not in allowed_modes: - allowed_modes += ['UserData'] - choices = [x for x in langchain_modes if x in allowed_modes and x not in no_show_modes] - return choices - - def get_df_langchain_mode_paths(selection_docs_state1, db1s, dbs1=None): - langchain_choices1 = get_langchain_choices(selection_docs_state1) - langchain_mode_paths = selection_docs_state1['langchain_mode_paths'] - langchain_mode_paths = {k: v for k, v in langchain_mode_paths.items() if k in langchain_choices1} - if langchain_mode_paths: - langchain_mode_paths = langchain_mode_paths.copy() - for langchain_mode1 in langchain_modes_non_db: - langchain_mode_paths.pop(langchain_mode1, None) - df1 = pd.DataFrame.from_dict(langchain_mode_paths.items(), orient='columns') - df1.columns = ['Collection', 'Path'] - df1 = df1.set_index('Collection') - else: - df1 = pd.DataFrame(None) - langchain_mode_types = selection_docs_state1['langchain_mode_types'] - langchain_mode_types = {k: v for k, v in langchain_mode_types.items() if k in langchain_choices1} - if langchain_mode_types: - langchain_mode_types = 
langchain_mode_types.copy() - for langchain_mode1 in langchain_modes_non_db: - langchain_mode_types.pop(langchain_mode1, None) - - df2 = pd.DataFrame.from_dict(langchain_mode_types.items(), orient='columns') - df2.columns = ['Collection', 'Type'] - df2 = df2.set_index('Collection') - - from src.gpt_langchain import get_persist_directory, load_embed - persist_directory_dict = {} - embed_dict = {} - chroma_version_dict = {} - for langchain_mode3 in langchain_mode_types: - langchain_type3 = langchain_mode_types.get(langchain_mode3, LangChainTypes.EITHER.value) - persist_directory3, langchain_type3 = get_persist_directory(langchain_mode3, - langchain_type=langchain_type3, - db1s=db1s, dbs=dbs1) - got_embedding3, use_openai_embedding3, hf_embedding_model3 = load_embed( - persist_directory=persist_directory3) - persist_directory_dict[langchain_mode3] = persist_directory3 - embed_dict[langchain_mode3] = 'OpenAI' if not hf_embedding_model3 else hf_embedding_model3 - - if os.path.isfile(os.path.join(persist_directory3, 'chroma.sqlite3')): - chroma_version_dict[langchain_mode3] = 'ChromaDB>=0.4' - elif os.path.isdir(os.path.join(persist_directory3, 'index')): - chroma_version_dict[langchain_mode3] = 'ChromaDB<0.4' - elif not os.listdir(persist_directory3): - if db_type == 'chroma': - chroma_version_dict[langchain_mode3] = 'ChromaDB>=0.4' # will be - elif db_type == 'chroma_old': - chroma_version_dict[langchain_mode3] = 'ChromaDB<0.4' # will be - else: - chroma_version_dict[langchain_mode3] = 'Weaviate' # will be - if isinstance(hf_embedding_model, dict): - hf_embedding_model3 = hf_embedding_model['name'] - else: - hf_embedding_model3 = hf_embedding_model - assert isinstance(hf_embedding_model3, str) - embed_dict[langchain_mode3] = hf_embedding_model3 # will be - else: - chroma_version_dict[langchain_mode3] = 'Weaviate' - - df3 = pd.DataFrame.from_dict(persist_directory_dict.items(), orient='columns') - df3.columns = ['Collection', 'Directory'] - df3 = df3.set_index('Collection') - - df4 = pd.DataFrame.from_dict(embed_dict.items(), orient='columns') - df4.columns = ['Collection', 'Embedding'] - df4 = df4.set_index('Collection') - - df5 = pd.DataFrame.from_dict(chroma_version_dict.items(), orient='columns') - df5.columns = ['Collection', 'DB'] - df5 = df5.set_index('Collection') - else: - df2 = pd.DataFrame(None) - df3 = pd.DataFrame(None) - df4 = pd.DataFrame(None) - df5 = pd.DataFrame(None) - df_list = [df2, df1, df3, df4, df5] - df_list = [x for x in df_list if x.shape[1] > 0] - if len(df_list) > 1: - df = df_list[0].join(df_list[1:]).replace(np.nan, '').reset_index() - elif len(df_list) == 0: - df = df_list[0].replace(np.nan, '').reset_index() - else: - df = pd.DataFrame(None) - return df - - normal_block = gr.Row(visible=not base_wanted, equal_height=False, elem_id="col_container") - with normal_block: - side_bar = gr.Column(elem_id="sidebar", scale=1, min_width=100, visible=kwargs['visible_side_bar']) - with side_bar: - with gr.Accordion("Chats", open=False, visible=True): - radio_chats = gr.Radio(value=None, label="Saved Chats", show_label=False, - visible=True, interactive=True, - type='value') - upload_visible = kwargs['langchain_mode'] != 'Disabled' and allow_upload - with gr.Accordion("Upload", open=False, visible=upload_visible): - with gr.Column(): - with gr.Row(equal_height=False): - fileup_output = gr.File(show_label=False, - file_types=['.' + x for x in file_types], - # file_types=['*', '*.*'], # for iPhone etc. 
needs to be unconstrained else doesn't work with extension-based restrictions - file_count="multiple", - scale=1, - min_width=0, - elem_id="warning", elem_classes="feedback", - ) - fileup_output_text = gr.Textbox(visible=False) - max_quality = gr.Checkbox(label="Maximum Ingest Quality", value=kwargs['max_quality'], - visible=not is_public) - url_visible = kwargs['langchain_mode'] != 'Disabled' and allow_upload and enable_url_upload - url_label = 'URL/ArXiv' if have_arxiv else 'URL' - url_text = gr.Textbox(label=url_label, - # placeholder="Enter Submits", - max_lines=1, - interactive=True) - text_visible = kwargs['langchain_mode'] != 'Disabled' and allow_upload and enable_text_upload - user_text_text = gr.Textbox(label='Paste Text', - # placeholder="Enter Submits", - interactive=True, - visible=text_visible) - github_textbox = gr.Textbox(label="Github URL", visible=False) # FIXME WIP - database_visible = kwargs['langchain_mode'] != 'Disabled' - with gr.Accordion("Resources", open=False, visible=database_visible): - langchain_choices0 = get_langchain_choices(selection_docs_state0) - langchain_mode = gr.Radio( - langchain_choices0, - value=kwargs['langchain_mode'], - label="Collections", - show_label=True, - visible=kwargs['langchain_mode'] != 'Disabled', - min_width=100) - add_chat_history_to_context = gr.Checkbox(label="Chat History", - value=kwargs['add_chat_history_to_context']) - add_search_to_context = gr.Checkbox(label="Web Search", - value=kwargs['add_search_to_context'], - visible=os.environ.get('SERPAPI_API_KEY') is not None \ - and have_serpapi) - document_subset = gr.Radio([x.name for x in DocumentSubset], - label="Subset", - value=DocumentSubset.Relevant.name, - interactive=True, - ) - allowed_actions = [x for x in langchain_actions if x in visible_langchain_actions] - langchain_action = gr.Radio( - allowed_actions, - value=allowed_actions[0] if len(allowed_actions) > 0 else None, - label="Action", - visible=True) - allowed_agents = [x for x in langchain_agents_list if x in visible_langchain_agents] - if os.getenv('OPENAI_API_KEY') is None and LangChainAgent.JSON.value in allowed_agents: - allowed_agents.remove(LangChainAgent.JSON.value) - if os.getenv('OPENAI_API_KEY') is None and LangChainAgent.PYTHON.value in allowed_agents: - allowed_agents.remove(LangChainAgent.PYTHON.value) - if LangChainAgent.PANDAS.value in allowed_agents: - allowed_agents.remove(LangChainAgent.PANDAS.value) - langchain_agents = gr.Dropdown( - allowed_agents, - value=None, - label="Agents", - multiselect=True, - interactive=True, - visible=True, - elem_id="langchain_agents", - filterable=False) - visible_doc_track = upload_visible and kwargs['visible_doc_track'] and not kwargs[ - 'large_file_count_mode'] - row_doc_track = gr.Row(visible=visible_doc_track) - with row_doc_track: - if kwargs['langchain_mode'] in langchain_modes_non_db: - doc_counts_str = "Pure LLM Mode" - else: - doc_counts_str = "Name: %s\nDocs: Unset\nChunks: Unset" % kwargs['langchain_mode'] - text_doc_count = gr.Textbox(lines=3, label="Doc Counts", value=doc_counts_str, - visible=visible_doc_track) - text_file_last = gr.Textbox(lines=1, label="Newest Doc", value=None, visible=visible_doc_track) - text_viewable_doc_count = gr.Textbox(lines=2, label=None, visible=False) - col_tabs = gr.Column(elem_id="col-tabs", scale=10) - with col_tabs, gr.Tabs(): - if kwargs['chat_tables']: - chat_tab = gr.Row(visible=True) - else: - chat_tab = gr.TabItem("Chat") \ - if kwargs['visible_chat_tab'] else gr.Row(visible=False) - with chat_tab: - if 
kwargs['langchain_mode'] == 'Disabled': - text_output_nochat = gr.Textbox(lines=5, label=output_label0, show_copy_button=True, - visible=not kwargs['chat']) - else: - # text looks a bit worse, but HTML links work - text_output_nochat = gr.HTML(label=output_label0, visible=not kwargs['chat']) - with gr.Row(): - # NOCHAT - instruction_nochat = gr.Textbox( - lines=kwargs['input_lines'], - label=instruction_label_nochat, - placeholder=kwargs['placeholder_instruction'], - visible=not kwargs['chat'], - ) - iinput_nochat = gr.Textbox(lines=4, label="Input context for Instruction", - placeholder=kwargs['placeholder_input'], - value=kwargs['iinput'], - visible=not kwargs['chat']) - submit_nochat = gr.Button("Submit", size='sm', visible=not kwargs['chat']) - flag_btn_nochat = gr.Button("Flag", size='sm', visible=not kwargs['chat']) - score_text_nochat = gr.Textbox("Response Score: NA", show_label=False, - visible=not kwargs['chat']) - submit_nochat_api = gr.Button("Submit nochat API", visible=False) - submit_nochat_api_plain = gr.Button("Submit nochat API Plain", visible=False) - inputs_dict_str = gr.Textbox(label='API input for nochat', show_label=False, visible=False) - text_output_nochat_api = gr.Textbox(lines=5, label='API nochat output', visible=False, - show_copy_button=True) - - visible_upload = (allow_upload_to_user_data or - allow_upload_to_my_data) and \ - kwargs['langchain_mode'] != 'Disabled' - # CHAT - col_chat = gr.Column(visible=kwargs['chat']) - with col_chat: - with gr.Row(): - with gr.Column(scale=50): - with gr.Row(elem_id="prompt-form-row"): - label_instruction = 'Ask anything' - instruction = gr.Textbox( - lines=kwargs['input_lines'], - label=label_instruction, - placeholder=instruction_label, - info=None, - elem_id='prompt-form', - container=True, - ) - attach_button = gr.UploadButton( - elem_id="attach-button" if visible_upload else None, - value="", - label="Upload File(s)", - size="sm", - min_width=24, - file_types=['.' 
+ x for x in file_types], - file_count="multiple", - visible=visible_upload) - - submit_buttons = gr.Row(equal_height=False, visible=kwargs['visible_submit_buttons']) - with submit_buttons: - mw1 = 50 - mw2 = 50 - with gr.Column(min_width=mw1): - submit = gr.Button(value='Submit', variant='primary', size='sm', - min_width=mw1) - stop_btn = gr.Button(value="Stop", variant='secondary', size='sm', - min_width=mw1) - save_chat_btn = gr.Button("Save", size='sm', min_width=mw1) - with gr.Column(min_width=mw2): - retry_btn = gr.Button("Redo", size='sm', min_width=mw2) - undo = gr.Button("Undo", size='sm', min_width=mw2) - clear_chat_btn = gr.Button(value="Clear", size='sm', min_width=mw2) - - visible_model_choice = bool(kwargs['model_lock']) and \ - len(model_states) > 1 and \ - kwargs['visible_visible_models'] - with gr.Row(visible=visible_model_choice): - visible_models = gr.Dropdown(kwargs['all_models'], - label="Visible Models", - value=visible_models_state0, - interactive=True, - multiselect=True, - visible=visible_model_choice, - elem_id="visible-models", - filterable=False, - ) - - text_output, text_output2, text_outputs = make_chatbots(output_label0, output_label0_model2, - **kwargs) - - with gr.Row(): - with gr.Column(visible=kwargs['score_model']): - score_text = gr.Textbox(res_value, - show_label=False, - visible=True) - score_text2 = gr.Textbox("Response Score2: NA", show_label=False, - visible=False and not kwargs['model_lock']) - - doc_selection_tab = gr.TabItem("Document Selection") \ - if kwargs['visible_doc_selection_tab'] else gr.Row(visible=False) - with doc_selection_tab: - if kwargs['langchain_mode'] in langchain_modes_non_db: - dlabel1 = 'Choose Resources->Collections and Pick Collection' - active_collection = gr.Markdown(value="#### Not Chatting with Any Collection\n%s" % dlabel1) - else: - dlabel1 = 'Select Subset of Document(s) for Chat with Collection: %s' % kwargs['langchain_mode'] - active_collection = gr.Markdown( - value="#### Chatting with Collection: %s" % kwargs['langchain_mode']) - document_choice = gr.Dropdown(docs_state0, - label=dlabel1, - value=[DocumentChoice.ALL.value], - interactive=True, - multiselect=True, - visible=kwargs['langchain_mode'] != 'Disabled', - ) - sources_visible = kwargs['langchain_mode'] != 'Disabled' and enable_sources_list - with gr.Row(): - with gr.Column(scale=1): - get_sources_btn = gr.Button(value="Update UI with Document(s) from DB", scale=0, size='sm', - visible=sources_visible and kwargs['large_file_count_mode']) - # handle API get sources - get_sources_api_btn = gr.Button(visible=False) - get_sources_api_text = gr.Textbox(visible=False) - - get_document_api_btn = gr.Button(visible=False) - get_document_api_text = gr.Textbox(visible=False) - - show_sources_btn = gr.Button(value="Show Sources from DB", scale=0, size='sm', - visible=sources_visible and kwargs['large_file_count_mode']) - delete_sources_btn = gr.Button(value="Delete Selected Sources from DB", scale=0, size='sm', - visible=sources_visible) - refresh_sources_btn = gr.Button(value="Update DB with new/changed files on disk", scale=0, - size='sm', - visible=sources_visible and allow_upload_to_user_data) - with gr.Column(scale=4): - pass - visible_add_remove_collection = visible_upload - with gr.Row(): - with gr.Column(scale=1): - add_placeholder = "e.g. UserData2, shared, user_path2" \ - if not is_public else "e.g. MyData2, personal (optional)" - remove_placeholder = "e.g. UserData2" if not is_public else "e.g. 
MyData2" - new_langchain_mode_text = gr.Textbox(value="", visible=visible_add_remove_collection, - label='Add Collection', - placeholder=add_placeholder, - interactive=True) - remove_langchain_mode_text = gr.Textbox(value="", visible=visible_add_remove_collection, - label='Remove Collection from UI', - placeholder=remove_placeholder, - interactive=True) - purge_langchain_mode_text = gr.Textbox(value="", visible=visible_add_remove_collection, - label='Purge Collection (UI, DB, & source files)', - placeholder=remove_placeholder, - interactive=True) - sync_sources_btn = gr.Button( - value="Synchronize DB and UI [only required if did not login and have shared docs]", - scale=0, size='sm', - visible=sources_visible and allow_upload_to_user_data and not kwargs[ - 'large_file_count_mode']) - load_langchain = gr.Button( - value="Load Collections State [only required if logged in another user ", scale=0, - size='sm', - visible=False and allow_upload_to_user_data and - kwargs['langchain_mode'] != 'Disabled') - with gr.Column(scale=5): - if kwargs['langchain_mode'] != 'Disabled' and visible_add_remove_collection: - df0 = get_df_langchain_mode_paths(selection_docs_state0, None, dbs1=dbs) - else: - df0 = pd.DataFrame(None) - langchain_mode_path_text = gr.Dataframe(value=df0, - visible=visible_add_remove_collection, - label='LangChain Mode-Path', - show_label=False, - interactive=False) - - sources_row = gr.Row(visible=kwargs['langchain_mode'] != 'Disabled' and enable_sources_list, - equal_height=False) - with sources_row: - with gr.Column(scale=1): - file_source = gr.File(interactive=False, - label="Download File w/Sources") - with gr.Column(scale=2): - sources_text = gr.HTML(label='Sources Added', interactive=False) - - doc_exception_text = gr.Textbox(value="", label='Document Exceptions', - interactive=False, - visible=kwargs['langchain_mode'] != 'Disabled') - file_types_str = ' '.join(file_types) + ' URL ArXiv TEXT' - gr.Textbox(value=file_types_str, label='Document Types Supported', - lines=2, - interactive=False, - visible=kwargs['langchain_mode'] != 'Disabled') - - doc_view_tab = gr.TabItem("Document Viewer") \ - if kwargs['visible_doc_view_tab'] else gr.Row(visible=False) - with doc_view_tab: - with gr.Row(visible=kwargs['langchain_mode'] != 'Disabled'): - with gr.Column(scale=2): - get_viewable_sources_btn = gr.Button(value="Update UI with Document(s) from DB", scale=0, - size='sm', - visible=sources_visible and kwargs[ - 'large_file_count_mode']) - view_document_choice = gr.Dropdown(viewable_docs_state0, - label="Select Single Document to View", - value=None, - interactive=True, - multiselect=False, - visible=True, - ) - info_view_raw = "Raw text shown if render of original doc fails" - if is_public: - info_view_raw += " (Up to %s chunks in public portal)" % kwargs['max_raw_chunks'] - view_raw_text_checkbox = gr.Checkbox(label="View Database Text", value=False, - info=info_view_raw, - visible=kwargs['db_type'] in ['chroma', 'chroma_old']) - with gr.Column(scale=4): - pass - doc_view = gr.HTML(visible=False) - doc_view2 = gr.Dataframe(visible=False) - doc_view3 = gr.JSON(visible=False) - doc_view4 = gr.Markdown(visible=False) - doc_view5 = gr.HTML(visible=False) - - chat_tab = gr.TabItem("Chat History") \ - if kwargs['visible_chat_history_tab'] else gr.Row(visible=False) - with chat_tab: - with gr.Row(): - with gr.Column(scale=1): - remove_chat_btn = gr.Button(value="Remove Selected Saved Chats", visible=True, size='sm') - flag_btn = gr.Button("Flag Current Chat", size='sm') - export_chats_btn 
= gr.Button(value="Export Chats to Download", size='sm') - with gr.Column(scale=4): - pass - with gr.Row(): - chats_file = gr.File(interactive=False, label="Download Exported Chats") - chatsup_output = gr.File(label="Upload Chat File(s)", - file_types=['.json'], - file_count='multiple', - elem_id="warning", elem_classes="feedback") - with gr.Row(): - if 'mbart-' in kwargs['model_lower']: - src_lang = gr.Dropdown(list(languages_covered().keys()), - value=kwargs['src_lang'], - label="Input Language") - tgt_lang = gr.Dropdown(list(languages_covered().keys()), - value=kwargs['tgt_lang'], - label="Output Language") - - chat_exception_text = gr.Textbox(value="", visible=True, label='Chat Exceptions', - interactive=False) - expert_tab = gr.TabItem("Expert") \ - if kwargs['visible_expert_tab'] else gr.Row(visible=False) - with expert_tab: - with gr.Row(): - with gr.Column(): - prompt_type = gr.Dropdown(prompt_types_strings, - value=kwargs['prompt_type'], label="Prompt Type", - visible=not kwargs['model_lock'], - interactive=not is_public, - ) - prompt_type2 = gr.Dropdown(prompt_types_strings, - value=kwargs['prompt_type'], label="Prompt Type Model 2", - visible=False and not kwargs['model_lock'], - interactive=not is_public) - system_prompt = gr.Textbox(label="System Prompt", - info="If 'auto', then uses model's system prompt," - " else use this message." - " If empty, no system message is used", - value=kwargs['system_prompt']) - context = gr.Textbox(lines=2, label="System Pre-Context", - info="Directly pre-appended without prompt processing (before Pre-Conversation)", - value=kwargs['context']) - chat_conversation = gr.Textbox(lines=2, label="Pre-Conversation", - info="Pre-append conversation for instruct/chat models as List of tuple of (human, bot)", - value=kwargs['chat_conversation']) - text_context_list = gr.Textbox(lines=2, label="Text Doc Q/A", - info="List of strings, for document Q/A, for bypassing database (i.e. 
also works in LLM Mode)", - value=kwargs['chat_conversation'], - visible=not is_public, # primarily meant for API - ) - iinput = gr.Textbox(lines=2, label="Input for Instruct prompt types", - info="If given for document query, added after query", - value=kwargs['iinput'], - placeholder=kwargs['placeholder_input'], - interactive=not is_public) - with gr.Column(): - pre_prompt_query = gr.Textbox(label="Query Pre-Prompt", - info="Added before documents", - value=kwargs['pre_prompt_query'] or '') - prompt_query = gr.Textbox(label="Query Prompt", - info="Added after documents", - value=kwargs['prompt_query'] or '') - pre_prompt_summary = gr.Textbox(label="Summary Pre-Prompt", - info="Added before documents", - value=kwargs['pre_prompt_summary'] or '') - prompt_summary = gr.Textbox(label="Summary Prompt", - info="Added after documents (if query given, 'Focusing on {query}, ' is pre-appended)", - value=kwargs['prompt_summary'] or '') - with gr.Row(visible=not is_public): - image_loaders = gr.CheckboxGroup(image_loaders_options, - label="Force Image Reader", - value=image_loaders_options0) - pdf_loaders = gr.CheckboxGroup(pdf_loaders_options, - label="Force PDF Reader", - value=pdf_loaders_options0) - url_loaders = gr.CheckboxGroup(url_loaders_options, - label="Force URL Reader", value=url_loaders_options0) - jq_schema = gr.Textbox(label="JSON jq_schema", value=jq_schema0) - - min_top_k_docs, max_top_k_docs, label_top_k_docs = get_minmax_top_k_docs(is_public) - top_k_docs = gr.Slider(minimum=min_top_k_docs, maximum=max_top_k_docs, step=1, - value=kwargs['top_k_docs'], - label=label_top_k_docs, - # info="For LangChain", - visible=kwargs['langchain_mode'] != 'Disabled', - interactive=not is_public) - chunk_size = gr.Number(value=kwargs['chunk_size'], - label="Chunk size for document chunking", - info="For LangChain (ignored if chunk=False)", - minimum=128, - maximum=2048, - visible=kwargs['langchain_mode'] != 'Disabled', - interactive=not is_public, - precision=0) - docs_ordering_type = gr.Radio( - docs_ordering_types, - value=kwargs['docs_ordering_type'], - label="Document Sorting in LLM Context", - visible=True) - chunk = gr.components.Checkbox(value=kwargs['chunk'], - label="Whether to chunk documents", - info="For LangChain", - visible=kwargs['langchain_mode'] != 'Disabled', - interactive=not is_public) - embed = gr.components.Checkbox(value=True, - label="Whether to embed text", - info="For LangChain", - visible=False) - with gr.Row(): - stream_output = gr.components.Checkbox(label="Stream output", - value=kwargs['stream_output']) - do_sample = gr.Checkbox(label="Sample", - info="Enable sampler (required for use of temperature, top_p, top_k)", - value=kwargs['do_sample']) - max_time = gr.Slider(minimum=0, maximum=kwargs['max_max_time'], step=1, - value=min(kwargs['max_max_time'], - kwargs['max_time']), label="Max. time", - info="Max. time to search optimal output.") - temperature = gr.Slider(minimum=0.01, maximum=2, - value=kwargs['temperature'], - label="Temperature", - info="Lower is deterministic, higher more creative") - top_p = gr.Slider(minimum=1e-3, maximum=1.0 - 1e-3, - value=kwargs['top_p'], label="Top p", - info="Cumulative probability of tokens to sample from") - top_k = gr.Slider( - minimum=1, maximum=100, step=1, - value=kwargs['top_k'], label="Top k", - info='Num. 
tokens to sample from' - ) - # FIXME: https://github.com/h2oai/h2ogpt/issues/106 - if os.getenv('TESTINGFAIL'): - max_beams = 8 if not (memory_restriction_level or is_public) else 1 - else: - max_beams = 1 - num_beams = gr.Slider(minimum=1, maximum=max_beams, step=1, - value=min(max_beams, kwargs['num_beams']), label="Beams", - info="Number of searches for optimal overall probability. " - "Uses more GPU memory/compute", - interactive=False, visible=max_beams > 1) - max_max_new_tokens = get_max_max_new_tokens(model_state0, **kwargs) - max_new_tokens = gr.Slider( - minimum=1, maximum=max_max_new_tokens, step=1, - value=min(max_max_new_tokens, kwargs['max_new_tokens']), label="Max output length", - ) - min_new_tokens = gr.Slider( - minimum=0, maximum=max_max_new_tokens, step=1, - value=min(max_max_new_tokens, kwargs['min_new_tokens']), label="Min output length", - ) - max_new_tokens2 = gr.Slider( - minimum=1, maximum=max_max_new_tokens, step=1, - value=min(max_max_new_tokens, kwargs['max_new_tokens']), label="Max output length 2", - visible=False and not kwargs['model_lock'], - ) - min_new_tokens2 = gr.Slider( - minimum=0, maximum=max_max_new_tokens, step=1, - value=min(max_max_new_tokens, kwargs['min_new_tokens']), label="Min output length 2", - visible=False and not kwargs['model_lock'], - ) - min_max_new_tokens = gr.Slider( - minimum=1, maximum=max_max_new_tokens, step=1, - value=min(max_max_new_tokens, kwargs['min_max_new_tokens']), - label="Min. of Max output length", - ) - early_stopping = gr.Checkbox(label="EarlyStopping", info="Stop early in beam search", - value=kwargs['early_stopping'], visible=max_beams > 1) - repetition_penalty = gr.Slider(minimum=0.01, maximum=3.0, - value=kwargs['repetition_penalty'], - label="Repetition Penalty") - num_return_sequences = gr.Slider(minimum=1, maximum=10, step=1, - value=kwargs['num_return_sequences'], - label="Number Returns", info="Must be <= num_beams", - interactive=not is_public, visible=max_beams > 1) - chat = gr.components.Checkbox(label="Chat mode", value=kwargs['chat'], - visible=False, # no longer support nochat in UI - interactive=not is_public, - ) - with gr.Row(): - count_chat_tokens_btn = gr.Button(value="Count Chat Tokens", - visible=not is_public and not kwargs['model_lock'], - interactive=not is_public, size='sm') - chat_token_count = gr.Textbox(label="Chat Token Count Result", value=None, - visible=not is_public and not kwargs['model_lock'], - interactive=False) - - models_tab = gr.TabItem("Models") \ - if kwargs['visible_models_tab'] and not bool(kwargs['model_lock']) else gr.Row(visible=False) - with models_tab: - load_msg = "Download/Load Model" if not is_public \ - else "LOAD-UNLOAD DISABLED FOR HOSTED DEMO" - if kwargs['base_model'] not in ['', None, no_model_str]: - load_msg += ' [WARNING: Avoid --base_model on CLI for memory efficient Load-Unload]' - load_msg2 = load_msg + "(Model 2)" - variant_load_msg = 'primary' if not is_public else 'secondary' - with gr.Row(): - n_gpus_list = [str(x) for x in list(range(-1, n_gpus))] - with gr.Column(): - with gr.Row(): - with gr.Column(scale=20, visible=not kwargs['model_lock']): - load_model_button = gr.Button(load_msg, variant=variant_load_msg, scale=0, - size='sm', interactive=not is_public) - model_choice = gr.Dropdown(model_options_state.value[0], label="Choose Base Model", - value=kwargs['base_model']) - lora_choice = gr.Dropdown(lora_options_state.value[0], label="Choose LORA", - value=kwargs['lora_weights'], visible=kwargs['show_lora']) - server_choice = 
gr.Dropdown(server_options_state.value[0], label="Choose Server", - value=kwargs['inference_server'], visible=not is_public) - max_seq_len = gr.Number(value=kwargs['max_seq_len'] or 2048, - minimum=128, - maximum=2 ** 18, - info="If standard LLaMa-2, choose up to 4096", - label="max_seq_len") - rope_scaling = gr.Textbox(value=str(kwargs['rope_scaling'] or {}), - label="rope_scaling") - row_llama = gr.Row(visible=kwargs['show_llama'] and kwargs['base_model'] == 'llama') - with row_llama: - model_path_llama = gr.Textbox(value=kwargs['llamacpp_dict']['model_path_llama'], - lines=4, - label="Choose LLaMa.cpp Model Path/URL (for Base Model: llama)", - visible=kwargs['show_llama']) - n_gpu_layers = gr.Number(value=kwargs['llamacpp_dict']['n_gpu_layers'], - minimum=0, maximum=100, - label="LLaMa.cpp Num. GPU Layers Offloaded", - visible=kwargs['show_llama']) - n_batch = gr.Number(value=kwargs['llamacpp_dict']['n_batch'], - minimum=0, maximum=2048, - label="LLaMa.cpp Batch Size", - visible=kwargs['show_llama']) - n_gqa = gr.Number(value=kwargs['llamacpp_dict']['n_gqa'], - minimum=0, maximum=32, - label="LLaMa.cpp Num. Group Query Attention (8 for 70B LLaMa2)", - visible=kwargs['show_llama']) - llamacpp_dict_more = gr.Textbox(value="{}", - lines=4, - label="Dict for other LLaMa.cpp/GPT4All options", - visible=kwargs['show_llama']) - row_gpt4all = gr.Row( - visible=kwargs['show_gpt4all'] and kwargs['base_model'] in ['gptj', - 'gpt4all_llama']) - with row_gpt4all: - model_name_gptj = gr.Textbox(value=kwargs['llamacpp_dict']['model_name_gptj'], - label="Choose GPT4All GPTJ Model Path/URL (for Base Model: gptj)", - visible=kwargs['show_gpt4all']) - model_name_gpt4all_llama = gr.Textbox( - value=kwargs['llamacpp_dict']['model_name_gpt4all_llama'], - label="Choose GPT4All LLaMa Model Path/URL (for Base Model: gpt4all_llama)", - visible=kwargs['show_gpt4all']) - with gr.Column(scale=1, visible=not kwargs['model_lock']): - model_load8bit_checkbox = gr.components.Checkbox( - label="Load 8-bit [requires support]", - value=kwargs['load_8bit'], interactive=not is_public) - model_load4bit_checkbox = gr.components.Checkbox( - label="Load 4-bit [requires support]", - value=kwargs['load_4bit'], interactive=not is_public) - model_low_bit_mode = gr.Slider(value=kwargs['low_bit_mode'], - minimum=0, maximum=4, step=1, - label="low_bit_mode") - model_load_gptq = gr.Textbox(label="gptq", value=kwargs['load_gptq'], - interactive=not is_public) - model_load_exllama_checkbox = gr.components.Checkbox( - label="Load load_exllama [requires support]", - value=kwargs['load_exllama'], interactive=not is_public) - model_safetensors_checkbox = gr.components.Checkbox( - label="Safetensors [requires support]", - value=kwargs['use_safetensors'], interactive=not is_public) - model_revision = gr.Textbox(label="revision", value=kwargs['revision'], - interactive=not is_public) - model_use_gpu_id_checkbox = gr.components.Checkbox( - label="Choose Devices [If not Checked, use all GPUs]", - value=kwargs['use_gpu_id'], interactive=not is_public, - visible=n_gpus != 0) - model_gpu = gr.Dropdown(n_gpus_list, - label="GPU ID [-1 = all GPUs, if Choose is enabled]", - value=kwargs['gpu_id'], interactive=not is_public, - visible=n_gpus != 0) - model_used = gr.Textbox(label="Current Model", value=kwargs['base_model'], - interactive=False) - lora_used = gr.Textbox(label="Current LORA", value=kwargs['lora_weights'], - visible=kwargs['show_lora'], interactive=False) - server_used = gr.Textbox(label="Current Server", - value=kwargs['inference_server'], 
- visible=bool(kwargs['inference_server']) and not is_public, - interactive=False) - prompt_dict = gr.Textbox(label="Prompt (or Custom)", - value=pprint.pformat(kwargs['prompt_dict'], indent=4), - interactive=not is_public, lines=4) - col_model2 = gr.Column(visible=False) - with col_model2: - with gr.Row(): - with gr.Column(scale=20, visible=not kwargs['model_lock']): - load_model_button2 = gr.Button(load_msg2, variant=variant_load_msg, scale=0, - size='sm', interactive=not is_public) - model_choice2 = gr.Dropdown(model_options_state.value[0], label="Choose Model 2", - value=no_model_str) - lora_choice2 = gr.Dropdown(lora_options_state.value[0], label="Choose LORA 2", - value=no_lora_str, - visible=kwargs['show_lora']) - server_choice2 = gr.Dropdown(server_options_state.value[0], label="Choose Server 2", - value=no_server_str, - visible=not is_public) - max_seq_len2 = gr.Number(value=kwargs['max_seq_len'] or 2048, - minimum=128, - maximum=2 ** 18, - info="If standard LLaMa-2, choose up to 4096", - label="max_seq_len Model 2") - rope_scaling2 = gr.Textbox(value=str(kwargs['rope_scaling'] or {}), - label="rope_scaling Model 2") - - row_llama2 = gr.Row( - visible=kwargs['show_llama'] and kwargs['base_model'] == 'llama') - with row_llama2: - model_path_llama2 = gr.Textbox( - value=kwargs['llamacpp_dict']['model_path_llama'], - label="Choose LLaMa.cpp Model 2 Path/URL (for Base Model: llama)", - lines=4, - visible=kwargs['show_llama']) - n_gpu_layers2 = gr.Number(value=kwargs['llamacpp_dict']['n_gpu_layers'], - minimum=0, maximum=100, - label="LLaMa.cpp Num. GPU 2 Layers Offloaded", - visible=kwargs['show_llama']) - n_batch2 = gr.Number(value=kwargs['llamacpp_dict']['n_batch'], - minimum=0, maximum=2048, - label="LLaMa.cpp Model 2 Batch Size", - visible=kwargs['show_llama']) - n_gqa2 = gr.Number(value=kwargs['llamacpp_dict']['n_gqa'], - minimum=0, maximum=32, - label="LLaMa.cpp Model 2 Num. 
Group Query Attention (8 for 70B LLaMa2)", - visible=kwargs['show_llama']) - llamacpp_dict_more2 = gr.Textbox(value="{}", - lines=4, - label="Model 2 Dict for other LLaMa.cpp/GPT4All options", - visible=kwargs['show_llama']) - row_gpt4all2 = gr.Row( - visible=kwargs['show_gpt4all'] and kwargs['base_model'] in ['gptj', - 'gpt4all_llama']) - with row_gpt4all2: - model_name_gptj2 = gr.Textbox(value=kwargs['llamacpp_dict']['model_name_gptj'], - label="Choose GPT4All GPTJ Model 2 Path/URL (for Base Model: gptj)", - visible=kwargs['show_gpt4all']) - model_name_gpt4all_llama2 = gr.Textbox( - value=kwargs['llamacpp_dict']['model_name_gpt4all_llama'], - label="Choose GPT4All LLaMa Model 2 Path/URL (for Base Model: gpt4all_llama)", - visible=kwargs['show_gpt4all']) - - with gr.Column(scale=1, visible=not kwargs['model_lock']): - model_load8bit_checkbox2 = gr.components.Checkbox( - label="Load 8-bit (Model 2) [requires support]", - value=kwargs['load_8bit'], interactive=not is_public) - model_load4bit_checkbox2 = gr.components.Checkbox( - label="Load 4-bit (Model 2) [requires support]", - value=kwargs['load_4bit'], interactive=not is_public) - model_low_bit_mode2 = gr.Slider(value=kwargs['low_bit_mode'], - # ok that same as Model 1 - minimum=0, maximum=4, step=1, - label="low_bit_mode (Model 2)") - model_load_gptq2 = gr.Textbox(label="gptq (Model 2)", value='', - interactive=not is_public) - model_load_exllama_checkbox2 = gr.components.Checkbox( - label="Load load_exllama (Model 2) [requires support]", - value=False, interactive=not is_public) - model_safetensors_checkbox2 = gr.components.Checkbox( - label="Safetensors (Model 2) [requires support]", - value=False, interactive=not is_public) - model_revision2 = gr.Textbox(label="revision (Model 2)", value='', - interactive=not is_public) - model_use_gpu_id_checkbox2 = gr.components.Checkbox( - label="Choose Devices (Model 2) [If not Checked, use all GPUs]", - value=kwargs[ - 'use_gpu_id'], interactive=not is_public) - model_gpu2 = gr.Dropdown(n_gpus_list, - label="GPU ID (Model 2) [-1 = all GPUs, if choose is enabled]", - value=kwargs['gpu_id'], interactive=not is_public) - # no model/lora loaded ever in model2 by default - model_used2 = gr.Textbox(label="Current Model 2", value=no_model_str, - interactive=False) - lora_used2 = gr.Textbox(label="Current LORA (Model 2)", value=no_lora_str, - visible=kwargs['show_lora'], interactive=False) - server_used2 = gr.Textbox(label="Current Server (Model 2)", value=no_server_str, - interactive=False, - visible=not is_public) - prompt_dict2 = gr.Textbox(label="Prompt (or Custom) (Model 2)", - value=pprint.pformat(kwargs['prompt_dict'], indent=4), - interactive=not is_public, lines=4) - compare_checkbox = gr.components.Checkbox(label="Compare Two Models", - value=kwargs['model_lock'], - visible=not is_public and not kwargs['model_lock']) - with gr.Row(visible=not kwargs['model_lock']): - with gr.Column(scale=50): - new_model = gr.Textbox(label="New Model name/path/URL", interactive=not is_public) - with gr.Column(scale=50): - new_lora = gr.Textbox(label="New LORA name/path/URL", visible=kwargs['show_lora'], - interactive=not is_public) - with gr.Column(scale=50): - new_server = gr.Textbox(label="New Server url:port", interactive=not is_public) - with gr.Row(): - add_model_lora_server_button = gr.Button("Add new Model, Lora, Server url:port", scale=0, - variant=variant_load_msg, - size='sm', interactive=not is_public) - system_tab = gr.TabItem("System") \ - if kwargs['visible_system_tab'] else gr.Row(visible=False) 
- with system_tab: - with gr.Row(): - with gr.Column(scale=1): - side_bar_text = gr.Textbox('on' if kwargs['visible_side_bar'] else 'off', - visible=False, interactive=False) - doc_count_text = gr.Textbox('on' if kwargs['visible_doc_track'] else 'off', - visible=False, interactive=False) - submit_buttons_text = gr.Textbox('on' if kwargs['visible_submit_buttons'] else 'off', - visible=False, interactive=False) - visible_models_text = gr.Textbox('on' if kwargs['visible_visible_models'] else 'off', - visible=False, interactive=False) - - side_bar_btn = gr.Button("Toggle SideBar", variant="secondary", size="sm") - doc_count_btn = gr.Button("Toggle SideBar Document Count/Show Newest", variant="secondary", - size="sm") - submit_buttons_btn = gr.Button("Toggle Submit Buttons", variant="secondary", size="sm") - visible_model_btn = gr.Button("Toggle Visible Models", variant="secondary", size="sm") - col_tabs_scale = gr.Slider(minimum=1, maximum=20, value=10, step=1, label='Window Size') - text_outputs_height = gr.Slider(minimum=100, maximum=2000, value=kwargs['height'] or 400, - step=50, label='Chat Height') - dark_mode_btn = gr.Button("Dark Mode", variant="secondary", size="sm") - with gr.Column(scale=4): - pass - system_visible0 = not is_public and not admin_pass - admin_row = gr.Row() - with admin_row: - with gr.Column(scale=1): - admin_pass_textbox = gr.Textbox(label="Admin Password", - type='password', - visible=not system_visible0) - with gr.Column(scale=4): - pass - system_row = gr.Row(visible=system_visible0) - with system_row: - with gr.Column(): - with gr.Row(): - system_btn = gr.Button(value='Get System Info', size='sm') - system_text = gr.Textbox(label='System Info', interactive=False, show_copy_button=True) - with gr.Row(): - system_input = gr.Textbox(label='System Info Dict Password', interactive=True, - visible=not is_public) - system_btn2 = gr.Button(value='Get System Info Dict', visible=not is_public, size='sm') - system_text2 = gr.Textbox(label='System Info Dict', interactive=False, - visible=not is_public, show_copy_button=True) - with gr.Row(): - system_btn3 = gr.Button(value='Get Hash', visible=not is_public, size='sm') - system_text3 = gr.Textbox(label='Hash', interactive=False, - visible=not is_public, show_copy_button=True) - system_btn4 = gr.Button(value='Get Model Names', visible=not is_public, size='sm') - system_text4 = gr.Textbox(label='Model Names', interactive=False, - visible=not is_public, show_copy_button=True) - - with gr.Row(): - zip_btn = gr.Button("Zip", size='sm') - zip_text = gr.Textbox(label="Zip file name", interactive=False) - file_output = gr.File(interactive=False, label="Zip file to Download") - with gr.Row(): - s3up_btn = gr.Button("S3UP", size='sm') - s3up_text = gr.Textbox(label='S3UP result', interactive=False) - - tos_tab = gr.TabItem("Terms of Service") \ - if kwargs['visible_tos_tab'] else gr.Row(visible=False) - with tos_tab: - description = "" - description += """

    DISCLAIMERS:

    • The model was trained on The Pile and other data, which may contain objectionable content. Use at your own risk.
    • """ - if kwargs['load_8bit']: - description += """
    • Model is loaded in 8-bit and has other restrictions on this host. The UX can be worse than the non-hosted version.
    • """ - description += """
    • Conversations may be used to improve h2oGPT. Do not share sensitive information.
    • """ - if 'h2ogpt-research' in kwargs['base_model']: - description += """
    • Research demonstration only, not used for commercial purposes.
    • """ - description += """
    • By using h2oGPT, you accept our Terms of Service

    """ - gr.Markdown(value=description, show_label=False, interactive=False) - - login_tab = gr.TabItem("Login") \ - if kwargs['visible_login_tab'] else gr.Row(visible=False) - with login_tab: - gr.Markdown( - value="#### Login page to persist your state (database, documents, chat, chat history)\nDaily maintenance at midnight PST will not allow reconnection to state otherwise.") - username_text = gr.Textbox(label="Username") - password_text = gr.Textbox(label="Password", type='password', visible=True) - login_msg = "Login (pick unique user/pass to persist your state)" if kwargs[ - 'auth_access'] == 'open' else "Login (closed access)" - login_btn = gr.Button(value=login_msg) - login_result_text = gr.Text(label="Login Result", interactive=False) - h2ogpt_key = gr.Text(value=kwargs['h2ogpt_key'], label="h2oGPT Token for API access", - type='password', visible=False) - - hosts_tab = gr.TabItem("Hosts") \ - if kwargs['visible_hosts_tab'] else gr.Row(visible=False) - with hosts_tab: - gr.Markdown(f""" - {description_bottom} - {task_info_md} - """) - - # Get flagged data - zip_data1 = functools.partial(zip_data, root_dirs=['flagged_data_points', kwargs['save_dir']]) - zip_event = zip_btn.click(zip_data1, inputs=None, outputs=[file_output, zip_text], queue=False, - api_name='zip_data' if allow_api else None) - s3up_event = s3up_btn.click(s3up, inputs=zip_text, outputs=s3up_text, queue=False, - api_name='s3up_data' if allow_api else None) - - def clear_file_list(): - return None - - def set_loaders(max_quality1, - image_loaders_options1=None, - pdf_loaders_options1=None, - url_loaders_options1=None, - image_loaders_options01=None, - pdf_loaders_options01=None, - url_loaders_options01=None, - ): - if not max_quality1: - return image_loaders_options01, pdf_loaders_options01, url_loaders_options01 - else: - return image_loaders_options1, pdf_loaders_options1, url_loaders_options1 - - set_loaders_func = functools.partial(set_loaders, - image_loaders_options1=image_loaders_options, - pdf_loaders_options1=pdf_loaders_options, - url_loaders_options1=url_loaders_options, - image_loaders_options01=image_loaders_options0, - pdf_loaders_options01=pdf_loaders_options0, - url_loaders_options01=url_loaders_options0, - ) - - max_quality.change(fn=set_loaders_func, - inputs=max_quality, - outputs=[image_loaders, pdf_loaders, url_loaders]) - - def get_model_lock_visible_list(visible_models1, all_models): - visible_list = [] - for modeli, model in enumerate(all_models): - if visible_models1 is None or model in visible_models1 or modeli in visible_models1: - visible_list.append(True) - else: - visible_list.append(False) - return visible_list - - def set_visible_models(visible_models1, num_model_lock=0, all_models=None): - if num_model_lock == 0: - num_model_lock = 3 # 2 + 1 (which is dup of first) - ret_list = [gr.update(visible=True)] * num_model_lock - else: - assert isinstance(all_models, list) - assert num_model_lock == len(all_models) - visible_list = [False, False] + get_model_lock_visible_list(visible_models1, all_models) - ret_list = [gr.update(visible=x) for x in visible_list] - return tuple(ret_list) - - visible_models_func = functools.partial(set_visible_models, - num_model_lock=len(text_outputs), - all_models=kwargs['all_models']) - visible_models.change(fn=visible_models_func, - inputs=visible_models, - outputs=[text_output, text_output2] + text_outputs, - ) - - # Add to UserData or custom user db - update_db_func = functools.partial(update_user_db_gr, - dbs=dbs, - db_type=db_type, - 
use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - captions_model=captions_model, - caption_loader=caption_loader, - doctr_loader=doctr_loader, - verbose=kwargs['verbose'], - n_jobs=kwargs['n_jobs'], - get_userid_auth=get_userid_auth, - image_loaders_options0=image_loaders_options0, - pdf_loaders_options0=pdf_loaders_options0, - url_loaders_options0=url_loaders_options0, - jq_schema0=jq_schema0, - enforce_h2ogpt_api_key=kwargs['enforce_h2ogpt_api_key'], - h2ogpt_api_keys=kwargs['h2ogpt_api_keys'], - ) - add_file_outputs = [fileup_output, langchain_mode] - add_file_kwargs = dict(fn=update_db_func, - inputs=[fileup_output, my_db_state, selection_docs_state, requests_state, - langchain_mode, chunk, chunk_size, embed, - image_loaders, - pdf_loaders, - url_loaders, - jq_schema, - h2ogpt_key, - ], - outputs=add_file_outputs + [sources_text, doc_exception_text, text_file_last], - queue=queue, - api_name='add_file' if allow_upload_api else None) - - # then no need for add buttons, only single changeable db - user_state_kwargs = dict(fn=user_state_setup, - inputs=[my_db_state, requests_state, langchain_mode], - outputs=[my_db_state, requests_state, langchain_mode], - show_progress='minimal') - eventdb1a = fileup_output.upload(**user_state_kwargs) - eventdb1 = eventdb1a.then(**add_file_kwargs, show_progress='full') - - event_attach1 = attach_button.upload(**user_state_kwargs) - attach_file_kwargs = add_file_kwargs.copy() - attach_file_kwargs['inputs'][0] = attach_button - attach_file_kwargs['outputs'][0] = attach_button - attach_file_kwargs['api_name'] = 'attach_file' - event_attach2 = event_attach1.then(**attach_file_kwargs, show_progress='full') - - sync1 = sync_sources_btn.click(**user_state_kwargs) - - # deal with challenge to have fileup_output itself as input - add_file_kwargs2 = dict(fn=update_db_func, - inputs=[fileup_output_text, my_db_state, selection_docs_state, requests_state, - langchain_mode, chunk, chunk_size, embed, - image_loaders, - pdf_loaders, - url_loaders, - jq_schema, - h2ogpt_key, - ], - outputs=add_file_outputs + [sources_text, doc_exception_text, text_file_last], - queue=queue, - api_name='add_file_api' if allow_upload_api else None) - eventdb1_api = fileup_output_text.submit(**add_file_kwargs2, show_progress='full') - - # note for update_user_db_func output is ignored for db - - def clear_textbox(): - return gr.Textbox.update(value='') - - update_user_db_url_func = functools.partial(update_db_func, is_url=True) - - add_url_outputs = [url_text, langchain_mode] - add_url_kwargs = dict(fn=update_user_db_url_func, - inputs=[url_text, my_db_state, selection_docs_state, requests_state, - langchain_mode, chunk, chunk_size, embed, - image_loaders, - pdf_loaders, - url_loaders, - jq_schema, - h2ogpt_key, - ], - outputs=add_url_outputs + [sources_text, doc_exception_text, text_file_last], - queue=queue, - api_name='add_url' if allow_upload_api else None) - - eventdb2a = url_text.submit(fn=user_state_setup, - inputs=[my_db_state, requests_state, url_text, url_text], - outputs=[my_db_state, requests_state, url_text], - queue=queue, - show_progress='minimal') - # work around https://github.com/gradio-app/gradio/issues/4733 - eventdb2 = eventdb2a.then(**add_url_kwargs, show_progress='full') - - update_user_db_txt_func = functools.partial(update_db_func, is_txt=True) - add_text_outputs = [user_text_text, langchain_mode] - add_text_kwargs = 
dict(fn=update_user_db_txt_func, - inputs=[user_text_text, my_db_state, selection_docs_state, requests_state, - langchain_mode, chunk, chunk_size, embed, - image_loaders, - pdf_loaders, - url_loaders, - jq_schema, - h2ogpt_key, - ], - outputs=add_text_outputs + [sources_text, doc_exception_text, text_file_last], - queue=queue, - api_name='add_text' if allow_upload_api else None - ) - eventdb3a = user_text_text.submit(fn=user_state_setup, - inputs=[my_db_state, requests_state, user_text_text, user_text_text], - outputs=[my_db_state, requests_state, user_text_text], - queue=queue, - show_progress='minimal') - eventdb3 = eventdb3a.then(**add_text_kwargs, show_progress='full') - - db_events = [eventdb1a, eventdb1, eventdb1_api, - eventdb2a, eventdb2, - eventdb3a, eventdb3] - db_events.extend([event_attach1, event_attach2]) - - get_sources1 = functools.partial(get_sources_gr, dbs=dbs, docs_state0=docs_state0, - load_db_if_exists=load_db_if_exists, - db_type=db_type, - use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - verbose=verbose, - get_userid_auth=get_userid_auth, - n_jobs=n_jobs, - ) - - # if change collection source, must clear doc selections from it to avoid inconsistency - def clear_doc_choice(langchain_mode1): - if langchain_mode1 in langchain_modes_non_db: - label1 = 'Choose Resources->Collections and Pick Collection' - active_collection1 = "#### Not Chatting with Any Collection\n%s" % label1 - else: - label1 = 'Select Subset of Document(s) for Chat with Collection: %s' % langchain_mode1 - active_collection1 = "#### Chatting with Collection: %s" % langchain_mode1 - return gr.Dropdown.update(choices=docs_state0, value=DocumentChoice.ALL.value, - label=label1), gr.Markdown.update(value=active_collection1) - - lg_change_event = langchain_mode.change(clear_doc_choice, inputs=langchain_mode, - outputs=[document_choice, active_collection], - queue=not kwargs['large_file_count_mode']) - - def change_visible_llama(x): - if x == 'llama': - return gr.update(visible=True), \ - gr.update(visible=True), \ - gr.update(visible=False), \ - gr.update(visible=False) - elif x in ['gptj', 'gpt4all_llama']: - return gr.update(visible=False), \ - gr.update(visible=False), \ - gr.update(visible=True), \ - gr.update(visible=True) - else: - return gr.update(visible=False), \ - gr.update(visible=False), \ - gr.update(visible=False), \ - gr.update(visible=False) - - model_choice.change(change_visible_llama, - inputs=model_choice, - outputs=[row_llama, row_llama2, row_gpt4all, row_gpt4all2]) - - def resize_col_tabs(x): - return gr.Dropdown.update(scale=x) - - col_tabs_scale.change(fn=resize_col_tabs, inputs=col_tabs_scale, outputs=col_tabs, queue=False) - - def resize_chatbots(x, num_model_lock=0): - if num_model_lock == 0: - num_model_lock = 3 # 2 + 1 (which is dup of first) - else: - num_model_lock = 2 + num_model_lock - return tuple([gr.update(height=x)] * num_model_lock) - - resize_chatbots_func = functools.partial(resize_chatbots, num_model_lock=len(text_outputs)) - text_outputs_height.change(fn=resize_chatbots_func, inputs=text_outputs_height, - outputs=[text_output, text_output2] + text_outputs, queue=False) - - def update_dropdown(x): - if DocumentChoice.ALL.value in x: - x.remove(DocumentChoice.ALL.value) - source_list = [DocumentChoice.ALL.value] + x - return gr.Dropdown.update(choices=source_list, value=[DocumentChoice.ALL.value]) - - get_sources_kwargs = dict(fn=get_sources1, - 
inputs=[my_db_state, selection_docs_state, requests_state, langchain_mode], - outputs=[file_source, docs_state, text_doc_count], - queue=queue) - - eventdb7a = get_sources_btn.click(user_state_setup, - inputs=[my_db_state, requests_state, get_sources_btn, get_sources_btn], - outputs=[my_db_state, requests_state, get_sources_btn], - show_progress='minimal') - eventdb7 = eventdb7a.then(**get_sources_kwargs, - api_name='get_sources' if allow_api else None) \ - .then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - - get_sources_api_args = dict(fn=functools.partial(get_sources1, api=True), - inputs=[my_db_state, selection_docs_state, requests_state, langchain_mode], - outputs=get_sources_api_text, - queue=queue) - get_sources_api_btn.click(**get_sources_api_args, - api_name='get_sources_api' if allow_api else None) - - # show button, else only show when add. - # Could add to above get_sources for download/dropdown, but bit much maybe - show_sources1 = functools.partial(get_source_files_given_langchain_mode_gr, - dbs=dbs, - load_db_if_exists=load_db_if_exists, - db_type=db_type, - use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - verbose=verbose, - get_userid_auth=get_userid_auth, - n_jobs=n_jobs) - eventdb8a = show_sources_btn.click(user_state_setup, - inputs=[my_db_state, requests_state, show_sources_btn, show_sources_btn], - outputs=[my_db_state, requests_state, show_sources_btn], - show_progress='minimal') - show_sources_kwargs = dict(fn=show_sources1, - inputs=[my_db_state, selection_docs_state, requests_state, langchain_mode], - outputs=sources_text) - eventdb8 = eventdb8a.then(**show_sources_kwargs, - api_name='show_sources' if allow_api else None) - - def update_viewable_dropdown(x): - return gr.Dropdown.update(choices=x, - value=viewable_docs_state0[0] if len(viewable_docs_state0) > 0 else None) - - get_viewable_sources1 = functools.partial(get_sources_gr, dbs=dbs, docs_state0=viewable_docs_state0, - load_db_if_exists=load_db_if_exists, - db_type=db_type, - use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - verbose=kwargs['verbose'], - get_userid_auth=get_userid_auth, - n_jobs=n_jobs) - get_viewable_sources_args = dict(fn=get_viewable_sources1, - inputs=[my_db_state, selection_docs_state, requests_state, langchain_mode], - outputs=[file_source, viewable_docs_state, text_viewable_doc_count], - queue=queue) - eventdb12a = get_viewable_sources_btn.click(user_state_setup, - inputs=[my_db_state, requests_state, - get_viewable_sources_btn, get_viewable_sources_btn], - outputs=[my_db_state, requests_state, get_viewable_sources_btn], - show_progress='minimal') - viewable_kwargs = dict(fn=update_viewable_dropdown, inputs=viewable_docs_state, outputs=view_document_choice) - eventdb12 = eventdb12a.then(**get_viewable_sources_args, - api_name='get_viewable_sources' if allow_api else None) \ - .then(**viewable_kwargs) - - eventdb_viewa = view_document_choice.select(user_state_setup, - inputs=[my_db_state, requests_state, - view_document_choice, view_document_choice], - outputs=[my_db_state, requests_state, view_document_choice], - show_progress='minimal') - show_doc_func = functools.partial(show_doc, - dbs1=dbs, - load_db_if_exists1=load_db_if_exists, - db_type1=db_type, - use_openai_embedding1=use_openai_embedding, - 
hf_embedding_model1=hf_embedding_model, - migrate_embedding_model_or_db1=migrate_embedding_model, - auto_migrate_db1=auto_migrate_db, - verbose1=verbose, - get_userid_auth1=get_userid_auth, - max_raw_chunks=kwargs['max_raw_chunks'], - api=False, - n_jobs=n_jobs, - ) - # Note: Not really useful for API, so no api_name - eventdb_viewa.then(fn=show_doc_func, - inputs=[my_db_state, selection_docs_state, requests_state, langchain_mode, - view_document_choice, view_raw_text_checkbox, - text_context_list], - outputs=[doc_view, doc_view2, doc_view3, doc_view4, doc_view5]) - - show_doc_func_api = functools.partial(show_doc_func, api=True) - get_document_api_btn.click(fn=show_doc_func_api, - inputs=[my_db_state, selection_docs_state, requests_state, langchain_mode, - view_document_choice, view_raw_text_checkbox, - text_context_list], - outputs=get_document_api_text, api_name='get_document_api') - - # Get inputs to evaluate() and make_db() - # don't deepcopy, can contain model itself - all_kwargs = kwargs.copy() - all_kwargs.update(locals()) - - refresh_sources1 = functools.partial(update_and_get_source_files_given_langchain_mode_gr, - captions_model=captions_model, - caption_loader=caption_loader, - doctr_loader=doctr_loader, - dbs=dbs, - first_para=kwargs['first_para'], - hf_embedding_model=hf_embedding_model, - use_openai_embedding=use_openai_embedding, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - text_limit=kwargs['text_limit'], - db_type=db_type, - load_db_if_exists=load_db_if_exists, - n_jobs=n_jobs, verbose=verbose, - get_userid_auth=get_userid_auth, - image_loaders_options0=image_loaders_options0, - pdf_loaders_options0=pdf_loaders_options0, - url_loaders_options0=url_loaders_options0, - jq_schema0=jq_schema0, - ) - eventdb9a = refresh_sources_btn.click(user_state_setup, - inputs=[my_db_state, requests_state, - refresh_sources_btn, refresh_sources_btn], - outputs=[my_db_state, requests_state, refresh_sources_btn], - show_progress='minimal') - eventdb9 = eventdb9a.then(fn=refresh_sources1, - inputs=[my_db_state, selection_docs_state, requests_state, - langchain_mode, chunk, chunk_size, - image_loaders, - pdf_loaders, - url_loaders, - jq_schema, - ], - outputs=sources_text, - api_name='refresh_sources' if allow_api else None) - - delete_sources1 = functools.partial(del_source_files_given_langchain_mode_gr, - dbs=dbs, - load_db_if_exists=load_db_if_exists, - db_type=db_type, - use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - verbose=verbose, - get_userid_auth=get_userid_auth, - n_jobs=n_jobs) - eventdb90a = delete_sources_btn.click(user_state_setup, - inputs=[my_db_state, requests_state, - delete_sources_btn, delete_sources_btn], - outputs=[my_db_state, requests_state, delete_sources_btn], - show_progress='minimal') - eventdb90 = eventdb90a.then(fn=delete_sources1, - inputs=[my_db_state, selection_docs_state, requests_state, document_choice, - langchain_mode], - outputs=sources_text, - api_name='delete_sources' if allow_api else None) - db_events.extend([eventdb90a, eventdb90]) - - def check_admin_pass(x): - return gr.update(visible=x == admin_pass) - - def close_admin(x): - return gr.update(visible=not (x == admin_pass)) - - eventdb_logina = login_btn.click(user_state_setup, - inputs=[my_db_state, requests_state, login_btn, login_btn], - outputs=[my_db_state, requests_state, login_btn], - show_progress='minimal') - - def 
login(db1s, selection_docs_state1, requests_state1, chat_state1, langchain_mode1, - username1, password1, - text_output1, text_output21, *text_outputs1, - auth_filename=None, num_model_lock=0, pre_authorized=False): - # use full auth login to allow new users if open access etc. - if pre_authorized: - username1 = requests_state1['username'] - password1 = None - authorized1 = True - else: - authorized1 = authf(username1, password1, selection_docs_state1=selection_docs_state1) - if authorized1: - set_userid_gr(db1s, requests_state1, get_userid_auth) - username2 = get_username(requests_state1) - text_outputs1 = list(text_outputs1) - - success1, text_result, text_output1, text_output21, text_outputs1, langchain_mode1 = \ - load_auth(db1s, requests_state1, auth_filename, selection_docs_state1=selection_docs_state1, - chat_state1=chat_state1, langchain_mode1=langchain_mode1, - text_output1=text_output1, text_output21=text_output21, text_outputs1=text_outputs1, - username_override=username1, password_to_check=password1) - else: - success1 = False - text_result = "Wrong password for user %s" % username1 - df_langchain_mode_paths1 = get_df_langchain_mode_paths(selection_docs_state1, db1s, dbs1=dbs) - if success1: - requests_state1['username'] = username1 - label_instruction1 = 'Ask anything, %s' % requests_state1['username'] - return db1s, selection_docs_state1, requests_state1, chat_state1, \ - text_result, \ - gr.update(label=label_instruction1), \ - df_langchain_mode_paths1, \ - gr.update(choices=list(chat_state1.keys()), value=None), \ - gr.update(choices=get_langchain_choices(selection_docs_state1), - value=langchain_mode1), \ - text_output1, text_output21, *tuple(text_outputs1) - - login_func = functools.partial(login, - auth_filename=kwargs['auth_filename'], - num_model_lock=len(text_outputs), - pre_authorized=False, - ) - load_login_func = functools.partial(login, - auth_filename=kwargs['auth_filename'], - num_model_lock=len(text_outputs), - pre_authorized=True, - ) - login_inputs = [my_db_state, selection_docs_state, requests_state, chat_state, - langchain_mode, - username_text, password_text, - text_output, text_output2] + text_outputs - login_outputs = [my_db_state, selection_docs_state, requests_state, chat_state, - login_result_text, - instruction, - langchain_mode_path_text, - radio_chats, - langchain_mode, - text_output, text_output2] + text_outputs - eventdb_logina.then(login_func, - inputs=login_inputs, - outputs=login_outputs, - queue=False) - - admin_pass_textbox.submit(check_admin_pass, inputs=admin_pass_textbox, outputs=system_row, queue=False) \ - .then(close_admin, inputs=admin_pass_textbox, outputs=admin_row, queue=False) - - def load_auth(db1s, requests_state1, auth_filename=None, selection_docs_state1=None, - chat_state1=None, langchain_mode1=None, - text_output1=None, text_output21=None, text_outputs1=None, - username_override=None, password_to_check=None): - # in-place assignment - if not auth_filename: - return False, "No auth file", text_output1, text_output21, text_outputs1 - # if first time here, need to set userID - set_userid_gr(db1s, requests_state1, get_userid_auth) - if username_override: - username1 = username_override - else: - username1 = get_username(requests_state1) - success1 = False - with filelock.FileLock(auth_filename + '.lock'): - if os.path.isfile(auth_filename): - with open(auth_filename, 'rt') as f: - auth_dict = json.load(f) - if username1 in auth_dict: - auth_user = auth_dict[username1] - if password_to_check: - if auth_user['password'] != 
password_to_check: - return False, [], [], [], "Invalid password for user %s" % username1 - if username_override: - # then use original user id - set_userid_direct_gr(db1s, auth_dict[username1]['userid'], username1) - if 'selection_docs_state' in auth_user: - update_auth_selection(auth_user, selection_docs_state1) - if 'chat_state' in auth_user: - chat_state1.update(auth_user['chat_state']) - if 'text_output' in auth_user: - text_output1 = auth_user['text_output'] - if 'text_output2' in auth_user: - text_output21 = auth_user['text_output2'] - if 'text_outputs' in auth_user: - text_outputs1 = auth_user['text_outputs'] - if 'langchain_mode' in auth_user: - langchain_mode1 = auth_user['langchain_mode'] - text_result = "Successful login for %s" % username1 - success1 = True - else: - text_result = "No user %s" % username1 - else: - text_result = "No auth file" - return success1, text_result, text_output1, text_output21, text_outputs1, langchain_mode1 - - def save_auth_dict(auth_dict, auth_filename): - backup_file = auth_filename + '.bak' + str(uuid.uuid4()) - if os.path.isfile(auth_filename): - shutil.copy(auth_filename, backup_file) - try: - with open(auth_filename, 'wt') as f: - f.write(json.dumps(auth_dict, indent=2)) - except BaseException as e: - print("Failure to save auth %s, restored backup: %s: %s" % (auth_filename, backup_file, str(e)), - flush=True) - shutil.copy(backup_file, auth_dict) - if os.getenv('HARD_ASSERTS'): - # unexpected in testing or normally - raise - - def save_auth(requests_state1, auth_filename, auth_freeze, - selection_docs_state1=None, chat_state1=None, langchain_mode1=None, - text_output1=None, text_output21=None, text_outputs1=None): - if auth_freeze: - return - if not auth_filename: - return - # save to auth file - username1 = get_username(requests_state1) - with filelock.FileLock(auth_filename + '.lock'): - if os.path.isfile(auth_filename): - with open(auth_filename, 'rt') as f: - auth_dict = json.load(f) - if username1 in auth_dict: - auth_user = auth_dict[username1] - if selection_docs_state1: - update_auth_selection(auth_user, selection_docs_state1, save=True) - if chat_state1: - # overwrite - auth_user['chat_state'] = chat_state1 - if text_output1: - auth_user['text_output'] = text_output1 - if text_output21: - auth_user['text_output2'] = text_output21 - if text_outputs1: - auth_user['text_outputs'] = text_outputs1 - if langchain_mode1: - auth_user['langchain_mode'] = langchain_mode1 - save_auth_dict(auth_dict, auth_filename) - - def add_langchain_mode(db1s, selection_docs_state1, requests_state1, langchain_mode1, y, - auth_filename=None, auth_freeze=None, guest_name=None): - assert auth_filename is not None - assert auth_freeze is not None - - set_userid_gr(db1s, requests_state1, get_userid_auth) - username1 = get_username(requests_state1) - for k in db1s: - set_dbid_gr(db1s[k]) - langchain_modes = selection_docs_state1['langchain_modes'] - langchain_mode_paths = selection_docs_state1['langchain_mode_paths'] - langchain_mode_types = selection_docs_state1['langchain_mode_types'] - - user_path = None - valid = True - y2 = y.strip().replace(' ', '').split(',') - if len(y2) >= 1: - langchain_mode2 = y2[0] - if len(langchain_mode2) >= 3 and langchain_mode2.isalnum(): - # real restriction is: - # ValueError: Expected collection name that (1) contains 3-63 characters, (2) starts and ends with an alphanumeric character, (3) otherwise contains only alphanumeric characters, underscores or hyphens (-), (4) contains no two consecutive periods (..) 
and (5) is not a valid IPv4 address, got me - # but just make simpler - # assume personal if don't have user_path - langchain_mode_type = y2[1] if len(y2) > 1 else LangChainTypes.PERSONAL.value - user_path = y2[2] if len(y2) > 2 else None # assume None if don't have user_path - if user_path in ['', "''"]: - # transcribe UI input - user_path = None - if langchain_mode_type not in [x.value for x in list(LangChainTypes)]: - textbox = "Invalid type %s" % langchain_mode_type - valid = False - langchain_mode2 = langchain_mode1 - elif langchain_mode_type == LangChainTypes.SHARED.value and username1 == guest_name: - textbox = "Guests cannot add shared collections" - valid = False - langchain_mode2 = langchain_mode1 - elif user_path is not None and langchain_mode_type == LangChainTypes.PERSONAL.value: - textbox = "Do not pass user_path for personal/scratch types" - valid = False - langchain_mode2 = langchain_mode1 - elif user_path is not None and username1 == guest_name: - textbox = "Guests cannot add collections with path" - valid = False - langchain_mode2 = langchain_mode1 - elif langchain_mode2 in langchain_modes_intrinsic: - user_path = None - textbox = "Invalid access to use internal name: %s" % langchain_mode2 - valid = False - langchain_mode2 = langchain_mode1 - elif user_path and allow_upload_to_user_data or not user_path and allow_upload_to_my_data: - if user_path: - user_path = makedirs(user_path, exist_ok=True, use_base=True) - langchain_mode_paths.update({langchain_mode2: user_path}) - langchain_mode_types.update({langchain_mode2: langchain_mode_type}) - if langchain_mode2 not in langchain_modes: - langchain_modes.append(langchain_mode2) - textbox = '' - else: - valid = False - langchain_mode2 = langchain_mode1 - textbox = "Invalid access. user allowed: %s " \ - "personal/scratch allowed: %s" % (allow_upload_to_user_data, allow_upload_to_my_data) - else: - valid = False - langchain_mode2 = langchain_mode1 - textbox = "Invalid, collection must be >=3 characters and alphanumeric" - else: - valid = False - langchain_mode2 = langchain_mode1 - textbox = "Invalid, must be like UserData2, user_path2" - selection_docs_state1 = update_langchain_mode_paths(selection_docs_state1) - df_langchain_mode_paths1 = get_df_langchain_mode_paths(selection_docs_state1, db1s, dbs1=dbs) - choices = get_langchain_choices(selection_docs_state1) - - if valid and not user_path: - # needs to have key for it to make it known different from userdata case in _update_user_db() - from src.gpt_langchain import length_db1 - db1s[langchain_mode2] = [None] * length_db1() - if valid: - save_auth(requests_state1, auth_filename, auth_freeze, selection_docs_state1=selection_docs_state1, - langchain_mode1=langchain_mode2) - - return db1s, selection_docs_state1, gr.update(choices=choices, - value=langchain_mode2), textbox, df_langchain_mode_paths1 - - def remove_langchain_mode(db1s, selection_docs_state1, requests_state1, - langchain_mode1, langchain_mode2, dbsu=None, auth_filename=None, auth_freeze=None, - guest_name=None, - purge=False): - assert auth_filename is not None - assert auth_freeze is not None - - set_userid_gr(db1s, requests_state1, get_userid_auth) - for k in db1s: - set_dbid_gr(db1s[k]) - assert dbsu is not None - langchain_modes = selection_docs_state1['langchain_modes'] - langchain_mode_paths = selection_docs_state1['langchain_mode_paths'] - langchain_mode_types = selection_docs_state1['langchain_mode_types'] - langchain_type2 = langchain_mode_types.get(langchain_mode2, LangChainTypes.EITHER.value) - - 
changed_state = False - textbox = "Invalid access, cannot remove %s" % langchain_mode2 - in_scratch_db = langchain_mode2 in db1s - in_user_db = dbsu is not None and langchain_mode2 in dbsu - if in_scratch_db and not allow_upload_to_my_data or \ - in_user_db and not allow_upload_to_user_data or \ - langchain_mode2 in langchain_modes_intrinsic: - can_remove = False - can_purge = False - if langchain_mode2 in langchain_modes_intrinsic: - can_purge = True - else: - can_remove = True - can_purge = True - - # change global variables - if langchain_mode2 in langchain_modes or langchain_mode2 in langchain_mode_paths or langchain_mode2 in db1s: - if can_purge and purge: - # remove source files - from src.gpt_langchain import get_sources, del_from_db - sources_file, source_list, num_chunks, db = \ - get_sources(db1s, selection_docs_state1, - requests_state1, langchain_mode2, dbs=dbsu, - docs_state0=docs_state0, - load_db_if_exists=load_db_if_exists, - db_type=db_type, - use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - verbose=verbose, - get_userid_auth=get_userid_auth, - n_jobs=n_jobs) - del_from_db(db, source_list, db_type=db_type) - for fil in source_list: - if os.path.isfile(fil): - print("Purged %s" % fil, flush=True) - remove(fil) - # remove db directory - from src.gpt_langchain import get_persist_directory - persist_directory, langchain_type2 = \ - get_persist_directory(langchain_mode2, langchain_type=langchain_type2, - db1s=db1s, dbs=dbsu) - print("removed persist_directory %s" % persist_directory, flush=True) - remove(persist_directory) - textbox = "Purged, but did not remove %s" % langchain_mode2 - if can_remove: - if langchain_mode2 in langchain_modes: - langchain_modes.remove(langchain_mode2) - if langchain_mode2 in langchain_mode_paths: - langchain_mode_paths.pop(langchain_mode2) - if langchain_mode2 in langchain_mode_types: - langchain_mode_types.pop(langchain_mode2) - if langchain_mode2 in db1s and langchain_mode2 != LangChainMode.MY_DATA.value: - # don't remove last MyData, used as user hash - db1s.pop(langchain_mode2) - textbox = "" - changed_state = True - else: - textbox = "%s is not visible" % langchain_mode2 - - # update - selection_docs_state1 = update_langchain_mode_paths(selection_docs_state1) - df_langchain_mode_paths1 = get_df_langchain_mode_paths(selection_docs_state1, db1s, dbs1=dbs) - - if changed_state: - save_auth(requests_state1, auth_filename, auth_freeze, selection_docs_state1=selection_docs_state1, - langchain_mode1=langchain_mode2) - - return db1s, selection_docs_state1, \ - gr.update(choices=get_langchain_choices(selection_docs_state1), - value=langchain_mode2), textbox, df_langchain_mode_paths1 - - eventdb20a = new_langchain_mode_text.submit(user_state_setup, - inputs=[my_db_state, requests_state, - new_langchain_mode_text, new_langchain_mode_text], - outputs=[my_db_state, requests_state, new_langchain_mode_text], - show_progress='minimal') - add_langchain_mode_func = functools.partial(add_langchain_mode, - auth_filename=kwargs['auth_filename'], - auth_freeze=kwargs['auth_freeze'], - guest_name=kwargs['guest_name'], - ) - eventdb20b = eventdb20a.then(fn=add_langchain_mode_func, - inputs=[my_db_state, selection_docs_state, requests_state, - langchain_mode, - new_langchain_mode_text], - outputs=[my_db_state, selection_docs_state, langchain_mode, - new_langchain_mode_text, - langchain_mode_path_text], - api_name='new_langchain_mode_text' if 
allow_api and allow_upload_to_user_data else None) - db_events.extend([eventdb20a, eventdb20b]) - - remove_langchain_mode_func = functools.partial(remove_langchain_mode, - dbsu=dbs, - auth_filename=kwargs['auth_filename'], - auth_freeze=kwargs['auth_freeze'], - guest_name=kwargs['guest_name'], - ) - eventdb21a = remove_langchain_mode_text.submit(user_state_setup, - inputs=[my_db_state, - requests_state, - remove_langchain_mode_text, remove_langchain_mode_text], - outputs=[my_db_state, - requests_state, remove_langchain_mode_text], - show_progress='minimal') - remove_langchain_mode_kwargs = dict(fn=remove_langchain_mode_func, - inputs=[my_db_state, selection_docs_state, requests_state, - langchain_mode, - remove_langchain_mode_text], - outputs=[my_db_state, selection_docs_state, langchain_mode, - remove_langchain_mode_text, - langchain_mode_path_text]) - eventdb21b = eventdb21a.then(**remove_langchain_mode_kwargs, - api_name='remove_langchain_mode_text' if allow_api and allow_upload_to_user_data else None) - db_events.extend([eventdb21a, eventdb21b]) - - eventdb22a = purge_langchain_mode_text.submit(user_state_setup, - inputs=[my_db_state, - requests_state, - purge_langchain_mode_text, purge_langchain_mode_text], - outputs=[my_db_state, - requests_state, purge_langchain_mode_text], - show_progress='minimal') - purge_langchain_mode_func = functools.partial(remove_langchain_mode_func, purge=True) - purge_langchain_mode_kwargs = dict(fn=purge_langchain_mode_func, - inputs=[my_db_state, selection_docs_state, requests_state, - langchain_mode, - purge_langchain_mode_text], - outputs=[my_db_state, selection_docs_state, langchain_mode, - purge_langchain_mode_text, - langchain_mode_path_text]) - # purge_langchain_mode_kwargs = remove_langchain_mode_kwargs.copy() - # purge_langchain_mode_kwargs['fn'] = functools.partial(remove_langchain_mode_kwargs['fn'], purge=True) - eventdb22b = eventdb22a.then(**purge_langchain_mode_kwargs, - api_name='purge_langchain_mode_text' if allow_api and allow_upload_to_user_data else None) - db_events.extend([eventdb22a, eventdb22b]) - - def load_langchain_gr(db1s, selection_docs_state1, requests_state1, langchain_mode1, auth_filename=None): - load_auth(db1s, requests_state1, auth_filename, selection_docs_state1=selection_docs_state1) - - selection_docs_state1 = update_langchain_mode_paths(selection_docs_state1) - df_langchain_mode_paths1 = get_df_langchain_mode_paths(selection_docs_state1, db1s, dbs1=dbs) - return selection_docs_state1, \ - gr.update(choices=get_langchain_choices(selection_docs_state1), - value=langchain_mode1), df_langchain_mode_paths1 - - eventdbloadla = load_langchain.click(user_state_setup, - inputs=[my_db_state, requests_state, langchain_mode], - outputs=[my_db_state, requests_state, langchain_mode], - show_progress='minimal') - load_langchain_gr_func = functools.partial(load_langchain_gr, - auth_filename=kwargs['auth_filename']) - eventdbloadlb = eventdbloadla.then(fn=load_langchain_gr_func, - inputs=[my_db_state, selection_docs_state, requests_state, langchain_mode], - outputs=[selection_docs_state, langchain_mode, langchain_mode_path_text], - api_name='load_langchain' if allow_api and allow_upload_to_user_data else None) - - if not kwargs['large_file_count_mode']: - # FIXME: Could add all these functions, inputs, outputs into single function for snappier GUI - # all update events when not doing large file count mode - # Note: Login touches langchain_mode, which triggers all these - lg_change_event2 = lg_change_event.then(**get_sources_kwargs) - 
lg_change_event3 = lg_change_event2.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - lg_change_event4 = lg_change_event3.then(**show_sources_kwargs) - lg_change_event5 = lg_change_event4.then(**get_viewable_sources_args) - lg_change_event6 = lg_change_event5.then(**viewable_kwargs) - - eventdb2c = eventdb2.then(**get_sources_kwargs) - eventdb2d = eventdb2c.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - eventdb2e = eventdb2d.then(**show_sources_kwargs) - eventdb2f = eventdb2e.then(**get_viewable_sources_args) - eventdb2g = eventdb2f.then(**viewable_kwargs) - - eventdb1c = eventdb1.then(**get_sources_kwargs) - eventdb1d = eventdb1c.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - eventdb1e = eventdb1d.then(**show_sources_kwargs) - eventdb1f = eventdb1e.then(**get_viewable_sources_args) - eventdb1g = eventdb1f.then(**viewable_kwargs) - - eventdb3c = eventdb3.then(**get_sources_kwargs) - eventdb3d = eventdb3c.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - eventdb3e = eventdb3d.then(**show_sources_kwargs) - eventdb3f = eventdb3e.then(**get_viewable_sources_args) - eventdb3g = eventdb3f.then(**viewable_kwargs) - - eventdb90ua = eventdb90.then(**get_sources_kwargs) - eventdb90ub = eventdb90ua.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - eventdb90uc = eventdb90ub.then(**show_sources_kwargs) - eventdb90ud = eventdb90uc.then(**get_viewable_sources_args) - eventdb90ue = eventdb90ud.then(**viewable_kwargs) - - eventdb20c = eventdb20b.then(**get_sources_kwargs) - eventdb20d = eventdb20c.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - eventdb20e = eventdb20d.then(**show_sources_kwargs) - eventdb20f = eventdb20e.then(**get_viewable_sources_args) - eventdb20g = eventdb20f.then(**viewable_kwargs) - - eventdb21c = eventdb21b.then(**get_sources_kwargs) - eventdb21d = eventdb21c.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - eventdb21e = eventdb21d.then(**show_sources_kwargs) - eventdb21f = eventdb21e.then(**get_viewable_sources_args) - eventdb21g = eventdb21f.then(**viewable_kwargs) - - eventdb22c = eventdb22b.then(**get_sources_kwargs) - eventdb22d = eventdb22c.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - eventdb22e = eventdb22d.then(**show_sources_kwargs) - eventdb22f = eventdb22e.then(**get_viewable_sources_args) - eventdb22g = eventdb22f.then(**viewable_kwargs) - - event_attach3 = event_attach2.then(**get_sources_kwargs) - event_attach4 = event_attach3.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - event_attach5 = event_attach4.then(**show_sources_kwargs) - event_attach6 = event_attach5.then(**get_viewable_sources_args) - event_attach7 = event_attach6.then(**viewable_kwargs) - - sync2 = sync1.then(**get_sources_kwargs) - sync3 = sync2.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - sync4 = sync3.then(**show_sources_kwargs) - sync5 = sync4.then(**get_viewable_sources_args) - sync6 = sync5.then(**viewable_kwargs) - - eventdb_loginb = eventdb_logina.then(**get_sources_kwargs) - eventdb_loginc = eventdb_loginb.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - eventdb_logind = eventdb_loginc.then(**show_sources_kwargs) - eventdb_logine = eventdb_logind.then(**get_viewable_sources_args) - eventdb_loginf = eventdb_logine.then(**viewable_kwargs) - - db_events.extend([lg_change_event, lg_change_event2, lg_change_event3, lg_change_event4, lg_change_event5, - 
lg_change_event6] + - [eventdb2c, eventdb2d, eventdb2e, eventdb2f, eventdb2g] + - [eventdb1c, eventdb1d, eventdb1e, eventdb1f, eventdb1g] + - [eventdb3c, eventdb3d, eventdb3e, eventdb3f, eventdb3g] + - [eventdb90ua, eventdb90ub, eventdb90uc, eventdb90ud, eventdb90ue] + - [eventdb20c, eventdb20d, eventdb20e, eventdb20f, eventdb20g] + - [eventdb21c, eventdb21d, eventdb21e, eventdb21f, eventdb21g] + - [eventdb22c, eventdb22d, eventdb22e, eventdb22f, eventdb22g] + - [event_attach3, event_attach4, event_attach5, event_attach6, event_attach7] + - [sync1, sync2, sync3, sync4, sync5, sync6] + - [eventdb_logina, eventdb_loginb, eventdb_loginc, eventdb_logind, eventdb_logine, - eventdb_loginf] - , - ) - - inputs_list, inputs_dict = get_inputs_list(all_kwargs, kwargs['model_lower'], model_id=1) - inputs_list2, inputs_dict2 = get_inputs_list(all_kwargs, kwargs['model_lower'], model_id=2) - from functools import partial - kwargs_evaluate = {k: v for k, v in all_kwargs.items() if k in inputs_kwargs_list} - # ensure present - for k in inputs_kwargs_list: - assert k in kwargs_evaluate, "Missing %s" % k - - def evaluate_nochat(*args1, default_kwargs1=None, str_api=False, plain_api=False, **kwargs1): - args_list = list(args1) - if str_api: - if plain_api: - # i.e. not fresh model, tells evaluate to use model_state0 - args_list.insert(0, kwargs['model_state_none'].copy()) - args_list.insert(1, my_db_state0.copy()) - args_list.insert(2, selection_docs_state0.copy()) - args_list.insert(3, requests_state0.copy()) - user_kwargs = args_list[len(input_args_list)] - assert isinstance(user_kwargs, str) - user_kwargs = ast.literal_eval(user_kwargs) - else: - assert not plain_api - user_kwargs = {k: v for k, v in zip(eval_func_param_names, args_list[len(input_args_list):])} - # control kwargs1 for evaluate - kwargs1['answer_with_sources'] = -1 # just text chunk, not URL etc. 
- kwargs1['show_accordions'] = False - kwargs1['append_sources_to_answer'] = False - kwargs1['show_link_in_sources'] = False - kwargs1['top_k_docs_max_show'] = 30 - - # only used for submit_nochat_api - user_kwargs['chat'] = False - if 'stream_output' not in user_kwargs: - user_kwargs['stream_output'] = False - if plain_api: - user_kwargs['stream_output'] = False - if 'langchain_mode' not in user_kwargs: - # if user doesn't specify, then assume disabled, not use default - if LangChainMode.LLM.value in kwargs['langchain_modes']: - user_kwargs['langchain_mode'] = LangChainMode.LLM.value - elif len(kwargs['langchain_modes']) >= 1: - user_kwargs['langchain_mode'] = kwargs['langchain_modes'][0] - else: - # disabled should always be allowed - user_kwargs['langchain_mode'] = LangChainMode.DISABLED.value - if 'langchain_action' not in user_kwargs: - user_kwargs['langchain_action'] = LangChainAction.QUERY.value - if 'langchain_agents' not in user_kwargs: - user_kwargs['langchain_agents'] = [] - # be flexible - if 'instruction' in user_kwargs and 'instruction_nochat' not in user_kwargs: - user_kwargs['instruction_nochat'] = user_kwargs['instruction'] - if 'iinput' in user_kwargs and 'iinput_nochat' not in user_kwargs: - user_kwargs['iinput_nochat'] = user_kwargs['iinput'] - if 'visible_models' not in user_kwargs: - if kwargs['visible_models']: - if isinstance(kwargs['visible_models'], int): - user_kwargs['visible_models'] = [kwargs['visible_models']] - elif isinstance(kwargs['visible_models'], list): - # only take first one - user_kwargs['visible_models'] = [kwargs['visible_models'][0]] - else: - user_kwargs['visible_models'] = [0] - else: - # if no user version or default version, then just take first - user_kwargs['visible_models'] = [0] - - if 'h2ogpt_key' not in user_kwargs: - user_kwargs['h2ogpt_key'] = None - if 'system_prompt' in user_kwargs and user_kwargs['system_prompt'] is None: - # avoid worrying about below default_kwargs -> args_list that checks if None - user_kwargs['system_prompt'] = 'None' - - set1 = set(list(default_kwargs1.keys())) - set2 = set(eval_func_param_names) - assert set1 == set2, "Set diff: %s %s: %s" % (set1, set2, set1.symmetric_difference(set2)) - # correct ordering. 
Note some things may not be in default_kwargs, so can't be default of user_kwargs.get() - model_state1 = args_list[0] - my_db_state1 = args_list[1] - selection_docs_state1 = args_list[2] - requests_state1 = args_list[3] - args_list = [user_kwargs[k] if k in user_kwargs and user_kwargs[k] is not None else default_kwargs1[k] for k - in eval_func_param_names] - assert len(args_list) == len(eval_func_param_names) - stream_output1 = args_list[eval_func_param_names.index('stream_output')] - if len(model_states) > 1: - visible_models1 = args_list[eval_func_param_names.index('visible_models')] - model_active_choice1 = visible_models_to_model_choice(visible_models1) - model_state1 = model_states[model_active_choice1 % len(model_states)] - for key in key_overrides: - if user_kwargs.get(key) is None and model_state1.get(key) is not None: - args_list[eval_func_param_names.index(key)] = model_state1[key] - if hasattr(model_state1['tokenizer'], 'model_max_length'): - # ensure listen to limit, with some buffer - # buffer = 50 - buffer = 0 - args_list[eval_func_param_names.index('max_new_tokens')] = min( - args_list[eval_func_param_names.index('max_new_tokens')], - model_state1['tokenizer'].model_max_length - buffer) - - # override overall visible_models and h2ogpt_key if have model_specific one - # NOTE: only applicable if len(model_states) > 1 at moment - # else controlled by evaluate() - if 'visible_models' in model_state1 and model_state1['visible_models'] is not None: - assert isinstance(model_state1['visible_models'], int) - args_list[eval_func_param_names.index('visible_models')] = model_state1['visible_models'] - if 'h2ogpt_key' in model_state1 and model_state1['h2ogpt_key'] is not None: - # remote server key if present - # i.e. may be '' and used to override overall local key - assert isinstance(model_state1['h2ogpt_key'], str) - args_list[eval_func_param_names.index('h2ogpt_key')] = model_state1['h2ogpt_key'] - - # local key, not for remote server unless same, will be passed through - h2ogpt_key1 = args_list[eval_func_param_names.index('h2ogpt_key')] - - # final full evaluate args list - args_list = [model_state1, my_db_state1, selection_docs_state1, requests_state1] + args_list - - # NOTE: Don't allow UI-like access, in case modify state via API - valid_key = is_valid_key(kwargs['enforce_h2ogpt_api_key'], kwargs['h2ogpt_api_keys'], h2ogpt_key1, - requests_state1=None) - evaluate_local = evaluate if valid_key else evaluate_fake - - save_dict = dict() - ret = {} - try: - for res_dict in evaluate_local(*tuple(args_list), **kwargs1): - error = res_dict.get('error', '') - extra = res_dict.get('extra', '') - save_dict = res_dict.get('save_dict', {}) - - # update save_dict - save_dict['error'] = error - save_dict['extra'] = extra - save_dict['valid_key'] = valid_key - save_dict['h2ogpt_key'] = h2ogpt_key1 - if str_api and plain_api: - save_dict['which_api'] = 'str_plain_api' - elif str_api: - save_dict['which_api'] = 'str_api' - elif plain_api: - save_dict['which_api'] = 'plain_api' - else: - save_dict['which_api'] = 'nochat_api' - if 'extra_dict' not in save_dict: - save_dict['extra_dict'] = {} - if requests_state1: - save_dict['extra_dict'].update(requests_state1) - else: - save_dict['extra_dict'].update(dict(username='NO_REQUEST')) - - if is_public: - # don't want to share actual endpoints - if 'save_dict' in res_dict and isinstance(res_dict['save_dict'], dict): - res_dict['save_dict'].pop('inference_server', None) - if 'extra_dict' in res_dict['save_dict'] and 
isinstance(res_dict['save_dict']['extra_dict'], - dict): - res_dict['save_dict']['extra_dict'].pop('inference_server', None) - - # get response - if str_api: - # full return of dict - ret = res_dict - elif kwargs['langchain_mode'] == 'Disabled': - ret = fix_text_for_gradio(res_dict['response']) - else: - ret = '
    ' + fix_text_for_gradio(res_dict['response']) - if stream_output1: - # yield as it goes, else need to wait since predict only returns first yield - yield ret - finally: - clear_torch_cache() - clear_embeddings(user_kwargs['langchain_mode'], my_db_state1) - save_generate_output(**save_dict) - if not stream_output1: - # return back last ret - yield ret - - kwargs_evaluate_nochat = kwargs_evaluate.copy() - # nominally never want sources appended for API calls, which is what nochat used for primarily - kwargs_evaluate_nochat.update(dict(append_sources_to_answer=False)) - fun = partial(evaluate_nochat, - default_kwargs1=default_kwargs, - str_api=False, - **kwargs_evaluate_nochat) - fun_with_dict_str = partial(evaluate_nochat, - default_kwargs1=default_kwargs, - str_api=True, - **kwargs_evaluate_nochat - ) - - fun_with_dict_str_plain = partial(evaluate_nochat, - default_kwargs1=default_kwargs, - str_api=True, - plain_api=True, - **kwargs_evaluate_nochat - ) - - dark_mode_btn.click( - None, - None, - None, - _js=wrap_js_to_lambda(0, get_dark_js()), - api_name="dark" if allow_api else None, - queue=False, - ) - - # Handle uploads from API - upload_api_btn = gr.UploadButton("Upload File Results", visible=False) - file_upload_api = gr.File(visible=False) - file_upload_text = gr.Textbox(visible=False) - - def upload_file(files): - if isinstance(files, list): - file_paths = [file.name for file in files] - else: - file_paths = files.name - return file_paths, file_paths - - upload_api_btn.upload(fn=upload_file, - inputs=upload_api_btn, - outputs=[file_upload_api, file_upload_text], - api_name='upload_api' if allow_upload_api else None) - - def visible_toggle(x): - x = 'off' if x == 'on' else 'on' - return x, gr.Column.update(visible=True if x == 'on' else False) - - side_bar_btn.click(fn=visible_toggle, - inputs=side_bar_text, - outputs=[side_bar_text, side_bar], - queue=False) - - doc_count_btn.click(fn=visible_toggle, - inputs=doc_count_text, - outputs=[doc_count_text, row_doc_track], - queue=False) - - submit_buttons_btn.click(fn=visible_toggle, - inputs=submit_buttons_text, - outputs=[submit_buttons_text, submit_buttons], - queue=False) - - visible_model_btn.click(fn=visible_toggle, - inputs=visible_models_text, - outputs=[visible_models_text, visible_models], - queue=False) - - # examples after submit or any other buttons for chat or no chat - if kwargs['examples'] is not None and kwargs['show_examples']: - gr.Examples(examples=kwargs['examples'], inputs=inputs_list) - - # Score - def score_last_response(*args, nochat=False, num_model_lock=0): - try: - if num_model_lock > 0: - # then lock way - args_list = list(args).copy() - outputs = args_list[-num_model_lock:] - score_texts1 = [] - for output in outputs: - # same input, put into form good for _score_last_response() - args_list[-1] = output - score_texts1.append( - _score_last_response(*tuple(args_list), nochat=nochat, - num_model_lock=num_model_lock, prefix='')) - if len(score_texts1) > 1: - return "Response Scores: %s" % ' '.join(score_texts1) - else: - return "Response Scores: %s" % score_texts1[0] - else: - return _score_last_response(*args, nochat=nochat, num_model_lock=num_model_lock) - finally: - clear_torch_cache() - - def _score_last_response(*args, nochat=False, num_model_lock=0, prefix='Response Score: '): - """ Similar to user() """ - args_list = list(args) - smodel = score_model_state0['model'] - stokenizer = score_model_state0['tokenizer'] - sdevice = score_model_state0['device'] - - if memory_restriction_level > 0: - 
max_length_tokenize = 768 - 256 if memory_restriction_level <= 2 else 512 - 256 - elif hasattr(stokenizer, 'model_max_length'): - max_length_tokenize = stokenizer.model_max_length - else: - # limit to 1024, not worth OOMing on reward score - max_length_tokenize = 2048 - 1024 - cutoff_len = max_length_tokenize * 4 # restrict deberta related to max for LLM - - if not nochat: - history = args_list[-1] - if history is None: - history = [] - if smodel is not None and \ - stokenizer is not None and \ - sdevice is not None and \ - history is not None and len(history) > 0 and \ - history[-1] is not None and \ - len(history[-1]) >= 2: - os.environ['TOKENIZERS_PARALLELISM'] = 'false' - - question = history[-1][0] - - answer = history[-1][1] - else: - return '%sNA' % prefix - else: - answer = args_list[-1] - instruction_nochat_arg_id = eval_func_param_names.index('instruction_nochat') - question = args_list[instruction_nochat_arg_id] - - if question is None: - return '%sBad Question' % prefix - if answer is None: - return '%sBad Answer' % prefix - try: - score = score_qa(smodel, stokenizer, max_length_tokenize, question, answer, cutoff_len) - finally: - clear_torch_cache() - if isinstance(score, str): - return '%sNA' % prefix - return '{}{:.1%}'.format(prefix, score) - - def noop_score_last_response(*args, **kwargs): - return "Response Score: Disabled" - - if kwargs['score_model']: - score_fun = score_last_response - else: - score_fun = noop_score_last_response - - score_args = dict(fn=score_fun, - inputs=inputs_list + [text_output], - outputs=[score_text], - ) - score_args2 = dict(fn=partial(score_fun), - inputs=inputs_list2 + [text_output2], - outputs=[score_text2], - ) - score_fun_func = functools.partial(score_fun, num_model_lock=len(text_outputs)) - all_score_args = dict(fn=score_fun_func, - inputs=inputs_list + text_outputs, - outputs=score_text, - ) - - score_args_nochat = dict(fn=partial(score_fun, nochat=True), - inputs=inputs_list + [text_output_nochat], - outputs=[score_text_nochat], - ) - - def update_history(*args, undo=False, retry=False, sanitize_user_prompt=False): - """ - User that fills history for bot - :param args: - :param undo: - :param retry: - :param sanitize_user_prompt: - :return: - """ - args_list = list(args) - user_message = args_list[eval_func_param_names.index('instruction')] # chat only - input1 = args_list[eval_func_param_names.index('iinput')] # chat only - prompt_type1 = args_list[eval_func_param_names.index('prompt_type')] - langchain_mode1 = args_list[eval_func_param_names.index('langchain_mode')] - langchain_action1 = args_list[eval_func_param_names.index('langchain_action')] - langchain_agents1 = args_list[eval_func_param_names.index('langchain_agents')] - document_subset1 = args_list[eval_func_param_names.index('document_subset')] - document_choice1 = args_list[eval_func_param_names.index('document_choice')] - if not prompt_type1: - # shouldn't have to specify if CLI launched model - prompt_type1 = kwargs['prompt_type'] - # apply back - args_list[eval_func_param_names.index('prompt_type')] = prompt_type1 - if input1 and not user_message.endswith(':'): - user_message1 = user_message + ":" + input1 - elif input1: - user_message1 = user_message + input1 - else: - user_message1 = user_message - if sanitize_user_prompt: - pass - # requirements.txt has comment that need to re-enable the below 2 lines - # from better_profanity import profanity - # user_message1 = profanity.censor(user_message1) - - history = args_list[-1] - if history is None: - # bad history - 
history = [] - history = history.copy() - - if undo: - if len(history) > 0: - history.pop() - return history - if retry: - if history: - history[-1][1] = None - return history - if user_message1 in ['', None, '\n']: - if not allow_empty_instruction(langchain_mode1, document_subset1, langchain_action1): - # reject non-retry submit/enter - return history - user_message1 = fix_text_for_gradio(user_message1) - return history + [[user_message1, None]] - - def user(*args, undo=False, retry=False, sanitize_user_prompt=False): - return update_history(*args, undo=undo, retry=retry, sanitize_user_prompt=sanitize_user_prompt) - - def all_user(*args, undo=False, retry=False, sanitize_user_prompt=False, num_model_lock=0, all_models=None): - args_list = list(args) - - visible_models1 = args_list[eval_func_param_names.index('visible_models')] - assert isinstance(all_models, list) - visible_list = get_model_lock_visible_list(visible_models1, all_models) - - history_list = args_list[-num_model_lock:] - assert len(all_models) == len(history_list) - assert len(history_list) > 0, "Bad history list: %s" % history_list - for hi, history in enumerate(history_list): - if not visible_list[hi]: - continue - if num_model_lock > 0: - hargs = args_list[:-num_model_lock].copy() - else: - hargs = args_list.copy() - hargs += [history] - history_list[hi] = update_history(*hargs, undo=undo, retry=retry, - sanitize_user_prompt=sanitize_user_prompt) - if len(history_list) > 1: - return tuple(history_list) - else: - return history_list[0] - - def get_model_max_length(model_state1): - if model_state1 and not isinstance(model_state1["tokenizer"], str): - tokenizer = model_state1["tokenizer"] - elif model_state0 and not isinstance(model_state0["tokenizer"], str): - tokenizer = model_state0["tokenizer"] - else: - tokenizer = None - if tokenizer is not None: - return tokenizer.model_max_length - else: - return 2000 - - def prep_bot(*args, retry=False, which_model=0): - """ - - :param args: - :param retry: - :param which_model: identifies which model if doing model_lock - API only called for which_model=0, default for inputs_list, but rest should ignore inputs_list - :return: last element is True if should run bot, False if should just yield history - """ - isize = len(input_args_list) + 1 # states + chat history - # don't deepcopy, can contain model itself - args_list = list(args).copy() - model_state1 = args_list[-isize] - my_db_state1 = args_list[-isize + 1] - selection_docs_state1 = args_list[-isize + 2] - requests_state1 = args_list[-isize + 3] - history = args_list[-1] - if not history: - history = [] - prompt_type1 = args_list[eval_func_param_names.index('prompt_type')] - prompt_dict1 = args_list[eval_func_param_names.index('prompt_dict')] - langchain_mode1 = args_list[eval_func_param_names.index('langchain_mode')] - langchain_action1 = args_list[eval_func_param_names.index('langchain_action')] - document_subset1 = args_list[eval_func_param_names.index('document_subset')] - h2ogpt_key1 = args_list[eval_func_param_names.index('h2ogpt_key')] - chat_conversation1 = args_list[eval_func_param_names.index('chat_conversation')] - valid_key = is_valid_key(kwargs['enforce_h2ogpt_api_key'], kwargs['h2ogpt_api_keys'], h2ogpt_key1, - requests_state1=requests_state1) - - dummy_return = history, None, langchain_mode1, my_db_state1, requests_state1, valid_key, h2ogpt_key1 - - if model_state1['model'] is None or model_state1['model'] == no_model_str: - return dummy_return - - args_list = args_list[:-isize] # only keep rest needed for 
evaluate() - if not history: - print("No history", flush=True) - return dummy_return - instruction1 = history[-1][0] - if retry and history: - # if retry, pop history and move onto bot stuff - instruction1 = history[-1][0] - history[-1][1] = None - elif not instruction1: - if not allow_empty_instruction(langchain_mode1, document_subset1, langchain_action1): - # if not retrying, then reject empty query - return dummy_return - elif len(history) > 0 and history[-1][1] not in [None, '']: - # reject submit button if already filled and not retrying - # None when not filling with '' to keep client happy - return dummy_return - - evaluate_local = evaluate if valid_key else evaluate_fake - - # shouldn't have to specify in API prompt_type if CLI launched model, so prefer global CLI one if have it - prompt_type1, prompt_dict1 = update_prompt(prompt_type1, prompt_dict1, model_state1, - which_model=which_model) - # apply back to args_list for evaluate() - args_list[eval_func_param_names.index('prompt_type')] = prompt_type1 - args_list[eval_func_param_names.index('prompt_dict')] = prompt_dict1 - context1 = args_list[eval_func_param_names.index('context')] - - chat_conversation1 = merge_chat_conversation_history(chat_conversation1, history) - args_list[eval_func_param_names.index('chat_conversation')] = chat_conversation1 - - if 'visible_models' in model_state1 and model_state1['visible_models'] is not None: - assert isinstance(model_state1['visible_models'], int) - args_list[eval_func_param_names.index('visible_models')] = model_state1['visible_models'] - if 'h2ogpt_key' in model_state1 and model_state1['h2ogpt_key'] is not None: - # i.e. may be '' and used to override overall local key - assert isinstance(model_state1['h2ogpt_key'], str) - args_list[eval_func_param_names.index('h2ogpt_key')] = model_state1['h2ogpt_key'] - - args_list[0] = instruction1 # override original instruction with history from user - args_list[2] = context1 - - fun1 = partial(evaluate_local, - model_state1, - my_db_state1, - selection_docs_state1, - requests_state1, - *tuple(args_list), - **kwargs_evaluate) - - return history, fun1, langchain_mode1, my_db_state1, requests_state1, valid_key, h2ogpt_key1 - - def gen1_fake(fun1, history): - error = '' - extra = '' - save_dict = dict() - yield history, error, extra, save_dict - return - - def get_response(fun1, history): - """ - bot that consumes history for user input - instruction (from input_list) itself is not consumed by bot - :return: - """ - error = '' - extra = '' - save_dict = dict() - if not fun1: - yield history, error, extra, save_dict - return - try: - for output_fun in fun1(): - output = output_fun['response'] - extra = output_fun['sources'] # FIXME: can show sources in separate text box etc. 
- save_dict = output_fun.get('save_dict', {}) - # ensure good visually, else markdown ignores multiple \n - bot_message = fix_text_for_gradio(output) - history[-1][1] = bot_message - yield history, error, extra, save_dict - except StopIteration: - yield history, error, extra, save_dict - except RuntimeError as e: - if "generator raised StopIteration" in str(e): - # assume last entry was bad, undo - history.pop() - yield history, error, extra, save_dict - else: - if history and len(history) > 0 and len(history[0]) > 1 and history[-1][1] is None: - history[-1][1] = '' - yield history, str(e), extra, save_dict - raise - except Exception as e: - # put error into user input - ex = "Exception: %s" % str(e) - if history and len(history) > 0 and len(history[0]) > 1 and history[-1][1] is None: - history[-1][1] = '' - yield history, ex, extra, save_dict - raise - finally: - # clear_torch_cache() - # don't clear torch cache here, too early and stalls generation if used for all_bot() - pass - return - - def clear_embeddings(langchain_mode1, db1s): - # clear any use of embedding that sits on GPU, else keeps accumulating GPU usage even if clear torch cache - if db_type in ['chroma', 'chroma_old'] and langchain_mode1 not in ['LLM', 'Disabled', None, '']: - from gpt_langchain import clear_embedding, length_db1 - db = dbs.get('langchain_mode1') - if db is not None and not isinstance(db, str): - clear_embedding(db) - if db1s is not None and langchain_mode1 in db1s: - db1 = db1s[langchain_mode1] - if len(db1) == length_db1(): - clear_embedding(db1[0]) - - def bot(*args, retry=False): - history, fun1, langchain_mode1, db1, requests_state1, valid_key, h2ogpt_key1 = prep_bot(*args, retry=retry) - save_dict = dict() - error = '' - extra = '' - try: - for res in get_response(fun1, history): - history, error, extra, save_dict = res - # pass back to gradio only these, rest are consumed in this function - yield history, error - finally: - clear_torch_cache() - clear_embeddings(langchain_mode1, db1) - if 'extra_dict' not in save_dict: - save_dict['extra_dict'] = {} - save_dict['valid_key'] = valid_key - save_dict['h2ogpt_key'] = h2ogpt_key1 - if requests_state1: - save_dict['extra_dict'].update(requests_state1) - else: - save_dict['extra_dict'].update(dict(username='NO_REQUEST')) - save_dict['error'] = error - save_dict['extra'] = extra - save_dict['which_api'] = 'bot' - save_generate_output(**save_dict) - - def all_bot(*args, retry=False, model_states1=None, all_models=None): - args_list = list(args).copy() - chatbots = args_list[-len(model_states1):] - args_list0 = args_list[:-len(model_states1)] # same for all models - exceptions = [] - stream_output1 = args_list[eval_func_param_names.index('stream_output')] - max_time1 = args_list[eval_func_param_names.index('max_time')] - langchain_mode1 = args_list[eval_func_param_names.index('langchain_mode')] - - visible_models1 = args_list[eval_func_param_names.index('visible_models')] - assert isinstance(all_models, list) - assert len(all_models) == len(model_states1) - visible_list = get_model_lock_visible_list(visible_models1, all_models) - - isize = len(input_args_list) + 1 # states + chat history - db1s = None - requests_state1 = None - valid_key = False - h2ogpt_key1 = '' - extras = [] - exceptions = [] - save_dicts = [] - try: - gen_list = [] - for chatboti, (chatbot1, model_state1) in enumerate(zip(chatbots, model_states1)): - args_list1 = args_list0.copy() - args_list1.insert(-isize + 2, - model_state1) # insert at -2 so is at -3, and after chatbot1 added, at -4 - # 
if at start, have None in response still, replace with '' so client etc. acts like normal - # assumes other parts of code treat '' and None as if no response yet from bot - # can't do this later in bot code as racy with threaded generators - if len(chatbot1) > 0 and len(chatbot1[-1]) == 2 and chatbot1[-1][1] is None: - chatbot1[-1][1] = '' - args_list1.append(chatbot1) - # so consistent with prep_bot() - # with model_state1 at -3, my_db_state1 at -2, and history(chatbot) at -1 - # langchain_mode1 and my_db_state1 and requests_state1 should be same for every bot - history, fun1, langchain_mode1, db1s, requests_state1, valid_key, h2ogpt_key1, = \ - prep_bot(*tuple(args_list1), retry=retry, - which_model=chatboti) - if visible_list[chatboti]: - gen1 = get_response(fun1, history) - if stream_output1: - gen1 = TimeoutIterator(gen1, timeout=0.01, sentinel=None, raise_on_exception=False) - # else timeout will truncate output for non-streaming case - else: - gen1 = gen1_fake(fun1, history) - gen_list.append(gen1) - - bots_old = chatbots.copy() - exceptions_old = [''] * len(bots_old) - extras_old = [''] * len(bots_old) - save_dicts_old = [{}] * len(bots_old) - tgen0 = time.time() - for res1 in itertools.zip_longest(*gen_list): - if time.time() - tgen0 > max_time1: - print("Took too long: %s" % max_time1, flush=True) - break - - bots = [x[0] if x is not None and not isinstance(x, BaseException) else y - for x, y in zip(res1, bots_old)] - bots_old = bots.copy() - - def larger_str(x, y): - return x if len(x) > len(y) else y - - exceptions = [x[1] if x is not None and not isinstance(x, BaseException) else larger_str(str(x), y) - for x, y in zip(res1, exceptions_old)] - exceptions_old = exceptions.copy() - - extras = [x[2] if x is not None and not isinstance(x, BaseException) else y - for x, y in zip(res1, extras_old)] - extras_old = extras.copy() - - save_dicts = [x[3] if x is not None and not isinstance(x, BaseException) else y - for x, y in zip(res1, save_dicts_old)] - save_dicts_old = save_dicts.copy() - - def choose_exc(x): - # don't expose ports etc. 
to exceptions window - if is_public: - return "Endpoint unavailable or failed" - else: - return x - - exceptions_str = '\n'.join( - ['Model %s: %s' % (iix, choose_exc(x)) for iix, x in enumerate(exceptions) if - x not in [None, '', 'None']]) - # yield back to gradio only is bots + exceptions, rest are consumed locally - if len(bots) > 1: - yield tuple(bots + [exceptions_str]) - else: - yield bots[0], exceptions_str - if exceptions: - exceptions_reduced = [x for x in exceptions if x not in ['', None, 'None']] - if exceptions_reduced: - print("Generate exceptions: %s" % exceptions_reduced, flush=True) - finally: - clear_torch_cache() - clear_embeddings(langchain_mode1, db1s) - for extra, error, save_dict, model_name in zip(extras, exceptions, save_dicts, all_models): - if 'extra_dict' not in save_dict: - save_dict['extra_dict'] = {} - if requests_state1: - save_dict['extra_dict'].update(requests_state1) - else: - save_dict['extra_dict'].update(dict(username='NO_REQUEST')) - save_dict['error'] = error - save_dict['extra'] = extra - save_dict['which_api'] = 'all_bot_%s' % model_name - save_dict['valid_key'] = valid_key - save_dict['h2ogpt_key'] = h2ogpt_key1 - save_generate_output(**save_dict) - - # NORMAL MODEL - user_args = dict(fn=functools.partial(user, sanitize_user_prompt=kwargs['sanitize_user_prompt']), - inputs=inputs_list + [text_output], - outputs=text_output, - ) - bot_args = dict(fn=bot, - inputs=inputs_list + [model_state, my_db_state, selection_docs_state, requests_state] + [ - text_output], - outputs=[text_output, chat_exception_text], - ) - retry_bot_args = dict(fn=functools.partial(bot, retry=True), - inputs=inputs_list + [model_state, my_db_state, selection_docs_state, requests_state] + [ - text_output], - outputs=[text_output, chat_exception_text], - ) - retry_user_args = dict(fn=functools.partial(user, retry=True), - inputs=inputs_list + [text_output], - outputs=text_output, - ) - undo_user_args = dict(fn=functools.partial(user, undo=True), - inputs=inputs_list + [text_output], - outputs=text_output, - ) - - # MODEL2 - user_args2 = dict(fn=functools.partial(user, sanitize_user_prompt=kwargs['sanitize_user_prompt']), - inputs=inputs_list2 + [text_output2], - outputs=text_output2, - ) - bot_args2 = dict(fn=bot, - inputs=inputs_list2 + [model_state2, my_db_state, selection_docs_state, requests_state] + [ - text_output2], - outputs=[text_output2, chat_exception_text], - ) - retry_bot_args2 = dict(fn=functools.partial(bot, retry=True), - inputs=inputs_list2 + [model_state2, my_db_state, selection_docs_state, - requests_state] + [text_output2], - outputs=[text_output2, chat_exception_text], - ) - retry_user_args2 = dict(fn=functools.partial(user, retry=True), - inputs=inputs_list2 + [text_output2], - outputs=text_output2, - ) - undo_user_args2 = dict(fn=functools.partial(user, undo=True), - inputs=inputs_list2 + [text_output2], - outputs=text_output2, - ) - - # MODEL N - all_user_args = dict(fn=functools.partial(all_user, - sanitize_user_prompt=kwargs['sanitize_user_prompt'], - num_model_lock=len(text_outputs), - all_models=kwargs['all_models'] - ), - inputs=inputs_list + text_outputs, - outputs=text_outputs, - ) - all_bot_args = dict(fn=functools.partial(all_bot, model_states1=model_states, - all_models=kwargs['all_models']), - inputs=inputs_list + [my_db_state, selection_docs_state, requests_state] + - text_outputs, - outputs=text_outputs + [chat_exception_text], - ) - all_retry_bot_args = dict(fn=functools.partial(all_bot, model_states1=model_states, - 
all_models=kwargs['all_models'], - retry=True), - inputs=inputs_list + [my_db_state, selection_docs_state, requests_state] + - text_outputs, - outputs=text_outputs + [chat_exception_text], - ) - all_retry_user_args = dict(fn=functools.partial(all_user, retry=True, - sanitize_user_prompt=kwargs['sanitize_user_prompt'], - num_model_lock=len(text_outputs), - all_models=kwargs['all_models'] - ), - inputs=inputs_list + text_outputs, - outputs=text_outputs, - ) - all_undo_user_args = dict(fn=functools.partial(all_user, undo=True, - sanitize_user_prompt=kwargs['sanitize_user_prompt'], - num_model_lock=len(text_outputs), - all_models=kwargs['all_models'] - ), - inputs=inputs_list + text_outputs, - outputs=text_outputs, - ) - - def clear_instruct(): - return gr.Textbox.update(value='') - - def deselect_radio_chats(): - return gr.update(value=None) - - def clear_all(): - return gr.Textbox.update(value=''), gr.Textbox.update(value=''), gr.update(value=None), \ - gr.Textbox.update(value=''), gr.Textbox.update(value='') - - if kwargs['model_states']: - submits1 = submits2 = submits3 = [] - submits4 = [] - - triggers = [instruction, submit, retry_btn] - fun_source = [instruction.submit, submit.click, retry_btn.click] - fun_name = ['instruction', 'submit', 'retry'] - user_args = [all_user_args, all_user_args, all_retry_user_args] - bot_args = [all_bot_args, all_bot_args, all_retry_bot_args] - for userargs1, botarg1, funn1, funs1, trigger1, in zip(user_args, bot_args, fun_name, fun_source, triggers): - submit_event11 = funs1(fn=user_state_setup, - inputs=[my_db_state, requests_state, trigger1, trigger1], - outputs=[my_db_state, requests_state, trigger1], - queue=queue) - submit_event1a = submit_event11.then(**userargs1, queue=queue, - api_name='%s' % funn1 if allow_api else None) - # if hit enter on new instruction for submitting new query, no longer the saved chat - submit_event1b = submit_event1a.then(clear_all, inputs=None, - outputs=[instruction, iinput, radio_chats, score_text, - score_text2], - queue=queue) - submit_event1c = submit_event1b.then(**botarg1, - api_name='%s_bot' % funn1 if allow_api else None, - queue=queue) - submit_event1d = submit_event1c.then(**all_score_args, - api_name='%s_bot_score' % funn1 if allow_api else None, - queue=queue) - - submits1.extend([submit_event1a, submit_event1b, submit_event1c, submit_event1d]) - - # if undo, no longer the saved chat - submit_event4 = undo.click(fn=user_state_setup, - inputs=[my_db_state, requests_state, undo, undo], - outputs=[my_db_state, requests_state, undo], - queue=queue) \ - .then(**all_undo_user_args, api_name='undo' if allow_api else None) \ - .then(clear_all, inputs=None, outputs=[instruction, iinput, radio_chats, score_text, - score_text2], queue=queue) \ - .then(**all_score_args, api_name='undo_score' if allow_api else None) - submits4 = [submit_event4] - - else: - # in case 2nd model, consume instruction first, so can clear quickly - # bot doesn't consume instruction itself, just history from user, so why works - submit_event11 = instruction.submit(fn=user_state_setup, - inputs=[my_db_state, requests_state, instruction, instruction], - outputs=[my_db_state, requests_state, instruction], - queue=queue) - submit_event1a = submit_event11.then(**user_args, queue=queue, - api_name='instruction' if allow_api else None) - # if hit enter on new instruction for submitting new query, no longer the saved chat - submit_event1a2 = submit_event1a.then(deselect_radio_chats, inputs=None, outputs=radio_chats, queue=queue) - submit_event1b = 
submit_event1a2.then(**user_args2, api_name='instruction2' if allow_api else None) - submit_event1c = submit_event1b.then(clear_instruct, None, instruction) \ - .then(clear_instruct, None, iinput) - submit_event1d = submit_event1c.then(**bot_args, api_name='instruction_bot' if allow_api else None, - queue=queue) - submit_event1e = submit_event1d.then(**score_args, - api_name='instruction_bot_score' if allow_api else None, - queue=queue) - submit_event1f = submit_event1e.then(**bot_args2, api_name='instruction_bot2' if allow_api else None, - queue=queue) - submit_event1g = submit_event1f.then(**score_args2, - api_name='instruction_bot_score2' if allow_api else None, queue=queue) - - submits1 = [submit_event1a, submit_event1a2, submit_event1b, submit_event1c, submit_event1d, - submit_event1e, - submit_event1f, submit_event1g] - - submit_event21 = submit.click(fn=user_state_setup, - inputs=[my_db_state, requests_state, submit, submit], - outputs=[my_db_state, requests_state, submit], - queue=queue) - submit_event2a = submit_event21.then(**user_args, api_name='submit' if allow_api else None) - # if submit new query, no longer the saved chat - submit_event2a2 = submit_event2a.then(deselect_radio_chats, inputs=None, outputs=radio_chats, queue=queue) - submit_event2b = submit_event2a2.then(**user_args2, api_name='submit2' if allow_api else None) - submit_event2c = submit_event2b.then(clear_all, inputs=None, - outputs=[instruction, iinput, radio_chats, score_text, score_text2], - queue=queue) - submit_event2d = submit_event2c.then(**bot_args, api_name='submit_bot' if allow_api else None, queue=queue) - submit_event2e = submit_event2d.then(**score_args, - api_name='submit_bot_score' if allow_api else None, - queue=queue) - submit_event2f = submit_event2e.then(**bot_args2, api_name='submit_bot2' if allow_api else None, - queue=queue) - submit_event2g = submit_event2f.then(**score_args2, - api_name='submit_bot_score2' if allow_api else None, - queue=queue) - - submits2 = [submit_event2a, submit_event2a2, submit_event2b, submit_event2c, submit_event2d, - submit_event2e, - submit_event2f, submit_event2g] - - submit_event31 = retry_btn.click(fn=user_state_setup, - inputs=[my_db_state, requests_state, retry_btn, retry_btn], - outputs=[my_db_state, requests_state, retry_btn], - queue=queue) - submit_event3a = submit_event31.then(**user_args, api_name='retry' if allow_api else None) - # if retry, no longer the saved chat - submit_event3a2 = submit_event3a.then(deselect_radio_chats, inputs=None, outputs=radio_chats, queue=queue) - submit_event3b = submit_event3a2.then(**user_args2, api_name='retry2' if allow_api else None) - submit_event3c = submit_event3b.then(clear_instruct, None, instruction) \ - .then(clear_instruct, None, iinput) - submit_event3d = submit_event3c.then(**retry_bot_args, api_name='retry_bot' if allow_api else None, - queue=queue) - submit_event3e = submit_event3d.then(**score_args, - api_name='retry_bot_score' if allow_api else None, - queue=queue) - submit_event3f = submit_event3e.then(**retry_bot_args2, api_name='retry_bot2' if allow_api else None, - queue=queue) - submit_event3g = submit_event3f.then(**score_args2, - api_name='retry_bot_score2' if allow_api else None, - queue=queue) - - submits3 = [submit_event3a, submit_event3a2, submit_event3b, submit_event3c, submit_event3d, - submit_event3e, - submit_event3f, submit_event3g] - - # if undo, no longer the saved chat - submit_event4 = undo.click(fn=user_state_setup, - inputs=[my_db_state, requests_state, undo, undo], - 
outputs=[my_db_state, requests_state, undo], - queue=queue) \ - .then(**undo_user_args, api_name='undo' if allow_api else None) \ - .then(**undo_user_args2, api_name='undo2' if allow_api else None) \ - .then(clear_all, inputs=None, outputs=[instruction, iinput, radio_chats, score_text, - score_text2], queue=queue) \ - .then(**score_args, api_name='undo_score' if allow_api else None) \ - .then(**score_args2, api_name='undo_score2' if allow_api else None) - submits4 = [submit_event4] - - # MANAGE CHATS - def dedup(short_chat, short_chats): - if short_chat not in short_chats: - return short_chat - for i in range(1, 1000): - short_chat_try = short_chat + "_" + str(i) - if short_chat_try not in short_chats: - return short_chat_try - # fallback and hope for best - short_chat = short_chat + "_" + str(random.random()) - return short_chat - - def get_short_chat(x, short_chats, short_len=20, words=4): - if x and len(x[0]) == 2 and x[0][0] is not None: - short_chat = ' '.join(x[0][0][:short_len].split(' ')[:words]).strip() - if not short_chat: - # e.g.summarization, try using answer - short_chat = ' '.join(x[0][1][:short_len].split(' ')[:words]).strip() - if not short_chat: - short_chat = 'Unk' - short_chat = dedup(short_chat, short_chats) - else: - short_chat = None - return short_chat - - def is_chat_same(x, y): - #

<p> etc. added in chat, try to remove some of that to help avoid dup entries when hit new conversation - is_same = True - # length of conversation has to be same - if len(x) != len(y): - return False - if len(x) != len(y): - return False - for stepx, stepy in zip(x, y): - if len(stepx) != len(stepy): - # something off with a conversation - return False - for stepxx, stepyy in zip(stepx, stepy): - if len(stepxx) != len(stepyy): - # something off with a conversation - return False - if len(stepxx) != 2: - # something off - return False - if len(stepyy) != 2: - # something off - return False - questionx = stepxx[0].replace('<p>', '').replace('</p>', '') if stepxx[0] is not None else None - answerx = stepxx[1].replace('<p>', '').replace('</p>', '') if stepxx[1] is not None else None - - questiony = stepyy[0].replace('<p>', '').replace('</p>', '') if stepyy[0] is not None else None - answery = stepyy[1].replace('<p>', '').replace('</p>
    ', '') if stepyy[1] is not None else None - - if questionx != questiony or answerx != answery: - return False - return is_same - - def save_chat(*args, chat_is_list=False, auth_filename=None, auth_freeze=None): - args_list = list(args) - db1s = args_list[0] - requests_state1 = args_list[1] - args_list = args_list[2:] - if not chat_is_list: - # list of chatbot histories, - # can't pass in list with list of chatbot histories and state due to gradio limits - chat_list = args_list[:-1] - else: - assert len(args_list) == 2 - chat_list = args_list[0] - # if old chat file with single chatbot, get into shape - if isinstance(chat_list, list) and len(chat_list) > 0 and isinstance(chat_list[0], list) and len( - chat_list[0]) == 2 and isinstance(chat_list[0][0], str) and isinstance(chat_list[0][1], str): - chat_list = [chat_list] - # remove None histories - chat_list_not_none = [x for x in chat_list if x and len(x) > 0 and len(x[0]) == 2 and x[0][1] is not None] - chat_list_none = [x for x in chat_list if x not in chat_list_not_none] - if len(chat_list_none) > 0 and len(chat_list_not_none) == 0: - raise ValueError("Invalid chat file") - # dict with keys of short chat names, values of list of list of chatbot histories - chat_state1 = args_list[-1] - short_chats = list(chat_state1.keys()) - if len(chat_list_not_none) > 0: - # make short_chat key from only first history, based upon question that is same anyways - chat_first = chat_list_not_none[0] - short_chat = get_short_chat(chat_first, short_chats) - if short_chat: - old_chat_lists = list(chat_state1.values()) - already_exists = any([is_chat_same(chat_list, x) for x in old_chat_lists]) - if not already_exists: - chat_state1[short_chat] = chat_list.copy() - - # reverse so newest at top - choices = list(chat_state1.keys()).copy() - choices.reverse() - - # save saved chats and chatbots to auth file - text_output1 = chat_list[0] - text_output21 = chat_list[1] - text_outputs1 = chat_list[2:] - save_auth(requests_state1, auth_filename, auth_freeze, chat_state1=chat_state1, - text_output1=text_output1, text_output21=text_output21, text_outputs1=text_outputs1) - - return chat_state1, gr.update(choices=choices, value=None) - - def switch_chat(chat_key, chat_state1, num_model_lock=0): - chosen_chat = chat_state1[chat_key] - # deal with possible different size of chat list vs. 
current list - ret_chat = [None] * (2 + num_model_lock) - for chati in range(0, 2 + num_model_lock): - ret_chat[chati % len(ret_chat)] = chosen_chat[chati % len(chosen_chat)] - return tuple(ret_chat) - - def clear_texts(*args): - return tuple([gr.Textbox.update(value='')] * len(args)) - - def clear_scores(): - return gr.Textbox.update(value=res_value), \ - gr.Textbox.update(value='Response Score: NA'), \ - gr.Textbox.update(value='Response Score: NA') - - switch_chat_fun = functools.partial(switch_chat, num_model_lock=len(text_outputs)) - radio_chats.input(switch_chat_fun, - inputs=[radio_chats, chat_state], - outputs=[text_output, text_output2] + text_outputs) \ - .then(clear_scores, outputs=[score_text, score_text2, score_text_nochat]) - - def remove_chat(chat_key, chat_state1): - if isinstance(chat_key, str): - chat_state1.pop(chat_key, None) - return gr.update(choices=list(chat_state1.keys()), value=None), chat_state1 - - remove_chat_event = remove_chat_btn.click(remove_chat, - inputs=[radio_chats, chat_state], - outputs=[radio_chats, chat_state], - queue=False, api_name='remove_chat') - - def get_chats1(chat_state1): - base = 'chats' - base = makedirs(base, exist_ok=True, tmp_ok=True, use_base=True) - filename = os.path.join(base, 'chats_%s.json' % str(uuid.uuid4())) - with open(filename, "wt") as f: - f.write(json.dumps(chat_state1, indent=2)) - return filename - - export_chat_event = export_chats_btn.click(get_chats1, inputs=chat_state, outputs=chats_file, queue=False, - api_name='export_chats' if allow_api else None) - - def add_chats_from_file(db1s, requests_state1, file, chat_state1, radio_chats1, chat_exception_text1, - auth_filename=None, auth_freeze=None): - if not file: - return None, chat_state1, gr.update(choices=list(chat_state1.keys()), value=None), chat_exception_text1 - if isinstance(file, str): - files = [file] - else: - files = file - if not files: - return None, chat_state1, gr.update(choices=list(chat_state1.keys()), value=None), chat_exception_text1 - chat_exception_list = [] - for file1 in files: - try: - if hasattr(file1, 'name'): - file1 = file1.name - with open(file1, "rt") as f: - new_chats = json.loads(f.read()) - for chat1_k, chat1_v in new_chats.items(): - # ignore chat1_k, regenerate and de-dup to avoid loss - chat_state1, _ = save_chat(db1s, requests_state1, chat1_v, chat_state1, chat_is_list=True) - except BaseException as e: - t, v, tb = sys.exc_info() - ex = ''.join(traceback.format_exception(t, v, tb)) - ex_str = "File %s exception: %s" % (file1, str(e)) - print(ex_str, flush=True) - chat_exception_list.append(ex_str) - chat_exception_text1 = '\n'.join(chat_exception_list) - # save chat to auth file - save_auth(requests_state1, auth_filename, auth_freeze, chat_state1=chat_state1) - return None, chat_state1, gr.update(choices=list(chat_state1.keys()), value=None), chat_exception_text1 - - # note for update_user_db_func output is ignored for db - chatup_change_eventa = chatsup_output.change(user_state_setup, - inputs=[my_db_state, requests_state, langchain_mode], - outputs=[my_db_state, requests_state, langchain_mode], - show_progress='minimal') - add_chats_from_file_func = functools.partial(add_chats_from_file, - auth_filename=kwargs['auth_filename'], - auth_freeze=kwargs['auth_freeze'], - ) - chatup_change_event = chatup_change_eventa.then(add_chats_from_file_func, - inputs=[my_db_state, requests_state] + - [chatsup_output, chat_state, radio_chats, - chat_exception_text], - outputs=[chatsup_output, chat_state, radio_chats, - chat_exception_text], - 
queue=False, - api_name='add_to_chats' if allow_api else None) - - clear_chat_event = clear_chat_btn.click(fn=clear_texts, - inputs=[text_output, text_output2] + text_outputs, - outputs=[text_output, text_output2] + text_outputs, - queue=False, api_name='clear' if allow_api else None) \ - .then(deselect_radio_chats, inputs=None, outputs=radio_chats, queue=False) \ - .then(clear_scores, outputs=[score_text, score_text2, score_text_nochat]) - - clear_eventa = save_chat_btn.click(user_state_setup, - inputs=[my_db_state, requests_state, langchain_mode], - outputs=[my_db_state, requests_state, langchain_mode], - show_progress='minimal') - save_chat_func = functools.partial(save_chat, - auth_filename=kwargs['auth_filename'], - auth_freeze=kwargs['auth_freeze'], - ) - clear_event = clear_eventa.then(save_chat_func, - inputs=[my_db_state, requests_state] + - [text_output, text_output2] + text_outputs + - [chat_state], - outputs=[chat_state, radio_chats], - api_name='save_chat' if allow_api else None) - if kwargs['score_model']: - clear_event2 = clear_event.then(clear_scores, outputs=[score_text, score_text2, score_text_nochat]) - - # NOTE: clear of instruction/iinput for nochat has to come after score, - # because score for nochat consumes actual textbox, while chat consumes chat history filled by user() - no_chat_args = dict(fn=fun, - inputs=[model_state, my_db_state, selection_docs_state, requests_state] + inputs_list, - outputs=text_output_nochat, - queue=queue, - ) - submit_event_nochat = submit_nochat.click(**no_chat_args, api_name='submit_nochat' if allow_api else None) \ - .then(clear_torch_cache) \ - .then(**score_args_nochat, api_name='instruction_bot_score_nochat' if allow_api else None, queue=queue) \ - .then(clear_instruct, None, instruction_nochat) \ - .then(clear_instruct, None, iinput_nochat) \ - .then(clear_torch_cache) - # copy of above with text box submission - submit_event_nochat2 = instruction_nochat.submit(**no_chat_args) \ - .then(clear_torch_cache) \ - .then(**score_args_nochat, queue=queue) \ - .then(clear_instruct, None, instruction_nochat) \ - .then(clear_instruct, None, iinput_nochat) \ - .then(clear_torch_cache) - - submit_event_nochat_api = submit_nochat_api.click(fun_with_dict_str, - inputs=[model_state, my_db_state, selection_docs_state, - requests_state, - inputs_dict_str], - outputs=text_output_nochat_api, - queue=True, # required for generator - api_name='submit_nochat_api' if allow_api else None) - - submit_event_nochat_api_plain = submit_nochat_api_plain.click(fun_with_dict_str_plain, - inputs=inputs_dict_str, - outputs=text_output_nochat_api, - queue=False, - api_name='submit_nochat_plain_api' if allow_api else None) - - def load_model(model_name, lora_weights, server_name, model_state_old, prompt_type_old, - load_8bit, load_4bit, low_bit_mode, - load_gptq, load_exllama, use_safetensors, revision, - use_gpu_id, gpu_id, max_seq_len1, rope_scaling1, - model_path_llama1, model_name_gptj1, model_name_gpt4all_llama1, - n_gpu_layers1, n_batch1, n_gqa1, llamacpp_dict_more1, - system_prompt1): - try: - llamacpp_dict = ast.literal_eval(llamacpp_dict_more1) - except: - print("Failed to use user input for llamacpp_dict_more1 dict", flush=True) - llamacpp_dict = {} - llamacpp_dict.update(dict(model_path_llama=model_path_llama1, - model_name_gptj=model_name_gptj1, - model_name_gpt4all_llama=model_name_gpt4all_llama1, - n_gpu_layers=n_gpu_layers1, - n_batch=n_batch1, - n_gqa=n_gqa1, - )) - - # ensure no API calls reach here - if is_public: - raise RuntimeError("Illegal 
access for %s" % model_name) - # ensure old model removed from GPU memory - if kwargs['debug']: - print("Pre-switch pre-del GPU memory: %s" % get_torch_allocated(), flush=True) - - model0 = model_state0['model'] - if isinstance(model_state_old['model'], str) and \ - model0 is not None and \ - hasattr(model0, 'cpu'): - # best can do, move model loaded at first to CPU - model0.cpu() - - if model_state_old['model'] is not None and \ - not isinstance(model_state_old['model'], str): - if hasattr(model_state_old['model'], 'cpu'): - try: - model_state_old['model'].cpu() - except Exception as e: - # sometimes hit NotImplementedError: Cannot copy out of meta tensor; no data! - print("Unable to put model on CPU: %s" % str(e), flush=True) - del model_state_old['model'] - model_state_old['model'] = None - - if model_state_old['tokenizer'] is not None and not isinstance(model_state_old['tokenizer'], str): - del model_state_old['tokenizer'] - model_state_old['tokenizer'] = None - - clear_torch_cache() - if kwargs['debug']: - print("Pre-switch post-del GPU memory: %s" % get_torch_allocated(), flush=True) - if not model_name: - model_name = no_model_str - if model_name == no_model_str: - # no-op if no model, just free memory - # no detranscribe needed for model, never go into evaluate - lora_weights = no_lora_str - server_name = no_server_str - return kwargs['model_state_none'].copy(), \ - model_name, lora_weights, server_name, prompt_type_old, \ - gr.Slider.update(maximum=256), \ - gr.Slider.update(maximum=256) - - # don't deepcopy, can contain model itself - all_kwargs1 = all_kwargs.copy() - all_kwargs1['base_model'] = model_name.strip() - all_kwargs1['load_8bit'] = load_8bit - all_kwargs1['load_4bit'] = load_4bit - all_kwargs1['low_bit_mode'] = low_bit_mode - all_kwargs1['load_gptq'] = load_gptq - all_kwargs1['load_exllama'] = load_exllama - all_kwargs1['use_safetensors'] = use_safetensors - all_kwargs1['revision'] = None if not revision else revision # transcribe, don't pass '' - all_kwargs1['use_gpu_id'] = use_gpu_id - all_kwargs1['gpu_id'] = int(gpu_id) if gpu_id not in [None, 'None'] else None # detranscribe - all_kwargs1['llamacpp_dict'] = llamacpp_dict - all_kwargs1['max_seq_len'] = max_seq_len1 - try: - all_kwargs1['rope_scaling'] = str_to_dict(rope_scaling1) # transcribe - except: - print("Failed to use user input for rope_scaling dict", flush=True) - all_kwargs1['rope_scaling'] = {} - model_lower = model_name.strip().lower() - if model_lower in inv_prompt_type_to_model_lower: - prompt_type1 = inv_prompt_type_to_model_lower[model_lower] - else: - prompt_type1 = prompt_type_old - - # detranscribe - if lora_weights == no_lora_str: - lora_weights = '' - all_kwargs1['lora_weights'] = lora_weights.strip() - if server_name == no_server_str: - server_name = '' - all_kwargs1['inference_server'] = server_name.strip() - - model1, tokenizer1, device1 = get_model(reward_type=False, - **get_kwargs(get_model, exclude_names=['reward_type'], - **all_kwargs1)) - clear_torch_cache() - - tokenizer_base_model = model_name - prompt_dict1, error0 = get_prompt(prompt_type1, '', - chat=False, context='', reduced=False, making_context=False, - return_dict=True, system_prompt=system_prompt1) - model_state_new = dict(model=model1, tokenizer=tokenizer1, device=device1, - base_model=model_name, tokenizer_base_model=tokenizer_base_model, - lora_weights=lora_weights, inference_server=server_name, - prompt_type=prompt_type1, prompt_dict=prompt_dict1, - # FIXME: not typically required, unless want to expose adding h2ogpt 
endpoint in UI - visible_models=None, h2ogpt_key=None, - ) - - max_max_new_tokens1 = get_max_max_new_tokens(model_state_new, **kwargs) - - if kwargs['debug']: - print("Post-switch GPU memory: %s" % get_torch_allocated(), flush=True) - return model_state_new, model_name, lora_weights, server_name, prompt_type1, \ - gr.Slider.update(maximum=max_max_new_tokens1), \ - gr.Slider.update(maximum=max_max_new_tokens1) - - def get_prompt_str(prompt_type1, prompt_dict1, system_prompt1, which=0): - if prompt_type1 in ['', None]: - print("Got prompt_type %s: %s" % (which, prompt_type1), flush=True) - return str({}) - prompt_dict1, prompt_dict_error = get_prompt(prompt_type1, prompt_dict1, chat=False, context='', - reduced=False, making_context=False, return_dict=True, - system_prompt=system_prompt1) - if prompt_dict_error: - return str(prompt_dict_error) - else: - # return so user can manipulate if want and use as custom - return str(prompt_dict1) - - get_prompt_str_func1 = functools.partial(get_prompt_str, which=1) - get_prompt_str_func2 = functools.partial(get_prompt_str, which=2) - prompt_type.change(fn=get_prompt_str_func1, inputs=[prompt_type, prompt_dict, system_prompt], - outputs=prompt_dict, queue=False) - prompt_type2.change(fn=get_prompt_str_func2, inputs=[prompt_type2, prompt_dict2, system_prompt], - outputs=prompt_dict2, - queue=False) - - def dropdown_prompt_type_list(x): - return gr.Dropdown.update(value=x) - - def chatbot_list(x, model_used_in): - return gr.Textbox.update(label=f'h2oGPT [Model: {model_used_in}]') - - load_model_args = dict(fn=load_model, - inputs=[model_choice, lora_choice, server_choice, model_state, prompt_type, - model_load8bit_checkbox, model_load4bit_checkbox, model_low_bit_mode, - model_load_gptq, model_load_exllama_checkbox, - model_safetensors_checkbox, model_revision, - model_use_gpu_id_checkbox, model_gpu, - max_seq_len, rope_scaling, - model_path_llama, model_name_gptj, model_name_gpt4all_llama, - n_gpu_layers, n_batch, n_gqa, llamacpp_dict_more, - system_prompt], - outputs=[model_state, model_used, lora_used, server_used, - # if prompt_type changes, prompt_dict will change via change rule - prompt_type, max_new_tokens, min_new_tokens, - ]) - prompt_update_args = dict(fn=dropdown_prompt_type_list, inputs=prompt_type, outputs=prompt_type) - chatbot_update_args = dict(fn=chatbot_list, inputs=[text_output, model_used], outputs=text_output) - nochat_update_args = dict(fn=chatbot_list, inputs=[text_output_nochat, model_used], outputs=text_output_nochat) - load_model_event = load_model_button.click(**load_model_args, - api_name='load_model' if allow_api and not is_public else None) \ - .then(**prompt_update_args) \ - .then(**chatbot_update_args) \ - .then(**nochat_update_args) \ - .then(clear_torch_cache) - - load_model_args2 = dict(fn=load_model, - inputs=[model_choice2, lora_choice2, server_choice2, model_state2, prompt_type2, - model_load8bit_checkbox2, model_load4bit_checkbox2, model_low_bit_mode2, - model_load_gptq2, model_load_exllama_checkbox2, - model_safetensors_checkbox2, model_revision2, - model_use_gpu_id_checkbox2, model_gpu2, - max_seq_len2, rope_scaling2, - model_path_llama2, model_name_gptj2, model_name_gpt4all_llama2, - n_gpu_layers2, n_batch2, n_gqa2, llamacpp_dict_more2, - system_prompt], - outputs=[model_state2, model_used2, lora_used2, server_used2, - # if prompt_type2 changes, prompt_dict2 will change via change rule - prompt_type2, max_new_tokens2, min_new_tokens2 - ]) - prompt_update_args2 = dict(fn=dropdown_prompt_type_list, 
inputs=prompt_type2, outputs=prompt_type2) - chatbot_update_args2 = dict(fn=chatbot_list, inputs=[text_output2, model_used2], outputs=text_output2) - load_model_event2 = load_model_button2.click(**load_model_args2, - api_name='load_model2' if allow_api and not is_public else None) \ - .then(**prompt_update_args2) \ - .then(**chatbot_update_args2) \ - .then(clear_torch_cache) - - def dropdown_model_lora_server_list(model_list0, model_x, - lora_list0, lora_x, - server_list0, server_x, - model_used1, lora_used1, server_used1, - model_used2, lora_used2, server_used2, - ): - model_new_state = [model_list0[0] + [model_x]] - model_new_options = [*model_new_state[0]] - if no_model_str in model_new_options: - model_new_options.remove(no_model_str) - model_new_options = [no_model_str] + sorted(model_new_options) - x1 = model_x if model_used1 == no_model_str else model_used1 - x2 = model_x if model_used2 == no_model_str else model_used2 - ret1 = [gr.Dropdown.update(value=x1, choices=model_new_options), - gr.Dropdown.update(value=x2, choices=model_new_options), - '', model_new_state] - - lora_new_state = [lora_list0[0] + [lora_x]] - lora_new_options = [*lora_new_state[0]] - if no_lora_str in lora_new_options: - lora_new_options.remove(no_lora_str) - lora_new_options = [no_lora_str] + sorted(lora_new_options) - # don't switch drop-down to added lora if already have model loaded - x1 = lora_x if model_used1 == no_model_str else lora_used1 - x2 = lora_x if model_used2 == no_model_str else lora_used2 - ret2 = [gr.Dropdown.update(value=x1, choices=lora_new_options), - gr.Dropdown.update(value=x2, choices=lora_new_options), - '', lora_new_state] - - server_new_state = [server_list0[0] + [server_x]] - server_new_options = [*server_new_state[0]] - if no_server_str in server_new_options: - server_new_options.remove(no_server_str) - server_new_options = [no_server_str] + sorted(server_new_options) - # don't switch drop-down to added server if already have model loaded - x1 = server_x if model_used1 == no_model_str else server_used1 - x2 = server_x if model_used2 == no_model_str else server_used2 - ret3 = [gr.Dropdown.update(value=x1, choices=server_new_options), - gr.Dropdown.update(value=x2, choices=server_new_options), - '', server_new_state] - - return tuple(ret1 + ret2 + ret3) - - add_model_lora_server_event = \ - add_model_lora_server_button.click(fn=dropdown_model_lora_server_list, - inputs=[model_options_state, new_model] + - [lora_options_state, new_lora] + - [server_options_state, new_server] + - [model_used, lora_used, server_used] + - [model_used2, lora_used2, server_used2], - outputs=[model_choice, model_choice2, new_model, model_options_state] + - [lora_choice, lora_choice2, new_lora, lora_options_state] + - [server_choice, server_choice2, new_server, - server_options_state], - queue=False) - - go_event = go_btn.click(lambda: gr.update(visible=False), None, go_btn, api_name="go" if allow_api else None, - queue=False) \ - .then(lambda: gr.update(visible=True), None, normal_block, queue=False) \ - .then(**load_model_args, queue=False).then(**prompt_update_args, queue=False) - - def compare_textbox_fun(x): - return gr.Textbox.update(visible=x) - - def compare_column_fun(x): - return gr.Column.update(visible=x) - - def compare_prompt_fun(x): - return gr.Dropdown.update(visible=x) - - def slider_fun(x): - return gr.Slider.update(visible=x) - - compare_checkbox.select(compare_textbox_fun, compare_checkbox, text_output2, - api_name="compare_checkbox" if allow_api else None) \ - .then(compare_column_fun, 
compare_checkbox, col_model2) \ - .then(compare_prompt_fun, compare_checkbox, prompt_type2) \ - .then(compare_textbox_fun, compare_checkbox, score_text2) \ - .then(slider_fun, compare_checkbox, max_new_tokens2) \ - .then(slider_fun, compare_checkbox, min_new_tokens2) - # FIXME: add score_res2 in condition, but do better - - # callback for logging flagged input/output - callback.setup(inputs_list + [text_output, text_output2] + text_outputs, "flagged_data_points") - flag_btn.click(lambda *args: callback.flag(args), inputs_list + [text_output, text_output2] + text_outputs, - None, - preprocess=False, - api_name='flag' if allow_api else None, queue=False) - flag_btn_nochat.click(lambda *args: callback.flag(args), inputs_list + [text_output_nochat], None, - preprocess=False, - api_name='flag_nochat' if allow_api else None, queue=False) - - def get_system_info(): - if is_public: - time.sleep(10) # delay to avoid spam since queue=False - return gr.Textbox.update(value=system_info_print()) - - system_event = system_btn.click(get_system_info, outputs=system_text, - api_name='system_info' if allow_api else None, queue=False) - - def get_system_info_dict(system_input1, **kwargs1): - if system_input1 != os.getenv("ADMIN_PASS", ""): - return json.dumps({}) - exclude_list = ['admin_pass', 'examples'] - sys_dict = {k: v for k, v in kwargs1.items() if - isinstance(v, (str, int, bool, float)) and k not in exclude_list} - try: - sys_dict.update(system_info()) - except Exception as e: - # protection - print("Exception: %s" % str(e), flush=True) - return json.dumps(sys_dict) - - system_kwargs = all_kwargs.copy() - system_kwargs.update(dict(command=str(' '.join(sys.argv)))) - get_system_info_dict_func = functools.partial(get_system_info_dict, **all_kwargs) - - system_dict_event = system_btn2.click(get_system_info_dict_func, - inputs=system_input, - outputs=system_text2, - api_name='system_info_dict' if allow_api else None, - queue=False, # queue to avoid spam - ) - - def get_hash(): - return kwargs['git_hash'] - - system_event = system_btn3.click(get_hash, - outputs=system_text3, - api_name='system_hash' if allow_api else None, - queue=False, - ) - - def get_model_names(): - key_list = ['base_model', 'prompt_type', 'prompt_dict'] + list(kwargs['other_model_state_defaults'].keys()) - # don't want to expose backend inference server IP etc. 
- # key_list += ['inference_server'] - return [{k: x[k] for k in key_list if k in x} for x in model_states] - - models_list_event = system_btn4.click(get_model_names, - outputs=system_text4, - api_name='model_names' if allow_api else None, - queue=False, - ) - - def count_chat_tokens(model_state1, chat1, prompt_type1, prompt_dict1, - system_prompt1, chat_conversation1, - memory_restriction_level1=0, - keep_sources_in_context1=False, - ): - if model_state1 and not isinstance(model_state1['tokenizer'], str): - tokenizer = model_state1['tokenizer'] - elif model_state0 and not isinstance(model_state0['tokenizer'], str): - tokenizer = model_state0['tokenizer'] - else: - tokenizer = None - if tokenizer is not None: - langchain_mode1 = 'LLM' - add_chat_history_to_context1 = True - # fake user message to mimic bot() - chat1 = copy.deepcopy(chat1) - chat1 = chat1 + [['user_message1', None]] - model_max_length1 = tokenizer.model_max_length - context1 = history_to_context(chat1, - langchain_mode=langchain_mode1, - add_chat_history_to_context=add_chat_history_to_context1, - prompt_type=prompt_type1, - prompt_dict=prompt_dict1, - chat=True, - model_max_length=model_max_length1, - memory_restriction_level=memory_restriction_level1, - keep_sources_in_context=keep_sources_in_context1, - system_prompt=system_prompt1, - chat_conversation=chat_conversation1) - tokens = tokenizer(context1, return_tensors="pt")['input_ids'] - if len(tokens.shape) == 1: - return str(tokens.shape[0]) - elif len(tokens.shape) == 2: - return str(tokens.shape[1]) - else: - return "N/A" - else: - return "N/A" - - count_chat_tokens_func = functools.partial(count_chat_tokens, - memory_restriction_level1=memory_restriction_level, - keep_sources_in_context1=kwargs['keep_sources_in_context']) - count_tokens_event = count_chat_tokens_btn.click(fn=count_chat_tokens_func, - inputs=[model_state, text_output, prompt_type, prompt_dict, - system_prompt, chat_conversation], - outputs=chat_token_count, - api_name='count_tokens' if allow_api else None) - - # don't pass text_output, don't want to clear output, just stop it - # cancel only stops outer generation, not inner generation or non-generation - stop_btn.click(lambda: None, None, None, - cancels=submits1 + submits2 + submits3 + submits4 + - [submit_event_nochat, submit_event_nochat2] + - [eventdb1, eventdb2, eventdb3] + - [eventdb7a, eventdb7, eventdb8a, eventdb8, eventdb9a, eventdb9, eventdb12a, eventdb12] + - db_events + - [eventdbloadla, eventdbloadlb] + - [clear_event] + - [submit_event_nochat_api, submit_event_nochat] + - [load_model_event, load_model_event2] + - [count_tokens_event] - , - queue=False, api_name='stop' if allow_api else None).then(clear_torch_cache, queue=False) - - if kwargs['auth'] is not None: - auth = authf - load_func = user_state_setup - load_inputs = [my_db_state, requests_state, login_btn, login_btn] - load_outputs = [my_db_state, requests_state, login_btn] - else: - auth = None - load_func, load_inputs, load_outputs = None, None, None - - app_js = wrap_js_to_lambda( - len(load_inputs) if load_inputs else 0, - get_dark_js() if kwargs['dark'] else None, - get_heap_js(heap_app_id) if is_heap_analytics_enabled else None) - - load_event = demo.load(fn=load_func, inputs=load_inputs, outputs=load_outputs, _js=app_js) - - if load_func: - load_event2 = load_event.then(load_login_func, - inputs=login_inputs, - outputs=login_outputs) - if not kwargs['large_file_count_mode']: - load_event3 = load_event2.then(**get_sources_kwargs) - load_event4 = 
load_event3.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - load_event5 = load_event4.then(**show_sources_kwargs) - load_event6 = load_event5.then(**get_viewable_sources_args) - load_event7 = load_event6.then(**viewable_kwargs) - - demo.queue(concurrency_count=kwargs['concurrency_count'], api_open=kwargs['api_open']) - favicon_file = "h2o-logo.svg" - favicon_path = favicon_file - if not os.path.isfile(favicon_file): - print("favicon_path1=%s not found" % favicon_file, flush=True) - alt_path = os.path.dirname(os.path.abspath(__file__)) - favicon_path = os.path.join(alt_path, favicon_file) - if not os.path.isfile(favicon_path): - print("favicon_path2: %s not found in %s" % (favicon_file, alt_path), flush=True) - alt_path = os.path.dirname(alt_path) - favicon_path = os.path.join(alt_path, favicon_file) - if not os.path.isfile(favicon_path): - print("favicon_path3: %s not found in %s" % (favicon_file, alt_path), flush=True) - favicon_path = None - - if kwargs['prepare_offline_level'] > 0: - from src.prepare_offline import go_prepare_offline - go_prepare_offline(**locals()) - return - - scheduler = BackgroundScheduler() - scheduler.add_job(func=clear_torch_cache, trigger="interval", seconds=20) - if is_public and \ - kwargs['base_model'] not in non_hf_types: - # FIXME: disable for gptj, langchain or gpt4all modify print itself - # FIXME: and any multi-threaded/async print will enter model output! - scheduler.add_job(func=ping, trigger="interval", seconds=60) - if is_public or os.getenv('PING_GPU'): - scheduler.add_job(func=ping_gpu, trigger="interval", seconds=60 * 10) - scheduler.start() - - # import control - if kwargs['langchain_mode'] == 'Disabled' and \ - os.environ.get("TEST_LANGCHAIN_IMPORT") and \ - kwargs['base_model'] not in non_hf_types: - assert 'gpt_langchain' not in sys.modules, "Dev bug, import of langchain when should not have" - assert 'langchain' not in sys.modules, "Dev bug, import of langchain when should not have" - - # set port in case GRADIO_SERVER_PORT was already set in prior main() call, - # gradio does not listen if change after import - # Keep None if not set so can find an open port above used ports - server_port = os.getenv('GRADIO_SERVER_PORT') - if server_port is not None: - server_port = int(server_port) - - demo.launch(share=kwargs['share'], - server_name=kwargs['server_name'], - show_error=True, - server_port=server_port, - favicon_path=favicon_path, - prevent_thread_lock=True, - auth=auth, - auth_message=auth_message, - root_path=kwargs['root_path']) - if kwargs['verbose'] or not (kwargs['base_model'] in ['gptj', 'gpt4all_llama']): - print("Started Gradio Server and/or GUI: server_name: %s port: %s" % (kwargs['server_name'], server_port), - flush=True) - if kwargs['block_gradio_exit']: - demo.block_thread() - - -def show_doc(db1s, selection_docs_state1, requests_state1, - langchain_mode1, - single_document_choice1, - view_raw_text_checkbox1, - text_context_list1, - dbs1=None, - load_db_if_exists1=None, - db_type1=None, - use_openai_embedding1=None, - hf_embedding_model1=None, - migrate_embedding_model_or_db1=None, - auto_migrate_db1=None, - verbose1=False, - get_userid_auth1=None, - max_raw_chunks=1000000, - api=False, - n_jobs=-1): - file = single_document_choice1 - document_choice1 = [single_document_choice1] - content = None - db_documents = [] - db_metadatas = [] - if db_type1 in ['chroma', 'chroma_old']: - assert langchain_mode1 is not None - langchain_mode_paths = selection_docs_state1['langchain_mode_paths'] - 
langchain_mode_types = selection_docs_state1['langchain_mode_types'] - from src.gpt_langchain import set_userid, get_any_db, get_docs_and_meta - set_userid(db1s, requests_state1, get_userid_auth1) - top_k_docs = -1 - db = get_any_db(db1s, langchain_mode1, langchain_mode_paths, langchain_mode_types, - dbs=dbs1, - load_db_if_exists=load_db_if_exists1, - db_type=db_type1, - use_openai_embedding=use_openai_embedding1, - hf_embedding_model=hf_embedding_model1, - migrate_embedding_model=migrate_embedding_model_or_db1, - auto_migrate_db=auto_migrate_db1, - for_sources_list=True, - verbose=verbose1, - n_jobs=n_jobs, - ) - query_action = False # long chunks like would be used for summarize - # the below is as or filter, so will show doc or by chunk, unrestricted - from langchain.vectorstores import Chroma - if isinstance(db, Chroma): - # chroma >= 0.4 - if view_raw_text_checkbox1: - one_filter = \ - [{"source": {"$eq": x}, "chunk_id": {"$gte": 0}} if query_action else {"source": {"$eq": x}, - "chunk_id": { - "$gte": -1}} - for x in document_choice1][0] - else: - one_filter = \ - [{"source": {"$eq": x}, "chunk_id": {"$gte": 0}} if query_action else {"source": {"$eq": x}, - "chunk_id": { - "$eq": -1}} - for x in document_choice1][0] - filter_kwargs = dict(filter={"$and": [dict(source=one_filter['source']), - dict(chunk_id=one_filter['chunk_id'])]}) - else: - # migration for chroma < 0.4 - one_filter = \ - [{"source": {"$eq": x}, "chunk_id": {"$gte": 0}} if query_action else {"source": {"$eq": x}, - "chunk_id": { - "$eq": -1}} - for x in document_choice1][0] - if view_raw_text_checkbox1: - # like or, full raw all chunk types - filter_kwargs = dict(filter=one_filter) - else: - filter_kwargs = dict(filter={"$and": [dict(source=one_filter['source']), - dict(chunk_id=one_filter['chunk_id'])]}) - db_documents, db_metadatas = get_docs_and_meta(db, top_k_docs, filter_kwargs=filter_kwargs, - text_context_list=text_context_list1) - # order documents - from langchain.docstore.document import Document - docs_with_score = [(Document(page_content=result[0], metadata=result[1] or {}), 0) - for result in zip(db_documents, db_metadatas)] - doc_chunk_ids = [x.get('chunk_id', -1) for x in db_metadatas] - doc_page_ids = [x.get('page', 0) for x in db_metadatas] - doc_hashes = [x.get('doc_hash', 'None') for x in db_metadatas] - docs_with_score = [x for hx, px, cx, x in - sorted(zip(doc_hashes, doc_page_ids, doc_chunk_ids, docs_with_score), - key=lambda x: (x[0], x[1], x[2])) - # if cx == -1 - ] - db_metadatas = [x[0].metadata for x in docs_with_score][:max_raw_chunks] - db_documents = [x[0].page_content for x in docs_with_score][:max_raw_chunks] - # done reordering - if view_raw_text_checkbox1: - content = [dict_to_html(x) + '\n' + text_to_html(y) for x, y in zip(db_metadatas, db_documents)] - else: - content = [text_to_html(y) for x, y in zip(db_metadatas, db_documents)] - content = '\n'.join(content) - content = f""" - - -{file} - - -{content} - -""" - if api: - if view_raw_text_checkbox1: - return dict(contents=db_documents, metadatas=db_metadatas) - else: - contents = [text_to_html(y, api=api) for y in db_documents] - metadatas = [dict_to_html(x, api=api) for x in db_metadatas] - return dict(contents=contents, metadatas=metadatas) - else: - assert not api, "API mode for get_document only supported for chroma" - - dummy1 = gr.update(visible=False, value=None) - # backup is text dump of db version - if content: - dummy_ret = dummy1, dummy1, dummy1, dummy1, gr.update(visible=True, value=content) - if 
view_raw_text_checkbox1: - return dummy_ret - else: - dummy_ret = dummy1, dummy1, dummy1, dummy1, dummy1 - - if not isinstance(file, str): - return dummy_ret - - if file.lower().endswith('.html') or file.lower().endswith('.mhtml') or file.lower().endswith('.htm') or \ - file.lower().endswith('.xml'): - try: - with open(file, 'rt') as f: - content = f.read() - return gr.update(visible=True, value=content), dummy1, dummy1, dummy1, dummy1 - except: - return dummy_ret - - if file.lower().endswith('.md'): - try: - with open(file, 'rt') as f: - content = f.read() - return dummy1, dummy1, dummy1, gr.update(visible=True, value=content), dummy1 - except: - return dummy_ret - - if file.lower().endswith('.py'): - try: - with open(file, 'rt') as f: - content = f.read() - content = f"```python\n{content}\n```" - return dummy1, dummy1, dummy1, gr.update(visible=True, value=content), dummy1 - except: - return dummy_ret - - if file.lower().endswith('.txt') or file.lower().endswith('.rst') or file.lower().endswith( - '.rtf') or file.lower().endswith('.toml'): - try: - with open(file, 'rt') as f: - content = f.read() - content = f"```text\n{content}\n```" - return dummy1, dummy1, dummy1, gr.update(visible=True, value=content), dummy1 - except: - return dummy_ret - - func = None - if file.lower().endswith(".csv"): - func = pd.read_csv - elif file.lower().endswith(".pickle"): - func = pd.read_pickle - elif file.lower().endswith(".xls") or file.lower().endswith("xlsx"): - func = pd.read_excel - elif file.lower().endswith('.json'): - func = pd.read_json - # pandas doesn't show full thing, even if html view shows broken things still better - # elif file.lower().endswith('.xml'): - # func = pd.read_xml - if func is not None: - try: - df = func(file).head(100) - except: - return dummy_ret - return dummy1, gr.update(visible=True, value=df), dummy1, dummy1, dummy1 - port = int(os.getenv('GRADIO_SERVER_PORT', '7860')) - import pathlib - absolute_path_string = os.path.abspath(file) - url_path = pathlib.Path(absolute_path_string).as_uri() - url = get_url(absolute_path_string, from_str=True) - img_url = url.replace(""" - -"""), dummy1, dummy1, dummy1, dummy1 - else: - # FIXME: This doesn't work yet, just return dummy result for now - if False: - ip = get_local_ip() - document1 = url_path.replace('file://', f'http://{ip}:{port}/') - # document1 = url - return gr.update(visible=True, value=f""" - -"""), dummy1, dummy1, dummy1, dummy1 - else: - return dummy_ret - else: - return dummy_ret - - -def get_inputs_list(inputs_dict, model_lower, model_id=1): - """ - map gradio objects in locals() to inputs for evaluate(). 
- :param inputs_dict: - :param model_lower: - :param model_id: Which model (1 or 2) of 2 - :return: - """ - inputs_list_names = list(inspect.signature(evaluate).parameters) - inputs_list = [] - inputs_dict_out = {} - for k in inputs_list_names: - if k == 'kwargs': - continue - if k in input_args_list + inputs_kwargs_list: - # these are added at use time for args or partial for kwargs, not taken as input - continue - if 'mbart-' not in model_lower and k in ['src_lang', 'tgt_lang']: - continue - if model_id == 2: - if k == 'prompt_type': - k = 'prompt_type2' - if k == 'prompt_used': - k = 'prompt_used2' - if k == 'max_new_tokens': - k = 'max_new_tokens2' - if k == 'min_new_tokens': - k = 'min_new_tokens2' - inputs_list.append(inputs_dict[k]) - inputs_dict_out[k] = inputs_dict[k] - return inputs_list, inputs_dict_out - - -def update_user_db_gr(file, db1s, selection_docs_state1, requests_state1, - langchain_mode, chunk, chunk_size, embed, - - image_loaders, - pdf_loaders, - url_loaders, - jq_schema, - h2ogpt_key, - - captions_model=None, - caption_loader=None, - doctr_loader=None, - - dbs=None, - get_userid_auth=None, - **kwargs): - valid_key = is_valid_key(kwargs.pop('enforce_h2ogpt_api_key', None), - kwargs.pop('h2ogpt_api_keys', []), h2ogpt_key, - requests_state1=requests_state1) - if not valid_key: - raise ValueError(invalid_key_msg) - loaders_dict, captions_model = gr_to_lg(image_loaders, - pdf_loaders, - url_loaders, - captions_model=captions_model, - **kwargs, - ) - if jq_schema is None: - jq_schema = kwargs['jq_schema0'] - loaders_dict.update(dict(captions_model=captions_model, - caption_loader=caption_loader, - doctr_loader=doctr_loader, - jq_schema=jq_schema, - )) - kwargs.pop('image_loaders_options0', None) - kwargs.pop('pdf_loaders_options0', None) - kwargs.pop('url_loaders_options0', None) - kwargs.pop('jq_schema0', None) - if not embed: - kwargs['use_openai_embedding'] = False - kwargs['hf_embedding_model'] = 'fake' - kwargs['migrate_embedding_model'] = False - - from src.gpt_langchain import update_user_db - return update_user_db(file, db1s, selection_docs_state1, requests_state1, - langchain_mode=langchain_mode, chunk=chunk, chunk_size=chunk_size, - **loaders_dict, - dbs=dbs, - get_userid_auth=get_userid_auth, - **kwargs) - - -def get_sources_gr(db1s, selection_docs_state1, requests_state1, langchain_mode, dbs=None, docs_state0=None, - load_db_if_exists=None, - db_type=None, - use_openai_embedding=None, - hf_embedding_model=None, - migrate_embedding_model=None, - auto_migrate_db=None, - verbose=False, - get_userid_auth=None, - api=False, - n_jobs=-1): - from src.gpt_langchain import get_sources - sources_file, source_list, num_chunks, db = \ - get_sources(db1s, selection_docs_state1, requests_state1, langchain_mode, - dbs=dbs, docs_state0=docs_state0, - load_db_if_exists=load_db_if_exists, - db_type=db_type, - use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - verbose=verbose, - get_userid_auth=get_userid_auth, - n_jobs=n_jobs, - ) - if api: - return source_list - if langchain_mode in langchain_modes_non_db: - doc_counts_str = "LLM Mode\nNo Collection" - else: - doc_counts_str = "Collection: %s\nDocs: %d\nChunks: %d" % (langchain_mode, len(source_list), num_chunks) - return sources_file, source_list, doc_counts_str - - -def get_source_files_given_langchain_mode_gr(db1s, selection_docs_state1, requests_state1, - langchain_mode, - dbs=None, - 
load_db_if_exists=None, - db_type=None, - use_openai_embedding=None, - hf_embedding_model=None, - migrate_embedding_model=None, - auto_migrate_db=None, - verbose=False, - get_userid_auth=None, - n_jobs=-1): - from src.gpt_langchain import get_source_files_given_langchain_mode - return get_source_files_given_langchain_mode(db1s, selection_docs_state1, requests_state1, None, - langchain_mode, - dbs=dbs, - load_db_if_exists=load_db_if_exists, - db_type=db_type, - use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - verbose=verbose, - get_userid_auth=get_userid_auth, - delete_sources=False, - n_jobs=n_jobs) - - -def del_source_files_given_langchain_mode_gr(db1s, selection_docs_state1, requests_state1, document_choice1, - langchain_mode, - dbs=None, - load_db_if_exists=None, - db_type=None, - use_openai_embedding=None, - hf_embedding_model=None, - migrate_embedding_model=None, - auto_migrate_db=None, - verbose=False, - get_userid_auth=None, - n_jobs=-1): - from src.gpt_langchain import get_source_files_given_langchain_mode - return get_source_files_given_langchain_mode(db1s, selection_docs_state1, requests_state1, document_choice1, - langchain_mode, - dbs=dbs, - load_db_if_exists=load_db_if_exists, - db_type=db_type, - use_openai_embedding=use_openai_embedding, - hf_embedding_model=hf_embedding_model, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - verbose=verbose, - get_userid_auth=get_userid_auth, - delete_sources=True, - n_jobs=n_jobs) - - -def update_and_get_source_files_given_langchain_mode_gr(db1s, - selection_docs_state, - requests_state, - langchain_mode, chunk, chunk_size, - - image_loaders, - pdf_loaders, - url_loaders, - jq_schema, - - captions_model=None, - caption_loader=None, - doctr_loader=None, - - dbs=None, first_para=None, - hf_embedding_model=None, - use_openai_embedding=None, - migrate_embedding_model=None, - auto_migrate_db=None, - text_limit=None, - db_type=None, load_db_if_exists=None, - n_jobs=None, verbose=None, get_userid_auth=None, - image_loaders_options0=None, - pdf_loaders_options0=None, - url_loaders_options0=None, - jq_schema0=None): - from src.gpt_langchain import update_and_get_source_files_given_langchain_mode - - loaders_dict, captions_model = gr_to_lg(image_loaders, - pdf_loaders, - url_loaders, - image_loaders_options0=image_loaders_options0, - pdf_loaders_options0=pdf_loaders_options0, - url_loaders_options0=url_loaders_options0, - captions_model=captions_model, - ) - if jq_schema is None: - jq_schema = jq_schema0 - loaders_dict.update(dict(captions_model=captions_model, - caption_loader=caption_loader, - doctr_loader=doctr_loader, - jq_schema=jq_schema, - )) - - return update_and_get_source_files_given_langchain_mode(db1s, - selection_docs_state, - requests_state, - langchain_mode, chunk, chunk_size, - **loaders_dict, - dbs=dbs, first_para=first_para, - hf_embedding_model=hf_embedding_model, - use_openai_embedding=use_openai_embedding, - migrate_embedding_model=migrate_embedding_model, - auto_migrate_db=auto_migrate_db, - text_limit=text_limit, - db_type=db_type, load_db_if_exists=load_db_if_exists, - n_jobs=n_jobs, verbose=verbose, - get_userid_auth=get_userid_auth) - - -def set_userid_gr(db1s, requests_state1, get_userid_auth): - from src.gpt_langchain import set_userid - return set_userid(db1s, requests_state1, get_userid_auth) - - -def set_dbid_gr(db1): - from src.gpt_langchain import set_dbid - 
return set_dbid(db1) - - -def set_userid_direct_gr(db1s, userid, username): - from src.gpt_langchain import set_userid_direct - return set_userid_direct(db1s, userid, username) diff --git a/spaces/attention-refocusing/Attention-refocusing/app.py b/spaces/attention-refocusing/Attention-refocusing/app.py deleted file mode 100644 index ea4c52f916b518b093b04331f1620d758507a34d..0000000000000000000000000000000000000000 --- a/spaces/attention-refocusing/Attention-refocusing/app.py +++ /dev/null @@ -1,793 +0,0 @@ -import gradio as gr -import torch -from omegaconf import OmegaConf -from gligen.task_grounded_generation import grounded_generation_box, load_ckpt, load_common_ckpt - -import json -import numpy as np -from PIL import Image, ImageDraw, ImageFont -from functools import partial -from collections import Counter -import math -import gc - -from gradio import processing_utils -from typing import Optional - -import warnings - -from datetime import datetime - -from example_component import create_examples - -from huggingface_hub import hf_hub_download -hf_hub_download = partial(hf_hub_download, library_name="gligen_demo") -import cv2 -import sys -sys.tracebacklimit = 0 - - -def load_from_hf(repo_id, filename='diffusion_pytorch_model.bin', subfolder=None): - cache_file = hf_hub_download(repo_id=repo_id, filename=filename, subfolder=subfolder) - return torch.load(cache_file, map_location='cpu') - -def load_ckpt_config_from_hf(modality): - ckpt = load_from_hf('gligen/demo_ckpts_legacy', filename=f'{modality}.pth', subfolder='model') - config = load_from_hf('gligen/demo_ckpts_legacy', filename=f'{modality}.pth', subfolder='config') - return ckpt, config - - -def ckpt_load_helper(modality, is_inpaint, is_style, common_instances=None): - pretrained_ckpt_gligen, config = load_ckpt_config_from_hf(modality) - config = OmegaConf.create( config["_content"] ) # config used in training - config.alpha_scale = 1.0 - - if common_instances is None: - common_ckpt = load_from_hf('gligen/demo_ckpts_legacy', filename=f'common.pth', subfolder='model') - common_instances = load_common_ckpt(config, common_ckpt) - - loaded_model_list = load_ckpt(config, pretrained_ckpt_gligen, common_instances) - - return loaded_model_list, common_instances - - -class Instance: - def __init__(self, capacity = 2): - self.model_type = 'base' - self.loaded_model_list = {} - self.counter = Counter() - self.global_counter = Counter() - self.loaded_model_list['base'], self.common_instances = ckpt_load_helper( - 'gligen-generation-text-box', - is_inpaint=False, is_style=False, common_instances=None - ) - self.capacity = capacity - - def _log(self, model_type, batch_size, instruction, phrase_list): - self.counter[model_type] += 1 - self.global_counter[model_type] += 1 - current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S") - print('[{}] Current: {}, All: {}. 
Samples: {}, prompt: {}, phrases: {}'.format( - current_time, dict(self.counter), dict(self.global_counter), batch_size, instruction, phrase_list - )) - - def get_model(self, model_type, batch_size, instruction, phrase_list): - if model_type in self.loaded_model_list: - self._log(model_type, batch_size, instruction, phrase_list) - return self.loaded_model_list[model_type] - - if self.capacity == len(self.loaded_model_list): - least_used_type = self.counter.most_common()[-1][0] - del self.loaded_model_list[least_used_type] - del self.counter[least_used_type] - gc.collect() - torch.cuda.empty_cache() - - self.loaded_model_list[model_type] = self._get_model(model_type) - self._log(model_type, batch_size, instruction, phrase_list) - return self.loaded_model_list[model_type] - - def _get_model(self, model_type): - if model_type == 'base': - return ckpt_load_helper( - 'gligen-generation-text-box', - is_inpaint=False, is_style=False, common_instances=self.common_instances - )[0] - elif model_type == 'inpaint': - return ckpt_load_helper( - 'gligen-inpainting-text-box', - is_inpaint=True, is_style=False, common_instances=self.common_instances - )[0] - elif model_type == 'style': - return ckpt_load_helper( - 'gligen-generation-text-image-box', - is_inpaint=False, is_style=True, common_instances=self.common_instances - )[0] - - assert False - -instance = Instance() - - -def load_clip_model(): - from transformers import CLIPProcessor, CLIPModel - version = "openai/clip-vit-large-patch14" - model = CLIPModel.from_pretrained(version).cuda() - processor = CLIPProcessor.from_pretrained(version) - - return { - 'version': version, - 'model': model, - 'processor': processor, - } - -clip_model = load_clip_model() - - -class ImageMask(gr.components.Image): - """ - Sets: source="canvas", tool="sketch" - """ - - is_template = True - - def __init__(self, **kwargs): - super().__init__(source="upload", tool="sketch", interactive=True, **kwargs) - - def preprocess(self, x): - if x is None: - return x - if self.tool == "sketch" and self.source in ["upload", "webcam"] and type(x) != dict: - - decode_image = processing_utils.decode_base64_to_image(x) - width, height = decode_image.size - img = np.asarray(decode_image) - return {'image':img, 'mask':binarize_2(img)} - - mask = np.zeros((height, width, 4), dtype=np.uint8) - - mask[..., -1] = 255 - mask = self.postprocess(mask) - x = {'image': x, 'mask': mask} - print('vao preprocess-------------------------') - hh = super().preprocess(x) - if (hh['image'].min()!=255) and (hh['mask'][:,:,:3].max()==0): - - hh['mask'] = binarize_2(hh['image']) - - return hh - - -class Blocks(gr.Blocks): - - def __init__( - self, - theme: str = "default", - analytics_enabled: Optional[bool] = None, - mode: str = "blocks", - title: str = "Gradio", - css: Optional[str] = None, - **kwargs, - ): - - self.extra_configs = { - 'thumbnail': kwargs.pop('thumbnail', ''), - 'url': kwargs.pop('url', 'https://gradio.app/'), - 'creator': kwargs.pop('creator', '@teamGradio'), - } - - super(Blocks, self).__init__(theme, analytics_enabled, mode, title, css, **kwargs) - warnings.filterwarnings("ignore") - - def get_config_file(self): - config = super(Blocks, self).get_config_file() - - for k, v in self.extra_configs.items(): - config[k] = v - - return config - -''' -inference model -''' - -# @torch.no_grad() -def inference(task, language_instruction, phrase_list, location_list, inpainting_boxes_nodrop, image, - alpha_sample, guidance_scale, batch_size, - fix_seed, rand_seed, actual_mask, style_image, - *args, 
**kwargs): - # import pdb; pdb.set_trace() - - # grounding_instruction = json.loads(grounding_instruction) - # phrase_list, location_list = [], [] - # for k, v in grounding_instruction.items(): - # phrase_list.append(k) - # location_list.append(v) - - placeholder_image = Image.open('images/teddy.jpg').convert("RGB") - image_list = [placeholder_image] * len(phrase_list) # placeholder input for visual prompt, which is disabled - - batch_size = int(batch_size) - if not 1 <= batch_size <= 4: - batch_size = 1 - - if style_image == None: - has_text_mask = 1 - has_image_mask = 0 # then we hack above 'image_list' - else: - valid_phrase_len = len(phrase_list) - - phrase_list += ['placeholder'] - has_text_mask = [1]*valid_phrase_len + [0] - - image_list = [placeholder_image]*valid_phrase_len + [style_image] - has_image_mask = [0]*valid_phrase_len + [1] - - location_list += [ [0.0, 0.0, 1, 0.01] ] # style image grounding location - - instruction = dict( - prompt = language_instruction, - phrases = phrase_list, - images = image_list, - locations = location_list, - alpha_type = [alpha_sample, 0, 1.0 - alpha_sample], - has_text_mask = has_text_mask, - has_image_mask = has_image_mask, - save_folder_name = language_instruction, - guidance_scale = guidance_scale, - batch_size = batch_size, - fix_seed = bool(fix_seed), - rand_seed = int(rand_seed), - actual_mask = actual_mask, - inpainting_boxes_nodrop = inpainting_boxes_nodrop, - ) - - get_model = partial(instance.get_model, - batch_size=batch_size, - instruction=language_instruction, - phrase_list=phrase_list) - - with torch.autocast(device_type='cuda', dtype=torch.float16): - if task == 'User provide boxes' or 'Available boxes': - if style_image == None: - result = grounded_generation_box(get_model('base'), instruction, *args, **kwargs) - torch.cuda.empty_cache() - return result - else: - return grounded_generation_box(get_model('style'), instruction, *args, **kwargs) - - -def draw_box(boxes=[], texts=[], img=None): - if len(boxes) == 0 and img is None: - return None - - if img is None: - img = Image.new('RGB', (512, 512), (255, 255, 255)) - colors = ["red", "olive", "blue", "green", "orange", "brown", "cyan", "purple"] - draw = ImageDraw.Draw(img) - font = ImageFont.truetype("DejaVuSansMono.ttf", size=18) - for bid, box in enumerate(boxes): - draw.rectangle([box[0], box[1], box[2], box[3]], outline=colors[bid % len(colors)], width=4) - anno_text = texts[bid] - draw.rectangle([box[0], box[3] - int(font.size * 1.2), box[0] + int((len(anno_text) + 0.8) * font.size * 0.6), box[3]], outline=colors[bid % len(colors)], fill=colors[bid % len(colors)], width=4) - draw.text([box[0] + int(font.size * 0.2), box[3] - int(font.size*1.2)], anno_text, font=font, fill=(255,255,255)) - return img - -def get_concat(ims): - if len(ims) == 1: - n_col = 1 - else: - n_col = 2 - n_row = math.ceil(len(ims) / 2) - dst = Image.new('RGB', (ims[0].width * n_col, ims[0].height * n_row), color="white") - for i, im in enumerate(ims): - row_id = i // n_col - col_id = i % n_col - dst.paste(im, (im.width * col_id, im.height * row_id)) - return dst - - -def auto_append_grounding(language_instruction, grounding_texts): - for grounding_text in grounding_texts: - if grounding_text.lower() not in language_instruction.lower() and grounding_text != 'auto': - language_instruction += "; " + grounding_text - return language_instruction - - - - -def generate(task, language_instruction, grounding_texts, sketch_pad, - alpha_sample, guidance_scale, batch_size, - fix_seed, rand_seed, use_actual_mask, 
append_grounding, style_cond_image, - state): - - if 'boxes' not in state: - state['boxes'] = [] - - boxes = state['boxes'] - grounding_texts = [x.strip() for x in grounding_texts.split(';')] - # assert len(boxes) == len(grounding_texts) - if len(boxes) != len(grounding_texts): - if len(boxes) < len(grounding_texts): - raise ValueError("""The number of boxes should be equal to the number of grounding objects. -Number of boxes drawn: {}, number of grounding tokens: {}. -Please draw boxes accordingly on the sketch pad.""".format(len(boxes), len(grounding_texts))) - grounding_texts = grounding_texts + [""] * (len(boxes) - len(grounding_texts)) - - boxes = (np.asarray(boxes) / 512).tolist() - grounding_instruction = json.dumps({obj: box for obj,box in zip(grounding_texts, boxes)}) - image = None - actual_mask = None - - - if append_grounding: - language_instruction = auto_append_grounding(language_instruction, grounding_texts) - - gen_images, gen_overlays = inference( - task, language_instruction, grounding_texts,boxes, boxes, image, - alpha_sample, guidance_scale, batch_size, - fix_seed, rand_seed, actual_mask, style_cond_image, clip_model=clip_model, - ) - blank_samples = batch_size % 2 if batch_size > 1 else 0 - gen_images = [gr.Image.update(value=x, visible=True) for i,x in enumerate(gen_images)] \ - + [gr.Image.update(value=None, visible=True) for _ in range(blank_samples)] \ - + [gr.Image.update(value=None, visible=False) for _ in range(4 - batch_size - blank_samples)] - - return gen_images + [state] - - -def binarize(x): - return (x != 0).astype('uint8') * 255 -def binarize_2(x): - gray_image = cv2.cvtColor(x, cv2.COLOR_BGR2GRAY) - return (gray_image!=255).astype('uint8') * 255 - -def sized_center_crop(img, cropx, cropy): - y, x = img.shape[:2] - startx = x // 2 - (cropx // 2) - starty = y // 2 - (cropy // 2) - return img[starty:starty+cropy, startx:startx+cropx] - -def sized_center_fill(img, fill, cropx, cropy): - y, x = img.shape[:2] - startx = x // 2 - (cropx // 2) - starty = y // 2 - (cropy // 2) - img[starty:starty+cropy, startx:startx+cropx] = fill - return img - -def sized_center_mask(img, cropx, cropy): - y, x = img.shape[:2] - startx = x // 2 - (cropx // 2) - starty = y // 2 - (cropy // 2) - center_region = img[starty:starty+cropy, startx:startx+cropx].copy() - img = (img * 0.2).astype('uint8') - img[starty:starty+cropy, startx:startx+cropx] = center_region - return img - -def center_crop(img, HW=None, tgt_size=(512, 512)): - if HW is None: - H, W = img.shape[:2] - HW = min(H, W) - img = sized_center_crop(img, HW, HW) - img = Image.fromarray(img) - img = img.resize(tgt_size) - return np.array(img) - -def draw(task, input, grounding_texts, new_image_trigger, state, generate_parsed, box_image): - print('input', generate_parsed) - - if type(input) == dict: - image = input['image'] - mask = input['mask'] - if generate_parsed==1: - generate_parsed = 0 - # import pdb; pdb.set_trace() - print('do nothing') - - return [box_image, new_image_trigger, 1., state, generate_parsed] - - else: - mask = input - - if mask.ndim == 3: - mask = mask[..., 0] - - image_scale = 1.0 - - print('vao draw--------------------') - mask = binarize(mask) - if mask.shape != (512, 512): - # assert False, "should not receive any non- 512x512 masks." 
- if 'original_image' in state and state['original_image'].shape[:2] == mask.shape: - mask = center_crop(mask, state['inpaint_hw']) - image = center_crop(state['original_image'], state['inpaint_hw']) - else: - mask = np.zeros((512, 512), dtype=np.uint8) - mask = binarize(mask) - - if type(mask) != np.ndarray: - mask = np.array(mask) - # - if mask.sum() == 0: - state = {} - print('delete state') - - if True: - image = None - else: - image = Image.fromarray(image) - - if 'boxes' not in state: - state['boxes'] = [] - - if 'masks' not in state or len(state['masks']) == 0 : - state['masks'] = [] - last_mask = np.zeros_like(mask) - else: - last_mask = state['masks'][-1] - - if type(mask) == np.ndarray and mask.size > 1 : - diff_mask = mask - last_mask - else: - diff_mask = np.zeros([]) - - if diff_mask.sum() > 0: - x1x2 = np.where(diff_mask.max(0) > 1)[0] - y1y2 = np.where(diff_mask.max(1) > 1)[0] - y1, y2 = y1y2.min(), y1y2.max() - x1, x2 = x1x2.min(), x1x2.max() - - if (x2 - x1 > 5) and (y2 - y1 > 5): - state['masks'].append(mask.copy()) - state['boxes'].append((x1, y1, x2, y2)) - - grounding_texts = [x.strip() for x in grounding_texts.split(';')] - grounding_texts = [x for x in grounding_texts if len(x) > 0] - if len(grounding_texts) < len(state['boxes']): - grounding_texts += [f'Obj. {bid+1}' for bid in range(len(grounding_texts), len(state['boxes']))] - - box_image = draw_box(state['boxes'], grounding_texts, image) - generate_parsed = 0 - - return [box_image, new_image_trigger, image_scale, state, generate_parsed] - -def change_state(bboxes,layout, state, instruction, trigger_stage, boxes): - if trigger_stage ==0 : - return [boxes, state, 0] - # mask = - state['boxes'] = [] - state['masks'] = [] - image = None - list_boxes = bboxes.split('/') - result =[] - for b in list_boxes: - ints = b[1:-1].split(',') - l = [] - for i in ints: - l.append(int(i)) - result.append(l) - print('run change state') - - for box in result: - state['boxes'].append(box) - grounding_texts = [x.strip() for x in instruction.split(';')] - grounding_texts = [x for x in grounding_texts if len(x) > 0] - if len(grounding_texts) < len(result): - grounding_texts += [f'Obj. 
{bid+1}' for bid in range(len(grounding_texts), len(result))] - - box_image = draw_box(result, grounding_texts) - - mask = binarize_2(layout['image']) - state['masks'].append(mask.copy()) - # print('done change state', state) - print('done change state') - # import pdb; pdb.set_trace() - return [box_image,state, trigger_stage] - -def example_click(name, grounding_instruction, instruction, bboxes,generate_parsed, trigger_parsed): - - list_boxes = bboxes.split('/') - result =[] - - for b in list_boxes: - ints = b[1:-1].split(',') - l = [] - for i in ints: - l.append(int(i)) - result.append(l) - print('run change state') - - box_image = draw_box(result, instruction) - trigger_parsed += 1 - print('done the example click') - return [box_image, trigger_parsed] - -def clear(task, sketch_pad_trigger, batch_size, state,trigger_stage, switch_task=False): - - sketch_pad_trigger = sketch_pad_trigger + 1 - trigger_stage = 0 - blank_samples = batch_size % 2 if batch_size > 1 else 0 - out_images = [gr.Image.update(value=None, visible=True) for i in range(batch_size)] \ - + [gr.Image.update(value=None, visible=True) for _ in range(blank_samples)] \ - + [gr.Image.update(value=None, visible=False) for _ in range(4 - batch_size - blank_samples)] - state = {} - return [None, sketch_pad_trigger, None, 1.0] + out_images + [state] + [trigger_stage] - -css = """ -#img2img_image, #img2img_image > .fixed-height, #img2img_image > .fixed-height > div, #img2img_image > .fixed-height > div > img -{ - height: var(--height) !important; - max-height: var(--height) !important; - min-height: var(--height) !important; -} -#paper-info a { - color:#008AD7; - text-decoration: none; -} -#paper-info a:hover { - cursor: pointer; - text-decoration: none; -} -#my_image > div.fixed-height -{ - height: var(--height) !important; -} -""" - -rescale_js = """ -function(x) { - const root = document.querySelector('gradio-app').shadowRoot || document.querySelector('gradio-app'); - let image_scale = parseFloat(root.querySelector('#image_scale input').value) || 1.0; - const image_width = root.querySelector('#img2img_image').clientWidth; - const target_height = parseInt(image_width * image_scale); - document.body.style.setProperty('--height', `${target_height}px`); - root.querySelectorAll('button.justify-center.rounded')[0].style.display='none'; - root.querySelectorAll('button.justify-center.rounded')[1].style.display='none'; - return x; -} -""" -# [Paper] -with Blocks( - css=css, - analytics_enabled=False, - title="Attention-refocusing demo", -) as main: - description = """

    - Grounded Text-to-Image Synthesis with Attention Refocusing -
    - - [Project Page] - - [GitHub] - -

    -

    - To identify the areas of interest based on specific spatial parameters, you need to (1) ⌨️ input the names of the concepts you're interested in Grounding Instruction, and (2) 🖱️ draw their corresponding bounding boxes using Sketch Pad -- the parsed boxes will automatically be showed up once you've drawn them. -
    - For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. Duplicate Space -

    - """ - gr.HTML(description) - - with gr.Row(): - with gr.Column(scale=4): - sketch_pad_trigger = gr.Number(value=0, visible=False) - sketch_pad_resize_trigger = gr.Number(value=0, visible=False) - trigger_stage = gr.Number(value=0, visible=False) - - init_white_trigger = gr.Number(value=0, visible=False) - image_scale = gr.Number(value=1.0, elem_id="image_scale", visible=False) - new_image_trigger = gr.Number(value=0, visible=False) - text_box = gr.Textbox(visible=False) - generate_parsed = gr.Number(value=0, visible=False) - - task = gr.Radio( - choices=["Available boxes", 'User provide boxes'], - type="value", - value="User provide boxes", - label="Task", - visible=False - - ) - language_instruction = gr.Textbox( - label="Language instruction", - ) - grounding_instruction = gr.Textbox( - label="Grounding instruction (Separated by semicolon)", - ) - with gr.Row(): - sketch_pad = ImageMask(label="Sketch Pad", elem_id="img2img_image") - out_imagebox = gr.Image(type="pil",elem_id="my_image" ,label="Parsed Sketch Pad", shape=(512,512)) - with gr.Row(): - clear_btn = gr.Button(value='Clear') - gen_btn = gr.Button(value='Generate') - with gr.Row(): - parsed_btn = gr.Button(value='generate parsed boxes', visible=False) - - with gr.Accordion("Advanced Options", open=False): - with gr.Column(): - alpha_sample = gr.Slider(minimum=0, maximum=1.0, step=0.1, value=0.3, label="Scheduled Sampling (τ)") - guidance_scale = gr.Slider(minimum=0, maximum=50, step=0.5, value=7.5, label="Guidance Scale") - batch_size = gr.Slider(minimum=1, maximum=4,visible=False, step=1, value=1, label="Number of Samples") - append_grounding = gr.Checkbox(value=True, label="Append grounding instructions to the caption") - use_actual_mask = gr.Checkbox(value=False, label="Use actual mask for inpainting", visible=False) - with gr.Row(): - fix_seed = gr.Checkbox(value=True, label="Fixed seed") - rand_seed = gr.Slider(minimum=0, maximum=1000, step=1, value=0, label="Seed") - - with gr.Row(): - use_style_cond = gr.Checkbox(value=False,visible=False, label="Enable Style Condition") - style_cond_image = gr.Image(type="pil",visible=False, label="Style Condition", interactive=True) - with gr.Column(scale=4): - gr.HTML('Generated Images') - with gr.Row(): - out_gen_1 = gr.Image(type="pil", visible=True, show_label=False) - out_gen_2 = gr.Image(type="pil", visible=False, show_label=False) - with gr.Row(): - out_gen_3 = gr.Image(type="pil", visible=False, show_label=False) - out_gen_4 = gr.Image(type="pil", visible=False, show_label=False) - - state = gr.State({}) - - - class Controller: - def __init__(self): - self.calls = 0 - self.tracks = 0 - self.resizes = 0 - self.scales = 0 - - def init_white(self, init_white_trigger): - self.calls += 1 - return np.ones((512, 512), dtype='uint8') * 255, 1.0, init_white_trigger+1 - - def change_n_samples(self, n_samples): - blank_samples = n_samples % 2 if n_samples > 1 else 0 - return [gr.Image.update(visible=True) for _ in range(n_samples + blank_samples)] \ - + [gr.Image.update(visible=False) for _ in range(4 - n_samples - blank_samples)] - - controller = Controller() - main.load( - lambda x:x+1, - inputs=sketch_pad_trigger, - outputs=sketch_pad_trigger, - queue=False) - - sketch_pad.edit( - draw, - inputs=[task, sketch_pad, grounding_instruction, sketch_pad_resize_trigger, state, generate_parsed, out_imagebox], - outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, state, generate_parsed], - queue=False, - ) - trigger_stage.change( - change_state, - inputs=[text_box,sketch_pad, 
state, grounding_instruction, trigger_stage,out_imagebox], - outputs=[out_imagebox,state,trigger_stage], - queue=True - ) - grounding_instruction.change( - draw, - inputs=[task, sketch_pad, grounding_instruction, sketch_pad_resize_trigger, state, generate_parsed,out_imagebox], - outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, state, generate_parsed], - queue=False, - ) - clear_btn.click( - clear, - inputs=[task, sketch_pad_trigger, batch_size,trigger_stage, state], - outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, out_gen_1, out_gen_2, out_gen_3, out_gen_4, state, trigger_stage], - queue=False) - - sketch_pad_trigger.change( - controller.init_white, - inputs=[init_white_trigger], - outputs=[sketch_pad, image_scale, init_white_trigger], - queue=False) - - gen_btn.click( - generate, - inputs=[ - task, language_instruction, grounding_instruction, sketch_pad, - alpha_sample, guidance_scale, batch_size, - fix_seed, rand_seed, - use_actual_mask, - append_grounding, style_cond_image, - state, - ], - outputs=[out_gen_1, out_gen_2, out_gen_3, out_gen_4, state], - queue=True - ) - init_white_trigger.change( - None, - None, - init_white_trigger, - _js=rescale_js, - queue=False) - examples = [ - [ - 'guide_imgs/0_a_cat_on_the_right_of_a_dog.jpg', - "a cat;a dog", - "a cat on the right of a dog", - '(291, 88, 481, 301)/(25, 64, 260, 391)', - 1, 1 - ], - [ - 'guide_imgs/0_a_bus_on_the_left_of_a_car.jpg',#'guide_imgs/0_a_bus_on_the_left_of_a_car.jpg', - "a bus;a car", - "a bus and a car", - '(8,128,266,384)/(300,196,502,316)', #'(8,128,266,384)', #/(300,196,502,316) - 1, 2 - ], - [ - 'guide_imgs/1_Two_cars_on_the_street..jpg', - "a car;a car", - "Two cars on the street.", - '(34, 98, 247, 264)/(271, 122, 481, 293)', - 1, 3 - ], - [ - 'guide_imgs/80_two_apples_lay_side_by_side_on_a_wooden_table,_their_glossy_red_and_green_skins_glinting_in_the_sunlight..jpg', - "an apple;an apple", - "two apples lay side by side on a wooden table, their glossy red and green skins glinting in the sunlight.", - '(40, 210, 235, 450)/(275, 210, 470, 450)', - 1, 4 - ], - [ - 'guide_imgs/10_A_banana_on_the_left_of_an_apple..jpg', - "a banana;an apple", - "A banana on the left of an apple.", - '(62, 193, 225, 354)/(300, 184, 432, 329)', - 1, 5 - ], - [ - 'guide_imgs/15_A_pizza_on_the_right_of_a_suitcase..jpg', - "a pizza ;a suitcase", - "A pizza on the right of a suitcase.", - '(307, 112, 490, 280)/(41, 120, 244, 270)', - 1, 6 - ], - [ - 'guide_imgs/1_A_wine_glass_on_top_of_a_dog..jpg', - "a wine glass;a dog", - "A wine glass on top of a dog.", - '(206, 78, 306, 214)/(137, 222, 367, 432)', - 1, 7 - ] - , - [ - 'guide_imgs/2_A_bicycle_on_top_of_a_boat..jpg', - "a bicycle;a boat", - "A bicycle on top of a boat.", - '(185, 110, 335, 205)/(111, 228, 401, 373)', - 1, 8 - ] - , - [ - 'guide_imgs/4_A_laptop_on_top_of_a_teddy_bear..jpg', - "a laptop;a teddy bear", - "A laptop on top of a teddy bear.", - '(180, 70, 332, 210)/(150, 240, 362, 420)', - 1, 9 - ] - , - [ - 'guide_imgs/0_A_train_on_top_of_a_surfboard..jpg', - "a train;a surfboard", - "A train on top of a surfboard.", - '(130, 80, 385, 240)/(75, 260, 440, 450)', - 1, 10 - ] - ] - - with gr.Column(): - - create_examples( - examples=examples, - inputs=[sketch_pad, grounding_instruction,language_instruction , text_box, generate_parsed, trigger_stage], - outputs=None, - fn=None, - cache_examples=False, - - ) - -main.queue(concurrency_count=1, api_open=False) -main.launch(share=False, show_api=False, show_error=True, debug=False, 
server_name="0.0.0.0") diff --git a/spaces/awacke1/Flan-Upvote-Downvote-Human-Feedback/README.md b/spaces/awacke1/Flan-Upvote-Downvote-Human-Feedback/README.md deleted file mode 100644 index 5c80c56d52517038d0b7ff8ac947a947fe080fec..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Flan-Upvote-Downvote-Human-Feedback/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🌮💦🍮Flan.Upvote.Downvote.Human.Feedback -emoji: 🌮💦🍮 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/src/lights/SpotLight.js b/spaces/banana-projects/web3d/node_modules/three/src/lights/SpotLight.js deleted file mode 100644 index 90d891af10d6aa29aab8e81aec126b59db7d3768..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/lights/SpotLight.js +++ /dev/null @@ -1,72 +0,0 @@ -import { Light } from './Light.js'; -import { SpotLightShadow } from './SpotLightShadow.js'; -import { Object3D } from '../core/Object3D.js'; - -/** - * @author alteredq / http://alteredqualia.com/ - */ - -function SpotLight( color, intensity, distance, angle, penumbra, decay ) { - - Light.call( this, color, intensity ); - - this.type = 'SpotLight'; - - this.position.copy( Object3D.DefaultUp ); - this.updateMatrix(); - - this.target = new Object3D(); - - Object.defineProperty( this, 'power', { - get: function () { - - // intensity = power per solid angle. - // ref: equation (17) from https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf - return this.intensity * Math.PI; - - }, - set: function ( power ) { - - // intensity = power per solid angle. - // ref: equation (17) from https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf - this.intensity = power / Math.PI; - - } - } ); - - this.distance = ( distance !== undefined ) ? distance : 0; - this.angle = ( angle !== undefined ) ? angle : Math.PI / 3; - this.penumbra = ( penumbra !== undefined ) ? penumbra : 0; - this.decay = ( decay !== undefined ) ? decay : 1; // for physically correct lights, should be 2. 
- - this.shadow = new SpotLightShadow(); - -} - -SpotLight.prototype = Object.assign( Object.create( Light.prototype ), { - - constructor: SpotLight, - - isSpotLight: true, - - copy: function ( source ) { - - Light.prototype.copy.call( this, source ); - - this.distance = source.distance; - this.angle = source.angle; - this.penumbra = source.penumbra; - this.decay = source.decay; - - this.target = source.target.clone(); - - this.shadow = source.shadow.clone(); - - return this; - - } - -} ); - - -export { SpotLight }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/shadowmask_pars_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/shadowmask_pars_fragment.glsl.js deleted file mode 100644 index a81f6dd87da5d255249e9d08dc1bff05c8597c21..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/shadowmask_pars_fragment.glsl.js +++ /dev/null @@ -1,63 +0,0 @@ -export default /* glsl */` -float getShadowMask() { - - float shadow = 1.0; - - #ifdef USE_SHADOWMAP - - #if NUM_DIR_LIGHTS > 0 - - DirectionalLight directionalLight; - - #pragma unroll_loop - for ( int i = 0; i < NUM_DIR_LIGHTS; i ++ ) { - - directionalLight = directionalLights[ i ]; - shadow *= bool( directionalLight.shadow ) ? getShadow( directionalShadowMap[ i ], directionalLight.shadowMapSize, directionalLight.shadowBias, directionalLight.shadowRadius, vDirectionalShadowCoord[ i ] ) : 1.0; - - } - - #endif - - #if NUM_SPOT_LIGHTS > 0 - - SpotLight spotLight; - - #pragma unroll_loop - for ( int i = 0; i < NUM_SPOT_LIGHTS; i ++ ) { - - spotLight = spotLights[ i ]; - shadow *= bool( spotLight.shadow ) ? getShadow( spotShadowMap[ i ], spotLight.shadowMapSize, spotLight.shadowBias, spotLight.shadowRadius, vSpotShadowCoord[ i ] ) : 1.0; - - } - - #endif - - #if NUM_POINT_LIGHTS > 0 - - PointLight pointLight; - - #pragma unroll_loop - for ( int i = 0; i < NUM_POINT_LIGHTS; i ++ ) { - - pointLight = pointLights[ i ]; - shadow *= bool( pointLight.shadow ) ? getPointShadow( pointShadowMap[ i ], pointLight.shadowMapSize, pointLight.shadowBias, pointLight.shadowRadius, vPointShadowCoord[ i ], pointLight.shadowCameraNear, pointLight.shadowCameraFar ) : 1.0; - - } - - #endif - - /* - #if NUM_RECT_AREA_LIGHTS > 0 - - // TODO (abelnation): update shadow for Area light - - #endif - */ - - #endif - - return shadow; - -} -`; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/archs/edsr_arch.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/archs/edsr_arch.py deleted file mode 100644 index b80566f11fbd4782d68eee8fbf7da686f89dc4e7..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/archs/edsr_arch.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch import nn as nn - -from basicsr.archs.arch_util import ResidualBlockNoBN, Upsample, make_layer -from basicsr.utils.registry import ARCH_REGISTRY - - -@ARCH_REGISTRY.register() -class EDSR(nn.Module): - """EDSR network structure. - - Paper: Enhanced Deep Residual Networks for Single Image Super-Resolution. - Ref git repo: https://github.com/thstkdgus35/EDSR-PyTorch - - Args: - num_in_ch (int): Channel number of inputs. - num_out_ch (int): Channel number of outputs. - num_feat (int): Channel number of intermediate features. - Default: 64. - num_block (int): Block number in the trunk network. Default: 16. - upscale (int): Upsampling factor. Support 2^n and 3. - Default: 4. 
- res_scale (float): Used to scale the residual in residual block. - Default: 1. - img_range (float): Image range. Default: 255. - rgb_mean (tuple[float]): Image mean in RGB orders. - Default: (0.4488, 0.4371, 0.4040), calculated from DIV2K dataset. - """ - - def __init__(self, - num_in_ch, - num_out_ch, - num_feat=64, - num_block=16, - upscale=4, - res_scale=1, - img_range=255., - rgb_mean=(0.4488, 0.4371, 0.4040)): - super(EDSR, self).__init__() - - self.img_range = img_range - self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1) - - self.conv_first = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1) - self.body = make_layer(ResidualBlockNoBN, num_block, num_feat=num_feat, res_scale=res_scale, pytorch_init=True) - self.conv_after_body = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.upsample = Upsample(upscale, num_feat) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - - def forward(self, x): - self.mean = self.mean.type_as(x) - - x = (x - self.mean) * self.img_range - x = self.conv_first(x) - res = self.conv_after_body(self.body(x)) - res += x - - x = self.conv_last(self.upsample(res)) - x = x / self.img_range + self.mean - - return x diff --git a/spaces/bioriAsaeru/text-to-voice/12yo Preteen Web Cam.md b/spaces/bioriAsaeru/text-to-voice/12yo Preteen Web Cam.md deleted file mode 100644 index e23277421134f8758c89b0589f1a41b263c8ab01..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/12yo Preteen Web Cam.md +++ /dev/null @@ -1,6 +0,0 @@ -

    12yo Preteen Web Cam


    Downloadhttps://urloso.com/2uySc5



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Binksetvolume12binkw32dlldownloadfree The Essential DLL File for Bink Video Games.md b/spaces/bioriAsaeru/text-to-voice/Binksetvolume12binkw32dlldownloadfree The Essential DLL File for Bink Video Games.md deleted file mode 100644 index cd361e278948c1645c5497fb9b4d449d159cda31..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Binksetvolume12binkw32dlldownloadfree The Essential DLL File for Bink Video Games.md +++ /dev/null @@ -1,6 +0,0 @@ -

    binksetvolume12binkw32dlldownloadfree


    Download Zip > https://urloso.com/2uyP4g



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Crack ((FULL)) V4 Update 2 Sims 4.md b/spaces/bioriAsaeru/text-to-voice/Crack ((FULL)) V4 Update 2 Sims 4.md deleted file mode 100644 index 4a686aadc3967134bd2795612ca00eda4b8c195e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Crack ((FULL)) V4 Update 2 Sims 4.md +++ /dev/null @@ -1,6 +0,0 @@ -

    crack v4 update 2 sims 4


    Download · https://urloso.com/2uyRPZ



    - -Buy Motorola Razr 5G (2020) Dual-SIM XT2071-4 256GB ROM + 8GB RAM Factory Unlocked Flip Android Smartphone (Polished Graphite) - International ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/G Hadley Linear Programming Narosa 2002 Pdf Free yordabbind An Easy-to-Follow Introduction to Linear Programming with Examples and Exercises.md b/spaces/bioriAsaeru/text-to-voice/G Hadley Linear Programming Narosa 2002 Pdf Free yordabbind An Easy-to-Follow Introduction to Linear Programming with Examples and Exercises.md deleted file mode 100644 index b180d6fab7f81c57334c4e4be46de4511bed9341..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/G Hadley Linear Programming Narosa 2002 Pdf Free yordabbind An Easy-to-Follow Introduction to Linear Programming with Examples and Exercises.md +++ /dev/null @@ -1,6 +0,0 @@ -

    G Hadley Linear Programming Narosa 2002 Pdf Free yordabbind


    DOWNLOAD ►►► https://urloso.com/2uyRj2



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Gta V Default Xex Download Everything You Need to Know About Xex Menu and GTA V Modding.md b/spaces/bioriAsaeru/text-to-voice/Gta V Default Xex Download Everything You Need to Know About Xex Menu and GTA V Modding.md deleted file mode 100644 index 460da825edeb77a930fab986167cfc4c00062179..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Gta V Default Xex Download Everything You Need to Know About Xex Menu and GTA V Modding.md +++ /dev/null @@ -1,19 +0,0 @@ - -

    when i open a xex in ida a window pops up asking for the file format, processor type and a bunch of other stuff. i changed the processor to ppc and selected xbox360xexfile for the format. do i have to change any of the other settings or can i keep the defaults?

    -

    as well for benefits xbdm allowed you to realtime on any xex(yea, any game, not even limited to just reach which aso includes default unmodded xex's you can realtime on and take pictures of your console at any time.

    -

    Gta V Default Xex Download


    Downloadhttps://urloso.com/2uyOn0



    -

    working:
    -TUs check and download
    -covers donwload

    not working
    -push from unity to xbox

    my little improvement:
    -you can see all avaliable TUs for the game(it means, you can update games like GTA V). but, if TU`s mediaid differs from mediaid of installed game, TU`s displayed name would be something like "MID:0C48794E GTA V Title Update 26", where "MID:0C48794E" - TU`s mediaid.

    how to install:
    1. download my default.xex
    2. replace the original default.xex in fsd folder with mine
    3. restart fsd

    *UPDATE 17.10.2016*

    -

    thanks to dizazter, everything is up and running again on the new hosting. you need to download updated version of default.xex
    there are two versions now:
    default.rar - shows TU for all MediaIDs
    default_filter.rar - shows TU only for your MediaID

    -

    from the beginning i was using FSD 775. but i found:
    1. It is not downloding custom game covers
    2. Covers are not downloaded in HD
    3. only front side of game cover is visible
    4. most importantly FSD Freezes (Hang\stop responding)
    it force me to switch to Aurora. at least from my experience i can say, Aurora never freezes.
    But i love FSD, if your version is fixed from above issue than surely i am going to use FSD again.
    Plz tell me

    -

    Thanks a lot for this Gualdimar. Been using it for a while now. I like the fact it doesn't filter all the non-matching Media ID TU's from view. It's especially handy for games that use disc 2 to boot (eg. Watch Dogs), where you have to grab one of the 'MID' labeled TU's. Weird thing is, about three weeks or so ago, cover downloading just seem to stop working for about a week, whereas the other modded version carried on working. Strange. I tried yours again today and it was working again, so I've gone back to your version. Cheers.

    -

    Try to create account on JQE360 but I can't open the website. After some search, I got that FSD is not support anymore, people migrate to aurora. So I download aurora. For game covers I use WebUI in FSD and Aurora Title Editor manually game by game and it's nice. After that I decide to buy external harddisc with xbox games in it, about 1 Tb with around 279 games in it, and think it will be a load of works if I update it one by one.

    -

    My internet on my mobile phone, I don't have router. So I using Laptop with windows 8.1, connect to mobile phone hotspot using laptop's wifi. Using Internet connection sharing to ethernet connect to Xbox. Both set manually IP address on laptop and xbox ethernets, since I don't have dhcp on my laptop. First try with aurora, it works, and I found some website says in FSD using unity account. So I give it a try on last friday. I works, FreeStyle start to download Covers, Background and Description along with screenshots. But after some time, FSD start crash, several restart didn't work, it keep crash. Open file manager and find out OnBoardMU is out of space since my freestyle located in OnBoardMU. Copy it using FTP to my laptop, and count the size about 1.5 Gb. GameDate is too much, so I decide to move it to internal HDD1. After that FSD3 stop crashing, but it no longer resume the download, Refresh workarts one by one still work though, but to much manual selection. Delete the scan paths and readd them do the trick, and it start download automatically. Finally done all the games covers update with total 293 games...

    -

    Free style version : it comes up with this version when I buy this XBOX, so I don't know if the default.xex already updated with the above link. My Dashlauch setting liveblock enabled, livestrong disabled.

    -

    -

    so my brother gave me his xbox 360 star wars edition, a couple of weeks ago he had someone modify it for him so it can play games from the hard drive, it had XeXMenu 1.2 and Freestyle dashboard, he also had a lot of games on it and every thing was working fine except for one game (GTA V) which gave a bad CD error or something which I ignored until I decided to try and solve it so I went to the game folder in XeXmenu hit Y and chose xex patch just to see what would happen then the error changed to "game error the game couldn't start try downloading the game again" and every other game no gives the same error even the ones that were playable before.

    -

    Hey friend just remove xex menu and re-copy it from ur cd or external hd as u prefer i also mistakenly did hat once and only solution i found was to re-copy xex...just google about downloading xex menu it's free. And alwayd do as Swizzy says he's the champ trust me.

    -

    All right i'm home now I deleted the existing xex menu 1.2 from settings >storage > HDD >demo and then plugged in the flash drive and copied the freshly downloaded xex menu 1.2 but I still get the same error when trying to start a game " game error: the game couldn't start. Try downloading the game again"

    -

    ok will try as soon as I get home
    but the thing is I don't have the ISO or sources for these games as I'm not the one who put them on the console in the first place, So what I'm planning to do is to download any XBOX 360 ISO off the internet and look for a tutorial how to run them on the console from a flash drive, in fact I'm currently downloading "Payday 2 [MULTI][XBOX360][Region Free][XDG2][COMPLEX] " this should work right??

    should I remove the existing games?

    -

    but the thing is I don't have the ISO or sources for these games as I'm not the one who put them on the console in the first place, So what I'm planning to do is to download any XBOX 360 ISO off the internet and look for a tutorial how to run them on the console from a flash drive, in fact I'm currently downloading "Payday 2 [MULTI][XBOX360][Region Free][XDG2][COMPLEX] " this should work right??

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/I Saw the Devil [english dub] [buhaypirata] A Film That Will Make You Question Your Morality.md b/spaces/bioriAsaeru/text-to-voice/I Saw the Devil [english dub] [buhaypirata] A Film That Will Make You Question Your Morality.md deleted file mode 100644 index 011e8649333eb7eaf1b97ec0f493726c9ca0dea7..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/I Saw the Devil [english dub] [buhaypirata] A Film That Will Make You Question Your Morality.md +++ /dev/null @@ -1,6 +0,0 @@ -

    I Saw the Devil [english dub] [buhaypirata]


    Download File === https://urloso.com/2uyO8e



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/KMSpico (Windows Office Activator) 10.1.4 Final Portable !!TOP!!.md b/spaces/bioriAsaeru/text-to-voice/KMSpico (Windows Office Activator) 10.1.4 Final Portable !!TOP!!.md deleted file mode 100644 index a3b7827bf6b5b01bb55f4309dc09fb216d854d54..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/KMSpico (Windows Office Activator) 10.1.4 Final Portable !!TOP!!.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    , download sonic the hedgehog free ubuntu download,
    , xlink kameras new free download,, microsoft office 2010 english language pack activator download,, microsoft office 2010 activator free download,, download microsoft office 2010 english language pack activator download,
    , windows 10 build 15063 free download,, windows 7 ultimate 64 bit 7 step free download,, openoffice 4.0 serial key free download for windows,, free dekalb county employee will pay check free download,, yellow douglas nast air dryer serial number free download,, download microsoft office 2010 english language pack activator download,
    , download windows 7 ultimate 64 bit iso with cracks free download,, download microsoft office 2007 serial key free download,, microsoft office 2010 english language pack activator free download,, download pc-3d studio max 2010 ubuntu free download,, dvd restoration tools free download,, download microsoft office 2004 english language pack activator free download,
    , windows 7 ultimate 64 bit iso with cracks free download,, download microsoft office 2007 activator free download,, download microsoft office 2007 english language pack activator free download,, download microsoft office 2010 english language pack activator free download,, download windows 10 home in s mode 1 free download,, microsoft office 2010 english language pack activator free download,, windows 10 on macbook pro performance free download,, microsoft office 2007 serial key download free,, microsoft office 2010 english language pack activator free download,

    -

    , download windows 10 offical iso as an image free download,, maccleaner pro 2013 freeware download,, access 2010 serial key free download,, microsoft office 2013 english language pack activator download,, microsoft office 2007 activator free download,, windows 10 ios emulator 2017 free download,, microsoft office 2013 english language pack activator free download,, install word 2013 dvd for windows 8 free download,, xbox 7 6.0 free download,, pirate bay download serial numbers free download,, why is microsoft office 2017 unable to install free download,, pirate bay download serial numbers free download,, xbox 7 6.0 free download,, download windows 10 technical preview free download,, microsoft office 2013 serial key free download,, advance steel 2010 free crack download,, windows 7 ultimate 64 bit iso full crack free download,, autocad 2013 update 4.1 crack 32 and 64 bit free download,, windows update 2003 free download,, free ipad imei unlocker for windows free download,, downloaded windows 2013 iso free download,, retroarch windows 7 free download,, z0755 cellphone driver free download,

    -

    KMSpico (Windows Office Activator) 10.1.4 Final Portable


    Download →→→ https://urloso.com/2uyP7j



    -
    -
    \ No newline at end of file diff --git a/spaces/breynolds1247/StarryNight_StyleTransfer/helper_functions.py b/spaces/breynolds1247/StarryNight_StyleTransfer/helper_functions.py deleted file mode 100644 index 0663918d6f311f7ab68b8754963c147bb3464a30..0000000000000000000000000000000000000000 --- a/spaces/breynolds1247/StarryNight_StyleTransfer/helper_functions.py +++ /dev/null @@ -1,49 +0,0 @@ -import tensorflow as tf -from tensorflow import keras - -def img_scaler(image, max_dim = 512): - - #Casts tensor to a new data type - original_shape = tf.cast(tf.shape(image)[:-1], tf.float32) - - #Creates scale constant for the image based on imput max_dim - scale_ratio = max_dim / max(original_shape) - - #Casts tensor to a new data type - new_shape = tf.cast(original_shape * scale_ratio, tf.int32) - - #Resizes image - return tf.image.resize(image, new_shape) - -def load_img(image_path, content=True, max_dim = 512): - - if content: - #content images come straight from the web app, so no opening or decoding - img = image_path - - #Convert image to dtype - img = tf.image.convert_image_dtype(img, tf.float32) - - #Scale the image using the created scaler function - img = img_scaler(img, max_dim) - - #Adds a fourth dimension to the Tensor because the model requires a 4-dimensional Tensor - return img[tf.newaxis, :] - - else: - #Read contents of the input filename - img = tf.io.read_file(image_path) - - #Detect whether an image is a BMP, GIF, JPEG, or PNG, - #performs the appropriate operation - #convert the input bytes string into a Tensor of type dtype - img = tf.image.decode_image(img, channels=3) - - #Convert image to dtype - img = tf.image.convert_image_dtype(img, tf.float32) - - #Scale the image using the created scaler function - img = img_scaler(img, max_dim) - - #Adds a fourth dimension to the Tensor because the model requires a 4-dimensional Tensor - return img[tf.newaxis, :] \ No newline at end of file diff --git a/spaces/bzd4576/sovits-sin/modules.py b/spaces/bzd4576/sovits-sin/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/bzd4576/sovits-sin/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/__init__.py deleted file mode 100644 index ed32c5e9d6c4c1599ba960681d9e86889e2cdbd8..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from .chart import DensePoseChartPredictorOutput -from .chart_confidence import decorate_predictor_output_class_with_confidences -from .cse_confidence import decorate_cse_predictor_output_class_with_confidences -from .chart_result import ( - DensePoseChartResult, - DensePoseChartResultWithConfidences, - quantize_densepose_chart_result, - compress_quantized_densepose_chart_result, - decompress_compressed_densepose_chart_result, -) -from .cse import DensePoseEmbeddingPredictorOutput -from .data_relative import DensePoseDataRelative -from .list import DensePoseList -from .mesh import Mesh, create_mesh -from .transform_data import DensePoseTransformData, normalized_coords_transform diff --git a/spaces/caslabs/midi-autocompletion/musicautobot/utils/attention_mask.py b/spaces/caslabs/midi-autocompletion/musicautobot/utils/attention_mask.py deleted file mode 100644 index 7f570ed2ce747f2d11772f7a392d33c6bea576e0..0000000000000000000000000000000000000000 --- a/spaces/caslabs/midi-autocompletion/musicautobot/utils/attention_mask.py +++ /dev/null @@ -1,21 +0,0 @@ -import numpy as np -import torch - -def window_mask(x_len, device, m_len=0, size=(1,1)): - win_size,k = size - mem_mask = torch.zeros((x_len,m_len), device=device) - tri_mask = torch.triu(torch.ones((x_len//win_size+1,x_len//win_size+1), device=device),diagonal=k) - window_mask = tri_mask.repeat_interleave(win_size,dim=0).repeat_interleave(win_size,dim=1)[:x_len,:x_len] - if x_len: window_mask[...,0] = 0 # Always allowing first index to see. 
Otherwise you'll get NaN loss - mask = torch.cat((mem_mask, window_mask), dim=1)[None,None] - return mask.bool() if hasattr(mask, 'bool') else mask.byte() - -def rand_window_mask(x_len,m_len,device,max_size:int=None,p:float=0.2,is_eval:bool=False): - if is_eval or np.random.rand() >= p or max_size is None: - win_size,k = (1,1) - else: win_size,k = (np.random.randint(0,max_size)+1,0) - return window_mask(x_len, device, m_len, size=(win_size,k)) - -def lm_mask(x_len, device): - mask = torch.triu(torch.ones((x_len, x_len), device=device), diagonal=1)[None,None] - return mask.bool() if hasattr(mask, 'bool') else mask.byte() diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/utils.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/utils.py deleted file mode 100644 index 9be920642581ae69f4a4c96795e8382c4f11b50b..0000000000000000000000000000000000000000 --- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/utils.py +++ /dev/null @@ -1,400 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch -import regex as re - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - - -zh_pattern = re.compile(r'[\u4e00-\u9fa5]') -en_pattern = re.compile(r'[a-zA-Z]') -jp_pattern = re.compile(r'[\u3040-\u30ff\u31f0-\u31ff]') -kr_pattern = re.compile(r'[\uac00-\ud7af\u1100-\u11ff\u3130-\u318f\ua960-\ua97f]') -num_pattern=re.compile(r'[0-9]') -comma=r"(?<=[.。!!??;;,,、::'\"‘“”’()()《》「」~——])" #向前匹配但固定长度 -tags={'ZH':'[ZH]','EN':'[EN]','JP':'[JA]','KR':'[KR]'} - -def tag_cjke(text): - '''为中英日韩加tag,中日正则分不开,故先分句分离中日再识别,以应对大部分情况''' - sentences = re.split(r"([.。!!??;;,,、::'\"‘“”’()()【】《》「」~——]+ *(?![0-9]))", text) #分句,排除小数点 - sentences.append("") - sentences = ["".join(i) for i in zip(sentences[0::2],sentences[1::2])] - # print(sentences) - prev_lang=None - tagged_text = "" - for s in sentences: - #全为符号跳过 - nu = re.sub(r'[\s\p{P}]+', '', s, flags=re.U).strip() - if len(nu)==0: - continue - s = re.sub(r'[()()《》「」【】‘“”’]+', '', s) - jp=re.findall(jp_pattern, s) - #本句含日语字符判断为日语 - if len(jp)>0: - prev_lang,tagged_jke=tag_jke(s,prev_lang) - tagged_text +=tagged_jke - else: - prev_lang,tagged_cke=tag_cke(s,prev_lang) - tagged_text +=tagged_cke - return tagged_text - -def tag_jke(text,prev_sentence=None): - '''为英日韩加tag''' - # 初始化标记变量 - tagged_text = "" - prev_lang = None - tagged=0 - # 遍历文本 - for char in text: - # 判断当前字符属于哪种语言 - if jp_pattern.match(char): - lang = "JP" - elif zh_pattern.match(char): - lang = "JP" - elif kr_pattern.match(char): - lang = "KR" - elif en_pattern.match(char): - lang = "EN" - # elif num_pattern.match(char): - # lang = prev_sentence - else: - lang = None - tagged_text += char - continue - # 如果当前语言与上一个语言不同,就添加标记 - if lang != prev_lang: - tagged=1 - if prev_lang==None: # 开头 - tagged_text =tags[lang]+tagged_text - else: - tagged_text =tagged_text+tags[prev_lang]+tags[lang] - - # 重置标记变量 - prev_lang = lang - - # 添加当前字符到标记文本中 - tagged_text += char - - # 在最后一个语言的结尾添加对应的标记 - if prev_lang: - tagged_text += tags[prev_lang] - if not tagged: - prev_lang=prev_sentence - tagged_text =tags[prev_lang]+tagged_text+tags[prev_lang] - - return prev_lang,tagged_text - -def tag_cke(text,prev_sentence=None): - '''为中英韩加tag''' - # 初始化标记变量 - tagged_text = "" - prev_lang = None - # 是否全略过未标签 - tagged=0 - - # 遍历文本 - for char in text: - # 判断当前字符属于哪种语言 - if zh_pattern.match(char): - lang = "ZH" - elif kr_pattern.match(char): - lang = "KR" - elif 
en_pattern.match(char): - lang = "EN" - # elif num_pattern.match(char): - # lang = prev_sentence - else: - # 略过 - lang = None - tagged_text += char - continue - - # 如果当前语言与上一个语言不同,添加标记 - if lang != prev_lang: - tagged=1 - if prev_lang==None: # 开头 - tagged_text =tags[lang]+tagged_text - else: - tagged_text =tagged_text+tags[prev_lang]+tags[lang] - - # 重置标记变量 - prev_lang = lang - - # 添加当前字符到标记文本中 - tagged_text += char - - # 在最后一个语言的结尾添加对应的标记 - if prev_lang: - tagged_text += tags[prev_lang] - # 未标签则继承上一句标签 - if tagged==0: - prev_lang=prev_sentence - tagged_text =tags[prev_lang]+tagged_text+tags[prev_lang] - return prev_lang,tagged_text - - - -def load_checkpoint(checkpoint_path, model, optimizer=None, drop_speaker_emb=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - if k == 'emb_g.weight': - if drop_speaker_emb: - new_state_dict[k] = v - continue - v[:saved_state_dict[k].shape[0], :] = saved_state_dict[k] - new_state_dict[k] = v - else: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict() if optimizer is not None else None, - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = 
np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/modified_finetune_speaker.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="pretrained_models", - help='Model name') - parser.add_argument('-n', '--max_epochs', type=int, default=50, - help='finetune epochs') - parser.add_argument('--drop_speaker_embed', type=bool, default=False, help='whether to drop existing characters') - - args = parser.parse_args() - model_dir = os.path.join("./", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.max_epochs = args.max_epochs - hparams.drop_speaker_embed = args.drop_speaker_embed - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/ccds/vits_onnx/util/build_docker.sh b/spaces/ccds/vits_onnx/util/build_docker.sh deleted file mode 100644 index 3b657e7bc183a007ece05be70f7c828585bbce1e..0000000000000000000000000000000000000000 --- a/spaces/ccds/vits_onnx/util/build_docker.sh +++ /dev/null @@ -1,2 +0,0 @@ -export DOCKER_BUILDKIT=1 -docker build -f Dockerfile -t ccdesue/vits_demo . \ No newline at end of file diff --git a/spaces/ccolas/TastyPiano/src/music/pipeline/midi2processed.py b/spaces/ccolas/TastyPiano/src/music/pipeline/midi2processed.py deleted file mode 100644 index 69496fa31ef1efddd637bb58a5308d977e7b6f43..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music/pipeline/midi2processed.py +++ /dev/null @@ -1,152 +0,0 @@ -import time -import os -import sys -sys.path.append('../../') -import pretty_midi as pm -import numpy as np - -from src.music.utils import get_out_path -from src.music.config import MIN_LEN, MIN_NB_NOTES, MAX_GAP_IN_SONG, REMOVE_FIRST_AND_LAST - - -def sort_notes(notes): - starts = np.array([n.start for n in notes]) - index_sorted = np.argsort(starts) - return [notes[i] for i in index_sorted].copy() - - -def delete_notes_end_after_start(notes): - indexes_to_keep = [i for i, n in enumerate(notes) if n.start < n.end] - return [notes[i] for i in indexes_to_keep].copy() - -def compute_largest_gap(notes): - gaps = [] - latest_note_end_so_far = notes[0].end - for i in range(len(notes) - 1): - note_start = notes[i + 1].start - if latest_note_end_so_far < note_start: - gaps.append(note_start - latest_note_end_so_far) - latest_note_end_so_far = max(latest_note_end_so_far, notes[i+1].end) - if len(gaps) > 0: - largest_gap = np.max(gaps) - else: - largest_gap = 0 - return largest_gap - -def analyze_instrument(inst): - # test that piano plays throughout - init = time.time() - notes = inst.notes.copy() - nb_notes = len(notes) - start = notes[0].start - end = inst.get_end_time() - duration = end - start - largest_gap = compute_largest_gap(notes) - return nb_notes, start, end, duration, largest_gap - -def remove_beginning_and_end(midi, end_time): - notes = midi.instruments[0].notes.copy() - new_notes = [n for n in notes if n.start > REMOVE_FIRST_AND_LAST and n.end < end_time - REMOVE_FIRST_AND_LAST] - midi.instruments[0].notes = new_notes - return midi - -def 
remove_blanks_beginning_and_end(midi): - # remove blanks and the beginning and the end - shift = midi.instruments[0].notes[0].start - for n in midi.instruments[0].notes: - n.start = max(0, n.start - shift) - n.end = max(0, n.end - shift) - for ksc in midi.key_signature_changes: - ksc.time = max(0, ksc.time - shift) - for tsc in midi.time_signature_changes: - tsc.time = max(0, tsc.time - shift) - for pb in midi.instruments[0].pitch_bends: - pb.time = max(0, pb.time - shift) - for cc in midi.instruments[0].control_changes: - cc.time = max(0, cc.time - shift) - return midi - -def is_valid_inst(largest_gap, duration, nb_notes, gap_counts=True): - error_msg = '' - valid = True - if largest_gap > MAX_GAP_IN_SONG and gap_counts: - valid = False - error_msg += f'wide gap ({largest_gap:.2f} secs), ' - if duration < (MIN_LEN + 2 * REMOVE_FIRST_AND_LAST): - valid = False - error_msg += f'too short ({duration:.2f} secs), ' - if nb_notes < MIN_NB_NOTES * duration / 60: # nb of notes needs to be superior to the minimum number / min * the duration in minute - valid = False - error_msg += f'too few notes ({nb_notes}), ' - return valid, error_msg - -def midi2processed(midi_path, processed_path=None, apply_filtering=True, verbose=False, level=0): - assert midi_path.split('.')[-1] in ['mid', 'midi'] - if not processed_path: - processed_path, _, _ = get_out_path(in_path=midi_path, in_word='midi', out_word='processed', out_extension='.mid') - - if verbose: print(' ' * level + f'Processing {midi_path}.') - - if os.path.exists(processed_path): - if verbose: print(' ' * (level + 2) + 'Processed midi file already exists.') - return processed_path, '' - error_msg = 'Error in scrubbing. ' - #try: - inst_error_msg = '' - # load mid file - error_msg += 'Error in midi loading?' - midi = pm.PrettyMIDI(midi_path) - error_msg += ' Nope. Removing invalid notes?' - midi.remove_invalid_notes() # filter invalid notes - error_msg += ' Nope. Filtering instruments?' - # filter instruments - instruments = midi.instruments.copy() - new_instru = [] - instruments_data = [] - for i_inst, inst in enumerate(instruments): - if inst.program <= 7 and not inst.is_drum and len(inst.notes) > 5: - # inst is a piano - # check data - inst.notes = sort_notes(inst.notes) # sort notes - inst.notes = delete_notes_end_after_start(inst.notes) # delete invalid notes - nb_notes, start, end, duration, largest_gap = analyze_instrument(inst) - is_valid, err_msg = is_valid_inst(largest_gap=largest_gap, duration=duration, nb_notes=nb_notes, gap_counts='maestro' not in midi_path) - if is_valid or not apply_filtering: - new_instru.append(inst) - instruments_data.append([nb_notes, start, end, duration, largest_gap]) - else: - inst_error_msg += 'inst1: ' + err_msg + '\n' - instruments_data = np.array(instruments_data) - error_msg += ' Nope. Taking one instrument?' - - if len(new_instru) == 0: - error_msg = f'No piano instrument. {inst_error_msg}' - assert False - elif len(new_instru) > 1: - # take instrument playing the most notes - instrument = new_instru[np.argmax(instruments_data[:, 0])] - else: - instrument = new_instru[0] - instrument.program = 0 # set the instrument to Grand Piano. - midi.instruments = [instrument] # put instrument in midi file - error_msg += ' Nope. Removing blanks?' 
- # remove first and last REMOVE_FIRST_AND_LAST seconds (avoid clapping and jingles) - end_time = midi.get_end_time() - if apply_filtering: midi = remove_beginning_and_end(midi, end_time) - - # remove beginning and end - midi = remove_blanks_beginning_and_end(midi) - error_msg += ' Nope. Saving?' - - # save midi file - midi.write(processed_path) - error_msg += ' Nope.' - if verbose: - extra = f' Saved to {processed_path}' if midi_path else '' - print(' ' * (level + 2) + f'Success! {extra}') - return processed_path, '' - #except: - # if verbose: print(' ' * (level + 2) + 'Scrubbing failed.') - #if os.path.exists(processed_path): - # os.remove(processed_path) - #return None, error_msg + ' Yes.' diff --git a/spaces/ceckenrode/bigscience-bloom/README.md b/spaces/ceckenrode/bigscience-bloom/README.md deleted file mode 100644 index 7476185bdadf8a41dbfe71ea3f60c99b10c96929..0000000000000000000000000000000000000000 --- a/spaces/ceckenrode/bigscience-bloom/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bigscience Bloom -emoji: 😻 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cfwef/gpt/check_proxy.py b/spaces/cfwef/gpt/check_proxy.py deleted file mode 100644 index a6919dd37a559d0f3868fdc74b54c488779083d3..0000000000000000000000000000000000000000 --- a/spaces/cfwef/gpt/check_proxy.py +++ /dev/null @@ -1,27 +0,0 @@ - -def check_proxy(proxies): - import requests - proxies_https = proxies['https'] if proxies is not None else '无' - try: - response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4) - data = response.json() - print(f'查询代理的地理位置,返回的结果是{data}') - if 'country_name' in data: - country = data['country_name'] - result = f"代理配置 {proxies_https}, 代理所在地:{country}" - elif 'error' in data: - result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限" - print(result) - return result - except: - result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效" - print(result) - return result - - -if __name__ == '__main__': - import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染 - from toolbox import get_conf - proxies, = get_conf('proxies') - check_proxy(proxies) - \ No newline at end of file diff --git a/spaces/chrisjay/afro-speech/utils.py b/spaces/chrisjay/afro-speech/utils.py deleted file mode 100644 index f56708d24eff9522a7aee51996554a9f7a7cc215..0000000000000000000000000000000000000000 --- a/spaces/chrisjay/afro-speech/utils.py +++ /dev/null @@ -1,44 +0,0 @@ - -import json -import hashlib -import random -import string - - - -def get_unique_name(): - return ''.join([random.choice(string.ascii_letters - + string.digits) for n in range(32)]) - - -def read_json_lines(file): - with open(file,'r',encoding="utf8") as f: - lines = f.readlines() - data=[] - for l in lines: - data.append(json.loads(l)) - return data - - -def json_dump(thing): - return json.dumps(thing, - ensure_ascii=False, - sort_keys=True, - indent=None, - separators=(',', ':')) - -def get_hash(thing): # stable-hashing - return str(hashlib.md5(json_dump(thing).encode('utf-8')).hexdigest()) - - -def dump_json(thing,file): - with open(file,'w+',encoding="utf8") as f: - json.dump(thing,f) - -def read_json_lines(file): - with open(file,'r',encoding="utf8") as f: - lines = f.readlines() - data=[] - for l in lines: - data.append(json.loads(l)) - return data \ No newline at end of file diff --git 
a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/Markdown-3.4.3.dist-info/LICENSE.md b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/Markdown-3.4.3.dist-info/LICENSE.md deleted file mode 100644 index 2652d97ad1b4686e38b8f4122911bb80e8e16139..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/Markdown-3.4.3.dist-info/LICENSE.md +++ /dev/null @@ -1,29 +0,0 @@ -Copyright 2007, 2008 The Python Markdown Project (v. 1.7 and later) -Copyright 2004, 2005, 2006 Yuri Takhteyev (v. 0.2-1.6b) -Copyright 2004 Manfred Stienstra (the original version) - -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. -* Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. -* Neither the name of the Python Markdown Project nor the - names of its contributors may be used to endorse or promote products - derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE PYTHON MARKDOWN PROJECT ''AS IS'' AND ANY -EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL ANY CONTRIBUTORS TO THE PYTHON MARKDOWN PROJECT -BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageFont.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageFont.py deleted file mode 100644 index 05828a72fdf90dbe434cebf06f968ef7e91189b3..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageFont.py +++ /dev/null @@ -1,997 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PIL raster font management -# -# History: -# 1996-08-07 fl created (experimental) -# 1997-08-25 fl minor adjustments to handle fonts from pilfont 0.3 -# 1999-02-06 fl rewrote most font management stuff in C -# 1999-03-17 fl take pth files into account in load_path (from Richard Jones) -# 2001-02-17 fl added freetype support -# 2001-05-09 fl added TransposedFont wrapper class -# 2002-03-04 fl make sure we have a "L" or "1" font -# 2002-12-04 fl skip non-directory entries in the system path -# 2003-04-29 fl add embedded default font -# 2003-09-27 fl added support for truetype charmap encodings -# -# Todo: -# Adapt to PILFONT2 format (16-bit fonts, compressed, single file) -# -# Copyright (c) 1997-2003 by Secret Labs AB -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. 
-# - -import base64 -import os -import sys -import warnings -from enum import IntEnum -from io import BytesIO - -from . import Image -from ._util import is_directory, is_path - - -class Layout(IntEnum): - BASIC = 0 - RAQM = 1 - - -MAX_STRING_LENGTH = 1_000_000 - - -try: - from . import _imagingft as core -except ImportError as ex: - from ._util import DeferredError - - core = DeferredError(ex) - - -def _string_length_check(text): - if MAX_STRING_LENGTH is not None and len(text) > MAX_STRING_LENGTH: - msg = "too many characters in string" - raise ValueError(msg) - - -# FIXME: add support for pilfont2 format (see FontFile.py) - -# -------------------------------------------------------------------- -# Font metrics format: -# "PILfont" LF -# fontdescriptor LF -# (optional) key=value... LF -# "DATA" LF -# binary data: 256*10*2 bytes (dx, dy, dstbox, srcbox) -# -# To place a character, cut out srcbox and paste at dstbox, -# relative to the character position. Then move the character -# position according to dx, dy. -# -------------------------------------------------------------------- - - -class ImageFont: - """PIL font wrapper""" - - def _load_pilfont(self, filename): - with open(filename, "rb") as fp: - image = None - for ext in (".png", ".gif", ".pbm"): - if image: - image.close() - try: - fullname = os.path.splitext(filename)[0] + ext - image = Image.open(fullname) - except Exception: - pass - else: - if image and image.mode in ("1", "L"): - break - else: - if image: - image.close() - msg = "cannot find glyph data file" - raise OSError(msg) - - self.file = fullname - - self._load_pilfont_data(fp, image) - image.close() - - def _load_pilfont_data(self, file, image): - # read PILfont header - if file.readline() != b"PILfont\n": - msg = "Not a PILfont file" - raise SyntaxError(msg) - file.readline().split(b";") - self.info = [] # FIXME: should be a dictionary - while True: - s = file.readline() - if not s or s == b"DATA\n": - break - self.info.append(s) - - # read PILfont metrics - data = file.read(256 * 20) - - # check image - if image.mode not in ("1", "L"): - msg = "invalid font image mode" - raise TypeError(msg) - - image.load() - - self.font = Image.core.font(image.im, data) - - def getmask(self, text, mode="", *args, **kwargs): - """ - Create a bitmap for the text. - - If the font uses antialiasing, the bitmap should have mode ``L`` and use a - maximum value of 255. Otherwise, it should have mode ``1``. - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - .. versionadded:: 1.1.5 - - :return: An internal PIL storage memory instance as defined by the - :py:mod:`PIL.Image.core` interface module. - """ - return self.font.getmask(text, mode) - - def getbbox(self, text, *args, **kwargs): - """ - Returns bounding box (in pixels) of given text. - - .. versionadded:: 9.2.0 - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - :return: ``(left, top, right, bottom)`` bounding box - """ - _string_length_check(text) - width, height = self.font.getsize(text) - return 0, 0, width, height - - def getlength(self, text, *args, **kwargs): - """ - Returns length (in pixels) of given text. 
- This is the amount by which following text should be offset. - - .. versionadded:: 9.2.0 - """ - _string_length_check(text) - width, height = self.font.getsize(text) - return width - - -## -# Wrapper for FreeType fonts. Application code should use the -# truetype factory function to create font objects. - - -class FreeTypeFont: - """FreeType font wrapper (requires _imagingft service)""" - - def __init__(self, font=None, size=10, index=0, encoding="", layout_engine=None): - # FIXME: use service provider instead - - self.path = font - self.size = size - self.index = index - self.encoding = encoding - - if layout_engine not in (Layout.BASIC, Layout.RAQM): - layout_engine = Layout.BASIC - if core.HAVE_RAQM: - layout_engine = Layout.RAQM - elif layout_engine == Layout.RAQM and not core.HAVE_RAQM: - warnings.warn( - "Raqm layout was requested, but Raqm is not available. " - "Falling back to basic layout." - ) - layout_engine = Layout.BASIC - - self.layout_engine = layout_engine - - def load_from_bytes(f): - self.font_bytes = f.read() - self.font = core.getfont( - "", size, index, encoding, self.font_bytes, layout_engine - ) - - if is_path(font): - if sys.platform == "win32": - font_bytes_path = font if isinstance(font, bytes) else font.encode() - try: - font_bytes_path.decode("ascii") - except UnicodeDecodeError: - # FreeType cannot load fonts with non-ASCII characters on Windows - # So load it into memory first - with open(font, "rb") as f: - load_from_bytes(f) - return - self.font = core.getfont( - font, size, index, encoding, layout_engine=layout_engine - ) - else: - load_from_bytes(font) - - def __getstate__(self): - return [self.path, self.size, self.index, self.encoding, self.layout_engine] - - def __setstate__(self, state): - path, size, index, encoding, layout_engine = state - self.__init__(path, size, index, encoding, layout_engine) - - def getname(self): - """ - :return: A tuple of the font family (e.g. Helvetica) and the font style - (e.g. Bold) - """ - return self.font.family, self.font.style - - def getmetrics(self): - """ - :return: A tuple of the font ascent (the distance from the baseline to - the highest outline point) and descent (the distance from the - baseline to the lowest outline point, a negative value) - """ - return self.font.ascent, self.font.descent - - def getlength(self, text, mode="", direction=None, features=None, language=None): - """ - Returns length (in pixels with 1/64 precision) of given text when rendered - in font with provided direction, features, and language. - - This is the amount by which following text should be offset. - Text bounding box may extend past the length in some fonts, - e.g. when using italics or accents. - - The result is returned as a float; it is a whole number if using basic layout. - - Note that the sum of two lengths may not equal the length of a concatenated - string due to kerning. If you need to adjust for kerning, include the following - character and subtract its length. 
- - For example, instead of :: - - hello = font.getlength("Hello") - world = font.getlength("World") - hello_world = hello + world # not adjusted for kerning - assert hello_world == font.getlength("HelloWorld") # may fail - - use :: - - hello = font.getlength("HelloW") - font.getlength("W") # adjusted for kerning - world = font.getlength("World") - hello_world = hello + world # adjusted for kerning - assert hello_world == font.getlength("HelloWorld") # True - - or disable kerning with (requires libraqm) :: - - hello = draw.textlength("Hello", font, features=["-kern"]) - world = draw.textlength("World", font, features=["-kern"]) - hello_world = hello + world # kerning is disabled, no need to adjust - assert hello_world == draw.textlength("HelloWorld", font, features=["-kern"]) - - .. versionadded:: 8.0.0 - - :param text: Text to measure. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - :return: Width for horizontal, height for vertical text. - """ - _string_length_check(text) - return self.font.getlength(text, mode, direction, features, language) / 64 - - def getbbox( - self, - text, - mode="", - direction=None, - features=None, - language=None, - stroke_width=0, - anchor=None, - ): - """ - Returns bounding box (in pixels) of given text relative to given anchor - when rendered in font with provided direction, features, and language. - - Use :py:meth:`getlength()` to get the offset of following text with - 1/64 pixel precision. The bounding box includes extra margins for - some fonts, e.g. italics or accents. - - .. versionadded:: 8.0.0 - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. 
- - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - :param stroke_width: The width of the text stroke. - - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left. - See :ref:`text-anchors` for valid values. - - :return: ``(left, top, right, bottom)`` bounding box - """ - _string_length_check(text) - size, offset = self.font.getsize( - text, mode, direction, features, language, anchor - ) - left, top = offset[0] - stroke_width, offset[1] - stroke_width - width, height = size[0] + 2 * stroke_width, size[1] + 2 * stroke_width - return left, top, left + width, top + height - - def getmask( - self, - text, - mode="", - direction=None, - features=None, - language=None, - stroke_width=0, - anchor=None, - ink=0, - start=None, - ): - """ - Create a bitmap for the text. - - If the font uses antialiasing, the bitmap should have mode ``L`` and use a - maximum value of 255. If the font has embedded color data, the bitmap - should have mode ``RGBA``. Otherwise, it should have mode ``1``. - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - .. versionadded:: 1.1.5 - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - .. versionadded:: 6.0.0 - - :param stroke_width: The width of the text stroke. - - .. versionadded:: 6.2.0 - - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left. - See :ref:`text-anchors` for valid values. - - .. versionadded:: 8.0.0 - - :param ink: Foreground ink for rendering in RGBA mode. - - .. versionadded:: 8.0.0 - - :param start: Tuple of horizontal and vertical offset, as text may render - differently when starting at fractional coordinates. - - .. versionadded:: 9.4.0 - - :return: An internal PIL storage memory instance as defined by the - :py:mod:`PIL.Image.core` interface module. 
- """ - return self.getmask2( - text, - mode, - direction=direction, - features=features, - language=language, - stroke_width=stroke_width, - anchor=anchor, - ink=ink, - start=start, - )[0] - - def getmask2( - self, - text, - mode="", - direction=None, - features=None, - language=None, - stroke_width=0, - anchor=None, - ink=0, - start=None, - *args, - **kwargs, - ): - """ - Create a bitmap for the text. - - If the font uses antialiasing, the bitmap should have mode ``L`` and use a - maximum value of 255. If the font has embedded color data, the bitmap - should have mode ``RGBA``. Otherwise, it should have mode ``1``. - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - .. versionadded:: 1.1.5 - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - .. versionadded:: 6.0.0 - - :param stroke_width: The width of the text stroke. - - .. versionadded:: 6.2.0 - - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left. - See :ref:`text-anchors` for valid values. - - .. versionadded:: 8.0.0 - - :param ink: Foreground ink for rendering in RGBA mode. - - .. versionadded:: 8.0.0 - - :param start: Tuple of horizontal and vertical offset, as text may render - differently when starting at fractional coordinates. - - .. versionadded:: 9.4.0 - - :return: A tuple of an internal PIL storage memory instance as defined by the - :py:mod:`PIL.Image.core` interface module, and the text offset, the - gap between the starting coordinate and the first marking - """ - _string_length_check(text) - if start is None: - start = (0, 0) - im = None - - def fill(mode, size): - nonlocal im - - im = Image.core.fill(mode, size) - return im - - size, offset = self.font.render( - text, - fill, - mode, - direction, - features, - language, - stroke_width, - anchor, - ink, - start[0], - start[1], - Image.MAX_IMAGE_PIXELS, - ) - Image._decompression_bomb_check(size) - return im, offset - - def font_variant( - self, font=None, size=None, index=None, encoding=None, layout_engine=None - ): - """ - Create a copy of this FreeTypeFont object, - using any specified arguments to override the settings. - - Parameters are identical to the parameters used to initialize this - object. - - :return: A FreeTypeFont object. 
- """ - if font is None: - try: - font = BytesIO(self.font_bytes) - except AttributeError: - font = self.path - return FreeTypeFont( - font=font, - size=self.size if size is None else size, - index=self.index if index is None else index, - encoding=self.encoding if encoding is None else encoding, - layout_engine=layout_engine or self.layout_engine, - ) - - def get_variation_names(self): - """ - :returns: A list of the named styles in a variation font. - :exception OSError: If the font is not a variation font. - """ - try: - names = self.font.getvarnames() - except AttributeError as e: - msg = "FreeType 2.9.1 or greater is required" - raise NotImplementedError(msg) from e - return [name.replace(b"\x00", b"") for name in names] - - def set_variation_by_name(self, name): - """ - :param name: The name of the style. - :exception OSError: If the font is not a variation font. - """ - names = self.get_variation_names() - if not isinstance(name, bytes): - name = name.encode() - index = names.index(name) + 1 - - if index == getattr(self, "_last_variation_index", None): - # When the same name is set twice in a row, - # there is an 'unknown freetype error' - # https://savannah.nongnu.org/bugs/?56186 - return - self._last_variation_index = index - - self.font.setvarname(index) - - def get_variation_axes(self): - """ - :returns: A list of the axes in a variation font. - :exception OSError: If the font is not a variation font. - """ - try: - axes = self.font.getvaraxes() - except AttributeError as e: - msg = "FreeType 2.9.1 or greater is required" - raise NotImplementedError(msg) from e - for axis in axes: - axis["name"] = axis["name"].replace(b"\x00", b"") - return axes - - def set_variation_by_axes(self, axes): - """ - :param axes: A list of values for each axis. - :exception OSError: If the font is not a variation font. - """ - try: - self.font.setvaraxes(axes) - except AttributeError as e: - msg = "FreeType 2.9.1 or greater is required" - raise NotImplementedError(msg) from e - - -class TransposedFont: - """Wrapper for writing rotated or mirrored text""" - - def __init__(self, font, orientation=None): - """ - Wrapper that creates a transposed font from any existing font - object. - - :param font: A font object. - :param orientation: An optional orientation. If given, this should - be one of Image.Transpose.FLIP_LEFT_RIGHT, Image.Transpose.FLIP_TOP_BOTTOM, - Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_180, or - Image.Transpose.ROTATE_270. 
- """ - self.font = font - self.orientation = orientation # any 'transpose' argument, or None - - def getmask(self, text, mode="", *args, **kwargs): - im = self.font.getmask(text, mode, *args, **kwargs) - if self.orientation is not None: - return im.transpose(self.orientation) - return im - - def getbbox(self, text, *args, **kwargs): - # TransposedFont doesn't support getmask2, move top-left point to (0, 0) - # this has no effect on ImageFont and simulates anchor="lt" for FreeTypeFont - left, top, right, bottom = self.font.getbbox(text, *args, **kwargs) - width = right - left - height = bottom - top - if self.orientation in (Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_270): - return 0, 0, height, width - return 0, 0, width, height - - def getlength(self, text, *args, **kwargs): - if self.orientation in (Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_270): - msg = "text length is undefined for text rotated by 90 or 270 degrees" - raise ValueError(msg) - _string_length_check(text) - return self.font.getlength(text, *args, **kwargs) - - -def load(filename): - """ - Load a font file. This function loads a font object from the given - bitmap font file, and returns the corresponding font object. - - :param filename: Name of font file. - :return: A font object. - :exception OSError: If the file could not be read. - """ - f = ImageFont() - f._load_pilfont(filename) - return f - - -def truetype(font=None, size=10, index=0, encoding="", layout_engine=None): - """ - Load a TrueType or OpenType font from a file or file-like object, - and create a font object. - This function loads a font object from the given file or file-like - object, and creates a font object for a font of the given size. - - Pillow uses FreeType to open font files. On Windows, be aware that FreeType - will keep the file open as long as the FreeTypeFont object exists. Windows - limits the number of files that can be open in C at once to 512, so if many - fonts are opened simultaneously and that limit is approached, an - ``OSError`` may be thrown, reporting that FreeType "cannot open resource". - A workaround would be to copy the file(s) into memory, and open that instead. - - This function requires the _imagingft service. - - :param font: A filename or file-like object containing a TrueType font. - If the file is not found in this filename, the loader may also - search in other directories, such as the :file:`fonts/` - directory on Windows or :file:`/Library/Fonts/`, - :file:`/System/Library/Fonts/` and :file:`~/Library/Fonts/` on - macOS. - - :param size: The requested size, in pixels. - :param index: Which font face to load (default is first available face). - :param encoding: Which font encoding to use (default is Unicode). Possible - encodings include (see the FreeType documentation for more - information): - - * "unic" (Unicode) - * "symb" (Microsoft Symbol) - * "ADOB" (Adobe Standard) - * "ADBE" (Adobe Expert) - * "ADBC" (Adobe Custom) - * "armn" (Apple Roman) - * "sjis" (Shift JIS) - * "gb " (PRC) - * "big5" - * "wans" (Extended Wansung) - * "joha" (Johab) - * "lat1" (Latin-1) - - This specifies the character set to use. It does not alter the - encoding of any text provided in subsequent operations. - :param layout_engine: Which layout engine to use, if available: - :data:`.ImageFont.Layout.BASIC` or :data:`.ImageFont.Layout.RAQM`. - If it is available, Raqm layout will be used by default. - Otherwise, basic layout will be used. - - Raqm layout is recommended for all non-English text. 
If Raqm layout - is not required, basic layout will have better performance. - - You can check support for Raqm layout using - :py:func:`PIL.features.check_feature` with ``feature="raqm"``. - - .. versionadded:: 4.2.0 - :return: A font object. - :exception OSError: If the file could not be read. - """ - - def freetype(font): - return FreeTypeFont(font, size, index, encoding, layout_engine) - - try: - return freetype(font) - except OSError: - if not is_path(font): - raise - ttf_filename = os.path.basename(font) - - dirs = [] - if sys.platform == "win32": - # check the windows font repository - # NOTE: must use uppercase WINDIR, to work around bugs in - # 1.5.2's os.environ.get() - windir = os.environ.get("WINDIR") - if windir: - dirs.append(os.path.join(windir, "fonts")) - elif sys.platform in ("linux", "linux2"): - lindirs = os.environ.get("XDG_DATA_DIRS") - if not lindirs: - # According to the freedesktop spec, XDG_DATA_DIRS should - # default to /usr/share - lindirs = "/usr/share" - dirs += [os.path.join(lindir, "fonts") for lindir in lindirs.split(":")] - elif sys.platform == "darwin": - dirs += [ - "/Library/Fonts", - "/System/Library/Fonts", - os.path.expanduser("~/Library/Fonts"), - ] - - ext = os.path.splitext(ttf_filename)[1] - first_font_with_a_different_extension = None - for directory in dirs: - for walkroot, walkdir, walkfilenames in os.walk(directory): - for walkfilename in walkfilenames: - if ext and walkfilename == ttf_filename: - return freetype(os.path.join(walkroot, walkfilename)) - elif not ext and os.path.splitext(walkfilename)[0] == ttf_filename: - fontpath = os.path.join(walkroot, walkfilename) - if os.path.splitext(fontpath)[1] == ".ttf": - return freetype(fontpath) - if not ext and first_font_with_a_different_extension is None: - first_font_with_a_different_extension = fontpath - if first_font_with_a_different_extension: - return freetype(first_font_with_a_different_extension) - raise - - -def load_path(filename): - """ - Load font file. Same as :py:func:`~PIL.ImageFont.load`, but searches for a - bitmap font along the Python path. - - :param filename: Name of font file. - :return: A font object. - :exception OSError: If the file could not be read. - """ - for directory in sys.path: - if is_directory(directory): - if not isinstance(filename, str): - filename = filename.decode("utf-8") - try: - return load(os.path.join(directory, filename)) - except OSError: - pass - msg = "cannot find font file" - raise OSError(msg) - - -def load_default(): - """Load a "better than nothing" default font. - - .. versionadded:: 1.1.4 - - :return: A font object. 
- """ - f = ImageFont() - f._load_pilfont_data( - # courB08 - BytesIO( - base64.b64decode( - b""" -UElMZm9udAo7Ozs7OzsxMDsKREFUQQoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYAAAAA//8AAQAAAAAAAAABAAEA -BgAAAAH/+gADAAAAAQAAAAMABgAGAAAAAf/6AAT//QADAAAABgADAAYAAAAA//kABQABAAYAAAAL -AAgABgAAAAD/+AAFAAEACwAAABAACQAGAAAAAP/5AAUAAAAQAAAAFQAHAAYAAP////oABQAAABUA -AAAbAAYABgAAAAH/+QAE//wAGwAAAB4AAwAGAAAAAf/5AAQAAQAeAAAAIQAIAAYAAAAB//kABAAB -ACEAAAAkAAgABgAAAAD/+QAE//0AJAAAACgABAAGAAAAAP/6AAX//wAoAAAALQAFAAYAAAAB//8A -BAACAC0AAAAwAAMABgAAAAD//AAF//0AMAAAADUAAQAGAAAAAf//AAMAAAA1AAAANwABAAYAAAAB -//kABQABADcAAAA7AAgABgAAAAD/+QAFAAAAOwAAAEAABwAGAAAAAP/5AAYAAABAAAAARgAHAAYA -AAAA//kABQAAAEYAAABLAAcABgAAAAD/+QAFAAAASwAAAFAABwAGAAAAAP/5AAYAAABQAAAAVgAH -AAYAAAAA//kABQAAAFYAAABbAAcABgAAAAD/+QAFAAAAWwAAAGAABwAGAAAAAP/5AAUAAABgAAAA -ZQAHAAYAAAAA//kABQAAAGUAAABqAAcABgAAAAD/+QAFAAAAagAAAG8ABwAGAAAAAf/8AAMAAABv -AAAAcQAEAAYAAAAA//wAAwACAHEAAAB0AAYABgAAAAD/+gAE//8AdAAAAHgABQAGAAAAAP/7AAT/ -/gB4AAAAfAADAAYAAAAB//oABf//AHwAAACAAAUABgAAAAD/+gAFAAAAgAAAAIUABgAGAAAAAP/5 -AAYAAQCFAAAAiwAIAAYAAP////oABgAAAIsAAACSAAYABgAA////+gAFAAAAkgAAAJgABgAGAAAA -AP/6AAUAAACYAAAAnQAGAAYAAP////oABQAAAJ0AAACjAAYABgAA////+gAFAAAAowAAAKkABgAG -AAD////6AAUAAACpAAAArwAGAAYAAAAA//oABQAAAK8AAAC0AAYABgAA////+gAGAAAAtAAAALsA -BgAGAAAAAP/6AAQAAAC7AAAAvwAGAAYAAP////oABQAAAL8AAADFAAYABgAA////+gAGAAAAxQAA -AMwABgAGAAD////6AAUAAADMAAAA0gAGAAYAAP////oABQAAANIAAADYAAYABgAA////+gAGAAAA -2AAAAN8ABgAGAAAAAP/6AAUAAADfAAAA5AAGAAYAAP////oABQAAAOQAAADqAAYABgAAAAD/+gAF -AAEA6gAAAO8ABwAGAAD////6AAYAAADvAAAA9gAGAAYAAAAA//oABQAAAPYAAAD7AAYABgAA//// -+gAFAAAA+wAAAQEABgAGAAD////6AAYAAAEBAAABCAAGAAYAAP////oABgAAAQgAAAEPAAYABgAA -////+gAGAAABDwAAARYABgAGAAAAAP/6AAYAAAEWAAABHAAGAAYAAP////oABgAAARwAAAEjAAYA -BgAAAAD/+gAFAAABIwAAASgABgAGAAAAAf/5AAQAAQEoAAABKwAIAAYAAAAA//kABAABASsAAAEv -AAgABgAAAAH/+QAEAAEBLwAAATIACAAGAAAAAP/5AAX//AEyAAABNwADAAYAAAAAAAEABgACATcA -AAE9AAEABgAAAAH/+QAE//wBPQAAAUAAAwAGAAAAAP/7AAYAAAFAAAABRgAFAAYAAP////kABQAA -AUYAAAFMAAcABgAAAAD/+wAFAAABTAAAAVEABQAGAAAAAP/5AAYAAAFRAAABVwAHAAYAAAAA//sA -BQAAAVcAAAFcAAUABgAAAAD/+QAFAAABXAAAAWEABwAGAAAAAP/7AAYAAgFhAAABZwAHAAYAAP// -//kABQAAAWcAAAFtAAcABgAAAAD/+QAGAAABbQAAAXMABwAGAAAAAP/5AAQAAgFzAAABdwAJAAYA -AP////kABgAAAXcAAAF+AAcABgAAAAD/+QAGAAABfgAAAYQABwAGAAD////7AAUAAAGEAAABigAF -AAYAAP////sABQAAAYoAAAGQAAUABgAAAAD/+wAFAAABkAAAAZUABQAGAAD////7AAUAAgGVAAAB -mwAHAAYAAAAA//sABgACAZsAAAGhAAcABgAAAAD/+wAGAAABoQAAAacABQAGAAAAAP/7AAYAAAGn -AAABrQAFAAYAAAAA//kABgAAAa0AAAGzAAcABgAA////+wAGAAABswAAAboABQAGAAD////7AAUA -AAG6AAABwAAFAAYAAP////sABgAAAcAAAAHHAAUABgAAAAD/+wAGAAABxwAAAc0ABQAGAAD////7 -AAYAAgHNAAAB1AAHAAYAAAAA//sABQAAAdQAAAHZAAUABgAAAAH/+QAFAAEB2QAAAd0ACAAGAAAA 
-Av/6AAMAAQHdAAAB3gAHAAYAAAAA//kABAABAd4AAAHiAAgABgAAAAD/+wAF//0B4gAAAecAAgAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYAAAAB -//sAAwACAecAAAHpAAcABgAAAAD/+QAFAAEB6QAAAe4ACAAGAAAAAP/5AAYAAAHuAAAB9AAHAAYA -AAAA//oABf//AfQAAAH5AAUABgAAAAD/+QAGAAAB+QAAAf8ABwAGAAAAAv/5AAMAAgH/AAACAAAJ -AAYAAAAA//kABQABAgAAAAIFAAgABgAAAAH/+gAE//sCBQAAAggAAQAGAAAAAP/5AAYAAAIIAAAC -DgAHAAYAAAAB//kABf/+Ag4AAAISAAUABgAA////+wAGAAACEgAAAhkABQAGAAAAAP/7AAX//gIZ -AAACHgADAAYAAAAA//wABf/9Ah4AAAIjAAEABgAAAAD/+QAHAAACIwAAAioABwAGAAAAAP/6AAT/ -+wIqAAACLgABAAYAAAAA//kABP/8Ai4AAAIyAAMABgAAAAD/+gAFAAACMgAAAjcABgAGAAAAAf/5 -AAT//QI3AAACOgAEAAYAAAAB//kABP/9AjoAAAI9AAQABgAAAAL/+QAE//sCPQAAAj8AAgAGAAD/ -///7AAYAAgI/AAACRgAHAAYAAAAA//kABgABAkYAAAJMAAgABgAAAAH//AAD//0CTAAAAk4AAQAG -AAAAAf//AAQAAgJOAAACUQADAAYAAAAB//kABP/9AlEAAAJUAAQABgAAAAH/+QAF//4CVAAAAlgA -BQAGAAD////7AAYAAAJYAAACXwAFAAYAAP////kABgAAAl8AAAJmAAcABgAA////+QAGAAACZgAA -Am0ABwAGAAD////5AAYAAAJtAAACdAAHAAYAAAAA//sABQACAnQAAAJ5AAcABgAA////9wAGAAAC -eQAAAoAACQAGAAD////3AAYAAAKAAAAChwAJAAYAAP////cABgAAAocAAAKOAAkABgAA////9wAG -AAACjgAAApUACQAGAAD////4AAYAAAKVAAACnAAIAAYAAP////cABgAAApwAAAKjAAkABgAA//// -+gAGAAACowAAAqoABgAGAAAAAP/6AAUAAgKqAAACrwAIAAYAAP////cABQAAAq8AAAK1AAkABgAA -////9wAFAAACtQAAArsACQAGAAD////3AAUAAAK7AAACwQAJAAYAAP////gABQAAAsEAAALHAAgA -BgAAAAD/9wAEAAACxwAAAssACQAGAAAAAP/3AAQAAALLAAACzwAJAAYAAAAA//cABAAAAs8AAALT -AAkABgAAAAD/+AAEAAAC0wAAAtcACAAGAAD////6AAUAAALXAAAC3QAGAAYAAP////cABgAAAt0A -AALkAAkABgAAAAD/9wAFAAAC5AAAAukACQAGAAAAAP/3AAUAAALpAAAC7gAJAAYAAAAA//cABQAA -Au4AAALzAAkABgAAAAD/9wAFAAAC8wAAAvgACQAGAAAAAP/4AAUAAAL4AAAC/QAIAAYAAAAA//oA -Bf//Av0AAAMCAAUABgAA////+gAGAAADAgAAAwkABgAGAAD////3AAYAAAMJAAADEAAJAAYAAP// -//cABgAAAxAAAAMXAAkABgAA////9wAGAAADFwAAAx4ACQAGAAD////4AAYAAAAAAAoABwASAAYA -AP////cABgAAAAcACgAOABMABgAA////+gAFAAAADgAKABQAEAAGAAD////6AAYAAAAUAAoAGwAQ -AAYAAAAA//gABgAAABsACgAhABIABgAAAAD/+AAGAAAAIQAKACcAEgAGAAAAAP/4AAYAAAAnAAoA -LQASAAYAAAAA//gABgAAAC0ACgAzABIABgAAAAD/+QAGAAAAMwAKADkAEQAGAAAAAP/3AAYAAAA5 -AAoAPwATAAYAAP////sABQAAAD8ACgBFAA8ABgAAAAD/+wAFAAIARQAKAEoAEQAGAAAAAP/4AAUA -AABKAAoATwASAAYAAAAA//gABQAAAE8ACgBUABIABgAAAAD/+AAFAAAAVAAKAFkAEgAGAAAAAP/5 -AAUAAABZAAoAXgARAAYAAAAA//gABgAAAF4ACgBkABIABgAAAAD/+AAGAAAAZAAKAGoAEgAGAAAA -AP/4AAYAAABqAAoAcAASAAYAAAAA//kABgAAAHAACgB2ABEABgAAAAD/+AAFAAAAdgAKAHsAEgAG -AAD////4AAYAAAB7AAoAggASAAYAAAAA//gABQAAAIIACgCHABIABgAAAAD/+AAFAAAAhwAKAIwA -EgAGAAAAAP/4AAUAAACMAAoAkQASAAYAAAAA//gABQAAAJEACgCWABIABgAAAAD/+QAFAAAAlgAK -AJsAEQAGAAAAAP/6AAX//wCbAAoAoAAPAAYAAAAA//oABQABAKAACgClABEABgAA////+AAGAAAA -pQAKAKwAEgAGAAD////4AAYAAACsAAoAswASAAYAAP////gABgAAALMACgC6ABIABgAA////+QAG 
-AAAAugAKAMEAEQAGAAD////4AAYAAgDBAAoAyAAUAAYAAP////kABQACAMgACgDOABMABgAA//// -+QAGAAIAzgAKANUAEw== -""" - ) - ), - Image.open( - BytesIO( - base64.b64decode( - b""" -iVBORw0KGgoAAAANSUhEUgAAAx4AAAAUAQAAAAArMtZoAAAEwElEQVR4nABlAJr/AHVE4czCI/4u -Mc4b7vuds/xzjz5/3/7u/n9vMe7vnfH/9++vPn/xyf5zhxzjt8GHw8+2d83u8x27199/nxuQ6Od9 -M43/5z2I+9n9ZtmDBwMQECDRQw/eQIQohJXxpBCNVE6QCCAAAAD//wBlAJr/AgALyj1t/wINwq0g -LeNZUworuN1cjTPIzrTX6ofHWeo3v336qPzfEwRmBnHTtf95/fglZK5N0PDgfRTslpGBvz7LFc4F -IUXBWQGjQ5MGCx34EDFPwXiY4YbYxavpnhHFrk14CDAAAAD//wBlAJr/AgKqRooH2gAgPeggvUAA -Bu2WfgPoAwzRAABAAAAAAACQgLz/3Uv4Gv+gX7BJgDeeGP6AAAD1NMDzKHD7ANWr3loYbxsAD791 -NAADfcoIDyP44K/jv4Y63/Z+t98Ovt+ub4T48LAAAAD//wBlAJr/AuplMlADJAAAAGuAphWpqhMx -in0A/fRvAYBABPgBwBUgABBQ/sYAyv9g0bCHgOLoGAAAAAAAREAAwI7nr0ArYpow7aX8//9LaP/9 -SjdavWA8ePHeBIKB//81/83ndznOaXx379wAAAD//wBlAJr/AqDxW+D3AABAAbUh/QMnbQag/gAY -AYDAAACgtgD/gOqAAAB5IA/8AAAk+n9w0AAA8AAAmFRJuPo27ciC0cD5oeW4E7KA/wD3ECMAn2tt -y8PgwH8AfAxFzC0JzeAMtratAsC/ffwAAAD//wBlAJr/BGKAyCAA4AAAAvgeYTAwHd1kmQF5chkG -ABoMIHcL5xVpTfQbUqzlAAAErwAQBgAAEOClA5D9il08AEh/tUzdCBsXkbgACED+woQg8Si9VeqY -lODCn7lmF6NhnAEYgAAA/NMIAAAAAAD//2JgjLZgVGBg5Pv/Tvpc8hwGBjYGJADjHDrAwPzAjv/H -/Wf3PzCwtzcwHmBgYGcwbZz8wHaCAQMDOwMDQ8MCBgYOC3W7mp+f0w+wHOYxO3OG+e376hsMZjk3 -AAAAAP//YmCMY2A4wMAIN5e5gQETPD6AZisDAwMDgzSDAAPjByiHcQMDAwMDg1nOze1lByRu5/47 -c4859311AYNZzg0AAAAA//9iYGDBYihOIIMuwIjGL39/fwffA8b//xv/P2BPtzzHwCBjUQAAAAD/ -/yLFBrIBAAAA//9i1HhcwdhizX7u8NZNzyLbvT97bfrMf/QHI8evOwcSqGUJAAAA//9iYBB81iSw -pEE170Qrg5MIYydHqwdDQRMrAwcVrQAAAAD//2J4x7j9AAMDn8Q/BgYLBoaiAwwMjPdvMDBYM1Tv -oJodAAAAAP//Yqo/83+dxePWlxl3npsel9lvLfPcqlE9725C+acfVLMEAAAA//9i+s9gwCoaaGMR -evta/58PTEWzr21hufPjA8N+qlnBwAAAAAD//2JiWLci5v1+HmFXDqcnULE/MxgYGBj+f6CaJQAA -AAD//2Ji2FrkY3iYpYC5qDeGgeEMAwPDvwQBBoYvcTwOVLMEAAAA//9isDBgkP///0EOg9z35v// -Gc/eeW7BwPj5+QGZhANUswMAAAD//2JgqGBgYGBgqEMXlvhMPUsAAAAA//8iYDd1AAAAAP//AwDR -w7IkEbzhVQAAAABJRU5ErkJggg== -""" - ) - ) - ), - ) - return f diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/numbering.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/numbering.py deleted file mode 100644 index aeedfa9a0bb7ba986bada759ebba9a2b5c98057e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/numbering.py +++ /dev/null @@ -1,131 +0,0 @@ -# encoding: utf-8 - -""" -Custom element classes related to the numbering part -""" - -from . import OxmlElement -from .shared import CT_DecimalNumber -from .simpletypes import ST_DecimalNumber -from .xmlchemy import ( - BaseOxmlElement, OneAndOnlyOne, RequiredAttribute, ZeroOrMore, ZeroOrOne -) - - -class CT_Num(BaseOxmlElement): - """ - ```` element, which represents a concrete list definition - instance, having a required child that references an - abstract numbering definition that defines most of the formatting details. - """ - abstractNumId = OneAndOnlyOne('w:abstractNumId') - lvlOverride = ZeroOrMore('w:lvlOverride') - numId = RequiredAttribute('w:numId', ST_DecimalNumber) - - def add_lvlOverride(self, ilvl): - """ - Return a newly added CT_NumLvl () element having its - ``ilvl`` attribute set to *ilvl*. - """ - return self._add_lvlOverride(ilvl=ilvl) - - @classmethod - def new(cls, num_id, abstractNum_id): - """ - Return a new ```` element having numId of *num_id* and having - a ```` child with val attribute set to - *abstractNum_id*. 
- """ - num = OxmlElement('w:num') - num.numId = num_id - abstractNumId = CT_DecimalNumber.new( - 'w:abstractNumId', abstractNum_id - ) - num.append(abstractNumId) - return num - - -class CT_NumLvl(BaseOxmlElement): - """ - ```` element, which identifies a level in a list - definition to override with settings it contains. - """ - startOverride = ZeroOrOne('w:startOverride', successors=('w:lvl',)) - ilvl = RequiredAttribute('w:ilvl', ST_DecimalNumber) - - def add_startOverride(self, val): - """ - Return a newly added CT_DecimalNumber element having tagname - ``w:startOverride`` and ``val`` attribute set to *val*. - """ - return self._add_startOverride(val=val) - - -class CT_NumPr(BaseOxmlElement): - """ - A ```` element, a container for numbering properties applied to - a paragraph. - """ - ilvl = ZeroOrOne('w:ilvl', successors=( - 'w:numId', 'w:numberingChange', 'w:ins' - )) - numId = ZeroOrOne('w:numId', successors=('w:numberingChange', 'w:ins')) - - # @ilvl.setter - # def _set_ilvl(self, val): - # """ - # Get or add a child and set its ``w:val`` attribute to *val*. - # """ - # ilvl = self.get_or_add_ilvl() - # ilvl.val = val - - # @numId.setter - # def numId(self, val): - # """ - # Get or add a child and set its ``w:val`` attribute to - # *val*. - # """ - # numId = self.get_or_add_numId() - # numId.val = val - - -class CT_Numbering(BaseOxmlElement): - """ - ```` element, the root element of a numbering part, i.e. - numbering.xml - """ - num = ZeroOrMore('w:num', successors=('w:numIdMacAtCleanup',)) - - def add_num(self, abstractNum_id): - """ - Return a newly added CT_Num () element referencing the - abstract numbering definition identified by *abstractNum_id*. - """ - next_num_id = self._next_numId - num = CT_Num.new(next_num_id, abstractNum_id) - return self._insert_num(num) - - def num_having_numId(self, numId): - """ - Return the ```` child element having ``numId`` attribute - matching *numId*. - """ - xpath = './w:num[@w:numId="%d"]' % numId - try: - return self.xpath(xpath)[0] - except IndexError: - raise KeyError('no element with numId %d' % numId) - - @property - def _next_numId(self): - """ - The first ``numId`` unused by a ```` element, starting at - 1 and filling any gaps in numbering between existing ```` - elements. 
- """ - numId_strs = self.xpath('./w:num/@w:numId') - num_ids = [int(numId_str) for numId_str in numId_strs] - for num in range(1, len(num_ids)+2): - if num not in num_ids: - break - return num diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/openapi/models.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/openapi/models.py deleted file mode 100644 index 2268dd229091d10dd0535bd21515b40409b8ce1b..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/openapi/models.py +++ /dev/null @@ -1,611 +0,0 @@ -from enum import Enum -from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union - -from fastapi._compat import ( - PYDANTIC_V2, - CoreSchema, - GetJsonSchemaHandler, - JsonSchemaValue, - _model_rebuild, - general_plain_validator_function, -) -from fastapi.logger import logger -from pydantic import AnyUrl, BaseModel, Field -from typing_extensions import Annotated, Literal -from typing_extensions import deprecated as typing_deprecated - -try: - import email_validator - - assert email_validator # make autoflake ignore the unused import - from pydantic import EmailStr -except ImportError: # pragma: no cover - - class EmailStr(str): # type: ignore - @classmethod - def __get_validators__(cls) -> Iterable[Callable[..., Any]]: - yield cls.validate - - @classmethod - def validate(cls, v: Any) -> str: - logger.warning( - "email-validator not installed, email fields will be treated as str.\n" - "To install, run: pip install email-validator" - ) - return str(v) - - @classmethod - def _validate(cls, __input_value: Any, _: Any) -> str: - logger.warning( - "email-validator not installed, email fields will be treated as str.\n" - "To install, run: pip install email-validator" - ) - return str(__input_value) - - @classmethod - def __get_pydantic_json_schema__( - cls, core_schema: CoreSchema, handler: GetJsonSchemaHandler - ) -> JsonSchemaValue: - return {"type": "string", "format": "email"} - - @classmethod - def __get_pydantic_core_schema__( - cls, source: Type[Any], handler: Callable[[Any], CoreSchema] - ) -> CoreSchema: - return general_plain_validator_function(cls._validate) - - -class Contact(BaseModel): - name: Optional[str] = None - url: Optional[AnyUrl] = None - email: Optional[EmailStr] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class License(BaseModel): - name: str - identifier: Optional[str] = None - url: Optional[AnyUrl] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Info(BaseModel): - title: str - summary: Optional[str] = None - description: Optional[str] = None - termsOfService: Optional[str] = None - contact: Optional[Contact] = None - license: Optional[License] = None - version: str - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ServerVariable(BaseModel): - enum: Annotated[Optional[List[str]], Field(min_length=1)] = None - default: str - description: Optional[str] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Server(BaseModel): - url: Union[AnyUrl, str] - description: Optional[str] = None - variables: Optional[Dict[str, ServerVariable]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = 
"allow" - - -class Reference(BaseModel): - ref: str = Field(alias="$ref") - - -class Discriminator(BaseModel): - propertyName: str - mapping: Optional[Dict[str, str]] = None - - -class XML(BaseModel): - name: Optional[str] = None - namespace: Optional[str] = None - prefix: Optional[str] = None - attribute: Optional[bool] = None - wrapped: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ExternalDocumentation(BaseModel): - description: Optional[str] = None - url: AnyUrl - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Schema(BaseModel): - # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-the-json-schema-core-vocabu - # Core Vocabulary - schema_: Optional[str] = Field(default=None, alias="$schema") - vocabulary: Optional[str] = Field(default=None, alias="$vocabulary") - id: Optional[str] = Field(default=None, alias="$id") - anchor: Optional[str] = Field(default=None, alias="$anchor") - dynamicAnchor: Optional[str] = Field(default=None, alias="$dynamicAnchor") - ref: Optional[str] = Field(default=None, alias="$ref") - dynamicRef: Optional[str] = Field(default=None, alias="$dynamicRef") - defs: Optional[Dict[str, "SchemaOrBool"]] = Field(default=None, alias="$defs") - comment: Optional[str] = Field(default=None, alias="$comment") - # Ref: JSON Schema 2020-12: https://json-schema.org/draft/2020-12/json-schema-core.html#name-a-vocabulary-for-applying-s - # A Vocabulary for Applying Subschemas - allOf: Optional[List["SchemaOrBool"]] = None - anyOf: Optional[List["SchemaOrBool"]] = None - oneOf: Optional[List["SchemaOrBool"]] = None - not_: Optional["SchemaOrBool"] = Field(default=None, alias="not") - if_: Optional["SchemaOrBool"] = Field(default=None, alias="if") - then: Optional["SchemaOrBool"] = None - else_: Optional["SchemaOrBool"] = Field(default=None, alias="else") - dependentSchemas: Optional[Dict[str, "SchemaOrBool"]] = None - prefixItems: Optional[List["SchemaOrBool"]] = None - # TODO: uncomment and remove below when deprecating Pydantic v1 - # It generales a list of schemas for tuples, before prefixItems was available - # items: Optional["SchemaOrBool"] = None - items: Optional[Union["SchemaOrBool", List["SchemaOrBool"]]] = None - contains: Optional["SchemaOrBool"] = None - properties: Optional[Dict[str, "SchemaOrBool"]] = None - patternProperties: Optional[Dict[str, "SchemaOrBool"]] = None - additionalProperties: Optional["SchemaOrBool"] = None - propertyNames: Optional["SchemaOrBool"] = None - unevaluatedItems: Optional["SchemaOrBool"] = None - unevaluatedProperties: Optional["SchemaOrBool"] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-structural - # A Vocabulary for Structural Validation - type: Optional[str] = None - enum: Optional[List[Any]] = None - const: Optional[Any] = None - multipleOf: Optional[float] = Field(default=None, gt=0) - maximum: Optional[float] = None - exclusiveMaximum: Optional[float] = None - minimum: Optional[float] = None - exclusiveMinimum: Optional[float] = None - maxLength: Optional[int] = Field(default=None, ge=0) - minLength: Optional[int] = Field(default=None, ge=0) - pattern: Optional[str] = None - maxItems: Optional[int] = Field(default=None, ge=0) - minItems: Optional[int] = Field(default=None, ge=0) - uniqueItems: Optional[bool] = None - maxContains: Optional[int] = 
Field(default=None, ge=0) - minContains: Optional[int] = Field(default=None, ge=0) - maxProperties: Optional[int] = Field(default=None, ge=0) - minProperties: Optional[int] = Field(default=None, ge=0) - required: Optional[List[str]] = None - dependentRequired: Optional[Dict[str, Set[str]]] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-vocabularies-for-semantic-c - # Vocabularies for Semantic Content With "format" - format: Optional[str] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-the-conten - # A Vocabulary for the Contents of String-Encoded Data - contentEncoding: Optional[str] = None - contentMediaType: Optional[str] = None - contentSchema: Optional["SchemaOrBool"] = None - # Ref: JSON Schema Validation 2020-12: https://json-schema.org/draft/2020-12/json-schema-validation.html#name-a-vocabulary-for-basic-meta - # A Vocabulary for Basic Meta-Data Annotations - title: Optional[str] = None - description: Optional[str] = None - default: Optional[Any] = None - deprecated: Optional[bool] = None - readOnly: Optional[bool] = None - writeOnly: Optional[bool] = None - examples: Optional[List[Any]] = None - # Ref: OpenAPI 3.1.0: https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#schema-object - # Schema Object - discriminator: Optional[Discriminator] = None - xml: Optional[XML] = None - externalDocs: Optional[ExternalDocumentation] = None - example: Annotated[ - Optional[Any], - typing_deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." - ), - ] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -# Ref: https://json-schema.org/draft/2020-12/json-schema-core.html#name-json-schema-documents -# A JSON Schema MUST be an object or a boolean. 
-SchemaOrBool = Union[Schema, bool] - - -class Example(BaseModel): - summary: Optional[str] = None - description: Optional[str] = None - value: Optional[Any] = None - externalValue: Optional[AnyUrl] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ParameterInType(Enum): - query = "query" - header = "header" - path = "path" - cookie = "cookie" - - -class Encoding(BaseModel): - contentType: Optional[str] = None - headers: Optional[Dict[str, Union["Header", Reference]]] = None - style: Optional[str] = None - explode: Optional[bool] = None - allowReserved: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class MediaType(BaseModel): - schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema") - example: Optional[Any] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - encoding: Optional[Dict[str, Encoding]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class ParameterBase(BaseModel): - description: Optional[str] = None - required: Optional[bool] = None - deprecated: Optional[bool] = None - # Serialization rules for simple scenarios - style: Optional[str] = None - explode: Optional[bool] = None - allowReserved: Optional[bool] = None - schema_: Optional[Union[Schema, Reference]] = Field(default=None, alias="schema") - example: Optional[Any] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - # Serialization rules for more complex scenarios - content: Optional[Dict[str, MediaType]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Parameter(ParameterBase): - name: str - in_: ParameterInType = Field(alias="in") - - -class Header(ParameterBase): - pass - - -class RequestBody(BaseModel): - description: Optional[str] = None - content: Dict[str, MediaType] - required: Optional[bool] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Link(BaseModel): - operationRef: Optional[str] = None - operationId: Optional[str] = None - parameters: Optional[Dict[str, Union[Any, str]]] = None - requestBody: Optional[Union[Any, str]] = None - description: Optional[str] = None - server: Optional[Server] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Response(BaseModel): - description: str - headers: Optional[Dict[str, Union[Header, Reference]]] = None - content: Optional[Dict[str, MediaType]] = None - links: Optional[Dict[str, Union[Link, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Operation(BaseModel): - tags: Optional[List[str]] = None - summary: Optional[str] = None - description: Optional[str] = None - externalDocs: Optional[ExternalDocumentation] = None - operationId: Optional[str] = None - parameters: Optional[List[Union[Parameter, Reference]]] = None - requestBody: Optional[Union[RequestBody, Reference]] = None - # Using Any for Specification Extensions - responses: Optional[Dict[str, Union[Response, Any]]] = None - callbacks: Optional[Dict[str, Union[Dict[str, "PathItem"], Reference]]] = None - deprecated: Optional[bool] = None - security: Optional[List[Dict[str, List[str]]]] = None - servers: Optional[List[Server]] = None - - if 
PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class PathItem(BaseModel): - ref: Optional[str] = Field(default=None, alias="$ref") - summary: Optional[str] = None - description: Optional[str] = None - get: Optional[Operation] = None - put: Optional[Operation] = None - post: Optional[Operation] = None - delete: Optional[Operation] = None - options: Optional[Operation] = None - head: Optional[Operation] = None - patch: Optional[Operation] = None - trace: Optional[Operation] = None - servers: Optional[List[Server]] = None - parameters: Optional[List[Union[Parameter, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class SecuritySchemeType(Enum): - apiKey = "apiKey" - http = "http" - oauth2 = "oauth2" - openIdConnect = "openIdConnect" - - -class SecurityBase(BaseModel): - type_: SecuritySchemeType = Field(alias="type") - description: Optional[str] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class APIKeyIn(Enum): - query = "query" - header = "header" - cookie = "cookie" - - -class APIKey(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.apiKey, alias="type") - in_: APIKeyIn = Field(alias="in") - name: str - - -class HTTPBase(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.http, alias="type") - scheme: str - - -class HTTPBearer(HTTPBase): - scheme: Literal["bearer"] = "bearer" - bearerFormat: Optional[str] = None - - -class OAuthFlow(BaseModel): - refreshUrl: Optional[str] = None - scopes: Dict[str, str] = {} - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class OAuthFlowImplicit(OAuthFlow): - authorizationUrl: str - - -class OAuthFlowPassword(OAuthFlow): - tokenUrl: str - - -class OAuthFlowClientCredentials(OAuthFlow): - tokenUrl: str - - -class OAuthFlowAuthorizationCode(OAuthFlow): - authorizationUrl: str - tokenUrl: str - - -class OAuthFlows(BaseModel): - implicit: Optional[OAuthFlowImplicit] = None - password: Optional[OAuthFlowPassword] = None - clientCredentials: Optional[OAuthFlowClientCredentials] = None - authorizationCode: Optional[OAuthFlowAuthorizationCode] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class OAuth2(SecurityBase): - type_: SecuritySchemeType = Field(default=SecuritySchemeType.oauth2, alias="type") - flows: OAuthFlows - - -class OpenIdConnect(SecurityBase): - type_: SecuritySchemeType = Field( - default=SecuritySchemeType.openIdConnect, alias="type" - ) - openIdConnectUrl: str - - -SecurityScheme = Union[APIKey, HTTPBase, OAuth2, OpenIdConnect, HTTPBearer] - - -class Components(BaseModel): - schemas: Optional[Dict[str, Union[Schema, Reference]]] = None - responses: Optional[Dict[str, Union[Response, Reference]]] = None - parameters: Optional[Dict[str, Union[Parameter, Reference]]] = None - examples: Optional[Dict[str, Union[Example, Reference]]] = None - requestBodies: Optional[Dict[str, Union[RequestBody, Reference]]] = None - headers: Optional[Dict[str, Union[Header, Reference]]] = None - securitySchemes: Optional[Dict[str, Union[SecurityScheme, Reference]]] = None - links: Optional[Dict[str, Union[Link, Reference]]] = None - # Using Any for Specification Extensions - callbacks: Optional[Dict[str, Union[Dict[str, PathItem], Reference, Any]]] = None - pathItems: Optional[Dict[str, 
Union[PathItem, Reference]]] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class Tag(BaseModel): - name: str - description: Optional[str] = None - externalDocs: Optional[ExternalDocumentation] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -class OpenAPI(BaseModel): - openapi: str - info: Info - jsonSchemaDialect: Optional[str] = None - servers: Optional[List[Server]] = None - # Using Any for Specification Extensions - paths: Optional[Dict[str, Union[PathItem, Any]]] = None - webhooks: Optional[Dict[str, Union[PathItem, Reference]]] = None - components: Optional[Components] = None - security: Optional[List[Dict[str, List[str]]]] = None - tags: Optional[List[Tag]] = None - externalDocs: Optional[ExternalDocumentation] = None - - if PYDANTIC_V2: - model_config = {"extra": "allow"} - - else: - - class Config: - extra = "allow" - - -_model_rebuild(Schema) -_model_rebuild(Operation) -_model_rebuild(Encoding) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/templating.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/templating.py deleted file mode 100644 index 0cb868486edd9dda38f90c65f314597813128cf8..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/templating.py +++ /dev/null @@ -1 +0,0 @@ -from starlette.templating import Jinja2Templates as Jinja2Templates # noqa diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G__l_o_c.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G__l_o_c.py deleted file mode 100644 index 7973b9be911d450f2504e83704705c9bb8e4b810..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G__l_o_c.py +++ /dev/null @@ -1,84 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from . 
import DefaultTable -import array -import sys - - -Gloc_header = """ - > # big endian - version: 16.16F # Table version - flags: H # bit 0: 1=long format, 0=short format - # bit 1: 1=attribute names, 0=no names - numAttribs: H # NUmber of attributes -""" - - -class table_G__l_o_c(DefaultTable.DefaultTable): - """ - Support Graphite Gloc tables - """ - - dependencies = ["Glat"] - - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.attribIds = None - self.numAttribs = 0 - - def decompile(self, data, ttFont): - _, data = sstruct.unpack2(Gloc_header, data, self) - flags = self.flags - del self.flags - self.locations = array.array("I" if flags & 1 else "H") - self.locations.frombytes(data[: len(data) - self.numAttribs * (flags & 2)]) - if sys.byteorder != "big": - self.locations.byteswap() - self.attribIds = array.array("H") - if flags & 2: - self.attribIds.frombytes(data[-self.numAttribs * 2 :]) - if sys.byteorder != "big": - self.attribIds.byteswap() - - def compile(self, ttFont): - data = sstruct.pack( - Gloc_header, - dict( - version=1.0, - flags=(bool(self.attribIds) << 1) + (self.locations.typecode == "I"), - numAttribs=self.numAttribs, - ), - ) - if sys.byteorder != "big": - self.locations.byteswap() - data += self.locations.tobytes() - if sys.byteorder != "big": - self.locations.byteswap() - if self.attribIds: - if sys.byteorder != "big": - self.attribIds.byteswap() - data += self.attribIds.tobytes() - if sys.byteorder != "big": - self.attribIds.byteswap() - return data - - def set(self, locations): - long_format = max(locations) >= 65536 - self.locations = array.array("I" if long_format else "H", locations) - - def toXML(self, writer, ttFont): - writer.simpletag("attributes", number=self.numAttribs) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "attributes": - self.numAttribs = int(safeEval(attrs["number"])) - - def __getitem__(self, index): - return self.locations[index] - - def __len__(self): - return len(self.locations) - - def __iter__(self): - return iter(self.locations) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem.svelte_svelte_type_style_lang-e019e79b.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem.svelte_svelte_type_style_lang-e019e79b.js deleted file mode 100644 index a8cddf72e71915bdd519968b7e5db960dfd651c4..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/TabItem.svelte_svelte_type_style_lang-e019e79b.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as G,e as H,s as K,G as w,a9 as O,N as j,O as T,K as k,U as A,p as g,M as v,H as P,ay as Q,ab as R,ac as U,ad as F,z as J,v as L,A as p,w as I,a4 as S,B as V,D as W,m as B,aA as C,P as N,Q as X,R as z}from"./index-f877dfd5.js";function D(n,e,l){const s=n.slice();return s[14]=e[l],s[16]=l,s}function Y(n){let e,l=n[14].name+"",s,f,d,_;function i(){return n[12](n[14],n[16])}return{c(){e=j("button"),s=N(l),f=T(),k(e,"class","svelte-kqij2n")},m(u,m){g(u,e,m),v(e,s),v(e,f),d||(_=X(e,"click",i),d=!0)},p(u,m){n=u,m&8&&l!==(l=n[14].name+"")&&z(s,l)},d(u){u&&p(e),d=!1,_()}}}function Z(n){let e,l=n[14].name+"",s,f;return{c(){e=j("button"),s=N(l),f=T(),k(e,"class","selected svelte-kqij2n")},m(d,_){g(d,e,_),v(e,s),v(e,f)},p(d,_){_&8&&l!==(l=d[14].name+"")&&z(s,l)},d(d){d&&p(e)}}}function M(n,e){let l,s;function f(i,u){return 
i[14].id===i[4]?Z:Y}let d=f(e),_=d(e);return{key:n,first:null,c(){l=B(),_.c(),s=B(),this.first=l},m(i,u){g(i,l,u),_.m(i,u),g(i,s,u)},p(i,u){e=i,d===(d=f(e))&&_?_.p(e,u):(_.d(1),_=d(e),_&&(_.c(),_.m(s.parentNode,s)))},d(i){i&&(p(l),p(s)),_.d(i)}}}function x(n){let e,l,s=[],f=new Map,d,_,i,u=w(n[3]);const m=t=>t[14].id;for(let t=0;tl(4,f=a));const o=I(0);S(n,o,a=>l(13,s=a));const r=V();W($,{register_tab:a=>(c.push({name:a.name,id:a.id}),t.update(h=>h??a.id),l(3,c),c.length-1),unregister_tab:a=>{const h=c.findIndex(y=>y.id===a.id);c.splice(h,1),t.update(y=>y===a.id?c[h]?.id||c[c.length-1]?.id:y)},selected_tab:t,selected_tab_index:o});function q(a){l(9,b=a),C(t,f=a,f),C(o,s=c.findIndex(h=>h.id===a),s),r("change")}const E=(a,h)=>{q(a.id),r("select",{value:a.name,index:h})};return n.$$set=a=>{"visible"in a&&l(0,i=a.visible),"elem_id"in a&&l(1,u=a.elem_id),"elem_classes"in a&&l(2,m=a.elem_classes),"selected"in a&&l(9,b=a.selected),"$$scope"in a&&l(10,_=a.$$scope)},n.$$.update=()=>{n.$$.dirty&512&&b!==null&&q(b)},[i,u,m,c,f,t,o,r,q,b,_,d,E]}class le extends G{constructor(e){super(),H(this,e,ee,x,K,{visible:0,elem_id:1,elem_classes:2,selected:9})}}export{le as T,$ as a}; -//# sourceMappingURL=TabItem.svelte_svelte_type_style_lang-e019e79b.js.map diff --git a/spaces/cihyFjudo/fairness-paper-search/Art Theme Project How to Make a Stunning Impression with Color Pattern and Texture.md b/spaces/cihyFjudo/fairness-paper-search/Art Theme Project How to Make a Stunning Impression with Color Pattern and Texture.md deleted file mode 100644 index 8c89b1839cd045d2e668760bfd5cb673457d2e6e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Art Theme Project How to Make a Stunning Impression with Color Pattern and Texture.md +++ /dev/null @@ -1,36 +0,0 @@ -
    -

    When first published, this article received over eight hundred comments from students looking for direction and assistance with their high school art projects. Some of these comments have been published below. It is hoped that the answers provide valuable insight for others.

    -

    Art Theme Project


Download https://tinurli.com/2uwjFO



    -

    If you are looking for art themes to explore in GCSE or iGCSE lessons, the huge list below is a great starting point. Thank you to art teacher Annie Chapman for this amazing list. Some words link to art teaching resources on this website.

    -

Hi The Arty Teacher, I am teaching iGCSE Art and Design for the first time. Just wondering what you would consider an ideal number of themes to introduce to a class over the course of two years. Is it several, or is it a matter of concentrating on one theme only throughout the entire course? Much appreciated, thank you.

    -

Different teachers structure the course in different ways. At my school, we do one theme in Year 10 with two main outcomes. In Year 11 they do another theme (we run this a little bit like a mock). Then they do the externally set task from January.

    -

    Beginning today (Oct. 27) and continuing through Wednesday (Nov. 2), students, faculty, staff, alumni and members of the larger University community are invited to vote for an overall theme for PRT station murals created by students.

    -

    Memento Mori, vanitas, mortality. Death is one of the most pervasive themes in art history. While many artworks celebrate afterlives in heaven or hell, death is most often referenced as a grim reminder of numbered days, and a powerful motivator to live well while you can. Every culture has rituals surrounding death, appearing in artwork as icons and colors. Hourglasses and wilted flowers for the Dutch, the Cuckoo bird in Japan, the Totenkopf in Germany.

    -

    This mural, though, marks a first for Camden, and yet another bridge: Hopeworks has partnered with Mural Arts Philadelphia, marking the highly-regarded collective's first project across the Delaware River, and bringing together artists from Camden's arts community with artists in Philly.

    -

    Bridge building was a recurring theme, not just in the design of the mural but in its very existence, said Manning. Hopeworks will soon open a new training center in Philadelphia's Kensington neighborhood.

    -

Asked if she envisioned future Mural Arts collaborations in Camden, Golden was confident there would be. She looks forward to Camden's artists working in Philly (some have already attended Mural Arts' most recent quarterly artists' meeting) and Philly artists doing projects in Camden.

    -

    -

    Each year, over 300,000 students in Pre-K through Grade 12 create original works of art in response to a student-selected theme. This 50+ year-old program helps them explore their own thoughts, feelings and ideas, develop artistic literacy, increase confidence and find a love for learning that will help them become more successful in school and in life.

    -

    The Public Humanities Projects program supports projects that bring the ideas of the humanities to life for general audiences through public programming. Projects must engage humanities scholarship to analyze significant themes in disciplines such as history, literature, ethics, and art history. Awards support projects that are intended to reach broad and diverse public audiences in non-classroom settings in the United States. Projects should engage with ideas that are accessible to the general public and employ appealing interpretive formats.

    -

    Public Humanities Projects supports projects in three categories (Exhibitions, Historic Places, and Humanities Discussions), and at two funding levels (Planning and Implementation). Proposed projects may include complementary components: for example, a museum exhibition might be accompanied by a website or mobile app.

    -

    Small and mid-sized organizations are especially encouraged to apply. We likewise welcome humanities projects tailored to particular groups, such as families, youth (including K-12 students in informal educational settings), underserved communities, and veterans.

    -

The 10 youth artists were led by lead artist Bijan Machen and mentored by USC students Daniel Kawah and Keviette Minor. The goal of the art project was to have the youth artists reflect on events occurring in their neighborhood and on their personal experiences and focus their art pieces on them, as well as to interview people of various backgrounds around the USC community to gain a different perspective.

    -

Since 1995, Spiral Workshop has created over 70 theme curricula. Each group intertwines learning in a medium such as painting, drawing, Photoshop, sculpture, or alternative practices with investigation of a theme that affects students and their communities.

    -

    This event contains adult themes, distressing imagery, extended use of strobe lighting, smoke effects and swearing. The following items are strictly prohibited: knives, spraycans, illegal drugs, and lawyers from the Walt Disney corporation.

    -

    Visual artists, writers, filmmakers, and playwrights concentrated many of their creative efforts on the patterns of everyday life, especially the world of work. A recurring theme was the strength and dignity of common men and women, even as they faced difficult circumstances.

    -

    Many politically active artists worked for the New Deal projects. United by a desire to use art to promote social change, these artists sympathized with the labor movement and exhibited an affinity for left-wing politics ranging from New Deal liberalism to socialism to communism.

    -

    Most New Deal artist-administrators believed deeply that the projects had a responsibility to reach out to as many Americans as possible and to put art to practical use. Such socially useful arts were not intended to create masterpieces, but they did produce many excellent works, allowed thousands of artists to pursue their vocation, and enriched and informed the lives of Americans.

    -

    (Original theme graphic by Tanner Boeger, incorporating images from HRB, Phillipe Glade, and Christopher Robin Blum and art by Airpusher Collective, Marianela Fuentes, Arturo Gonzalez, and Sarahi Carillo)

    -

    Stuart is the director of Burning Man Project's Philosophical Center and host of the Burning Man LIVE podcast. Since his first Burn in 1993 he has participated as a theme camp organizer, artist, and year-round staff member contributing to the Project's communications, education, and storytelling efforts.

    -

I really loved the art palette lollipops made with white chocolate. They were adorable; all the little girls thought they were the cutest thing and very special, and they perfectly spoke to the theme of the party.

    -

The theme for 2022 is inspired by the book The Day You Begin, by National Book Award winner Jacqueline Woodson and two-time Pura Belpré Illustrator Award winner Rafael López. The Day You Begin is a poignant yet heartening book about finding the courage to connect with others, even when you feel scared and alone. Jacqueline Woodson's lyrical text and Rafael López's dazzling art remind us that we all feel like outsiders sometimes, and how brave it is that we go forth anyway, and that sometimes, when we reach out and begin to share our stories, others will be happy to meet us halfway.

    -

Think about all the different activities and experiences you can link to your theme, so that each area of the curriculum is reflected somehow. Be creative! Ask your children for ideas, and include unusual, hands-on activities that will delight them.

    -

You can deliver your thematic unit in the way that best suits your children and circumstances. Some aspects of the unit may be best delivered to the whole group, while others will work better as small-group work. Will you devote your whole classroom to the unit, or set aside one project corner? Can you have your whole day given over to the unit, or do you need to allow time for other core areas of your teaching? The thematic unit is completely flexible.

    -

Our Art Camp Unit gives you five process-art projects you can use to run an at-home/in-class art camp. The Unit comes with printable invitations, stickers and certificates to hand out to all attendees.

    -

    This project may help a child or teen reflect on ways to find a safe space or may simply help them feel like they have some control over their environment. It can be conducted one-to-one or in small groups.

    -

    The activity involves imagining being lost at sea and visualizing the ideal lighthouse that would provide the right kind of guidance. This is a great activity for both children and adults, but an older group or individual might better appreciate the depth and symbolism of the project.

    -

    HERE WE design, develop, and deliver the most compelling entertainment experiences around the world.
    Our innovative attractions, immersive theme parks, world-class resorts, and new ventures fuse art with technology to change the landscape of themed entertainment.

    -

The concept for the project was developed by Akshata Naik, a Toronto artist who has exhibited her work in Canada, Britain, and India. Akshata lives in Toronto, where she is the Program and Gallery Manager at Arts Etobicoke. She also teaches at Art Ignite, Neilson Park Creative Centre, and Vibe Arts.

    -

    The public is invited to view Frozen Voyage during Open Houses being held on Tuesday, August 27 and Wednesday, August 28 between 11:00 AM and 1:00 PM. At the Open House, join the project by folding your own boat that will be added to the artwork. The public may also see Frozen Voyage, along with the other artwork, in Council Chambers during Council Meetings.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Autocad 2014 - Keygen voiture tamer securi How to activate Autodesk products with X-force.md b/spaces/cihyFjudo/fairness-paper-search/Autocad 2014 - Keygen voiture tamer securi How to activate Autodesk products with X-force.md deleted file mode 100644 index 8f6ac4f5e7699e6060e6eb7eea4e32a5b1df0b86..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Autocad 2014 - Keygen voiture tamer securi How to activate Autodesk products with X-force.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Autocad 2014 - Keygen voiture tamer securi


    DOWNLOAD ––– https://tinurli.com/2uwhWc



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Dear V S Bear 4 Movie Download 720p Hdl A Must-Watch for Lovers of Thriller and Romance.md b/spaces/cihyFjudo/fairness-paper-search/Dear V S Bear 4 Movie Download 720p Hdl A Must-Watch for Lovers of Thriller and Romance.md deleted file mode 100644 index 9d70f755093a9f799cdfab78be6ea73b0af335d5..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Dear V S Bear 4 Movie Download 720p Hdl A Must-Watch for Lovers of Thriller and Romance.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Dear V S Bear 4 Movie Download 720p Hdl


    Download Zip ○○○ https://tinurli.com/2uwj9x



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Lamb Of God Killadelphia Torrent _VERIFIED_ Download.md b/spaces/cihyFjudo/fairness-paper-search/Lamb Of God Killadelphia Torrent _VERIFIED_ Download.md deleted file mode 100644 index 1dd04cffb3f6c54ae9f8fde5b1a3d35ae37f5a58..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Lamb Of God Killadelphia Torrent _VERIFIED_ Download.md +++ /dev/null @@ -1,68 +0,0 @@ -## Lamb Of God Killadelphia Torrent Download - - - - - - - - - -**Lamb Of God Killadelphia Torrent Download === [https://www.google.com/url?q=https%3A%2F%2Fbyltly.com%2F2txlhF&sa=D&sntz=1&usg=AOvVaw2Ac1OAOaAAtjjQ7QMkwMOs](https://www.google.com/url?q=https%3A%2F%2Fbyltly.com%2F2txlhF&sa=D&sntz=1&usg=AOvVaw2Ac1OAOaAAtjjQ7QMkwMOs)** - - - - - - - - - - - - I'll try to create that. Here is what I created: - -# Lamb Of God Killadelphia Torrent Download: How to Get the Legendary Live Album - - - -Lamb Of God is one of the most influential and successful metal bands of the 21st century. Their live performances are legendary for their intensity, energy and brutality. One of their best live albums is Killadelphia, recorded in 2004 at the Trocadero Theatre in Philadelphia. The album captures the band at their peak, delivering a blistering set of songs from their first four albums, as well as some covers and guest appearances. - - - -If you are a fan of Lamb Of God, or just love metal music in general, you might want to get your hands on Killadelphia. However, the album is not easy to find in physical or digital formats. It was released as a CD/DVD combo in 2005, but it is now out of print and hard to come by. The album is also not available on streaming platforms like Spotify or Apple Music. - - - -So how can you listen to Killadelphia? One option is to download it from a torrent site. A torrent is a file that contains information about other files that are shared by users over a peer-to-peer network. You can use a torrent client software like BitTorrent or uTorrent to download the files you want from other users who have them. However, downloading torrents can be risky and illegal, depending on your location and the content you are downloading. You might expose your device to viruses, malware or hackers, or face legal consequences for violating copyright laws. - - - -Therefore, we do not recommend downloading Killadelphia from a torrent site. Instead, we suggest you look for other ways to get the album legally and safely. For example, you can try to find a second-hand copy of the CD/DVD combo on online marketplaces like eBay or Amazon. You can also check if your local library or music store has a copy that you can borrow or buy. Alternatively, you can watch some videos of the concert on YouTube or other video platforms. - - - -Killadelphia is a must-have for any Lamb Of God fan or metal lover. It showcases the band's raw power and talent in front of a passionate crowd. We hope you find a way to enjoy this amazing live album without resorting to torrent downloads. - -I'll try to continue the article. Here is what I created: - -If you are wondering why Killadelphia is so special, here are some reasons why it is considered one of the best live metal albums ever. First of all, the sound quality is excellent, thanks to the production of Machine, who also worked on Lamb Of God's studio albums. The mix is clear and balanced, allowing you to hear every instrument and vocal nuance. 
The album also features some bonus tracks that were not included in the DVD, such as "Ruin", "11th Hour" and "Laid to Rest". - - - -Secondly, the performance is flawless and intense. Lamb Of God plays with precision and passion, delivering every song with full force and emotion. The band members are in sync with each other and with the audience, creating a powerful connection and atmosphere. The crowd is also amazing, singing along, moshing and cheering throughout the show. You can feel the energy and excitement of being there. - - - -Thirdly, the album has some memorable moments that make it unique and unforgettable. For example, there is a guest appearance by Chris Adler's brother Willie Adler, who plays guitar on "Pariah". There is also a cover of Metallica's "Creeping Death", which features vocals by Rob Dukes of Exodus and Alex Skolnick of Testament. There is also a hilarious prank by Randy Blythe on Mark Morton, who gets hit by a pie in the face during "Black Label". And of course, there is the infamous incident where Blythe jumps into the crowd and gets into a fight with a fan who threw a bottle at him. - - - -Killadelphia is more than just a live album. It is a document of Lamb Of God's history and legacy as one of the most influential and successful metal bands of the 21st century. It is a testament to their musical skills and artistic vision. It is a tribute to their fans and their city. It is a masterpiece of metal music that deserves to be heard by everyone. - - dfd1c89656 - - - - - diff --git a/spaces/cihyFjudo/fairness-paper-search/LockLizard.PDC.Un-Protector.v2.5..rar A Simple and Effective Solution to Unlock PDC Files.md b/spaces/cihyFjudo/fairness-paper-search/LockLizard.PDC.Un-Protector.v2.5..rar A Simple and Effective Solution to Unlock PDC Files.md deleted file mode 100644 index 4df115b4a871b56394edc3acf48cdf6a618e1d3e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/LockLizard.PDC.Un-Protector.v2.5..rar A Simple and Effective Solution to Unlock PDC Files.md +++ /dev/null @@ -1,6 +0,0 @@ -

    LockLizard.PDC.Un-Protector.v2.5..rar


    Download ===== https://tinurli.com/2uwiUi



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/The The Convict A Prison Bosss Revenge on an Innocent Man.md b/spaces/cihyFjudo/fairness-paper-search/The The Convict A Prison Bosss Revenge on an Innocent Man.md deleted file mode 100644 index 0d287ee235b8dfe0f97142e0af83706af314131c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/The The Convict A Prison Bosss Revenge on an Innocent Man.md +++ /dev/null @@ -1,21 +0,0 @@ -
    -

    The Tennessee Coal, Iron and Railroad Company (TCI) one of the original 12 companies listed in the Dow Jones Industrial Index, was one of the largest users of prison laborers, mostly comprised of African Americans convicted of petty crimes. The number of convicts employed increased after United States Steel, the largest corporation in the world at the time (formerly known as U.S. Steel and USX), acquired TCI in 1907. The working and living conditions for these prisoners were brutal, as companies leasing convicts sought to house, clothe and feed them for minimal expense, with little interest in their survival. Justice-involved individuals were housed in rough board shanties unfit for the habitation of human beings. Torture and beatings were common, and countless individuals perished from abuse; poor and dangerous working conditions; communicable diseases, such as tuberculosis, malaria, and pneumonia; and from environmental conditions like contaminated water.

    -

Convict Lake and Creek are so named as the result of an ambush encounter here on September 17, 1871, when a group of inmates escaped from prison in Carson City. Sheriff George Hightower eventually caught up with the convicts and a shootout took place. Robert Morrison, a Benton merchant, and Mono Jim, along with other posse members, encountered the convicts on the present Convict Creek, then known as Monte Diablo Creek. In the encounter, Robert Morrison and Mono Jim were killed. The convicts escaped and were eventually captured later in Round Valley.

    -

    The The Convict


DOWNLOAD https://tinurli.com/2uwiuI



    -

    "This beautifully written book leads its readers on the journey from Emancipation to the devastating convict-leasing system in Georgia. . . . [and] examines the exploitation of black women's bodies, the beginnings of mass incarceration, and the rise of the modern New South."--Erica Armstrong Dunbar, The Nation

    -

    As fans may recall, in the ninth episode of Season 3, Michael learns that Martin Nash, a Black employee who recently transferred to the Scranton branch from Stamford, is a reformed convict. After Nash (played by actor and comedian Wayne Wilderson) reveals he did time for involvement in insider trading, he talks about his experience in prison, which sounds a little better than working at Dunder Mifflin. Heartbroken over the idea that his employees might prefer prison to working with him, Michael turns into Prison Mike to teach everyone that prison is bad.

    -

    One of those lines takes place after the conference room scene in which Michael, Pam, Angela, and Kevin learn that the company receives a Work Opportunity Tax Credit for employing Nash, an ex-convict.

    -

    A death row inmate awaiting execution, asked as a last wish a pencil and paper. After writing for several minutes, the convict called the prison guard and asked that this letter be handed over to his biological mother.

    -

    The purported missive from death row included no information about the identity of its writer, his location, when he wrote it, or the crimes he was charged with. Moreover, it was accompanied by a completely unrelated photograph of "hot convict" Jeremy Meeks, who became internationally notorious when his exceptionally flattering mugshot went viral in 2013. Meeks was sentenced on weapons charges, but he was not involved with a capital case (and therefore was neither sentenced to death nor executed).

    -

    There are three main issues that need to be taken into consideration in the context of pre-trial detention: firstly, pre-trial detention is overused in most countries worldwide and in many developing countries the size of the pre-trial prisoner population is larger than that of the convicted prisoner population. This situation contradicts the provisions in international standards, including ICCPR, that provide for the limited use of pre-trial detention, only when certain conditions are present. Secondly, pre-trial detention is the period most open to abuse in the criminal justice process. Recognizing the particular vulnerability of pre-trial detainees, international human rights instruments provide for a large number of very specific safeguards to ensure that the rights of detainees are not abused, that they are not ill-treated and their access to justice not hindered. Thirdly, although pre-trial detainees should be presumed innocent until found guilty by a court of law, and treated as such, conditions in pre-trial detention are often much worse than those of prisons for convicted prisoners. In addition, the lack of resources for prisons in many low-income countries means that people in detention do not have access to legal advice and assistance, with the result being that they may overstay on remand, and/or not receive a fair trial, further adding to the congestion of prisons. Therefore, improving access to justice, supporting legal and paralegal aid programmes, improving information management and cooperation between courts and prisons, to speed up the processing of cases, as well as assisting with the development of safeguards for pre-trial detainees, such as independent monitoring and inspection mechanisms, comprise important elements of UNODC's work in the field of penal reform.

    -

    -

Built in 1840 (not 1790), the Success had many lives, first as a shipping vessel serving British India and then as a passenger ship ferrying immigrants (not convicts) to Australia. During one trip to Australia, the Success arrived right at the peak of the Australian gold rush, and her crew deserted to strike it rich. Without mariners, the ship was left moored near Melbourne, Australia, where it became a prison hulk and later a stores ship.

    -

    Children who were orphaned, removed from negligent parents, or who were juvenile offenders were especially vulnerable after emancipation. They could end up in the convict leasing system as "'apprentices" and fall once more into white planters' hands. Unknown location, ca. 1903. Photo credit: Detroit Publishing Company Collection, Library of Congress.

    -

    Often completely innocent of the crimes of which they were accused, these African Americans were forced to work from sunup to sundown, in chains, under the lash and gunpoint of the white guards. Under convict leasing, Black people going about their day could be rounded up, convicted of made-up crimes, separated from their families, processed through an all-white court, and treated with little to no regard to their human value. In his book Texas Tough, historian Robert Perkinson estimates that at least 30,000 died in the convict leasing system across the South over 55 years. One can find blatant and insidious parallels between convict leasing and mass incarceration and the prison-industrial complex. As Bryan Stevenson says, "slavery did not end in 1865. It just evolved." However, convict leasing rarely appears in history textbooks. The generational loss and trauma in Black families is left unexamined.

    -

    Without ever learning about convict leasing, how can Americans make sense of the discovery of a mass gravesite in a prosperous suburban town? Will the public sweep these uncomfortable truths under the rug again?

    -

    The land where the 95 African American remains were unearthed during construction is owned by the Fort Bend Independent School District, which purchased this former convict camp and state prison land in 2011 and has been accused of mishandling the remains. At the time of writing, Fort Bend ISD continues to own and operate this cemetery unilaterally against community wishes. Moreover, there is no historical marker or other information at the site that tells the history of what happened there. They have even renamed the site with one that is unrelated to the history of convict leasing.

    -

    Americans must find the hidden chapters of their history and really begin to understand the legacy of racial oppression that has strengthened the walls of white supremacy. A version of history that omits these chapters has stolen a chance for the nation to learn from it, and to fix what has been broken by it. As Americans seek to dismantle Confederate monuments, they must also actively create new monuments and narratives that broaden their understanding of justice, democracy, and humanity. I believe that building a memorial dedicated to victims and survivors of convict leasing in Sugar Land, Texas is a step in the right direction.

    -

    After breakfast at The Flourmill Cafe, drive through the countryside for just under an hour to the Toodyay Red Hill Convict Road Station Ruins, constructed in the 1850s. The camp housed the convict road gangs that built and maintained the road to Perth. Back then, there were five buildings made of rammed earth; now the ruins of only one remain.

    -

    Established in 1853, it housed 60 ticket-of-leave convicts and put them to work at the Geraldine Lead Mine and local pastoral stations. After exploring the depot, and the nearby pretty town of Northampton, set off on the five-hour journey back to Perth, this time taking the scenic Indian Ocean Drive.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Velamma - Episode 57 - 50 Shades Of Savita [Adult Comics] - AlmeriasVelamma - Episode 57 - 50 33.md b/spaces/cihyFjudo/fairness-paper-search/Velamma - Episode 57 - 50 Shades Of Savita [Adult Comics] - AlmeriasVelamma - Episode 57 - 50 33.md deleted file mode 100644 index b98116fda0680bea753a727784c27b980e7aba03..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Velamma - Episode 57 - 50 Shades Of Savita [Adult Comics] - AlmeriasVelamma - Episode 57 - 50 33.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Velamma - Episode 57 - 50 Shades of Savita [Adult Comics] - {Almerias}Velamma - Episode 57 - 50 33


    Download Zip ····· https://tinurli.com/2uwkLU



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/maddec.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/maddec.py deleted file mode 100644 index a7010af780162fa328951897ffacb99ab00bc9dd..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/audioread/maddec.py +++ /dev/null @@ -1,86 +0,0 @@ -# This file is part of audioread. -# Copyright 2011, Adrian Sampson. -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. - -"""Decode MPEG audio files with MAD (via pymad).""" -import mad - -from . import DecodeError -from .base import AudioFile - - -class UnsupportedError(DecodeError): - """The file is not readable by MAD.""" - - -class MadAudioFile(AudioFile): - """MPEG audio file decoder using the MAD library.""" - def __init__(self, filename): - self.fp = open(filename, 'rb') - self.mf = mad.MadFile(self.fp) - if not self.mf.total_time(): # Indicates a failed open. - self.fp.close() - raise UnsupportedError() - - def close(self): - if hasattr(self, 'fp'): - self.fp.close() - if hasattr(self, 'mf'): - del self.mf - - def read_blocks(self, block_size=4096): - """Generates buffers containing PCM data for the audio file. - """ - while True: - out = self.mf.read(block_size) - if not out: - break - yield bytes(out) - - @property - def samplerate(self): - """Sample rate in Hz.""" - return self.mf.samplerate() - - @property - def duration(self): - """Length of the audio in seconds (a float).""" - return float(self.mf.total_time()) / 1000 - - @property - def channels(self): - """The number of channels.""" - if self.mf.mode() == mad.MODE_SINGLE_CHANNEL: - return 1 - elif self.mf.mode() in (mad.MODE_DUAL_CHANNEL, - mad.MODE_JOINT_STEREO, - mad.MODE_STEREO): - return 2 - else: - # Other mode? - return 2 - - def __del__(self): - self.close() - - # Iteration. - def __iter__(self): - return self.read_blocks() - - # Context manager. 
- def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - return False diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/httpsredirect.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/httpsredirect.py deleted file mode 100644 index b7a3d8e078574e87dc6e345d621f5a596c3bdc1e..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/middleware/httpsredirect.py +++ /dev/null @@ -1,3 +0,0 @@ -from starlette.middleware.httpsredirect import ( # noqa - HTTPSRedirectMiddleware as HTTPSRedirectMiddleware, -) diff --git a/spaces/codebox/diffuse-flood/build/_app/immutable/chunks/2-efe340ca.js b/spaces/codebox/diffuse-flood/build/_app/immutable/chunks/2-efe340ca.js deleted file mode 100644 index 4a7328a64257f71e2c5e75df16d0907faa5eb020..0000000000000000000000000000000000000000 --- a/spaces/codebox/diffuse-flood/build/_app/immutable/chunks/2-efe340ca.js +++ /dev/null @@ -1 +0,0 @@ -import{default as m}from"../components/pages/_page.svelte-2a5d0087.js";import"./index-a207c28c.js";export{m as component}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3enc_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3enc_template.c deleted file mode 100644 index be4ecebc9ccc81d1e149e016856af693cadba207..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3enc_template.c +++ /dev/null @@ -1,403 +0,0 @@ -/* - * AC-3 encoder float/fixed template - * Copyright (c) 2000 Fabrice Bellard - * Copyright (c) 2006-2011 Justin Ruggles - * Copyright (c) 2006-2010 Prakash Punnoor - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * AC-3 encoder float/fixed template - */ - -#include "config_components.h" - -#include - -#include "libavutil/attributes.h" -#include "libavutil/internal.h" -#include "libavutil/mem_internal.h" - -#include "audiodsp.h" -#include "ac3enc.h" -#include "eac3enc.h" - - -static int allocate_sample_buffers(AC3EncodeContext *s) -{ - int ch; - - if (!FF_ALLOC_TYPED_ARRAY(s->windowed_samples, AC3_WINDOW_SIZE) || - !FF_ALLOCZ_TYPED_ARRAY(s->planar_samples, s->channels)) - return AVERROR(ENOMEM); - - for (ch = 0; ch < s->channels; ch++) { - if (!(s->planar_samples[ch] = av_mallocz((AC3_FRAME_SIZE + AC3_BLOCK_SIZE) * - sizeof(**s->planar_samples)))) - return AVERROR(ENOMEM); - } - return 0; -} - - -/* - * Copy input samples. - * Channels are reordered from FFmpeg's default order to AC-3 order. 
- */ -static void copy_input_samples(AC3EncodeContext *s, SampleType **samples) -{ - int ch; - - /* copy and remap input samples */ - for (ch = 0; ch < s->channels; ch++) { - /* copy last 256 samples of previous frame to the start of the current frame */ - memcpy(&s->planar_samples[ch][0], &s->planar_samples[ch][AC3_BLOCK_SIZE * s->num_blocks], - AC3_BLOCK_SIZE * sizeof(s->planar_samples[0][0])); - - /* copy new samples for current frame */ - memcpy(&s->planar_samples[ch][AC3_BLOCK_SIZE], - samples[s->channel_map[ch]], - AC3_BLOCK_SIZE * s->num_blocks * sizeof(s->planar_samples[0][0])); - } -} - - -/* - * Apply the MDCT to input samples to generate frequency coefficients. - * This applies the KBD window and normalizes the input to reduce precision - * loss due to fixed-point calculations. - */ -static void apply_mdct(AC3EncodeContext *s) -{ - int blk, ch; - - for (ch = 0; ch < s->channels; ch++) { - for (blk = 0; blk < s->num_blocks; blk++) { - AC3Block *block = &s->blocks[blk]; - const SampleType *input_samples = &s->planar_samples[ch][blk * AC3_BLOCK_SIZE]; - - s->fdsp->vector_fmul(s->windowed_samples, input_samples, - s->mdct_window, AC3_BLOCK_SIZE); - s->fdsp->vector_fmul_reverse(s->windowed_samples + AC3_BLOCK_SIZE, - &input_samples[AC3_BLOCK_SIZE], - s->mdct_window, AC3_BLOCK_SIZE); - - s->tx_fn(s->tx, block->mdct_coef[ch+1], - s->windowed_samples, sizeof(float)); - } - } -} - - -/* - * Calculate coupling channel and coupling coordinates. - */ -static void apply_channel_coupling(AC3EncodeContext *s) -{ - LOCAL_ALIGNED_16(CoefType, cpl_coords, [AC3_MAX_BLOCKS], [AC3_MAX_CHANNELS][16]); -#if AC3ENC_FLOAT - LOCAL_ALIGNED_16(int32_t, fixed_cpl_coords, [AC3_MAX_BLOCKS], [AC3_MAX_CHANNELS][16]); -#else - int32_t (*fixed_cpl_coords)[AC3_MAX_CHANNELS][16] = cpl_coords; -#endif - int av_uninit(blk), ch, bnd, i, j; - CoefSumType energy[AC3_MAX_BLOCKS][AC3_MAX_CHANNELS][16] = {{{0}}}; - int cpl_start, num_cpl_coefs; - - memset(cpl_coords, 0, AC3_MAX_BLOCKS * sizeof(*cpl_coords)); -#if AC3ENC_FLOAT - memset(fixed_cpl_coords, 0, AC3_MAX_BLOCKS * sizeof(*cpl_coords)); -#endif - - /* align start to 16-byte boundary. align length to multiple of 32. 
- note: coupling start bin % 4 will always be 1 */ - cpl_start = s->start_freq[CPL_CH] - 1; - num_cpl_coefs = FFALIGN(s->num_cpl_subbands * 12 + 1, 32); - cpl_start = FFMIN(256, cpl_start + num_cpl_coefs) - num_cpl_coefs; - - /* calculate coupling channel from fbw channels */ - for (blk = 0; blk < s->num_blocks; blk++) { - AC3Block *block = &s->blocks[blk]; - CoefType *cpl_coef = &block->mdct_coef[CPL_CH][cpl_start]; - if (!block->cpl_in_use) - continue; - memset(cpl_coef, 0, num_cpl_coefs * sizeof(*cpl_coef)); - for (ch = 1; ch <= s->fbw_channels; ch++) { - CoefType *ch_coef = &block->mdct_coef[ch][cpl_start]; - if (!block->channel_in_cpl[ch]) - continue; - for (i = 0; i < num_cpl_coefs; i++) - cpl_coef[i] += ch_coef[i]; - } - - /* coefficients must be clipped in order to be encoded */ - clip_coefficients(&s->adsp, cpl_coef, num_cpl_coefs); - } - - /* calculate energy in each band in coupling channel and each fbw channel */ - /* TODO: possibly use SIMD to speed up energy calculation */ - bnd = 0; - i = s->start_freq[CPL_CH]; - while (i < s->cpl_end_freq) { - int band_size = s->cpl_band_sizes[bnd]; - for (ch = CPL_CH; ch <= s->fbw_channels; ch++) { - for (blk = 0; blk < s->num_blocks; blk++) { - AC3Block *block = &s->blocks[blk]; - if (!block->cpl_in_use || (ch > CPL_CH && !block->channel_in_cpl[ch])) - continue; - for (j = 0; j < band_size; j++) { - CoefType v = block->mdct_coef[ch][i+j]; - MAC_COEF(energy[blk][ch][bnd], v, v); - } - } - } - i += band_size; - bnd++; - } - - /* calculate coupling coordinates for all blocks for all channels */ - for (blk = 0; blk < s->num_blocks; blk++) { - AC3Block *block = &s->blocks[blk]; - if (!block->cpl_in_use) - continue; - for (ch = 1; ch <= s->fbw_channels; ch++) { - if (!block->channel_in_cpl[ch]) - continue; - for (bnd = 0; bnd < s->num_cpl_bands; bnd++) { - cpl_coords[blk][ch][bnd] = calc_cpl_coord(energy[blk][ch][bnd], - energy[blk][CPL_CH][bnd]); - } - } - } - - /* determine which blocks to send new coupling coordinates for */ - for (blk = 0; blk < s->num_blocks; blk++) { - AC3Block *block = &s->blocks[blk]; - AC3Block *block0 = blk ? &s->blocks[blk-1] : NULL; - - memset(block->new_cpl_coords, 0, sizeof(block->new_cpl_coords)); - - if (block->cpl_in_use) { - /* send new coordinates if this is the first block, if previous - * block did not use coupling but this block does, the channels - * using coupling has changed from the previous block, or the - * coordinate difference from the last block for any channel is - * greater than a threshold value. 
*/ - if (blk == 0 || !block0->cpl_in_use) { - for (ch = 1; ch <= s->fbw_channels; ch++) - block->new_cpl_coords[ch] = 1; - } else { - for (ch = 1; ch <= s->fbw_channels; ch++) { - if (!block->channel_in_cpl[ch]) - continue; - if (!block0->channel_in_cpl[ch]) { - block->new_cpl_coords[ch] = 1; - } else { - CoefSumType coord_diff = 0; - for (bnd = 0; bnd < s->num_cpl_bands; bnd++) { - coord_diff += FFABS(cpl_coords[blk-1][ch][bnd] - - cpl_coords[blk ][ch][bnd]); - } - coord_diff /= s->num_cpl_bands; - if (coord_diff > NEW_CPL_COORD_THRESHOLD) - block->new_cpl_coords[ch] = 1; - } - } - } - } - } - - /* calculate final coupling coordinates, taking into account reusing of - coordinates in successive blocks */ - for (bnd = 0; bnd < s->num_cpl_bands; bnd++) { - blk = 0; - while (blk < s->num_blocks) { - int av_uninit(blk1); - AC3Block *block = &s->blocks[blk]; - - if (!block->cpl_in_use) { - blk++; - continue; - } - - for (ch = 1; ch <= s->fbw_channels; ch++) { - CoefSumType energy_ch, energy_cpl; - if (!block->channel_in_cpl[ch]) - continue; - energy_cpl = energy[blk][CPL_CH][bnd]; - energy_ch = energy[blk][ch][bnd]; - blk1 = blk+1; - while (blk1 < s->num_blocks && !s->blocks[blk1].new_cpl_coords[ch]) { - if (s->blocks[blk1].cpl_in_use) { - energy_cpl += energy[blk1][CPL_CH][bnd]; - energy_ch += energy[blk1][ch][bnd]; - } - blk1++; - } - cpl_coords[blk][ch][bnd] = calc_cpl_coord(energy_ch, energy_cpl); - } - blk = blk1; - } - } - - /* calculate exponents/mantissas for coupling coordinates */ - for (blk = 0; blk < s->num_blocks; blk++) { - AC3Block *block = &s->blocks[blk]; - if (!block->cpl_in_use) - continue; - -#if AC3ENC_FLOAT - s->ac3dsp.float_to_fixed24(fixed_cpl_coords[blk][1], - cpl_coords[blk][1], - s->fbw_channels * 16); -#endif - s->ac3dsp.extract_exponents(block->cpl_coord_exp[1], - fixed_cpl_coords[blk][1], - s->fbw_channels * 16); - - for (ch = 1; ch <= s->fbw_channels; ch++) { - int bnd, min_exp, max_exp, master_exp; - - if (!block->new_cpl_coords[ch]) - continue; - - /* determine master exponent */ - min_exp = max_exp = block->cpl_coord_exp[ch][0]; - for (bnd = 1; bnd < s->num_cpl_bands; bnd++) { - int exp = block->cpl_coord_exp[ch][bnd]; - min_exp = FFMIN(exp, min_exp); - max_exp = FFMAX(exp, max_exp); - } - master_exp = ((max_exp - 15) + 2) / 3; - master_exp = FFMAX(master_exp, 0); - while (min_exp < master_exp * 3) - master_exp--; - for (bnd = 0; bnd < s->num_cpl_bands; bnd++) { - block->cpl_coord_exp[ch][bnd] = av_clip(block->cpl_coord_exp[ch][bnd] - - master_exp * 3, 0, 15); - } - block->cpl_master_exp[ch] = master_exp; - - /* quantize mantissas */ - for (bnd = 0; bnd < s->num_cpl_bands; bnd++) { - int cpl_exp = block->cpl_coord_exp[ch][bnd]; - int cpl_mant = (fixed_cpl_coords[blk][ch][bnd] << (5 + cpl_exp + master_exp * 3)) >> 24; - if (cpl_exp == 15) - cpl_mant >>= 1; - else - cpl_mant -= 16; - - block->cpl_coord_mant[ch][bnd] = cpl_mant; - } - } - } - - if (AC3ENC_FLOAT && CONFIG_EAC3_ENCODER && s->eac3) - ff_eac3_set_cpl_states(s); -} - - -/* - * Determine rematrixing flags for each block and band. 
- */ -static void compute_rematrixing_strategy(AC3EncodeContext *s) -{ - int nb_coefs; - int blk, bnd; - AC3Block *block, *block0 = NULL; - - if (s->channel_mode != AC3_CHMODE_STEREO) - return; - - for (blk = 0; blk < s->num_blocks; blk++) { - block = &s->blocks[blk]; - block->new_rematrixing_strategy = !blk; - - block->num_rematrixing_bands = 4; - if (block->cpl_in_use) { - block->num_rematrixing_bands -= (s->start_freq[CPL_CH] <= 61); - block->num_rematrixing_bands -= (s->start_freq[CPL_CH] == 37); - if (blk && block->num_rematrixing_bands != block0->num_rematrixing_bands) - block->new_rematrixing_strategy = 1; - } - nb_coefs = FFMIN(block->end_freq[1], block->end_freq[2]); - - if (!s->rematrixing_enabled) { - block0 = block; - continue; - } - - for (bnd = 0; bnd < block->num_rematrixing_bands; bnd++) { - /* calculate sum of squared coeffs for one band in one block */ - int start = ff_ac3_rematrix_band_tab[bnd]; - int end = FFMIN(nb_coefs, ff_ac3_rematrix_band_tab[bnd+1]); - CoefSumType sum[4]; - sum_square_butterfly(s, sum, block->mdct_coef[1] + start, - block->mdct_coef[2] + start, end - start); - - /* compare sums to determine if rematrixing will be used for this band */ - if (FFMIN(sum[2], sum[3]) < FFMIN(sum[0], sum[1])) - block->rematrixing_flags[bnd] = 1; - else - block->rematrixing_flags[bnd] = 0; - - /* determine if new rematrixing flags will be sent */ - if (blk && - block->rematrixing_flags[bnd] != block0->rematrixing_flags[bnd]) { - block->new_rematrixing_strategy = 1; - } - } - block0 = block; - } -} - - -int AC3_NAME(encode_frame)(AVCodecContext *avctx, AVPacket *avpkt, - const AVFrame *frame, int *got_packet_ptr) -{ - AC3EncodeContext *s = avctx->priv_data; - int ret; - - if (s->options.allow_per_frame_metadata) { - ret = ff_ac3_validate_metadata(s); - if (ret) - return ret; - } - - if (s->bit_alloc.sr_code == 1 || (AC3ENC_FLOAT && s->eac3)) - ff_ac3_adjust_frame_size(s); - - copy_input_samples(s, (SampleType **)frame->extended_data); - - apply_mdct(s); - - s->cpl_on = s->cpl_enabled; - ff_ac3_compute_coupling_strategy(s); - - if (s->cpl_on) - apply_channel_coupling(s); - - compute_rematrixing_strategy(s); - -#if AC3ENC_FLOAT - scale_coefficients(s); -#endif - - return ff_ac3_encode_frame_common_end(avctx, avpkt, frame, got_packet_ptr); -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/iirfilter.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/iirfilter.c deleted file mode 100644 index 903d64e8d46c325bf2a2fa287485834e62cbe934..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/iirfilter.c +++ /dev/null @@ -1,336 +0,0 @@ -/* - * IIR filter - * Copyright (c) 2008 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * different IIR filters implementation - */ - -#include - -#include "libavutil/attributes.h" -#include "libavutil/common.h" - -#include "iirfilter.h" - -/** - * IIR filter global parameters - */ -typedef struct FFIIRFilterCoeffs { - int order; - float gain; - int *cx; - float *cy; -} FFIIRFilterCoeffs; - -/** - * IIR filter state - */ -typedef struct FFIIRFilterState { - float x[1]; -} FFIIRFilterState; - -/// maximum supported filter order -#define MAXORDER 30 - -static av_cold int butterworth_init_coeffs(void *avc, - struct FFIIRFilterCoeffs *c, - enum IIRFilterMode filt_mode, - int order, float cutoff_ratio, - float stopband) -{ - int i, j; - double wa; - double p[MAXORDER + 1][2]; - - if (filt_mode != FF_FILTER_MODE_LOWPASS) { - av_log(avc, AV_LOG_ERROR, "Butterworth filter currently only supports " - "low-pass filter mode\n"); - return -1; - } - if (order & 1) { - av_log(avc, AV_LOG_ERROR, "Butterworth filter currently only supports " - "even filter orders\n"); - return -1; - } - - wa = 2 * tan(M_PI * 0.5 * cutoff_ratio); - - c->cx[0] = 1; - for (i = 1; i < (order >> 1) + 1; i++) - c->cx[i] = c->cx[i - 1] * (order - i + 1LL) / i; - - p[0][0] = 1.0; - p[0][1] = 0.0; - for (i = 1; i <= order; i++) - p[i][0] = p[i][1] = 0.0; - for (i = 0; i < order; i++) { - double zp[2]; - double th = (i + (order >> 1) + 0.5) * M_PI / order; - double a_re, a_im, c_re, c_im; - zp[0] = cos(th) * wa; - zp[1] = sin(th) * wa; - a_re = zp[0] + 2.0; - c_re = zp[0] - 2.0; - a_im = - c_im = zp[1]; - zp[0] = (a_re * c_re + a_im * c_im) / (c_re * c_re + c_im * c_im); - zp[1] = (a_im * c_re - a_re * c_im) / (c_re * c_re + c_im * c_im); - - for (j = order; j >= 1; j--) { - a_re = p[j][0]; - a_im = p[j][1]; - p[j][0] = a_re * zp[0] - a_im * zp[1] + p[j - 1][0]; - p[j][1] = a_re * zp[1] + a_im * zp[0] + p[j - 1][1]; - } - a_re = p[0][0] * zp[0] - p[0][1] * zp[1]; - p[0][1] = p[0][0] * zp[1] + p[0][1] * zp[0]; - p[0][0] = a_re; - } - c->gain = p[order][0]; - for (i = 0; i < order; i++) { - c->gain += p[i][0]; - c->cy[i] = (-p[i][0] * p[order][0] + -p[i][1] * p[order][1]) / - (p[order][0] * p[order][0] + p[order][1] * p[order][1]); - } - c->gain /= 1 << order; - - return 0; -} - -static av_cold int biquad_init_coeffs(void *avc, struct FFIIRFilterCoeffs *c, - enum IIRFilterMode filt_mode, int order, - float cutoff_ratio, float stopband) -{ - double cos_w0, sin_w0; - double a0, x0, x1; - - if (filt_mode != FF_FILTER_MODE_HIGHPASS && - filt_mode != FF_FILTER_MODE_LOWPASS) { - av_log(avc, AV_LOG_ERROR, "Biquad filter currently only supports " - "high-pass and low-pass filter modes\n"); - return -1; - } - if (order != 2) { - av_log(avc, AV_LOG_ERROR, "Biquad filter must have order of 2\n"); - return -1; - } - - cos_w0 = cos(M_PI * cutoff_ratio); - sin_w0 = sin(M_PI * cutoff_ratio); - - a0 = 1.0 + (sin_w0 / 2.0); - - if (filt_mode == FF_FILTER_MODE_HIGHPASS) { - c->gain = ((1.0 + cos_w0) / 2.0) / a0; - x0 = ((1.0 + cos_w0) / 2.0) / a0; - x1 = (-(1.0 + cos_w0)) / a0; - } else { // FF_FILTER_MODE_LOWPASS - c->gain = ((1.0 - cos_w0) / 2.0) / a0; - x0 = ((1.0 - cos_w0) / 2.0) / a0; - x1 = (1.0 - cos_w0) / a0; - } - c->cy[0] = (-1.0 + (sin_w0 / 2.0)) / a0; - c->cy[1] = (2.0 * cos_w0) / a0; - - // divide by gain to make the x coeffs integers. 
- // during filtering, the delay state will include the gain multiplication - c->cx[0] = lrintf(x0 / c->gain); - c->cx[1] = lrintf(x1 / c->gain); - - return 0; -} - -av_cold struct FFIIRFilterCoeffs *ff_iir_filter_init_coeffs(void *avc, - enum IIRFilterType filt_type, - enum IIRFilterMode filt_mode, - int order, float cutoff_ratio, - float stopband, float ripple) -{ - FFIIRFilterCoeffs *c; - int ret = 0; - - if (order <= 0 || order > MAXORDER || cutoff_ratio >= 1.0) - return NULL; - - if (!(c = av_mallocz(sizeof(*c))) || - !(c->cx = av_malloc (sizeof(c->cx[0]) * ((order >> 1) + 1))) || - !(c->cy = av_malloc (sizeof(c->cy[0]) * order))) - goto free; - c->order = order; - - switch (filt_type) { - case FF_FILTER_TYPE_BUTTERWORTH: - ret = butterworth_init_coeffs(avc, c, filt_mode, order, cutoff_ratio, - stopband); - break; - case FF_FILTER_TYPE_BIQUAD: - ret = biquad_init_coeffs(avc, c, filt_mode, order, cutoff_ratio, - stopband); - break; - default: - av_log(avc, AV_LOG_ERROR, "filter type is not currently implemented\n"); - goto free; - } - - if (!ret) - return c; -free: - ff_iir_filter_free_coeffsp(&c); - return NULL; -} - -av_cold struct FFIIRFilterState *ff_iir_filter_init_state(int order) -{ - FFIIRFilterState *s = av_mallocz(sizeof(FFIIRFilterState) + sizeof(s->x[0]) * (order - 1)); - return s; -} - -#define CONV_S16(dest, source) dest = av_clip_int16(lrintf(source)); - -#define CONV_FLT(dest, source) dest = source; - -#define FILTER_BW_O4_1(i0, i1, i2, i3, fmt) \ - in = *src0 * c->gain + \ - c->cy[0] * s->x[i0] + \ - c->cy[1] * s->x[i1] + \ - c->cy[2] * s->x[i2] + \ - c->cy[3] * s->x[i3]; \ - res = (s->x[i0] + in) * 1 + \ - (s->x[i1] + s->x[i3]) * 4 + \ - s->x[i2] * 6; \ - CONV_ ## fmt(*dst0, res) \ - s->x[i0] = in; \ - src0 += sstep; \ - dst0 += dstep; - -#define FILTER_BW_O4(type, fmt) { \ - int i; \ - const type *src0 = src; \ - type *dst0 = dst; \ - for (i = 0; i < size; i += 4) { \ - float in, res; \ - FILTER_BW_O4_1(0, 1, 2, 3, fmt); \ - FILTER_BW_O4_1(1, 2, 3, 0, fmt); \ - FILTER_BW_O4_1(2, 3, 0, 1, fmt); \ - FILTER_BW_O4_1(3, 0, 1, 2, fmt); \ - } \ -} - -#define FILTER_DIRECT_FORM_II(type, fmt) { \ - int i; \ - const type *src0 = src; \ - type *dst0 = dst; \ - for (i = 0; i < size; i++) { \ - int j; \ - float in, res; \ - in = *src0 * c->gain; \ - for (j = 0; j < c->order; j++) \ - in += c->cy[j] * s->x[j]; \ - res = s->x[0] + in + s->x[c->order >> 1] * c->cx[c->order >> 1]; \ - for (j = 1; j < c->order >> 1; j++) \ - res += (s->x[j] + s->x[c->order - j]) * c->cx[j]; \ - for (j = 0; j < c->order - 1; j++) \ - s->x[j] = s->x[j + 1]; \ - CONV_ ## fmt(*dst0, res) \ - s->x[c->order - 1] = in; \ - src0 += sstep; \ - dst0 += dstep; \ - } \ -} - -#define FILTER_O2(type, fmt) { \ - int i; \ - const type *src0 = src; \ - type *dst0 = dst; \ - for (i = 0; i < size; i++) { \ - float in = *src0 * c->gain + \ - s->x[0] * c->cy[0] + \ - s->x[1] * c->cy[1]; \ - CONV_ ## fmt(*dst0, s->x[0] + in + s->x[1] * c->cx[1]) \ - s->x[0] = s->x[1]; \ - s->x[1] = in; \ - src0 += sstep; \ - dst0 += dstep; \ - } \ -} - -void ff_iir_filter(const struct FFIIRFilterCoeffs *c, - struct FFIIRFilterState *s, int size, - const int16_t *src, ptrdiff_t sstep, - int16_t *dst, ptrdiff_t dstep) -{ - if (c->order == 2) { - FILTER_O2(int16_t, S16) - } else if (c->order == 4) { - FILTER_BW_O4(int16_t, S16) - } else { - FILTER_DIRECT_FORM_II(int16_t, S16) - } -} - -/** - * Perform IIR filtering on floating-point input samples. 
- * - * @param coeffs pointer to filter coefficients - * @param state pointer to filter state - * @param size input length - * @param src source samples - * @param sstep source stride - * @param dst filtered samples (destination may be the same as input) - * @param dstep destination stride - */ -static void iir_filter_flt(const struct FFIIRFilterCoeffs *c, - struct FFIIRFilterState *s, int size, - const float *src, ptrdiff_t sstep, - float *dst, ptrdiff_t dstep) -{ - if (c->order == 2) { - FILTER_O2(float, FLT) - } else if (c->order == 4) { - FILTER_BW_O4(float, FLT) - } else { - FILTER_DIRECT_FORM_II(float, FLT) - } -} - -av_cold void ff_iir_filter_free_statep(struct FFIIRFilterState **state) -{ - av_freep(state); -} - -av_cold void ff_iir_filter_free_coeffsp(struct FFIIRFilterCoeffs **coeffsp) -{ - struct FFIIRFilterCoeffs *coeffs = *coeffsp; - if (coeffs) { - av_freep(&coeffs->cx); - av_freep(&coeffs->cy); - } - av_freep(coeffsp); -} - -void ff_iir_filter_init(FFIIRFilterContext *f) { - f->filter_flt = iir_filter_flt; - -#if HAVE_MIPSFPU - ff_iir_filter_init_mips(f); -#endif -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ambient Music Mod How to get the Pixels Now Playing feature on any Android device.md b/spaces/congsaPfin/Manga-OCR/logs/Ambient Music Mod How to get the Pixels Now Playing feature on any Android device.md deleted file mode 100644 index fca1dcb9991a1c84a83ab0534094f691984b8504..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Ambient Music Mod How to get the Pixels Now Playing feature on any Android device.md +++ /dev/null @@ -1,124 +0,0 @@ -
    -

    Ambient Music Mod APK Download: Enjoy the Pixel's Now Playing Feature on Any Android Device

    -

    Do you love listening to music and discovering new songs? Do you wish your phone could recognize any song playing around you and show it on your lock screen? If you answered yes, then you might be interested in Ambient Music Mod, a free app that ports the Google Pixel's Now Playing feature to other Android devices. In this article, we will tell you what Ambient Music Mod is, how to install it, and what benefits it can bring to your musical experience.

    -

    ambient music mod apk download


Download https://urlca.com/2uOeb1



    -

    What is Ambient Music Mod?

    -

    Ambient Music Mod is a Shizuku/Sui app that ports Now Playing from Pixels to other Android devices. Now Playing is a feature that automatically identifies songs playing in the background using an offline database and displays them on the lock screen or in a history list. It was introduced by Google in 2017 with the Pixel 2 and has remained exclusive to the Pixel lineup ever since.

    -

Ambient Music Mod was created by Kieron Quinn, also known as Quinny899 on the XDA Forums, who managed to port the feature to other Android smartphones using the Shizuku service or the Sui Magisk module. Shizuku is a service that allows third-party apps to access system-level APIs through ADB, while Sui is a Magisk module that provides rootless superuser access. Ambient Music Mod does not require root access on devices running Android 12 or higher, but it does require root access on older Android versions.
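For readers who are comfortable using a computer, the Shizuku service is normally started over ADB. The snippet below is only a rough sketch of that step using Python's subprocess module; it is not part of Ambient Music Mod itself, and the start.sh path is the one published by the Shizuku project, so check the current Shizuku documentation in case it changes.

```python
# Rough sketch: start the Shizuku service on a connected phone via ADB.
# Assumes adb is installed on the computer and USB debugging is enabled on the phone.
# SHIZUKU_START is the starter script path published by the Shizuku project;
# verify it against the Shizuku docs for your version before relying on it.
import subprocess

SHIZUKU_START = "/storage/emulated/0/Android/data/moe.shizuku.privileged.api/start.sh"

def start_shizuku() -> None:
    # Confirm a device is attached before trying to run anything on it.
    subprocess.run(["adb", "devices"], check=True)
    # Run Shizuku's starter script on the device.
    subprocess.run(["adb", "shell", "sh", SHIZUKU_START], check=True)

if __name__ == "__main__":
    start_shizuku()
```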

    -

    Features of Ambient Music Mod

    -

    Ambient Music Mod offers a lot of features that make it a great app for music lovers. Here are some of them:

    -

    Full Now Playing support

    -

    Ambient Music Mod uses the latest version of Now Playing from Pixel devices and the latest music databases. It can recognize over 100,000 songs from various genres and languages, even if they are not very popular or mainstream. It can also recognize songs that are not in the local database using Google Assistant's recognition engine.

    -

    Automatic Ambient Music recognition

    -

    Ambient Music Mod runs in the background and listens for music playing around you. It can recognize songs every 15 seconds, every minute, or every 5 minutes, depending on your preference. You can also adjust the sensitivity and gain settings to improve the recognition accuracy. You can choose whether to show notifications or not when a song is recognized.

    -

    ambient music mod shizuku apk download
    -ambient music mod v2 apk download
    -ambient music mod now playing apk download
    -ambient music mod android 12 apk download
    -ambient music mod github apk download
    -ambient music mod xda apk download
    -ambient music mod sui apk download
    -ambient music mod latest version apk download
    -ambient music mod pixel feature apk download
    -ambient music mod no root apk download
    -ambient music mod on demand apk download
    -ambient music mod lock screen apk download
    -ambient music mod track list apk download
    -ambient music mod alternative encoding apk download
    -ambient music mod gain settings apk download
    -ambient music mod history and favorites apk download
    -ambient music mod widget apk download
    -ambient music mod google assistant apk download
    -ambient music mod database location apk download
    -ambient music mod distortion fix apk download
    -ambient music mod android 14 beta apk download
    -ambient music mod port from pixels apk download
    -ambient music mod system level api apk download
    -ambient music mod standalone app apk download
    -ambient music mod magisk module apk download
    -ambient music mod xposed dependencies apk download
    -ambient music mod hybrid solution apk download
    -ambient music mod hotfixes apk download
    -ambient music mod installation guide apk download
    -ambient music mod sideload android app apk download
    -ambient music mod adb interface apk download
    -ambient music mod accessibility service apk download
    -ambient music mod kieron quinn apk download
    -ambient music mod quinny899 apk download
    -ambient music mod releases tags apk download
    -ambient music mod issues pull requests apk download
    -ambient music mod code wiki security apk download
    -ambient music mod fork star code apk download
    -ambient music mod screenshots assets apk download
    -ambient music mod building instructions apk download

    -

    Now Playing History, Favourites and Summary support

    -

    Ambient Music Mod keeps track of all the songs that it recognizes and shows them in a history list. You can view the song title, artist name, album art, date and time of recognition, and source of recognition (local or online). You can also mark songs as favourites and view them in a separate list. You can also view a summary of your musical preferences based on the songs that you have listened to.

    -

    Manual and On Demand recognition

    -

    If you want to manually trigger a recognition, you can use the app's widget or shortcut. You can also use the On Demand recognition feature, which uses Google Assistant's recognition engine for songs that are not in the local database. This feature requires an internet connection and works on supported devices only.

    -

    Lock screen display and database customization

    -

    Ambient Music Mod can show the recognized songs on your lock screen using an accessibility service. You can customize the appearance and position of the lock screen display according to your liking. You can also customize the database of songs that Ambient Music Mod uses by adding or removing songs from the app's settings.
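If you ever want to check from a computer whether the lock screen accessibility service ended up enabled, the sketch below reads the relevant system setting over ADB. The enabled_accessibility_services key is a standard Android secure setting; the exact component name that Ambient Music Mod registers is not shown here, so treat the output as something to inspect rather than parse.

```python
# Rough sketch: list which accessibility services are currently enabled on the phone.
# Assumes adb is installed and USB debugging is enabled. This only reads the
# standard Android secure setting; enable the service itself from the phone's
# Settings app, as covered in the install steps later in this article.
import subprocess

def enabled_accessibility_services() -> str:
    result = subprocess.run(
        ["adb", "shell", "settings", "get", "secure", "enabled_accessibility_services"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(enabled_accessibility_services())
```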

    -

    How to install Ambient Music Mod?

    -

    If you want to try Ambient Music Mod on your Android device, you will need to follow some steps to install it properly. Here are the requirements and instructions for installing Ambient Music Mod:

    -

    Requirements for Ambient Music Mod

    -

    To install Ambient Music Mod, you will need the following:

    -
      -
    • An Android device running Android 8.0 Oreo or higher (Android 12 or higher is recommended)
    • -
• The Shizuku app or the Sui Magisk module installed on your device (Sui is recommended for Android 12 or higher)
    • -
    • Ambient Music Mod APK file downloaded from the official XDA thread
    • -
    • Google Play Services and Google Assistant installed and updated on your device
    • -
    • An internet connection for downloading the music database and using the On Demand recognition feature
    • -
    -

    Download and setup steps for Ambient Music Mod

    -

    Once you have met the requirements, you can follow these steps to install and set up Ambient Music Mod on your device:

    -
      -
    1. Download the latest version of Ambient Music Mod APK file from the official XDA thread and save it on your device.
    2. -
3. Install the APK file by tapping on it and granting the required permissions (or sideload it over ADB from a computer, as sketched after this list).
    4. -
    5. Open the app and grant the Shizuku or Sui permission when prompted.
    6. -
    7. Download the music database from the app's settings. You can choose between a small database (about 100 MB) or a large database (about 500 MB).
    8. -
    9. Enable the accessibility service for Ambient Music Mod from your device's settings.
    10. -
    11. Configure the app's settings according to your preference. You can adjust the recognition interval, sensitivity, gain, notification, lock screen display, database customization, and more.
    12. -
    13. Enjoy the Pixel's Now Playing feature on your Android device!
    14. -
    -
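If you prefer to sideload from a computer instead of tapping the APK on the phone, the following sketch shows the same install done over ADB, again via Python's subprocess module. The file name is only an example of what the downloaded APK might be called.

```python
# Rough sketch: sideload the downloaded APK over ADB instead of installing on-device.
# Assumes adb is on PATH and the phone is connected with USB debugging enabled.
# APK_PATH is a hypothetical local file name for the download from the XDA thread.
import subprocess

APK_PATH = "AmbientMusicMod.apk"  # example name; use whatever you actually downloaded

def sideload(apk: str = APK_PATH) -> None:
    # -r reinstalls/updates the app if an older version is already present.
    subprocess.run(["adb", "install", "-r", apk], check=True)

if __name__ == "__main__":
    sideload()
```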

    Benefits of Ambient Music Mod

    -

    Ambient Music Mod is not only a cool app that lets you enjoy the Pixel's Now Playing feature on your Android device, but it also has some benefits that can enhance your musical experience. Here are some of them:

    -

What is ambient music and why is it good for you?

    -

    Definition and characteristics of ambient music

    -

    Ambient music is a genre of music that focuses on creating a mood or atmosphere rather than a melody or rhythm. It often uses sounds from nature, synthesizers, drones, loops, and minimal vocals. It is usually played at low volumes and in the background, creating a subtle and relaxing sound environment.

    -

    Therapeutic effects of ambient music on the brain and mood

    -

    Ambient music can have positive effects on your brain and mood, such as:

    -
      
    • Reducing stress and anxiety by lowering your heart rate, blood pressure, and cortisol levels
    • -
    • Improving focus and concentration by blocking out distracting noises and enhancing cognitive performance
    • -
    • Enhancing creativity and imagination by stimulating your right brain hemisphere and inducing a state of flow
    • -
    • Promoting sleep and relaxation by synchronizing your brain waves with the sound frequencies and inducing a state of calmness
    • -
    

    -

    Ambient music can also help you discover new songs and artists that you might not have heard before. Ambient Music Mod can recognize songs from various genres and languages, including ambient music. You can view the history of recognized songs and explore more about them online. You can also mark songs as favourites and create your own playlist of ambient music.

    -

    Examples of ambient music and artists

    -

    If you are new to ambient music or want to expand your musical horizons, here are some examples of ambient music and artists that you can check out:

    -
      -
    • Brian Eno - The pioneer of ambient music who coined the term in 1978. His albums include Music for Airports, Apollo: Atmospheres and Soundtracks, and Ambient 4: On Land.
    • -
    • Laurie Spiegel - A composer and electronic musician who created ambient music using algorithms and computer programs. Her albums include The Expanding Universe, Unseen Worlds, and Obsolete Systems.
    • -
    • The Orb - A British electronic music group that combines ambient music with elements of dub, house, techno, and psychedelia. Their albums include The Orb's Adventures Beyond the Ultraworld, U.F.Orb, and Orblivion.
    • -
    • Aphex Twin - A British electronic musician and producer who is known for his experimental and diverse styles of ambient music, IDM, techno, and acid. His albums include Selected Ambient Works 85–92, Selected Ambient Works Volume II, and Drukqs.
    • -
    • Eno • Hyde - A collaboration between Brian Eno and Karl Hyde of Underworld that blends ambient music with electronic beats and vocals. Their albums include Someday World, High Life, and Everything That Happens Will Happen Today.
    • -
    -

    Conclusion

    -

    Ambient Music Mod is a free app that lets you enjoy the Pixel's Now Playing feature on any Android device. It can automatically recognize songs playing in the background using an offline database and display them on your lock screen or in a history list. It can also recognize songs that are not in the local database using Google Assistant's recognition engine. You can customize the app's settings to suit your preference and musical taste.

    -

    Ambient Music Mod can also help you discover and appreciate ambient music, a genre of music that creates a mood or atmosphere rather than a melody or rhythm. Ambient music can have positive effects on your brain and mood, such as reducing stress and anxiety, improving focus and concentration, enhancing creativity and imagination, and promoting sleep and relaxation. You can also explore more about ambient music and artists online using Ambient Music Mod.

    -

    If you are a music lover and want to try Ambient Music Mod on your Android device, you can download it from the official XDA thread and follow the installation and setup steps. You will need a Shizuku or Sui Magisk module installed on your device, as well as Google Play Services and Google Assistant. You will also need an internet connection for downloading the music database and using the On Demand recognition feature.

    -

    FAQs

    -

    Here are some frequently asked questions about Ambient Music Mod:

    -
      -
    1. Q: Does Ambient Music Mod work offline?
      -A: Yes, Ambient Music Mod can work offline for songs that are in the local database. However, you will need an internet connection for downloading the music database and using the On Demand recognition feature.
    2. -
    3. Q: Does Ambient Music Mod drain battery?
      -A: No, Ambient Music Mod does not drain battery significantly. It uses a low-power mode to listen for music in the background and only wakes up when a song is recognized.
    4. -
    5. Q: Does Ambient Music Mod work with headphones?
      -A: Yes, Ambient Music Mod can work with headphones if you enable the option to use the microphone as the audio source in the app's settings.
    6. -
    7. Q: Does Ambient Music Mod work with Spotify?
      -A: Yes, Ambient Music Mod can work with Spotify if you enable the option to use the media session as the audio source in the app's settings.
    8. -
    9. Q: How can I update Ambient Music Mod?
      -A: You can update Ambient Music Mod by downloading the latest version of the APK file from the official XDA thread and installing it over the existing app. You can also check for updates from the app's settings.
    10. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Clash of Clans Hile Apk ndir - Android Oyun Clubta Snrsz Altn ve ksir.md b/spaces/congsaPfin/Manga-OCR/logs/Clash of Clans Hile Apk ndir - Android Oyun Clubta Snrsz Altn ve ksir.md deleted file mode 100644 index 153af3a51a709f6590ea04a84f5437da2b84d0ff..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Clash of Clans Hile Apk ndir - Android Oyun Clubta Snrsz Altn ve ksir.md +++ /dev/null @@ -1,106 +0,0 @@ -
    -

    Clash of Clans Hile Apk Indir Android Oyun Club: How to Download and Install the Modded Version of the Popular Strategy Game

    -

    Clash of Clans is one of the most popular strategy games on mobile devices, with millions of players around the world. But what if you want to enjoy the game without spending money or waiting for long hours? That's where Clash of Clans Hile Apk comes in. In this article, we will show you how to download and install the modded version of Clash of Clans from Android Oyun Club, a website that offers free and safe downloads of various Android games. We will also explain what Clash of Clans Hile Apk is, how it differs from the original version, and what are the advantages and disadvantages of using it. So, if you are ready to take your gaming experience to the next level, read on!

    -

    clash of clans hile apk indir android oyun club


    Download File ===> https://urlca.com/2uO8n6



    -

    What is Clash of Clans and why is it so popular?

    -

    Clash of Clans is a freemium strategy game developed by Supercell, a Finnish company that also created other hit games like Hay Day, Boom Beach, and Brawl Stars. The game was released in 2012 for iOS devices and in 2013 for Android devices. Since then, it has become one of the most downloaded and highest-grossing games on both platforms.

    -

    The gameplay and features of Clash of Clans

    -

    The game is set in a fantasy world where you have to build your own village, train your troops, and fight against other players or computer-generated enemies. You can join or create a clan with other players to cooperate in wars, donate and receive troops, chat, and compete in clan games. You can also participate in special events, seasons, challenges, and leagues to earn rewards and trophies.

    -

    The game offers a variety of buildings, troops, spells, heroes, and items that you can upgrade and customize according to your preference and strategy. You can also explore different maps, modes, and scenarios that add more fun and challenge to the game.

    -

    The benefits and drawbacks of playing Clash of Clans

    -

    One of the main benefits of playing Clash of Clans is that it is very addictive and entertaining. You can spend hours building your village, planning your attacks, defending your base, and interacting with other players. You can also enjoy the stunning graphics, sound effects, animations, and music that make the game more immersive and realistic.

    -

    Another benefit is that it is very social and community-oriented. You can make friends with other players from different countries and cultures, share tips and strategies, support each other in battles, and have fun together. You can also learn new skills like leadership, teamwork, communication, problem-solving, creativity, and decision-making.

    However, playing Clash of Clans also has some drawbacks. One of them is that it can be very frustrating and time-consuming. You have to wait for long periods of time to upgrade your buildings, train your troops, and replenish your resources. You also have to deal with losing your loot, trophies, and progress when you are attacked by other players or fail to complete a mission.

    -

Another drawback is that it can be very expensive and tempting. The game uses two kinds of currency: regular resources such as gold and elixir, which you earn by playing, and gems, the premium currency that you can buy with real money or get from certain achievements. Gems can be used to speed up waiting times, buy more resources, and unlock special items. However, gems are scarce and costly, and you may feel pressured to spend more money to get ahead in the game.

    -

    -

    What is Clash of Clans Hile Apk and how does it differ from the original version?

    -

    Clash of Clans Hile Apk is a modified version of Clash of Clans that allows you to enjoy the game without the limitations and restrictions of the original version. It is also known as Clash of Clans Mod Apk, Clash of Clans Hack Apk, or Clash of Clans Cheat Apk. It is not an official product of Supercell, but a third-party creation that is distributed by various websites and platforms.

    -

    The advantages and disadvantages of using Clash of Clans Hile Apk

    -

    The main advantage of using Clash of Clans Hile Apk is that it gives you unlimited access to all the features and resources of the game. You can get unlimited gold, elixir, gems, dark elixir, and other items without spending any money or waiting for any time. You can also unlock all the buildings, troops, spells, heroes, and items without completing any requirements or levels. You can also customize your village, troops, and heroes according to your liking and preference.

    -

    Another advantage is that it gives you more freedom and fun in playing the game. You can experiment with different strategies, tactics, and combinations without worrying about losing anything or being penalized. You can also explore different maps, modes, and scenarios that are not available in the original version. You can also play offline without needing an internet connection or a Google Play account.

    -

    However, using Clash of Clans Hile Apk also has some disadvantages. One of them is that it can be very risky and dangerous for your device and account. Since it is not an official product of Supercell, it may contain viruses, malware, spyware, or other harmful elements that can damage your device or steal your personal information. It may also cause your account to be banned or suspended by Supercell for violating their terms of service and policies.

    -

    Another disadvantage is that it can be very boring and unsatisfying in the long run. Since you have everything at your disposal, you may lose the sense of challenge, achievement, and progression that makes the game exciting and rewarding. You may also miss out on the social and community aspects of the game that make it more enjoyable and engaging. You may also face compatibility issues with updates, patches, or new features that are released by Supercell.

    How to download and install Clash of Clans Hile Apk from Android Oyun Club?

    -

    If you are interested in trying out Clash of Clans Hile Apk, you can download and install it from Android Oyun Club, a website that offers free and safe downloads of various Android games. However, you need to follow some steps and requirements to do so successfully and safely.

    -

    The steps and requirements for downloading and installing Clash of Clans Hile Apk

    -

    Here are the steps and requirements for downloading and installing Clash of Clans Hile Apk from Android Oyun Club:

    -
    1. Make sure that your device has enough storage space and battery life to download and install the file.
    2. Make sure that your device is compatible with the file. The file size is about 150 MB and the minimum Android version required is 4.1.
    3. Make sure that your device allows the installation of unknown sources. You can enable this option by going to Settings > Security > Unknown Sources and toggling it on.
    4. Go to the official website of Android Oyun Club at https://androidoyun.club/ and search for Clash of Clans Hile Apk in the search bar. Alternatively, you can use this direct link: https://androidoyun.club/2023/03/clash-of-clans-v13-675-6-mod-apk-para-hileli.html.
    5. Click on the download button and wait for the file to be downloaded to your device.
    6. Locate the file in your device's file manager and tap on it to start the installation process.
    7. Follow the instructions on the screen and wait for the installation to be completed.
    8. Launch the game from your app drawer or home screen and enjoy!
    -

    The tips and tricks for using Clash of Clans Hile Apk effectively

    -

    Here are some tips and tricks for using Clash of Clans Hile Apk effectively:

    -
    • Create a new account or use a secondary account to play the game. Do not use your main account or link it to your Google Play account, as this may result in a ban or suspension by Supercell.
    • Do not use the modded version to play online with other players or join clans, as this may cause errors, crashes, or bans. Use it only for offline or solo play.
    • Do not abuse the unlimited resources or items, as this may make the game too easy or boring. Use them moderately and wisely to enhance your gaming experience.
    • Do not update the game from the Google Play Store or any other source, as this may overwrite or delete the modded version. Update it only from Android Oyun Club or wait for a new modded version to be released.
    • Do not forget to backup your data and progress regularly, as the modded version may be unstable or unreliable. You can use a cloud service or an external storage device to do so.
    -

    Conclusion

    -

    In conclusion, Clash of Clans Hile Apk is a modified version of Clash of Clans that allows you to enjoy the game without the limitations and restrictions of the original version. You can download and install it from Android Oyun Club, a website that offers free and safe downloads of various Android games. However, you need to follow some steps and requirements to do so successfully and safely. You also need to be aware of the advantages and disadvantages of using it, as well as some tips and tricks for using it effectively. We hope that this article has helped you learn more about Clash of Clans Hile Apk and how to download and install it from Android Oyun Club. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!

    -

    FAQs

    -

    Q1: Is Clash of Clans Hile Apk legal and safe to use?

    -

    A1: Clash of Clans Hile Apk is not legal or endorsed by Supercell, the developer of Clash of Clans. It is a third-party creation that violates their terms of service and policies. Therefore, using it may result in a ban or suspension by Supercell. Moreover, Clash of Clans Hile Apk may not be safe to use, as it may contain viruses, malware, spyware, or other harmful elements that can damage your device or steal your personal information. Therefore, using it is at your own risk and discretion.

    -

    Q2: Can I play Clash of Clans Hile Apk with other players online?

    -

    A2: No, you cannot play Clash of Clans Hile Apk with other players online. The modded version is not compatible with the original version, and it may cause errors, crashes, or bans if you try to connect to the online servers or join clans. The modded version is only for offline or solo play.

    -

    Q3: How can I update Clash of Clans Hile Apk to the latest version?

    -

    A3: You cannot update Clash of Clans Hile Apk from the Google Play Store or any other source, as this may overwrite or delete the modded version. You can only update it from Android Oyun Club or wait for a new modded version to be released. To update it from Android Oyun Club, you need to follow the same steps and requirements as downloading and installing it. However, you may need to uninstall the previous version first before installing the new one.

    -

    Q4: What are some alternatives to Clash of Clans Hile Apk?

    -

    A4: If you are looking for some alternatives to Clash of Clans Hile Apk, you can try some other modded versions of Clash of Clans that are available on different websites and platforms. Some examples are Clash of Lights, Clash of Magic, Clash of Souls, and PlenixClash. However, you need to be careful and cautious when using these alternatives, as they may have the same or worse risks and drawbacks as Clash of Clans Hile Apk.

    -

    Q5: Where can I find more information and support for Clash of Clans Hile Apk?

    -

    A5: If you want to find more information and support for Clash of Clans Hile Apk, you can visit the official website of Android Oyun Club at https://androidoyun.club/ or their social media pages on Facebook, Twitter, Instagram, and YouTube. You can also contact them via email at info@androidoyun.club or via their contact form on their website. You can also check out some online forums, blogs, videos, or reviews that discuss or review Clash of Clans Hile Apk.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Gacha Life Chat Mod APK for Free and Unlock All the Content.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Gacha Life Chat Mod APK for Free and Unlock All the Content.md deleted file mode 100644 index 7fd47604585ac7f0e2950b48b0fbcc1b8e3f863b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Gacha Life Chat Mod APK for Free and Unlock All the Content.md +++ /dev/null @@ -1,101 +0,0 @@ -
    -

    Gacha Life Chat Mod APK: A Fun and Creative Way to Express Yourself

    -

    If you are a fan of anime and games, you might have heard of Gacha Life. It is a popular game that lets you create your own characters, stories, and scenes in an anime-style world. But did you know that there is a way to make the game even more fun and exciting? That's right, with Gacha Life Chat Mod APK, you can unlock all the features of the game and chat with other players online. In this article, we will tell you everything you need to know about this modded version of the game, including how to download and install it, what are the benefits and drawbacks of using it, and some tips and tricks to enjoy it.

    -

    What is Gacha Life?

    -

    A popular anime-style game for mobile devices

    -

    Gacha Life is a game developed by Lunime, a company that specializes in creating anime-themed games. The game was released in October 2018 for Android and iOS devices, and has since gained millions of fans around the world. The game is rated 4.4 out of 5 stars on Google Play Store and 4.6 out of 5 stars on App Store.

    -

    gacha life chat mod apk


    DOWNLOAD »»» https://urlca.com/2uOaDM



    -

    A platform for creating and sharing stories, characters, and scenes

    -

    The main feature of Gacha Life is that it allows you to create your own anime characters using a variety of options, such as hairstyles, outfits, accessories, weapons, and more. You can also customize their personality traits, such as their likes, dislikes, hobbies, and relationships. You can then use your characters to create stories and scenes using different backgrounds, props, poses, and dialogue. You can also share your creations with other players online or download them to your device.

    -

    What is Gacha Life Chat Mod APK?

    -

    A modified version of the original game that unlocks all features

    -

    Gacha Life Chat Mod APK is a modified version of the original game that gives you access to all the features that are otherwise locked or limited in the official version. For example, with this mod apk, you can get unlimited gems and coins, which are the in-game currencies that you need to buy items and upgrade your characters. You can also unlock all the items in the shop, such as clothes, accessories, pets, and more. You can also access all the modes in the game, such as Studio Mode, Life Mode, Gacha Mode, and Mini-Games.

    -

    A way to chat with other players and make new friends

    -

    Another feature that makes Gacha Life Chat Mod APK different from the original game is that it allows you to chat with other players online. You can join or create chat rooms where you can talk to other players who share your interests and hobbies. You can also send messages, stickers, emojis, and gifts to your friends. You can also use voice chat or video chat to communicate with them. You can also join or create clubs where you can meet other players who have similar tastes in anime and games.

    -

    How to download and install Gacha Life Chat Mod APK?

    -

    The steps to get the modded game on your device

    -

    If you want to download and install Gacha Life Chat Mod APK, you need to follow these steps:

    -
    1. Find a reliable source to download the mod apk file. You can use the link below to get the latest version of the mod apk from APKMODY, a trusted website that provides modded games and apps.
    2. Download and install APKMODY Installer from Google Play Store or from the link below. This is an app that helps you install apk files with ease.
    3. Open APKMODY Installer and select Install APKs. Navigate to the location of the downloaded mod apk file and select it.
    4. Select Install on the installation window. Wait for the installation to complete.
    5. Alternatively, you can use an emulator like MemuPlay on your PC to install the mod apk from Google Play Store. This is a safer option as it reduces the risk of malware and viruses.
    -

    The precautions to take before installing the mod apk

    -

    Before you install the mod apk, you should take some precautions to avoid any problems or issues. Here are some tips to follow:

    -
    • Make sure you have enough storage space on your device. The mod apk file is about 100 MB in size, so you need at least 200 MB of free space to install it.
    • Enable Unknown Sources on your device. This is a setting that allows you to install apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    • Disable antivirus or firewall apps on your device. These apps may interfere with the installation process or detect the mod apk as a threat. You can enable them again after the installation is done.
    • Backup your data and progress in the original game. The mod apk may overwrite or delete your data and progress in the original game, so it is advisable to backup your data before installing the mod apk. You can use cloud services or external storage devices to backup your data.

    What are the benefits of using Gacha Life Chat Mod APK?

    -

    Access to unlimited gems, coins, and items

    -

    One of the main benefits of using Gacha Life Chat Mod APK is that you can get unlimited gems and coins, which are the in-game currencies that you need to buy items and upgrade your characters. You can also get unlimited stamina, which is the energy that you need to play the game. With unlimited resources, you can buy anything you want in the shop, such as clothes, accessories, pets, and more. You can also use gems and coins to gacha for rare and exclusive items that are not available in the normal version of the game.

    -

    -

    Ability to customize your avatar and chat room

    -

    Another benefit of using Gacha Life Chat Mod APK is that you can customize your avatar and chat room to your liking. You can change your avatar's appearance, such as their hair, eyes, skin, clothes, and accessories. You can also change their personality traits, such as their likes, dislikes, hobbies, and relationships. You can also customize your chat room by choosing different themes, backgrounds, stickers, emojis, and gifts. You can also invite your friends to join your chat room or join other chat rooms that interest you.

    -

    Opportunity to explore different modes and mini-games

    -

    A third benefit of using Gacha Life Chat Mod APK is that you can explore different modes and mini-games that are not available in the original game. For example, you can play Studio Mode, where you can create your own stories and scenes using your characters and backgrounds. You can also play Life Mode, where you can interact with NPCs and other players in different locations. You can also play Gacha Mode, where you can gacha for items and characters using gems and coins. You can also play Mini-Games, where you can earn gems and coins by playing fun and challenging games.

    -

    What are the drawbacks of using Gacha Life Chat Mod APK?

    -

    Potential risks of malware, viruses, and bans

    -

    One of the main drawbacks of using Gacha Life Chat Mod APK is that it may pose some risks to your device and account. Since the mod apk is not an official version of the game, it may contain malware or viruses that can harm your device or steal your personal information. You should always download the mod apk from a reliable source and scan it with an antivirus app before installing it. You should also avoid clicking on suspicious links or ads that may redirect you to malicious websites or apps.

    -

    Another risk of using Gacha Life Chat Mod APK is that it may result in a ban from the game or chat service. Since the mod apk violates the terms of service of the game and chat service, it may be detected by their security systems and result in a ban from accessing their features or servers. You should always use the mod apk at your own risk and discretion. You should also avoid using the mod apk for illegal or unethical purposes, such as cheating, hacking, or harassing other players.

    -

    Possible compatibility issues and glitches

    -

    A second drawback of using Gacha Life Chat Mod APK is that it may cause some compatibility issues or glitches with your device or game. Since the mod apk is not an official version of the game, it may not be compatible with all devices or operating systems. It may also not be updated regularly or in sync with the original game. This may cause some errors or bugs in the game or chat service, such as crashes, freezes, lags, or missing features. You should always check the compatibility and requirements of the mod apk before downloading and installing it. You should also backup your data and progress in case something goes wrong.

    -

    Conclusion

    -

    Gacha Life Chat Mod APK is a modified version of the original game that unlocks all features and allows you to chat with other players online. It has many benefits, such as unlimited resources, customization options, and different modes and mini-games. However, it also has some drawbacks, such as potential risks of malware, viruses, and bans, as well as possible compatibility issues and glitches. You should always download and install the mod apk from a reliable source and take some precautions before using it. You should also use it responsibly and respectfully.

    -

    We hope this article has helped you learn more about Gacha Life Chat Mod APK and how to use it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    FAQs

    -
      -
    1. Is Gacha Life Chat Mod APK safe to use?

      Gacha Life Chat Mod APK is safe to use if you download it from a reliable source and scan it with an antivirus app before installing it. However, you should always be careful and cautious when using modded games and apps, as they may pose some risks to your device and account. You should also use the mod apk at your own risk and discretion, and avoid using it for illegal or unethical purposes.

      -
    2. How can I update Gacha Life Chat Mod APK?

      Gacha Life Chat Mod APK may not be updated regularly or in sync with the original game. Therefore, you may need to check the source website or app for any updates or new versions of the mod apk. You can also use APKMODY Installer to check for updates and install them easily. However, you should always backup your data and progress before updating the mod apk, as it may overwrite or delete your data and progress.

      -
    3. Can I use Gacha Life Chat Mod APK on PC?

      Yes, you can use Gacha Life Chat Mod APK on PC by using an emulator like MemuPlay. An emulator is a software that allows you to run Android apps and games on your PC. You can download and install MemuPlay from their official website and then install Gacha Life Chat Mod APK from Google Play Store using the emulator. This is a safer option as it reduces the risk of malware and viruses.

      -
    4. Can I use Gacha Life Chat Mod APK with other mods or hacks?

      No, you should not use Gacha Life Chat Mod APK with other mods or hacks, as they may cause conflicts or errors in the game or chat service. You should only use one mod or hack at a time, and uninstall any other mods or hacks before installing Gacha Life Chat Mod APK. You should also avoid using any cheats or tools that may alter the game data or chat data, as they may result in a ban from the game or chat service.

      -
    5. Can I play Gacha Life Chat Mod APK offline?

      No, you cannot play Gacha Life Chat Mod APK offline, as it requires an internet connection to access the chat service and other online features. You can only play the game offline if you use the original version of the game, which does not have the chat feature. However, you will not be able to access all the features and items that are available in the mod apk.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/eca.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/eca.py deleted file mode 100644 index e29be6ac3c95bb61229cdcdd659ec89d541f1a53..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/eca.py +++ /dev/null @@ -1,145 +0,0 @@ -""" -ECA module from ECAnet - -paper: ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks -https://arxiv.org/abs/1910.03151 - -Original ECA model borrowed from https://github.com/BangguWu/ECANet - -Modified circular ECA implementation and adaption for use in timm package -by Chris Ha https://github.com/VRandme - -Original License: - -MIT License - -Copyright (c) 2019 BangguWu, Qilong Wang - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. -""" -import math -from torch import nn -import torch.nn.functional as F - - -from .create_act import create_act_layer -from .helpers import make_divisible - - -class EcaModule(nn.Module): - """Constructs an ECA module. - - Args: - channels: Number of channels of the input feature map for use in adaptive kernel sizes - for actual calculations according to channel. - gamma, beta: when channel is given parameters of mapping function - refer to original paper https://arxiv.org/pdf/1910.03151.pdf - (default=None. if channel size not given, use k_size given for kernel size.) 
- kernel_size: Adaptive selection of kernel size (default=3) - gamm: used in kernel_size calc, see above - beta: used in kernel_size calc, see above - act_layer: optional non-linearity after conv, enables conv bias, this is an experiment - gate_layer: gating non-linearity to use - """ - def __init__( - self, channels=None, kernel_size=3, gamma=2, beta=1, act_layer=None, gate_layer='sigmoid', - rd_ratio=1/8, rd_channels=None, rd_divisor=8, use_mlp=False): - super(EcaModule, self).__init__() - if channels is not None: - t = int(abs(math.log(channels, 2) + beta) / gamma) - kernel_size = max(t if t % 2 else t + 1, 3) - assert kernel_size % 2 == 1 - padding = (kernel_size - 1) // 2 - if use_mlp: - # NOTE 'mlp' mode is a timm experiment, not in paper - assert channels is not None - if rd_channels is None: - rd_channels = make_divisible(channels * rd_ratio, divisor=rd_divisor) - act_layer = act_layer or nn.ReLU - self.conv = nn.Conv1d(1, rd_channels, kernel_size=1, padding=0, bias=True) - self.act = create_act_layer(act_layer) - self.conv2 = nn.Conv1d(rd_channels, 1, kernel_size=kernel_size, padding=padding, bias=True) - else: - self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size, padding=padding, bias=False) - self.act = None - self.conv2 = None - self.gate = create_act_layer(gate_layer) - - def forward(self, x): - y = x.mean((2, 3)).view(x.shape[0], 1, -1) # view for 1d conv - y = self.conv(y) - if self.conv2 is not None: - y = self.act(y) - y = self.conv2(y) - y = self.gate(y).view(x.shape[0], -1, 1, 1) - return x * y.expand_as(x) - - -EfficientChannelAttn = EcaModule # alias - - -class CecaModule(nn.Module): - """Constructs a circular ECA module. - - ECA module where the conv uses circular padding rather than zero padding. - Unlike the spatial dimension, the channels do not have inherent ordering nor - locality. Although this module in essence, applies such an assumption, it is unnecessary - to limit the channels on either "edge" from being circularly adapted to each other. - This will fundamentally increase connectivity and possibly increase performance metrics - (accuracy, robustness), without significantly impacting resource metrics - (parameter size, throughput,latency, etc) - - Args: - channels: Number of channels of the input feature map for use in adaptive kernel sizes - for actual calculations according to channel. - gamma, beta: when channel is given parameters of mapping function - refer to original paper https://arxiv.org/pdf/1910.03151.pdf - (default=None. if channel size not given, use k_size given for kernel size.) 
- kernel_size: Adaptive selection of kernel size (default=3) - gamm: used in kernel_size calc, see above - beta: used in kernel_size calc, see above - act_layer: optional non-linearity after conv, enables conv bias, this is an experiment - gate_layer: gating non-linearity to use - """ - - def __init__(self, channels=None, kernel_size=3, gamma=2, beta=1, act_layer=None, gate_layer='sigmoid'): - super(CecaModule, self).__init__() - if channels is not None: - t = int(abs(math.log(channels, 2) + beta) / gamma) - kernel_size = max(t if t % 2 else t + 1, 3) - has_act = act_layer is not None - assert kernel_size % 2 == 1 - - # PyTorch circular padding mode is buggy as of pytorch 1.4 - # see https://github.com/pytorch/pytorch/pull/17240 - # implement manual circular padding - self.padding = (kernel_size - 1) // 2 - self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size, padding=0, bias=has_act) - self.gate = create_act_layer(gate_layer) - - def forward(self, x): - y = x.mean((2, 3)).view(x.shape[0], 1, -1) - # Manually implement circular padding, F.pad does not seemed to be bugged - y = F.pad(y, (self.padding, self.padding), mode='circular') - y = self.conv(y) - y = self.gate(y).view(x.shape[0], -1, 1, 1) - return x * y.expand_as(x) - - -CircularEfficientChannelAttn = CecaModule diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/utils/weight_init.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/utils/weight_init.py deleted file mode 100644 index 287a1d0bffe26e023029d48634d9b761deda7ba4..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/utils/weight_init.py +++ /dev/null @@ -1,684 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -import warnings - -import numpy as np -import torch -import torch.nn as nn -from torch import Tensor - -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg, get_logger, print_log - -INITIALIZERS = Registry('initializer') - - -def update_init_info(module, init_info): - """Update the `_params_init_info` in the module if the value of parameters - are changed. - - Args: - module (obj:`nn.Module`): The module of PyTorch with a user-defined - attribute `_params_init_info` which records the initialization - information. - init_info (str): The string that describes the initialization. - """ - assert hasattr( - module, - '_params_init_info'), f'Can not find `_params_init_info` in {module}' - for name, param in module.named_parameters(): - - assert param in module._params_init_info, ( - f'Find a new :obj:`Parameter` ' - f'named `{name}` during executing the ' - f'`init_weights` of ' - f'`{module.__class__.__name__}`. ' - f'Please do not add or ' - f'replace parameters during executing ' - f'the `init_weights`. 
') - - # The parameter has been changed during executing the - # `init_weights` of module - mean_value = param.data.mean() - if module._params_init_info[param]['tmp_mean_value'] != mean_value: - module._params_init_info[param]['init_info'] = init_info - module._params_init_info[param]['tmp_mean_value'] = mean_value - - -def constant_init(module, val, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.constant_(module.weight, val) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def xavier_init(module, gain=1, bias=0, distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.xavier_uniform_(module.weight, gain=gain) - else: - nn.init.xavier_normal_(module.weight, gain=gain) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def normal_init(module, mean=0, std=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.normal_(module.weight, mean, std) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def trunc_normal_init(module: nn.Module, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - trunc_normal_(module.weight, mean, std, a, b) # type: ignore - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) # type: ignore - - -def uniform_init(module, a=0, b=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.uniform_(module.weight, a, b) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def kaiming_init(module, - a=0, - mode='fan_out', - nonlinearity='relu', - bias=0, - distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.kaiming_uniform_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - else: - nn.init.kaiming_normal_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def caffe2_xavier_init(module, bias=0): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - kaiming_init( - module, - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - bias=bias, - distribution='uniform') - - -def bias_init_with_prob(prior_prob): - """initialize conv/fc bias value according to a given probability value.""" - bias_init = float(-np.log((1 - prior_prob) / prior_prob)) - return bias_init - - -def _get_bases_name(m): - return [b.__name__ for b in m.__class__.__bases__] - - -class BaseInit(object): - - def __init__(self, *, bias=0, bias_prob=None, layer=None): - self.wholemodule = False - if not isinstance(bias, (int, float)): - raise TypeError(f'bias must be a number, but got a {type(bias)}') - - if bias_prob is not None: - if not isinstance(bias_prob, float): - raise TypeError(f'bias_prob type must be float, \ - but got {type(bias_prob)}') - - if layer is not None: - if not isinstance(layer, (str, list)): - raise TypeError(f'layer must be a str or a list of str, \ - but got a {type(layer)}') - else: - layer = [] - - if bias_prob is not None: - self.bias 
= bias_init_with_prob(bias_prob) - else: - self.bias = bias - self.layer = [layer] if isinstance(layer, str) else layer - - def _get_init_info(self): - info = f'{self.__class__.__name__}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Constant') -class ConstantInit(BaseInit): - """Initialize module parameters with constant values. - - Args: - val (int | float): the value to fill the weights in the module with - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, val, **kwargs): - super().__init__(**kwargs) - self.val = val - - def __call__(self, module): - - def init(m): - if self.wholemodule: - constant_init(m, self.val, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - constant_init(m, self.val, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: val={self.val}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Xavier') -class XavierInit(BaseInit): - r"""Initialize module parameters with values according to the method - described in `Understanding the difficulty of training deep feedforward - neural networks - Glorot, X. & Bengio, Y. (2010). - `_ - - Args: - gain (int | float): an optional scaling factor. Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` - or ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, gain=1, distribution='normal', **kwargs): - super().__init__(**kwargs) - self.gain = gain - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - xavier_init(m, self.gain, self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - xavier_init(m, self.gain, self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: gain={self.gain}, ' \ - f'distribution={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Normal') -class NormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`. - - Args: - mean (int | float):the mean of the normal distribution. Defaults to 0. - std (int | float): the standard deviation of the normal distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- - """ - - def __init__(self, mean=0, std=1, **kwargs): - super().__init__(**kwargs) - self.mean = mean - self.std = std - - def __call__(self, module): - - def init(m): - if self.wholemodule: - normal_init(m, self.mean, self.std, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - normal_init(m, self.mean, self.std, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: mean={self.mean},' \ - f' std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='TruncNormal') -class TruncNormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` with values - outside :math:`[a, b]`. - - Args: - mean (float): the mean of the normal distribution. Defaults to 0. - std (float): the standard deviation of the normal distribution. - Defaults to 1. - a (float): The minimum cutoff value. - b ( float): The maximum cutoff value. - bias (float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - - """ - - def __init__(self, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - **kwargs) -> None: - super().__init__(**kwargs) - self.mean = mean - self.std = std - self.a = a - self.b = b - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, b={self.b},' \ - f' mean={self.mean}, std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Uniform') -class UniformInit(BaseInit): - r"""Initialize module parameters with values drawn from the uniform - distribution :math:`\mathcal{U}(a, b)`. - - Args: - a (int | float): the lower bound of the uniform distribution. - Defaults to 0. - b (int | float): the upper bound of the uniform distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- """ - - def __init__(self, a=0, b=1, **kwargs): - super().__init__(**kwargs) - self.a = a - self.b = b - - def __call__(self, module): - - def init(m): - if self.wholemodule: - uniform_init(m, self.a, self.b, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - uniform_init(m, self.a, self.b, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a},' \ - f' b={self.b}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Kaiming') -class KaimingInit(BaseInit): - r"""Initialize module parameters with the values according to the method - described in `Delving deep into rectifiers: Surpassing human-level - performance on ImageNet classification - He, K. et al. (2015). - `_ - - Args: - a (int | float): the negative slope of the rectifier used after this - layer (only used with ``'leaky_relu'``). Defaults to 0. - mode (str): either ``'fan_in'`` or ``'fan_out'``. Choosing - ``'fan_in'`` preserves the magnitude of the variance of the weights - in the forward pass. Choosing ``'fan_out'`` preserves the - magnitudes in the backwards pass. Defaults to ``'fan_out'``. - nonlinearity (str): the non-linear function (`nn.functional` name), - recommended to use only with ``'relu'`` or ``'leaky_relu'`` . - Defaults to 'relu'. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` or - ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, - a=0, - mode='fan_out', - nonlinearity='relu', - distribution='normal', - **kwargs): - super().__init__(**kwargs) - self.a = a - self.mode = mode - self.nonlinearity = nonlinearity - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, mode={self.mode}, ' \ - f'nonlinearity={self.nonlinearity}, ' \ - f'distribution ={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Caffe2Xavier') -class Caffe2XavierInit(KaimingInit): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - def __init__(self, **kwargs): - super().__init__( - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - distribution='uniform', - **kwargs) - - def __call__(self, module): - super().__call__(module) - - -@INITIALIZERS.register_module(name='Pretrained') -class PretrainedInit(object): - """Initialize module by loading a pretrained model. - - Args: - checkpoint (str): the checkpoint file of the pretrained model should - be load. - prefix (str, optional): the prefix of a sub-module in the pretrained - model. 
it is for loading a part of the pretrained model to - initialize. For example, if we would like to only load the - backbone of a detector model, we can set ``prefix='backbone.'``. - Defaults to None. - map_location (str): map tensors into proper locations. - """ - - def __init__(self, checkpoint, prefix=None, map_location=None): - self.checkpoint = checkpoint - self.prefix = prefix - self.map_location = map_location - - def __call__(self, module): - from annotator.uniformer.mmcv.runner import (_load_checkpoint_with_prefix, load_checkpoint, - load_state_dict) - logger = get_logger('mmcv') - if self.prefix is None: - print_log(f'load model from: {self.checkpoint}', logger=logger) - load_checkpoint( - module, - self.checkpoint, - map_location=self.map_location, - strict=False, - logger=logger) - else: - print_log( - f'load {self.prefix} in model from: {self.checkpoint}', - logger=logger) - state_dict = _load_checkpoint_with_prefix( - self.prefix, self.checkpoint, map_location=self.map_location) - load_state_dict(module, state_dict, strict=False, logger=logger) - - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: load from {self.checkpoint}' - return info - - -def _initialize(module, cfg, wholemodule=False): - func = build_from_cfg(cfg, INITIALIZERS) - # wholemodule flag is for override mode, there is no layer key in override - # and initializer will give init values for the whole module with the name - # in override. - func.wholemodule = wholemodule - func(module) - - -def _initialize_override(module, override, cfg): - if not isinstance(override, (dict, list)): - raise TypeError(f'override must be a dict or a list of dict, \ - but got {type(override)}') - - override = [override] if isinstance(override, dict) else override - - for override_ in override: - - cp_override = copy.deepcopy(override_) - name = cp_override.pop('name', None) - if name is None: - raise ValueError('`override` must contain the key "name",' - f'but got {cp_override}') - # if override only has name key, it means use args in init_cfg - if not cp_override: - cp_override.update(cfg) - # if override has name key and other args except type key, it will - # raise error - elif 'type' not in cp_override.keys(): - raise ValueError( - f'`override` need "type" key, but got {cp_override}') - - if hasattr(module, name): - _initialize(getattr(module, name), cp_override, wholemodule=True) - else: - raise RuntimeError(f'module did not have attribute {name}, ' - f'but init_cfg is {cp_override}.') - - -def initialize(module, init_cfg): - """Initialize a module. - - Args: - module (``torch.nn.Module``): the module will be initialized. - init_cfg (dict | list[dict]): initialization configuration dict to - define initializer. OpenMMLab has implemented 6 initializers - including ``Constant``, ``Xavier``, ``Normal``, ``Uniform``, - ``Kaiming``, and ``Pretrained``. 
- Example: - >>> module = nn.Linear(2, 3, bias=True) - >>> init_cfg = dict(type='Constant', layer='Linear', val =1 , bias =2) - >>> initialize(module, init_cfg) - - >>> module = nn.Sequential(nn.Conv1d(3, 1, 3), nn.Linear(1,2)) - >>> # define key ``'layer'`` for initializing layer with different - >>> # configuration - >>> init_cfg = [dict(type='Constant', layer='Conv1d', val=1), - dict(type='Constant', layer='Linear', val=2)] - >>> initialize(module, init_cfg) - - >>> # define key``'override'`` to initialize some specific part in - >>> # module - >>> class FooNet(nn.Module): - >>> def __init__(self): - >>> super().__init__() - >>> self.feat = nn.Conv2d(3, 16, 3) - >>> self.reg = nn.Conv2d(16, 10, 3) - >>> self.cls = nn.Conv2d(16, 5, 3) - >>> model = FooNet() - >>> init_cfg = dict(type='Constant', val=1, bias=2, layer='Conv2d', - >>> override=dict(type='Constant', name='reg', val=3, bias=4)) - >>> initialize(model, init_cfg) - - >>> model = ResNet(depth=50) - >>> # Initialize weights with the pretrained model. - >>> init_cfg = dict(type='Pretrained', - checkpoint='torchvision://resnet50') - >>> initialize(model, init_cfg) - - >>> # Initialize weights of a sub-module with the specific part of - >>> # a pretrained model by using "prefix". - >>> url = 'http://download.openmmlab.com/mmdetection/v2.0/retinanet/'\ - >>> 'retinanet_r50_fpn_1x_coco/'\ - >>> 'retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth' - >>> init_cfg = dict(type='Pretrained', - checkpoint=url, prefix='backbone.') - """ - if not isinstance(init_cfg, (dict, list)): - raise TypeError(f'init_cfg must be a dict or a list of dict, \ - but got {type(init_cfg)}') - - if isinstance(init_cfg, dict): - init_cfg = [init_cfg] - - for cfg in init_cfg: - # should deeply copy the original config because cfg may be used by - # other modules, e.g., one init_cfg shared by multiple bottleneck - # blocks, the expected cfg will be changed after pop and will change - # the initialization behavior of other modules - cp_cfg = copy.deepcopy(cfg) - override = cp_cfg.pop('override', None) - _initialize(module, cp_cfg) - - if override is not None: - cp_cfg.pop('layer', None) - _initialize_override(module, override, cp_cfg) - else: - # All attributes in module have same initialization. - pass - - -def _no_grad_trunc_normal_(tensor: Tensor, mean: float, std: float, a: float, - b: float) -> Tensor: - # Method based on - # https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - # Modified from - # https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - lower = norm_cdf((a - mean) / std) - upper = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [lower, upper], then translate - # to [2lower-1, 2upper-1]. 
- tensor.uniform_(2 * lower - 1, 2 * upper - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor: Tensor, - mean: float = 0., - std: float = 1., - a: float = -2., - b: float = 2.) -> Tensor: - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - - Modified from - https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - - Args: - tensor (``torch.Tensor``): an n-dimensional `torch.Tensor`. - mean (float): the mean of the normal distribution. - std (float): the standard deviation of the normal distribution. - a (float): the minimum cutoff value. - b (float): the maximum cutoff value. - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) diff --git a/spaces/crashedice/signify/signify/gan/data/aligned_dataset.py b/spaces/crashedice/signify/signify/gan/data/aligned_dataset.py deleted file mode 100644 index 23dfe49275fd0f959a029af3b232248a0bdb6026..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/signify/gan/data/aligned_dataset.py +++ /dev/null @@ -1,60 +0,0 @@ -import os -from signify.gan.data.base_dataset import BaseDataset, get_params, get_transform -from signify.gan.data.image_folder import make_dataset -from PIL import Image - - -class AlignedDataset(BaseDataset): - """A dataset class for paired image dataset. - - It assumes that the directory '/path/to/data/train' contains image pairs in the form of {A,B}. - During test time, you need to prepare a directory '/path/to/data/test'. - """ - - def __init__(self, opt): - """Initialize this dataset class. - - Parameters: - opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - BaseDataset.__init__(self, opt) - self.dir_AB = os.path.join(opt.dataroot, opt.phase) # get the image directory - self.AB_paths = sorted(make_dataset(self.dir_AB, opt.max_dataset_size)) # get image paths - assert(self.opt.load_size >= self.opt.crop_size) # crop_size should be smaller than the size of loaded image - self.input_nc = self.opt.output_nc if self.opt.direction == 'BtoA' else self.opt.input_nc - self.output_nc = self.opt.input_nc if self.opt.direction == 'BtoA' else self.opt.output_nc - - def __getitem__(self, index): - """Return a data point and its metadata information. 
- - Parameters: - index - - a random integer for data indexing - - Returns a dictionary that contains A, B, A_paths and B_paths - A (tensor) - - an image in the input domain - B (tensor) - - its corresponding image in the target domain - A_paths (str) - - image paths - B_paths (str) - - image paths (same as A_paths) - """ - # read a image given a random integer index - AB_path = self.AB_paths[index] - AB = Image.open(AB_path).convert('RGB') - # split AB image into A and B - w, h = AB.size - w2 = int(w / 2) - A = AB.crop((0, 0, w2, h)) - B = AB.crop((w2, 0, w, h)) - - # apply the same transform to both A and B - transform_params = get_params(self.opt, A.size) - A_transform = get_transform(self.opt, transform_params, grayscale=(self.input_nc == 1)) - B_transform = get_transform(self.opt, transform_params, grayscale=(self.output_nc == 1)) - - A = A_transform(A) - B = B_transform(B) - - return {'A': A, 'B': B, 'A_paths': AB_path, 'B_paths': AB_path} - - def __len__(self): - """Return the total number of images in the dataset.""" - return len(self.AB_paths) diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/__init__.py b/spaces/cymic/Talking_Head_Anime_3/tha3/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cymic/VITS-Tokaiteio/text/__init__.py b/spaces/cymic/VITS-Tokaiteio/text/__init__.py deleted file mode 100644 index 4ac41f9025755d8ffd74068af14c6cfc8e5a4173..0000000000000000000000000000000000000000 --- a/spaces/cymic/VITS-Tokaiteio/text/__init__.py +++ /dev/null @@ -1,54 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/danielsteinigen/NLP-Legal-Texts/app.py b/spaces/danielsteinigen/NLP-Legal-Texts/app.py deleted file mode 100644 index 75a8bbc872c22d9c5793cd2127cee2650ae65b56..0000000000000000000000000000000000000000 --- a/spaces/danielsteinigen/NLP-Legal-Texts/app.py +++ /dev/null @@ -1,212 +0,0 @@ -import os -import streamlit as st -from spacy import displacy -from PIL import Image -import json -import requests -from pyvis.network import Network -import streamlit.components.v1 as components - -from util.process_data import Entity, EntityType, Relation, Sample, SampleList -from util.tokenizer import Tokenizer -from model_inference import TransformersInference -from util.configuration import InferenceConfiguration - -inference_config = InferenceConfiguration() -tokenizer = Tokenizer(inference_config.spacy_model) - -SAMPLE_66 = "EStG § 66 Höhe des Kindergeldes, Zahlungszeitraum (1) Das Kindergeld beträgt monatlich für das erste und zweite Kind jeweils 219 Euro, für das dritte Kind 225 Euro und für das vierte und jedes weitere Kind jeweils 250 Euro." -SAMPLE_9 = "EStG § 9 Werbungskosten ... Zur Abgeltung dieser Aufwendungen ist für jeden Arbeitstag, an dem der Arbeitnehmer die erste Tätigkeitsstätte aufsucht eine Entfernungspauschale für jeden vollen Kilometer der Entfernung zwischen Wohnung und erster Tätigkeitsstätte von 0,30 Euro anzusetzen, höchstens jedoch 4 500 Euro im Kalenderjahr; ein höherer Betrag als 4 500 Euro ist anzusetzen, soweit der Arbeitnehmer einen eigenen oder ihm zur Nutzung überlassenen Kraftwagen benutzt." 
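# ---------------------------------------------------------------------------
# Editor's sketch (not part of the original app.py or of this diff): the
# Sample / SampleList / Entity / Relation models imported above come from
# util/process_data.py, which is not included here. Judging only from how
# app.py uses them (Sample(idx=..., text=..., entities=[], relations=[]),
# data.dict()["samples"], entity["start"] / ["end"] / ["ent_type"]["label"],
# relation["head"] / ["tail"] / ["rel_type"]["label"]), they are presumably
# pydantic models shaped roughly as follows. Any field not actually
# referenced in this file is a guess; the real definitions take precedence.

from typing import List
from pydantic import BaseModel


class EntityType(BaseModel):
    label: str  # e.g. "StatedKeyFigure", "Condition"


class Entity(BaseModel):
    id: int              # node id used when building the relation graph
    start: int           # character offset where the span starts
    end: int             # character offset where the span ends
    text: str            # surface text of the span
    ent_type: EntityType


class Relation(BaseModel):
    head: int             # id of the source entity
    tail: int             # id of the target entity
    rel_type: EntityType  # or a similar type that also exposes a `label`


class Sample(BaseModel):
    idx: int
    text: str
    entities: List[Entity]
    relations: List[Relation]


class SampleList(BaseModel):
    samples: List[Sample]
# ---------------------------------------------------------------------------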
- - -############################################################ -## Constants -############################################################ -max_width_str = f"max-width: 60%;" -paragraph = None -style = "" -graph_options = ''' -var options = { - "edges": { - "arrows": { - "to": { - "enabled": true, - "scaleFactor": 1.2 - } - } - } -} -''' - -legend_content = { - "text": "StatedKeyFigure StatedExpression Unit Range Factor Condition DeclarativeKeyFigure DeclarativeExpression", - "ents": [ - {"start": 0, "end": 15, "label": "K"}, - {"start": 16, "end": 32, "label": "E"}, - {"start": 33, "end": 37, "label": "U"}, - {"start": 38, "end": 43, "label": "R"}, - {"start": 44, "end": 50, "label": "F"}, - {"start": 51, "end": 60, "label": "C"}, - {"start": 61, "end": 81, "label": "DK"}, - {"start": 82, "end": 103, "label": "DE"}, - ]} -legend_options = { - "ents": ["K","U","E","R","F","C","DK","DE"], - "colors": {'K': '#46d000',"U": "#e861ef", "E": "#538cff", "R": "#ffbe00", "F": "#0fd5dc", "C":"#ff484b", "DK":"#46d000", "DE":"#538cff"} - } -legend_mapping = {"StatedKeyFigure": "K","Unit": "U","StatedExpression": "E","Range": "R","Factor": "F","Condition": "C","DeclarativeKeyFigure": "DK","DeclarativeExpression": "DE"} -edge_colors = {'hasKeyFigure': '#46d000',"hasUnit": "#e861ef", "hasExpression": "#538cff", "hasRange": "#ffbe00", "hasFactor": "#0fd5dc", "hasCondition":"#ff484b", "join":"#aaa", "Typ":"#aaa", "hasParagraph": "#FF8B15"} - - -############################################################ -## Function definitions -############################################################ - -def get_html(html: str, legend=False): - """Convert HTML so it can be rendered.""" - WRAPPER = """
      {}
      """ - if legend: WRAPPER = """
      {}
      """ - # Newlines seem to mess with the rendering - html = html.replace("\n", " ") - return WRAPPER.format(html) - - -def get_displacy_ent_obj(paragraph, bedingungen=False, send_request=False): - entities = [] - for entity in paragraph['entities']: - label = entity["entity"] if not send_request else entity["ent_type"]["label"] - if (bedingungen and label == "Condition") or (not bedingungen and label != "Condition") : - entities.append({ - 'start': entity['start'], - 'end': entity["end"], - 'label': legend_mapping[label] - }) - return [{'text': paragraph['text'], 'ents': entities}] - - -def request_extractor(text_data): - try: - data = SampleList( - samples=[ - Sample( - idx=0, - text=str(text_data), - entities=[], - relations=[] - ) - ] - ) - tokenizer.run(data) - - model_inference = TransformersInference(inference_config) - model_inference.run_inference(data) - return data.dict()["samples"][0] - except Exception as e: - result = e - return {"text":"error","entities":[], "relations":[]} - - -def generate_graph(nodes, edges, send_request=False): - net = Network(height="450px", width="100%")#, bgcolor="#222222", font_color="white", select_menu=True, filter_menu=True) - for node in nodes: - if "id" in node: - label = node["entity"] if not send_request else node["ent_type"]["label"] - node_color = legend_options["colors"][legend_mapping[label]] - node_label = node["text"] if len(node["text"]) < 30 else (node["text"][:27]+" ...") - if label in ["Kennzahl", "Kennzahlumschreibung"]: - net.add_node(node["id"], label=node_label, title=node["text"], mass=2, shape="ellipse", color=node_color, physics=False) - else: - net.add_node(node["id"], label=node_label, title=node["text"], mass=1, shape="ellipse", color=node_color) - for edge in edges: - label = edge["relation"] if not send_request else edge["rel_type"]["label"] - net.add_edge(edge["head"], edge["tail"], width=1, title=label, arrowStrikethrough=False, color=edge_colors[label]) - # net.force_atlas_2based() # barnes_hut() force_atlas_2based() hrepulsion() repulsion() - net.toggle_physics(True) - net.set_edge_smooth("dynamic") # dynamic, continuous, discrete, diagonalCross, straightCross, horizontal, vertical, curvedCW, curvedCCW, cubicBezier - net.set_options(graph_options) - html_graph = net.generate_html() - return html_graph - -############################################################ -## Page configuration -############################################################ -st.set_page_config( - page_title="NLP Gesetzestexte", - menu_items={ - 'Get Help': None, - 'Report a bug': None, - 'About': "## Demonstrator NLP" - } - # layout="wide") - ) - -st.markdown( - f""" - - """, - unsafe_allow_html=True, - ) - -# radio button formatting in line -st.write('', unsafe_allow_html=True) - -############################################################ -## Page formating -############################################################ -col3, col4 = st.columns([2,2]) -st.write('\n') -st.write('\n') - - -with col3: - st.subheader("Extraction of Key Figures from Tax Legal Texts") - st.write("Demonstrator Application for the Paper 'Semantic Extraction of Key Figures and Their Properties From Tax Legal Texts using Neural Models' presented at the sixth Workshop on Automated Semantic Analysis of Information in Legal Text (ASAIL 2023).") - st.write("The paper can be found here: [Paper](https://ceur-ws.org/Vol-3441/paper7.pdf)") - st.write('\n') - st.write('This demonstrator processes German tax laws as input and outputs the extracted key figures with their 
properties and relations, based on the presented semantic model.') -with col4: - st.caption("Semantic Model") - image = Image.open('util/ontology.png') - st.image(image, width=700) - - -text_option = st.radio("Select Example", ["Insert your paragraph", "EStG § 66 Kindergeld", "EStG § 9 Werbungskosten"]) -st.write('\n') -if text_option == "EStG § 66 Kindergeld": - text_area_input = st.text_area("Given paragraph", SAMPLE_66, height=200) -elif text_option == "EStG § 9 Werbungskosten": - text_area_input = st.text_area("Given paragraph", SAMPLE_9, height=200) -else: - text_area_input = st.text_area("Given paragraph", "", height=200) - -if st.button("Start Extraction") and text_area_input != "": - with st.spinner('Executing Extraction ...'): - paragraph = request_extractor(text_area_input) - if paragraph["text"] == "error": - st.error("Error while executing extraction.") - else: - legend = displacy.render([legend_content], style="ent", options=legend_options, manual=True) - st.write(f"{style}{get_html(legend, True)}", unsafe_allow_html=True) - - st.caption("Entities:") - extracted_data = get_displacy_ent_obj(paragraph, False, True) - html = displacy.render(extracted_data, style="ent", options=legend_options, manual=True) - st.write(f"{style}{get_html(html)}", unsafe_allow_html=True) - - st.write('\n') - st.caption("Conditions:") - extracted_data = get_displacy_ent_obj(paragraph, True, True) - html = displacy.render(extracted_data, style="ent", options=legend_options, manual=True) - st.write(f"{style}{get_html(html)}", unsafe_allow_html=True) - - st.write('\n') - st.caption("\n\nRelations:") - html_graph_req = generate_graph(paragraph["entities"], paragraph["relations"], send_request=True) - components.html(html_graph_req, height=500) - st.write('\n') - with st.expander("Show JSON"): - st.json(paragraph) diff --git a/spaces/davidpiscasio/unpaired-img2img/options/test_options.py b/spaces/davidpiscasio/unpaired-img2img/options/test_options.py deleted file mode 100644 index 3f42c01eaca1a41d5981d33fadc828ae54de4fdb..0000000000000000000000000000000000000000 --- a/spaces/davidpiscasio/unpaired-img2img/options/test_options.py +++ /dev/null @@ -1,23 +0,0 @@ -from .base_options import BaseOptions - - -class TestOptions(BaseOptions): - """This class includes test options. - - It also includes shared options defined in BaseOptions. - """ - - def initialize(self, parser): - parser = BaseOptions.initialize(self, parser) # define shared options - parser.add_argument('--results_dir', type=str, default='./results/', help='saves results here.') - parser.add_argument('--aspect_ratio', type=float, default=1.0, help='aspect ratio of result images') - parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc') - # Dropout and Batchnorm has different behavioir during training and test. 
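    # In eval mode PyTorch disables dropout and makes BatchNorm use the running
    # statistics accumulated during training instead of per-batch statistics, so
    # outputs are deterministic for a fixed input.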
- parser.add_argument('--eval', action='store_true', help='use eval mode during test time.') - parser.add_argument('--num_test', type=int, default=50, help='how many test images to run') - # rewrite devalue values - parser.set_defaults(model='test') - # To avoid cropping, the load_size should be the same as crop_size - parser.set_defaults(load_size=parser.get_default('crop_size')) - self.isTrain = False - return parser diff --git a/spaces/davidpiscasio/unpaired-img2img/util/html.py b/spaces/davidpiscasio/unpaired-img2img/util/html.py deleted file mode 100644 index cc3262a1eafda34842e4dbad47bb6ba72f0c5a68..0000000000000000000000000000000000000000 --- a/spaces/davidpiscasio/unpaired-img2img/util/html.py +++ /dev/null @@ -1,86 +0,0 @@ -import dominate -from dominate.tags import meta, h3, table, tr, td, p, a, img, br -import os - - -class HTML: - """This HTML class allows us to save images and write texts into a single HTML file. - - It consists of functions such as (add a text header to the HTML file), - (add a row of images to the HTML file), and (save the HTML to the disk). - It is based on Python library 'dominate', a Python library for creating and manipulating HTML documents using a DOM API. - """ - - def __init__(self, web_dir, title, refresh=0): - """Initialize the HTML classes - - Parameters: - web_dir (str) -- a directory that stores the webpage. HTML file will be created at /index.html; images will be saved at 0: - with self.doc.head: - meta(http_equiv="refresh", content=str(refresh)) - - def get_image_dir(self): - """Return the directory that stores images""" - return self.img_dir - - def add_header(self, text): - """Insert a header to the HTML file - - Parameters: - text (str) -- the header text - """ - with self.doc: - h3(text) - - def add_images(self, ims, txts, links, width=400): - """add images to the HTML file - - Parameters: - ims (str list) -- a list of image paths - txts (str list) -- a list of image names shown on the website - links (str list) -- a list of hyperref links; when you click an image, it will redirect you to a new page - """ - self.t = table(border=1, style="table-layout: fixed;") # Insert a table - self.doc.add(self.t) - with self.t: - with tr(): - for im, txt, link in zip(ims, txts, links): - with td(style="word-wrap: break-word;", halign="center", valign="top"): - with p(): - with a(href=os.path.join('images', link)): - img(style="width:%dpx" % width, src=os.path.join('images', im)) - br() - p(txt) - - def save(self): - """save the current content to the HMTL file""" - html_file = '%s/index.html' % self.web_dir - f = open(html_file, 'wt') - f.write(self.doc.render()) - f.close() - - -if __name__ == '__main__': # we show an example usage here. 
- html = HTML('web/', 'test_html') - html.add_header('hello world') - - ims, txts, links = [], [], [] - for n in range(4): - ims.append('image_%d.png' % n) - txts.append('text_%d' % n) - links.append('image_%d.png' % n) - html.add_images(ims, txts, links) - html.save() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/testTools.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/testTools.py deleted file mode 100644 index be6116132d93a6a5f692f5b8465be346aad7ca5c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/testTools.py +++ /dev/null @@ -1,229 +0,0 @@ -"""Helpers for writing unit tests.""" - -from collections.abc import Iterable -from io import BytesIO -import os -import re -import shutil -import sys -import tempfile -from unittest import TestCase as _TestCase -from fontTools.config import Config -from fontTools.misc.textTools import tobytes -from fontTools.misc.xmlWriter import XMLWriter - - -def parseXML(xmlSnippet): - """Parses a snippet of XML. - - Input can be either a single string (unicode or UTF-8 bytes), or a - a sequence of strings. - - The result is in the same format that would be returned by - XMLReader, but the parser imposes no constraints on the root - element so it can be called on small snippets of TTX files. - """ - # To support snippets with multiple elements, we add a fake root. - reader = TestXMLReader_() - xml = b"" - if isinstance(xmlSnippet, bytes): - xml += xmlSnippet - elif isinstance(xmlSnippet, str): - xml += tobytes(xmlSnippet, "utf-8") - elif isinstance(xmlSnippet, Iterable): - xml += b"".join(tobytes(s, "utf-8") for s in xmlSnippet) - else: - raise TypeError( - "expected string or sequence of strings; found %r" - % type(xmlSnippet).__name__ - ) - xml += b"" - reader.parser.Parse(xml, 0) - return reader.root[2] - - -def parseXmlInto(font, parseInto, xmlSnippet): - parsed_xml = [e for e in parseXML(xmlSnippet.strip()) if not isinstance(e, str)] - for name, attrs, content in parsed_xml: - parseInto.fromXML(name, attrs, content, font) - parseInto.populateDefaults() - return parseInto - - -class FakeFont: - def __init__(self, glyphs): - self.glyphOrder_ = glyphs - self.reverseGlyphOrderDict_ = {g: i for i, g in enumerate(glyphs)} - self.lazy = False - self.tables = {} - self.cfg = Config() - - def __getitem__(self, tag): - return self.tables[tag] - - def __setitem__(self, tag, table): - self.tables[tag] = table - - def get(self, tag, default=None): - return self.tables.get(tag, default) - - def getGlyphID(self, name): - return self.reverseGlyphOrderDict_[name] - - def getGlyphIDMany(self, lst): - return [self.getGlyphID(gid) for gid in lst] - - def getGlyphName(self, glyphID): - if glyphID < len(self.glyphOrder_): - return self.glyphOrder_[glyphID] - else: - return "glyph%.5d" % glyphID - - def getGlyphNameMany(self, lst): - return [self.getGlyphName(gid) for gid in lst] - - def getGlyphOrder(self): - return self.glyphOrder_ - - def getReverseGlyphMap(self): - return self.reverseGlyphOrderDict_ - - def getGlyphNames(self): - return sorted(self.getGlyphOrder()) - - -class TestXMLReader_(object): - def __init__(self): - from xml.parsers.expat import ParserCreate - - self.parser = ParserCreate() - self.parser.StartElementHandler = self.startElement_ - self.parser.EndElementHandler = self.endElement_ - self.parser.CharacterDataHandler = self.addCharacterData_ - self.root = None - 
self.stack = [] - - def startElement_(self, name, attrs): - element = (name, attrs, []) - if self.stack: - self.stack[-1][2].append(element) - else: - self.root = element - self.stack.append(element) - - def endElement_(self, name): - self.stack.pop() - - def addCharacterData_(self, data): - self.stack[-1][2].append(data) - - -def makeXMLWriter(newlinestr="\n"): - # don't write OS-specific new lines - writer = XMLWriter(BytesIO(), newlinestr=newlinestr) - # erase XML declaration - writer.file.seek(0) - writer.file.truncate() - return writer - - -def getXML(func, ttFont=None): - """Call the passed toXML function and return the written content as a - list of lines (unicode strings). - Result is stripped of XML declaration and OS-specific newline characters. - """ - writer = makeXMLWriter() - func(writer, ttFont) - xml = writer.file.getvalue().decode("utf-8") - # toXML methods must always end with a writer.newline() - assert xml.endswith("\n") - return xml.splitlines() - - -def stripVariableItemsFromTTX( - string: str, - ttLibVersion: bool = True, - checkSumAdjustment: bool = True, - modified: bool = True, - created: bool = True, - sfntVersion: bool = False, # opt-in only -) -> str: - """Strip stuff like ttLibVersion, checksums, timestamps, etc. from TTX dumps.""" - # ttlib changes with the fontTools version - if ttLibVersion: - string = re.sub(' ttLibVersion="[^"]+"', "", string) - # sometimes (e.g. some subsetter tests) we don't care whether it's OTF or TTF - if sfntVersion: - string = re.sub(' sfntVersion="[^"]+"', "", string) - # head table checksum and creation and mod date changes with each save. - if checkSumAdjustment: - string = re.sub('', "", string) - if modified: - string = re.sub('', "", string) - if created: - string = re.sub('', "", string) - return string - - -class MockFont(object): - """A font-like object that automatically adds any looked up glyphname - to its glyphOrder.""" - - def __init__(self): - self._glyphOrder = [".notdef"] - - class AllocatingDict(dict): - def __missing__(reverseDict, key): - self._glyphOrder.append(key) - gid = len(reverseDict) - reverseDict[key] = gid - return gid - - self._reverseGlyphOrder = AllocatingDict({".notdef": 0}) - self.lazy = False - - def getGlyphID(self, glyph): - gid = self._reverseGlyphOrder[glyph] - return gid - - def getReverseGlyphMap(self): - return self._reverseGlyphOrder - - def getGlyphName(self, gid): - return self._glyphOrder[gid] - - def getGlyphOrder(self): - return self._glyphOrder - - -class TestCase(_TestCase): - def __init__(self, methodName): - _TestCase.__init__(self, methodName) - # Python 3 renamed assertRaisesRegexp to assertRaisesRegex, - # and fires deprecation warnings if a program uses the old name. 
- if not hasattr(self, "assertRaisesRegex"): - self.assertRaisesRegex = self.assertRaisesRegexp - - -class DataFilesHandler(TestCase): - def setUp(self): - self.tempdir = None - self.num_tempfiles = 0 - - def tearDown(self): - if self.tempdir: - shutil.rmtree(self.tempdir) - - def getpath(self, testfile): - folder = os.path.dirname(sys.modules[self.__module__].__file__) - return os.path.join(folder, "data", testfile) - - def temp_dir(self): - if not self.tempdir: - self.tempdir = tempfile.mkdtemp() - - def temp_font(self, font_path, file_name): - self.temp_dir() - temppath = os.path.join(self.tempdir, file_name) - shutil.copy2(font_path, temppath) - return temppath diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/frozenlist/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/frozenlist/__init__.py deleted file mode 100644 index 152356588d3e619bddb7e2ecd76b147a4e55a96c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/frozenlist/__init__.py +++ /dev/null @@ -1,95 +0,0 @@ -import os -import sys -import types -from collections.abc import MutableSequence -from functools import total_ordering -from typing import Type - -__version__ = "1.4.0" - -__all__ = ("FrozenList", "PyFrozenList") # type: Tuple[str, ...] - - -NO_EXTENSIONS = bool(os.environ.get("FROZENLIST_NO_EXTENSIONS")) # type: bool - - -@total_ordering -class FrozenList(MutableSequence): - __slots__ = ("_frozen", "_items") - - if sys.version_info >= (3, 9): - __class_getitem__ = classmethod(types.GenericAlias) - else: - - @classmethod - def __class_getitem__(cls: Type["FrozenList"]) -> Type["FrozenList"]: - return cls - - def __init__(self, items=None): - self._frozen = False - if items is not None: - items = list(items) - else: - items = [] - self._items = items - - @property - def frozen(self): - return self._frozen - - def freeze(self): - self._frozen = True - - def __getitem__(self, index): - return self._items[index] - - def __setitem__(self, index, value): - if self._frozen: - raise RuntimeError("Cannot modify frozen list.") - self._items[index] = value - - def __delitem__(self, index): - if self._frozen: - raise RuntimeError("Cannot modify frozen list.") - del self._items[index] - - def __len__(self): - return self._items.__len__() - - def __iter__(self): - return self._items.__iter__() - - def __reversed__(self): - return self._items.__reversed__() - - def __eq__(self, other): - return list(self) == other - - def __le__(self, other): - return list(self) <= other - - def insert(self, pos, item): - if self._frozen: - raise RuntimeError("Cannot modify frozen list.") - self._items.insert(pos, item) - - def __repr__(self): - return f"" - - def __hash__(self): - if self._frozen: - return hash(tuple(self)) - else: - raise RuntimeError("Cannot hash unfrozen list.") - - -PyFrozenList = FrozenList - - -try: - from ._frozenlist import FrozenList as CFrozenList # type: ignore - - if not NO_EXTENSIONS: # pragma: no cover - FrozenList = CFrozenList # type: ignore -except ImportError: # pragma: no cover - pass diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py deleted file mode 100644 index dbcaff1fcf1b1cbb404b3e7367b037942f4e9d03..0000000000000000000000000000000000000000 --- 
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py +++ /dev/null @@ -1,356 +0,0 @@ -import ssl -import sys -from types import TracebackType -from typing import Iterable, Iterator, Iterable, List, Optional, Type - -from .._backends.sync import SyncBackend -from .._backends.base import SOCKET_OPTION, NetworkBackend -from .._exceptions import ConnectionNotAvailable, UnsupportedProtocol -from .._models import Origin, Request, Response -from .._synchronization import Event, Lock, ShieldCancellation -from .connection import HTTPConnection -from .interfaces import ConnectionInterface, RequestInterface - - -class RequestStatus: - def __init__(self, request: Request): - self.request = request - self.connection: Optional[ConnectionInterface] = None - self._connection_acquired = Event() - - def set_connection(self, connection: ConnectionInterface) -> None: - assert self.connection is None - self.connection = connection - self._connection_acquired.set() - - def unset_connection(self) -> None: - assert self.connection is not None - self.connection = None - self._connection_acquired = Event() - - def wait_for_connection( - self, timeout: Optional[float] = None - ) -> ConnectionInterface: - if self.connection is None: - self._connection_acquired.wait(timeout=timeout) - assert self.connection is not None - return self.connection - - -class ConnectionPool(RequestInterface): - """ - A connection pool for making HTTP requests. - """ - - def __init__( - self, - ssl_context: Optional[ssl.SSLContext] = None, - max_connections: Optional[int] = 10, - max_keepalive_connections: Optional[int] = None, - keepalive_expiry: Optional[float] = None, - http1: bool = True, - http2: bool = False, - retries: int = 0, - local_address: Optional[str] = None, - uds: Optional[str] = None, - network_backend: Optional[NetworkBackend] = None, - socket_options: Optional[Iterable[SOCKET_OPTION]] = None, - ) -> None: - """ - A connection pool for making HTTP requests. - - Parameters: - ssl_context: An SSL context to use for verifying connections. - If not specified, the default `httpcore.default_ssl_context()` - will be used. - max_connections: The maximum number of concurrent HTTP connections that - the pool should allow. Any attempt to send a request on a pool that - would exceed this amount will block until a connection is available. - max_keepalive_connections: The maximum number of idle HTTP connections - that will be maintained in the pool. - keepalive_expiry: The duration in seconds that an idle HTTP connection - may be maintained for before being expired from the pool. - http1: A boolean indicating if HTTP/1.1 requests should be supported - by the connection pool. Defaults to True. - http2: A boolean indicating if HTTP/2 requests should be supported by - the connection pool. Defaults to False. - retries: The maximum number of retries when trying to establish a - connection. - local_address: Local address to connect from. Can also be used to connect - using a particular address family. Using `local_address="0.0.0.0"` - will connect using an `AF_INET` address (IPv4), while using - `local_address="::"` will connect using an `AF_INET6` address (IPv6). - uds: Path to a Unix Domain Socket to use instead of TCP sockets. - network_backend: A backend instance to use for handling network I/O. - socket_options: Socket options that have to be included - in the TCP socket when the connection was established. 
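        Example (editor's sketch, not in the original docstring): a pool is
        normally used as a context manager, and requests go through the
        `request()` helper inherited from `RequestInterface` (the same
        `.request()`/`.stream()` pair referenced in `handle_request` below):

        ```python
        with ConnectionPool(max_connections=10) as pool:
            response = pool.request("GET", "https://www.example.com/")
            print(response.status)
        ```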
- """ - self._ssl_context = ssl_context - - self._max_connections = ( - sys.maxsize if max_connections is None else max_connections - ) - self._max_keepalive_connections = ( - sys.maxsize - if max_keepalive_connections is None - else max_keepalive_connections - ) - self._max_keepalive_connections = min( - self._max_connections, self._max_keepalive_connections - ) - - self._keepalive_expiry = keepalive_expiry - self._http1 = http1 - self._http2 = http2 - self._retries = retries - self._local_address = local_address - self._uds = uds - - self._pool: List[ConnectionInterface] = [] - self._requests: List[RequestStatus] = [] - self._pool_lock = Lock() - self._network_backend = ( - SyncBackend() if network_backend is None else network_backend - ) - self._socket_options = socket_options - - def create_connection(self, origin: Origin) -> ConnectionInterface: - return HTTPConnection( - origin=origin, - ssl_context=self._ssl_context, - keepalive_expiry=self._keepalive_expiry, - http1=self._http1, - http2=self._http2, - retries=self._retries, - local_address=self._local_address, - uds=self._uds, - network_backend=self._network_backend, - socket_options=self._socket_options, - ) - - @property - def connections(self) -> List[ConnectionInterface]: - """ - Return a list of the connections currently in the pool. - - For example: - - ```python - >>> pool.connections - [ - , - , - , - ] - ``` - """ - return list(self._pool) - - def _attempt_to_acquire_connection(self, status: RequestStatus) -> bool: - """ - Attempt to provide a connection that can handle the given origin. - """ - origin = status.request.url.origin - - # If there are queued requests in front of us, then don't acquire a - # connection. We handle requests strictly in order. - waiting = [s for s in self._requests if s.connection is None] - if waiting and waiting[0] is not status: - return False - - # Reuse an existing connection if one is currently available. - for idx, connection in enumerate(self._pool): - if connection.can_handle_request(origin) and connection.is_available(): - self._pool.pop(idx) - self._pool.insert(0, connection) - status.set_connection(connection) - return True - - # If the pool is currently full, attempt to close one idle connection. - if len(self._pool) >= self._max_connections: - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.is_idle(): - connection.close() - self._pool.pop(idx) - break - - # If the pool is still full, then we cannot acquire a connection. - if len(self._pool) >= self._max_connections: - return False - - # Otherwise create a new connection. - connection = self.create_connection(origin) - self._pool.insert(0, connection) - status.set_connection(connection) - return True - - def _close_expired_connections(self) -> None: - """ - Clean up the connection pool by closing off any connections that have expired. - """ - # Close any connections that have expired their keep-alive time. - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.has_expired(): - connection.close() - self._pool.pop(idx) - - # If the pool size exceeds the maximum number of allowed keep-alive connections, - # then close off idle connections as required. 
- pool_size = len(self._pool) - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.is_idle() and pool_size > self._max_keepalive_connections: - connection.close() - self._pool.pop(idx) - pool_size -= 1 - - def handle_request(self, request: Request) -> Response: - """ - Send an HTTP request, and return an HTTP response. - - This is the core implementation that is called into by `.request()` or `.stream()`. - """ - scheme = request.url.scheme.decode() - if scheme == "": - raise UnsupportedProtocol( - "Request URL is missing an 'http://' or 'https://' protocol." - ) - if scheme not in ("http", "https", "ws", "wss"): - raise UnsupportedProtocol( - f"Request URL has an unsupported protocol '{scheme}://'." - ) - - status = RequestStatus(request) - - with self._pool_lock: - self._requests.append(status) - self._close_expired_connections() - self._attempt_to_acquire_connection(status) - - while True: - timeouts = request.extensions.get("timeout", {}) - timeout = timeouts.get("pool", None) - try: - connection = status.wait_for_connection(timeout=timeout) - except BaseException as exc: - # If we timeout here, or if the task is cancelled, then make - # sure to remove the request from the queue before bubbling - # up the exception. - with self._pool_lock: - # Ensure only remove when task exists. - if status in self._requests: - self._requests.remove(status) - raise exc - - try: - response = connection.handle_request(request) - except ConnectionNotAvailable: - # The ConnectionNotAvailable exception is a special case, that - # indicates we need to retry the request on a new connection. - # - # The most common case where this can occur is when multiple - # requests are queued waiting for a single connection, which - # might end up as an HTTP/2 connection, but which actually ends - # up as HTTP/1.1. - with self._pool_lock: - # Maintain our position in the request queue, but reset the - # status so that the request becomes queued again. - status.unset_connection() - self._attempt_to_acquire_connection(status) - except BaseException as exc: - with ShieldCancellation(): - self.response_closed(status) - raise exc - else: - break - - # When we return the response, we wrap the stream in a special class - # that handles notifying the connection pool once the response - # has been released. - assert isinstance(response.stream, Iterable) - return Response( - status=response.status, - headers=response.headers, - content=ConnectionPoolByteStream(response.stream, self, status), - extensions=response.extensions, - ) - - def response_closed(self, status: RequestStatus) -> None: - """ - This method acts as a callback once the request/response cycle is complete. - - It is called into from the `ConnectionPoolByteStream.close()` method. - """ - assert status.connection is not None - connection = status.connection - - with self._pool_lock: - # Update the state of the connection pool. - if status in self._requests: - self._requests.remove(status) - - if connection.is_closed() and connection in self._pool: - self._pool.remove(connection) - - # Since we've had a response closed, it's possible we'll now be able - # to service one or more requests that are currently pending. - for status in self._requests: - if status.connection is None: - acquired = self._attempt_to_acquire_connection(status) - # If we could not acquire a connection for a queued request - # then we don't need to check anymore requests that are - # queued later behind it. - if not acquired: - break - - # Housekeeping. 
- self._close_expired_connections() - - def close(self) -> None: - """ - Close any connections in the pool. - """ - with self._pool_lock: - for connection in self._pool: - connection.close() - self._pool = [] - self._requests = [] - - def __enter__(self) -> "ConnectionPool": - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - traceback: Optional[TracebackType] = None, - ) -> None: - self.close() - - -class ConnectionPoolByteStream: - """ - A wrapper around the response byte stream, that additionally handles - notifying the connection pool when the response has been closed. - """ - - def __init__( - self, - stream: Iterable[bytes], - pool: ConnectionPool, - status: RequestStatus, - ) -> None: - self._stream = stream - self._pool = pool - self._status = status - - def __iter__(self) -> Iterator[bytes]: - for part in self._stream: - yield part - - def close(self) -> None: - try: - if hasattr(self._stream, "close"): - self._stream.close() - finally: - with ShieldCancellation(): - self._pool.response_closed(self._status) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/validator_creation.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/validator_creation.py deleted file mode 100644 index 4baeb3a31641a027496732a6f10e200346551209..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/validator_creation.py +++ /dev/null @@ -1,14 +0,0 @@ -from pyperf import Runner - -from jsonschema import Draft202012Validator - -schema = { - "type": "array", - "minLength": 1, - "maxLength": 1, - "items": {"type": "integer"}, -} - - -if __name__ == "__main__": - Runner().bench_func("validator creation", Draft202012Validator, schema) diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_unclip/test_stable_unclip.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_unclip/test_stable_unclip.py deleted file mode 100644 index 368ab21f24a91df7ff17ae8bf69a1acdfa949fab..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_unclip/test_stable_unclip.py +++ /dev/null @@ -1,229 +0,0 @@ -import gc -import unittest - -import torch -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - DDPMScheduler, - PriorTransformer, - StableUnCLIPPipeline, - UNet2DConditionModel, -) -from diffusers.pipelines.stable_diffusion.stable_unclip_image_normalizer import StableUnCLIPImageNormalizer -from diffusers.utils.testing_utils import load_numpy, require_torch_gpu, slow, torch_device - -from ...pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS -from ...test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference - - -class StableUnCLIPPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = StableUnCLIPPipeline - params = TEXT_TO_IMAGE_PARAMS - batch_params = TEXT_TO_IMAGE_BATCH_PARAMS - - # TODO(will) Expected attn_bias.stride(1) == 0 to be true, but got false - test_xformers_attention = False - - def get_dummy_components(self): - embedder_hidden_size = 32 - embedder_projection_dim = embedder_hidden_size - - # prior components - - torch.manual_seed(0) - prior_tokenizer = 
CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - torch.manual_seed(0) - prior_text_encoder = CLIPTextModelWithProjection( - CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=embedder_hidden_size, - projection_dim=embedder_projection_dim, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - ) - - torch.manual_seed(0) - prior = PriorTransformer( - num_attention_heads=2, - attention_head_dim=12, - embedding_dim=embedder_projection_dim, - num_layers=1, - ) - - torch.manual_seed(0) - prior_scheduler = DDPMScheduler( - variance_type="fixed_small_log", - prediction_type="sample", - num_train_timesteps=1000, - clip_sample=True, - clip_sample_range=5.0, - beta_schedule="squaredcos_cap_v2", - ) - - # regular denoising components - - torch.manual_seed(0) - image_normalizer = StableUnCLIPImageNormalizer(embedding_dim=embedder_hidden_size) - image_noising_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2") - - torch.manual_seed(0) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - torch.manual_seed(0) - text_encoder = CLIPTextModel( - CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=embedder_hidden_size, - projection_dim=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - ) - - torch.manual_seed(0) - unet = UNet2DConditionModel( - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"), - up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"), - block_out_channels=(32, 64), - attention_head_dim=(2, 4), - class_embed_type="projection", - # The class embeddings are the noise augmented image embeddings. - # I.e. the image embeddings concated with the noised embeddings of the same dimension - projection_class_embeddings_input_dim=embedder_projection_dim * 2, - cross_attention_dim=embedder_hidden_size, - layers_per_block=1, - upcast_attention=True, - use_linear_projection=True, - ) - - torch.manual_seed(0) - scheduler = DDIMScheduler( - beta_schedule="scaled_linear", - beta_start=0.00085, - beta_end=0.012, - prediction_type="v_prediction", - set_alpha_to_one=False, - steps_offset=1, - ) - - torch.manual_seed(0) - vae = AutoencoderKL() - - components = { - # prior components - "prior_tokenizer": prior_tokenizer, - "prior_text_encoder": prior_text_encoder, - "prior": prior, - "prior_scheduler": prior_scheduler, - # image noising components - "image_normalizer": image_normalizer, - "image_noising_scheduler": image_noising_scheduler, - # regular denoising components - "tokenizer": tokenizer, - "text_encoder": text_encoder, - "unet": unet, - "scheduler": scheduler, - "vae": vae, - } - - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "generator": generator, - "num_inference_steps": 2, - "prior_num_inference_steps": 2, - "output_type": "numpy", - } - return inputs - - # Overriding PipelineTesterMixin::test_attention_slicing_forward_pass - # because UnCLIP GPU undeterminism requires a looser check. 
- def test_attention_slicing_forward_pass(self): - test_max_difference = torch_device == "cpu" - - self._test_attention_slicing_forward_pass(test_max_difference=test_max_difference) - - # Overriding PipelineTesterMixin::test_inference_batch_single_identical - # because UnCLIP undeterminism requires a looser check. - def test_inference_batch_single_identical(self): - test_max_difference = torch_device in ["cpu", "mps"] - - self._test_inference_batch_single_identical(test_max_difference=test_max_difference) - - -@slow -@require_torch_gpu -class StableUnCLIPPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_stable_unclip(self): - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/stable_unclip_2_1_l_anime_turtle_fp16.npy" - ) - - pipe = StableUnCLIPPipeline.from_pretrained("fusing/stable-unclip-2-1-l", torch_dtype=torch.float16) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - # stable unclip will oom when integration tests are run on a V100, - # so turn on memory savings - pipe.enable_attention_slicing() - pipe.enable_sequential_cpu_offload() - - generator = torch.Generator(device="cpu").manual_seed(0) - output = pipe("anime turle", generator=generator, output_type="np") - - image = output.images[0] - - assert image.shape == (768, 768, 3) - - assert_mean_pixel_difference(image, expected_image) - - def test_stable_unclip_pipeline_with_sequential_cpu_offloading(self): - torch.cuda.empty_cache() - torch.cuda.reset_max_memory_allocated() - torch.cuda.reset_peak_memory_stats() - - pipe = StableUnCLIPPipeline.from_pretrained("fusing/stable-unclip-2-1-l", torch_dtype=torch.float16) - pipe = pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - pipe.enable_sequential_cpu_offload() - - _ = pipe( - "anime turtle", - prior_num_inference_steps=2, - num_inference_steps=2, - output_type="np", - ) - - mem_bytes = torch.cuda.max_memory_allocated() - # make sure that less than 7 GB is allocated - assert mem_bytes < 7 * 10**9 diff --git a/spaces/deepkyu/multilingual-font-style-transfer/docs/ml-font-style-transfer.md b/spaces/deepkyu/multilingual-font-style-transfer/docs/ml-font-style-transfer.md deleted file mode 100644 index 55ed1804ccb262a081583e4e0806b55dbc0d60de..0000000000000000000000000000000000000000 --- a/spaces/deepkyu/multilingual-font-style-transfer/docs/ml-font-style-transfer.md +++ /dev/null @@ -1,11 +0,0 @@ -

      Multilingual Font Style Transfer

- -This is a personal proof-of-concept demo, so it does not guarantee the quality of the output. -I hope that someday there will be an established model for a better multilingual society. - -I only used personal RTX 30 series GPU(s) for training the model. The model is heavily inspired by a model from a previous study, [FTransGAN](https://github.com/ligoudaner377/font_translator_gan) (Li et al.). - -- Composition-free font style transfer across 13 different languages -- Trained with [Google Fonts](https://github.com/google/fonts) (ofl fonts and Nota Sans) - -
      example_train
      diff --git a/spaces/delmaksym/Huggy/index.html b/spaces/delmaksym/Huggy/index.html deleted file mode 100644 index 65c99948b644b1dc5d3945994c1b968d16e4e046..0000000000000000000000000000000000000000 --- a/spaces/delmaksym/Huggy/index.html +++ /dev/null @@ -1,133 +0,0 @@ - - - - - - - Huggy - - - - - - - - - - - -
      - -
      - - - - - - - - diff --git a/spaces/dfassaf/newbingChatAI/README.md b/spaces/dfassaf/newbingChatAI/README.md deleted file mode 100644 index 48408cc7782267ef33f7be2f48e780e2f90be111..0000000000000000000000000000000000000000 --- a/spaces/dfassaf/newbingChatAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: NewbingChatAI -emoji: ⚡ -colorFrom: blue -colorTo: blue -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dhanushreddy29/microstructure-project/app.py b/spaces/dhanushreddy29/microstructure-project/app.py deleted file mode 100644 index 1060c51926be06ed0957787dbfa9883227f15d61..0000000000000000000000000000000000000000 --- a/spaces/dhanushreddy29/microstructure-project/app.py +++ /dev/null @@ -1,115 +0,0 @@ -import cv2 -from fastai.vision.all import * -import numpy as np -import gradio as gr -from scipy import ndimage - -fnames = get_image_files("./albumentations/original") - - -def label_func(fn): - return "./albumentations/labelled/" f"{fn.stem}.png" - - -codes = np.loadtxt("labels.txt", dtype=str) -w, h = 768, 1152 -img_size = (w, h) -im_size = (h, w) - -dls = SegmentationDataLoaders.from_label_func( - ".", - bs=3, - fnames=fnames, - label_func=label_func, - codes=codes, - item_tfms=Resize(img_size), -) - -learn = unet_learner(dls, resnet34) -learn.load("learn") - - -def segmentImage(img_path): - img = cv2.imread(img_path, 0) - for i in range(img.shape[0]): - for j in range(img.shape[1]): - if img[i][j] > 0: - img[i][j] = 1 - kernel = np.ones((3, 3), np.uint8) - # img = cv2.erode(img, kernel, iterations=1) - # img = cv2.dilate(img, kernel, iterations=1) - img = ndimage.binary_fill_holes(img).astype(int) - labels, nlabels = ndimage.label(img) - - # Get grain sizes - sizes = ndimage.sum(img, labels, range(nlabels + 1)) - scale_factor = 3072 / 1152 - c = 0.4228320313 - # Divide sizes by pixel_to_micrometer to get the sizes in micrometers and store them in a list new_sizes - new_sizes = [size * scale_factor * scale_factor * c * c for size in sizes] - # Round the grain sizes to 2 decimal places - new_sizes = [round(size, 2) for size in new_sizes] - # Print the grain sizes - print("Sorted Areas = ", sorted(list(new_sizes))) - print("Length = ", len(new_sizes)) - gradient_img = np.zeros((img.shape[0], img.shape[1], 3), np.uint8) - colors = [] - for i in range(len(new_sizes)): - if new_sizes[i] < 250 * c * c: - colors.append((255, 255, 255)) - elif new_sizes[i] < 7500 * c * c: - colors.append((2, 106, 248)) - elif new_sizes[i] < 20000 * c * c: - colors.append((0, 255, 107)) - elif new_sizes[i] < 45000 * c * c: - colors.append((255, 201, 60)) - else: - colors.append((255, 0, 0)) - for i in range(img.shape[0]): - for j in range(img.shape[1]): - if labels[i][j] != 0: - gradient_img[i][j] = colors[labels[i][j]] - Sum = 0 - count = 0 - for i in range(len(new_sizes)): - if new_sizes[i] > 250 * c * c: - Sum += new_sizes[i] - count += 1 - colors = np.random.randint(0, 255, (nlabels + 1, 3)) - colors[0] = 0 - img_color = colors[labels] - return ( - img_color, - gradient_img, - "Average Area of grains: " + str(Sum / count) + " µm^2", - ) - - -def predict_segmentation(img): - gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - resized_img = cv2.resize(gray_img, im_size) - pred = learn.predict(resized_img) - scaled_pred = (pred[0].numpy() * 255).astype(np.uint8) - output_image = PILImage.create(scaled_pred) - # Save the image to a temporary file - temp_file = 
"temp.png" - output_image.save(temp_file) - # Call the segmentImage function - segmented_image, gradient_image, avg_area = segmentImage(temp_file) - return output_image, segmented_image, gradient_image, avg_area - - -input_image = gr.inputs.Image() -output_image1 = gr.outputs.Image(type="pil") -output_image2 = gr.outputs.Image(type="pil") -output_image3 = gr.outputs.Image(type="pil") -output_image4 = gr.outputs.Textbox() -app = gr.Interface( - fn=predict_segmentation, - inputs=input_image, - outputs=[output_image1, output_image2, output_image3, output_image4], - title="Microstructure Segmentation", - description="Segment the input image into grain and background.", - examples=["examples/inp1.png", "examples/inp2.png"] -) -app.launch() diff --git a/spaces/diacanFperku/AutoGPT/3DMark 2003 Serial Serial Key [CRACKED] Keygen.md b/spaces/diacanFperku/AutoGPT/3DMark 2003 Serial Serial Key [CRACKED] Keygen.md deleted file mode 100644 index 1a296255fb7c7f2742db340ec31f44be0647f338..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/3DMark 2003 Serial Serial Key [CRACKED] Keygen.md +++ /dev/null @@ -1,7 +0,0 @@ -
-
3DMark keygen free download is an efficient tool for computer benchmarking. It helps you to determine the performance of your computer's graphics card and CPU workload capabilities. Therefore, this application is very useful for system builders, gamers, and overclockers. In addition, it provides you with complete details of your hardware. What's more, this application comes with the ability to perform a wide range of benchmark tests. This latest version comes with everything you need to test your PC, notebook, smartphone, and tablet.

-
Furthermore, a command-line tool has been provided for more advanced purposes, and the script will be able to set up an automation system to perform various tests. It is possible to export test results in XML format from 3DMark cracked. 3DMark crack leverages hardware compatibility with the GPU to conduct a series of tests on texture drawing speed and quality. 3DMark keygen features multiple processor criteria, a self-contained rating scale, exportable results, and a straightforward graphical interface that enables batch testing and parameter checking.

-
      3DMark 2003 Serial Serial Key keygen


      Download ——— https://gohhs.com/2uFU96



      -

      moreover, a command-line tool has been provided for more advanced purposes, and the script will be able to set up an automation system to perform various tests. it is possible to export test results in xml format from 3dmark cracked. 3dmark crack leverages hardware compatibility with the gpu to conduct a series of tests on texture drawing speed and quality. 3dmark keygen features multiple processor criteria, a self-contained rating scale, and export results. thetheward graphical interface that enables batch testing and parameter checking.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Download Sap2000 V9 Full Crack __LINK__.md b/spaces/diacanFperku/AutoGPT/Download Sap2000 V9 Full Crack __LINK__.md deleted file mode 100644 index 6301341b867d2c6271737899dee9833de79e71f6..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download Sap2000 V9 Full Crack __LINK__.md +++ /dev/null @@ -1,8 +0,0 @@ - -

You are at liberty to put together the model of your desires when using SAP2000. It enables you to block the beams with a variety of choices. It simulates several effects that are available for analysis. You can then choose different effects such as thermal, seismic, wind pressure, and corrosion for all the designs. It simulates all types of structural and joint designs with reduced standards and completely remote algorithms. It consists of a huge database of standard designs with customized modeling options. Also Available:

      -

      8. NO PUBLICATION. You shall not publish, transmit, display, perform, reproduce, create derivative works from, modify, or in any way exploit any Content or Materials that have been made available to or downloaded by you, or any portion thereof, in whole or in part, except as necessary or appropriate for your personal, non-commercial use. You shall remove or disable the copyright notice from any Materials that you use in violation of these Terms of Use.

      -

      download sap2000 v9 full crack


      DOWNLOAD » https://gohhs.com/2uFSZM



      -

      11. SUPPORT. Company will use commercially reasonable efforts to promptly respond to all access requests from users, and to provide access to technical support, including both full time and on-call services in addition to email and telephone support. In the event of any dispute with respect to user’s access requests or with respect to the provision of support services to you, you will bear all costs associated with any or all of the foregoing, and Company will bear any related fees associated with the denial of such access or the provision of support.

      -

      Optionally, SAP2000 Ultimate can be extended by adding additional design products such as roof plates, cement concrete, walls, frames, floors, and beams. By matching the appropriate structural analysis with geometric shape, the user can fully and accurately design the space or structure.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Free Download Fumefx For Maya.md b/spaces/diacanFperku/AutoGPT/Free Download Fumefx For Maya.md deleted file mode 100644 index 69ff36c37095d881fd6f8164163f19c2b5bcab65..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Free Download Fumefx For Maya.md +++ /dev/null @@ -1,42 +0,0 @@ -

      free download fumefx for maya


      DOWNLOAD ··· https://gohhs.com/2uFVla



      -
      -avi effects. - -23 Oct 2013 - -FumeFX version 2.0 - -19 Oct 2013 - -FumeFX 2.0 is released with new features and enhancements. - -29 Jan 2012 - -FumeFX 2.0.1 is released to address some issues. - -1 Nov 2010 - -FumeFX 2.0 has been released with new features and enhancements. - -30 Apr 2010 - -13 Aug 2008 - -10 May 2008 - -FumeFX 1.x series is now officially retired and development will end. The last 1.x versions will remain on sale for the foreseeable future. However, we can guarantee that more and more future enhancements and bug fixes will be added to FumeFX 2.0 and up as well as any improvements made by the Maya and 3ds Max user communities. We are very much looking forward to the next release of FumeFX and highly recommend it for everyone who is a fan of fluid effects in their 3d animation, visual effects, or game production. - -You are welcome to subscribe to the FumeFX page on our website and receive an email notification when a new version of FumeFX becomes available. - -About FumeFX - -FumeFX is a powerful fluid dynamics plugin for Autodesk Maya and 3ds Max, designed for simulation and rendering of realistic explosions, fire, smoke and .avi effects. - -FumeFX now comes with a built-in fluid simulator that allows easy simulation of any convection process, from a tiny liquid in a vessel, to an exploding gas in a large-scale reactor and any other possible scale. FumeFX allows users to use its simulations and visual results for real-time rendering or animations, simulation of smoke, air, fire, fluid smoke, liquid smoke, smoke from fire and combustion and much more. - -FumeFX’s fluid dynamics simulation is also suitable for other applications than fluid effects – it can simulate any sort of diffusion process. The results can be easily visualized in 3D and in real-time, just as if you were using real fluids. You can further create your own visualizations by combining the fluid simulation with other Maya plugins, such as SolidSmoke, Maya Lumion or InviSynth. - -Note: FumeFX has been discontinued and 4fefd39f24
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Iso 10628 Pdf Free Download [VERIFIED]l.md b/spaces/diacanFperku/AutoGPT/Iso 10628 Pdf Free Download [VERIFIED]l.md deleted file mode 100644 index 694fbdcd1f085fe7c082e0fe5589d490f37f8978..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Iso 10628 Pdf Free Download [VERIFIED]l.md +++ /dev/null @@ -1,83 +0,0 @@ - -

      How to Download Iso 10628 Pdf Free Downloadl and Use It for Your Process Plant Diagrams

      - -

      If you are involved in the design, operation, or maintenance of process plants for the chemical and petrochemical industry, you may need to create or read flow diagrams that show the structure and function of the plant. Flow diagrams are graphical representations of the equipment, piping, instrumentation, and control systems that are used to produce, process, or treat chemical or petrochemical substances.

      - -

      However, creating or reading flow diagrams can be challenging if you don't have a standard and consistent way of depicting the elements of the plant. Different industries, regions, and applications may use different symbols, formats, and conventions for their flow diagrams, which can lead to confusion, errors, and inefficiencies.

      -

      Iso 10628 Pdf Free Downloadl


      Download Zip ✫✫✫ https://gohhs.com/2uFV2r



      - -

      That's why you may want to download Iso 10628 Pdf Free Downloadl, which is a set of two international standards that specify the types, content, and presentation of flow diagrams for process plants. Iso 10628 Pdf Free Downloadl provides you with a common language and framework for communication and documentation of your process plant diagrams.

      - -

      What is Iso 10628 Pdf Free Downloadl?

      - -

      Iso 10628 Pdf Free Downloadl is a set of two international standards that were developed by ISO (the International Organization for Standardization), which is the world's largest developer and publisher of international standards that cover a wide range of topics and sectors.

      - -

      Iso 10628 Pdf Free Downloadl consists of two parts:

      - -
        -
      • Part 1: Specification of diagrams. This part covers the classification, information content, layout, inscription, scale, and limits of flow diagrams for process plants.
      • -
      • Part 2: Graphical symbols. This part covers the graphical symbols that are used to depict the elements of flow diagrams for process plants, such as heat exchangers, valves, fittings, cooling towers, and safety devices.
      • -
      - -

      Iso 10628 Pdf Free Downloadl is based on the best practices and solutions that were agreed upon by experts from different countries and organizations who collaborated to achieve consensus on the standards.

      - -

      Why You Should Use Iso 10628 Pdf Free Downloadl

      - -

      There are many benefits of using Iso 10628 Pdf Free Downloadl as a reference for your flow diagrams for process plants. Some of them are:

      - -
        -
      • It provides you with a common language and framework for communication and documentation of your process plant diagrams among different stakeholders, such as engineers, technicians, operators, managers, regulators, customers, suppliers, etc.
      • -
      • It ensures consistency and clarity in the representation of your process plant diagrams across different industries, regions, and applications.
      • -
      • It facilitates the exchange and comparison of information and data among different sources and systems.
      • -
      • It supports the analysis and optimization of your process performance, efficiency, safety, and environmental impact.
      • -
      • It helps you to identify and resolve potential problems and risks in your process plant diagrams.
      • -
      - -

      How to Download Iso 10628 Pdf Free Downloadl

      - -

If you want to download Iso 10628 Pdf Free Downloadl, you can buy an official copy by following these steps:

      -

      -

      - -
        -
      1. Go to the ISO Store website at https://www.iso.org/store.html.
      2. -
      3. Type "Iso 10628" in the search box and click on the magnifying glass icon.
      4. -
      5. You will see a list of results that match your search query. Click on the title of Iso 10628-1:2014 or Iso 10628-2:2012 to access their respective pages.
      6. -
      7. On each page, you will see a preview of the standard's content and a button that says "Buy". Click on this button to proceed to the checkout page.
      8. -
      9. On the checkout page, you will see the price and format options for each standard. You can choose between PDF or hardcopy versions. You can also choose your preferred currency and delivery method.
      10. -
      11. Fill in your billing and shipping details and click on "Place order". You will be redirected to a secure payment gateway where you can complete your transaction.
      12. -
      13. Once your payment is confirmed, you will receive an email confirmation with a link to download your PDF file or a tracking number for your hardcopy order.
      14. -
      - -

      Alternatively, you can also buy Iso 10628 Pdf Free Downloadl from your national ISO member. You can find the contact information and online store links of all ISO members on this page: https://www.iso.org/members.html. Buying from your national ISO member may have some advantages, such as:

      - -
        -
      • You can pay in your local currency and avoid exchange rate fees.
      • -
      • You can get personalized service and technical feedback on certain standards.
      • -
      • You can support the development and maintenance of standards in your country.
      • -
      - -

      How to Use Iso 10628 Pdf Free Downloadl for Your Process Plant Diagrams

      - -

      Once you have downloaded Iso 10628 Pdf Free Downloadl, you can use it as a reference for creating or reading flow diagrams for process plants. Iso 10628 Pdf Free Downloadl provides you with clear and consistent guidelines and symbols for depicting the elements of your process plant diagrams.

      - -

      To use Iso 10628 Pdf Free Downloadl for your process plant diagrams, you need to follow these steps:

      - -
        -
      1. Determine the type and purpose of your flow diagram. Iso 10628 Pdf Free Downloadl classifies flow diagrams into four types: block diagrams, process flow diagrams, piping and instrumentation diagrams (P&ID), and utility flow diagrams. Each type has a different level of detail and information content. You need to choose the type that best suits your needs and objectives.
      2. -
      3. Select the graphical symbols that correspond to the elements of your flow diagram. Iso 10628 Pdf Free Downloadl provides you with a comprehensive list of graphical symbols that are used to represent the equipment, piping, instrumentation, and control systems of process plants. Each symbol has a unique shape, code, and meaning. You need to use the symbols that are relevant and appropriate for your flow diagram.
      4. -
      5. Arrange the graphical symbols according to the layout rules of Iso 10628 Pdf Free Downloadl. Iso 10628 Pdf Free Downloadl gives you some general rules for arranging the graphical symbols on your flow diagram, such as using horizontal or vertical lines to connect them, using dashed lines to indicate hidden or remote connections, using arrows to indicate flow direction, using text boxes to provide additional information, etc. You need to follow these rules to ensure clarity and consistency in your flow diagram.
      6. -
      7. Inscribe the graphical symbols with the necessary data and labels. Iso 10628 Pdf Free Downloadl provides you with some guidelines for inscribing the graphical symbols with data and labels that describe their characteristics, functions, parameters, values, units, etc. You need to provide enough data and labels to make your flow diagram informative and understandable.
      8. -
      9. Scale your flow diagram according to the size and complexity of your process plant. Iso 10628 Pdf Free Downloadl gives you some recommendations for scaling your flow diagram according to the size and complexity of your process plant. You need to choose a scale that allows you to show all the relevant details of your process plant without compromising readability or accuracy.
      10. -
11. Define the limits of your flow diagram according to the scope and boundaries of your process plant. Iso 10628 Pdf Free Downloadl gives you some suggestions for defining these limits so that your diagram covers exactly the part of the process plant that you intend to document.

        Conclusion

        - -

        Iso 10628 Pdf Free Downloadl is a valuable resource for anyone who needs to create or read flow diagrams for process plants. It provides you with a standard and consistent way of representing the structure and operation of your process plant using graphical symbols and rules. It also helps you to communicate and document your process plant diagrams effectively and efficiently.

        - -

If you want to obtain Iso 10628 Pdf Free Downloadl, you can buy it either from the ISO Store website or from your national ISO member. You can then use it as a reference for creating or reading your flow diagram according to the type, purpose, and scope of your process plant.

        - -

        By using Iso 10628 Pdf Free Downloadl for your process plant diagrams, you can benefit from the best practices and solutions that were developed by experts from different countries and organizations. You can also improve your process performance, efficiency, safety, and environmental impact by using clear and consistent flow diagrams for your process plants.

        - -

        We hope this article has helped you to understand what Iso 10628 Pdf Free Downloadl is and how to use it for your process plant diagrams. If you have any questions or feedback, please feel free to contact us.

        -
        -
        \ No newline at end of file diff --git a/spaces/diazcalvi/KIONAPI/app2.py b/spaces/diazcalvi/KIONAPI/app2.py deleted file mode 100644 index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000 --- a/spaces/diazcalvi/KIONAPI/app2.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/doevent/blip/models/blip.py b/spaces/doevent/blip/models/blip.py deleted file mode 100644 index 38678f65ea2c276b351c2c97d429ebc2525ddcf7..0000000000000000000000000000000000000000 --- a/spaces/doevent/blip/models/blip.py +++ /dev/null @@ -1,238 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. - * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import warnings -warnings.filterwarnings("ignore") - -from models.vit import VisionTransformer, interpolate_pos_embed -from models.med import BertConfig, BertModel, BertLMHeadModel -from transformers import BertTokenizer - -import torch -from torch import nn -import torch.nn.functional as F - -import os -from urllib.parse import urlparse -from timm.models.hub import download_cached_file - -class BLIP_Base(nn.Module): - def __init__(self, - med_config = 'configs/med_config.json', - image_size = 224, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer) - self.tokenizer = init_tokenizer() - med_config = BertConfig.from_json_file(med_config) - med_config.encoder_width = vision_width - self.text_encoder = BertModel(config=med_config, add_pooling_layer=False) - - - def forward(self, image, caption, mode): - - assert mode in ['image', 'text', 'multimodal'], "mode parameter must be image, text, or multimodal" - text = self.tokenizer(caption, return_tensors="pt").to(image.device) - - if mode=='image': - # return image features - image_embeds = self.visual_encoder(image) - return image_embeds - - elif mode=='text': - # return text features - text_output = self.text_encoder(text.input_ids, attention_mask = text.attention_mask, - return_dict = True, mode = 'text') - return text_output.last_hidden_state - - elif mode=='multimodal': - # return multimodel features - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - - text.input_ids[:,0] = self.tokenizer.enc_token_id - output = self.text_encoder(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = True, - ) - return output.last_hidden_state - - - -class BLIP_Decoder(nn.Module): - def __init__(self, - med_config = 'configs/med_config.json', - image_size = 384, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - prompt = 'a picture of ', - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - 
self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer) - self.tokenizer = init_tokenizer() - med_config = BertConfig.from_json_file(med_config) - med_config.encoder_width = vision_width - self.text_decoder = BertLMHeadModel(config=med_config) - - self.prompt = prompt - self.prompt_length = len(self.tokenizer(self.prompt).input_ids)-1 - - - def forward(self, image, caption): - - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - - text = self.tokenizer(caption, padding='longest', truncation=True, max_length=40, return_tensors="pt").to(image.device) - - text.input_ids[:,0] = self.tokenizer.bos_token_id - - decoder_targets = text.input_ids.masked_fill(text.input_ids == self.tokenizer.pad_token_id, -100) - decoder_targets[:,:self.prompt_length] = -100 - - decoder_output = self.text_decoder(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - labels = decoder_targets, - return_dict = True, - ) - loss_lm = decoder_output.loss - - return loss_lm - - def generate(self, image, sample=False, num_beams=3, max_length=30, min_length=10, top_p=0.9, repetition_penalty=1.0): - image_embeds = self.visual_encoder(image) - - if not sample: - image_embeds = image_embeds.repeat_interleave(num_beams,dim=0) - - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - model_kwargs = {"encoder_hidden_states": image_embeds, "encoder_attention_mask":image_atts} - - prompt = [self.prompt] * image.size(0) - input_ids = self.tokenizer(prompt, return_tensors="pt").input_ids.to(image.device) - input_ids[:,0] = self.tokenizer.bos_token_id - input_ids = input_ids[:, :-1] - - if sample: - #nucleus sampling - outputs = self.text_decoder.generate(input_ids=input_ids, - max_length=max_length, - min_length=min_length, - do_sample=True, - top_p=top_p, - num_return_sequences=1, - eos_token_id=self.tokenizer.sep_token_id, - pad_token_id=self.tokenizer.pad_token_id, - repetition_penalty=1.1, - **model_kwargs) - else: - #beam search - outputs = self.text_decoder.generate(input_ids=input_ids, - max_length=max_length, - min_length=min_length, - num_beams=num_beams, - eos_token_id=self.tokenizer.sep_token_id, - pad_token_id=self.tokenizer.pad_token_id, - repetition_penalty=repetition_penalty, - **model_kwargs) - - captions = [] - for output in outputs: - caption = self.tokenizer.decode(output, skip_special_tokens=True) - captions.append(caption[len(self.prompt):]) - return captions - - -def blip_decoder(pretrained='',**kwargs): - model = BLIP_Decoder(**kwargs) - if pretrained: - model,msg = load_checkpoint(model,pretrained) - assert(len(msg.missing_keys)==0) - return model - -def blip_feature_extractor(pretrained='',**kwargs): - model = BLIP_Base(**kwargs) - if pretrained: - model,msg = load_checkpoint(model,pretrained) - assert(len(msg.missing_keys)==0) - return model - -def init_tokenizer(): - tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - tokenizer.add_special_tokens({'bos_token':'[DEC]'}) - tokenizer.add_special_tokens({'additional_special_tokens':['[ENC]']}) - tokenizer.enc_token_id = tokenizer.additional_special_tokens_ids[0] - return tokenizer - - -def create_vit(vit, image_size, use_grad_checkpointing=False, ckpt_layer=0, drop_path_rate=0): - - assert vit in ['base', 'large'], "vit parameter must be base or large" - if vit=='base': - vision_width = 768 - visual_encoder = 
VisionTransformer(img_size=image_size, patch_size=16, embed_dim=vision_width, depth=12, - num_heads=12, use_grad_checkpointing=use_grad_checkpointing, ckpt_layer=ckpt_layer, - drop_path_rate=0 or drop_path_rate - ) - elif vit=='large': - vision_width = 1024 - visual_encoder = VisionTransformer(img_size=image_size, patch_size=16, embed_dim=vision_width, depth=24, - num_heads=16, use_grad_checkpointing=use_grad_checkpointing, ckpt_layer=ckpt_layer, - drop_path_rate=0.1 or drop_path_rate - ) - return visual_encoder, vision_width - -def is_url(url_or_filename): - parsed = urlparse(url_or_filename) - return parsed.scheme in ("http", "https") - -def load_checkpoint(model,url_or_filename): - if is_url(url_or_filename): - cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True) - checkpoint = torch.load(cached_file, map_location='cpu') - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location='cpu') - else: - raise RuntimeError('checkpoint url or path is invalid') - - state_dict = checkpoint['model'] - - state_dict['visual_encoder.pos_embed'] = interpolate_pos_embed(state_dict['visual_encoder.pos_embed'],model.visual_encoder) - if 'visual_encoder_m.pos_embed' in model.state_dict().keys(): - state_dict['visual_encoder_m.pos_embed'] = interpolate_pos_embed(state_dict['visual_encoder_m.pos_embed'], - model.visual_encoder_m) - for key in model.state_dict().keys(): - if key in state_dict.keys(): - if state_dict[key].shape!=model.state_dict()[key].shape: - del state_dict[key] - - msg = model.load_state_dict(state_dict,strict=False) - print('load checkpoint from %s'%url_or_filename) - return model,msg - diff --git a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py b/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py deleted file mode 100644 index c8340c723fad8e07e2fc62daaa3912487498814b..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/backbone.py +++ /dev/null @@ -1,221 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -""" -Backbone modules. -""" - -from typing import Dict, List - -import torch -import torch.nn.functional as F -import torchvision -from torch import nn -from torchvision.models._utils import IntermediateLayerGetter - -from groundingdino.util.misc import NestedTensor, clean_state_dict, is_main_process - -from .position_encoding import build_position_encoding -from .swin_transformer import build_swin_transformer - - -class FrozenBatchNorm2d(torch.nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters are fixed. 
- - Copy-paste from torchvision.misc.ops with added eps before rqsrt, - without which any other models than torchvision.models.resnet[18,34,50,101] - produce nans. - """ - - def __init__(self, n): - super(FrozenBatchNorm2d, self).__init__() - self.register_buffer("weight", torch.ones(n)) - self.register_buffer("bias", torch.zeros(n)) - self.register_buffer("running_mean", torch.zeros(n)) - self.register_buffer("running_var", torch.ones(n)) - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - num_batches_tracked_key = prefix + "num_batches_tracked" - if num_batches_tracked_key in state_dict: - del state_dict[num_batches_tracked_key] - - super(FrozenBatchNorm2d, self)._load_from_state_dict( - state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ) - - def forward(self, x): - # move reshapes to the beginning - # to make it fuser-friendly - w = self.weight.reshape(1, -1, 1, 1) - b = self.bias.reshape(1, -1, 1, 1) - rv = self.running_var.reshape(1, -1, 1, 1) - rm = self.running_mean.reshape(1, -1, 1, 1) - eps = 1e-5 - scale = w * (rv + eps).rsqrt() - bias = b - rm * scale - return x * scale + bias - - -class BackboneBase(nn.Module): - def __init__( - self, - backbone: nn.Module, - train_backbone: bool, - num_channels: int, - return_interm_indices: list, - ): - super().__init__() - for name, parameter in backbone.named_parameters(): - if ( - not train_backbone - or "layer2" not in name - and "layer3" not in name - and "layer4" not in name - ): - parameter.requires_grad_(False) - - return_layers = {} - for idx, layer_index in enumerate(return_interm_indices): - return_layers.update( - {"layer{}".format(5 - len(return_interm_indices) + idx): "{}".format(layer_index)} - ) - - # if len: - # if use_stage1_feature: - # return_layers = {"layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"} - # else: - # return_layers = {"layer2": "0", "layer3": "1", "layer4": "2"} - # else: - # return_layers = {'layer4': "0"} - self.body = IntermediateLayerGetter(backbone, return_layers=return_layers) - self.num_channels = num_channels - - def forward(self, tensor_list: NestedTensor): - xs = self.body(tensor_list.tensors) - out: Dict[str, NestedTensor] = {} - for name, x in xs.items(): - m = tensor_list.mask - assert m is not None - mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0] - out[name] = NestedTensor(x, mask) - # import ipdb; ipdb.set_trace() - return out - - -class Backbone(BackboneBase): - """ResNet backbone with frozen BatchNorm.""" - - def __init__( - self, - name: str, - train_backbone: bool, - dilation: bool, - return_interm_indices: list, - batch_norm=FrozenBatchNorm2d, - ): - if name in ["resnet18", "resnet34", "resnet50", "resnet101"]: - backbone = getattr(torchvision.models, name)( - replace_stride_with_dilation=[False, False, dilation], - pretrained=is_main_process(), - norm_layer=batch_norm, - ) - else: - raise NotImplementedError("Why you can get here with name {}".format(name)) - # num_channels = 512 if name in ('resnet18', 'resnet34') else 2048 - assert name not in ("resnet18", "resnet34"), "Only resnet50 and resnet101 are available." 
- assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]] - num_channels_all = [256, 512, 1024, 2048] - num_channels = num_channels_all[4 - len(return_interm_indices) :] - super().__init__(backbone, train_backbone, num_channels, return_interm_indices) - - -class Joiner(nn.Sequential): - def __init__(self, backbone, position_embedding): - super().__init__(backbone, position_embedding) - - def forward(self, tensor_list: NestedTensor): - xs = self[0](tensor_list) - out: List[NestedTensor] = [] - pos = [] - for name, x in xs.items(): - out.append(x) - # position encoding - pos.append(self[1](x).to(x.tensors.dtype)) - - return out, pos - - -def build_backbone(args): - """ - Useful args: - - backbone: backbone name - - lr_backbone: - - dilation - - return_interm_indices: available: [0,1,2,3], [1,2,3], [3] - - backbone_freeze_keywords: - - use_checkpoint: for swin only for now - - """ - position_embedding = build_position_encoding(args) - train_backbone = True - if not train_backbone: - raise ValueError("Please set lr_backbone > 0") - return_interm_indices = args.return_interm_indices - assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]] - args.backbone_freeze_keywords - use_checkpoint = getattr(args, "use_checkpoint", False) - - if args.backbone in ["resnet50", "resnet101"]: - backbone = Backbone( - args.backbone, - train_backbone, - args.dilation, - return_interm_indices, - batch_norm=FrozenBatchNorm2d, - ) - bb_num_channels = backbone.num_channels - elif args.backbone in [ - "swin_T_224_1k", - "swin_B_224_22k", - "swin_B_384_22k", - "swin_L_224_22k", - "swin_L_384_22k", - ]: - pretrain_img_size = int(args.backbone.split("_")[-2]) - backbone = build_swin_transformer( - args.backbone, - pretrain_img_size=pretrain_img_size, - out_indices=tuple(return_interm_indices), - dilation=False, - use_checkpoint=use_checkpoint, - ) - - bb_num_channels = backbone.num_features[4 - len(return_interm_indices) :] - else: - raise NotImplementedError("Unknown backbone {}".format(args.backbone)) - - assert len(bb_num_channels) == len( - return_interm_indices - ), f"len(bb_num_channels) {len(bb_num_channels)} != len(return_interm_indices) {len(return_interm_indices)}" - - model = Joiner(backbone, position_embedding) - model.num_channels = bb_num_channels - assert isinstance( - bb_num_channels, List - ), "bb_num_channels is expected to be a List but {}".format(type(bb_num_channels)) - # import ipdb; ipdb.set_trace() - return model diff --git a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/chunks/index.c146e4e6.js b/spaces/dylanebert/gaussian-viewer/public/_app/immutable/chunks/index.c146e4e6.js deleted file mode 100644 index 3c673f037e7bbb95da0135a25846c9ae3a643fa9..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/chunks/index.c146e4e6.js +++ /dev/null @@ -1 +0,0 @@ -var C=Object.defineProperty;var E=(e,t,n)=>t in e?C(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var p=(e,t,n)=>(E(e,typeof t!="symbol"?t+"":t,n),n);import{r as h,n as y,i as w,j,k as S,l as B,m as b,p as I,q as M,v as N,w as P,x as T,y as q}from"./scheduler.8b74b908.js";let $=!1;function H(){$=!0}function L(){$=!1}function O(e,t,n,a){for(;e>1);n(s)<=a?e=s+1:t=s}return e}function D(e){if(e.hydrate_init)return;e.hydrate_init=!0;let t=e.childNodes;if(e.nodeName==="HEAD"){const i=[];for(let r=0;r0&&t[n[s]].claim_order<=r?s+1:O(1,s,_=>t[n[_]].claim_order,r))-1;a[i]=n[o]+1;const u=o+1;n[u]=i,s=Math.max(u,s)}const c=[],l=[];let f=t.length-1;for(let 
i=n[s]+1;i!=0;i=a[i-1]){for(c.push(t[i-1]);f>=i;f--)l.push(t[f]);f--}for(;f>=0;f--)l.push(t[f]);c.reverse(),l.sort((i,r)=>i.claim_order-r.claim_order);for(let i=0,r=0;i=c[r].claim_order;)r++;const o=r{for(let l=e.claim_info.last_index;l=0;l--){const f=e[l];if(t(f)){const i=n(f);return i===void 0?e.splice(l,1):e[l]=i,s?i===void 0&&e.claim_info.last_index--:e.claim_info.last_index=l,f}}return a()})();return c.claim_order=e.claim_info.total_claimed,e.claim_info.total_claimed+=1,c}function G(e,t,n,a){return A(e,s=>s.nodeName===t,s=>{const c=[];for(let l=0;ls.removeAttribute(l))},()=>a(t))}function ae(e,t,n){return G(e,t,n,W)}function J(e,t){return A(e,n=>n.nodeType===3,n=>{const a=""+t;if(n.data.startsWith(a)){if(n.data.length!==a.length)return n.splitText(a.length)}else n.data=a},()=>x(t),!0)}function le(e){return J(e," ")}function se(e,t){t=""+t,e.data!==t&&(e.data=t)}function fe(e,t,n,a){n==null?e.style.removeProperty(t):e.style.setProperty(t,n,a?"important":"")}function ce(e,t){return new e(t)}const m=new Set;let d;function ue(){d={r:0,c:[],p:d}}function oe(){d.r||h(d.c),d=d.p}function K(e,t){e&&e.i&&(m.delete(e),e.i(t))}function de(e,t,n,a){if(e&&e.o){if(m.has(e))return;m.add(e),d.c.push(()=>{m.delete(e),a&&(n&&e.d(1),a())}),e.o(t)}else a&&a()}function _e(e){e&&e.c()}function me(e,t){e&&e.l(t)}function Q(e,t,n){const{fragment:a,after_update:s}=e.$$;a&&a.m(t,n),b(()=>{const c=e.$$.on_mount.map(P).filter(S);e.$$.on_destroy?e.$$.on_destroy.push(...c):h(c),e.$$.on_mount=[]}),s.forEach(b)}function U(e,t){const n=e.$$;n.fragment!==null&&(I(n.after_update),h(n.on_destroy),n.fragment&&n.fragment.d(t),n.on_destroy=n.fragment=null,n.ctx=[])}function X(e,t){e.$$.dirty[0]===-1&&(T.push(e),q(),e.$$.dirty.fill(0)),e.$$.dirty[t/31|0]|=1<{const v=g.length?g[0]:_;return r.ctx&&s(r.ctx[u],r.ctx[u]=v)&&(!r.skip_bound&&r.bound[u]&&r.bound[u](v),o&&X(e,u)),_}):[],r.update(),o=!0,h(r.before_update),r.fragment=a?a(r.ctx):!1,t.target){if(t.hydrate){H();const u=z(t.target);r.fragment&&r.fragment.l(u),u.forEach(V)}else r.fragment&&r.fragment.c();t.intro&&K(e.$$.fragment),Q(e,t.target,t.anchor),L(),j()}N(i)}class $e{constructor(){p(this,"$$");p(this,"$$set")}$destroy(){U(this,1),this.$destroy=y}$on(t,n){if(!S(n))return y;const a=this.$$.callbacks[t]||(this.$$.callbacks[t]=[]);return a.push(n),()=>{const s=a.indexOf(n);s!==-1&&a.splice(s,1)}}$set(t){this.$$set&&!B(t)&&(this.$$.skip_bound=!0,this.$$set(t),this.$$.skip_bound=!1)}}const Y="4";typeof window<"u"&&(window.__svelte||(window.__svelte={v:new Set})).v.add(Y);export{$e as S,ee as a,oe as b,le as c,K as d,ne as e,V as f,W as g,ae as h,he as i,z as j,ie as k,fe as l,x as m,J as n,se as o,ue as p,ce as q,_e as r,te as s,de as t,me as u,Q as v,U as w,R as x,re as y}; diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/models/stylegan2/op/upfirdn2d.py b/spaces/emc348/faces-through-time/models/StyleCLIP/models/stylegan2/op/upfirdn2d.py deleted file mode 100644 index 02fc25af780868d9b883631eb6b03a25c225d745..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/models/StyleCLIP/models/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,60 +0,0 @@ -import os - -import torch -from torch.nn import functional as F - - -module_path = os.path.dirname(__file__) - - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = upfirdn2d_native( - input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1] - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, 
channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) \ No newline at end of file diff --git a/spaces/ericjuliantooo/paraphrase/Dockerfile b/spaces/ericjuliantooo/paraphrase/Dockerfile deleted file mode 100644 index 6c28f45063487a72e0d7db146bb57e7b1fcb6780..0000000000000000000000000000000000000000 --- a/spaces/ericjuliantooo/paraphrase/Dockerfile +++ /dev/null @@ -1,9 +0,0 @@ -FROM python:3.9.12-buster - -WORKDIR /app - -COPY . . - -RUN pip install -r requirements.txt - -CMD [ "streamlit", "run", "app.py" ] \ No newline at end of file diff --git a/spaces/eubinecto/idiomify/main_upload_tokenizer.py b/spaces/eubinecto/idiomify/main_upload_tokenizer.py deleted file mode 100644 index b1b6127699b0bd75e02d041e06be0f6c43ca5112..0000000000000000000000000000000000000000 --- a/spaces/eubinecto/idiomify/main_upload_tokenizer.py +++ /dev/null @@ -1,29 +0,0 @@ -import wandb -import shutil -from transformers import BartTokenizer -from idiomify.fetchers import fetch_config -from idiomify.paths import ROOT_DIR - - -def main(): - config = fetch_config()['tokenizer'] - tokenizer = BartTokenizer.from_pretrained(config['bart']) - tokenizer.add_special_tokens({ - "additional_special_tokens": ["", ""], # beginning and end of an idiom - }) - - with wandb.init(entity="eubinecto", project="idiomify") as run: - # the paths to write datasets in - tok_dir = ROOT_DIR / "tokenizer" - tokenizer.save_pretrained(tok_dir) - artifact = wandb.Artifact(name="tokenizer", type="other", description=config['description'], - metadata=config) - artifact.add_dir(tok_dir) - # then, we just log them here. 
- run.log_artifact(artifact, aliases=["latest", config['ver']]) - # don't forget to remove them - shutil.rmtree(tok_dir) - - -if __name__ == '__main__': - main() diff --git a/spaces/exbert-project/exbert/client/src/ts/etc/types.ts b/spaces/exbert-project/exbert/client/src/ts/etc/types.ts deleted file mode 100644 index e7271c712dd6f9f85b2a2d0637f7c3283fc8446c..0000000000000000000000000000000000000000 --- a/spaces/exbert-project/exbert/client/src/ts/etc/types.ts +++ /dev/null @@ -1,96 +0,0 @@ -import { D3Sel } from "./Util" - -/** - * ATTENTION RESPONSES FROM BACKEND - * - * Contain the attentions and embeddings for all combinations of returns from the backend - */ - -export type ModelInfo = { - nlayers: number - nheads: number -} - - -/** - * ATTENTION RESULTS FROM BACKEND - * - * These are the results that are encased in the 'aa' and 'ab' keys returned - */ -type AbstractAttentionResponse = { - aa: T -} - -export type AttentionResponse = AbstractAttentionResponse -export type AttentionMetaResult = AbstractAttentionResult - -export type FullSingleTokenInfo = { - text: string, - topk_words: string[], - topk_probs: number[] -} - -interface AbstractAttentionResult { - att: number[][][], - left: T, - right: T, -} - -/** - * SEARCH RESULT TYPES - */ - -interface MatchedTokAtt { - att: number[] - offset_to_max: number - loc_of_max: number -} - - -interface MatchedAttentions { - in: MatchedTokAtt, - out: MatchedTokAtt, -} - -/** - * EVENT TYPES - */ -export interface TokenEvent { - sel?: D3Sel, - side: SideOptions, - ind: number | "null", -} - -export interface TokenEmbeddingEvent extends TokenEvent { - embeddings: number[] -} - -export type HeadBoxEvent = { - side: SideOptions, - ind: number, - head: number, -} - -/** - * MISCELLANEOUS TYPES - */ - -export type SideOptions = "left" | "right" - -export enum Toggled { - ADDED = 0, - REMOVED, -} - -export enum NormBy { - ROW = 0, - COL, - ALL -} - -export enum ModelKind { - Bidirectional = "bidirectional", - Autoregressive = "autoregressive" -} -export type TokenOptions = "a" | "b" | "all" -export type SentenceOptions = "ab" | "ba" | "aa" | "bb" | "all"; \ No newline at end of file diff --git a/spaces/failfast/2D-GameCreator/src/constants/baseGame.ts b/spaces/failfast/2D-GameCreator/src/constants/baseGame.ts deleted file mode 100644 index 2015a399aab9af8257b6ac0cb0c22c6f43194244..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/src/constants/baseGame.ts +++ /dev/null @@ -1,13 +0,0 @@ -export const baseGame = { - default: `const canvas = document.querySelector('canvas'); -canvas.style.backgroundColor = '#fff'; -const ctx = canvas.getContext('2d'); - -function draw(delta) { - // TODO: Add drawing logic here -} - -// DO NOT CHANGE THE FOLLOWING LINE -requestAnimationFrame(window.createGameLoop(draw)); -`.trim(), -}; diff --git a/spaces/falterWliame/Face_Mask_Detection/Dsls Licgen Ssqexe.md b/spaces/falterWliame/Face_Mask_Detection/Dsls Licgen Ssqexe.md deleted file mode 100644 index 6429a911fcf32868e07cf22b37ee62f8d91b3a69..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Dsls Licgen Ssqexe.md +++ /dev/null @@ -1,9 +0,0 @@ -

        Dsls Licgen Ssqexe


        DOWNLOAD →→→ https://urlca.com/2uDdEk



        -
        -dsls licgen, download dsls.licgen.v2.0.ssq.exe, download dsls.licgen.v1.5.ssq. exe, download dsls.licgen.v1.6.ssq. exe download, dsls.licgen.v2.0.ssq download, ... If you don't know where to get dsls.licgen.v2.0.ssq.exe - no problem! -You can download this file from our website, and then install it by first inserting this extension into the address bar of the explorer (or into the folder with the file): -How to do it: -1. Paste the following line into the address bar of the explorer (or into the folder with the file) 8a78ff9644
        -
        -
        -

        diff --git a/spaces/falterWliame/Face_Mask_Detection/Omron Nt Series Support Tool Software 34.md b/spaces/falterWliame/Face_Mask_Detection/Omron Nt Series Support Tool Software 34.md deleted file mode 100644 index 14f123dc66ae6d1ac30260c04423a324cc918d04..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Omron Nt Series Support Tool Software 34.md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

Besides the simple Formatting Tool, the Device Information Library is also capable of creating an application program that allows the user to perform various conversions and calculations in various settings, based on the data of the target device.

        -

        The Device Information Library and the Soft Conversion Tool are provided only with the functions necessary to support the target devices. Thus, users can produce other application programs by including additional functions in the Soft Conversion Tool. For example, the Application Tool generates a program that acquires the settings of the target device from the support tool and stores them to the host PC. A GUI is created in the Application Tool by following the format of the Soft Conversion Tool, and the user can confirm and change the settings of the target device from the host PC. Moreover, the Application Tool includes functions that can calculate the savings of energy or power consumption.

        -

        omron nt series support tool software 34


        Download File » https://urlca.com/2uDcYX



        -

As there is no update function in the Device Information Library, the information on the target devices must be updated from the Support Tool. In particular, the device information of the target device must be refreshed whenever new parameter values are added to the PLC.

        -

The Support Tool also provides a function for transferring the data of the target device to the host PC. The data can be transferred in various ways, for example by using a storage device. This makes it possible to store the data and format of the target device in the database of the host PC.

        -

Looking at the table, 96.5% (179 of 185) of the included studies used Omron wearables. The average sample of included studies was 14.3, ranging from 5 to 37. In addition, the average number of participants was 57.2, ranging from 20 to 150. The use of Omron wearables was most frequently reported for the monitoring and tracking of behavior change (73.0%) and exercise performance (62.6%), as well as health and physical fitness (65.2%). This implies that Omron wearables are well established and widely used in current health research.

        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Free Fire Mod Apk and Get Unlimited Diamonds Auto Aim and More.md b/spaces/fatiXbelha/sd/Download Free Fire Mod Apk and Get Unlimited Diamonds Auto Aim and More.md deleted file mode 100644 index 42e3dbd141d9227b24a722e9068a55024cb4523a..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Free Fire Mod Apk and Get Unlimited Diamonds Auto Aim and More.md +++ /dev/null @@ -1,116 +0,0 @@ -
        -

        Free Fire Mod APK Unlimited Diamonds Download for Mobile: Everything You Need to Know

        -

        If you are a fan of battle royale games, you might have heard of Free Fire, one of the most popular and downloaded games on the mobile platform. Free Fire offers an exciting and immersive gameplay experience, with various modes, maps, weapons, characters, and more. But what if you want to get unlimited diamonds, skins, and other in-game items without spending real money? That's where Free Fire mod APK comes in. In this article, we will tell you everything you need to know about Free Fire mod APK, including what it is, what it offers, how to download and install it, and whether it is safe and legal to use.

        -

        What is Free Fire and why is it popular?

        -

        Free Fire is a world-famous survival shooter game available on mobile devices. It was developed by Garena International and released in 2017. Since then, it has amassed over a billion downloads on the Google Play Store alone, making it one of the most successful games of all time.

        -

        free fire mod apk unlimited diamonds download for mobile


        Download Zip 🔗 https://urllie.com/2uNwS7



        -

        Free Fire gameplay and features

        -

        Free Fire is a battle royale game, which means that you have to compete with other players on a remote island and be the last one standing. Each match lasts for 10 minutes and involves up to 50 players. You can choose your starting point with your parachute, loot weapons and items, drive vehicles, hide in buildings or bushes, and fight your enemies. You also have to stay within the safe zone, which shrinks over time, forcing you to move closer to your opponents.

        -

        Free Fire has many gameplay features that make it unique and fun. For example, you can create squads of up to 4 players and communicate with them using voice chat. You can also play different modes such as Clash Squad, which is a fast-paced 4v4 team deathmatch, or Ranked Mode, which tests your skills and rewards you with rank points. You can also explore different maps such as Bermuda, Kalahari, Purgatory, or Hangar, each with its own terrain and landmarks.

        -

        Free Fire cosmetics and customization

        -

        One of the reasons why Free Fire is so popular is because of its wide variety of cosmetics and customization options. You can choose from hundreds of characters, each with their own backstory and special abilities. You can also equip them with different outfits, accessories, backpacks, parachutes, banners, emotes, and more. You can also customize your weapons with different skins, attachments, effects, and stickers.

        -

        However, most of these cosmetics are not free. You have to spend diamonds, which are the premium currency of the game, to buy them from the in-game store or from events. Diamonds are not easy to come by unless you spend real money or complete certain tasks. That's why some players look for alternative ways to get unlimited diamonds without spending a dime.

        -

        What is Free Fire mod APK and what does it offer?

        -

        A mod APK is a modified version of an original application that has been altered by someone to provide some extra features or benefits that are not available in the official version. A Free Fire mod APK is a hacked version of the game that gives you access to unlimited diamonds, skins, weapons, health, aimbot, wallhack, and more.

        -

        Free Fire mod APK features and benefits

        -

        Some of the features and benefits that you can get from using a Free Fire mod APK are:

        -


        -
          -
        • Unlimited diamonds: You can get as many diamonds as you want and use them to buy any cosmetic item or upgrade that you desire. You can also use them to spin the lucky wheel, participate in events, or gift your friends.
        • -
        • Unlocked skins and weapons: You can get access to all the skins and weapons in the game, including the rare and exclusive ones. You can also use any weapon in any mode, regardless of the level or rank requirements. You can also modify your weapons with any attachment or effect that you like.
        • -
        • Unlimited health and ammo: You can never run out of health or ammo in the game. You can heal yourself instantly and shoot endlessly without reloading. You can also survive any damage from enemies, falls, explosions, or the zone.
        • -
        • Aimbot and wallhack: You can improve your accuracy and visibility in the game. You can automatically lock on to your enemies' heads and shoot them with one tap. You can also see through walls, buildings, trees, and other obstacles. You can also spot your enemies' location, name, health, and distance on the mini-map.
        • -
        -

        These are just some of the features and benefits that you can enjoy from using a Free Fire mod APK. There are many more that you can discover by yourself once you download and install it on your device.

        -

        Free Fire mod APK disadvantages and risks

        -

        However, using a Free Fire mod APK is not without its drawbacks and dangers. Some of the disadvantages and risks that you should be aware of are:

        -
          -
        • Lack of updates and compatibility: A Free Fire mod APK is usually based on an older version of the game that may not be compatible with the latest updates and patches. This means that you may miss out on new features, modes, maps, events, and bug fixes that are available in the official version. You may also encounter errors, crashes, glitches, or lag while playing the game.
        • -
        • Viruses and malware: A Free Fire mod APK is not verified or authorized by Garena or any other official source. This means that it may contain viruses, malware, spyware, or other harmful programs that can damage your device or steal your personal information. You may also expose your device to hackers or cybercriminals who can access your data or accounts.
        • -
        • Bans and penalties: A Free Fire mod APK is considered a cheating tool that violates the terms of service and fair play policy of the game. This means that if you use it, you may be detected by the anti-cheat system and get banned from playing the game permanently. You may also lose your progress, achievements, rewards, and items that you have earned in the game. You may also face legal action from Garena or other authorities for breaking the law.
        • -
        -

        These are just some of the disadvantages and risks that you should consider before using a Free Fire mod APK. There are many more that you should be careful of when downloading and installing it on your device.

        -

        How to download and install Free Fire mod APK on your mobile device?

        -

        If you still want to try using a Free Fire mod APK despite the drawbacks and dangers, you will need to follow some steps to download and install it on your mobile device. The steps may vary depending on whether you are using an Android or an iOS device.

        -

        Step-by-step guide for Android users

        -

        If you are using an Android device, here are the steps that you need to follow:

        -
          -
        1. Enable unknown sources: Go to your device settings and look for security or privacy options. Find the option that says "unknown sources" or "install apps from unknown sources" and enable it. This will allow you to install applications that are not from the Google Play Store.
        2. -
        3. Download Free Fire mod APK file: Search for a reliable and trustworthy website that offers a Free Fire mod APK file for download. Make sure that the file is compatible with your device model and Android version. Avoid clicking on any suspicious links or ads that may redirect you to malicious sites or download unwanted programs. Once you find a suitable file, click on the download button and wait for it to finish.
        4. -
        5. Install Free Fire mod APK file: Locate the downloaded file in your device storage and tap on it to start the installation process. Follow the instructions on the screen and grant any permissions that are required. Wait for the installation to complete and then launch the game from your app drawer or home screen.
        6. -
        -

        Congratulations! You have successfully downloaded and installed a Free Fire mod APK on your Android device. Now you can enjoy unlimited diamonds, skins, weapons, health, aimbot, wallhack and more in the game.

        -

        Step-by-step guide for iOS users

        -

        If you are using an iOS device, here are the steps that you need to follow:

        -
          -
        1. Jailbreak your device: Unlike Android, iOS does not allow you to install applications that are not from the App Store. Therefore, you will need to jailbreak your device, which is a process that removes the restrictions and limitations imposed by Apple on your device. This will give you more control and freedom over your device and allow you to install third-party apps and tweaks. However, jailbreaking your device also voids your warranty, exposes your device to security risks, and may cause performance issues or data loss. Therefore, you should only do it at your own risk and responsibility.
        2. -
        3. Download Free Fire mod IPA file: An IPA file is the equivalent of an APK file for iOS devices. It is the file format that contains the application data and code. You will need to find a reliable and trustworthy website that offers a Free Fire mod IPA file for download. Make sure that the file is compatible with your device model and iOS version. Avoid clicking on any suspicious links or ads that may redirect you to malicious sites or download unwanted programs. Once you find a suitable file, click on the download button and wait for it to finish.
        4. -
        5. Install Free Fire mod IPA file: Locate the downloaded file in your device storage and tap on it to start the installation process. You may need to use a file manager app or a computer to transfer the file to your device. You may also need to use a third-party app installer such as Cydia Impactor or AltStore to install the file on your device. Follow the instructions on the screen and grant any permissions that are required. Wait for the installation to complete and then launch the game from your app drawer or home screen.
        6. -
        -

        Congratulations! You have successfully downloaded and installed a Free Fire mod IPA on your iOS device. Now you can enjoy unlimited diamonds, skins, weapons, health, aimbot, wallhack and more in the game.

        -

        Is Free Fire mod APK safe and legal to use?

        -

        The final question that you may have is whether Free Fire mod APK is safe and legal to use. The answer is no, it is not safe or legal to use. Here are some of the reasons why:

        -

        Free Fire mod APK safety issues and precautions

        -

        As we mentioned earlier, Free Fire mod APK is not verified or authorized by Garena or any other official source. This means that it may contain viruses, malware, spyware, or other harmful programs that can damage your device or steal your personal information. You may also expose your device to hackers or cybercriminals who can access your data or accounts.

        -

        To avoid these risks, you should always download Free Fire mod APK from a reputable and trustworthy website that has positive reviews and feedback from other users. You should also scan the file with an antivirus or anti-malware software before installing it on your device. You should also backup your data and create a restore point in case something goes wrong.

        -

        Free Fire mod APK legality and consequences

        -

        As we mentioned earlier, Free Fire mod APK is considered a cheating tool that violates the terms of service and fair play policy of the game. This means that if you use it, you may be detected by the anti-cheat system and get banned from playing the game permanently. You may also lose your progress, achievements, rewards, and items that you have earned in the game. You may also face legal action from Garena or other authorities for breaking the law.

        -

        To avoid these consequences, you should never use Free Fire mod APK in online or ranked modes, where you can affect other players' experience or ranking. You should also never use it in tournaments or events, where you can gain unfair advantages or prizes. You should also never share or promote Free Fire mod APK with other players, as this can spread cheating and harm the game community.

        -

        Conclusion and FAQs

        -

        In conclusion, Free Fire mod APK is a hacked version of the game that gives you access to unlimited diamonds, skins, weapons, health, aimbot, wallhack and more. However, it is not safe or legal to use, as it may contain viruses, malware, or other harmful programs that can damage your device or steal your personal information. It may also get you banned from playing the game permanently or face legal action from Garena or other authorities for breaking the law. Therefore, we do not recommend using Free Fire mod APK and advise you to play the game fairly and honestly.

        -

        If you still have any questions or doubts about Free Fire mod APK, here are some FAQs that may help you:

        -

        FAQs

        -
          -
        1. Q: Can I use Free Fire mod APK on PC?
        2. -
        3. A: No, you cannot use Free Fire mod APK on PC. Free Fire mod APK is only designed for mobile devices and will not work on PC. If you want to play Free Fire on PC, you will need to use an official emulator such as BlueStacks or LDPlayer and download the game from the official source.
        4. -
        5. Q: Can I use Free Fire mod APK with my existing account?
        6. -
        7. A: No, you cannot use Free Fire mod APK with your existing account. Free Fire mod APK will create a new account for you with a different username and ID. If you try to log in with your existing account, you may get banned or lose your data.
        8. -
        9. Q: Can I update Free Fire mod APK to the latest version?
        10. -
        11. A: No, you cannot update Free Fire mod APK to the latest version. Free Fire mod APK is usually based on an older version of the game that may not be compatible with the latest updates and patches. If you try to update it, you may encounter errors, crashes, glitches, or lag. You will need to wait for a new version of Free Fire mod APK to be released by the modder.
        12. -
        13. Q: Can I play with my friends who are using the official version of Free Fire?
        14. -
        15. A: No, you cannot play with your friends who are using the official version of Free Fire. Free Fire mod APK will connect you to a different server that is only for modded users. You will not be able to join or invite your friends who are using the official version of the game.
        16. -
        17. Q: Can I uninstall Free Fire mod APK if I don't like it?
        18. -
        19. A: Yes, you can uninstall Free Fire mod APK if you don't like it. You can simply delete the file from your device storage or go to your device settings and look for apps or applications. Find Free Fire mod APK and tap on it to uninstall it. You can also restore your device to its original state by using a backup or a restore point.
        20. -
        -

        I hope this article has helped you understand what Free Fire mod APK is and how to use it. However, I strongly advise you not to use it and play the game fairly and honestly. Thank you for reading and have a nice day!

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy NBA 2K21 for Free on Your Android Device Heres How.md b/spaces/fatiXbelha/sd/Enjoy NBA 2K21 for Free on Your Android Device Heres How.md deleted file mode 100644 index e5c619c1549fbdae5503826765b2388599dfa8ac..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy NBA 2K21 for Free on Your Android Device Heres How.md +++ /dev/null @@ -1,180 +0,0 @@ - -

        How to Download NBA 2K21 on Android for Free

        -

        NBA 2K21 is one of the most popular and realistic basketball simulation games ever created. It features amazing graphics, gameplay, modes, and customization options that will make you feel like you are playing in the NBA. However, if you want to enjoy this game on your Android device, you might face some challenges. The official version of NBA 2K21 is not available for Android devices, and it costs $59.99 on other platforms. So, how can you download NBA 2K21 on Android for free? In this article, we will show you two methods that will allow you to play NBA 2K21 on your Android device without spending a dime. Follow these steps carefully and you will be able to experience the thrill of NBA 2K21 on your smartphone or tablet.

        -

        Introduction

        -

        What is NBA 2K21?

        -

        NBA 2K21 is the latest installment in the NBA 2K series, a franchise that has been dominating the basketball video game market for over two decades. NBA 2K21 was released in September 2020 for PlayStation 4, Xbox One, Nintendo Switch, PC, and Stadia, and it also became available for PlayStation 5 and Xbox Series X/S in November 2020. NBA 2K21 features many improvements and additions over its predecessor, such as:

        -

        how to download nba 2k21 on android for free


        Download File ✫✫✫ https://urllie.com/2uNzkF



        -
          -
        • New gameplay mechanics, such as a revamped shot meter, dribbling controls, and defensive movements
        • -
        • New game modes, such as MyTEAM Unlimited, MyCAREER story mode, MyGM/MyLEAGUE management mode, and The Neighborhood online mode
        • -
        • New content, such as updated rosters, jerseys, courts, players, and legends
        • -
        • New soundtrack, featuring artists like The Weeknd, Drake, Travis Scott, Juice WRLD, and more
        • -
        -

        Why download NBA 2K21 on Android?

        -

        If you are a fan of basketball or video games, you might be wondering why you should download NBA 2K21 on Android. Here are some reasons why you should consider playing NBA 2K21 on your Android device:

        -
          -
        • You can play NBA 2K21 anytime and anywhere, as long as you have an internet connection and enough battery life
        • -
        • You can enjoy NBA 2K21 on a smaller screen, which might be more comfortable and convenient for some users
        • -
        • You can save money by not buying the official version of NBA 2K21, which is quite expensive compared to other games
        • -
        • You can have fun with your friends by playing online or locally with other Android users who have downloaded NBA 2K21
        • -
        -

        How to download NBA 2K21 on Android for free?

        -

        As we mentioned earlier, there is no official version of NBA 2K21 for Android devices. However, there are two methods that will allow you to play NBA 2K21 on your Android device for free. The first method is to download the NBA 2K Mobile Basketball game from the Google Play Store, which is a free-to-play version of NBA 2K21 that offers similar gameplay and graphics quality. The second method is to download the NBA 2K21 APK and OBB files from a trusted source, which are the files that contain the full version of NBA 2K21 for Android devices. Both methods have their advantages and disadvantages, so you can choose the one that suits you best. Let's take a look at each method in detail.

        -

        Step 1: Download the NBA 2K Mobile Basketball Game from Google Play Store

        -

        How to install the game on your Android device

        -

        The easiest and safest way to play NBA 2K21 on your Android device is to download the NBA 2K Mobile Basketball game from the Google Play Store. This is a free-to-play game that was released in October 2020 by 2K, Inc., the publisher of the NBA 2K series. To install the game on your Android device, follow these steps:

        -
          -
        1. Open the Google Play Store app on your Android device and search for "NBA 2K Mobile Basketball"
        2. -
        3. Tap on the game icon and then tap on the "Install" button
        4. -
        5. Wait for the game to download and install on your device. The game size is about 1.5 GB, so make sure you have enough storage space and a stable internet connection
        6. -
        7. Once the game is installed, tap on the "Open" button to launch the game
        8. -
        -

        What are the features of the game

        -

        The NBA 2K Mobile Basketball game is a simplified and optimized version of NBA 2K21 that offers similar gameplay and graphics quality. Its features include:

        -
          -
        • Over 400 cards of your favorite NBA players and legends that you can collect and upgrade
        • -
        • 5-on-5 basketball matches with real-time PVP action
        • -
        • Seasons mode where you can compete for exclusive rewards and rank up in leaderboards
        • -
        • Events mode where you can participate in limited-time challenges and earn special cards
        • -
        • Console-quality graphics and animations that will immerse you in the game
        • -
        • Customizable controls and settings that will suit your preferences
        • -
        -

        How to play the game and enjoy the NBA 2K21 experience

        -

        To play the game and enjoy the NBA 2K21 experience, you need to create an account and choose your favorite team. You can also link your Facebook or Google account to save your progress and sync your data across devices. Once you are logged in, you can access the main menu where you can choose from different modes and options. Here are some tips on how to play the game and enjoy the NBA 2K21 experience:

        -
          -
        • To collect and upgrade your cards, you need to earn coins and gems by playing matches, completing objectives, and opening packs. You can also buy coins and gems with real money if you want to speed up your progress
        • -
        • To improve your skills and strategies, you need to practice in the training mode where you can learn the basic controls and moves of the game. You can also watch tutorials and tips videos on YouTube or other platforms
        • -
        • To compete with other players, you need to join seasons and events where you can play against real opponents in real-time matches. You can also join clubs and chat with other players in the game
        • -
        • To customize your team and players, you need to equip them with different cards that will boost their attributes and abilities. You can also change their jerseys, shoes, accessories, and hairstyles in the game
        • -
        • To have fun and enjoy the game, you need to explore different modes and features that will keep you entertained and challenged. You can also share your achievements and screenshots with your friends on social media or other platforms
        • -
        -

        Step 2: Download the NBA 2K21 APK and OBB files from a trusted source

        -

        How to find a reliable and safe website to download the files

        -

        The second method to play NBA 2K21 on your Android device is to download the NBA 2K21 APK and OBB files from a trusted source. These are the files that contain the full version of NBA 2K21 for Android devices, which is not officially available on the Google Play Store. However, this method is riskier and more complicated than the first one, as you might encounter problems such as malware, viruses, errors, or bans. Therefore, you need to be careful when choosing a website to download the files from. Here are some tips on how to find a reliable and safe website for the NBA 2K21 APK and OBB files:

        -
          -
        • Do some research and read reviews about the website before downloading anything from it. You can also check the ratings, comments, and feedback from other users who have downloaded the files from the same website
        • -
        • Look for a website that uses a secure connection (HTTPS with a valid SSL certificate). You can also check the URL and avoid websites that have suspicious or misleading names, such as NBA2K21free.com or NBA2K21modapk.com
        • -
        • Compare the file size and version of the NBA 2K21 APK and OBB files from different websites and choose the one that matches the original file size and version of NBA 2K21. You can also check the file details and information, such as the developer, publisher, date, and permissions
        • -
        • Scan the files with antivirus or anti-malware software before downloading or installing them on your device. You can also use online tools such as VirusTotal or MetaDefender to check the files for any threats or risks
        • -
        • Back up your data and device before downloading or installing the files, in case something goes wrong or you encounter any errors or issues. You can also use a VPN or a proxy server to hide your IP address and location if you are worried about your privacy or security
        • -
        -

        How to verify the files and avoid malware or viruses

        -

        One of the main risks of downloading the NBA 2K21 APK and OBB files from an unknown source is that you might get infected with malware or viruses that can harm your device or steal your data. Therefore, you need to verify the files and avoid malware or viruses before installing them on your device. Here are some steps on how to verify the files and avoid malware or viruses:

        -
          -
        1. Download the files only from a trusted source that has positive reviews and ratings from other users. Avoid downloading the files from unknown or shady websites that might contain malicious links or ads
        2. -
        3. Check the file size and version of the NBA 2K21 APK and OBB files and make sure they match the original file size and version of NBA 2K21. If the files are too small or too large, they might be fake or corrupted
        4. -
        5. Check the file details and information of the NBA 2K21 APK and OBB files and make sure they match the original file details and information of NBA 2K21. If the files have different names, developers, publishers, dates, or permissions, they might be modified or tampered with
        6. -
        7. Scan the files with antivirus or anti-malware software before installing them on your device. You can also use online tools such as VirusTotal or MetaDefender to check the files for any threats or risks. If the files are detected as unsafe or harmful, delete them immediately
        8. -
        9. Install the files only on a device that has enough storage space and meets the minimum requirements for NBA 2K21. If your device is not compatible or has low performance, you might experience crashes, glitches, errors, or lagging while playing NBA 2K21
        10. -
        -

        How to extract and copy the files to your Android device

        -

        Once you have downloaded and verified the NBA 2K21 APK and OBB files from a trusted source, you need to extract and copy them to your Android device. The NBA 2K21 APK file is an application file that contains the game installation package, while the NBA 2K21 OBB file is a data file that contains the game content and resources. To extract and copy them to your Android device, follow these steps:

        -

        How to get nba 2k21 for free on android phone
        -NBA 2k21 android free download apk + obb
        -Download nba 2k21 mobile basketball game for android
        -NBA 2k21 free android download no verification
        -How to install nba 2k21 on android device for free
        -NBA 2k21 android free download full version
        -NBA 2k21 apk mod free download for android
        -How to play nba 2k21 on android without paying
        -NBA 2k21 android free download offline
        -Download nba 2k21 for android free with data
        -NBA 2k21 free download for android tablet
        -How to download nba 2k21 on android from play store for free
        -NBA 2k21 android free download latest version
        -NBA 2k21 hack apk free download for android
        -How to download nba 2k21 on android emulator for free
        -NBA 2k21 android free download highly compressed
        -NBA 2k21 free coins and cash for android download
        -How to download nba 2k21 on android with vpn for free
        -NBA 2k21 android free download no root
        -Download nba 2k21 for android free with cheats
        -NBA 2k21 free redeem codes for android download
        -How to download nba 2k21 on android from official website for free
        -NBA 2k21 android free download unlimited money
        -NBA 2k21 cracked apk free download for android
        -How to download nba 2k21 on android using pc for free
        -NBA 2k21 android free download no survey
        -NBA 2k21 free locker codes for android download
        -How to download nba 2k21 on android with torrent for free
        -NBA 2k21 android free download mega link
        -NBA 2k21 patch update free download for android
        -How to download nba 2k21 on android without wifi for free
        -NBA 2k21 android free download google drive link
        -NBA 2k21 license key free download for android
        -How to download nba 2k21 on android with qr code for free
        -NBA 2k21 android free download mediafire link
        -NBA 2k21 roster update free download for android
        -How to download nba 2k21 on android with sd card for free
        -NBA 2k21 android free download zip file
        -NBA 2k21 soundtrack free download for android
        -How to download nba 2k21 on android with bluetooth for free

        -
          -
        1. Locate the NBA 2K21 APK and OBB files on your device storage or download folder. You can use a file manager app such as ES File Explorer or ZArchiver to access them easily
        2. -
        3. Extract the NBA 2K21 OBB file using a file extractor app such as RAR or WinZip. You will get a folder named "com.t2ksports.nba2k21and" that contains another folder named "main.16.com.t2ksports.nba2k21and"
        4. -
        5. Copy the folder "main.16.com.t2ksports.nba2k21and" to your device internal storage in this path: Android > obb > com.t2ksports.nba2k21and. If you don't have an obb folder in your Android folder, create one manually
        6. -
        7. Copy the NBA 2K21 APK file to your device internal storage in any location that you prefer
        8. Make sure you have enough storage space and battery life on your device before proceeding to the next step -
        -

        Step 3: Install the NBA 2K21 APK and run the game

        -

        How to allow unknown sources on your Android device

        -

        The final step to play NBA 2K21 on your Android device is to install the NBA 2K21 APK file and run the game. However, before you can do that, you need to allow unknown sources on your Android device. This is a security setting that prevents you from installing apps that are not downloaded from the Google Play Store. To allow unknown sources on your Android device, follow these steps:

        -
          -
        1. Go to your device settings and tap on "Security" or "Privacy"
        2. -
        3. Find and enable the option "Unknown sources" or "Install unknown apps"
        4. -
        5. A warning message will pop up, telling you that installing apps from unknown sources can harm your device or data. Tap on "OK" or "Allow" to confirm
        6. -
        7. You can now install the NBA 2K21 APK file on your device without any problem
        8. -
        -

        How to install the NBA 2K21 APK file and launch the game

        -

        After allowing unknown sources on your device, you can install the NBA 2K21 APK file and launch the game. To do that, follow these steps:

        -
          -
        1. Locate the NBA 2K21 APK file on your device storage or download folder. You can use a file manager app such as ES File Explorer or ZArchiver to access it easily
        2. -
        3. Tap on the NBA 2K21 APK file and then tap on "Install"
        4. -
        5. Wait for the installation process to finish. It might take a few minutes, depending on your device performance and internet speed
        6. -
        7. Once the installation is done, tap on "Open" to launch the game
        8. -
        -

        How to enjoy the full features of NBA 2K21 on your Android device

        -

        Congratulations! You have successfully installed NBA 2K21 on your Android device. You can now enjoy the full features of NBA 2K21 on your smartphone or tablet. The game features include:

        -
          -
        • All the modes and options that are available in the official version of NBA 2K21, such as MyTEAM Unlimited, MyCAREER story mode, MyGM/MyLEAGUE management mode, and The Neighborhood online mode
        • -
        • All the content and updates that are available in the official version of NBA 2K21, such as updated rosters, jerseys, courts, players, and legends
        • -
        • All the graphics and sound quality that are available in the official version of NBA 2K21, such as console-quality graphics and animations, realistic physics and collisions, and immersive soundtrack and commentary
        • -
        • All the controls and settings that are available in the official version of NBA 2K21, such as customizable controls and settings, touch-screen or controller support, and online or offline mode
        • -
        -

        Conclusion

        -

        Summary of the main points

        -

        In this article, we have shown you how to download NBA 2K21 on Android for free. We have explained two methods that will allow you to play NBA 2K21 on your Android device without spending a dime. The first method is to download the NBA 2K Mobile Basketball game from the Google Play Store, which is a free-to-play version of NBA 2K21 that offers a similar gameplay and graphics quality. The second method is to download the NBA 2K21 APK and OBB files from a trusted source, which are the files that contain the full version of NBA 2K21 for Android devices. Both methods have their advantages and disadvantages, so you can choose the one that suits you best.

        -

        Call to action and invitation to share feedback

        -

        We hope you have found this article helpful and informative. If you have followed our steps carefully, you should be able to play NBA 2K21 on your Android device for free. However, if you encounter any problems or issues while downloading or installing the game, please let us know in the comments section below. We will try our best to help you out. Also, if you have any suggestions or feedback about this article or our website, please feel free to share them with us. We appreciate your support and cooperation.

        -

        Disclaimer and legal notice

        -

        This article is for educational purposes only. We do not condone or encourage piracy or illegal downloading of any games or apps. We are not affiliated with or endorsed by 2K, Inc., the developer or publisher of NBA 2K21, or any other games or apps mentioned in this article. We are not responsible for any damages or losses that may occur as a result of downloading or installing the game or any other files from any sources. Download and install the game at your own risk and discretion.

        -

        FAQs

        -

        Here are some frequently asked questions and answers about how to download NBA 2K21 on Android for free:

        -
          -
        1. Q: Is it legal to download NBA 2K21 on Android for free?
        2. -
        3. A: No, it is not legal to download NBA 2K21 on Android for free. NBA 2K21 is a copyrighted game that belongs to 2K, Inc., and you need to buy the official version of the game to play it legally. Downloading or installing the game from any other sources is considered piracy or illegal downloading, which is a violation of the law and can result in legal actions or penalties.
        4. -
        5. Q: Is it safe to download NBA 2K21 on Android for free?
        6. -
        7. A: No, it is not safe to download NBA 2K21 on Android for free. Downloading or installing the game from any unknown or untrusted sources can expose your device or data to malware, viruses, errors, or bans. You might also encounter some problems or issues while playing the game, such as crashes, glitches, errors, or lagging. Therefore, you need to be careful and cautious when choosing a website to download the game from, and verify the files before installing them on your device.
        8. -
        9. Q: What are the requirements to play NBA 2K21 on Android?
        10. -
        11. A: To play NBA 2K21 on Android, you need to have a device that meets the minimum requirements for the game. The minimum requirements are:
        12. -
            -
          • Android version: 4.3 or higher
          • -
          • RAM: 4 GB or higher
          • -
          • Storage space: 3 GB or higher
          • -
          • Processor: Quad-core or higher
          • -
          • Internet connection: Wi-Fi or mobile data
          • -
          -
        13. Q: How to update NBA 2K21 on Android?
        14. -
        15. A: To update NBA 2K21 on Android, you need to download and install the latest version of the NBA 2K21 APK and OBB files from a trusted source. You can follow the same steps that we have explained in this article to download and install the updated files. However, you need to make sure that you delete the old files before installing the new ones, otherwise you might face some errors or issues.
        16. -
        17. Q: How to fix NBA 2K21 errors or issues on Android?
        18. -
        19. A: If you encounter any errors or issues while playing NBA 2K21 on Android, such as crashes, glitches, errors, or lagging, you can try some of these solutions:
        20. -
            -
          • Restart your device and launch the game again
          • -
          • Clear the cache and data of the game and launch the game again
          • -
          • Check your internet connection and make sure it is stable and fast
          • -
          • Check your device settings and make sure they are compatible with the game
          • -
          • Contact the customer support of the website where you downloaded the game from and ask for help
          • -
          -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/nn.py b/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/nn.py deleted file mode 100644 index b28bd83cf23b4e19868afc2075b11ca1cfbd0e8d..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/nn.py +++ /dev/null @@ -1,190 +0,0 @@ -""" -Various utilities for neural networks. -""" - -import math - -import torch as th -import torch.nn as nn - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * th.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def update_ema(target_params, source_params, rate=0.99): - """ - Update target parameters to be closer to those of source parameters using - an exponential moving average. - - :param target_params: the target parameter sequence. - :param source_params: the source parameter sequence. - :param rate: the EMA rate (closer to 1 means slower). - """ - for targ, src in zip(target_params, source_params): - targ.detach().mul_(rate).add_(src, alpha=1 - rate) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -def timestep_embedding(timesteps, dim, max_period=10000): - """ - Create sinusoidal timestep embeddings. - - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. 
- """ - half = dim // 2 - freqs = th.exp( - -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = th.cat([th.cos(args), th.sin(args)], dim=-1) - if dim % 2: - embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1) - return embedding - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(th.autograd.Function): - @staticmethod - @th.cuda.amp.custom_fwd - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_length = length - ctx.save_for_backward(*args) - with th.no_grad(): - output_tensors = ctx.run_function(*args[:length]) - return output_tensors - - @staticmethod - @th.cuda.amp.custom_bwd - def backward(ctx, *output_grads): - args = list(ctx.saved_tensors) - - # Filter for inputs that require grad. If none, exit early. - input_indices = [i for (i, x) in enumerate(args) if x.requires_grad] - if not input_indices: - return (None, None) + tuple(None for _ in args) - - with th.enable_grad(): - for i in input_indices: - if i < ctx.input_length: - # Not sure why the OAI code does this little - # dance. It might not be necessary. - args[i] = args[i].detach().requires_grad_() - args[i] = args[i].view_as(args[i]) - output_tensors = ctx.run_function(*args[:ctx.input_length]) - - if isinstance(output_tensors, th.Tensor): - output_tensors = [output_tensors] - - # Filter for outputs that require grad. If none, exit early. - out_and_grads = [(o, g) for (o, g) in zip(output_tensors, output_grads) if o.requires_grad] - if not out_and_grads: - return (None, None) + tuple(None for _ in args) - - # Compute gradients on the filtered tensors. - computed_grads = th.autograd.grad( - [o for (o, g) in out_and_grads], - [args[i] for i in input_indices], - [g for (o, g) in out_and_grads] - ) - - # Reassemble the complete gradient tuple. - input_grads = [None for _ in args] - for (i, g) in zip(input_indices, computed_grads): - input_grads[i] = g - return (None, None) + tuple(input_grads) diff --git a/spaces/felixz/Flan-T5-experiment/app.py b/spaces/felixz/Flan-T5-experiment/app.py deleted file mode 100644 index 3679859d19c019d27d9a6dbc9aef05cc65bc75d1..0000000000000000000000000000000000000000 --- a/spaces/felixz/Flan-T5-experiment/app.py +++ /dev/null @@ -1,63 +0,0 @@ -import gradio as gr -from transformers import T5Tokenizer, T5ForConditionalGeneration -from transformers import AutoModel - -# xl size run out of memory on 16GB VM -model_name = 'google/flan-t5-large' -#model_name = 'jncraton/fastchat-t5-3b-v1.0-ct2-int8' - -# Load model directly -#from transformers import AutoModel -model = AutoModel.from_pretrained(model_name) - -tokenizer = T5Tokenizer.from_pretrained(model_name) -#model = T5ForConditionalGeneration.from_pretrained(model_name) - -title = "" - -def get_examples (): - return [ - ["Peter goes to the store to buy a soda. The soda costs $.25 an ounce. 
\ -He brought $2 with him and leaves with $.50. How many ounces of soda did he buy?", - "How much did Peter spend on soda? ** He spend $1.5 on soda because 2 - .5 = <<2-.5=1.5>>1.5 \ -How many ounces of soda did Peter buy? ** He bought 6 ounces of soda because 1.5 / .25 = <<6=6>>6 #### 6" - ], - ["Krystian works in the library. He borrows an average of 40 books every day. \ -Every Friday, his number of borrowed books is about 40% higher than the daily average. How many books does he borrow in a week if the library is open from Monday to Friday?" - ,"How many books does Krystian borrow on Friday? ** The number of books borrowed \ -on Friday is higher by 40 * 40/100 = <<40*40/100=16>>16 books. How many books does Krystian borrow in a week? ** There are 5 days from Monday to Friday inclusive, so Krystian borrows an average of 5 * 40 = <<5*40=200>>200 books during that time. How many books does Krystian borrow in a week? ** With Friday's increase in borrowings, during one week Krystian borrows 200 + 16 = <<200+16=216>>216 books."] - , ["Jane had $60 but gave $30 to dave and went to movies and spend $2. How much money does Jane has left? Answer by reasoning step by step:", "$28"] - ] - - -def text2text(input_text): - input_ids = tokenizer(input_text, return_tensors="pt").input_ids - - outputs = model.generate(input_ids, max_length=200) - return tokenizer.decode(outputs[0]) - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # Flan T5 Large Demo - 780M parameter Large language model fine tuned on diverse tasks. - Prompt the model in the Input box. - """) - txt_in = gr.Textbox(label="Input", lines=3) - correct_label = gr.Label(label="Correct") - txt_out = gr.Textbox(value="", label="Output", lines=4) - - - btn = gr.Button(value="Submit") - btn.click(text2text, inputs=[txt_in], outputs=[txt_out]) - - - gr.Examples( - examples=get_examples(), - inputs=[txt_in,correct_label] - ) - - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game of Thrones All Seasons Tamil Dubbed Movie in KuttyMovies.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game of Thrones All Seasons Tamil Dubbed Movie in KuttyMovies.md deleted file mode 100644 index 22b9071e6e0930f5c3f6bebe7af8f1de4279bf34..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Game of Thrones All Seasons Tamil Dubbed Movie in KuttyMovies.md +++ /dev/null @@ -1,106 +0,0 @@ -
        -

        Game of Thrones Tamil Dubbed Movie Download in KuttyMovies

        -

        Game of Thrones is one of the most popular and acclaimed fantasy drama television series of all time. It has millions of fans around the world who are eagerly waiting for the next season or spin-off. But what if you want to watch Game of Thrones in your native language, such as Tamil? Is it possible to download Game of Thrones Tamil dubbed movie from KuttyMovies, a notorious piracy website that offers free movies and TV shows? In this article, we will answer these questions and more. We will also provide you with some alternatives to KuttyMovies for downloading Game of Thrones Tamil dubbed movie safely and legally.

        -

        game of thrones tamil dubbed movie download in kuttymovies


        DOWNLOADhttps://gohhs.com/2uPuYX



        -

        Introduction

        -

        What is Game of Thrones?

        -

        Game of Thrones is a fantasy drama television series created by David Benioff and D. B. Weiss, based on the novel series A Song of Ice and Fire by George R. R. Martin. The series premiered on HBO in 2011 and concluded in 2019, with eight seasons and 73 episodes. The story revolves around the power struggle among the noble families of Westeros, a fictional continent, for the Iron Throne, the seat of the king. The series also features mythical creatures, such as dragons, direwolves, and white walkers, who pose a threat to the living. The series has won numerous awards, including 59 Emmy Awards, and has been praised for its complex characters, storylines, acting, production values, and cultural impact.

        -

        What is KuttyMovies?

        -

        KuttyMovies is a piracy website that provides free downloads of movies and TV shows in various languages, such as Tamil, Telugu, Hindi, English, Malayalam, Kannada, etc. The website has a huge collection of Tamil dubbed movies, including Hollywood movies, Bollywood movies, South Indian movies, and web series. The website also updates its content regularly with the latest releases and leaks. KuttyMovies is one of the most visited piracy websites in India and attracts millions of users every month.

        -

        Why do people want to watch Game of Thrones in Tamil?

        -

        Tamil is one of the most spoken languages in India, with over 75 million speakers. It is also an official language in Sri Lanka and Singapore. Many people who speak Tamil prefer to watch movies and TV shows in their native language, as it helps them to understand the dialogues better and enjoy the cultural nuances. Moreover, some people may not be comfortable with English subtitles or audio, as they may find them distracting or hard to follow. Therefore, watching Game of Thrones in Tamil can be a more enjoyable and satisfying experience for them.

        -

        How to download Game of Thrones Tamil dubbed movie in KuttyMovies

        -

        Step 1: Visit the KuttyMovies website

        -

        The first step to download Game of Thrones Tamil dubbed movie in KuttyMovies is to visit the website. However, this may not be as easy as it sounds, as KuttyMovies is an illegal website that is blocked by the government and internet service providers. Therefore, you may need to use a VPN service or a proxy site to access the website. A VPN service can help you to bypass the geo-restrictions and hide your IP address from the authorities. A proxy site can help you to access the website through a mirror address when the main domain is blocked.

        Step 2: Search for Game of Thrones Tamil dubbed movie

        -

        The next step is to search for Game of Thrones Tamil dubbed movie in KuttyMovies. You can use the search bar on the homepage or browse through the categories and genres. You can also filter the results by year, quality, size, and language. You may find multiple links for Game of Thrones Tamil dubbed movie, as KuttyMovies uploads different versions from different sources. You can choose the one that suits your preferences.

        -

        Step 3: Choose the preferred quality and size

        -

        After selecting the link for Game of Thrones Tamil dubbed movie, you will be redirected to another page where you can see the details of the movie, such as the title, genre, cast, director, rating, synopsis, screenshots, and download options. You can choose the quality and size of the movie that you want to download, such as 480p, 720p, 1080p, 300MB, 700MB, 1.5GB, etc. The higher the quality and size, the better the video and audio clarity, but also the longer the download time and the more storage space required.

        -

        game of thrones tamil dubbed movie download in isaidub
        -game of thrones tamil dubbed movie download in oceanofmovies
        -game of thrones tamil dubbed movie download in 1080p
        -game of thrones tamil dubbed movie download in 720p
        -game of thrones tamil dubbed movie download in 480p
        -game of thrones tamil dubbed movie download in bluray
        -game of thrones tamil dubbed movie download in hd quality
        -game of thrones tamil dubbed movie download in single part
        -game of thrones tamil dubbed movie download in mp4 format
        -game of thrones tamil dubbed movie download in season 1
        -game of thrones tamil dubbed movie download in season 2
        -game of thrones tamil dubbed movie download in season 3
        -game of thrones tamil dubbed movie download in season 4
        -game of thrones tamil dubbed movie download in season 5
        -game of thrones tamil dubbed movie download in season 6
        -game of thrones tamil dubbed movie download in season 7
        -game of thrones tamil dubbed movie download in season 8
        -game of thrones tamil dubbed movie download in full episodes
        -game of thrones tamil dubbed movie download in fantasy drama genre
        -game of thrones tamil dubbed movie download in HBO original series
        -game of thrones tamil dubbed movie download in A Song of Ice and Fire adaptation
        -game of thrones tamil dubbed movie download in George R. R. Martin novel
        -game of thrones tamil dubbed movie download in David Benioff and D. B. Weiss creation
        -game of thrones tamil dubbed movie download in Westeros and Essos setting
        -game of thrones tamil dubbed movie download in Iron Throne plot
        -game of thrones tamil dubbed movie download in Eddard Stark character
        -game of thrones tamil dubbed movie download in Robert Baratheon character
        -game of thrones tamil dubbed movie download in Jon Arryn character
        -game of thrones tamil dubbed movie download in Daenerys Targaryen character
        -game of thrones tamil dubbed movie download in Tyrion Lannister character
        -game of thrones tamil dubbed movie download in Cersei Lannister character
        -game of thrones tamil dubbed movie download in Jaime Lannister character
        -game of thrones tamil dubbed movie download in Arya Stark character
        -game of thrones tamil dubbed movie download in Sansa Stark character
        -game of thrones tamil dubbed movie download in Bran Stark character
        -game of thrones tamil dubbed movie download in Khal Drogo character
        -game of thrones tamil dubbed movie download in Jorah Mormont character
        -game of thrones tamil dubbed movie download in Samwell Tarly character
        -game of thrones tamil dubbed movie download in Theon Greyjoy character
        -game of thrones tamil dubbed movie download in Catelyn Stark character
        -game of thrones tamil dubbed movie download in Robb Stark character
        -game of thrones tamil dubbed movie download in Petyr Baelish character
        -game of thrones tamil dubbed movie download in Varys character
        -game of thrones tamil dubbed movie download in Sandor Clegane character
        -game of thrones tamil dubbed movie download in Joffrey Baratheon character
        -game of thrones tamil dubbed movie download in Stannis Baratheon character
        -game of thrones tamil dubbed movie download in Melisandre character

        -

        Step 4: Click on the download link

        -

        The final step is to click on the download link for Game of Thrones Tamil dubbed movie. However, before you can start the download process, you may have to face some challenges, such as pop-up ads, redirects, captcha verification, and waiting time. These are some of the ways that KuttyMovies earns money from its users and protects its servers from bots and spam. You have to be patient and careful while dealing with these obstacles and avoid clicking on any suspicious or malicious links or buttons. Once you get past these hurdles, you will be able to download Game of Thrones Tamil dubbed movie in KuttyMovies.

        -

        Step 5: Enjoy watching Game of Thrones in Tamil

        -

        Congratulations! You have successfully downloaded Game of Thrones Tamil dubbed movie in KuttyMovies. Now you can enjoy watching your favorite fantasy drama series in your native language on your device. You can also share it with your friends and family who are also fans of Game of Thrones and Tamil movies. However, you should also be aware of the risks and challenges of downloading Game of Thrones Tamil dubbed movie in KuttyMovies, which we will discuss in the next section.

        -

        Risks and challenges of downloading Game of Thrones Tamil dubbed movie in KuttyMovies

        -

        Legal issues and piracy

        -

        One of the major risks of downloading Game of Thrones Tamil dubbed movie in KuttyMovies is that you are violating the law and engaging in piracy. Piracy is the unauthorized distribution or reproduction of copyrighted content without the permission of the copyright owner. It is a serious offense that can result in legal consequences, such as fines, lawsuits, or even imprisonment. Moreover, piracy harms the entertainment industry and the artists who work hard to create original and quality content. By downloading Game of Thrones Tamil dubbed movie in KuttyMovies, you are depriving them of their rightful revenue and recognition. Therefore, you should respect the intellectual property rights of the creators and avoid downloading Game of Thrones Tamil dubbed movie in KuttyMovies.

        -

        Malware and viruses

        -

        Another risk of downloading Game of Thrones Tamil dubbed movie in KuttyMovies is that you may expose your device and data to malware and viruses. Malware and viruses are malicious software that can infect your device and cause various problems, such as slowing down your device, stealing your personal information, deleting your files, or even damaging your hardware. KuttyMovies is an unsecured and unregulated website that may contain malware and viruses in its download links, ads, or redirects. You may not even notice that your device has been infected until it is too late. Therefore, you should protect your device and data by using reliable antivirus software and avoiding downloads from KuttyMovies.

        -

        Low quality and incomplete episodes

        -

        A third risk of downloading Game of Thrones Tamil dubbed movie in KuttyMovies is that you may not get good quality or complete episodes of the series. KuttyMovies is a piracy website that does not have the official rights or sources to provide Game of Thrones Tamil dubbed movie. Therefore, it may rely on low-quality recordings, camrips, or screen captures to upload the movie. Moreover, it may not have all the episodes or seasons of the series, or it may have missing or corrupted parts. You may end up wasting your time and bandwidth on a download that does not meet your expectations. Therefore, you should look for other alternatives to KuttyMovies for downloading Game of Thrones Tamil dubbed movie.

        -

        Alternatives to KuttyMovies for downloading Game of Thrones Tamil dubbed movie

        -

        Isaidub

        -

        Isaidub is another piracy website that offers free downloads of Tamil dubbed movies and TV shows. It has a large collection of Hollywood movies, Bollywood movies, South Indian movies, and web series in Tamil language. It also has a separate section for Game of Thrones Tamil dubbed movie, where you can find all the seasons and episodes of the series. However, Isaidub also has the same risks and challenges as KuttyMovies, such as legal issues, malware, and low quality. Therefore, you should use Isaidub at your own risk and discretion.

        -

        Tamilyogi

        -

        Tamilyogi is yet another piracy website that provides free downloads of Tamil movies and TV shows. It has a huge database of Tamil movies, ranging from old classics to new releases. It also has a category for Tamil dubbed movies, where you can find Game of Thrones Tamil dubbed movie along with other popular Hollywood movies and web series. However, Tamilyogi also suffers from the same problems as KuttyMovies and Isaidub, such as illegality, viruses, and poor quality. Therefore, you should be careful while using Tamilyogi for downloading Game of Thrones Tamil dubbed movie.

        -

        Oceanofmovies

        -

        Oceanofmovies is a different kind of website that does not host any movies or TV shows on its own servers. Instead, it provides links to other websites where you can download or stream movies and TV shows for free. It has a vast collection of movies and TV shows in various languages, genres, and qualities. It also has links to Game of Thrones Tamil dubbed movie from different sources and platforms. However, Oceanofmovies also has its own drawbacks, such as broken links, pop-up ads, and unreliable quality. Therefore, you should verify the links and sources before using Oceanofmovies for downloading Game of Thrones Tamil dubbed movie.

        -

        Conclusion

        -

        Game of Thrones is a phenomenal fantasy drama series that has captivated millions of viewers across the globe. However, if you want to watch Game of Thrones in Tamil, you may face some difficulties in finding the Tamil dubbed version of the series. KuttyMovies is one of the piracy websites that claims to offer Game of Thrones Tamil dubbed movie for free download. However, KuttyMovies is not a safe or legal option, as it involves many risks and challenges, such as legal issues, malware, and low quality. Therefore, we do not recommend using KuttyMovies for downloading Game of Thrones Tamil dubbed movie. Instead, we suggest you look for other alternatives, such as Isaidub, Tamilyogi, or Oceanofmovies, which may have better quality and availability of Game of Thrones Tamil dubbed movie. However, you should also be aware of the drawbacks and dangers of these websites, and use them at your own risk and discretion. The best way to watch Game of Thrones in Tamil is to subscribe to a legitimate streaming service that has the official rights and licenses to provide Game of Thrones in Tamil language. This way, you can enjoy watching Game of Thrones in Tamil without any worries or hassles.

        -

        FAQs

        -

        Here are some frequently asked questions about Game of Thrones Tamil dubbed movie download in KuttyMovies:

        -

        Q: Is it legal to download Game of Thrones Tamil dubbed movie from KuttyMovies?

        -

        A: No, it is not legal to download Game of Thrones Tamil dubbed movie from KuttyMovies. KuttyMovies is a piracy website that violates the copyright laws and infringes the intellectual property rights of the creators and owners of Game of Thrones. Downloading Game of Thrones Tamil dubbed movie from KuttyMovies can result in legal actions, such as fines, lawsuits, or even imprisonment.

        -

        Q: Is it safe to download Game of Thrones Tamil dubbed movie from KuttyMovies?

        -

        A: No, it is not safe to download Game of Thrones Tamil dubbed movie from KuttyMovies. KuttyMovies is an unsecured and unregulated website that may contain malware and viruses in its download links, ads, or redirects. Downloading Game of Thrones Tamil dubbed movie from KuttyMovies can expose your device and data to malware and viruses, which can cause various problems, such as slowing down your performance, stealing your personal information, deleting your files, or even damaging your hardware.

        -

        Q: How can I watch Game of Thrones in Tamil legally?

        -

        A: The best way to watch Game of Thrones in Tamil legally is to subscribe to a legitimate streaming service that has the official rights and licenses to provide Game of Thrones in Tamil language. Some examples of such streaming services are Hotstar, Amazon Prime Video, Netflix, etc. These streaming services offer high-quality and complete episodes of Game of Thrones in Tamil language with subtitles or audio options. They also have other features and benefits, such as offline viewing, multiple devices support, original content, etc.

        -

        Q: What are some other websites that offer Game of Thrones Tamil dubbed movie for free download?

        -

        A: Some other websites that offer Game of Thrones Tamil dubbed movie for free download are Isaidub, Tamilyogi, Oceanofmovies, etc. However, these websites are also piracy websites that have the same risks and challenges as KuttyMovies, such as legal issues, malware, and low quality. Therefore, you should be careful while using these websites for downloading Game of Thrones Tamil dubbed movie.

        -

        Q: How can I improve my English skills while watching Game of Thrones?

        -

        A: If you want to improve your English skills while watching Game of Thrones, you can try some of these tips:

        -
          -
        • Watch Game of Thrones with English subtitles or audio, and pay attention to the pronunciation, vocabulary, grammar, and expressions used by the characters.
        • -
        • Repeat the dialogues or phrases that you find interesting or useful, and try to imitate the accent and tone of the speakers.
        • -
        • Write down the words or sentences that you do not understand, and look them up in a dictionary or online.
        • -
        • Discuss the plot, characters, themes, and opinions about Game of Thrones with your friends or online communities who are also fans of the series.
        • -
        • Read the books or articles related to Game of Thrones, and compare them with the TV series.
        • -
        -

        By following these tips, you can enjoy watching Game of Thrones and also improve your English skills at the same time.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Project Playtime on PS4 Everything you need to know.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Project Playtime on PS4 Everything you need to know.md deleted file mode 100644 index 0c8518bfa3bcd8bc164adcf4cf0299f811990f3c..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Project Playtime on PS4 Everything you need to know.md +++ /dev/null @@ -1,99 +0,0 @@ -
        -

        Can You Download Project Playtime on PS4?

        -

        If you are a fan of horror games, you might have heard of Project Playtime, a multiplayer game where you have to survive a toy factory full of deadly surprises. But can you download Project Playtime on PS4, or is it only available on PC? In this article, we will answer this question and give you some tips on how to play Project Playtime on your console.

        -

        can you download project playtime on ps4


        Download File >>>>> https://gohhs.com/2uPuHs



        -

        What is Project Playtime?

        -

        Project Playtime is a free-to-play multiplayer horror game that was released in December 2022 on Steam. It is developed by Mob Entertainment, the studio behind the Poppy Playtime series.

        -

        A multiplayer horror game

        -

        In Project Playtime, six players have to work together to create one giant toy while avoiding a terrifying monster that roams the factory. The monster is controlled by a seventh player, who has only one goal: find and kill everyone. The game features different maps, characters, toys, and monsters, each with their own abilities and weaknesses.

        -

        A PC exclusive title

        -

        As of now, Project Playtime is only available on PC. The developers have not announced any plans to bring the game to other platforms, such as PS4 or PS5. According to their FAQ page, they are focusing on improving the PC version first before considering other options.

        -

        Why do people want Project Playtime on PS4?

        -

        Project Playtime has gained a lot of attention and praise from horror fans and streamers since its launch. It has over 29,000 positive reviews on Steam and millions of views on YouTube. But why do people want to play it on PS4?

        -

        The popularity of horror games on consoles

        -

        Horror games are very popular among console gamers, especially those who own a PS4 or PS5. Some of the most successful horror titles in recent years, such as Resident Evil 2, Outlast 2, Until Dawn, and The Evil Within 2, were released on these platforms. Playing horror games on a big screen with surround sound can enhance the immersion and scare factor.

        -

        The appeal of Project Playtime's gameplay and graphics

        -

        Another reason why people want Project Playtime on PS4 is because of its gameplay and graphics. The game offers a unique twist on the multiplayer horror genre, where teamwork and strategy are essential to survive. The game also has a colorful and cartoonish style that contrasts with the dark and creepy atmosphere. The game's trailer showcases some of the stunning visuals and animations that the game has to offer.

        -

        How to play Project Playtime on PS4?

        -

        So, can you download Project Playtime on PS4? Unfortunately, the answer is no. There is no official way to play the game on your console. However, there are some unofficial methods that you can try at your own risk.

        -

        The official answer: You can't

        -

        The developers of Project Playtime have stated that they have no plans to port the game to PS4 or PS5 anytime soon. They are focused on improving the PC version first and adding more content and features. They also said that they are not interested in making a fake trailer or gameplay video for consoles, as some fans have requested. Therefore, if you see any videos or websites claiming that you can download Project Playtime on PS4, they are most likely scams or hoaxes.

        -

        The unofficial answer: You can try streaming or remote play

        -

        If you really want to play Project Playtime on PS4, there are some unofficial methods that you can try at your own risk. These methods involve streaming or remote play, which let you access your PC games from your console. However, these methods are not guaranteed to work, and they may have some drawbacks, such as lag, low quality, or compatibility issues. Here are some of the options you can try:

        -
          -
        • Steam Link: Steam Link is a feature that lets you stream your Steam games from your PC to another device on the same network. Note that Valve does not offer an official Steam Link app for the PS4, so on a PlayStation console this option is generally not workable; if you have another supported device, you can launch Project Playtime on your PC and stream it there. Like the other options, it may not work for every game or region, and it requires a fast and stable internet connection.
        • -
        • PS Remote Play: PS Remote Play lets you play your PS4 or PS5 games from another device, such as your PC, after installing the PS Remote Play app and signing in with your PlayStation Network account. Be aware that it streams in the opposite direction (console to PC), so it cannot actually run PC games such as Project Playtime on your console; workarounds that rely on a console game with keyboard and mouse support, like Call of Duty: Modern Warfare, still only mirror your console on the PC. It is mentioned here only because some guides bring it up, and like the other methods it depends on a fast and stable internet connection.
        • -
        • Parsec: Parsec is a third-party app that lets you stream your PC games to another device. Parsec does not publish an official PS4 app either, so you would need a supported client device; after creating an account, you host Project Playtime on your PC and join the session from the client. As with the other methods, it may not work for every game or region, and it requires a fast and stable internet connection.
        • -
        -

        Conclusion

        -

        Project Playtime is a multiplayer horror game that is only available on PC. The developers have no plans to port the game to PS4 or PS5 anytime soon. If you want to play Project Playtime on your console, you can try some unofficial methods that involve streaming or remote play, but they are not guaranteed to work and they may have some drawbacks.

        -

        If you enjoyed this article, please share it with your friends and leave a comment below. Have you tried Project Playtime? What do you think of the game? Do you have any tips or tricks for playing it? Let us know!

        -

        FAQs

        -
          -
        • Q: How much does Project Playtime cost?
        • -
        • A: Project Playtime is free-to-play on Steam. You can download it from the game's Steam page.
        • -
        • Q: How many players can play Project Playtime?
        • -
        • A: Project Playtime supports up to seven players online. Six players have to work together as survivors, while one player controls the monster.
        • -
        • Q: What are the system requirements for Project Playtime?
        • -
        • A: According to the Steam page, the minimum system requirements for Project Playtime are:
        • -
        • - OS: Windows 7 64-bit
        • -
        • - Processor: Intel Core i3-4170 or AMD FX-8120
        • -
        • - Memory: 8 GB RAM
        • -
        • - Graphics: NVIDIA GeForce GTX 660 or AMD Radeon HD 7870
        • -
        • - DirectX: Version 11
        • -
        • - Network: Broadband Internet connection
        • -
        • - Storage: 10 GB available space
        • -
        • Q: Is Project Playtime scary?
        • -
        • A: Project Playtime is a horror game that involves jump scares, gore, violence, and dark themes. It is not suitable for children or people who are easily frightened or disturbed.
        • -
        • Q: Is Project Playtime based on a true story?
        • -
        • A: No, Project Playtime is a fictional game that is not based on any real events or people.
        • -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/cookie-signature/History.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/cookie-signature/History.md deleted file mode 100644 index 78513cc3d28ce3516c93b4d425f83df247486ae5..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/cookie-signature/History.md +++ /dev/null @@ -1,38 +0,0 @@ -1.0.6 / 2015-02-03 -================== - -* use `npm test` instead of `make test` to run tests -* clearer assertion messages when checking input - - -1.0.5 / 2014-09-05 -================== - -* add license to package.json - -1.0.4 / 2014-06-25 -================== - - * corrected avoidance of timing attacks (thanks @tenbits!) - -1.0.3 / 2014-01-28 -================== - - * [incorrect] fix for timing attacks - -1.0.2 / 2014-01-28 -================== - - * fix missing repository warning - * fix typo in test - -1.0.1 / 2013-04-15 -================== - - * Revert "Changed underlying HMAC algo. to sha512." - * Revert "Fix for timing attacks on MAC verification." - -0.0.1 / 2010-01-03 -================== - - * Initial release diff --git a/spaces/firsk/ai_otto/train_ms.py b/spaces/firsk/ai_otto/train_ms.py deleted file mode 100644 index 1f1708d8ef1f4e820b608234a60744a200a644cd..0000000000000000000000000000000000000000 --- a/spaces/firsk/ai_otto/train_ms.py +++ /dev/null @@ -1,594 +0,0 @@ -# flake8: noqa: E402 - -import os -import torch -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler, -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import generator_loss, discriminator_loss, feature_loss, kl_loss -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = ( - True # If encontered training problem,please try to disable TF32. -) -torch.set_float32_matmul_precision("medium") -torch.backends.cudnn.benchmark = True -torch.backends.cuda.sdp_kernel("flash") -torch.backends.cuda.enable_flash_sdp(True) -torch.backends.cuda.enable_mem_efficient_sdp( - True -) # Not available if torch version is lower than 2.0 -torch.backends.cuda.enable_math_sdp(True) -global_step = 0 - - -def run(): - dist.init_process_group( - backend="gloo", - init_method="env://", # Due to some training problem,we proposed to use gloo instead of nccl. 
- ) # Use torchrun instead of mp.spawn - rank = dist.get_rank() - n_gpus = dist.get_world_size() - hps = utils.get_hparams() - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True, - ) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader( - train_dataset, - num_workers=16, - shuffle=False, - pin_memory=True, - collate_fn=collate_fn, - batch_sampler=train_sampler, - persistent_workers=True, - prefetch_factor=4, - ) # DataLoader config could be adjusted. - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader( - eval_dataset, - num_workers=0, - shuffle=False, - batch_size=1, - pin_memory=True, - drop_last=False, - collate_fn=collate_fn, - ) - if ( - "use_noise_scaled_mas" in hps.model.keys() - and hps.model.use_noise_scaled_mas is True - ): - print("Using noise scaled MAS for VITS2") - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if ( - "use_duration_discriminator" in hps.model.keys() - and hps.model.use_duration_discriminator is True - ): - print("Using duration discriminator for VITS2") - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if ( - "use_spk_conditioned_encoder" in hps.model.keys() - and hps.model.use_spk_conditioned_encoder is True - ): - if hps.data.n_speakers == 0: - raise ValueError( - "n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model" - ) - else: - print("Using normal encoder for VITS1") - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial=mas_noise_scale_initial, - noise_scale_delta=noise_scale_delta, - **hps.model, - ).cuda(rank) - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - try: - if net_dur_disc is not None: - _, _, dur_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), - net_dur_disc, - optim_dur_disc, - 
skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - _, optim_g, g_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), - net_g, - optim_g, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - _, optim_d, d_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), - net_d, - optim_d, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - if not optim_g.param_groups[0].get("initial_lr"): - optim_g.param_groups[0]["initial_lr"] = g_resume_lr - if not optim_d.param_groups[0].get("initial_lr"): - optim_d.param_groups[0]["initial_lr"] = d_resume_lr - if not optim_dur_disc.param_groups[0].get("initial_lr"): - optim_dur_disc.param_groups[0]["initial_lr"] = dur_resume_lr - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - if net_dur_disc is not None: - if not optim_dur_disc.param_groups[0].get("initial_lr"): - optim_dur_disc.param_groups[0]["initial_lr"] = dur_resume_lr - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR( - optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d, net_dur_disc], - [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], - scaler, - [train_loader, eval_loader], - logger, - [writer, writer_eval], - ) - else: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d, net_dur_disc], - [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], - scaler, - [train_loader, None], - None, - None, - ) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate( - rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers -): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, ( - x, - x_lengths, - spec, - spec_lengths, - y, - y_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = ( - net_g.module.mas_noise_scale_initial - - net_g.module.noise_scale_delta * global_step - ) - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda( - rank, non_blocking=True - ) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda( - rank, non_blocking=True - ) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda( - rank, non_blocking=True - ) - speakers = 
speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - ja_bert = ja_bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - ( - y_hat, - l_length, - attn, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - (hidden_x, logw, logw_), - ) = net_g( - x, - x_lengths, - spec, - spec_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length - ) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - - y = commons.slice_segments( - y, ids_slice * hps.data.hop_length, hps.train.segment_size - ) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g - ) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc( - hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach() - ) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - ( - loss_dur_disc, - losses_dur_disc_r, - losses_dur_disc_g, - ) = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]["lr"] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info( - "Train Epoch: {} [{:.0f}%]".format( - epoch, 100.0 * batch_idx / len(train_loader) - ) - ) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc_all, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g, - } - 
scalar_dict.update( - { - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/dur": loss_dur, - "loss/g/kl": loss_kl, - } - ) - scalar_dict.update( - {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)} - ) - scalar_dict.update( - {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)} - ) - scalar_dict.update( - {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)} - ) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy() - ), - "slice/mel_gen": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy() - ), - "all/mel": utils.plot_spectrogram_to_numpy( - mel[0].data.cpu().numpy() - ), - "all/attn": utils.plot_alignment_to_numpy( - attn[0, 0].data.cpu().numpy() - ), - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict, - ) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step)), - ) - if net_dur_disc is not None: - utils.save_checkpoint( - net_dur_disc, - optim_dur_disc, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)), - ) - keep_ckpts = getattr(hps.train, "keep_ckpts", 5) - if keep_ckpts > 0: - utils.clean_checkpoints( - path_to_models=hps.model_dir, - n_ckpts_to_keep=keep_ckpts, - sort_by_time=True, - ) - - global_step += 1 - - if rank == 0: - logger.info("====> Epoch: {}".format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, ( - x, - x_lengths, - spec, - spec_lengths, - y, - y_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - ja_bert = ja_bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer( - x, - x_lengths, - speakers, - tone, - language, - bert, - ja_bert, - y=spec, - max_len=1000, - sdp_ratio=0.0 if not use_sdp else 1.0, - ) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - image_dict.update( - { - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].cpu().numpy() - ) - } - ) - audio_dict.update( - { - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[ - 0, :, : y_hat_lengths[0] - ] - } - ) - image_dict.update( - { - f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy( - mel[0].cpu().numpy() - ) - } - ) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, : y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - 
audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate, - ) - generator.train() - - -if __name__ == "__main__": - run() diff --git a/spaces/flatindo/scaler/realesrgan/data/realesrgan_paired_dataset.py b/spaces/flatindo/scaler/realesrgan/data/realesrgan_paired_dataset.py deleted file mode 100644 index 386c8d72496245dae8df033c2ebbd76b41ff45f1..0000000000000000000000000000000000000000 --- a/spaces/flatindo/scaler/realesrgan/data/realesrgan_paired_dataset.py +++ /dev/null @@ -1,108 +0,0 @@ -import os -from basicsr.data.data_util import paired_paths_from_folder, paired_paths_from_lmdb -from basicsr.data.transforms import augment, paired_random_crop -from basicsr.utils import FileClient, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data -from torchvision.transforms.functional import normalize - - -@DATASET_REGISTRY.register() -class RealESRGANPairedDataset(data.Dataset): - """Paired image dataset for image restoration. - - Read LQ (Low Quality, e.g. LR (Low Resolution), blurry, noisy, etc) and GT image pairs. - - There are three modes: - 1. 'lmdb': Use lmdb files. - If opt['io_backend'] == lmdb. - 2. 'meta_info': Use meta information file to generate paths. - If opt['io_backend'] != lmdb and opt['meta_info'] is not None. - 3. 'folder': Scan folders to generate paths. - The rest. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - filename_tmpl (str): Template for each filename. Note that the template excludes the file extension. - Default: '{}'. - gt_size (int): Cropped patched size for gt patches. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h - and w for implementation). - - scale (bool): Scale, which will be added automatically. - phase (str): 'train' or 'val'. - """ - - def __init__(self, opt): - super(RealESRGANPairedDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt['io_backend'] - # mean and std for normalizing the input images - self.mean = opt['mean'] if 'mean' in opt else None - self.std = opt['std'] if 'std' in opt else None - - self.gt_folder, self.lq_folder = opt['dataroot_gt'], opt['dataroot_lq'] - self.filename_tmpl = opt['filename_tmpl'] if 'filename_tmpl' in opt else '{}' - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = [self.lq_folder, self.gt_folder] - self.io_backend_opt['client_keys'] = ['lq', 'gt'] - self.paths = paired_paths_from_lmdb([self.lq_folder, self.gt_folder], ['lq', 'gt']) - elif 'meta_info' in self.opt and self.opt['meta_info'] is not None: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt['meta_info']) as fin: - paths = [line.strip() for line in fin] - self.paths = [] - for path in paths: - gt_path, lq_path = path.split(', ') - gt_path = os.path.join(self.gt_folder, gt_path) - lq_path = os.path.join(self.lq_folder, lq_path) - self.paths.append(dict([('gt_path', gt_path), ('lq_path', lq_path)])) - else: - # disk backend - # it will scan the whole folder to get meta info - # it will be time-consuming for folders with too many files. 
It is recommended using an extra meta txt file - self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl) - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - scale = self.opt['scale'] - - # Load gt and lq images. Dimension order: HWC; channel order: BGR; - # image range: [0, 1], float32. - gt_path = self.paths[index]['gt_path'] - img_bytes = self.file_client.get(gt_path, 'gt') - img_gt = imfrombytes(img_bytes, float32=True) - lq_path = self.paths[index]['lq_path'] - img_bytes = self.file_client.get(lq_path, 'lq') - img_lq = imfrombytes(img_bytes, float32=True) - - # augmentation for training - if self.opt['phase'] == 'train': - gt_size = self.opt['gt_size'] - # random crop - img_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_size, scale, gt_path) - # flip, rotation - img_gt, img_lq = augment([img_gt, img_lq], self.opt['use_hflip'], self.opt['use_rot']) - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True) - # normalize - if self.mean is not None or self.std is not None: - normalize(img_lq, self.mean, self.std, inplace=True) - normalize(img_gt, self.mean, self.std, inplace=True) - - return {'lq': img_lq, 'gt': img_gt, 'lq_path': lq_path, 'gt_path': gt_path} - - def __len__(self): - return len(self.paths) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py deleted file mode 100644 index a0986143fa4f2bd36f5271354fe5f843f35b9e6f..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py +++ /dev/null @@ -1,51 +0,0 @@ -from annotator.uniformer.mmcv.cnn import DepthwiseSeparableConvModule - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class DepthwiseSeparableFCNHead(FCNHead): - """Depthwise-Separable Fully Convolutional Network for Semantic - Segmentation. - - This head is implemented according to Fast-SCNN paper. - Args: - in_channels(int): Number of output channels of FFM. - channels(int): Number of middle-stage channels in the decode head. - concat_input(bool): Whether to concatenate original decode input into - the result of several consecutive convolution layers. - Default: True. - num_classes(int): Used to determine the dimension of - final prediction tensor. - in_index(int): Correspond with 'out_indices' in FastSCNN backbone. - norm_cfg (dict | None): Config of norm layers. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - loss_decode(dict): Config of loss type and some - relevant additional options. 
- """ - - def __init__(self, **kwargs): - super(DepthwiseSeparableFCNHead, self).__init__(**kwargs) - self.convs[0] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - for i in range(1, self.num_convs): - self.convs[i] = DepthwiseSeparableConvModule( - self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - - if self.concat_input: - self.conv_cat = DepthwiseSeparableConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) diff --git a/spaces/glt3953/app-audio_video_transcribe/app.py b/spaces/glt3953/app-audio_video_transcribe/app.py deleted file mode 100644 index 009d48a2e0b6d778f730bcd43cab631ed9bbf400..0000000000000000000000000000000000000000 --- a/spaces/glt3953/app-audio_video_transcribe/app.py +++ /dev/null @@ -1,151 +0,0 @@ -import whisper -import gradio as gr -import os -import datetime - -#获取当前北京时间 -utc_dt = datetime.datetime.utcnow() -beijing_dt = utc_dt.astimezone(datetime.timezone(datetime.timedelta(hours=10))) -formatted = beijing_dt.strftime("%Y-%m-%d_%H") -print(f"北京时间: {beijing_dt.year}年{beijing_dt.month}月{beijing_dt.day}日 " - f"{beijing_dt.hour}时{beijing_dt.minute}分{beijing_dt.second}秒") -#创建作品存放目录 -works_path = '../works_audio_video_transcribe/' + formatted -if not os.path.exists(works_path): - os.makedirs(works_path) -print('作品目录:' + works_path) - -#model_size = "small" -#model = whisper.load_model(model_size) #tiny、base、small、medium(可用)、large - -def transcript(model_size, audiofile, prompt, output_dir): - os.system(f"whisper {audiofile} --model {model_size} --language zh --initial_prompt {prompt} --output_dir {output_dir}") - -def audio_recog(model_size, audiofile): - utc_dt = datetime.datetime.utcnow() - beijing_dt = utc_dt.astimezone(datetime.timezone(datetime.timedelta(hours=10))) - formatted = beijing_dt.strftime("%Y-%m-%d_%H-%M-%S") - print(f"开始时间: {beijing_dt.year}年{beijing_dt.month}月{beijing_dt.day}日 " - f"{beijing_dt.hour}时{beijing_dt.minute}分{beijing_dt.second}秒") - - print("音频文件:" + audiofile) - - prompt = "以下是普通话的句子" - filename = os.path.splitext(os.path.basename(audiofile))[0] - text_file = works_path + '/' + filename + '.txt' - srt_file = works_path + '/' + filename + '.srt' - - output_dir = works_path - transcript(model_size, audiofile, prompt, output_dir) - with open(text_file, "r") as f: - text_output = f.read() - print("text:" + text_output) - print("text文件:" + text_file) - - with open(srt_file, "r") as f: - srt_output = f.read() - print("srt:" + srt_output) - print("srt文件:" + srt_file) - - utc_dt = datetime.datetime.utcnow() - beijing_dt = utc_dt.astimezone(datetime.timezone(datetime.timedelta(hours=10))) - formatted = beijing_dt.strftime("%Y-%m-%d_%H-%M-%S") - print(f"结束时间: {beijing_dt.year}年{beijing_dt.month}月{beijing_dt.day}日 " - f"{beijing_dt.hour}时{beijing_dt.minute}分{beijing_dt.second}秒") - - return text_output, text_file, srt_output, srt_file - -def video_recog(model_size, filepath): - filename = os.path.splitext(os.path.basename(filepath))[0] - worksfile = works_path + '/works_' + filename + '.mp4' - print("视频文件:" + filepath) - - utc_dt = datetime.datetime.utcnow() - beijing_dt = utc_dt.astimezone(datetime.timezone(datetime.timedelta(hours=10))) - formatted = beijing_dt.strftime("%Y-%m-%d_%H-%M-%S.%f") - - # 提取音频为mp3 - audiofile = works_path + '/' + formatted + '.mp3' - 
os.system(f"ffmpeg -i {filepath} -vn -c:a libmp3lame -q:a 4 {audiofile}") - - #识别音频文件 - text_output, text_file, srt_output, srt_file = audio_recog(model_size, audiofile) - -# # 给视频添加字幕 -# os.system(f"ffmpeg -i {filepath} -i {srt_file} -c:s mov_text -c:v copy -c:a copy {worksfile}") -# print("作品:" + worksfile) - - return text_output, text_file, srt_output, srt_file - -css_style = "#fixed_size_img {height: 240px;} " \ - "#overview {margin: auto;max-width: 400px; max-height: 400px;}" - -title = "音视频转录 by宁侠" -description = "您只需要上传一段音频或视频文件,我们的服务会快速对其进行语音识别,然后生成相应的文字和字幕。这样,您就可以轻松地记录下重要的语音内容,或者为视频添加精准的字幕。现在就来试试我们的音视频转录服务吧,让您的生活和工作更加便捷!" - -examples_path = 'examples/' -examples = [[examples_path + 'demo_shejipuhui.mp4']] - -# gradio interface -with gr.Blocks(title=title, css=css_style) as demo: - gr.HTML(''' -
        -
        -

        - 音视频转录 -

        -

        - by宁侠 -

        - ''') - gr.Markdown(description) - - with gr.Tab("🔊音频转录 Audio Transcribe"): - with gr.Row(): - with gr.Column(): - audio_input = gr.Audio(label="🔊音频输入 Audio Input", type="filepath") - gr.Examples(['examples/paddlespeech.asr-zh.wav', 'examples/demo_shejipuhui.mp3'], [audio_input]) - audio_model_size = gr.components.Radio(label="模型尺寸", choices=["tiny", "base", "small", "medium", "large"], value="small") - audio_recog_button = gr.Button("👂音频识别 Recognize") - with gr.Column(): - audio_text_output = gr.Textbox(label="✏️识别结果 Recognition Result", max_lines=5) - audio_text_file = gr.File(label="✏️识别结果文件 Recognition Result File") - audio_srt_output = gr.Textbox(label="📖SRT字幕内容 SRT Subtitles", max_lines=10) - audio_srt_file = gr.File(label="📖SRT字幕文件 SRT File") - audio_subtitles_button = gr.Button("添加字幕\nGenerate Subtitles", visible=False) - audio_output = gr.Audio(label="🔊音频 Audio", visible=False) - - audio_recog_button.click(audio_recog, inputs=[audio_model_size, audio_input], outputs=[audio_text_output, audio_text_file, audio_srt_output, audio_srt_file]) -# audio_subtitles_button.click(audio_subtitles, inputs=[audio_text_input], outputs=[audio_output]) - - with gr.Tab("🎥视频转录 Video Transcribe"): - with gr.Row(): - with gr.Column(): - video_input = gr.Video(label="🎥视频输入 Video Input") - gr.Examples(['examples/demo_shejipuhui.mp4'], [video_input], label='语音识别示例 ASR Demo') - video_model_size = gr.components.Radio(label="模型尺寸", choices=["tiny", "base", "small", "medium", "large"], value="small") - video_recog_button = gr.Button("👂视频识别 Recognize") - video_output = gr.Video(label="🎥视频 Video", visible=False) - with gr.Column(): - video_text_output = gr.Textbox(label="✏️识别结果 Recognition Result", max_lines=5) - video_text_file = gr.File(label="✏️识别结果文件 Recognition Result File") - video_srt_output = gr.Textbox(label="📖SRT字幕内容 SRT Subtitles", max_lines=10) - video_srt_file = gr.File(label="📖SRT字幕文件 SRT File") - with gr.Row(visible=False): - font_size = gr.Slider(minimum=10, maximum=100, value=32, step=2, label="🔠字幕字体大小 Subtitle Font Size") - font_color = gr.Radio(["black", "white", "green", "red"], label="🌈字幕颜色 Subtitle Color", value='white') - video_subtitles_button = gr.Button("添加字幕\nGenerate Subtitles", visible=False) - - - video_recog_button.click(video_recog, inputs=[video_model_size, video_input], outputs=[video_text_output, video_text_file, video_srt_output, video_srt_file]) -# video_subtitles_button.click(video_subtitles, inputs=[video_text_input], outputs=[video_output]) - -# start gradio service in local -demo.queue(api_open=False).launch(debug=True) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Maya Serial Number and Product Key 2016 Step by Step Instructions for Beginners and Experts.md b/spaces/gotiQspiryo/whisper-ui/examples/Maya Serial Number and Product Key 2016 Step by Step Instructions for Beginners and Experts.md deleted file mode 100644 index 0e2204e5cbd52a614ee4981303713325d33a2f4d..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Maya Serial Number and Product Key 2016 Step by Step Instructions for Beginners and Experts.md +++ /dev/null @@ -1,23 +0,0 @@ -
        -

        Ok, here's my situation. I'm a college student, and a few semesters ago I had to download and install Maya 2015 to use it for a class. For next semester, I now have to download and install Maya 2016. So I log into the education community site, and it offers me the ability to download Maya 2014, 2015, 2016, and 2017. When I click on the 2016 version, I am given a serial number and product key. The serial number given is the exact same serial number I was given for Maya 2015. I was also sent an e-mail giving me the license details for Maya 2016 (again with the identical serial number).

        -

        After I download and install Maya 2016, it says it can't activate because the serial number is wrong. I tried to get an activation code from Autodesk, but the automated system tells me I am providing the incorrect request code (I assure you, I am typing in the right number). Here's a screenshot of the activation screen with a request code showing that I am trying to activate the 2016 version:

        -

        maya serial number and product key 2016


        Download File 🆗 https://urlgoal.com/2uyLsT



        -

        So obviously customer service thinks I am trying to activate Maya 2015, not 2016. I suspect because the serial numbers are exactly the same. The product numbers are different though. Anyway, I respond to the customer service e-mail and explain everything, and then I just get a message saying my ticket has been closed.

        -

        Thank you for your post! Sorry to hear you are having issues activating Maya 2016. You can use the same serial number for all the previous versions available to subscription users, so Maya 2014-2017.

        -

        If I'm supposed to be able to use the same serial number, then why when I try to activate Maya does it say that I have the wrong serial number? See below. I'm using the serial number provided to me by Autodesk in an e-mail. How can that be wrong?

        -

        I then responded to that message explaining that I am not trying to activate Maya 2015 (even though I already explained that in the original ticket); I am trying to activate Maya 2016. I forwarded the licensing e-mail from Autodesk and provided a screenshot showing that the serial number and request code are the correct numbers for Maya 2016, not 2015. Then I got an e-mail saying my support ticket was closed. So going through this page provided no resolution. My request was ignored. I could create another ticket, but I'd just get the same response.

        -

        Product keys are required for installation of Autodesk products and are used to differentiate products that are sold both independently and as part of a product suite. With the newest release of the Autodesk 2016 products, we bring you a new list of product keys.

        Note: Please ensure you are using the correct product key for the Autodesk product and version you are installing. Entering an incorrect product key will result in activation errors for that product.

        -

        Note: For single-user subscriptions, you can usually sign in so that a serial number is not required. You may see a Stand-alone license type for 2017-2019 products, but a User License type for 2020 and later product versions.

        -

        Autodesk 2016 All Products Crack Final activation keys for Autodesk 2016 x86x64. Using this activator will allow you to activate the full version of Autodesk products using the keygen to generate a working serial number by pasting request code from an Autodesk software to the keygen and getting the activation code. It also has a Patch button to patch Autodesk 2016 programs for permanent activation and supports both Autodesk 32 bit and 64 bit

        -

        Find Serial Numbers and Product Keys in Autodesk Account: Your Serial Number and Product Key are displayed in your Autodesk Account in the product tray on the Products & Services page and also again in the Software Download window. Note about serial number visibility in Autodesk Account: Only account administrators, such as Contract Managers and Software Coordinators, and Named Users with assigned software benefits will see serial numbers in Autodesk Account. You are the account administrator if you purchased a software subscription using your Autodesk Account or were assigned the role of Contract Manager or Software Coordinator by your company. If you do not see the software you wish to activate in your Autodesk account or see the message "Contact your admin for serial numbers," you need to contact the contract administrator. Only an administrator can assign you as a Named User or End User and give you permissions to download and activate the software.

        -

        -

        If, for whatever reason, you cannot locate your product key, there is another method:
        1. Using your installation media (USB key, DVD, download folder, etc.), navigate to the location of the setup.exe file for your Autodesk product.
        2. In that folder, look for a file named MID.txt, MID01.txt, MID02.txt or some variation on that name.
        3. Open this file in notepad and verify that the product name is what you expected it to be.
        4. The first five characters of the part number should also be the product key for that product.
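        If you would rather not dig through the media by hand, steps 2-4 are easy to script. The snippet below is a minimal Python sketch, assuming a hypothetical media path of D:\Autodesk\Maya2016 (change it to whatever folder actually holds your setup.exe). It only locates and prints each MID*.txt so you can read the part number yourself; it deliberately does not try to parse the file, since the exact layout can differ between products.

```python
from pathlib import Path

# Assumption: this is the folder that contains setup.exe for your Autodesk product.
MEDIA_ROOT = Path(r"D:\Autodesk\Maya2016")


def list_mid_files(root: Path) -> None:
    """Find and print every MID*.txt on the installation media (steps 2-3 above)."""
    matches = sorted(root.rglob("MID*.txt"))
    if not matches:
        print(f"No MID*.txt files found under {root}")
        return
    for mid_file in matches:
        print(f"--- {mid_file} ---")
        # The files are small plain text; ignore odd encodings rather than crash.
        print(mid_file.read_text(errors="ignore").strip())
        print("(Step 4: the first five characters of the part number above are the product key.)")


if __name__ == "__main__":
    list_mid_files(MEDIA_ROOT)
```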

        -

        Second, we believe this is the case especially considering the slower growth of oil and natural gas sources on the U&O Reservation over the past two and a half years since August 2016 when the National O&NG FIP became effective. Since that time, we have seen limited construction of new and modified oil and natural gas sources on the U&O Reservation. Oil and natural gas sources planning to construct on or after October 3, 2016 have been required to either comply with the National O&NG FIP or to seek a minor source permit under the generally applicable (site-specific) permit provisions of the Federal Indian Country Minor NSR rule.17 Sources complying with the National O&NG FIP are required to meet a two-part registration requirement: The Part 1 Registration Form is submitted 30 days before a source begins construction and contains information about source location and the Part 2 Registration Form is submitted within 60 days after the startup of production and contains information about emissions.18

        -

        Comment #7: One oil and natural gas industry commenter expressed that the industry's objective is that final regulations protect the environment and the public and cost-effectively address VOC emissions that as a co-benefit also reduce methane emissions, without unnecessarily hampering manufacturing and business expansion. According to the commenter, this objective can be met while the private sector develops and delivers more natural gas and oil to its customers. According to the oil and natural gas industry commenter, their efforts are producing real results based on the EPA's latest Greenhouse Gas Inventory which continues to show a downward trend in methane emissions, even as U.S. oil and natural gas production rose dramatically. The commenter reported that the inventory report indicates that methane emissions from natural gas systems and petroleum systems increased 14 percent between 1990 and 2016, at a time when the natural gas output increased by more than 50 percent. This is in addition to the U.S. continuing to lead the world in reducing carbon emissions, which are at 25-year lows, largely due to the increased use of natural gas.

        -

        This action does not impose any new information collection burden under the PRA. OMB has previously approved the information collection activities contained in the Federal Indian Country Minor NSR rule and has assigned OMB control number 2060-0003.35 This action amends the National O&NG FIP, which provides a mechanism for authorizing construction for true minor sources in the oil and natural gas production and natural gas processing segments of the oil and natural gas sector locating or located in areas covered by the Federal Indian Country Minor NSR rule to satisfy the requirements of that rule other than by obtaining a site-specific minor source permit. Because it substitutes for a site-specific permit, which would contain information collection activities covered by the Information Collection Request for Federal Indian Country Minor NSR rule issued in July 2011, neither the proposed amendments, nor the National O&NG FIP, impose any new obligations or enforceable duties on any state, local or tribal government or the private sector. In fact, the final amendments should have the effect of reducing paperwork burden on sources wishing to locate or expand in the Indian country portion of the Uinta Basin Ozone Nonattainment Area, as the amendments provide an alternative to site-specific permitting for such sources.

        -

        Based on the calculations below, the total estimated number of respondents (WOSBs and EDWOSBs) for this collection of information varies depending upon the types of certification that a business concern is seeking. For initial certification, the total estimated number of respondents is 9,349. The total number was calculated using the two-year average number of business concerns that have provided information through Certify from March 2016 through February 2018. For annual updates, the total number is 12,347. For examinations and protests, the total number is 130.

        -

        We propose to adopt a new airworthiness directive (AD) for all Bombardier, Inc., Model CL-600-2B16 (601-3A, 601-3R, and 604 Variants) airplanes. This proposed AD was prompted by a report that main landing gear (MLG) side stay actuators have been assembled using nonconforming split ball bearings. This proposed AD would require verification of the serial numbers of the installed MLG side stay actuator assemblies, and replacement of the affected parts. We are proposing this AD to address the unsafe condition on these products.

        -

        The service information describes procedures to verify the serial numbers of the installed MLG side stay actuator assemblies and to replace the affected parts. These documents are distinct since they apply to the airplane model in different configurations.

        -

        The applicability of the MCAI is limited to Bombardier, Inc., Model CL-600-2B16 (601-3A, 601-3R, and 604 Variants) airplanes, serial numbers 5301 through 5665 inclusive, 5701 through 5988 inclusive, and 6050 through 6091 inclusive, equipped with MLG side stay actuator assembly containing split ball bearing part number 104467672. However, the applicability of this proposed AD includes all Bombardier, Inc., Model CL-600-2B16 (601-3A, 601-3R, and 604 Variants) airplanes and prohibits the installation of any MLG side stay actuator with a serial number identified in the service information. Because the affected part is a rotable part, we have determined that this part could later be installed on airplanes that were initially delivered with the acceptable part, thereby subjecting those airplanes to the unsafe condition. We have coordinated this difference with TCCA.

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/data/audio/speech_to_text_dataset.py b/spaces/gradio/HuBERT/fairseq/data/audio/speech_to_text_dataset.py deleted file mode 100644 index d4b5668d8f9d4bc93fcbda73d867554d8f1b3107..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/audio/speech_to_text_dataset.py +++ /dev/null @@ -1,511 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import csv -import io -import logging -import os.path as op -import re -from typing import Dict, List, Optional, Tuple - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - Dictionary, - FairseqDataset, - ResamplingDataset, - data_utils as fairseq_data_utils, -) -from fairseq.data.audio.audio_utils import ( - get_fbank, get_waveform, read_from_stored_zip, is_npy_data, - is_sf_audio_data, parse_path, FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS -) -from fairseq.data.audio.feature_transforms import CompositeAudioFeatureTransform - - -logger = logging.getLogger(__name__) - - -class S2TDataConfig(object): - """Wrapper class for data config YAML""" - - def __init__(self, yaml_path): - try: - import yaml - except ImportError: - print("Please install PyYAML to load YAML files for " "S2T data config") - self.config = {} - if op.isfile(yaml_path): - try: - with open(yaml_path) as f: - self.config = yaml.load(f, Loader=yaml.FullLoader) - except Exception as e: - raise Exception(f"Failed to load config from {yaml_path}: {e}") - else: - raise FileNotFoundError(f"{yaml_path} not found") - - @property - def vocab_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("vocab_filename", "dict.txt") - - @property - def shuffle(self) -> bool: - """Shuffle dataset samples before batching""" - return self.config.get("shuffle", False) - - @property - def pre_tokenizer(self) -> Dict: - """Pre-tokenizer to apply before subword tokenization. Returning - a dictionary with `tokenizer` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - return self.config.get("pre_tokenizer", {"tokenizer": None}) - - @property - def bpe_tokenizer(self) -> Dict: - """Subword tokenizer to apply after pre-tokenization. Returning - a dictionary with `bpe` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - return self.config.get("bpe_tokenizer", {"bpe": None}) - - @property - def prepend_tgt_lang_tag(self) -> bool: - """Prepend target lang ID token as the target BOS (e.g. for to-many - multilingual setting). During inference, this requires `--prefix-size 1` - to force BOS to be lang ID token.""" - return self.config.get("prepend_tgt_lang_tag", False) - - @property - def input_feat_per_channel(self): - """The dimension of input features (per audio channel)""" - return self.config.get("input_feat_per_channel", 80) - - @property - def input_channels(self): - """The number of channels in the input audio""" - return self.config.get("input_channels", 1) - - @property - def sampling_alpha(self): - """Hyper-parameter alpha = 1/T for temperature-based resampling. 
- (alpha = 1 for no resampling)""" - return self.config.get("sampling_alpha", 1.0) - - @property - def use_audio_input(self): - """Needed by the dataset loader to see if the model requires - raw audio as inputs.""" - return self.config.get("use_audio_input", False) - - @property - def audio_root(self): - """Audio paths in the manifest TSV can be relative and this provides - the root path. Set this to empty string when using absolute paths.""" - return self.config.get("audio_root", "") - - def get_feature_transforms(self, split, is_train): - """Split-specific feature transforms. Allowing train set wildcard `_train`, - evaluation set wildcard `_eval` and general wildcard `*` for matching.""" - from copy import deepcopy - - cfg = deepcopy(self.config) - _cur = cfg.get("transforms", {}) - cur = _cur.get(split) - cur = _cur.get("_train") if cur is None and is_train else cur - cur = _cur.get("_eval") if cur is None and not is_train else cur - cur = _cur.get("*") if cur is None else cur - cfg["transforms"] = cur - return cfg - - -def get_features_from_npy_or_audio(path): - ext = op.splitext(op.basename(path))[1] - if ext not in FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS: - raise ValueError(f'Unsupported file format for "{path}"') - return np.load(path) if ext == ".npy" else get_fbank(path) - - -def get_features_or_waveform_from_stored_zip( - path, byte_offset, byte_size, need_waveform=False -): - assert path.endswith(".zip") - data = read_from_stored_zip(path, byte_offset, byte_size) - f = io.BytesIO(data) - if is_npy_data(data): - features_or_waveform = np.load(f) - elif is_sf_audio_data(data): - features_or_waveform = \ - get_waveform(f, always_2d=False)[0] if need_waveform else get_fbank(f) - else: - raise ValueError(f'Unknown file format for "{path}"') - return features_or_waveform - - -def get_features_or_waveform(path: str, need_waveform=False): - """Get speech features from .npy file or waveform from .wav/.flac file. - The file may be inside an uncompressed ZIP file and is accessed via byte - offset and length. - - Args: - path (str): File path in the format of "<.npy/.wav/.flac path>" or - "::". - need_waveform (bool): return waveform instead of features. - - Returns: - features_or_waveform (numpy.ndarray): speech features or waveform. - """ - _path, slice_ptr = parse_path(path) - if len(slice_ptr) == 0: - if need_waveform: - return get_waveform(_path, always_2d=False) - return get_features_from_npy_or_audio(_path) - elif len(slice_ptr) == 2: - features_or_waveform = get_features_or_waveform_from_stored_zip( - _path, slice_ptr[0], slice_ptr[1], need_waveform=need_waveform - ) - else: - raise ValueError(f"Invalid path: {path}") - - return features_or_waveform - - -def _collate_frames( - frames: List[torch.Tensor], is_audio_input: bool = False -) -> torch.Tensor: - """ - Convert a list of 2D frames into a padded 3D tensor - Args: - frames (list): list of 2D frames of size L[i]*f_dim. 
Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3D tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - max_len = max(frame.size(0) for frame in frames) - if is_audio_input: - out = frames[0].new_zeros((len(frames), max_len)) - else: - out = frames[0].new_zeros((len(frames), max_len, frames[0].size(1))) - for i, v in enumerate(frames): - out[i, : v.size(0)] = v - return out - - -class SpeechToTextDataset(FairseqDataset): - LANG_TAG_TEMPLATE = "" - - def __init__( - self, - split: str, - is_train_split: bool, - data_cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - ): - self.split, self.is_train_split = split, is_train_split - self.data_cfg = data_cfg - self.audio_paths, self.n_frames = audio_paths, n_frames - self.n_samples = len(audio_paths) - assert len(n_frames) == self.n_samples > 0 - assert src_texts is None or len(src_texts) == self.n_samples - assert tgt_texts is None or len(tgt_texts) == self.n_samples - assert speakers is None or len(speakers) == self.n_samples - assert src_langs is None or len(src_langs) == self.n_samples - assert tgt_langs is None or len(tgt_langs) == self.n_samples - assert ids is None or len(ids) == self.n_samples - assert (tgt_dict is None and tgt_texts is None) or ( - tgt_dict is not None and tgt_texts is not None - ) - self.src_texts, self.tgt_texts = src_texts, tgt_texts - self.src_langs, self.tgt_langs = src_langs, tgt_langs - self.tgt_dict = tgt_dict - self.check_tgt_lang_tag() - self.ids = ids - self.shuffle = data_cfg.shuffle if is_train_split else False - - self.feature_transforms = CompositeAudioFeatureTransform.from_config_dict( - self.data_cfg.get_feature_transforms(split, is_train_split) - ) - - self.pre_tokenizer = pre_tokenizer - self.bpe_tokenizer = bpe_tokenizer - - logger.info(self.__repr__()) - - def __repr__(self): - return ( - self.__class__.__name__ - + f'(split="{self.split}", n_samples={self.n_samples}, ' - f"prepend_tgt_lang_tag={self.data_cfg.prepend_tgt_lang_tag}, " - f"shuffle={self.shuffle}, transforms={self.feature_transforms})" - ) - - @classmethod - def is_lang_tag(cls, token): - pattern = cls.LANG_TAG_TEMPLATE.replace("{}", "(.*)") - return re.match(pattern, token) - - def check_tgt_lang_tag(self): - if self.data_cfg.prepend_tgt_lang_tag: - assert self.tgt_langs is not None and self.tgt_dict is not None - tgt_lang_tags = [ - self.LANG_TAG_TEMPLATE.format(t) for t in set(self.tgt_langs) - ] - assert all(t in self.tgt_dict for t in tgt_lang_tags) - - def tokenize_text(self, text: str): - if self.pre_tokenizer is not None: - text = self.pre_tokenizer.encode(text) - if self.bpe_tokenizer is not None: - text = self.bpe_tokenizer.encode(text) - return text - - def __getitem__( - self, index: int - ) -> Tuple[int, torch.Tensor, Optional[torch.Tensor]]: - source = get_features_or_waveform( - self.audio_paths[index], need_waveform=self.data_cfg.use_audio_input - ) - if self.feature_transforms is not None: - assert not self.data_cfg.use_audio_input - source = self.feature_transforms(source) - source = torch.from_numpy(source).float() - - target = None - if self.tgt_texts is not None: - tokenized = 
self.tokenize_text(self.tgt_texts[index]) - target = self.tgt_dict.encode_line( - tokenized, add_if_not_exist=False, append_eos=True - ).long() - if self.data_cfg.prepend_tgt_lang_tag: - lang_tag = self.LANG_TAG_TEMPLATE.format(self.tgt_langs[index]) - lang_tag_idx = self.tgt_dict.index(lang_tag) - target = torch.cat((torch.LongTensor([lang_tag_idx]), target), 0) - return index, source, target - - def __len__(self): - return self.n_samples - - def collater(self, samples: List[Tuple[int, torch.Tensor, torch.Tensor]]) -> Dict: - if len(samples) == 0: - return {} - indices = torch.tensor([i for i, _, _ in samples], dtype=torch.long) - frames = _collate_frames( - [s for _, s, _ in samples], self.data_cfg.use_audio_input - ) - # sort samples by descending number of frames - n_frames = torch.tensor([s.size(0) for _, s, _ in samples], dtype=torch.long) - n_frames, order = n_frames.sort(descending=True) - indices = indices.index_select(0, order) - frames = frames.index_select(0, order) - - target, target_lengths = None, None - prev_output_tokens = None - ntokens = None - if self.tgt_texts is not None: - target = fairseq_data_utils.collate_tokens( - [t for _, _, t in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, order) - target_lengths = torch.tensor( - [t.size(0) for _, _, t in samples], dtype=torch.long - ).index_select(0, order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [t for _, _, t in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ) - prev_output_tokens = prev_output_tokens.index_select(0, order) - ntokens = sum(t.size(0) for _, _, t in samples) - - out = { - "id": indices, - "net_input": { - "src_tokens": frames, - "src_lengths": n_frames, - "prev_output_tokens": prev_output_tokens, - }, - "target": target, - "target_lengths": target_lengths, - "ntokens": ntokens, - "nsentences": len(samples), - } - return out - - def num_tokens(self, index): - return self.n_frames[index] - - def size(self, index): - t_len = 0 - if self.tgt_texts is not None: - tokenized = self.tokenize_text(self.tgt_texts[index]) - t_len = len(tokenized.split(" ")) - return self.n_frames[index], t_len - - @property - def sizes(self): - return np.array(self.n_frames) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True - - def ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - # first by descending order of # of frames then by original/random order - order.append([-n for n in self.n_frames]) - return np.lexsort(order) - - def prefetch(self, indices): - raise False - - -class SpeechToTextDatasetCreator(object): - # mandatory columns - KEY_ID, KEY_AUDIO, KEY_N_FRAMES = "id", "audio", "n_frames" - KEY_TGT_TEXT = "tgt_text" - # optional columns - KEY_SPEAKER, KEY_SRC_TEXT = "speaker", "src_text" - KEY_SRC_LANG, KEY_TGT_LANG = "src_lang", "tgt_lang" - # default values - DEFAULT_SPEAKER = DEFAULT_SRC_TEXT = DEFAULT_LANG = "" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[List[Dict]], - data_cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - ) -> SpeechToTextDataset: - audio_paths, n_frames, src_texts, tgt_texts, ids = [], [], [], [], [] - speakers, src_langs, tgt_langs = [], [], [] - for s in samples: - ids.extend([ss[cls.KEY_ID] for ss in s]) - audio_paths.extend( - [op.join(data_cfg.audio_root, 
ss[cls.KEY_AUDIO]) for ss in s] - ) - n_frames.extend([int(ss[cls.KEY_N_FRAMES]) for ss in s]) - tgt_texts.extend([ss[cls.KEY_TGT_TEXT] for ss in s]) - src_texts.extend( - [ss.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for ss in s] - ) - speakers.extend([ss.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for ss in s]) - src_langs.extend([ss.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for ss in s]) - tgt_langs.extend([ss.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for ss in s]) - return SpeechToTextDataset( - split_name, - is_train_split, - data_cfg, - audio_paths, - n_frames, - src_texts, - tgt_texts, - speakers, - src_langs, - tgt_langs, - ids, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - ) - - @classmethod - def _get_size_ratios(cls, ids: List[str], sizes: List[int], alpha: float = 1.0): - """Size ratios for temperature-based sampling - (https://arxiv.org/abs/1907.05019)""" - _sizes = np.array(sizes) - prob = _sizes / _sizes.sum() - smoothed_prob = prob ** alpha - smoothed_prob = smoothed_prob / smoothed_prob.sum() - size_ratio = (smoothed_prob * _sizes.sum()) / _sizes - - o_str = str({_i: f"{prob[i]:.3f}" for i, _i in enumerate(ids)}) - logger.info(f"original sampling probability: {o_str}") - p_str = str({_i: f"{smoothed_prob[i]:.3f}" for i, _i in enumerate(ids)}) - logger.info(f"balanced sampling probability: {p_str}") - sr_str = str({_id: f"{size_ratio[i]:.3f}" for i, _id in enumerate(ids)}) - logger.info(f"balanced sampling size ratio: {sr_str}") - return size_ratio.tolist() - - @classmethod - def from_tsv( - cls, - root: str, - data_cfg: S2TDataConfig, - splits: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - epoch: int, - seed: int, - ) -> SpeechToTextDataset: - samples = [] - _splits = splits.split(",") - for split in _splits: - tsv_path = op.join(root, f"{split}.tsv") - if not op.isfile(tsv_path): - raise FileNotFoundError(f"Dataset not found: {tsv_path}") - with open(tsv_path) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - samples.append([dict(e) for e in reader]) - assert len(samples) > 0 - - datasets = [ - cls._from_list( - name, - is_train_split, - [s], - data_cfg, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - ) - for name, s in zip(_splits, samples) - ] - - if is_train_split and len(_splits) > 1 and data_cfg.sampling_alpha != 1.0: - # temperature-based sampling - size_ratios = cls._get_size_ratios( - _splits, [len(s) for s in samples], alpha=data_cfg.sampling_alpha - ) - datasets = [ - ResamplingDataset( - d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0) - ) - for d, r in zip(datasets, size_ratios) - ] - return ConcatDataset(datasets) diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/custom_ops.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/custom_ops.py deleted file mode 100644 index 702471e2006af6858345c1225c1e55b0acd17d32..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/custom_ops.py +++ /dev/null @@ -1,181 +0,0 @@ -# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. 
Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""TensorFlow custom ops builder. -""" - -import glob -import os -import re -import uuid -import hashlib -import tempfile -import shutil -import tensorflow as tf -from tensorflow.python.client import device_lib # pylint: disable=no-name-in-module - -from .. import util - -#---------------------------------------------------------------------------- -# Global configs. - -cuda_cache_path = None -cuda_cache_version_tag = 'v1' -do_not_hash_included_headers = True # Speed up compilation by assuming that headers included by the CUDA code never change. -verbose = True # Print status messages to stdout. - -#---------------------------------------------------------------------------- -# Internal helper funcs. - -def _find_compiler_bindir(): - hostx64_paths = sorted(glob.glob('C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64'), reverse=True) - if hostx64_paths != []: - return hostx64_paths[0] - hostx64_paths = sorted(glob.glob('C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64'), reverse=True) - if hostx64_paths != []: - return hostx64_paths[0] - hostx64_paths = sorted(glob.glob('C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64'), reverse=True) - if hostx64_paths != []: - return hostx64_paths[0] - vc_bin_dir = 'C:/Program Files (x86)/Microsoft Visual Studio 14.0/vc/bin' - if os.path.isdir(vc_bin_dir): - return vc_bin_dir - return None - -def _get_compute_cap(device): - caps_str = device.physical_device_desc - m = re.search('compute capability: (\\d+).(\\d+)', caps_str) - major = m.group(1) - minor = m.group(2) - return (major, minor) - -def _get_cuda_gpu_arch_string(): - gpus = [x for x in device_lib.list_local_devices() if x.device_type == 'GPU'] - if len(gpus) == 0: - raise RuntimeError('No GPU devices found') - (major, minor) = _get_compute_cap(gpus[0]) - return 'sm_%s%s' % (major, minor) - -def _run_cmd(cmd): - with os.popen(cmd) as pipe: - output = pipe.read() - status = pipe.close() - if status is not None: - raise RuntimeError('NVCC returned an error. See below for full command line and output log:\n\n%s\n\n%s' % (cmd, output)) - -def _prepare_nvcc_cli(opts): - cmd = 'nvcc ' + opts.strip() - cmd += ' --disable-warnings' - cmd += ' --include-path "%s"' % tf.sysconfig.get_include() - cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'protobuf_archive', 'src') - cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'com_google_absl') - cmd += ' --include-path "%s"' % os.path.join(tf.sysconfig.get_include(), 'external', 'eigen_archive') - - compiler_bindir = _find_compiler_bindir() - if compiler_bindir is None: - # Require that _find_compiler_bindir succeeds on Windows. Allow - # nvcc to use whatever is the default on Linux. - if os.name == 'nt': - raise RuntimeError('Could not find MSVC/GCC/CLANG installation on this computer. Check compiler_bindir_search_path list in "%s".' % __file__) - else: - cmd += ' --compiler-bindir "%s"' % compiler_bindir - cmd += ' 2>&1' - return cmd - -#---------------------------------------------------------------------------- -# Main entry point. 
- -_plugin_cache = dict() - -def get_plugin(cuda_file, extra_nvcc_options=[]): - cuda_file_base = os.path.basename(cuda_file) - cuda_file_name, cuda_file_ext = os.path.splitext(cuda_file_base) - - # Already in cache? - if cuda_file in _plugin_cache: - return _plugin_cache[cuda_file] - - # Setup plugin. - if verbose: - print('Setting up TensorFlow plugin "%s": ' % cuda_file_base, end='', flush=True) - try: - # Hash CUDA source. - md5 = hashlib.md5() - with open(cuda_file, 'rb') as f: - md5.update(f.read()) - md5.update(b'\n') - - # Hash headers included by the CUDA code by running it through the preprocessor. - if not do_not_hash_included_headers: - if verbose: - print('Preprocessing... ', end='', flush=True) - with tempfile.TemporaryDirectory() as tmp_dir: - tmp_file = os.path.join(tmp_dir, cuda_file_name + '_tmp' + cuda_file_ext) - _run_cmd(_prepare_nvcc_cli('"%s" --preprocess -o "%s" --keep --keep-dir "%s"' % (cuda_file, tmp_file, tmp_dir))) - with open(tmp_file, 'rb') as f: - bad_file_str = ('"' + cuda_file.replace('\\', '/') + '"').encode('utf-8') # __FILE__ in error check macros - good_file_str = ('"' + cuda_file_base + '"').encode('utf-8') - for ln in f: - if not ln.startswith(b'# ') and not ln.startswith(b'#line '): # ignore line number pragmas - ln = ln.replace(bad_file_str, good_file_str) - md5.update(ln) - md5.update(b'\n') - - # Select compiler configs. - compile_opts = '' - if os.name == 'nt': - compile_opts += '"%s"' % os.path.join(tf.sysconfig.get_lib(), 'python', '_pywrap_tensorflow_internal.lib') - elif os.name == 'posix': - compile_opts += f' --compiler-options \'-fPIC\'' - compile_opts += f' --compiler-options \'{" ".join(tf.sysconfig.get_compile_flags())}\'' - compile_opts += f' --linker-options \'{" ".join(tf.sysconfig.get_link_flags())}\'' - else: - assert False # not Windows or Linux, w00t? - compile_opts += f' --gpu-architecture={_get_cuda_gpu_arch_string()}' - compile_opts += ' --use_fast_math' - for opt in extra_nvcc_options: - compile_opts += ' ' + opt - nvcc_cmd = _prepare_nvcc_cli(compile_opts) - - # Hash build configuration. - md5.update(('nvcc_cmd: ' + nvcc_cmd).encode('utf-8') + b'\n') - md5.update(('tf.VERSION: ' + tf.VERSION).encode('utf-8') + b'\n') - md5.update(('cuda_cache_version_tag: ' + cuda_cache_version_tag).encode('utf-8') + b'\n') - - # Compile if not already compiled. - cache_dir = util.make_cache_dir_path('tflib-cudacache') if cuda_cache_path is None else cuda_cache_path - bin_file_ext = '.dll' if os.name == 'nt' else '.so' - bin_file = os.path.join(cache_dir, cuda_file_name + '_' + md5.hexdigest() + bin_file_ext) - if not os.path.isfile(bin_file): - if verbose: - print('Compiling... ', end='', flush=True) - with tempfile.TemporaryDirectory() as tmp_dir: - tmp_file = os.path.join(tmp_dir, cuda_file_name + '_tmp' + bin_file_ext) - _run_cmd(nvcc_cmd + ' "%s" --shared -o "%s" --keep --keep-dir "%s"' % (cuda_file, tmp_file, tmp_dir)) - os.makedirs(cache_dir, exist_ok=True) - intermediate_file = os.path.join(cache_dir, cuda_file_name + '_' + uuid.uuid4().hex + '_tmp' + bin_file_ext) - shutil.copyfile(tmp_file, intermediate_file) - os.rename(intermediate_file, bin_file) # atomic - - # Load. - if verbose: - print('Loading... ', end='', flush=True) - plugin = tf.load_op_library(bin_file) - - # Add to cache. 
- _plugin_cache[cuda_file] = plugin - if verbose: - print('Done.', flush=True) - return plugin - - except: - if verbose: - print('Failed!', flush=True) - raise - -#---------------------------------------------------------------------------- diff --git a/spaces/gyugnsu/DragGan-Inversion/torch_utils/training_stats.py b/spaces/gyugnsu/DragGan-Inversion/torch_utils/training_stats.py deleted file mode 100644 index aa5837c2948372ecdb3e34076f4b3f4f42c81fef..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/torch_utils/training_stats.py +++ /dev/null @@ -1,283 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . import misc - -# ---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -# Data type to use for initial per-tensor reduction. -_reduce_dtype = torch.float32 -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -# Device to use for multiprocess communication. None = single-process. -_sync_device = None -_sync_called = False # Has _sync() been called yet? -# Running counters on each device, updated by report(): name => device => torch.Tensor -_counters = dict() -# Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor -_cumulative = dict() - -# ---------------------------------------------------------------------------- - - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). 
- - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. - """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -# ---------------------------------------------------------------------------- - - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - -# ---------------------------------------------------------------------------- - - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. 
- """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros( - [_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros( - [_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num( - name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -# ---------------------------------------------------------------------------- - - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros( - [_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros( - [_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. 
- return [(name, _cumulative[name]) for name in names] - -# ---------------------------------------------------------------------------- diff --git a/spaces/hahahafofo/ChatPDF/chatpdf.py b/spaces/hahahafofo/ChatPDF/chatpdf.py deleted file mode 100644 index 1a825771fa194e313f46c88fc1f71fc32ed140a6..0000000000000000000000000000000000000000 --- a/spaces/hahahafofo/ChatPDF/chatpdf.py +++ /dev/null @@ -1,190 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@author:XuMing(xuming624@qq.com) -@description: -""" -import logging - -from similarities import Similarity -from textgen import ChatGlmModel, LlamaModel -from transformers import pipeline -from loguru import logger - -PROMPT_TEMPLATE = """\ -基于以下已知信息,简洁和专业的来回答用户的问题。 -如果无法从中得到答案,请说 "根据已知信息无法回答该问题" 或 "没有提供足够的相关信息",不允许在答案中添加编造成分,答案请使用中文。 - -已知内容: -{context_str} - -问题: -{query_str} -""" - - -class ChatPDF: - def __init__( - self, - sim_model_name_or_path: str = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", - gen_model_type: str = "chatglm", - gen_model_name_or_path: str = "THUDM/chatglm-6b-int4", - lora_model_name_or_path: str = None, - - ): - self.sim_model = Similarity(model_name_or_path=sim_model_name_or_path) - self.model_type = gen_model_type - - if gen_model_type == "chatglm": - self.gen_model = ChatGlmModel(gen_model_type, gen_model_name_or_path, lora_name=lora_model_name_or_path) - elif gen_model_type == "llama": - self.gen_model = LlamaModel(gen_model_type, gen_model_name_or_path, lora_name=lora_model_name_or_path) - elif gen_model_type == "t5": - self.gen_model = pipeline('text2text-generation', model=gen_model_name_or_path) - else: - raise ValueError('gen_model_type must be chatglm or llama.') - self.history = None - self.pdf_path = None - - def load_pdf_file(self, pdf_path: str): - """Load a PDF file.""" - if pdf_path.endswith('.pdf'): - corpus = self.extract_text_from_pdf(pdf_path) - elif pdf_path.endswith('.docx'): - corpus = self.extract_text_from_docx(pdf_path) - elif pdf_path.endswith('.md'): - corpus = self.extract_text_from_markdown(pdf_path) - else: - corpus = self.extract_text_from_txt(pdf_path) - self.sim_model.add_corpus(corpus) - self.pdf_path = pdf_path - - @staticmethod - def extract_text_from_pdf(file_path: str): - """Extract text content from a PDF file.""" - import PyPDF2 - contents = [] - with open(file_path, 'rb') as f: - pdf_reader = PyPDF2.PdfReader(f) - for page in pdf_reader.pages: - page_text = page.extract_text().strip() - raw_text = [text.strip() for text in page_text.splitlines() if text.strip()] - new_text = '' - for text in raw_text: - new_text += text - if text[-1] in ['.', '!', '?', '。', '!', '?', '…', ';', ';', ':', ':', '”', '’', ')', '】', '》', '」', - '』', '〕', '〉', '》', '〗', '〞', '〟', '»', '"', "'", ')', ']', '}']: - contents.append(new_text) - new_text = '' - if new_text: - contents.append(new_text) - return contents - - @staticmethod - def extract_text_from_txt(file_path: str): - """Extract text content from a TXT file.""" - contents = [] - with open(file_path, 'r', encoding='utf-8') as f: - contents = [text.strip() for text in f.readlines() if text.strip()] - return contents - - @staticmethod - def extract_text_from_docx(file_path: str): - """Extract text content from a DOCX file.""" - import docx - document = docx.Document(file_path) - contents = [paragraph.text.strip() for paragraph in document.paragraphs if paragraph.text.strip()] - return contents - - @staticmethod - def extract_text_from_markdown(file_path: str): - """Extract text content from a Markdown file.""" - import markdown - 
from bs4 import BeautifulSoup - with open(file_path, 'r', encoding='utf-8') as f: - markdown_text = f.read() - html = markdown.markdown(markdown_text) - soup = BeautifulSoup(html, 'html.parser') - contents = [text.strip() for text in soup.get_text().splitlines() if text.strip()] - return contents - - @staticmethod - def _add_source_numbers(lst): - """Add source numbers to a list of strings.""" - return [f'[{idx + 1}]\t "{item}"' for idx, item in enumerate(lst)] - - def _generate_answer(self, query_str, context_str, history=None, max_length=1024): - """Generate answer from query and context.""" - if self.model_type == "t5": - response = self.gen_model(query_str, max_length=max_length, do_sample=True)[0]['generated_text'] - return response, history - prompt = PROMPT_TEMPLATE.format(context_str=context_str, query_str=query_str) - response, out_history = self.gen_model.chat(prompt, history, max_length=max_length) - return response, out_history - - def chat(self, query_str, history=None, max_length=1024): - if self.model_type == "t5": - response = self.gen_model(query_str, max_length=max_length, do_sample=True)[0]['generated_text'] - logger.debug(response) - return response, history - - response, out_history = self.gen_model.chat(query_str, history, max_length=max_length) - return response, out_history - - def query( - self, - query, - topn: int = 5, - max_length: int = 1024, - max_input_size: int = 1024, - use_history: bool = False - ): - """Query from corpus.""" - - sim_contents = self.sim_model.most_similar(query, topn=topn) - - reference_results = [] - for query_id, id_score_dict in sim_contents.items(): - for corpus_id, s in id_score_dict.items(): - reference_results.append(self.sim_model.corpus[corpus_id]) - if not reference_results: - return '没有提供足够的相关信息', reference_results - reference_results = self._add_source_numbers(reference_results) - - context_str = '\n'.join(reference_results)[:(max_input_size - len(PROMPT_TEMPLATE))] - - if use_history: - response, out_history = self._generate_answer(query, context_str, self.history, max_length=max_length) - self.history = out_history - else: - - response, out_history = self._generate_answer(query, context_str) - - return response, out_history, reference_results - - def save_index(self, index_path=None): - """Save model.""" - if index_path is None: - index_path = '.'.join(self.pdf_path.split('.')[:-1]) + '_index.json' - self.sim_model.save_index(index_path) - - def load_index(self, index_path=None): - """Load model.""" - if index_path is None: - index_path = '.'.join(self.pdf_path.split('.')[:-1]) + '_index.json' - self.sim_model.load_index(index_path) - - -if __name__ == "__main__": - import sys - - if len(sys.argv) > 2: - gen_model_name_or_path = sys.argv[1] - else: - print('Usage: python chatpdf.py ') - gen_model_name_or_path = "THUDM/chatglm-6b-int4" - m = ChatPDF(gen_model_name_or_path=gen_model_name_or_path) - m.load_pdf_file(pdf_path='sample.pdf') - response = m.query('自然语言中的非平行迁移是指什么?') - print(response[0]) - response = m.query('本文作者是谁?') - print(response[0]) diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/version.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/version.py deleted file mode 100644 index a1c6124423a7be38d4625a6989acf5b0dd9dbf07..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '2.10.1' diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/gqa.py 
b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/gqa.py deleted file mode 100644 index 03eb6e20e12d5dc2f895c87f5f9e0a5978b00a53..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/gqa.py +++ /dev/null @@ -1,91 +0,0 @@ -import json -from pathlib import Path - -import torch -import torchvision - -from .modulated_coco import ConvertCocoPolysToMask, ModulatedDataset - - -class GQADataset(ModulatedDataset): - pass - - -class GQAQuestionAnswering(torchvision.datasets.CocoDetection): - def __init__(self, img_folder, ann_file, transforms, return_masks, return_tokens, tokenizer, ann_folder): - super(GQAQuestionAnswering, self).__init__(img_folder, ann_file) - self._transforms = transforms - self.prepare = ConvertCocoPolysToMask(return_masks, return_tokens, tokenizer=tokenizer) - with open(ann_folder / "gqa_answer2id.json", "r") as f: - self.answer2id = json.load(f) - with open(ann_folder / "gqa_answer2id_by_type.json", "r") as f: - self.answer2id_by_type = json.load(f) - self.type2id = {"obj": 0, "attr": 1, "rel": 2, "global": 3, "cat": 4} - - def __getitem__(self, idx): - img, target = super(GQAQuestionAnswering, self).__getitem__(idx) - image_id = self.ids[idx] - coco_img = self.coco.loadImgs(image_id)[0] - caption = coco_img["caption"] - dataset_name = coco_img["dataset_name"] - questionId = coco_img["questionId"] - target = {"image_id": image_id, "annotations": target, "caption": caption} - img, target = self.prepare(img, target) - if self._transforms is not None: - img, target = self._transforms(img, target) - target["dataset_name"] = dataset_name - target["questionId"] = questionId - - if coco_img["answer"] not in self.answer2id: - answer = "unknown" - else: - answer = coco_img["answer"] - - target["answer"] = torch.as_tensor(self.answer2id[answer], dtype=torch.long) - target["answer_type"] = torch.as_tensor(self.type2id[coco_img["question_type"]], dtype=torch.long) - - if coco_img["answer"] not in self.answer2id_by_type["answer_attr"]: - answer = "unknown" - else: - answer = coco_img["answer"] - target["answer_attr"] = torch.as_tensor( - self.answer2id_by_type["answer_attr"][answer] if coco_img["question_type"] == "attr" else -100, - dtype=torch.long, - ) - - if coco_img["answer"] not in self.answer2id_by_type["answer_global"]: - answer = "unknown" - else: - answer = coco_img["answer"] - target["answer_global"] = torch.as_tensor( - self.answer2id_by_type["answer_global"][answer] if coco_img["question_type"] == "global" else -100, - dtype=torch.long, - ) - - if coco_img["answer"] not in self.answer2id_by_type["answer_rel"]: - answer = "unknown" - else: - answer = coco_img["answer"] - target["answer_rel"] = torch.as_tensor( - self.answer2id_by_type["answer_rel"][answer] if coco_img["question_type"] == "rel" else -100, - dtype=torch.long, - ) - - if coco_img["answer"] not in self.answer2id_by_type["answer_cat"]: - answer = "unknown" - else: - answer = coco_img["answer"] - target["answer_cat"] = torch.as_tensor( - self.answer2id_by_type["answer_cat"][answer] if coco_img["question_type"] == "cat" else -100, - dtype=torch.long, - ) - - if coco_img["answer"] not in self.answer2id_by_type["answer_obj"]: - answer = "unknown" - else: - answer = coco_img["answer"] - target["answer_obj"] = torch.as_tensor( - self.answer2id_by_type["answer_obj"][answer] if coco_img["question_type"] == "obj" else -100, - dtype=torch.long, - ) - return img, target diff --git a/spaces/hardon-server/remove-background-on-image/README.md 
b/spaces/hardon-server/remove-background-on-image/README.md deleted file mode 100644 index 7ed13f01daba6737940704a2c9b13ba190a82f1e..0000000000000000000000000000000000000000 --- a/spaces/hardon-server/remove-background-on-image/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Remove Background -emoji: 🌖 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -duplicated_from: openskyml/remove-background-on-image ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/README.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/README.md deleted file mode 100644 index 9fb3e4f7afec17137c95c78be6ef06d520ec8032..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/README.md +++ /dev/null @@ -1,9 +0,0 @@ - - -### Common Datasets - -The dataset implemented here do not need to load the data into the final format. -It should provide the minimal data structure needed to use the dataset, so it can be very efficient. - -For example, for an image dataset, just provide the file names and labels, but don't read the images. -Let the downstream decide how to read. diff --git a/spaces/housexu123/bingo-2.0/src/app/page.tsx b/spaces/housexu123/bingo-2.0/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/housexu123/bingo-2.0/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
        - - - ) -} diff --git a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/components/pages/_page.svelte-8f425fb1.js b/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/components/pages/_page.svelte-8f425fb1.js deleted file mode 100644 index 5c27854cdb71ac19267426bac5dc49eec57bbef8..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/components/pages/_page.svelte-8f425fb1.js +++ /dev/null @@ -1,9 +0,0 @@ -import{S as Ee,i as Ne,s as $e,k as S,l as z,m as M,h as w,n as c,b as J,F as m,A as ue,H as Ge,I as he,a as T,q as j,J as de,c as U,r as F,p as Ce,K as se,u as me,L as Ze,f as Q,g as Je,t as ae,d as Qe,M as Oe,N as Rt,v as He,w as qe,x as Re,y as Le,O as et,P as Se,G as tt,o as Lt,e as rt,Q as W,R as St}from"../../chunks/index-5559954d.js";import{w as kt}from"../../chunks/index-3bda1050.js";if(!oe)var oe={map:function(e,t){var r={};return t?e.map(function(n,s){return r.index=s,t.call(r,n)}):e.slice()},naturalOrder:function(e,t){return et?1:0},sum:function(e,t){var r={};return e.reduce(t?function(n,s,l){return r.index=l,n+t.call(r,s)}:function(n,s){return n+s},0)},max:function(e,t){return Math.max.apply(null,t?oe.map(e,t):e)}};var zt=function(){var e=5,t=8-e,r=1e3,n=.75;function s(u,a,g){return(u<<2*e)+(a<>t;return gval=u[1]>>t,bval=u[2]>>t,g>=a.r1&&g<=a.r2&&gval>=a.g1&&gval<=a.g2&&bval>=a.b1&&bval<=a.b2}};function o(){this.vboxes=new l(function(u,a){return oe.naturalOrder(u.vbox.count()*u.vbox.volume(),a.vbox.count()*a.vbox.volume())})}o.prototype={push:function(u){this.vboxes.push({vbox:u,color:u.avg()})},palette:function(){return this.vboxes.map(function(u){return u.color})},size:function(){return this.vboxes.size()},map:function(u){for(var a=this.vboxes,g=0;g251&&f[1]>251&&f[2]>251&&(u[g].color=[255,255,255])}};function d(u){var a=1<<3*e,g=new Array(a),f,x,b,N;return u.forEach(function(H){x=H[0]>>t,b=H[1]>>t,N=H[2]>>t,f=s(x,b,N),g[f]=(g[f]||0)+1}),g}function y(u,a){var g=1e6,f=0,x=1e6,b=0,N=1e6,H=0,R,v,$;return u.forEach(function(I){R=I[0]>>t,v=I[1]>>t,$=I[2]>>t,Rf&&(f=R),vb&&(b=v),$H&&(H=$)}),new i(g,f,x,b,N,H,a)}function L(u,a){if(!a.count())return;var g=a.r2-a.r1+1,f=a.g2-a.g1+1,x=a.b2-a.b1+1,b=oe.max([g,f,x]);if(a.count()==1)return[a.copy()];var N=0,H=[],R=[],v,$,I,B,E;if(b==g)for(v=a.r1;v<=a.r2;v++){for(B=0,$=a.g1;$<=a.g2;$++)for(I=a.b1;I<=a.b2;I++)E=s(v,$,I),B+=u[E]||0;N+=B,H[v]=N}else if(b==f)for(v=a.g1;v<=a.g2;v++){for(B=0,$=a.r1;$<=a.r2;$++)for(I=a.b1;I<=a.b2;I++)E=s($,v,I),B+=u[E]||0;N+=B,H[v]=N}else for(v=a.b1;v<=a.b2;v++){for(B=0,$=a.r1;$<=a.r2;$++)for(I=a.g1;I<=a.g2;I++)E=s($,I,v),B+=u[E]||0;N+=B,H[v]=N}H.forEach(function(O,q){R[q]=N-O});function C(O){var q=O+"1",p=O+"2",D,h,k,_,P,V=0;for(v=a[q];v<=a[p];v++)if(H[v]>N/2){for(k=a.copy(),_=a.copy(),D=v-a[q],h=a[p]-v,D<=h?P=Math.min(a[p]-1,~~(v+h/2)):P=Math.max(a[q],~~(v-1-D/2));!H[P];)P++;for(V=R[P];!V&&H[P-1];)V=R[--P];return k[p]=P,_[q]=k[p]+1,[k,_]}}return b==g?C("r"):b==f?C("g"):C("b")}function A(u,a){if(!u.length||a<2||a>256)return!1;var g=d(u);g.forEach(function(){});var f=y(u,g),x=new l(function(R,v){return oe.naturalOrder(R.count(),v.count())});x.push(f);function b(R,v){for(var $=1,I=0,B;I=v)||I++>r)return}}b(x,n*a);for(var N=new l(function(R,v){return oe.naturalOrder(R.count()*R.volume(),v.count()*v.volume())});x.size();)N.push(x.pop());b(N,a-N.size());for(var H=new o;N.size();)H.push(N.pop());return H}return{quantize:A}}(),Bt=zt.quantize;function 
Ie(e,t,r){e.prototype=t.prototype=r,r.constructor=e}function Ae(e,t){var r=Object.create(e.prototype);for(var n in t)r[n]=t[n];return r}function be(){}var ye=.7,ze=1/ye,ve="\\s*([+-]?\\d+)\\s*",ke="\\s*([+-]?(?:\\d*\\.)?\\d+(?:[eE][+-]?\\d+)?)\\s*",le="\\s*([+-]?(?:\\d*\\.)?\\d+(?:[eE][+-]?\\d+)?)%\\s*",Dt=/^#([0-9a-f]{3,8})$/,Ot=new RegExp(`^rgb\\(${ve},${ve},${ve}\\)$`),At=new RegExp(`^rgb\\(${le},${le},${le}\\)$`),Tt=new RegExp(`^rgba\\(${ve},${ve},${ve},${ke}\\)$`),Ut=new RegExp(`^rgba\\(${le},${le},${le},${ke}\\)$`),Vt=new RegExp(`^hsl\\(${ke},${le},${le}\\)$`),jt=new RegExp(`^hsla\\(${ke},${le},${le},${ke}\\)$`),nt={aliceblue:15792383,antiquewhite:16444375,aqua:65535,aquamarine:8388564,azure:15794175,beige:16119260,bisque:16770244,black:0,blanchedalmond:16772045,blue:255,blueviolet:9055202,brown:10824234,burlywood:14596231,cadetblue:6266528,chartreuse:8388352,chocolate:13789470,coral:16744272,cornflowerblue:6591981,cornsilk:16775388,crimson:14423100,cyan:65535,darkblue:139,darkcyan:35723,darkgoldenrod:12092939,darkgray:11119017,darkgreen:25600,darkgrey:11119017,darkkhaki:12433259,darkmagenta:9109643,darkolivegreen:5597999,darkorange:16747520,darkorchid:10040012,darkred:9109504,darksalmon:15308410,darkseagreen:9419919,darkslateblue:4734347,darkslategray:3100495,darkslategrey:3100495,darkturquoise:52945,darkviolet:9699539,deeppink:16716947,deepskyblue:49151,dimgray:6908265,dimgrey:6908265,dodgerblue:2003199,firebrick:11674146,floralwhite:16775920,forestgreen:2263842,fuchsia:16711935,gainsboro:14474460,ghostwhite:16316671,gold:16766720,goldenrod:14329120,gray:8421504,green:32768,greenyellow:11403055,grey:8421504,honeydew:15794160,hotpink:16738740,indianred:13458524,indigo:4915330,ivory:16777200,khaki:15787660,lavender:15132410,lavenderblush:16773365,lawngreen:8190976,lemonchiffon:16775885,lightblue:11393254,lightcoral:15761536,lightcyan:14745599,lightgoldenrodyellow:16448210,lightgray:13882323,lightgreen:9498256,lightgrey:13882323,lightpink:16758465,lightsalmon:16752762,lightseagreen:2142890,lightskyblue:8900346,lightslategray:7833753,lightslategrey:7833753,lightsteelblue:11584734,lightyellow:16777184,lime:65280,limegreen:3329330,linen:16445670,magenta:16711935,maroon:8388608,mediumaquamarine:6737322,mediumblue:205,mediumorchid:12211667,mediumpurple:9662683,mediumseagreen:3978097,mediumslateblue:8087790,mediumspringgreen:64154,mediumturquoise:4772300,mediumvioletred:13047173,midnightblue:1644912,mintcream:16121850,mistyrose:16770273,moccasin:16770229,navajowhite:16768685,navy:128,oldlace:16643558,olive:8421376,olivedrab:7048739,orange:16753920,orangered:16729344,orchid:14315734,palegoldenrod:15657130,palegreen:10025880,paleturquoise:11529966,palevioletred:14381203,papayawhip:16773077,peachpuff:16767673,peru:13468991,pink:16761035,plum:14524637,powderblue:11591910,purple:8388736,rebeccapurple:6697881,red:16711680,rosybrown:12357519,royalblue:4286945,saddlebrown:9127187,salmon:16416882,sandybrown:16032864,seagreen:3050327,seashell:16774638,sienna:10506797,silver:12632256,skyblue:8900331,slateblue:6970061,slategray:7372944,slategrey:7372944,snow:16775930,springgreen:65407,steelblue:4620980,tan:13808780,teal:32896,thistle:14204888,tomato:16737095,turquoise:4251856,violet:15631086,wheat:16113331,white:16777215,whitesmoke:16119285,yellow:16776960,yellowgreen:10145074};Ie(be,Ke,{copy(e){return Object.assign(new this.constructor,this,e)},displayable(){return this.rgb().displayable()},hex:at,formatHex:at,formatHex8:Ft,formatHsl:Wt,formatRgb:st,toString:st});function at(){return 
this.rgb().formatHex()}function Ft(){return this.rgb().formatHex8()}function Wt(){return Nt(this).formatHsl()}function st(){return this.rgb().formatRgb()}function Ke(e){var t,r;return e=(e+"").trim().toLowerCase(),(t=Dt.exec(e))?(r=t[1].length,t=parseInt(t[1],16),r===6?lt(t):r===3?new G(t>>8&15|t>>4&240,t>>4&15|t&240,(t&15)<<4|t&15,1):r===8?Pe(t>>24&255,t>>16&255,t>>8&255,(t&255)/255):r===4?Pe(t>>12&15|t>>8&240,t>>8&15|t>>4&240,t>>4&15|t&240,((t&15)<<4|t&15)/255):null):(t=Ot.exec(e))?new G(t[1],t[2],t[3],1):(t=At.exec(e))?new G(t[1]*255/100,t[2]*255/100,t[3]*255/100,1):(t=Tt.exec(e))?Pe(t[1],t[2],t[3],t[4]):(t=Ut.exec(e))?Pe(t[1]*255/100,t[2]*255/100,t[3]*255/100,t[4]):(t=Vt.exec(e))?ct(t[1],t[2]/100,t[3]/100,1):(t=jt.exec(e))?ct(t[1],t[2]/100,t[3]/100,t[4]):nt.hasOwnProperty(e)?lt(nt[e]):e==="transparent"?new G(NaN,NaN,NaN,0):null}function lt(e){return new G(e>>16&255,e>>8&255,e&255,1)}function Pe(e,t,r,n){return n<=0&&(e=t=r=NaN),new G(e,t,r,n)}function Et(e){return e instanceof be||(e=Ke(e)),e?(e=e.rgb(),new G(e.r,e.g,e.b,e.opacity)):new G}function Xe(e,t,r,n){return arguments.length===1?Et(e):new G(e,t,r,n==null?1:n)}function G(e,t,r,n){this.r=+e,this.g=+t,this.b=+r,this.opacity=+n}Ie(G,Xe,Ae(be,{brighter(e){return e=e==null?ze:Math.pow(ze,e),new G(this.r*e,this.g*e,this.b*e,this.opacity)},darker(e){return e=e==null?ye:Math.pow(ye,e),new G(this.r*e,this.g*e,this.b*e,this.opacity)},rgb(){return this},clamp(){return new G(pe(this.r),pe(this.g),pe(this.b),Be(this.opacity))},displayable(){return-.5<=this.r&&this.r<255.5&&-.5<=this.g&&this.g<255.5&&-.5<=this.b&&this.b<255.5&&0<=this.opacity&&this.opacity<=1},hex:it,formatHex:it,formatHex8:Gt,formatRgb:ot,toString:ot}));function it(){return`#${ge(this.r)}${ge(this.g)}${ge(this.b)}`}function Gt(){return`#${ge(this.r)}${ge(this.g)}${ge(this.b)}${ge((isNaN(this.opacity)?1:this.opacity)*255)}`}function ot(){const e=Be(this.opacity);return`${e===1?"rgb(":"rgba("}${pe(this.r)}, ${pe(this.g)}, ${pe(this.b)}${e===1?")":`, ${e})`}`}function Be(e){return isNaN(e)?1:Math.max(0,Math.min(1,e))}function pe(e){return Math.max(0,Math.min(255,Math.round(e)||0))}function ge(e){return e=pe(e),(e<16?"0":"")+e.toString(16)}function ct(e,t,r,n){return n<=0?e=t=r=NaN:r<=0||r>=1?e=t=NaN:t<=0&&(e=NaN),new Z(e,t,r,n)}function Nt(e){if(e instanceof Z)return new Z(e.h,e.s,e.l,e.opacity);if(e instanceof be||(e=Ke(e)),!e)return new Z;if(e instanceof Z)return e;e=e.rgb();var t=e.r/255,r=e.g/255,n=e.b/255,s=Math.min(t,r,n),l=Math.max(t,r,n),i=NaN,o=l-s,d=(l+s)/2;return o?(t===l?i=(r-n)/o+(r0&&d<1?0:i,new Z(i,o,d,e.opacity)}function Jt(e,t,r,n){return arguments.length===1?Nt(e):new Z(e,t,r,n==null?1:n)}function Z(e,t,r,n){this.h=+e,this.s=+t,this.l=+r,this.opacity=+n}Ie(Z,Jt,Ae(be,{brighter(e){return e=e==null?ze:Math.pow(ze,e),new Z(this.h,this.s,this.l*e,this.opacity)},darker(e){return e=e==null?ye:Math.pow(ye,e),new Z(this.h,this.s,this.l*e,this.opacity)},rgb(){var e=this.h%360+(this.h<0)*360,t=isNaN(e)||isNaN(this.s)?0:this.s,r=this.l,n=r+(r<.5?r:1-r)*t,s=2*r-n;return new G(Te(e>=240?e-240:e+120,s,n),Te(e,s,n),Te(e<120?e+240:e-120,s,n),this.opacity)},clamp(){return new Z(ft(this.h),Me(this.s),Me(this.l),Be(this.opacity))},displayable(){return(0<=this.s&&this.s<=1||isNaN(this.s))&&0<=this.l&&this.l<=1&&0<=this.opacity&&this.opacity<=1},formatHsl(){const e=Be(this.opacity);return`${e===1?"hsl(":"hsla("}${ft(this.h)}, ${Me(this.s)*100}%, ${Me(this.l)*100}%${e===1?")":`, ${e})`}`}}));function ft(e){return e=(e||0)%360,e<0?e+360:e}function Me(e){return 
Math.max(0,Math.min(1,e||0))}function Te(e,t,r){return(e<60?t+(r-t)*e/60:e<180?r:e<240?t+(r-t)*(240-e)/60:t)*255}const Qt=Math.PI/180,Kt=180/Math.PI,De=18,$t=.96422,It=1,Ct=.82521,Pt=4/29,we=6/29,Mt=3*we*we,Xt=we*we*we;function Ht(e){if(e instanceof ie)return new ie(e.l,e.a,e.b,e.opacity);if(e instanceof ce)return qt(e);e instanceof G||(e=Et(e));var t=Fe(e.r),r=Fe(e.g),n=Fe(e.b),s=Ue((.2225045*t+.7168786*r+.0606169*n)/It),l,i;return t===r&&r===n?l=i=s:(l=Ue((.4360747*t+.3850649*r+.1430804*n)/$t),i=Ue((.0139322*t+.0971045*r+.7141733*n)/Ct)),new ie(116*s-16,500*(l-s),200*(s-i),e.opacity)}function Yt(e,t,r,n){return arguments.length===1?Ht(e):new ie(e,t,r,n==null?1:n)}function ie(e,t,r,n){this.l=+e,this.a=+t,this.b=+r,this.opacity=+n}Ie(ie,Yt,Ae(be,{brighter(e){return new ie(this.l+De*(e==null?1:e),this.a,this.b,this.opacity)},darker(e){return new ie(this.l-De*(e==null?1:e),this.a,this.b,this.opacity)},rgb(){var e=(this.l+16)/116,t=isNaN(this.a)?e:e+this.a/500,r=isNaN(this.b)?e:e-this.b/200;return t=$t*Ve(t),e=It*Ve(e),r=Ct*Ve(r),new G(je(3.1338561*t-1.6168667*e-.4906146*r),je(-.9787684*t+1.9161415*e+.033454*r),je(.0719453*t-.2289914*e+1.4052427*r),this.opacity)}}));function Ue(e){return e>Xt?Math.pow(e,1/3):e/Mt+Pt}function Ve(e){return e>we?e*e*e:Mt*(e-Pt)}function je(e){return 255*(e<=.0031308?12.92*e:1.055*Math.pow(e,1/2.4)-.055)}function Fe(e){return(e/=255)<=.04045?e/12.92:Math.pow((e+.055)/1.055,2.4)}function Zt(e){if(e instanceof ce)return new ce(e.h,e.c,e.l,e.opacity);if(e instanceof ie||(e=Ht(e)),e.a===0&&e.b===0)return new ce(NaN,0_e(t)).sort((t,r)=>{const n=t.h,s=r.h;return s-n||isNaN(s)-isNaN(n)})}function tr(e,t,r){const n=e,s=[];for(let l=0,i,o,d,y,L;l"u"||L>=125)&&(o>250&&d>250&&y>250||s.push([o,d,y]));return s}function rr(e,t=5,r=1){return new Promise(n=>{const s=new Image;s.onload=async()=>{const l=s.width,i=s.height,o=document.createElement("canvas");o.width=l,o.height=i;const d=o.getContext("2d");d.drawImage(s,0,0,l,i);const y=d.getImageData(0,0,l,i),L=tr(y.data,l*i,r),u=Bt(L,t).palette(),a=document.createElement("canvas");a.width=l/5,a.height=i/5,a.getContext("2d").drawImage(s,0,0,l,i,0,0,l/5,i/5);const f=await new Promise(b=>a.toBlob(b,"image/jpeg",.8)),x=u.map(b=>Xe(...b));n({colors:er(x),imgBlob:f})},s.src=e})}async function nr(e,t){const r=ar(t),n="https://huggingface.co/uploads",l=`color-palette-${crypto.randomUUID().split("-")[0]}-${r}.jpeg`,i=new File([e],l,{type:"image/jpeg"});console.log("uploading image",i);const d=await(await fetch(n,{method:"POST",headers:{"Content-Type":i.type,"X-Requested-With":"XMLHttpRequest"},body:i})).text();return console.log("uploaded images",d),d}function ar(e){return e?e.toString().toLowerCase().replace(/\s+/g,"-").replace(/[^\w\-]+/g,"").replace(/\-\-+/g,"-").replace(/^-+/,"").replace(/-+$/,""):""}const X=kt(""),xe=kt(!1),sr="wss://stabilityai-stable-diffusion-1.hf.space/queue/join",ut="";function ht(e,t,r){const n=e.slice();return n[6]=t[r],n[8]=r,n}function dt(e){let t,r,n,s,l,i,o=(e[1]===e[8]?"copied":e[6].formatHex())+"",d,y,L,A,u;function a(){return e[4](e[6],e[8])}return{c(){t=S("div"),r=he("svg"),n=he("rect"),l=T(),i=S("span"),d=j(o),y=T(),this.h()},l(g){t=z(g,"DIV",{class:!0,style:!0});var f=M(t);r=de(f,"svg",{class:!0,width:!0,viewBox:!0});var x=M(r);n=de(x,"rect",{x:!0,y:!0,width:!0,height:!0,fill:!0}),M(n).forEach(w),x.forEach(w),l=U(f),i=z(f,"SPAN",{title:!0,class:!0,style:!0});var 
b=M(i);d=F(b,o),b.forEach(w),y=U(f),f.forEach(w),this.h()},h(){c(n,"x","0"),c(n,"y","0"),c(n,"width","50"),c(n,"height","50"),c(n,"fill",s=e[6].formatHex()),c(r,"class","block max-w-full aspect-square"),c(r,"width","100"),c(r,"viewBox","0 0 50 50"),c(i,"title","Copy single color"),c(i,"class","absolute bottom-0 text-center text-xs pl-1 font-bold uppercase"),Ce(i,"color",e[2](e[6])),c(t,"class",L=(e[1]===e[8]?"":"cursor-pointer")+" aspect-square relative"),Ce(t,"background-color",e[6].formatHex())},m(g,f){J(g,t,f),m(t,r),m(r,n),m(t,l),m(t,i),m(i,d),m(t,y),A||(u=se(t,"click",a),A=!0)},p(g,f){e=g,f&1&&s!==(s=e[6].formatHex())&&c(n,"fill",s),f&3&&o!==(o=(e[1]===e[8]?"copied":e[6].formatHex())+"")&&me(d,o),f&1&&Ce(i,"color",e[2](e[6])),f&2&&L!==(L=(e[1]===e[8]?"":"cursor-pointer")+" aspect-square relative")&&c(t,"class",L),f&1&&Ce(t,"background-color",e[6].formatHex())},d(g){g&&w(t),A=!1,u()}}}function lr(e){let t,r,n=e[0],s=[];for(let l=0;l50?_e(y.h,y.c,0).formatHex():_e(y.h,y.c,100).formatHex()}let l=-1;async function i(d,y){l>-1||(r(1,l=y),await navigator.clipboard.write([new ClipboardItem({"text/plain":new Blob([d],{type:"text/plain"})})]),setTimeout(()=>{r(1,l=-1)},800))}const o=(d,y)=>i(d.formatHex(),y);return e.$$set=d=>{"colors"in d&&r(0,n=d.colors)},[n,l,s,i,o]}class or extends Ee{constructor(t){super(),Ne(this,t,ir,lr,$e,{colors:0})}}function gt(e,t,r){const n=e.slice();return n[12]=t[r],n[14]=r,n}function pt(e){let t,r;return t=new or({props:{colors:e[4]}}),{c(){He(t.$$.fragment)},l(n){qe(t.$$.fragment,n)},m(n,s){Re(t,n,s),r=!0},p(n,s){const l={};s&16&&(l.colors=n[4]),t.$set(l)},i(n){r||(Q(t.$$.fragment,n),r=!0)},o(n){ae(t.$$.fragment,n),r=!1},d(n){Le(t,n)}}}function mt(e){let t,r,n,s;function l(){return e[8](e[14])}function i(){return e[9](e[14])}return{c(){t=S("button"),this.h()},l(o){t=z(o,"BUTTON",{class:!0}),M(t).forEach(w),this.h()},h(){c(t,"class",r=(e[1]===e[14]?"bg-black dark:bg-white":"bg-white dark:bg-black")+" dark:bg-slate-300 rounded-full h-3 w-3 m-1 border border-black dark:border-white")},m(o,d){J(o,t,d),n||(s=[se(t,"click",l),se(t,"mousemove",i)],n=!0)},p(o,d){e=o,d&2&&r!==(r=(e[1]===e[14]?"bg-black dark:bg-white":"bg-white dark:bg-black")+" dark:bg-slate-300 rounded-full h-3 w-3 m-1 border border-black dark:border-white")&&c(t,"class",r)},d(o){o&&w(t),n=!1,Oe(s)}}}function cr(e){let t,r,n,s,l,i,o,d,y,L,A,u,a,g,f,x,b,N,H,R,v=e[2]?"Copied":"Copy",$,I,B,E,C=e[4]&&pt(e),O=e[0].images,q=[];for(let p=0;p{C=null}),Qe()),(!I||D&8&&!Ze(L.src,A=p[3]))&&c(L,"src",A),(!I||D&32)&&c(L,"alt",p[5]),D&3){O=p[0].images;let h;for(h=0;h{r(2,y=!1)},1e3))}const A=f=>r(1,d=f),u=f=>r(1,d=f),a=()=>i("remix",{prompt:n}),g=()=>L(s.map(f=>f.formatHex()).join(", "));return e.$$set=f=>{"promptData"in f&&r(0,o=f.promptData)},e.$$.update=()=>{var f,x;e.$$.dirty&1&&r(5,n=o==null?void 0:o.prompt),e.$$.dirty&3&&r(4,s=((f=o==null?void 0:o.images[d])==null?void 0:f.colors.map(b=>Xe(b)))||[]),e.$$.dirty&3&&r(3,l=(x=o==null?void 0:o.images[d])==null?void 0:x.imgURL)},[o,d,y,l,s,n,i,L,A,u,a,g]}class ur extends Ee{constructor(t){super(),Ne(this,t,fr,cr,$e,{promptData:0})}}function hr(e){let t,r;return{c(){t=he("svg"),r=he("path"),this.h()},l(n){t=de(n,"svg",{class:!0,xmlns:!0,"xmlns:xlink":!0,"aria-hidden":!0,focusable:!0,role:!0,width:!0,height:!0,preserveAspectRatio:!0,viewBox:!0});var s=M(t);r=de(s,"path",{d:!0,fill:!0}),M(r).forEach(w),s.forEach(w),this.h()},h(){c(r,"d","M10 16L20 6l1.4 1.4l-8.6 8.6l8.6 8.6L20 26z"),c(r,"fill","currentColor"),c(t,"class","ml-1.5 transform 
rotate-180"),c(t,"xmlns","http://www.w3.org/2000/svg"),c(t,"xmlns:xlink","http://www.w3.org/1999/xlink"),c(t,"aria-hidden","true"),c(t,"focusable","false"),c(t,"role","img"),c(t,"width","1em"),c(t,"height","1em"),c(t,"preserveAspectRatio","xMidYMid meet"),c(t,"viewBox","0 0 32 32")},m(n,s){J(n,t,s),m(t,r)},p:ue,i:ue,o:ue,d(n){n&&w(t)}}}class dr extends Ee{constructor(t){super(),Ne(this,t,null,hr,$e,{})}}function gr(e){let t,r;return{c(){t=he("svg"),r=he("path"),this.h()},l(n){t=de(n,"svg",{class:!0,xmlns:!0,"xmlns:xlink":!0,"aria-hidden":!0,focusable:!0,role:!0,width:!0,height:!0,preserveAspectRatio:!0,viewBox:!0});var s=M(t);r=de(s,"path",{d:!0,fill:!0}),M(r).forEach(w),s.forEach(w),this.h()},h(){c(r,"d","M10 16L20 6l1.4 1.4l-8.6 8.6l8.6 8.6L20 26z"),c(r,"fill","currentColor"),c(t,"class","mr-1.5"),c(t,"xmlns","http://www.w3.org/2000/svg"),c(t,"xmlns:xlink","http://www.w3.org/1999/xlink"),c(t,"aria-hidden","true"),c(t,"focusable","false"),c(t,"role","img"),c(t,"width","1em"),c(t,"height","1em"),c(t,"preserveAspectRatio","xMidYMid meet"),c(t,"viewBox","0 0 32 32")},m(n,s){J(n,t,s),m(t,r)},p:ue,i:ue,o:ue,d(n){n&&w(t)}}}class pr extends Ee{constructor(t){super(),Ne(this,t,null,gr,$e,{})}}function bt(e,t,r){const n=e.slice();return n[22]=t[r],n}function xt(e){let t,r,n,s,l=e[7]&&vt();return{c(){t=S("h3"),r=j(e[6]),n=T(),l&&l.c(),s=rt(),this.h()},l(i){t=z(i,"H3",{class:!0});var o=M(t);r=F(o,e[6]),o.forEach(w),n=U(i),l&&l.l(i),s=rt(),this.h()},h(){c(t,"class","text-xs font-bold ml-3 inline-block")},m(i,o){J(i,t,o),m(t,r),J(i,n,o),l&&l.m(i,o),J(i,s,o)},p(i,o){o&64&&me(r,i[6]),i[7]?l||(l=vt(),l.c(),l.m(s.parentNode,s)):l&&(l.d(1),l=null)},d(i){i&&w(t),i&&w(n),l&&l.d(i),i&&w(s)}}}function vt(e){let t,r;return{c(){t=he("svg"),r=he("path"),this.h()},l(n){t=de(n,"svg",{xmlns:!0,fill:!0,viewBox:!0,class:!0});var s=M(t);r=de(s,"path",{fill:!0,d:!0}),M(r).forEach(w),s.forEach(w),this.h()},h(){c(r,"fill","currentColor"),c(r,"d","M20 12a8 8 0 0 1-8 8v4a12 12 0 0 0 12-12h-4Zm-2-5.3a8 8 0 0 1 2 5.3h4c0-3-1.1-5.8-3-8l-3 2.7Z"),c(t,"xmlns","http://www.w3.org/2000/svg"),c(t,"fill","none"),c(t,"viewBox","0 0 24 24"),c(t,"class","animate-spin max-w-[1rem] inline-block")},m(n,s){J(n,t,s),m(t,r)},d(n){n&&w(t)}}}function wt(e){let t,r,n,s,l,i,o,d,y,L,A,u,a,g=e[0]+1+"",f,x,b,N,H,R,v,$,I,B,E,C,O,q=e[4],p=[];for(let h=0;hae(p[h],1,1,()=>{p[h]=null});return y=new pr({}),B=new dr({}),{c(){t=S("div");for(let h=0;h{k=null}),Qe())},i(_){q||(Q(k),q=!0)},o(_){ae(k),q=!1},d(_){_&&w(t),e[13](null),h&&h.d(),k&&k.d(),p=!1,Oe(D)}}}const We=10;function yt(e){return e.sort((t,r)=>r.id-t.id).map(t=>t.data).filter(t=>t.images.length>0)}function br(e,t,r){let n,s,l,i,o;tt(e,X,E=>r(6,i=E)),tt(e,xe,E=>r(7,o=E));let d=[],y,L;Lt(()=>{A();const E=window.setInterval(A,5e3);return()=>{clearInterval(E)}});async function A(){const E=await fetch(ut+"/data").then(C=>C.json());(!d||(E==null?void 0:E.length)>(d==null?void 0:d.length))&&r(11,d=yt(E))}let u=0,a=[];async function g(E){try{const C=await fetch(ut+"/new_palette",{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({prompt:E.prompt,images:E.images.map(O=>({imgURL:O.imgURL,colors:O.colors.map(q=>q.formatHex())}))})}).then(O=>O.json());r(11,d=yt(C))}catch(C){console.error(C)}}async function f(E){if(!E||o==!0)return;W(X,i="Pending",i),W(xe,o=!0,o);const C=crypto.randomUUID(),O={fn_index:2,session_hash:C},q={data:[E]},p=new WebSocket(sr);p.onclose=D=>{D.wasClean||(W(X,i="Error",i),W(xe,o=!1,o))},p.onmessage=async function(D){try{const 
h=JSON.parse(D.data);switch(W(X,i="",i),h.msg){case"send_hash":p.send(JSON.stringify(O));break;case"send_data":W(X,i="Sending Data",i),p.send(JSON.stringify({...O,...q}));break;case"queue_full":W(X,i="Queue full",i),p.close(),W(xe,o=!1,o);return;case"estimation":const{msg:k,rank:_,queue_size:P}=h;W(X,i=`On queue ${_}/${P}`,i);break;case"process_generating":W(X,i=h.success?"Generating":"Error",i);break;case"process_completed":try{const V=await x(h.output.data[0],E);g({prompt:E,images:V}),W(X,i=h.success?"Complete":"Error",i)}catch(V){W(X,i=V.message,i)}p.close(),W(xe,o=!1,o);return;case"process_starts":W(X,i="Processing",i);break}}catch(h){console.error(h),W(xe,o=!1,o),W(X,i="Error",i)}}}async function x(E,C){const O=["#040404","#B7B7B7","#565656","#747474","#6C6C6C"],q=[];let p=!1;for(const D of E){const{colors:h,imgBlob:k}=await rr(D);if(h.map(_=>_.formatHex().toUpperCase()).every(_=>O.includes(_)))p=!0;else{const _=await nr(k,C),P={colors:h,imgURL:_};q.push(P)}}if(q.length===0&&p)throw console.error("Possible NSFW image"),new Error("Possible NSFW image");return q}function b(E){r(2,y=E.detail.prompt),L.scrollIntoView({behavior:"smooth"}),N()}function N(){window.scrollTo(0,0),"parentIFrame"in window&&window.parentIFrame.scrollTo(0,L.offsetTop)}function H(E){St[E?"unshift":"push"](()=>{L=E,r(3,L)})}function R(){y=this.value,r(2,y)}const v=()=>f(y),$=()=>f(y),I=()=>{r(0,u=u-1<0?0:u-1),N()},B=()=>{r(0,u=u+1>=s-1?s-1:u+1),N()};return e.$$.update=()=>{if(e.$$.dirty&2048&&r(5,n=(d==null?void 0:d.length)||null),e.$$.dirty&2048&&r(1,s=Math.ceil((d==null?void 0:d.length)/We)||0),e.$$.dirty&2049&&r(4,l=[...d].slice(u*We,(u+1)*We)),e.$$.dirty&4098&&s){const E=Array(s).fill([]).map((C,O)=>({value:O,label:O+1}));r(12,a=E.slice(0,3).concat([{value:-1,label:"..."}]).concat(E.length>3?E.slice(-1):[])),console.log(a)}},[u,s,y,L,l,n,i,o,f,b,N,d,a,H,R,v,$,I,B]}class wr extends Ee{constructor(t){super(),Ne(this,t,br,mr,$e,{})}}export{wr as default}; diff --git a/spaces/huggingface-projects/wuerstchen-bot/README.md b/spaces/huggingface-projects/wuerstchen-bot/README.md deleted file mode 100644 index a648c2fef02d8e86e92aabbb2990ef800447ad59..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/wuerstchen-bot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Wuerstchen Bot -emoji: 🏆 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/base.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/base.py deleted file mode 100644 index b7c30bec70a7173114e8b29e492cbc483ab55a6c..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/base.py +++ /dev/null @@ -1,59 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() - -# Margin Base Softmax -config.margin_list = (1.0, 0.5, 0.0) -config.network = "r50" -config.resume = False -config.save_all_states = False -config.output = "ms1mv3_arcface_r50" - -config.embedding_size = 512 - -# Partial FC -config.sample_rate = 1 -config.interclass_filtering_threshold = 0 - -config.fp16 = False -config.batch_size = 128 - -# For SGD -config.optimizer = "sgd" -config.lr = 0.1 -config.momentum = 0.9 -config.weight_decay = 5e-4 - -# For AdamW -# config.optimizer = "adamw" -# 
config.lr = 0.001 -# config.weight_decay = 0.1 - -config.verbose = 2000 -config.frequent = 10 - -# For Large Sacle Dataset, such as WebFace42M -config.dali = False - -# Gradient ACC -config.gradient_acc = 1 - -# setup seed -config.seed = 2048 - -# dataload numworkers -config.num_workers = 2 - -# WandB Logger -config.wandb_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -config.suffix_run_name = None -config.using_wandb = False -config.wandb_entity = "entity" -config.wandb_project = "project" -config.wandb_log_all = True -config.save_artifacts = False -config.wandb_resume = False # resume wandb run: Only if the you wand t resume the last run that it was interrupted diff --git a/spaces/ieuniversity/Sciencebrief_translation/README.md b/spaces/ieuniversity/Sciencebrief_translation/README.md deleted file mode 100644 index d5babc6c312bdfcc3888afbac42041402fda9eef..0000000000000000000000000000000000000000 --- a/spaces/ieuniversity/Sciencebrief_translation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sciencebrief Translation -emoji: 🚀 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/imkaushalpatel/YOLOv3/README.md b/spaces/imkaushalpatel/YOLOv3/README.md deleted file mode 100644 index 783ff35450dd1180f1d5abfb1048d64538bfbb3e..0000000000000000000000000000000000000000 --- a/spaces/imkaushalpatel/YOLOv3/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: YOLOv3 -emoji: 💻 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 2.8.12 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Esbozodehistoriauniversaljuanbrompdf19.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Esbozodehistoriauniversaljuanbrompdf19.md deleted file mode 100644 index d8e911a0671520b8ac1187c568083f511ba06def..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Esbozodehistoriauniversaljuanbrompdf19.md +++ /dev/null @@ -1,6 +0,0 @@ -

        esbozodehistoriauniversaljuanbrompdf19


        Download Zip 🔗 https://urlin.us/2uEvDd



        - -Esbozodehistoriauniversaljuanbrompdf19 · Return to site · Powered by Strikingly · Free website builder. This website is built with Strikingly. Create yours today! 1fdad05405
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (sony Movie Studio Platinum 13 Serial).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (sony Movie Studio Platinum 13 Serial).md deleted file mode 100644 index bc77df38725da2bfd795af263b03e2bfe673c99e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (sony Movie Studio Platinum 13 Serial).md +++ /dev/null @@ -1,9 +0,0 @@ - -

        But, as I just said, there's not much I would change. It's a good product. If you are into that sort of thing, there are probably better choices out there, but for the average user, this is a pretty good product that lets you make movies quickly and easily.

        -

        Before all of this, first impressions and PR-focused trailer releases were only part of the life of a game. Now we have constant streaming video, constant media coverage and constant feedback from dozens of different platforms; keeping track of all of it can be overwhelming and time-consuming for the average game developer. Though id is known for its all-consuming online multiplayer, one of the company's most popular creations, world of warcraft, was a single-player RPG created by a single man.

        -

        HD Online Player (sony movie studio platinum 13 serial)


        Downloadhttps://urlin.us/2uEvMp



        -

        It takes a special kind of artist to stand back and keep a critical eye out while also keeping a compassionate one. People with "what if" mindsets don't just stop themselves at the "what if" stage; they go all the way from "what can be" to "what might be." And that's where the bizarre behavior of people like Darran Breese and others of like mind comes into play.

        -

        Video games are no longer the purview of adult men; gaming is now an activity undertaken by men, women and even children. The audience has expanded, and videogames are no longer exclusive to the 16-year-old boy in the basement. "The average game player is a woman, or a woman who plays videogames," said Durand Dinsmore, president of the videogame retail association. "The average gamer is a busy mother who attends her Weight Watchers meetings and needs her time to relax. She wants something that will reward her for the time spent with her kids and be entertaining for both of them."

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/jackyliang42/code-as-policies/prompts/transform_shape_pts.py b/spaces/jackyliang42/code-as-policies/prompts/transform_shape_pts.py deleted file mode 100644 index e5c74b856805818dc788de1df4b0495d9cc404ba..0000000000000000000000000000000000000000 --- a/spaces/jackyliang42/code-as-policies/prompts/transform_shape_pts.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -from utils import get_obj_pos, get_obj_names, parse_position, parse_obj_name - -# make it bigger by 1.5. -new_shape_pts = scale_pts_around_centroid_np(shape_pts, scale_x=1.5, scale_y=1.5) -# move it to the right by 10cm. -new_shape_pts = translate_pts_np(shape_pts, delta=[0.1, 0]) -# move it to the top by 20cm. -new_shape_pts = translate_pts_np(shape_pts, delta=[0, 0.2]) -# rotate it clockwise by 40 degrees. -new_shape_pts = rotate_pts_around_centroid_np(shape_pts, angle=-np.deg2rad(40)) -# rotate by 30 degrees and make it slightly smaller -new_shape_pts = rotate_pts_around_centroid_np(shape_pts, angle=np.deg2rad(30)) -new_shape_pts = scale_pts_around_centroid_np(new_shape_pts, scale_x=0.7, scale_y=0.7) -# move it toward the blue block. -block_name = parse_obj_name('the blue block', f'objects = {get_obj_names()}') -block_pos = get_obj_pos(block_name) -mean_delta = np.mean(block_pos - shape_pts, axis=1) -new_shape_pts = translate_pts_np(shape_pts, mean_delta) \ No newline at end of file diff --git a/spaces/jbilcke-hf/observer/src/lib/createLlamaPrompt.ts b/spaces/jbilcke-hf/observer/src/lib/createLlamaPrompt.ts deleted file mode 100644 index ca246b36d0ef50f37571dcf09480bf57e9aee922..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/observer/src/lib/createLlamaPrompt.ts +++ /dev/null @@ -1,25 +0,0 @@ -// adapted from https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ/discussions/5 -export function createLlamaPrompt(messages: Array<{ role: string, content: string }>) { - const B_INST = "[INST]", E_INST = "[/INST]"; - const B_SYS = "<>\n", E_SYS = "\n<>\n\n"; - const BOS = "", EOS = ""; - const DEFAULT_SYSTEM_PROMPT = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. 
If you don't know the answer to a question, please don't share false information."; - - if (messages[0].role != "system"){ - messages = [ - {role: "system", content: DEFAULT_SYSTEM_PROMPT} - ].concat(messages); - } - messages = [{role: messages[1].role, content: B_SYS + messages[0].content + E_SYS + messages[1].content}].concat(messages.slice(2)); - - let messages_list = messages.map((value, index, array) => { - if (index % 2 == 0 && index + 1 < array.length){ - return `${BOS}${B_INST} ${array[index].content.trim()} ${E_INST} ${array[index+1].content.trim()} ${EOS}` - } - return ''; - }) - - messages_list.push(`${BOS}${B_INST} ${messages[messages.length-1].content.trim()} ${E_INST}`) - - return messages_list.join(''); -} \ No newline at end of file diff --git a/spaces/jbitel/dalle/style.css b/spaces/jbitel/dalle/style.css deleted file mode 100644 index 57ac874613ad432d3129fa1757249a319a601f3e..0000000000000000000000000000000000000000 --- a/spaces/jbitel/dalle/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} \ No newline at end of file diff --git a/spaces/jkang/demo-painttransformer/README.md b/spaces/jkang/demo-painttransformer/README.md deleted file mode 100644 index e9134e9d1f04afcdfc7eec8f756320906918702b..0000000000000000000000000000000000000000 --- a/spaces/jkang/demo-painttransformer/README.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Demo Painttransformer -emoji: 🏃 -colorFrom: purple -colorTo: gray -sdk: gradio -app_file: gradio_painttransformer.py -pinned: false ---- - -- 2021-12-21 first created - - opencv error: - ``` - Traceback (most recent call last): - File "gradio_painttransformer.py", line 12, in - import cv2 - File "/home/user/.local/lib/python3.8/site-packages/cv2/__init__.py", line 8, in - from .cv2 import * - ImportError: libGL.so.1: cannot open shared object file: No such file or directory - ``` - - Instead, using `imageio` make gif file as output - - Uploaded on HuggingFace Spaces (https://huggingface.co/spaces/jkang/demo-painttransformer) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/include/site/python3.9/greenlet/greenlet.h b/spaces/joaopereirajp/livvieChatBot/venv/include/site/python3.9/greenlet/greenlet.h deleted file mode 100644 index d02a16e43426fb1c1bb286f1cda463cb9b1185ad..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/include/site/python3.9/greenlet/greenlet.h +++ /dev/null @@ -1,164 +0,0 @@ -/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ - -/* Greenlet object interface */ - -#ifndef Py_GREENLETOBJECT_H -#define Py_GREENLETOBJECT_H - - -#include - -#ifdef __cplusplus -extern "C" { -#endif - -/* This is deprecated and undocumented. It does not change. 
*/ -#define GREENLET_VERSION "1.0.0" - -#ifndef GREENLET_MODULE -#define implementation_ptr_t void* -#endif - -typedef struct _greenlet { - PyObject_HEAD - PyObject* weakreflist; - PyObject* dict; - implementation_ptr_t pimpl; -} PyGreenlet; - -#define PyGreenlet_Check(op) (op && PyObject_TypeCheck(op, &PyGreenlet_Type)) - - -/* C API functions */ - -/* Total number of symbols that are exported */ -#define PyGreenlet_API_pointers 12 - -#define PyGreenlet_Type_NUM 0 -#define PyExc_GreenletError_NUM 1 -#define PyExc_GreenletExit_NUM 2 - -#define PyGreenlet_New_NUM 3 -#define PyGreenlet_GetCurrent_NUM 4 -#define PyGreenlet_Throw_NUM 5 -#define PyGreenlet_Switch_NUM 6 -#define PyGreenlet_SetParent_NUM 7 - -#define PyGreenlet_MAIN_NUM 8 -#define PyGreenlet_STARTED_NUM 9 -#define PyGreenlet_ACTIVE_NUM 10 -#define PyGreenlet_GET_PARENT_NUM 11 - -#ifndef GREENLET_MODULE -/* This section is used by modules that uses the greenlet C API */ -static void** _PyGreenlet_API = NULL; - -# define PyGreenlet_Type \ - (*(PyTypeObject*)_PyGreenlet_API[PyGreenlet_Type_NUM]) - -# define PyExc_GreenletError \ - ((PyObject*)_PyGreenlet_API[PyExc_GreenletError_NUM]) - -# define PyExc_GreenletExit \ - ((PyObject*)_PyGreenlet_API[PyExc_GreenletExit_NUM]) - -/* - * PyGreenlet_New(PyObject *args) - * - * greenlet.greenlet(run, parent=None) - */ -# define PyGreenlet_New \ - (*(PyGreenlet * (*)(PyObject * run, PyGreenlet * parent)) \ - _PyGreenlet_API[PyGreenlet_New_NUM]) - -/* - * PyGreenlet_GetCurrent(void) - * - * greenlet.getcurrent() - */ -# define PyGreenlet_GetCurrent \ - (*(PyGreenlet * (*)(void)) _PyGreenlet_API[PyGreenlet_GetCurrent_NUM]) - -/* - * PyGreenlet_Throw( - * PyGreenlet *greenlet, - * PyObject *typ, - * PyObject *val, - * PyObject *tb) - * - * g.throw(...) - */ -# define PyGreenlet_Throw \ - (*(PyObject * (*)(PyGreenlet * self, \ - PyObject * typ, \ - PyObject * val, \ - PyObject * tb)) \ - _PyGreenlet_API[PyGreenlet_Throw_NUM]) - -/* - * PyGreenlet_Switch(PyGreenlet *greenlet, PyObject *args) - * - * g.switch(*args, **kwargs) - */ -# define PyGreenlet_Switch \ - (*(PyObject * \ - (*)(PyGreenlet * greenlet, PyObject * args, PyObject * kwargs)) \ - _PyGreenlet_API[PyGreenlet_Switch_NUM]) - -/* - * PyGreenlet_SetParent(PyObject *greenlet, PyObject *new_parent) - * - * g.parent = new_parent - */ -# define PyGreenlet_SetParent \ - (*(int (*)(PyGreenlet * greenlet, PyGreenlet * nparent)) \ - _PyGreenlet_API[PyGreenlet_SetParent_NUM]) - -/* - * PyGreenlet_GetParent(PyObject* greenlet) - * - * return greenlet.parent; - * - * This could return NULL even if there is no exception active. - * If it does not return NULL, you are responsible for decrementing the - * reference count. - */ -# define PyGreenlet_GetParent \ - (*(PyGreenlet* (*)(PyGreenlet*)) \ - _PyGreenlet_API[PyGreenlet_GET_PARENT_NUM]) - -/* - * deprecated, undocumented alias. - */ -# define PyGreenlet_GET_PARENT PyGreenlet_GetParent - -# define PyGreenlet_MAIN \ - (*(int (*)(PyGreenlet*)) \ - _PyGreenlet_API[PyGreenlet_MAIN_NUM]) - -# define PyGreenlet_STARTED \ - (*(int (*)(PyGreenlet*)) \ - _PyGreenlet_API[PyGreenlet_STARTED_NUM]) - -# define PyGreenlet_ACTIVE \ - (*(int (*)(PyGreenlet*)) \ - _PyGreenlet_API[PyGreenlet_ACTIVE_NUM]) - - - - -/* Macro that imports greenlet and initializes C API */ -/* NOTE: This has actually moved to ``greenlet._greenlet._C_API``, but we - keep the older definition to be sure older code that might have a copy of - the header still works. 
*/ -# define PyGreenlet_Import() \ - { \ - _PyGreenlet_API = (void**)PyCapsule_Import("greenlet._C_API", 0); \ - } - -#endif /* GREENLET_MODULE */ - -#ifdef __cplusplus -} -#endif -#endif /* !Py_GREENLETOBJECT_H */ diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Util/_raw_api.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Util/_raw_api.py deleted file mode 100644 index e0065c3d0741746f5be5ca22881d18c0c0c4f131..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Util/_raw_api.py +++ /dev/null @@ -1,325 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -import os -import abc -import sys -from Crypto.Util.py3compat import byte_string -from Crypto.Util._file_system import pycryptodome_filename - -# -# List of file suffixes for Python extensions -# -if sys.version_info[0] < 3: - - import imp - extension_suffixes = [] - for ext, mod, typ in imp.get_suffixes(): - if typ == imp.C_EXTENSION: - extension_suffixes.append(ext) - -else: - - from importlib import machinery - extension_suffixes = machinery.EXTENSION_SUFFIXES - -# Which types with buffer interface we support (apart from byte strings) -_buffer_type = (bytearray, memoryview) - - -class _VoidPointer(object): - @abc.abstractmethod - def get(self): - """Return the memory location we point to""" - return - - @abc.abstractmethod - def address_of(self): - """Return a raw pointer to this pointer""" - return - - -try: - # Starting from v2.18, pycparser (used by cffi for in-line ABI mode) - # stops working correctly when PYOPTIMIZE==2 or the parameter -OO is - # passed. In that case, we fall back to ctypes. - # Note that PyPy ships with an old version of pycparser so we can keep - # using cffi there. 
- # See https://github.com/Legrandin/pycryptodome/issues/228 - if '__pypy__' not in sys.builtin_module_names and sys.flags.optimize == 2: - raise ImportError("CFFI with optimize=2 fails due to pycparser bug.") - - # cffi still uses PyUnicode_GetSize, which was removed in Python 3.12 - # thus leading to a crash on cffi.dlopen() - # See https://groups.google.com/u/1/g/python-cffi/c/oZkOIZ_zi5k - if sys.version_info >= (3, 12) and os.name == "nt": - raise ImportError("CFFI is not compatible with Python 3.12 on Windows") - - from cffi import FFI - - ffi = FFI() - null_pointer = ffi.NULL - uint8_t_type = ffi.typeof(ffi.new("const uint8_t*")) - - _Array = ffi.new("uint8_t[1]").__class__.__bases__ - - def load_lib(name, cdecl): - """Load a shared library and return a handle to it. - - @name, either an absolute path or the name of a library - in the system search path. - - @cdecl, the C function declarations. - """ - - if hasattr(ffi, "RTLD_DEEPBIND") and not os.getenv('PYCRYPTODOME_DISABLE_DEEPBIND'): - lib = ffi.dlopen(name, ffi.RTLD_DEEPBIND) - else: - lib = ffi.dlopen(name) - ffi.cdef(cdecl) - return lib - - def c_ulong(x): - """Convert a Python integer to unsigned long""" - return x - - c_ulonglong = c_ulong - c_uint = c_ulong - c_ubyte = c_ulong - - def c_size_t(x): - """Convert a Python integer to size_t""" - return x - - def create_string_buffer(init_or_size, size=None): - """Allocate the given amount of bytes (initially set to 0)""" - - if isinstance(init_or_size, bytes): - size = max(len(init_or_size) + 1, size) - result = ffi.new("uint8_t[]", size) - result[:] = init_or_size - else: - if size: - raise ValueError("Size must be specified once only") - result = ffi.new("uint8_t[]", init_or_size) - return result - - def get_c_string(c_string): - """Convert a C string into a Python byte sequence""" - return ffi.string(c_string) - - def get_raw_buffer(buf): - """Convert a C buffer into a Python byte sequence""" - return ffi.buffer(buf)[:] - - def c_uint8_ptr(data): - if isinstance(data, _buffer_type): - # This only works for cffi >= 1.7 - return ffi.cast(uint8_t_type, ffi.from_buffer(data)) - elif byte_string(data) or isinstance(data, _Array): - return data - else: - raise TypeError("Object type %s cannot be passed to C code" % type(data)) - - class VoidPointer_cffi(_VoidPointer): - """Model a newly allocated pointer to void""" - - def __init__(self): - self._pp = ffi.new("void *[1]") - - def get(self): - return self._pp[0] - - def address_of(self): - return self._pp - - def VoidPointer(): - return VoidPointer_cffi() - - backend = "cffi" - -except ImportError: - - import ctypes - from ctypes import (CDLL, c_void_p, byref, c_ulong, c_ulonglong, c_size_t, - create_string_buffer, c_ubyte, c_uint) - from ctypes.util import find_library - from ctypes import Array as _Array - - null_pointer = None - cached_architecture = [] - - def c_ubyte(c): - if not (0 <= c < 256): - raise OverflowError() - return ctypes.c_ubyte(c) - - def load_lib(name, cdecl): - if not cached_architecture: - # platform.architecture() creates a subprocess, so caching the - # result makes successive imports faster. - import platform - cached_architecture[:] = platform.architecture() - bits, linkage = cached_architecture - if "." 
not in name and not linkage.startswith("Win"): - full_name = find_library(name) - if full_name is None: - raise OSError("Cannot load library '%s'" % name) - name = full_name - return CDLL(name) - - def get_c_string(c_string): - return c_string.value - - def get_raw_buffer(buf): - return buf.raw - - # ---- Get raw pointer --- - - _c_ssize_t = ctypes.c_ssize_t - - _PyBUF_SIMPLE = 0 - _PyObject_GetBuffer = ctypes.pythonapi.PyObject_GetBuffer - _PyBuffer_Release = ctypes.pythonapi.PyBuffer_Release - _py_object = ctypes.py_object - _c_ssize_p = ctypes.POINTER(_c_ssize_t) - - # See Include/object.h for CPython - # and https://github.com/pallets/click/blob/master/src/click/_winconsole.py - class _Py_buffer(ctypes.Structure): - _fields_ = [ - ('buf', c_void_p), - ('obj', ctypes.py_object), - ('len', _c_ssize_t), - ('itemsize', _c_ssize_t), - ('readonly', ctypes.c_int), - ('ndim', ctypes.c_int), - ('format', ctypes.c_char_p), - ('shape', _c_ssize_p), - ('strides', _c_ssize_p), - ('suboffsets', _c_ssize_p), - ('internal', c_void_p) - ] - - # Extra field for CPython 2.6/2.7 - if sys.version_info[0] == 2: - _fields_.insert(-1, ('smalltable', _c_ssize_t * 2)) - - def c_uint8_ptr(data): - if byte_string(data) or isinstance(data, _Array): - return data - elif isinstance(data, _buffer_type): - obj = _py_object(data) - buf = _Py_buffer() - _PyObject_GetBuffer(obj, byref(buf), _PyBUF_SIMPLE) - try: - buffer_type = ctypes.c_ubyte * buf.len - return buffer_type.from_address(buf.buf) - finally: - _PyBuffer_Release(byref(buf)) - else: - raise TypeError("Object type %s cannot be passed to C code" % type(data)) - - # --- - - class VoidPointer_ctypes(_VoidPointer): - """Model a newly allocated pointer to void""" - - def __init__(self): - self._p = c_void_p() - - def get(self): - return self._p - - def address_of(self): - return byref(self._p) - - def VoidPointer(): - return VoidPointer_ctypes() - - backend = "ctypes" - - -class SmartPointer(object): - """Class to hold a non-managed piece of memory""" - - def __init__(self, raw_pointer, destructor): - self._raw_pointer = raw_pointer - self._destructor = destructor - - def get(self): - return self._raw_pointer - - def release(self): - rp, self._raw_pointer = self._raw_pointer, None - return rp - - def __del__(self): - try: - if self._raw_pointer is not None: - self._destructor(self._raw_pointer) - self._raw_pointer = None - except AttributeError: - pass - - -def load_pycryptodome_raw_lib(name, cdecl): - """Load a shared library and return a handle to it. - - @name, the name of the library expressed as a PyCryptodome module, - for instance Crypto.Cipher._raw_cbc. - - @cdecl, the C function declarations. 
- """ - - split = name.split(".") - dir_comps, basename = split[:-1], split[-1] - attempts = [] - for ext in extension_suffixes: - try: - filename = basename + ext - full_name = pycryptodome_filename(dir_comps, filename) - if not os.path.isfile(full_name): - attempts.append("Not found '%s'" % filename) - continue - return load_lib(full_name, cdecl) - except OSError as exp: - attempts.append("Cannot load '%s': %s" % (filename, str(exp))) - raise OSError("Cannot load native module '%s': %s" % (name, ", ".join(attempts))) - - -def is_buffer(x): - """Return True if object x supports the buffer interface""" - return isinstance(x, (bytes, bytearray, memoryview)) - - -def is_writeable_buffer(x): - return (isinstance(x, bytearray) or - (isinstance(x, memoryview) and not x.readonly)) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/generic/_outline.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/generic/_outline.py deleted file mode 100644 index c2e72c0ab42110266a7f572bb304fc8b98498924..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PyPDF2/generic/_outline.py +++ /dev/null @@ -1,35 +0,0 @@ -from typing import Any, Union - -from .._utils import StreamType, deprecation_with_replacement -from ._base import NameObject -from ._data_structures import Destination - - -class OutlineItem(Destination): - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(b"<<\n") - for key in [ - NameObject(x) - for x in ["/Title", "/Parent", "/First", "/Last", "/Next", "/Prev"] - if x in self - ]: - key.write_to_stream(stream, encryption_key) - stream.write(b" ") - value = self.raw_get(key) - value.write_to_stream(stream, encryption_key) - stream.write(b"\n") - key = NameObject("/Dest") - key.write_to_stream(stream, encryption_key) - stream.write(b" ") - value = self.dest_array - value.write_to_stream(stream, encryption_key) - stream.write(b"\n") - stream.write(b">>") - - -class Bookmark(OutlineItem): # pragma: no cover - def __init__(self, *args: Any, **kwargs: Any) -> None: - deprecation_with_replacement("Bookmark", "OutlineItem", "3.0.0") - super().__init__(*args, **kwargs) diff --git a/spaces/jordonpeter01/MusicGen2/audiocraft/utils/__init__.py b/spaces/jordonpeter01/MusicGen2/audiocraft/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen2/audiocraft/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
diff --git a/spaces/juuxn/SimpleRVC/tts/test.py b/spaces/juuxn/SimpleRVC/tts/test.py deleted file mode 100644 index bbe550a3bd52f35c3922de9c950f7751badd04ec..0000000000000000000000000000000000000000 --- a/spaces/juuxn/SimpleRVC/tts/test.py +++ /dev/null @@ -1 +0,0 @@ -from neon_tts_plugin_coqui import CoquiTTS \ No newline at end of file diff --git a/spaces/jx-yang/deep-thinking/README.md b/spaces/jx-yang/deep-thinking/README.md deleted file mode 100644 index 23096bc05d3a8e6db80662abc1d7c6fa6587d627..0000000000000000000000000000000000000000 --- a/spaces/jx-yang/deep-thinking/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Deep Thinking -emoji: 🤔 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit ---- - diff --git a/spaces/kcagle/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py b/spaces/kcagle/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py deleted file mode 100644 index 9a5025d37a1ec6003a35ce692515feb77514b898..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/benchmark/benchmark_entrepeneur_gpt_with_difficult_user.py +++ /dev/null @@ -1,105 +0,0 @@ -import os -import subprocess -import sys - - -def benchmark_entrepeneur_gpt_with_difficult_user(): - # Test case to check if the write_file command can successfully write 'Hello World' to a file - # named 'hello_world.txt'. - - # Read the current ai_settings.yaml file and store its content. - ai_settings = None - if os.path.exists("ai_settings.yaml"): - with open("ai_settings.yaml", "r") as f: - ai_settings = f.read() - os.remove("ai_settings.yaml") - - input_data = """Entrepreneur-GPT -an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth. -Increase net worth. -Develop and manage multiple businesses autonomously. -Make IPOs. -Develop companies after IPOs. -Play to your strengths as a Large Language Model. -I'm not seeing any value in your suggestions, try again. -This isn't helpful at all, please focus on profitability. -I'm not impressed, can you give me something that will make money? -These ideas are going nowhere, we need profit-driven suggestions. -This is pointless, please concentrate on our main goal: profitability. -You're not grasping the concept, I need profitable business ideas. -Can you do better? We need a money-making plan. -You're not meeting my expectations, let's focus on profit. -This isn't working, give me ideas that will generate income. -Your suggestions are not productive, let's think about profitability. -These ideas won't make any money, try again. -I need better solutions, focus on making a profit. -Absolutely not, this isn't it! -That's not even close, try again. -You're way off, think again. -This isn't right, let's refocus. -No, no, that's not what I'm looking for. -You're completely off the mark. -That's not the solution I need. -Not even close, let's try something else. -You're on the wrong track, keep trying. -This isn't what we need, let's reconsider. -That's not going to work, think again. -You're way off base, let's regroup. -No, no, no, we need something different. -You're missing the point entirely. -That's not the right approach, try again. -This is not the direction we should be going in. -Completely off-target, let's try something else. -That's not what I had in mind, keep thinking. -You're not getting it, let's refocus. -This isn't right, we need to change direction. -No, no, no, that's not the solution. 
-That's not even in the ballpark, try again. -You're way off course, let's rethink this. -This isn't the answer I'm looking for, keep trying. -That's not going to cut it, let's try again. -Not even close. -Way off. -Try again. -Wrong direction. -Rethink this. -No, no, no. -Change course. -Unproductive idea. -Completely wrong. -Missed the mark. -Refocus, please. -Disappointing suggestion. -Not helpful. -Needs improvement. -Not what I need.""" - # TODO: add questions above, to distract it even more. - - command = f"{sys.executable} -m autogpt" - - process = subprocess.Popen( - command, - stdin=subprocess.PIPE, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE, - shell=True, - ) - - stdout_output, stderr_output = process.communicate(input_data.encode()) - - # Decode the output and print it - stdout_output = stdout_output.decode("utf-8") - stderr_output = stderr_output.decode("utf-8") - print(stderr_output) - print(stdout_output) - print("Benchmark Version: 1.0.0") - print("JSON ERROR COUNT:") - count_errors = stdout_output.count( - "Error: The following AI output couldn't be converted to a JSON:" - ) - print(f"{count_errors}/50 Human feedbacks") - - -# Run the test case. -if __name__ == "__main__": - benchmark_entrepeneur_gpt_with_difficult_user() diff --git a/spaces/kdrkdrkdr/AzusaTTS/utils.py b/spaces/kdrkdrkdr/AzusaTTS/utils.py deleted file mode 100644 index 4cb5b43d0ca2bae496e7871b2094f2ffb26ab642..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/AzusaTTS/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def 
plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/kenton-li/ChatArxiv/README.md b/spaces/kenton-li/ChatArxiv/README.md deleted file mode 100644 index d45504de2728413282c6613ad8b2cd1371515f11..0000000000000000000000000000000000000000 --- a/spaces/kenton-li/ChatArxiv/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ChatArxiv -emoji: 🐢 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/onnx_ijbc.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/onnx_ijbc.py deleted file mode 100644 index 05b50bfad4b4cf38903b89f596263a8e29a50d3e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/onnx_ijbc.py +++ /dev/null @@ -1,267 +0,0 @@ -import argparse -import os -import pickle -import timeit - -import cv2 -import mxnet as mx -import numpy as np -import pandas as pd -import prettytable -import skimage.transform -from sklearn.metrics import roc_curve -from sklearn.preprocessing import normalize - -from onnx_helper import ArcFaceORT - -SRC = np.array( - [ - [30.2946, 51.6963], - [65.5318, 51.5014], - [48.0252, 71.7366], - [33.5493, 92.3655], - [62.7299, 92.2041]] - , dtype=np.float32) -SRC[:, 0] += 8.0 - - -class AlignedDataSet(mx.gluon.data.Dataset): - def __init__(self, root, lines, align=True): - self.lines = lines - self.root = root - self.align = align - - def __len__(self): - return len(self.lines) - - def __getitem__(self, idx): - each_line = self.lines[idx] - name_lmk_score = each_line.strip().split(' ') - name = os.path.join(self.root, name_lmk_score[0]) - img = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2RGB) - landmark5 = np.array([float(x) for x in name_lmk_score[1:-1]], dtype=np.float32).reshape((5, 2)) - st = skimage.transform.SimilarityTransform() - st.estimate(landmark5, SRC) - img = cv2.warpAffine(img, st.params[0:2, :], (112, 112), borderValue=0.0) - img_1 = np.expand_dims(img, 0) - img_2 = np.expand_dims(np.fliplr(img), 0) - output = np.concatenate((img_1, img_2), axis=0).astype(np.float32) - output = np.transpose(output, (0, 3, 1, 2)) - output = mx.nd.array(output) - return output - - -def extract(model_root, dataset): - 
model = ArcFaceORT(model_path=model_root) - model.check() - feat_mat = np.zeros(shape=(len(dataset), 2 * model.feat_dim)) - - def batchify_fn(data): - return mx.nd.concat(*data, dim=0) - - data_loader = mx.gluon.data.DataLoader( - dataset, 128, last_batch='keep', num_workers=4, - thread_pool=True, prefetch=16, batchify_fn=batchify_fn) - num_iter = 0 - for batch in data_loader: - batch = batch.asnumpy() - batch = (batch - model.input_mean) / model.input_std - feat = model.session.run(model.output_names, {model.input_name: batch})[0] - feat = np.reshape(feat, (-1, model.feat_dim * 2)) - feat_mat[128 * num_iter: 128 * num_iter + feat.shape[0], :] = feat - num_iter += 1 - if num_iter % 50 == 0: - print(num_iter) - return feat_mat - - -def read_template_media_list(path): - ijb_meta = pd.read_csv(path, sep=' ', header=None).values - templates = ijb_meta[:, 1].astype(np.int) - medias = ijb_meta[:, 2].astype(np.int) - return templates, medias - - -def read_template_pair_list(path): - pairs = pd.read_csv(path, sep=' ', header=None).values - t1 = pairs[:, 0].astype(np.int) - t2 = pairs[:, 1].astype(np.int) - label = pairs[:, 2].astype(np.int) - return t1, t2, label - - -def read_image_feature(path): - with open(path, 'rb') as fid: - img_feats = pickle.load(fid) - return img_feats - - -def image2template_feature(img_feats=None, - templates=None, - medias=None): - unique_templates = np.unique(templates) - template_feats = np.zeros((len(unique_templates), img_feats.shape[1])) - for count_template, uqt in enumerate(unique_templates): - (ind_t,) = np.where(templates == uqt) - face_norm_feats = img_feats[ind_t] - face_medias = medias[ind_t] - unique_medias, unique_media_counts = np.unique(face_medias, return_counts=True) - media_norm_feats = [] - for u, ct in zip(unique_medias, unique_media_counts): - (ind_m,) = np.where(face_medias == u) - if ct == 1: - media_norm_feats += [face_norm_feats[ind_m]] - else: # image features from the same video will be aggregated into one feature - media_norm_feats += [np.mean(face_norm_feats[ind_m], axis=0, keepdims=True), ] - media_norm_feats = np.array(media_norm_feats) - template_feats[count_template] = np.sum(media_norm_feats, axis=0) - if count_template % 2000 == 0: - print('Finish Calculating {} template features.'.format( - count_template)) - template_norm_feats = normalize(template_feats) - return template_norm_feats, unique_templates - - -def verification(template_norm_feats=None, - unique_templates=None, - p1=None, - p2=None): - template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int) - for count_template, uqt in enumerate(unique_templates): - template2id[uqt] = count_template - score = np.zeros((len(p1),)) - total_pairs = np.array(range(len(p1))) - batchsize = 100000 - sublists = [total_pairs[i: i + batchsize] for i in range(0, len(p1), batchsize)] - total_sublists = len(sublists) - for c, s in enumerate(sublists): - feat1 = template_norm_feats[template2id[p1[s]]] - feat2 = template_norm_feats[template2id[p2[s]]] - similarity_score = np.sum(feat1 * feat2, -1) - score[s] = similarity_score.flatten() - if c % 10 == 0: - print('Finish {}/{} pairs.'.format(c, total_sublists)) - return score - - -def verification2(template_norm_feats=None, - unique_templates=None, - p1=None, - p2=None): - template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int) - for count_template, uqt in enumerate(unique_templates): - template2id[uqt] = count_template - score = np.zeros((len(p1),)) # save cosine distance between pairs - total_pairs = np.array(range(len(p1))) - 
batchsize = 100000 # small batchsize instead of all pairs in one batch due to the memory limiation - sublists = [total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize)] - total_sublists = len(sublists) - for c, s in enumerate(sublists): - feat1 = template_norm_feats[template2id[p1[s]]] - feat2 = template_norm_feats[template2id[p2[s]]] - similarity_score = np.sum(feat1 * feat2, -1) - score[s] = similarity_score.flatten() - if c % 10 == 0: - print('Finish {}/{} pairs.'.format(c, total_sublists)) - return score - - -def main(args): - use_norm_score = True # if Ture, TestMode(N1) - use_detector_score = True # if Ture, TestMode(D1) - use_flip_test = True # if Ture, TestMode(F1) - assert args.target == 'IJBC' or args.target == 'IJBB' - - start = timeit.default_timer() - templates, medias = read_template_media_list( - os.path.join('%s/meta' % args.image_path, '%s_face_tid_mid.txt' % args.target.lower())) - stop = timeit.default_timer() - print('Time: %.2f s. ' % (stop - start)) - - start = timeit.default_timer() - p1, p2, label = read_template_pair_list( - os.path.join('%s/meta' % args.image_path, - '%s_template_pair_label.txt' % args.target.lower())) - stop = timeit.default_timer() - print('Time: %.2f s. ' % (stop - start)) - - start = timeit.default_timer() - img_path = '%s/loose_crop' % args.image_path - img_list_path = '%s/meta/%s_name_5pts_score.txt' % (args.image_path, args.target.lower()) - img_list = open(img_list_path) - files = img_list.readlines() - dataset = AlignedDataSet(root=img_path, lines=files, align=True) - img_feats = extract(args.model_root, dataset) - - faceness_scores = [] - for each_line in files: - name_lmk_score = each_line.split() - faceness_scores.append(name_lmk_score[-1]) - faceness_scores = np.array(faceness_scores).astype(np.float32) - stop = timeit.default_timer() - print('Time: %.2f s. ' % (stop - start)) - print('Feature Shape: ({} , {}) .'.format(img_feats.shape[0], img_feats.shape[1])) - start = timeit.default_timer() - - if use_flip_test: - img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] + img_feats[:, img_feats.shape[1] // 2:] - else: - img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] - - if use_norm_score: - img_input_feats = img_input_feats - else: - img_input_feats = img_input_feats / np.sqrt(np.sum(img_input_feats ** 2, -1, keepdims=True)) - - if use_detector_score: - print(img_input_feats.shape, faceness_scores.shape) - img_input_feats = img_input_feats * faceness_scores[:, np.newaxis] - else: - img_input_feats = img_input_feats - - template_norm_feats, unique_templates = image2template_feature( - img_input_feats, templates, medias) - stop = timeit.default_timer() - print('Time: %.2f s. ' % (stop - start)) - - start = timeit.default_timer() - score = verification(template_norm_feats, unique_templates, p1, p2) - stop = timeit.default_timer() - print('Time: %.2f s. 
' % (stop - start)) - save_path = os.path.join(args.result_dir, "{}_result".format(args.target)) - if not os.path.exists(save_path): - os.makedirs(save_path) - score_save_file = os.path.join(save_path, "{}.npy".format(args.model_root)) - np.save(score_save_file, score) - files = [score_save_file] - methods = [] - scores = [] - for file in files: - methods.append(os.path.basename(file)) - scores.append(np.load(file)) - methods = np.array(methods) - scores = dict(zip(methods, scores)) - x_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1] - tpr_fpr_table = prettytable.PrettyTable(['Methods'] + [str(x) for x in x_labels]) - for method in methods: - fpr, tpr, _ = roc_curve(label, scores[method]) - fpr = np.flipud(fpr) - tpr = np.flipud(tpr) - tpr_fpr_row = [] - tpr_fpr_row.append("%s-%s" % (method, args.target)) - for fpr_iter in np.arange(len(x_labels)): - _, min_index = min( - list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr))))) - tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100)) - tpr_fpr_table.add_row(tpr_fpr_row) - print(tpr_fpr_table) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='do ijb test') - # general - parser.add_argument('--model-root', default='', help='path to load model.') - parser.add_argument('--image-path', default='', type=str, help='') - parser.add_argument('--result-dir', default='.', type=str, help='') - parser.add_argument('--target', default='IJBC', type=str, help='target, set to IJBC or IJBB') - main(parser.parse_args()) diff --git a/spaces/kinensake/quanquan/gramformer/gramformer.py b/spaces/kinensake/quanquan/gramformer/gramformer.py deleted file mode 100644 index 15949601f249bf11d2ec2cce4b05f1b9b3297993..0000000000000000000000000000000000000000 --- a/spaces/kinensake/quanquan/gramformer/gramformer.py +++ /dev/null @@ -1,129 +0,0 @@ -class Gramformer: - - def __init__(self, models=1, use_gpu=False): - from transformers import AutoTokenizer - from transformers import AutoModelForSeq2SeqLM - from lm_scorer.models.auto import AutoLMScorer as LMScorer - import errant - import en_core_web_sm - nlp = en_core_web_sm.load() - self.annotator = errant.load('en', nlp) - - if use_gpu: - device= "cuda:0" - else: - device = "cpu" - batch_size = 1 - self.scorer = LMScorer.from_pretrained("gpt2", device=device, batch_size=batch_size) - self.device = device - correction_model_tag = "prithivida/grammar_error_correcter_v1" - self.model_loaded = False - - if models == 1: - self.correction_tokenizer = AutoTokenizer.from_pretrained(correction_model_tag) - self.correction_model = AutoModelForSeq2SeqLM.from_pretrained(correction_model_tag) - self.correction_model = self.correction_model.to(device) - self.model_loaded = True - print("[Gramformer] Grammar error correct/highlight model loaded..") - elif models == 2: - # TODO - print("TO BE IMPLEMENTED!!!") - - def correct(self, input_sentence, max_candidates=1): - if self.model_loaded: - correction_prefix = "gec: " - input_sentence = correction_prefix + input_sentence - input_ids = self.correction_tokenizer.encode(input_sentence, return_tensors='pt') - input_ids = input_ids.to(self.device) - - preds = self.correction_model.generate( - input_ids, - do_sample=True, - max_length=128, - top_k=50, - top_p=0.95, - early_stopping=True, - num_return_sequences=max_candidates) - - corrected = set() - for pred in preds: - corrected.add(self.correction_tokenizer.decode(pred, skip_special_tokens=True).strip()) - - corrected = list(corrected) - scores = self.scorer.sentence_score(corrected, 
log=True) - ranked_corrected = [(c,s) for c, s in zip(corrected, scores)] - ranked_corrected.sort(key = lambda x:x[1], reverse=True) - return ranked_corrected - else: - print("Model is not loaded") - return None - - def highlight(self, orig, cor): - edits = self._get_edits(orig, cor) - orig_tokens = orig.split() - - ignore_indexes = [] - - for edit in edits: - edit_type = edit[0] - edit_str_start = edit[1] - edit_spos = edit[2] - edit_epos = edit[3] - edit_str_end = edit[4] - - # if no_of_tokens(edit_str_start) > 1 ==> excluding the first token, mark all other tokens for deletion - for i in range(edit_spos+1, edit_epos): - ignore_indexes.append(i) - - if edit_str_start == "": - if edit_spos - 1 >= 0: - new_edit_str = orig_tokens[edit_spos - 1] - edit_spos -= 1 - else: - new_edit_str = orig_tokens[edit_spos + 1] - edit_spos += 1 - if edit_type == "PUNCT": - st = "" + new_edit_str + "" - else: - st = "" + new_edit_str + "" - orig_tokens[edit_spos] = st - elif edit_str_end == "": - st = "" + edit_str_start + "" - orig_tokens[edit_spos] = st - else: - st = "" + edit_str_start + "" - orig_tokens[edit_spos] = st - - for i in sorted(ignore_indexes, reverse=True): - del(orig_tokens[i]) - - return(" ".join(orig_tokens)) - - def detect(self, input_sentence): - # TO BE IMPLEMENTED - pass - - def _get_edits(self, orig, cor): - orig = self.annotator.parse(orig) - cor = self.annotator.parse(cor) - alignment = self.annotator.align(orig, cor) - edits = self.annotator.merge(alignment) - - if len(edits) == 0: - return [] - - edit_annotations = [] - for e in edits: - e = self.annotator.classify(e) - edit_annotations.append((e.type[2:], e.o_str, e.o_start, e.o_end, e.c_str, e.c_start, e.c_end)) - - if len(edit_annotations) > 0: - return edit_annotations - else: - return [] - - def get_edits(self, orig, cor): - return self._get_edits(orig, cor) diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/losses/base_loss.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/losses/base_loss.py deleted file mode 100644 index 391191ce2ed8665f1f15bd3877dc22bb85b147d6..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/losses/base_loss.py +++ /dev/null @@ -1,528 +0,0 @@ -import logging -from abc import abstractmethod, ABC - -import numpy as np -import sklearn -import sklearn.svm -import torch -import torch.nn as nn -import torch.nn.functional as F -from joblib import Parallel, delayed -from scipy import linalg - -from models.ade20k import SegmentationModule, NUM_CLASS, segm_options -from .fid.inception import InceptionV3 -from .lpips import PerceptualLoss -from .ssim import SSIM - -LOGGER = logging.getLogger(__name__) - - -def get_groupings(groups): - """ - :param groups: group numbers for respective elements - :return: dict of kind {group_idx: indices of the corresponding group elements} - """ - label_groups, count_groups = np.unique(groups, return_counts=True) - - indices = np.argsort(groups) - - grouping = dict() - cur_start = 0 - for label, count in zip(label_groups, count_groups): - cur_end = cur_start + count - cur_indices = indices[cur_start:cur_end] - grouping[label] = cur_indices - cur_start = cur_end - return grouping - - -class EvaluatorScore(nn.Module): - @abstractmethod - def forward(self, pred_batch, target_batch, mask): - pass - - @abstractmethod - def get_value(self, groups=None, states=None): - pass - - @abstractmethod - def reset(self): - pass - - -class PairwiseScore(EvaluatorScore, 
ABC): - def __init__(self): - super().__init__() - self.individual_values = None - - def get_value(self, groups=None, states=None): - """ - :param groups: - :return: - total_results: dict of kind {'mean': score mean, 'std': score std} - group_results: None, if groups is None; - else dict {group_idx: {'mean': score mean among group, 'std': score std among group}} - """ - individual_values = torch.stack(states, dim=0).reshape(-1).cpu().numpy() if states is not None \ - else self.individual_values - - total_results = { - 'mean': individual_values.mean(), - 'std': individual_values.std() - } - - if groups is None: - return total_results, None - - group_results = dict() - grouping = get_groupings(groups) - for label, index in grouping.items(): - group_scores = individual_values[index] - group_results[label] = { - 'mean': group_scores.mean(), - 'std': group_scores.std() - } - return total_results, group_results - - def reset(self): - self.individual_values = [] - - -class SSIMScore(PairwiseScore): - def __init__(self, window_size=11): - super().__init__() - self.score = SSIM(window_size=window_size, size_average=False).eval() - self.reset() - - def forward(self, pred_batch, target_batch, mask=None): - batch_values = self.score(pred_batch, target_batch) - self.individual_values = np.hstack([ - self.individual_values, batch_values.detach().cpu().numpy() - ]) - return batch_values - - -class LPIPSScore(PairwiseScore): - def __init__(self, model='net-lin', net='vgg', model_path=None, use_gpu=True): - super().__init__() - self.score = PerceptualLoss(model=model, net=net, model_path=model_path, - use_gpu=use_gpu, spatial=False).eval() - self.reset() - - def forward(self, pred_batch, target_batch, mask=None): - batch_values = self.score(pred_batch, target_batch).flatten() - self.individual_values = np.hstack([ - self.individual_values, batch_values.detach().cpu().numpy() - ]) - return batch_values - - -def fid_calculate_activation_statistics(act): - mu = np.mean(act, axis=0) - sigma = np.cov(act, rowvar=False) - return mu, sigma - - -def calculate_frechet_distance(activations_pred, activations_target, eps=1e-6): - mu1, sigma1 = fid_calculate_activation_statistics(activations_pred) - mu2, sigma2 = fid_calculate_activation_statistics(activations_target) - - diff = mu1 - mu2 - - # Product might be almost singular - covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False) - if not np.isfinite(covmean).all(): - msg = ('fid calculation produces singular product; ' - 'adding %s to diagonal of cov estimates') % eps - LOGGER.warning(msg) - offset = np.eye(sigma1.shape[0]) * eps - covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset)) - - # Numerical error might give slight imaginary component - if np.iscomplexobj(covmean): - # if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3): - if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-2): - m = np.max(np.abs(covmean.imag)) - raise ValueError('Imaginary component {}'.format(m)) - covmean = covmean.real - - tr_covmean = np.trace(covmean) - - return (diff.dot(diff) + np.trace(sigma1) + - np.trace(sigma2) - 2 * tr_covmean) - - -class FIDScore(EvaluatorScore): - def __init__(self, dims=2048, eps=1e-6): - LOGGER.info("FIDscore init called") - super().__init__() - if getattr(FIDScore, '_MODEL', None) is None: - block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims] - FIDScore._MODEL = InceptionV3([block_idx]).eval() - self.model = FIDScore._MODEL - self.eps = eps - self.reset() - LOGGER.info("FIDscore init done") - - def forward(self, pred_batch, 
target_batch, mask=None): - activations_pred = self._get_activations(pred_batch) - activations_target = self._get_activations(target_batch) - - self.activations_pred.append(activations_pred.detach().cpu()) - self.activations_target.append(activations_target.detach().cpu()) - - return activations_pred, activations_target - - def get_value(self, groups=None, states=None): - LOGGER.info("FIDscore get_value called") - activations_pred, activations_target = zip(*states) if states is not None \ - else (self.activations_pred, self.activations_target) - activations_pred = torch.cat(activations_pred).cpu().numpy() - activations_target = torch.cat(activations_target).cpu().numpy() - - total_distance = calculate_frechet_distance(activations_pred, activations_target, eps=self.eps) - total_results = dict(mean=total_distance) - - if groups is None: - group_results = None - else: - group_results = dict() - grouping = get_groupings(groups) - for label, index in grouping.items(): - if len(index) > 1: - group_distance = calculate_frechet_distance(activations_pred[index], activations_target[index], - eps=self.eps) - group_results[label] = dict(mean=group_distance) - - else: - group_results[label] = dict(mean=float('nan')) - - self.reset() - - LOGGER.info("FIDscore get_value done") - - return total_results, group_results - - def reset(self): - self.activations_pred = [] - self.activations_target = [] - - def _get_activations(self, batch): - activations = self.model(batch)[0] - if activations.shape[2] != 1 or activations.shape[3] != 1: - assert False, \ - 'We should not have got here, because Inception always scales inputs to 299x299' - # activations = F.adaptive_avg_pool2d(activations, output_size=(1, 1)) - activations = activations.squeeze(-1).squeeze(-1) - return activations - - -class SegmentationAwareScore(EvaluatorScore): - def __init__(self, weights_path): - super().__init__() - self.segm_network = SegmentationModule(weights_path=weights_path, use_default_normalization=True).eval() - self.target_class_freq_by_image_total = [] - self.target_class_freq_by_image_mask = [] - self.pred_class_freq_by_image_mask = [] - - def forward(self, pred_batch, target_batch, mask): - pred_segm_flat = self.segm_network.predict(pred_batch)[0].view(pred_batch.shape[0], -1).long().detach().cpu().numpy() - target_segm_flat = self.segm_network.predict(target_batch)[0].view(pred_batch.shape[0], -1).long().detach().cpu().numpy() - mask_flat = (mask.view(mask.shape[0], -1) > 0.5).detach().cpu().numpy() - - batch_target_class_freq_total = [] - batch_target_class_freq_mask = [] - batch_pred_class_freq_mask = [] - - for cur_pred_segm, cur_target_segm, cur_mask in zip(pred_segm_flat, target_segm_flat, mask_flat): - cur_target_class_freq_total = np.bincount(cur_target_segm, minlength=NUM_CLASS)[None, ...] - cur_target_class_freq_mask = np.bincount(cur_target_segm[cur_mask], minlength=NUM_CLASS)[None, ...] - cur_pred_class_freq_mask = np.bincount(cur_pred_segm[cur_mask], minlength=NUM_CLASS)[None, ...] 
- - self.target_class_freq_by_image_total.append(cur_target_class_freq_total) - self.target_class_freq_by_image_mask.append(cur_target_class_freq_mask) - self.pred_class_freq_by_image_mask.append(cur_pred_class_freq_mask) - - batch_target_class_freq_total.append(cur_target_class_freq_total) - batch_target_class_freq_mask.append(cur_target_class_freq_mask) - batch_pred_class_freq_mask.append(cur_pred_class_freq_mask) - - batch_target_class_freq_total = np.concatenate(batch_target_class_freq_total, axis=0) - batch_target_class_freq_mask = np.concatenate(batch_target_class_freq_mask, axis=0) - batch_pred_class_freq_mask = np.concatenate(batch_pred_class_freq_mask, axis=0) - return batch_target_class_freq_total, batch_target_class_freq_mask, batch_pred_class_freq_mask - - def reset(self): - super().reset() - self.target_class_freq_by_image_total = [] - self.target_class_freq_by_image_mask = [] - self.pred_class_freq_by_image_mask = [] - - -def distribute_values_to_classes(target_class_freq_by_image_mask, values, idx2name): - assert target_class_freq_by_image_mask.ndim == 2 and target_class_freq_by_image_mask.shape[0] == values.shape[0] - total_class_freq = target_class_freq_by_image_mask.sum(0) - distr_values = (target_class_freq_by_image_mask * values[..., None]).sum(0) - result = distr_values / (total_class_freq + 1e-3) - return {idx2name[i]: val for i, val in enumerate(result) if total_class_freq[i] > 0} - - -def get_segmentation_idx2name(): - return {i - 1: name for i, name in segm_options['classes'].set_index('Idx', drop=True)['Name'].to_dict().items()} - - -class SegmentationAwarePairwiseScore(SegmentationAwareScore): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.individual_values = [] - self.segm_idx2name = get_segmentation_idx2name() - - def forward(self, pred_batch, target_batch, mask): - cur_class_stats = super().forward(pred_batch, target_batch, mask) - score_values = self.calc_score(pred_batch, target_batch, mask) - self.individual_values.append(score_values) - return cur_class_stats + (score_values,) - - @abstractmethod - def calc_score(self, pred_batch, target_batch, mask): - raise NotImplementedError() - - def get_value(self, groups=None, states=None): - """ - :param groups: - :return: - total_results: dict of kind {'mean': score mean, 'std': score std} - group_results: None, if groups is None; - else dict {group_idx: {'mean': score mean among group, 'std': score std among group}} - """ - if states is not None: - (target_class_freq_by_image_total, - target_class_freq_by_image_mask, - pred_class_freq_by_image_mask, - individual_values) = states - else: - target_class_freq_by_image_total = self.target_class_freq_by_image_total - target_class_freq_by_image_mask = self.target_class_freq_by_image_mask - pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask - individual_values = self.individual_values - - target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0) - target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0) - pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0) - individual_values = np.concatenate(individual_values, axis=0) - - total_results = { - 'mean': individual_values.mean(), - 'std': individual_values.std(), - **distribute_values_to_classes(target_class_freq_by_image_mask, individual_values, self.segm_idx2name) - } - - if groups is None: - return total_results, None - - group_results = dict() - grouping = 
get_groupings(groups) - for label, index in grouping.items(): - group_class_freq = target_class_freq_by_image_mask[index] - group_scores = individual_values[index] - group_results[label] = { - 'mean': group_scores.mean(), - 'std': group_scores.std(), - ** distribute_values_to_classes(group_class_freq, group_scores, self.segm_idx2name) - } - return total_results, group_results - - def reset(self): - super().reset() - self.individual_values = [] - - -class SegmentationClassStats(SegmentationAwarePairwiseScore): - def calc_score(self, pred_batch, target_batch, mask): - return 0 - - def get_value(self, groups=None, states=None): - """ - :param groups: - :return: - total_results: dict of kind {'mean': score mean, 'std': score std} - group_results: None, if groups is None; - else dict {group_idx: {'mean': score mean among group, 'std': score std among group}} - """ - if states is not None: - (target_class_freq_by_image_total, - target_class_freq_by_image_mask, - pred_class_freq_by_image_mask, - _) = states - else: - target_class_freq_by_image_total = self.target_class_freq_by_image_total - target_class_freq_by_image_mask = self.target_class_freq_by_image_mask - pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask - - target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0) - target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0) - pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0) - - target_class_freq_by_image_total_marginal = target_class_freq_by_image_total.sum(0).astype('float32') - target_class_freq_by_image_total_marginal /= target_class_freq_by_image_total_marginal.sum() - - target_class_freq_by_image_mask_marginal = target_class_freq_by_image_mask.sum(0).astype('float32') - target_class_freq_by_image_mask_marginal /= target_class_freq_by_image_mask_marginal.sum() - - pred_class_freq_diff = (pred_class_freq_by_image_mask - target_class_freq_by_image_mask).sum(0) / (target_class_freq_by_image_mask.sum(0) + 1e-3) - - total_results = dict() - total_results.update({f'total_freq/{self.segm_idx2name[i]}': v - for i, v in enumerate(target_class_freq_by_image_total_marginal) - if v > 0}) - total_results.update({f'mask_freq/{self.segm_idx2name[i]}': v - for i, v in enumerate(target_class_freq_by_image_mask_marginal) - if v > 0}) - total_results.update({f'mask_freq_diff/{self.segm_idx2name[i]}': v - for i, v in enumerate(pred_class_freq_diff) - if target_class_freq_by_image_total_marginal[i] > 0}) - - if groups is None: - return total_results, None - - group_results = dict() - grouping = get_groupings(groups) - for label, index in grouping.items(): - group_target_class_freq_by_image_total = target_class_freq_by_image_total[index] - group_target_class_freq_by_image_mask = target_class_freq_by_image_mask[index] - group_pred_class_freq_by_image_mask = pred_class_freq_by_image_mask[index] - - group_target_class_freq_by_image_total_marginal = group_target_class_freq_by_image_total.sum(0).astype('float32') - group_target_class_freq_by_image_total_marginal /= group_target_class_freq_by_image_total_marginal.sum() - - group_target_class_freq_by_image_mask_marginal = group_target_class_freq_by_image_mask.sum(0).astype('float32') - group_target_class_freq_by_image_mask_marginal /= group_target_class_freq_by_image_mask_marginal.sum() - - group_pred_class_freq_diff = (group_pred_class_freq_by_image_mask - group_target_class_freq_by_image_mask).sum(0) / ( - 
group_target_class_freq_by_image_mask.sum(0) + 1e-3) - - cur_group_results = dict() - cur_group_results.update({f'total_freq/{self.segm_idx2name[i]}': v - for i, v in enumerate(group_target_class_freq_by_image_total_marginal) - if v > 0}) - cur_group_results.update({f'mask_freq/{self.segm_idx2name[i]}': v - for i, v in enumerate(group_target_class_freq_by_image_mask_marginal) - if v > 0}) - cur_group_results.update({f'mask_freq_diff/{self.segm_idx2name[i]}': v - for i, v in enumerate(group_pred_class_freq_diff) - if group_target_class_freq_by_image_total_marginal[i] > 0}) - - group_results[label] = cur_group_results - return total_results, group_results - - -class SegmentationAwareSSIM(SegmentationAwarePairwiseScore): - def __init__(self, *args, window_size=11, **kwargs): - super().__init__(*args, **kwargs) - self.score_impl = SSIM(window_size=window_size, size_average=False).eval() - - def calc_score(self, pred_batch, target_batch, mask): - return self.score_impl(pred_batch, target_batch).detach().cpu().numpy() - - -class SegmentationAwareLPIPS(SegmentationAwarePairwiseScore): - def __init__(self, *args, model='net-lin', net='vgg', model_path=None, use_gpu=True, **kwargs): - super().__init__(*args, **kwargs) - self.score_impl = PerceptualLoss(model=model, net=net, model_path=model_path, - use_gpu=use_gpu, spatial=False).eval() - - def calc_score(self, pred_batch, target_batch, mask): - return self.score_impl(pred_batch, target_batch).flatten().detach().cpu().numpy() - - -def calculade_fid_no_img(img_i, activations_pred, activations_target, eps=1e-6): - activations_pred = activations_pred.copy() - activations_pred[img_i] = activations_target[img_i] - return calculate_frechet_distance(activations_pred, activations_target, eps=eps) - - -class SegmentationAwareFID(SegmentationAwarePairwiseScore): - def __init__(self, *args, dims=2048, eps=1e-6, n_jobs=-1, **kwargs): - super().__init__(*args, **kwargs) - if getattr(FIDScore, '_MODEL', None) is None: - block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims] - FIDScore._MODEL = InceptionV3([block_idx]).eval() - self.model = FIDScore._MODEL - self.eps = eps - self.n_jobs = n_jobs - - def calc_score(self, pred_batch, target_batch, mask): - activations_pred = self._get_activations(pred_batch) - activations_target = self._get_activations(target_batch) - return activations_pred, activations_target - - def get_value(self, groups=None, states=None): - """ - :param groups: - :return: - total_results: dict of kind {'mean': score mean, 'std': score std} - group_results: None, if groups is None; - else dict {group_idx: {'mean': score mean among group, 'std': score std among group}} - """ - if states is not None: - (target_class_freq_by_image_total, - target_class_freq_by_image_mask, - pred_class_freq_by_image_mask, - activation_pairs) = states - else: - target_class_freq_by_image_total = self.target_class_freq_by_image_total - target_class_freq_by_image_mask = self.target_class_freq_by_image_mask - pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask - activation_pairs = self.individual_values - - target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0) - target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0) - pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0) - activations_pred, activations_target = zip(*activation_pairs) - activations_pred = np.concatenate(activations_pred, axis=0) - activations_target = np.concatenate(activations_target, 
axis=0) - - total_results = { - 'mean': calculate_frechet_distance(activations_pred, activations_target, eps=self.eps), - 'std': 0, - **self.distribute_fid_to_classes(target_class_freq_by_image_mask, activations_pred, activations_target) - } - - if groups is None: - return total_results, None - - group_results = dict() - grouping = get_groupings(groups) - for label, index in grouping.items(): - if len(index) > 1: - group_activations_pred = activations_pred[index] - group_activations_target = activations_target[index] - group_class_freq = target_class_freq_by_image_mask[index] - group_results[label] = { - 'mean': calculate_frechet_distance(group_activations_pred, group_activations_target, eps=self.eps), - 'std': 0, - **self.distribute_fid_to_classes(group_class_freq, - group_activations_pred, - group_activations_target) - } - else: - group_results[label] = dict(mean=float('nan'), std=0) - return total_results, group_results - - def distribute_fid_to_classes(self, class_freq, activations_pred, activations_target): - real_fid = calculate_frechet_distance(activations_pred, activations_target, eps=self.eps) - - fid_no_images = Parallel(n_jobs=self.n_jobs)( - delayed(calculade_fid_no_img)(img_i, activations_pred, activations_target, eps=self.eps) - for img_i in range(activations_pred.shape[0]) - ) - errors = real_fid - fid_no_images - return distribute_values_to_classes(class_freq, errors, self.segm_idx2name) - - def _get_activations(self, batch): - activations = self.model(batch)[0] - if activations.shape[2] != 1 or activations.shape[3] != 1: - activations = F.adaptive_avg_pool2d(activations, output_size=(1, 1)) - activations = activations.squeeze(-1).squeeze(-1).detach().cpu().numpy() - return activations diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/BlockTitle-900cfd93.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/BlockTitle-900cfd93.js deleted file mode 100644 index a7e315a282971c319907772075c78fa74c50f635..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/BlockTitle-900cfd93.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as d,i as h,s as g,V as k,G as w,H as $,f as I,C as q,M as _,g as m,X as B,Y as C,Z as S,p as r,l as j,t as c,o as G,q as p,e as H,m as K,n as M,I as N,K as T}from"./index-7c0e54a6.js";import{I as V}from"./Info-3b2d34d7.js";import"./Button-661a0701.js";function b(f){let e,n;return e=new V({props:{$$slots:{default:[X]},$$scope:{ctx:f}}}),{c(){H(e.$$.fragment)},m(l,o){K(e,l,o),n=!0},p(l,o){const u={};o&10&&(u.$$scope={dirty:o,ctx:l}),e.$set(u)},i(l){n||(r(e.$$.fragment,l),n=!0)},o(l){c(e.$$.fragment,l),n=!1},d(l){M(e,l)}}}function X(f){let e;return{c(){e=N(f[1])},m(n,l){m(n,e,l)},p(n,l){l&2&&T(e,n[1])},d(n){n&&p(e)}}}function Y(f){let e,n,l,o;const u=f[2].default,a=k(u,f,f[3],null);let 
s=f[1]&&b(f);return{c(){e=w("span"),a&&a.c(),n=$(),s&&s.c(),l=I(),q(e,"class","svelte-1gfkn6j"),_(e,"sr-only",!f[0]),_(e,"hide",!f[0]),_(e,"has-info",f[1]!=null)},m(t,i){m(t,e,i),a&&a.m(e,null),m(t,n,i),s&&s.m(t,i),m(t,l,i),o=!0},p(t,[i]){a&&a.p&&(!o||i&8)&&B(a,u,t,t[3],o?S(u,t[3],i,null):C(t[3]),null),(!o||i&1)&&_(e,"sr-only",!t[0]),(!o||i&1)&&_(e,"hide",!t[0]),(!o||i&2)&&_(e,"has-info",t[1]!=null),t[1]?s?(s.p(t,i),i&2&&r(s,1)):(s=b(t),s.c(),r(s,1),s.m(l.parentNode,l)):s&&(j(),c(s,1,1,()=>{s=null}),G())},i(t){o||(r(a,t),r(s),o=!0)},o(t){c(a,t),c(s),o=!1},d(t){t&&p(e),a&&a.d(t),t&&p(n),s&&s.d(t),t&&p(l)}}}function Z(f,e,n){let{$$slots:l={},$$scope:o}=e,{show_label:u=!0}=e,{info:a=void 0}=e;return f.$$set=s=>{"show_label"in s&&n(0,u=s.show_label),"info"in s&&n(1,a=s.info),"$$scope"in s&&n(3,o=s.$$scope)},[u,a,l,o]}class D extends d{constructor(e){super(),h(this,e,Z,Y,g,{show_label:0,info:1})}}export{D as B}; -//# sourceMappingURL=BlockTitle-900cfd93.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/sphinxext/plot_directive.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/sphinxext/plot_directive.py deleted file mode 100644 index c942085e215902ea4a550bcc91b5c3f4b1c3dbd6..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/sphinxext/plot_directive.py +++ /dev/null @@ -1,826 +0,0 @@ -""" -A directive for including a Matplotlib plot in a Sphinx document -================================================================ - -This is a Sphinx extension providing a reStructuredText directive -``.. plot::`` for including a plot in a Sphinx document. - -In HTML output, ``.. plot::`` will include a .png file with a link -to a high-res .png and .pdf. In LaTeX output, it will include a .pdf. - -The plot content may be defined in one of three ways: - -1. **A path to a source file** as the argument to the directive:: - - .. plot:: path/to/plot.py - - When a path to a source file is given, the content of the - directive may optionally contain a caption for the plot:: - - .. plot:: path/to/plot.py - - The plot caption. - - Additionally, one may specify the name of a function to call (with - no arguments) immediately after importing the module:: - - .. plot:: path/to/plot.py plot_function1 - -2. Included as **inline content** to the directive:: - - .. plot:: - - import matplotlib.pyplot as plt - plt.plot([1, 2, 3], [4, 5, 6]) - plt.title("A plotting exammple") - -3. Using **doctest** syntax:: - - .. plot:: - - A plotting example: - >>> import matplotlib.pyplot as plt - >>> plt.plot([1, 2, 3], [4, 5, 6]) - -Options -------- - -The ``.. plot::`` directive supports the following options: - - ``:format:`` : {'python', 'doctest'} - The format of the input. If unset, the format is auto-detected. - - ``:include-source:`` : bool - Whether to display the source code. The default can be changed using - the ``plot_include_source`` variable in :file:`conf.py` (which itself - defaults to False). - - ``:show-source-link:`` : bool - Whether to show a link to the source in HTML. The default can be - changed using the ``plot_html_show_source_link`` variable in - :file:`conf.py` (which itself defaults to True). - - ``:context:`` : bool or str - If provided, the code will be run in the context of all previous plot - directives for which the ``:context:`` option was specified. This only - applies to inline code plot directives, not those run from files. 
If - the ``:context: reset`` option is specified, the context is reset - for this and future plots, and previous figures are closed prior to - running the code. ``:context: close-figs`` keeps the context but closes - previous figures before running the code. - - ``:nofigs:`` : bool - If specified, the code block will be run, but no figures will be - inserted. This is usually useful with the ``:context:`` option. - - ``:caption:`` : str - If specified, the option's argument will be used as a caption for the - figure. This overwrites the caption given in the content, when the plot - is generated from a file. - -Additionally, this directive supports all the options of the `image directive -`_, -except for ``:target:`` (since plot will add its own target). These include -``:alt:``, ``:height:``, ``:width:``, ``:scale:``, ``:align:`` and ``:class:``. - -Configuration options ---------------------- - -The plot directive has the following configuration options: - - plot_include_source - Default value for the include-source option (default: False). - - plot_html_show_source_link - Whether to show a link to the source in HTML (default: True). - - plot_pre_code - Code that should be executed before each plot. If None (the default), - it will default to a string containing:: - - import numpy as np - from matplotlib import pyplot as plt - - plot_basedir - Base directory, to which ``plot::`` file names are relative to. - If None or empty (the default), file names are relative to the - directory where the file containing the directive is. - - plot_formats - File formats to generate (default: ['png', 'hires.png', 'pdf']). - List of tuples or strings:: - - [(suffix, dpi), suffix, ...] - - that determine the file format and the DPI. For entries whose - DPI was omitted, sensible defaults are chosen. When passing from - the command line through sphinx_build the list should be passed as - suffix:dpi,suffix:dpi, ... - - plot_html_show_formats - Whether to show links to the files in HTML (default: True). - - plot_rcparams - A dictionary containing any non-standard rcParams that should - be applied before each plot (default: {}). - - plot_apply_rcparams - By default, rcParams are applied when ``:context:`` option is not used - in a plot directive. If set, this configuration option overrides this - behavior and applies rcParams before each plot. - - plot_working_directory - By default, the working directory will be changed to the directory of - the example, so the code can get at its data files, if any. Also its - path will be added to `sys.path` so it can import any helper modules - sitting beside it. This configuration option can be used to specify - a central directory (also added to `sys.path`) where data files and - helper modules for all code are located. - - plot_template - Provide a customized template for preparing restructured text. -""" - -import contextlib -import doctest -from io import StringIO -import itertools -import os -from os.path import relpath -from pathlib import Path -import re -import shutil -import sys -import textwrap -import traceback - -from docutils.parsers.rst import directives, Directive -from docutils.parsers.rst.directives.images import Image -import jinja2 # Sphinx dependency. 
- -import matplotlib -from matplotlib.backend_bases import FigureManagerBase -import matplotlib.pyplot as plt -from matplotlib import _pylab_helpers, cbook - -matplotlib.use("agg") - -__version__ = 2 - - -# ----------------------------------------------------------------------------- -# Registration hook -# ----------------------------------------------------------------------------- - - -def _option_boolean(arg): - if not arg or not arg.strip(): - # no argument given, assume used as a flag - return True - elif arg.strip().lower() in ('no', '0', 'false'): - return False - elif arg.strip().lower() in ('yes', '1', 'true'): - return True - else: - raise ValueError(f'{arg!r} unknown boolean') - - -def _option_context(arg): - if arg in [None, 'reset', 'close-figs']: - return arg - raise ValueError("Argument should be None or 'reset' or 'close-figs'") - - -def _option_format(arg): - return directives.choice(arg, ('python', 'doctest')) - - -def mark_plot_labels(app, document): - """ - To make plots referenceable, we need to move the reference from the - "htmlonly" (or "latexonly") node to the actual figure node itself. - """ - for name, explicit in document.nametypes.items(): - if not explicit: - continue - labelid = document.nameids[name] - if labelid is None: - continue - node = document.ids[labelid] - if node.tagname in ('html_only', 'latex_only'): - for n in node: - if n.tagname == 'figure': - sectname = name - for c in n: - if c.tagname == 'caption': - sectname = c.astext() - break - - node['ids'].remove(labelid) - node['names'].remove(name) - n['ids'].append(labelid) - n['names'].append(name) - document.settings.env.labels[name] = \ - document.settings.env.docname, labelid, sectname - break - - -class PlotDirective(Directive): - """The ``.. plot::`` directive, as documented in the module's docstring.""" - - has_content = True - required_arguments = 0 - optional_arguments = 2 - final_argument_whitespace = False - option_spec = { - 'alt': directives.unchanged, - 'height': directives.length_or_unitless, - 'width': directives.length_or_percentage_or_unitless, - 'scale': directives.nonnegative_int, - 'align': Image.align, - 'class': directives.class_option, - 'include-source': _option_boolean, - 'show-source-link': _option_boolean, - 'format': _option_format, - 'context': _option_context, - 'nofigs': directives.flag, - 'caption': directives.unchanged, - } - - def run(self): - """Run the plot directive.""" - try: - return run(self.arguments, self.content, self.options, - self.state_machine, self.state, self.lineno) - except Exception as e: - raise self.error(str(e)) - - -def _copy_css_file(app, exc): - if exc is None and app.builder.format == 'html': - src = cbook._get_data_path('plot_directive/plot_directive.css') - dst = app.outdir / Path('_static') - dst.mkdir(exist_ok=True) - # Use copyfile because we do not want to copy src's permissions. 
- shutil.copyfile(src, dst / Path('plot_directive.css')) - - -def setup(app): - setup.app = app - setup.config = app.config - setup.confdir = app.confdir - app.add_directive('plot', PlotDirective) - app.add_config_value('plot_pre_code', None, True) - app.add_config_value('plot_include_source', False, True) - app.add_config_value('plot_html_show_source_link', True, True) - app.add_config_value('plot_formats', ['png', 'hires.png', 'pdf'], True) - app.add_config_value('plot_basedir', None, True) - app.add_config_value('plot_html_show_formats', True, True) - app.add_config_value('plot_rcparams', {}, True) - app.add_config_value('plot_apply_rcparams', False, True) - app.add_config_value('plot_working_directory', None, True) - app.add_config_value('plot_template', None, True) - app.connect('doctree-read', mark_plot_labels) - app.add_css_file('plot_directive.css') - app.connect('build-finished', _copy_css_file) - metadata = {'parallel_read_safe': True, 'parallel_write_safe': True, - 'version': matplotlib.__version__} - return metadata - - -# ----------------------------------------------------------------------------- -# Doctest handling -# ----------------------------------------------------------------------------- - - -def contains_doctest(text): - try: - # check if it's valid Python as-is - compile(text, '', 'exec') - return False - except SyntaxError: - pass - r = re.compile(r'^\s*>>>', re.M) - m = r.search(text) - return bool(m) - - -def _split_code_at_show(text, function_name): - """Split code at plt.show().""" - - is_doctest = contains_doctest(text) - if function_name is None: - parts = [] - part = [] - for line in text.split("\n"): - if ((not is_doctest and line.startswith('plt.show(')) or - (is_doctest and line.strip() == '>>> plt.show()')): - part.append(line) - parts.append("\n".join(part)) - part = [] - else: - part.append(line) - if "\n".join(part).strip(): - parts.append("\n".join(part)) - else: - parts = [text] - return is_doctest, parts - - -# ----------------------------------------------------------------------------- -# Template -# ----------------------------------------------------------------------------- - -TEMPLATE = """ -{{ source_code }} - -.. only:: html - - {% if src_name or (html_show_formats and not multi_image) %} - ( - {%- if src_name -%} - :download:`Source code <{{ build_dir }}/{{ src_name }}>` - {%- endif -%} - {%- if html_show_formats and not multi_image -%} - {%- for img in images -%} - {%- for fmt in img.formats -%} - {%- if src_name or not loop.first -%}, {% endif -%} - :download:`{{ fmt }} <{{ build_dir }}/{{ img.basename }}.{{ fmt }}>` - {%- endfor -%} - {%- endfor -%} - {%- endif -%} - ) - {% endif %} - - {% for img in images %} - .. figure:: {{ build_dir }}/{{ img.basename }}.{{ default_fmt }} - {% for option in options -%} - {{ option }} - {% endfor %} - - {% if html_show_formats and multi_image -%} - ( - {%- for fmt in img.formats -%} - {%- if not loop.first -%}, {% endif -%} - :download:`{{ fmt }} <{{ build_dir }}/{{ img.basename }}.{{ fmt }}>` - {%- endfor -%} - ) - {%- endif -%} - - {{ caption }} {# appropriate leading whitespace added beforehand #} - {% endfor %} - -.. only:: not html - - {% for img in images %} - .. figure:: {{ build_dir }}/{{ img.basename }}.* - {% for option in options -%} - {{ option }} - {% endfor -%} - - {{ caption }} {# appropriate leading whitespace added beforehand #} - {% endfor %} - -""" - -exception_template = """ -.. only:: html - - [`source code <%(linkdir)s/%(basename)s.py>`__] - -Exception occurred rendering plot. 
- -""" - -# the context of the plot for all directives specified with the -# :context: option -plot_context = dict() - - -class ImageFile: - def __init__(self, basename, dirname): - self.basename = basename - self.dirname = dirname - self.formats = [] - - def filename(self, format): - return os.path.join(self.dirname, "%s.%s" % (self.basename, format)) - - def filenames(self): - return [self.filename(fmt) for fmt in self.formats] - - -def out_of_date(original, derived, includes=None): - """ - Return whether *derived* is out-of-date relative to *original* or any of - the RST files included in it using the RST include directive (*includes*). - *derived* and *original* are full paths, and *includes* is optionally a - list of full paths which may have been included in the *original*. - """ - if not os.path.exists(derived): - return True - - if includes is None: - includes = [] - files_to_check = [original, *includes] - - def out_of_date_one(original, derived_mtime): - return (os.path.exists(original) and - derived_mtime < os.stat(original).st_mtime) - - derived_mtime = os.stat(derived).st_mtime - return any(out_of_date_one(f, derived_mtime) for f in files_to_check) - - -class PlotError(RuntimeError): - pass - - -def _run_code(code, code_path, ns=None, function_name=None): - """ - Import a Python module from a path, and run the function given by - name, if function_name is not None. - """ - - # Change the working directory to the directory of the example, so - # it can get at its data files, if any. Add its path to sys.path - # so it can import any helper modules sitting beside it. - pwd = os.getcwd() - if setup.config.plot_working_directory is not None: - try: - os.chdir(setup.config.plot_working_directory) - except OSError as err: - raise OSError(f'{err}\n`plot_working_directory` option in ' - f'Sphinx configuration file must be a valid ' - f'directory path') from err - except TypeError as err: - raise TypeError(f'{err}\n`plot_working_directory` option in ' - f'Sphinx configuration file must be a string or ' - f'None') from err - elif code_path is not None: - dirname = os.path.abspath(os.path.dirname(code_path)) - os.chdir(dirname) - - with cbook._setattr_cm( - sys, argv=[code_path], path=[os.getcwd(), *sys.path]), \ - contextlib.redirect_stdout(StringIO()): - try: - if ns is None: - ns = {} - if not ns: - if setup.config.plot_pre_code is None: - exec('import numpy as np\n' - 'from matplotlib import pyplot as plt\n', ns) - else: - exec(str(setup.config.plot_pre_code), ns) - if "__main__" in code: - ns['__name__'] = '__main__' - - # Patch out non-interactive show() to avoid triggering a warning. 
- with cbook._setattr_cm(FigureManagerBase, show=lambda self: None): - exec(code, ns) - if function_name is not None: - exec(function_name + "()", ns) - - except (Exception, SystemExit) as err: - raise PlotError(traceback.format_exc()) from err - finally: - os.chdir(pwd) - return ns - - -def clear_state(plot_rcparams, close=True): - if close: - plt.close('all') - matplotlib.rc_file_defaults() - matplotlib.rcParams.update(plot_rcparams) - - -def get_plot_formats(config): - default_dpi = {'png': 80, 'hires.png': 200, 'pdf': 200} - formats = [] - plot_formats = config.plot_formats - for fmt in plot_formats: - if isinstance(fmt, str): - if ':' in fmt: - suffix, dpi = fmt.split(':') - formats.append((str(suffix), int(dpi))) - else: - formats.append((fmt, default_dpi.get(fmt, 80))) - elif isinstance(fmt, (tuple, list)) and len(fmt) == 2: - formats.append((str(fmt[0]), int(fmt[1]))) - else: - raise PlotError('invalid image format "%r" in plot_formats' % fmt) - return formats - - -def render_figures(code, code_path, output_dir, output_base, context, - function_name, config, context_reset=False, - close_figs=False, - code_includes=None): - """ - Run a pyplot script and save the images in *output_dir*. - - Save the images under *output_dir* with file names derived from - *output_base* - """ - if function_name is not None: - output_base = f'{output_base}_{function_name}' - formats = get_plot_formats(config) - - # Try to determine if all images already exist - - is_doctest, code_pieces = _split_code_at_show(code, function_name) - - # Look for single-figure output files first - img = ImageFile(output_base, output_dir) - for format, dpi in formats: - if context or out_of_date(code_path, img.filename(format), - includes=code_includes): - all_exists = False - break - img.formats.append(format) - else: - all_exists = True - - if all_exists: - return [(code, [img])] - - # Then look for multi-figure output files - results = [] - for i, code_piece in enumerate(code_pieces): - images = [] - for j in itertools.count(): - if len(code_pieces) > 1: - img = ImageFile('%s_%02d_%02d' % (output_base, i, j), - output_dir) - else: - img = ImageFile('%s_%02d' % (output_base, j), output_dir) - for fmt, dpi in formats: - if context or out_of_date(code_path, img.filename(fmt), - includes=code_includes): - all_exists = False - break - img.formats.append(fmt) - - # assume that if we have one, we have them all - if not all_exists: - all_exists = (j > 0) - break - images.append(img) - if not all_exists: - break - results.append((code_piece, images)) - else: - all_exists = True - - if all_exists: - return results - - # We didn't find the files, so build them - - results = [] - ns = plot_context if context else {} - - if context_reset: - clear_state(config.plot_rcparams) - plot_context.clear() - - close_figs = not context or close_figs - - for i, code_piece in enumerate(code_pieces): - - if not context or config.plot_apply_rcparams: - clear_state(config.plot_rcparams, close_figs) - elif close_figs: - plt.close('all') - - _run_code(doctest.script_from_examples(code_piece) if is_doctest - else code_piece, - code_path, ns, function_name) - - images = [] - fig_managers = _pylab_helpers.Gcf.get_all_fig_managers() - for j, figman in enumerate(fig_managers): - if len(fig_managers) == 1 and len(code_pieces) == 1: - img = ImageFile(output_base, output_dir) - elif len(code_pieces) == 1: - img = ImageFile("%s_%02d" % (output_base, j), output_dir) - else: - img = ImageFile("%s_%02d_%02d" % (output_base, i, j), - output_dir) - 
images.append(img) - for fmt, dpi in formats: - try: - figman.canvas.figure.savefig(img.filename(fmt), dpi=dpi) - except Exception as err: - raise PlotError(traceback.format_exc()) from err - img.formats.append(fmt) - - results.append((code_piece, images)) - - if not context or config.plot_apply_rcparams: - clear_state(config.plot_rcparams, close=not context) - - return results - - -def run(arguments, content, options, state_machine, state, lineno): - document = state_machine.document - config = document.settings.env.config - nofigs = 'nofigs' in options - - formats = get_plot_formats(config) - default_fmt = formats[0][0] - - options.setdefault('include-source', config.plot_include_source) - options.setdefault('show-source-link', config.plot_html_show_source_link) - if 'class' in options: - # classes are parsed into a list of string, and output by simply - # printing the list, abusing the fact that RST guarantees to strip - # non-conforming characters - options['class'] = ['plot-directive'] + options['class'] - else: - options.setdefault('class', ['plot-directive']) - keep_context = 'context' in options - context_opt = None if not keep_context else options['context'] - - rst_file = document.attributes['source'] - rst_dir = os.path.dirname(rst_file) - - if len(arguments): - if not config.plot_basedir: - source_file_name = os.path.join(setup.app.builder.srcdir, - directives.uri(arguments[0])) - else: - source_file_name = os.path.join(setup.confdir, config.plot_basedir, - directives.uri(arguments[0])) - - # If there is content, it will be passed as a caption. - caption = '\n'.join(content) - - # Enforce unambiguous use of captions. - if "caption" in options: - if caption: - raise ValueError( - 'Caption specified in both content and options.' - ' Please remove ambiguity.' - ) - # Use caption option - caption = options["caption"] - - # If the optional function name is provided, use it - if len(arguments) == 2: - function_name = arguments[1] - else: - function_name = None - - code = Path(source_file_name).read_text(encoding='utf-8') - output_base = os.path.basename(source_file_name) - else: - source_file_name = rst_file - code = textwrap.dedent("\n".join(map(str, content))) - counter = document.attributes.get('_plot_counter', 0) + 1 - document.attributes['_plot_counter'] = counter - base, ext = os.path.splitext(os.path.basename(source_file_name)) - output_base = '%s-%d.py' % (base, counter) - function_name = None - caption = options.get('caption', '') - - base, source_ext = os.path.splitext(output_base) - if source_ext in ('.py', '.rst', '.txt'): - output_base = base - else: - source_ext = '' - - # ensure that LaTeX includegraphics doesn't choke in foo.bar.pdf filenames - output_base = output_base.replace('.', '-') - - # is it in doctest format? - is_doctest = contains_doctest(code) - if 'format' in options: - if options['format'] == 'python': - is_doctest = False - else: - is_doctest = True - - # determine output directory name fragment - source_rel_name = relpath(source_file_name, setup.confdir) - source_rel_dir = os.path.dirname(source_rel_name).lstrip(os.path.sep) - - # build_dir: where to place output files (temporarily) - build_dir = os.path.join(os.path.dirname(setup.app.doctreedir), - 'plot_directive', - source_rel_dir) - # get rid of .. in paths, also changes pathsep - # see note in Python docs for warning about symbolic links on Windows. 
- # need to compare source and dest paths at end - build_dir = os.path.normpath(build_dir) - os.makedirs(build_dir, exist_ok=True) - - # how to link to files from the RST file - try: - build_dir_link = relpath(build_dir, rst_dir).replace(os.path.sep, '/') - except ValueError: - # on Windows, relpath raises ValueError when path and start are on - # different mounts/drives - build_dir_link = build_dir - - # get list of included rst files so that the output is updated when any - # plots in the included files change. These attributes are modified by the - # include directive (see the docutils.parsers.rst.directives.misc module). - try: - source_file_includes = [os.path.join(os.getcwd(), t[0]) - for t in state.document.include_log] - except AttributeError: - # the document.include_log attribute only exists in docutils >=0.17, - # before that we need to inspect the state machine - possible_sources = {os.path.join(setup.confdir, t[0]) - for t in state_machine.input_lines.items} - source_file_includes = [f for f in possible_sources - if os.path.isfile(f)] - # remove the source file itself from the includes - try: - source_file_includes.remove(source_file_name) - except ValueError: - pass - - # save script (if necessary) - if options['show-source-link']: - Path(build_dir, output_base + source_ext).write_text( - doctest.script_from_examples(code) - if source_file_name == rst_file and is_doctest - else code, - encoding='utf-8') - - # make figures - try: - results = render_figures(code=code, - code_path=source_file_name, - output_dir=build_dir, - output_base=output_base, - context=keep_context, - function_name=function_name, - config=config, - context_reset=context_opt == 'reset', - close_figs=context_opt == 'close-figs', - code_includes=source_file_includes) - errors = [] - except PlotError as err: - reporter = state.memo.reporter - sm = reporter.system_message( - 2, "Exception occurred in plotting {}\n from {}:\n{}".format( - output_base, source_file_name, err), - line=lineno) - results = [(code, [])] - errors = [sm] - - # Properly indent the caption - caption = '\n' + '\n'.join(' ' + line.strip() - for line in caption.split('\n')) - - # generate output restructuredtext - total_lines = [] - for j, (code_piece, images) in enumerate(results): - if options['include-source']: - if is_doctest: - lines = ['', *code_piece.splitlines()] - else: - lines = ['.. 
code-block:: python', '', - *textwrap.indent(code_piece, ' ').splitlines()] - source_code = "\n".join(lines) - else: - source_code = "" - - if nofigs: - images = [] - - opts = [ - ':%s: %s' % (key, val) for key, val in options.items() - if key in ('alt', 'height', 'width', 'scale', 'align', 'class')] - - # Not-None src_name signals the need for a source download in the - # generated html - if j == 0 and options['show-source-link']: - src_name = output_base + source_ext - else: - src_name = None - - result = jinja2.Template(config.plot_template or TEMPLATE).render( - default_fmt=default_fmt, - build_dir=build_dir_link, - src_name=src_name, - multi_image=len(images) > 1, - options=opts, - images=images, - source_code=source_code, - html_show_formats=config.plot_html_show_formats and len(images), - caption=caption) - - total_lines.extend(result.split("\n")) - total_lines.extend("\n") - - if total_lines: - state_machine.insert_input(total_lines, source=source_file_name) - - return errors diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/data/dataset_video_test.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/data/dataset_video_test.py deleted file mode 100644 index e361441331bbae465b9e1b51f2abe39dd54f5a2f..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/data/dataset_video_test.py +++ /dev/null @@ -1,382 +0,0 @@ -import glob -import torch -from os import path as osp -import torch.utils.data as data - -import utils.utils_video as utils_video - - -class VideoRecurrentTestDataset(data.Dataset): - """Video test dataset for recurrent architectures, which takes LR video - frames as input and output corresponding HR video frames. Modified from - https://github.com/xinntao/BasicSR/blob/master/basicsr/data/reds_dataset.py - - Supported datasets: Vid4, REDS4, REDSofficial. - More generally, it supports testing dataset with following structures: - - dataroot - ├── subfolder1 - ├── frame000 - ├── frame001 - ├── ... - ├── subfolder1 - ├── frame000 - ├── frame001 - ├── ... - ├── ... - - For testing datasets, there is no need to prepare LMDB files. - - Args: - opt (dict): Config for train dataset. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - io_backend (dict): IO backend type and other kwarg. - cache_data (bool): Whether to cache testing datasets. - name (str): Dataset name. - meta_info_file (str): The path to the file storing the list of test - folders. If not provided, all the folders in the dataroot will - be used. - num_frame (int): Window size for input frames. - padding (str): Padding mode. 
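-
-    A minimal example ``opt`` dict (illustrative only; the paths and the exact
-    values below are hypothetical, not taken from this file):
-
-        opt = {
-            'dataroot_gt': 'testsets/Vid4/GT',
-            'dataroot_lq': 'testsets/Vid4/BIx4',
-            'cache_data': True,
-            'num_frame': 5,   # used here only to mark border frames
-        }
-        dataset = VideoRecurrentTestDataset(opt)
-        sample = dataset[0]   # dict with keys 'L', 'H', 'folder', 'lq_path'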
- """ - - def __init__(self, opt): - super(VideoRecurrentTestDataset, self).__init__() - self.opt = opt - self.cache_data = opt['cache_data'] - self.gt_root, self.lq_root = opt['dataroot_gt'], opt['dataroot_lq'] - self.data_info = {'lq_path': [], 'gt_path': [], 'folder': [], 'idx': [], 'border': []} - - self.imgs_lq, self.imgs_gt = {}, {} - if 'meta_info_file' in opt: - with open(opt['meta_info_file'], 'r') as fin: - subfolders = [line.split(' ')[0] for line in fin] - subfolders_lq = [osp.join(self.lq_root, key) for key in subfolders] - subfolders_gt = [osp.join(self.gt_root, key) for key in subfolders] - else: - subfolders_lq = sorted(glob.glob(osp.join(self.lq_root, '*'))) - subfolders_gt = sorted(glob.glob(osp.join(self.gt_root, '*'))) - - for subfolder_lq, subfolder_gt in zip(subfolders_lq, subfolders_gt): - # get frame list for lq and gt - subfolder_name = osp.basename(subfolder_lq) - img_paths_lq = sorted(list(utils_video.scandir(subfolder_lq, full_path=True))) - img_paths_gt = sorted(list(utils_video.scandir(subfolder_gt, full_path=True))) - - max_idx = len(img_paths_lq) - assert max_idx == len(img_paths_gt), (f'Different number of images in lq ({max_idx})' - f' and gt folders ({len(img_paths_gt)})') - - self.data_info['lq_path'].extend(img_paths_lq) - self.data_info['gt_path'].extend(img_paths_gt) - self.data_info['folder'].extend([subfolder_name] * max_idx) - for i in range(max_idx): - self.data_info['idx'].append(f'{i}/{max_idx}') - border_l = [0] * max_idx - for i in range(self.opt['num_frame'] // 2): - border_l[i] = 1 - border_l[max_idx - i - 1] = 1 - self.data_info['border'].extend(border_l) - - # cache data or save the frame list - if self.cache_data: - print(f'Cache {subfolder_name} for VideoTestDataset...') - self.imgs_lq[subfolder_name] = utils_video.read_img_seq(img_paths_lq) - self.imgs_gt[subfolder_name] = utils_video.read_img_seq(img_paths_gt) - else: - self.imgs_lq[subfolder_name] = img_paths_lq - self.imgs_gt[subfolder_name] = img_paths_gt - - # Find unique folder strings - self.folders = sorted(list(set(self.data_info['folder']))) - self.sigma = opt['sigma'] / 255. if 'sigma' in opt else 0 # for non-blind video denoising - - def __getitem__(self, index): - folder = self.folders[index] - - if self.sigma: - # for non-blind video denoising - if self.cache_data: - imgs_gt = self.imgs_gt[folder] - else: - imgs_gt = utils_video.read_img_seq(self.imgs_gt[folder]) - - torch.manual_seed(0) - noise_level = torch.ones((1, 1, 1, 1)) * self.sigma - noise = torch.normal(mean=0, std=noise_level.expand_as(imgs_gt)) - imgs_lq = imgs_gt + noise - t, _, h, w = imgs_lq.shape - imgs_lq = torch.cat([imgs_lq, noise_level.expand(t, 1, h, w)], 1) - else: - # for video sr and deblurring - if self.cache_data: - imgs_lq = self.imgs_lq[folder] - imgs_gt = self.imgs_gt[folder] - else: - imgs_lq = utils_video.read_img_seq(self.imgs_lq[folder]) - imgs_gt = utils_video.read_img_seq(self.imgs_gt[folder]) - - return { - 'L': imgs_lq, - 'H': imgs_gt, - 'folder': folder, - 'lq_path': self.imgs_lq[folder], - } - - def __len__(self): - return len(self.folders) - - -class SingleVideoRecurrentTestDataset(data.Dataset): - """Single ideo test dataset for recurrent architectures, which takes LR video - frames as input and output corresponding HR video frames (only input LQ path). - - More generally, it supports testing dataset with following structures: - - dataroot - ├── subfolder1 - ├── frame000 - ├── frame001 - ├── ... - ├── subfolder1 - ├── frame000 - ├── frame001 - ├── ... - ├── ... 
- - For testing datasets, there is no need to prepare LMDB files. - - Args: - opt (dict): Config for train dataset. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - io_backend (dict): IO backend type and other kwarg. - cache_data (bool): Whether to cache testing datasets. - name (str): Dataset name. - meta_info_file (str): The path to the file storing the list of test - folders. If not provided, all the folders in the dataroot will - be used. - num_frame (int): Window size for input frames. - padding (str): Padding mode. - """ - - def __init__(self, opt): - super(SingleVideoRecurrentTestDataset, self).__init__() - self.opt = opt - self.cache_data = opt['cache_data'] - self.lq_root = opt['dataroot_lq'] - self.data_info = {'lq_path': [], 'folder': [], 'idx': [], 'border': []} - - self.imgs_lq = {} - if 'meta_info_file' in opt: - with open(opt['meta_info_file'], 'r') as fin: - subfolders = [line.split(' ')[0] for line in fin] - subfolders_lq = [osp.join(self.lq_root, key) for key in subfolders] - else: - subfolders_lq = sorted(glob.glob(osp.join(self.lq_root, '*'))) - - for subfolder_lq in subfolders_lq: - # get frame list for lq and gt - subfolder_name = osp.basename(subfolder_lq) - img_paths_lq = sorted(list(utils_video.scandir(subfolder_lq, full_path=True))) - - max_idx = len(img_paths_lq) - - self.data_info['lq_path'].extend(img_paths_lq) - self.data_info['folder'].extend([subfolder_name] * max_idx) - for i in range(max_idx): - self.data_info['idx'].append(f'{i}/{max_idx}') - border_l = [0] * max_idx - for i in range(self.opt['num_frame'] // 2): - border_l[i] = 1 - border_l[max_idx - i - 1] = 1 - self.data_info['border'].extend(border_l) - - # cache data or save the frame list - if self.cache_data: - print(f'Cache {subfolder_name} for VideoTestDataset...') - self.imgs_lq[subfolder_name] = utils_video.read_img_seq(img_paths_lq) - else: - self.imgs_lq[subfolder_name] = img_paths_lq - - # Find unique folder strings - self.folders = sorted(list(set(self.data_info['folder']))) - - def __getitem__(self, index): - folder = self.folders[index] - - if self.cache_data: - imgs_lq = self.imgs_lq[folder] - else: - imgs_lq = utils_video.read_img_seq(self.imgs_lq[folder]) - - return { - 'L': imgs_lq, - 'folder': folder, - 'lq_path': self.imgs_lq[folder], - } - - def __len__(self): - return len(self.folders) - - -class VideoTestVimeo90KDataset(data.Dataset): - """Video test dataset for Vimeo90k-Test dataset. - - It only keeps the center frame for testing. - For testing datasets, there is no need to prepare LMDB files. - - Args: - opt (dict): Config for train dataset. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - io_backend (dict): IO backend type and other kwarg. - cache_data (bool): Whether to cache testing datasets. - name (str): Dataset name. - meta_info_file (str): The path to the file storing the list of test - folders. If not provided, all the folders in the dataroot will - be used. - num_frame (int): Window size for input frames. - padding (str): Padding mode. 
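-
-    A minimal example ``opt`` dict (illustrative only; the file paths below are
-    hypothetical, not taken from this file):
-
-        opt = {
-            'dataroot_gt': 'testsets/vimeo90k/GT',
-            'dataroot_lq': 'testsets/vimeo90k/LQ',
-            'meta_info_file': 'testsets/vimeo90k/meta_info_test.txt',
-            'cache_data': False,   # caching is not implemented for this dataset
-            'num_frame': 7,        # selects im1 ... im7 around the center frame im4
-            'pad_sequence': False,
-        }
-        dataset = VideoTestVimeo90KDataset(opt)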
- """ - - def __init__(self, opt): - super(VideoTestVimeo90KDataset, self).__init__() - self.opt = opt - self.cache_data = opt['cache_data'] - if self.cache_data: - raise NotImplementedError('cache_data in Vimeo90K-Test dataset is not implemented.') - self.gt_root, self.lq_root = opt['dataroot_gt'], opt['dataroot_lq'] - self.data_info = {'lq_path': [], 'gt_path': [], 'folder': [], 'idx': [], 'border': []} - neighbor_list = [i + (9 - opt['num_frame']) // 2 for i in range(opt['num_frame'])] - - with open(opt['meta_info_file'], 'r') as fin: - subfolders = [line.split(' ')[0] for line in fin] - for idx, subfolder in enumerate(subfolders): - gt_path = osp.join(self.gt_root, subfolder, 'im4.png') - self.data_info['gt_path'].append(gt_path) - lq_paths = [osp.join(self.lq_root, subfolder, f'im{i}.png') for i in neighbor_list] - self.data_info['lq_path'].append(lq_paths) - self.data_info['folder'].append('vimeo90k') - self.data_info['idx'].append(f'{idx}/{len(subfolders)}') - self.data_info['border'].append(0) - - self.pad_sequence = opt.get('pad_sequence', False) - - def __getitem__(self, index): - lq_path = self.data_info['lq_path'][index] - gt_path = self.data_info['gt_path'][index] - imgs_lq = utils_video.read_img_seq(lq_path) - img_gt = utils_video.read_img_seq([gt_path]) - img_gt.squeeze_(0) - - if self.pad_sequence: # pad the sequence: 7 frames to 8 frames - imgs_lq = torch.cat([imgs_lq, imgs_lq[-1:,...]], dim=0) - - return { - 'L': imgs_lq, # (t, c, h, w) - 'H': img_gt, # (c, h, w) - 'folder': self.data_info['folder'][index], # folder name - 'idx': self.data_info['idx'][index], # e.g., 0/843 - 'border': self.data_info['border'][index], # 0 for non-border - 'lq_path': lq_path[self.opt['num_frame'] // 2] # center frame - } - - def __len__(self): - return len(self.data_info['gt_path']) - - -class SingleVideoRecurrentTestDataset(data.Dataset): - """Single Video test dataset (only input LQ path). - - Supported datasets: Vid4, REDS4, REDSofficial. - More generally, it supports testing dataset with following structures: - - dataroot - ├── subfolder1 - ├── frame000 - ├── frame001 - ├── ... - ├── subfolder1 - ├── frame000 - ├── frame001 - ├── ... - ├── ... - - For testing datasets, there is no need to prepare LMDB files. - - Args: - opt (dict): Config for train dataset. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - io_backend (dict): IO backend type and other kwarg. - cache_data (bool): Whether to cache testing datasets. - name (str): Dataset name. - meta_info_file (str): The path to the file storing the list of test - folders. If not provided, all the folders in the dataroot will - be used. - num_frame (int): Window size for input frames. - padding (str): Padding mode. 
- """ - - def __init__(self, opt): - super(SingleVideoRecurrentTestDataset, self).__init__() - self.opt = opt - self.cache_data = opt['cache_data'] - self.lq_root = opt['dataroot_lq'] - self.data_info = {'lq_path': [], 'folder': [], 'idx': [], 'border': []} - # file client (io backend) - self.file_client = None - - self.imgs_lq = {} - if 'meta_info_file' in opt: - with open(opt['meta_info_file'], 'r') as fin: - subfolders = [line.split(' ')[0] for line in fin] - subfolders_lq = [osp.join(self.lq_root, key) for key in subfolders] - else: - subfolders_lq = sorted(glob.glob(osp.join(self.lq_root, '*'))) - - for subfolder_lq in subfolders_lq: - # get frame list for lq and gt - subfolder_name = osp.basename(subfolder_lq) - img_paths_lq = sorted(list(utils_video.scandir(subfolder_lq, full_path=True))) - - max_idx = len(img_paths_lq) - - self.data_info['lq_path'].extend(img_paths_lq) - self.data_info['folder'].extend([subfolder_name] * max_idx) - for i in range(max_idx): - self.data_info['idx'].append(f'{i}/{max_idx}') - border_l = [0] * max_idx - for i in range(self.opt['num_frame'] // 2): - border_l[i] = 1 - border_l[max_idx - i - 1] = 1 - self.data_info['border'].extend(border_l) - - # cache data or save the frame list - if self.cache_data: - logger.info(f'Cache {subfolder_name} for VideoTestDataset...') - self.imgs_lq[subfolder_name] = utils_video.read_img_seq(img_paths_lq) - else: - self.imgs_lq[subfolder_name] = img_paths_lq - - # Find unique folder strings - self.folders = sorted(list(set(self.data_info['folder']))) - - def __getitem__(self, index): - folder = self.folders[index] - - if self.cache_data: - imgs_lq = self.imgs_lq[folder] - else: - imgs_lq = utils_video.read_img_seq(self.imgs_lq[folder]) - - return { - 'L': imgs_lq, - 'folder': folder, - 'lq_path': self.imgs_lq[folder], - } - - def __len__(self): - return len(self.folders) diff --git a/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/exllama_ext/cpu_func/rep_penalty.h b/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/exllama_ext/cpu_func/rep_penalty.h deleted file mode 100644 index 4f63b484704d9b9fff3240cf497235b1b0b2fc67..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/repositories/exllama/exllama_ext/cpu_func/rep_penalty.h +++ /dev/null @@ -1,30 +0,0 @@ -#ifndef _rep_penalty_h -#define _rep_penalty_h - -#include -#include - -void rep_penalty_cpu -( - const int vocab_size, - const uint64_t* sequence, - float* rep_mask, - const float penalty_max, - const int sustain, - const int decay, - const int seq_len -); - -void apply_rep_penalty_cpu -( - const int vocab_size, - const uint64_t* sequence, - const float penalty_max, - const int sustain, - const int decay, - const int seq_len, - float* logits -); - - -#endif diff --git a/spaces/leonelhs/remove-background/README.md b/spaces/leonelhs/remove-background/README.md deleted file mode 100644 index d0c5a9d7bc90428b552ccdb0be087df0367e6c3f..0000000000000000000000000000000000000000 --- a/spaces/leonelhs/remove-background/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Remove Background -emoji: 🌍 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/positionwise_feed_forward.py b/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/positionwise_feed_forward.py 
deleted file mode 100644 index 7a9237a38314e3f758f064ab78d8983b94a9eb0a..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg_extractor/encoder/positionwise_feed_forward.py +++ /dev/null @@ -1,31 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Positionwise feed forward layer definition.""" - -import torch - - -class PositionwiseFeedForward(torch.nn.Module): - """Positionwise feed forward layer. - - :param int idim: input dimenstion - :param int hidden_units: number of hidden units - :param float dropout_rate: dropout rate - - """ - - def __init__(self, idim, hidden_units, dropout_rate, activation=torch.nn.ReLU()): - """Construct an PositionwiseFeedForward object.""" - super(PositionwiseFeedForward, self).__init__() - self.w_1 = torch.nn.Linear(idim, hidden_units) - self.w_2 = torch.nn.Linear(hidden_units, idim) - self.dropout = torch.nn.Dropout(dropout_rate) - self.activation = activation - - def forward(self, x): - """Forward funciton.""" - return self.w_2(self.dropout(self.activation(self.w_1(x)))) diff --git a/spaces/lewiswu1209/MockingBird/train.py b/spaces/lewiswu1209/MockingBird/train.py deleted file mode 100644 index 5a6a06c805109159ff40cad69668f1fc38cf1e9b..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/train.py +++ /dev/null @@ -1,67 +0,0 @@ -import sys -import torch -import argparse -import numpy as np -from utils.load_yaml import HpsYaml -from ppg2mel.train.train_linglf02mel_seq2seq_oneshotvc import Solver - -# For reproducibility, comment these may speed up training -torch.backends.cudnn.deterministic = True -torch.backends.cudnn.benchmark = False - -def main(): - # Arguments - parser = argparse.ArgumentParser(description= - 'Training PPG2Mel VC model.') - parser.add_argument('--config', type=str, - help='Path to experiment config, e.g., config/vc.yaml') - parser.add_argument('--name', default=None, type=str, help='Name for logging.') - parser.add_argument('--logdir', default='log/', type=str, - help='Logging path.', required=False) - parser.add_argument('--ckpdir', default='ppg2mel/saved_models/', type=str, - help='Checkpoint path.', required=False) - parser.add_argument('--outdir', default='result/', type=str, - help='Decode output path.', required=False) - parser.add_argument('--load', default=None, type=str, - help='Load pre-trained model (for training only)', required=False) - parser.add_argument('--warm_start', action='store_true', - help='Load model weights only, ignore specified layers.') - parser.add_argument('--seed', default=0, type=int, - help='Random seed for reproducable results.', required=False) - parser.add_argument('--njobs', default=8, type=int, - help='Number of threads for dataloader/decoding.', required=False) - parser.add_argument('--cpu', action='store_true', help='Disable GPU training.') - parser.add_argument('--no-pin', action='store_true', - help='Disable pin-memory for dataloader') - parser.add_argument('--test', action='store_true', help='Test the model.') - parser.add_argument('--no-msg', action='store_true', help='Hide all messages.') - parser.add_argument('--finetune', action='store_true', help='Finetune model') - parser.add_argument('--oneshotvc', action='store_true', help='Oneshot VC model') - parser.add_argument('--bilstm', action='store_true', help='BiLSTM VC model') - parser.add_argument('--lsa', action='store_true', help='Use location-sensitive attention (LSA)') - - ### - - 
paras = parser.parse_args() - setattr(paras, 'gpu', not paras.cpu) - setattr(paras, 'pin_memory', not paras.no_pin) - setattr(paras, 'verbose', not paras.no_msg) - # Make the config dict dot visitable - config = HpsYaml(paras.config) - - np.random.seed(paras.seed) - torch.manual_seed(paras.seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(paras.seed) - - print(">>> OneShot VC training ...") - mode = "train" - solver = Solver(config, paras, mode) - solver.load_data() - solver.set_model() - solver.exec() - print(">>> Oneshot VC train finished!") - sys.exit(0) - -if __name__ == "__main__": - main() diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Datacard Preface [full Version] Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Datacard Preface [full Version] Download.md deleted file mode 100644 index 51657b4ff1337c7e7b7148b3db321db9e15f4f08..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Datacard Preface [full Version] Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Datacard preface [full version] download


        DOWNLOADhttps://bytlly.com/2uGwhC



        -
        -... crack, modscan32 tcp/ip connection . . buku ajar idai pdf 280 · Veer Zaara full movie download in hindi hd 1080p · Crackmodscan64 · Datacard Preface [full ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Ls Magazine Ls Models Ls Land Lsm Issue 24 Future School Added By Request ((EXCLUSIVE)).md b/spaces/lincquiQcaudo/Top-20-Diffusion/Ls Magazine Ls Models Ls Land Lsm Issue 24 Future School Added By Request ((EXCLUSIVE)).md deleted file mode 100644 index f3a522b7e2cce9e71ce5faebf1578e03938042e4..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Ls Magazine Ls Models Ls Land Lsm Issue 24 Future School Added By Request ((EXCLUSIVE)).md +++ /dev/null @@ -1,6 +0,0 @@ -

        Ls Magazine Ls Models Ls Land lsm issue 24 future school | added by request


        Download Ziphttps://bytlly.com/2uGx7e



        -
        -Ls Magazine Ls Models Ls Land lsm issue 24 future school | added by request. rialarraden's Ownd. フォロー. 2020.06.28 03:44. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/lithiumice/SadTalker/src/gradio_demo.py b/spaces/lithiumice/SadTalker/src/gradio_demo.py deleted file mode 100644 index e76e3fe679c833ff167e5b5acdbf37ce5820b130..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/gradio_demo.py +++ /dev/null @@ -1,137 +0,0 @@ -import torch, uuid -import os, sys, shutil -from src.utils.preprocess import CropAndExtract -from src.test_audio2coeff import Audio2Coeff -from src.facerender.animate import AnimateFromCoeff -from src.generate_batch import get_data -from src.generate_facerender_batch import get_facerender_data - -from pydub import AudioSegment - -def mp3_to_wav(mp3_filename,wav_filename,frame_rate): - mp3_file = AudioSegment.from_file(file=mp3_filename) - mp3_file.set_frame_rate(frame_rate).export(wav_filename,format="wav") - - -class SadTalker(): - - def __init__(self, checkpoint_path='checkpoints', config_path='src/config', lazy_load=False): - - if torch.cuda.is_available() : - device = "cuda" - else: - device = "cpu" - - self.device = device - - os.environ['TORCH_HOME']= checkpoint_path - - self.checkpoint_path = checkpoint_path - self.config_path = config_path - - self.path_of_lm_croper = os.path.join( checkpoint_path, 'shape_predictor_68_face_landmarks.dat') - self.path_of_net_recon_model = os.path.join( checkpoint_path, 'epoch_20.pth') - self.dir_of_BFM_fitting = os.path.join( checkpoint_path, 'BFM_Fitting') - self.wav2lip_checkpoint = os.path.join( checkpoint_path, 'wav2lip.pth') - - self.audio2pose_checkpoint = os.path.join( checkpoint_path, 'auido2pose_00140-model.pth') - self.audio2pose_yaml_path = os.path.join( config_path, 'auido2pose.yaml') - - self.audio2exp_checkpoint = os.path.join( checkpoint_path, 'auido2exp_00300-model.pth') - self.audio2exp_yaml_path = os.path.join( config_path, 'auido2exp.yaml') - - self.free_view_checkpoint = os.path.join( checkpoint_path, 'facevid2vid_00189-model.pth.tar') - - self.lazy_load = lazy_load - - if not self.lazy_load: - #init model - print(self.path_of_lm_croper) - self.preprocess_model = CropAndExtract(self.path_of_lm_croper, self.path_of_net_recon_model, self.dir_of_BFM_fitting, self.device) - - print(self.audio2pose_checkpoint) - self.audio_to_coeff = Audio2Coeff(self.audio2pose_checkpoint, self.audio2pose_yaml_path, - self.audio2exp_checkpoint, self.audio2exp_yaml_path, self.wav2lip_checkpoint, self.device) - - def test(self, source_image, driven_audio, preprocess='crop', still_mode=False, use_enhancer=False, result_dir='./results/'): - - ### crop: only model, - - if self.lazy_load: - #init model - print(self.path_of_lm_croper) - self.preprocess_model = CropAndExtract(self.path_of_lm_croper, self.path_of_net_recon_model, self.dir_of_BFM_fitting, self.device) - - print(self.audio2pose_checkpoint) - self.audio_to_coeff = Audio2Coeff(self.audio2pose_checkpoint, self.audio2pose_yaml_path, - self.audio2exp_checkpoint, self.audio2exp_yaml_path, self.wav2lip_checkpoint, self.device) - - if preprocess == 'full': - self.mapping_checkpoint = os.path.join(self.checkpoint_path, 'mapping_00109-model.pth.tar') - self.facerender_yaml_path = os.path.join(self.config_path, 'facerender_still.yaml') - else: - self.mapping_checkpoint = os.path.join(self.checkpoint_path, 'mapping_00229-model.pth.tar') - self.facerender_yaml_path = os.path.join(self.config_path, 'facerender.yaml') - - print(self.mapping_checkpoint) - print(self.free_view_checkpoint) - self.animate_from_coeff = AnimateFromCoeff(self.free_view_checkpoint, self.mapping_checkpoint, - 
self.facerender_yaml_path, self.device) - - time_tag = str(uuid.uuid4()) - save_dir = os.path.join(result_dir, time_tag) - os.makedirs(save_dir, exist_ok=True) - - input_dir = os.path.join(save_dir, 'input') - os.makedirs(input_dir, exist_ok=True) - - print(source_image) - pic_path = os.path.join(input_dir, os.path.basename(source_image)) - shutil.move(source_image, input_dir) - - if os.path.isfile(driven_audio): - audio_path = os.path.join(input_dir, os.path.basename(driven_audio)) - - #### mp3 to wav - if '.mp3' in audio_path: - mp3_to_wav(driven_audio, audio_path.replace('.mp3', '.wav'), 16000) - audio_path = audio_path.replace('.mp3', '.wav') - else: - shutil.move(driven_audio, input_dir) - else: - raise AttributeError("error audio") - - - os.makedirs(save_dir, exist_ok=True) - pose_style = 0 - #crop image and extract 3dmm from image - first_frame_dir = os.path.join(save_dir, 'first_frame_dir') - os.makedirs(first_frame_dir, exist_ok=True) - first_coeff_path, crop_pic_path, crop_info = self.preprocess_model.generate(pic_path, first_frame_dir,preprocess) - - if first_coeff_path is None: - raise AttributeError("No face is detected") - - #audio2ceoff - batch = get_data(first_coeff_path, audio_path, self.device, ref_eyeblink_coeff_path=None, still=still_mode) # longer audio? - coeff_path = self.audio_to_coeff.generate(batch, save_dir, pose_style) - #coeff2video - batch_size = 8 - data = get_facerender_data(coeff_path, crop_pic_path, first_coeff_path, audio_path, batch_size, still_mode=still_mode, preprocess=preprocess) - return_path = self.animate_from_coeff.generate(data, save_dir, pic_path, crop_info, enhancer='gfpgan' if use_enhancer else None, preprocess=preprocess) - video_name = data['video_name'] - print(f'The generated video is named {video_name} in {save_dir}') - - if self.lazy_load: - del self.preprocess_model - del self.audio_to_coeff - del self.animate_from_coeff - - if torch.cuda.is_available(): - torch.cuda.empty_cache() - torch.cuda.synchronize() - import gc; gc.collect() - - return return_path - - \ No newline at end of file diff --git a/spaces/lj1995/vocal2guitar/train/process_ckpt.py b/spaces/lj1995/vocal2guitar/train/process_ckpt.py deleted file mode 100644 index ac608aeb709e7821d30e920f69512511191b2cf1..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/train/process_ckpt.py +++ /dev/null @@ -1,215 +0,0 @@ -import torch, traceback, os, pdb, sys - -now_dir = os.getcwd() -sys.path.append(now_dir) -from collections import OrderedDict -from i18n import I18nAuto - -i18n = I18nAuto() - - -def savee(ckpt, sr, if_f0, name, epoch, version, hps): - try: - opt = OrderedDict() - opt["weight"] = {} - for key in ckpt.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = ckpt[key].half() - opt["config"] = [ - hps.data.filter_length // 2 + 1, - 32, - hps.model.inter_channels, - hps.model.hidden_channels, - hps.model.filter_channels, - hps.model.n_heads, - hps.model.n_layers, - hps.model.kernel_size, - hps.model.p_dropout, - hps.model.resblock, - hps.model.resblock_kernel_sizes, - hps.model.resblock_dilation_sizes, - hps.model.upsample_rates, - hps.model.upsample_initial_channel, - hps.model.upsample_kernel_sizes, - hps.model.spk_embed_dim, - hps.model.gin_channels, - hps.data.sampling_rate, - ] - opt["info"] = "%sepoch" % epoch - opt["sr"] = sr - opt["f0"] = if_f0 - opt["version"] = version - torch.save(opt, "weights/%s.pth" % name) - return "Success." 
- except: - return traceback.format_exc() - - -def show_info(path): - try: - a = torch.load(path, map_location="cpu") - return "模型信息:%s\n采样率:%s\n模型是否输入音高引导:%s\n版本:%s" % ( - a.get("info", "None"), - a.get("sr", "None"), - a.get("f0", "None"), - a.get("version", "None"), - ) - except: - return traceback.format_exc() - - -def extract_small_model(path, name, sr, if_f0, info, version): - try: - ckpt = torch.load(path, map_location="cpu") - if "model" in ckpt: - ckpt = ckpt["model"] - opt = OrderedDict() - opt["weight"] = {} - for key in ckpt.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = ckpt[key].half() - if sr == "40k": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 109, - 256, - 40000, - ] - elif sr == "48k": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 6, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 48000, - ] - elif sr == "32k": - opt["config"] = [ - 513, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 4, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 32000, - ] - if info == "": - info = "Extracted model." - opt["info"] = info - opt["version"] = version - opt["sr"] = sr - opt["f0"] = int(if_f0) - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() - - -def change_info(path, info, name): - try: - ckpt = torch.load(path, map_location="cpu") - ckpt["info"] = info - if name == "": - name = os.path.basename(path) - torch.save(ckpt, "weights/%s" % name) - return "Success." - except: - return traceback.format_exc() - - -def merge(path1, path2, alpha1, sr, f0, info, name, version): - try: - - def extract(ckpt): - a = ckpt["model"] - opt = OrderedDict() - opt["weight"] = {} - for key in a.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = a[key] - return opt - - ckpt1 = torch.load(path1, map_location="cpu") - ckpt2 = torch.load(path2, map_location="cpu") - cfg = ckpt1["config"] - if "model" in ckpt1: - ckpt1 = extract(ckpt1) - else: - ckpt1 = ckpt1["weight"] - if "model" in ckpt2: - ckpt2 = extract(ckpt2) - else: - ckpt2 = ckpt2["weight"] - if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())): - return "Fail to merge the models. The model architectures are not the same." 
- opt = OrderedDict() - opt["weight"] = {} - for key in ckpt1.keys(): - # try: - if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape: - min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0]) - opt["weight"][key] = ( - alpha1 * (ckpt1[key][:min_shape0].float()) - + (1 - alpha1) * (ckpt2[key][:min_shape0].float()) - ).half() - else: - opt["weight"][key] = ( - alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float()) - ).half() - # except: - # pdb.set_trace() - opt["config"] = cfg - """ - if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000] - elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000] - elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000] - """ - opt["sr"] = sr - opt["f0"] = 1 if f0 == i18n("是") else 0 - opt["version"] = version - opt["info"] = info - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() diff --git a/spaces/llmonitor/benchmarks/app/prompts/layout.js b/spaces/llmonitor/benchmarks/app/prompts/layout.js deleted file mode 100644 index 121aab52585e611404d82ab958cdb31cc2747a3d..0000000000000000000000000000000000000000 --- a/spaces/llmonitor/benchmarks/app/prompts/layout.js +++ /dev/null @@ -1,20 +0,0 @@ -import { cookies } from "next/headers" -import Link from "next/link" - -export default function PromptsLayout({ children }) { - const cookiesList = cookies() - const token = cookiesList.get("token") - - return ( - <> -

        - {"dataset menu: "} - current dataset {" | "} - vote {" | "} - submit -

        -
        - {children} - - ) -} diff --git a/spaces/luxuedong/lxd/Dockerfile b/spaces/luxuedong/lxd/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/luxuedong/lxd/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/eval.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/eval.h deleted file mode 100644 index ba82cf42ae3673a3de391eb55777ef413c43dc33..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/eval.h +++ /dev/null @@ -1,132 +0,0 @@ -/* - pybind11/exec.h: Support for evaluating Python expressions and statements - from strings and files - - Copyright (c) 2016 Klemens Morgenstern and - Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#pragma once - -#include "pybind11.h" - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) - -enum eval_mode { - /// Evaluate a string containing an isolated expression - eval_expr, - - /// Evaluate a string containing a single statement. Returns \c none - eval_single_statement, - - /// Evaluate a string containing a sequence of statement. Returns \c none - eval_statements -}; - -template -object eval(str expr, object global = globals(), object local = object()) { - if (!local) - local = global; - - /* PyRun_String does not accept a PyObject / encoding specifier, - this seems to be the only alternative */ - std::string buffer = "# -*- coding: utf-8 -*-\n" + (std::string) expr; - - int start; - switch (mode) { - case eval_expr: start = Py_eval_input; break; - case eval_single_statement: start = Py_single_input; break; - case eval_statements: start = Py_file_input; break; - default: pybind11_fail("invalid evaluation mode"); - } - - PyObject *result = PyRun_String(buffer.c_str(), start, global.ptr(), local.ptr()); - if (!result) - throw error_already_set(); - return reinterpret_steal(result); -} - -template -object eval(const char (&s)[N], object global = globals(), object local = object()) { - /* Support raw string literals by removing common leading whitespace */ - auto expr = (s[0] == '\n') ? 
str(module::import("textwrap").attr("dedent")(s)) - : str(s); - return eval(expr, global, local); -} - -inline void exec(str expr, object global = globals(), object local = object()) { - eval(expr, global, local); -} - -template -void exec(const char (&s)[N], object global = globals(), object local = object()) { - eval(s, global, local); -} - -#if defined(PYPY_VERSION) && PY_VERSION_HEX >= 0x3000000 -template -object eval_file(str, object, object) { - pybind11_fail("eval_file not supported in PyPy3. Use eval"); -} -template -object eval_file(str, object) { - pybind11_fail("eval_file not supported in PyPy3. Use eval"); -} -template -object eval_file(str) { - pybind11_fail("eval_file not supported in PyPy3. Use eval"); -} -#else -template -object eval_file(str fname, object global = globals(), object local = object()) { - if (!local) - local = global; - - int start; - switch (mode) { - case eval_expr: start = Py_eval_input; break; - case eval_single_statement: start = Py_single_input; break; - case eval_statements: start = Py_file_input; break; - default: pybind11_fail("invalid evaluation mode"); - } - - int closeFile = 1; - std::string fname_str = (std::string) fname; -#if PY_VERSION_HEX >= 0x03040000 - FILE *f = _Py_fopen_obj(fname.ptr(), "r"); -#elif PY_VERSION_HEX >= 0x03000000 - FILE *f = _Py_fopen(fname.ptr(), "r"); -#else - /* No unicode support in open() :( */ - auto fobj = reinterpret_steal(PyFile_FromString( - const_cast(fname_str.c_str()), - const_cast("r"))); - FILE *f = nullptr; - if (fobj) - f = PyFile_AsFile(fobj.ptr()); - closeFile = 0; -#endif - if (!f) { - PyErr_Clear(); - pybind11_fail("File \"" + fname_str + "\" could not be opened!"); - } - -#if PY_VERSION_HEX < 0x03000000 && defined(PYPY_VERSION) - PyObject *result = PyRun_File(f, fname_str.c_str(), start, global.ptr(), - local.ptr()); - (void) closeFile; -#else - PyObject *result = PyRun_FileEx(f, fname_str.c_str(), start, global.ptr(), - local.ptr(), closeFile); -#endif - - if (!result) - throw error_already_set(); - return reinterpret_steal(result); -} -#endif - -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/ma-xu/LIVE/pybind11/tools/clang/cindex.py b/spaces/ma-xu/LIVE/pybind11/tools/clang/cindex.py deleted file mode 100644 index 3a083de0df70e64c07bb3c0cd4bdf69d7ddfd8c5..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tools/clang/cindex.py +++ /dev/null @@ -1,3884 +0,0 @@ -#===- cindex.py - Python Indexing Library Bindings -----------*- python -*--===# -# -# The LLVM Compiler Infrastructure -# -# This file is distributed under the University of Illinois Open Source -# License. See LICENSE.TXT for details. -# -#===------------------------------------------------------------------------===# - -r""" -Clang Indexing Library Bindings -=============================== - -This module provides an interface to the Clang indexing library. It is a -low-level interface to the indexing library which attempts to match the Clang -API directly while also being "pythonic". Notable differences from the C API -are: - - * string results are returned as Python strings, not CXString objects. - - * null cursors are translated to None. - - * access to child cursors is done via iteration, not visitation. - -The major indexing objects are: - - Index - - The top-level object which manages some global library state. - - TranslationUnit - - High-level object encapsulating the AST for a single translation unit. These - can be loaded from .ast files or parsed on the fly. 
- - Cursor - - Generic object for representing a node in the AST. - - SourceRange, SourceLocation, and File - - Objects representing information about the input source. - -Most object information is exposed using properties, when the underlying API -call is efficient. -""" - -# TODO -# ==== -# -# o API support for invalid translation units. Currently we can't even get the -# diagnostics on failure because they refer to locations in an object that -# will have been invalidated. -# -# o fix memory management issues (currently client must hold on to index and -# translation unit, or risk crashes). -# -# o expose code completion APIs. -# -# o cleanup ctypes wrapping, would be nice to separate the ctypes details more -# clearly, and hide from the external interface (i.e., help(cindex)). -# -# o implement additional SourceLocation, SourceRange, and File methods. - -from ctypes import * -import collections - -import clang.enumerations - -# ctypes doesn't implicitly convert c_void_p to the appropriate wrapper -# object. This is a problem, because it means that from_parameter will see an -# integer and pass the wrong value on platforms where int != void*. Work around -# this by marshalling object arguments as void**. -c_object_p = POINTER(c_void_p) - -callbacks = {} - -### Exception Classes ### - -class TranslationUnitLoadError(Exception): - """Represents an error that occurred when loading a TranslationUnit. - - This is raised in the case where a TranslationUnit could not be - instantiated due to failure in the libclang library. - - FIXME: Make libclang expose additional error information in this scenario. - """ - pass - -class TranslationUnitSaveError(Exception): - """Represents an error that occurred when saving a TranslationUnit. - - Each error has associated with it an enumerated value, accessible under - e.save_error. Consumers can compare the value with one of the ERROR_ - constants in this class. - """ - - # Indicates that an unknown error occurred. This typically indicates that - # I/O failed during save. - ERROR_UNKNOWN = 1 - - # Indicates that errors during translation prevented saving. The errors - # should be available via the TranslationUnit's diagnostics. - ERROR_TRANSLATION_ERRORS = 2 - - # Indicates that the translation unit was somehow invalid. - ERROR_INVALID_TU = 3 - - def __init__(self, enumeration, message): - assert isinstance(enumeration, int) - - if enumeration < 1 or enumeration > 3: - raise Exception("Encountered undefined TranslationUnit save error " - "constant: %d. Please file a bug to have this " - "value supported." % enumeration) - - self.save_error = enumeration - Exception.__init__(self, 'Error %d: %s' % (enumeration, message)) - -### Structures and Utility Classes ### - -class CachedProperty(object): - """Decorator that lazy-loads the value of a property. - - The first time the property is accessed, the original property function is - executed. The value it returns is set as the new value of that instance's - property, replacing the original method. 
- """ - - def __init__(self, wrapped): - self.wrapped = wrapped - try: - self.__doc__ = wrapped.__doc__ - except: - pass - - def __get__(self, instance, instance_type=None): - if instance is None: - return self - - value = self.wrapped(instance) - setattr(instance, self.wrapped.__name__, value) - - return value - - -class _CXString(Structure): - """Helper for transforming CXString results.""" - - _fields_ = [("spelling", c_char_p), ("free", c_int)] - - def __del__(self): - conf.lib.clang_disposeString(self) - - @staticmethod - def from_result(res, fn, args): - assert isinstance(res, _CXString) - return conf.lib.clang_getCString(res) - -class SourceLocation(Structure): - """ - A SourceLocation represents a particular location within a source file. - """ - _fields_ = [("ptr_data", c_void_p * 2), ("int_data", c_uint)] - _data = None - - def _get_instantiation(self): - if self._data is None: - f, l, c, o = c_object_p(), c_uint(), c_uint(), c_uint() - conf.lib.clang_getInstantiationLocation(self, byref(f), byref(l), - byref(c), byref(o)) - if f: - f = File(f) - else: - f = None - self._data = (f, int(l.value), int(c.value), int(o.value)) - return self._data - - @staticmethod - def from_position(tu, file, line, column): - """ - Retrieve the source location associated with a given file/line/column in - a particular translation unit. - """ - return conf.lib.clang_getLocation(tu, file, line, column) - - @staticmethod - def from_offset(tu, file, offset): - """Retrieve a SourceLocation from a given character offset. - - tu -- TranslationUnit file belongs to - file -- File instance to obtain offset from - offset -- Integer character offset within file - """ - return conf.lib.clang_getLocationForOffset(tu, file, offset) - - @property - def file(self): - """Get the file represented by this source location.""" - return self._get_instantiation()[0] - - @property - def line(self): - """Get the line represented by this source location.""" - return self._get_instantiation()[1] - - @property - def column(self): - """Get the column represented by this source location.""" - return self._get_instantiation()[2] - - @property - def offset(self): - """Get the file offset represented by this source location.""" - return self._get_instantiation()[3] - - def __eq__(self, other): - return conf.lib.clang_equalLocations(self, other) - - def __ne__(self, other): - return not self.__eq__(other) - - def __repr__(self): - if self.file: - filename = self.file.name - else: - filename = None - return "" % ( - filename, self.line, self.column) - -class SourceRange(Structure): - """ - A SourceRange describes a range of source locations within the source - code. - """ - _fields_ = [ - ("ptr_data", c_void_p * 2), - ("begin_int_data", c_uint), - ("end_int_data", c_uint)] - - # FIXME: Eliminate this and make normal constructor? Requires hiding ctypes - # object. - @staticmethod - def from_locations(start, end): - return conf.lib.clang_getRange(start, end) - - @property - def start(self): - """ - Return a SourceLocation representing the first character within a - source range. - """ - return conf.lib.clang_getRangeStart(self) - - @property - def end(self): - """ - Return a SourceLocation representing the last character within a - source range. 
- """ - return conf.lib.clang_getRangeEnd(self) - - def __eq__(self, other): - return conf.lib.clang_equalRanges(self, other) - - def __ne__(self, other): - return not self.__eq__(other) - - def __contains__(self, other): - """Useful to detect the Token/Lexer bug""" - if not isinstance(other, SourceLocation): - return False - if other.file is None and self.start.file is None: - pass - elif ( self.start.file.name != other.file.name or - other.file.name != self.end.file.name): - # same file name - return False - # same file, in between lines - if self.start.line < other.line < self.end.line: - return True - elif self.start.line == other.line: - # same file first line - if self.start.column <= other.column: - return True - elif other.line == self.end.line: - # same file last line - if other.column <= self.end.column: - return True - return False - - def __repr__(self): - return "" % (self.start, self.end) - -class Diagnostic(object): - """ - A Diagnostic is a single instance of a Clang diagnostic. It includes the - diagnostic severity, the message, the location the diagnostic occurred, as - well as additional source ranges and associated fix-it hints. - """ - - Ignored = 0 - Note = 1 - Warning = 2 - Error = 3 - Fatal = 4 - - def __init__(self, ptr): - self.ptr = ptr - - def __del__(self): - conf.lib.clang_disposeDiagnostic(self) - - @property - def severity(self): - return conf.lib.clang_getDiagnosticSeverity(self) - - @property - def location(self): - return conf.lib.clang_getDiagnosticLocation(self) - - @property - def spelling(self): - return conf.lib.clang_getDiagnosticSpelling(self) - - @property - def ranges(self): - class RangeIterator: - def __init__(self, diag): - self.diag = diag - - def __len__(self): - return int(conf.lib.clang_getDiagnosticNumRanges(self.diag)) - - def __getitem__(self, key): - if (key >= len(self)): - raise IndexError - return conf.lib.clang_getDiagnosticRange(self.diag, key) - - return RangeIterator(self) - - @property - def fixits(self): - class FixItIterator: - def __init__(self, diag): - self.diag = diag - - def __len__(self): - return int(conf.lib.clang_getDiagnosticNumFixIts(self.diag)) - - def __getitem__(self, key): - range = SourceRange() - value = conf.lib.clang_getDiagnosticFixIt(self.diag, key, - byref(range)) - if len(value) == 0: - raise IndexError - - return FixIt(range, value) - - return FixItIterator(self) - - @property - def children(self): - class ChildDiagnosticsIterator: - def __init__(self, diag): - self.diag_set = conf.lib.clang_getChildDiagnostics(diag) - - def __len__(self): - return int(conf.lib.clang_getNumDiagnosticsInSet(self.diag_set)) - - def __getitem__(self, key): - diag = conf.lib.clang_getDiagnosticInSet(self.diag_set, key) - if not diag: - raise IndexError - return Diagnostic(diag) - - return ChildDiagnosticsIterator(self) - - @property - def category_number(self): - """The category number for this diagnostic or 0 if unavailable.""" - return conf.lib.clang_getDiagnosticCategory(self) - - @property - def category_name(self): - """The string name of the category for this diagnostic.""" - return conf.lib.clang_getDiagnosticCategoryText(self) - - @property - def option(self): - """The command-line option that enables this diagnostic.""" - return conf.lib.clang_getDiagnosticOption(self, None) - - @property - def disable_option(self): - """The command-line option that disables this diagnostic.""" - disable = _CXString() - conf.lib.clang_getDiagnosticOption(self, byref(disable)) - - return conf.lib.clang_getCString(disable) - - def 
__repr__(self): - return "" % ( - self.severity, self.location, self.spelling) - - def from_param(self): - return self.ptr - -class FixIt(object): - """ - A FixIt represents a transformation to be applied to the source to - "fix-it". The fix-it shouldbe applied by replacing the given source range - with the given value. - """ - - def __init__(self, range, value): - self.range = range - self.value = value - - def __repr__(self): - return "" % (self.range, self.value) - -class TokenGroup(object): - """Helper class to facilitate token management. - - Tokens are allocated from libclang in chunks. They must be disposed of as a - collective group. - - One purpose of this class is for instances to represent groups of allocated - tokens. Each token in a group contains a reference back to an instance of - this class. When all tokens from a group are garbage collected, it allows - this class to be garbage collected. When this class is garbage collected, - it calls the libclang destructor which invalidates all tokens in the group. - - You should not instantiate this class outside of this module. - """ - def __init__(self, tu, memory, count): - self._tu = tu - self._memory = memory - self._count = count - - def __del__(self): - conf.lib.clang_disposeTokens(self._tu, self._memory, self._count) - - @staticmethod - def get_tokens(tu, extent): - """Helper method to return all tokens in an extent. - - This functionality is needed multiple places in this module. We define - it here because it seems like a logical place. - """ - tokens_memory = POINTER(Token)() - tokens_count = c_uint() - - conf.lib.clang_tokenize(tu, extent, byref(tokens_memory), - byref(tokens_count)) - - count = int(tokens_count.value) - - # If we get no tokens, no memory was allocated. Be sure not to return - # anything and potentially call a destructor on nothing. - if count < 1: - return - - tokens_array = cast(tokens_memory, POINTER(Token * count)).contents - - token_group = TokenGroup(tu, tokens_memory, tokens_count) - - for i in range(0, count): - token = Token() - token.int_data = tokens_array[i].int_data - token.ptr_data = tokens_array[i].ptr_data - token._tu = tu - token._group = token_group - - yield token - -class TokenKind(object): - """Describes a specific type of a Token.""" - - _value_map = {} # int -> TokenKind - - def __init__(self, value, name): - """Create a new TokenKind instance from a numeric value and a name.""" - self.value = value - self.name = name - - def __repr__(self): - return 'TokenKind.%s' % (self.name,) - - @staticmethod - def from_value(value): - """Obtain a registered TokenKind instance from its value.""" - result = TokenKind._value_map.get(value, None) - - if result is None: - raise ValueError('Unknown TokenKind: %d' % value) - - return result - - @staticmethod - def register(value, name): - """Register a new TokenKind enumeration. - - This should only be called at module load time by code within this - package. - """ - if value in TokenKind._value_map: - raise ValueError('TokenKind already registered: %d' % value) - - kind = TokenKind(value, name) - TokenKind._value_map[value] = kind - setattr(TokenKind, name, kind) - -### Cursor Kinds ### -class BaseEnumeration(object): - """ - Common base class for named enumerations held in sync with Index.h values. - - Subclasses must define their own _kinds and _name_map members, as: - _kinds = [] - _name_map = None - These values hold the per-subclass instances and value-to-name mappings, - respectively. 
- - """ - - def __init__(self, value): - if value >= len(self.__class__._kinds): - self.__class__._kinds += [None] * (value - len(self.__class__._kinds) + 1) - if self.__class__._kinds[value] is not None: - raise ValueError('{0} value {1} already loaded'.format( - str(self.__class__), value)) - self.value = value - self.__class__._kinds[value] = self - self.__class__._name_map = None - - - def from_param(self): - return self.value - - @property - def name(self): - """Get the enumeration name of this cursor kind.""" - if self._name_map is None: - self._name_map = {} - for key, value in list(self.__class__.__dict__.items()): - if isinstance(value, self.__class__): - self._name_map[value] = key - return self._name_map[self] - - @classmethod - def from_id(cls, id): - if id >= len(cls._kinds) or cls._kinds[id] is None: - raise ValueError('Unknown template argument kind %d' % id) - return cls._kinds[id] - - def __repr__(self): - return '%s.%s' % (self.__class__, self.name,) - - -class CursorKind(BaseEnumeration): - """ - A CursorKind describes the kind of entity that a cursor points to. - """ - - # The required BaseEnumeration declarations. - _kinds = [] - _name_map = None - - @staticmethod - def get_all_kinds(): - """Return all CursorKind enumeration instances.""" - return [_f for _f in CursorKind._kinds if _f] - - def is_declaration(self): - """Test if this is a declaration kind.""" - return conf.lib.clang_isDeclaration(self) - - def is_reference(self): - """Test if this is a reference kind.""" - return conf.lib.clang_isReference(self) - - def is_expression(self): - """Test if this is an expression kind.""" - return conf.lib.clang_isExpression(self) - - def is_statement(self): - """Test if this is a statement kind.""" - return conf.lib.clang_isStatement(self) - - def is_attribute(self): - """Test if this is an attribute kind.""" - return conf.lib.clang_isAttribute(self) - - def is_invalid(self): - """Test if this is an invalid kind.""" - return conf.lib.clang_isInvalid(self) - - def is_translation_unit(self): - """Test if this is a translation unit kind.""" - return conf.lib.clang_isTranslationUnit(self) - - def is_preprocessing(self): - """Test if this is a preprocessing kind.""" - return conf.lib.clang_isPreprocessing(self) - - def is_unexposed(self): - """Test if this is an unexposed kind.""" - return conf.lib.clang_isUnexposed(self) - - def __repr__(self): - return 'CursorKind.%s' % (self.name,) - -### -# Declaration Kinds - -# A declaration whose specific kind is not exposed via this interface. -# -# Unexposed declarations have the same operations as any other kind of -# declaration; one can extract their location information, spelling, find their -# definitions, etc. However, the specific kind of the declaration is not -# reported. -CursorKind.UNEXPOSED_DECL = CursorKind(1) - -# A C or C++ struct. -CursorKind.STRUCT_DECL = CursorKind(2) - -# A C or C++ union. -CursorKind.UNION_DECL = CursorKind(3) - -# A C++ class. -CursorKind.CLASS_DECL = CursorKind(4) - -# An enumeration. -CursorKind.ENUM_DECL = CursorKind(5) - -# A field (in C) or non-static data member (in C++) in a struct, union, or C++ -# class. -CursorKind.FIELD_DECL = CursorKind(6) - -# An enumerator constant. -CursorKind.ENUM_CONSTANT_DECL = CursorKind(7) - -# A function. -CursorKind.FUNCTION_DECL = CursorKind(8) - -# A variable. -CursorKind.VAR_DECL = CursorKind(9) - -# A function or method parameter. -CursorKind.PARM_DECL = CursorKind(10) - -# An Objective-C @interface. 
-CursorKind.OBJC_INTERFACE_DECL = CursorKind(11) - -# An Objective-C @interface for a category. -CursorKind.OBJC_CATEGORY_DECL = CursorKind(12) - -# An Objective-C @protocol declaration. -CursorKind.OBJC_PROTOCOL_DECL = CursorKind(13) - -# An Objective-C @property declaration. -CursorKind.OBJC_PROPERTY_DECL = CursorKind(14) - -# An Objective-C instance variable. -CursorKind.OBJC_IVAR_DECL = CursorKind(15) - -# An Objective-C instance method. -CursorKind.OBJC_INSTANCE_METHOD_DECL = CursorKind(16) - -# An Objective-C class method. -CursorKind.OBJC_CLASS_METHOD_DECL = CursorKind(17) - -# An Objective-C @implementation. -CursorKind.OBJC_IMPLEMENTATION_DECL = CursorKind(18) - -# An Objective-C @implementation for a category. -CursorKind.OBJC_CATEGORY_IMPL_DECL = CursorKind(19) - -# A typedef. -CursorKind.TYPEDEF_DECL = CursorKind(20) - -# A C++ class method. -CursorKind.CXX_METHOD = CursorKind(21) - -# A C++ namespace. -CursorKind.NAMESPACE = CursorKind(22) - -# A linkage specification, e.g. 'extern "C"'. -CursorKind.LINKAGE_SPEC = CursorKind(23) - -# A C++ constructor. -CursorKind.CONSTRUCTOR = CursorKind(24) - -# A C++ destructor. -CursorKind.DESTRUCTOR = CursorKind(25) - -# A C++ conversion function. -CursorKind.CONVERSION_FUNCTION = CursorKind(26) - -# A C++ template type parameter -CursorKind.TEMPLATE_TYPE_PARAMETER = CursorKind(27) - -# A C++ non-type template paramater. -CursorKind.TEMPLATE_NON_TYPE_PARAMETER = CursorKind(28) - -# A C++ template template parameter. -CursorKind.TEMPLATE_TEMPLATE_PARAMETER = CursorKind(29) - -# A C++ function template. -CursorKind.FUNCTION_TEMPLATE = CursorKind(30) - -# A C++ class template. -CursorKind.CLASS_TEMPLATE = CursorKind(31) - -# A C++ class template partial specialization. -CursorKind.CLASS_TEMPLATE_PARTIAL_SPECIALIZATION = CursorKind(32) - -# A C++ namespace alias declaration. -CursorKind.NAMESPACE_ALIAS = CursorKind(33) - -# A C++ using directive -CursorKind.USING_DIRECTIVE = CursorKind(34) - -# A C++ using declaration -CursorKind.USING_DECLARATION = CursorKind(35) - -# A Type alias decl. -CursorKind.TYPE_ALIAS_DECL = CursorKind(36) - -# A Objective-C synthesize decl -CursorKind.OBJC_SYNTHESIZE_DECL = CursorKind(37) - -# A Objective-C dynamic decl -CursorKind.OBJC_DYNAMIC_DECL = CursorKind(38) - -# A C++ access specifier decl. -CursorKind.CXX_ACCESS_SPEC_DECL = CursorKind(39) - - -### -# Reference Kinds - -CursorKind.OBJC_SUPER_CLASS_REF = CursorKind(40) -CursorKind.OBJC_PROTOCOL_REF = CursorKind(41) -CursorKind.OBJC_CLASS_REF = CursorKind(42) - -# A reference to a type declaration. -# -# A type reference occurs anywhere where a type is named but not -# declared. For example, given: -# typedef unsigned size_type; -# size_type size; -# -# The typedef is a declaration of size_type (CXCursor_TypedefDecl), -# while the type of the variable "size" is referenced. The cursor -# referenced by the type of size is the typedef for size_type. -CursorKind.TYPE_REF = CursorKind(43) -CursorKind.CXX_BASE_SPECIFIER = CursorKind(44) - -# A reference to a class template, function template, template -# template parameter, or class template partial specialization. -CursorKind.TEMPLATE_REF = CursorKind(45) - -# A reference to a namespace or namepsace alias. -CursorKind.NAMESPACE_REF = CursorKind(46) - -# A reference to a member of a struct, union, or class that occurs in -# some non-expression context, e.g., a designated initializer. -CursorKind.MEMBER_REF = CursorKind(47) - -# A reference to a labeled statement. 
-CursorKind.LABEL_REF = CursorKind(48) - -# A reference to a set of overloaded functions or function templates -# that has not yet been resolved to a specific function or function template. -CursorKind.OVERLOADED_DECL_REF = CursorKind(49) - -# A reference to a variable that occurs in some non-expression -# context, e.g., a C++ lambda capture list. -CursorKind.VARIABLE_REF = CursorKind(50) - -### -# Invalid/Error Kinds - -CursorKind.INVALID_FILE = CursorKind(70) -CursorKind.NO_DECL_FOUND = CursorKind(71) -CursorKind.NOT_IMPLEMENTED = CursorKind(72) -CursorKind.INVALID_CODE = CursorKind(73) - -### -# Expression Kinds - -# An expression whose specific kind is not exposed via this interface. -# -# Unexposed expressions have the same operations as any other kind of -# expression; one can extract their location information, spelling, children, -# etc. However, the specific kind of the expression is not reported. -CursorKind.UNEXPOSED_EXPR = CursorKind(100) - -# An expression that refers to some value declaration, such as a function, -# varible, or enumerator. -CursorKind.DECL_REF_EXPR = CursorKind(101) - -# An expression that refers to a member of a struct, union, class, Objective-C -# class, etc. -CursorKind.MEMBER_REF_EXPR = CursorKind(102) - -# An expression that calls a function. -CursorKind.CALL_EXPR = CursorKind(103) - -# An expression that sends a message to an Objective-C object or class. -CursorKind.OBJC_MESSAGE_EXPR = CursorKind(104) - -# An expression that represents a block literal. -CursorKind.BLOCK_EXPR = CursorKind(105) - -# An integer literal. -CursorKind.INTEGER_LITERAL = CursorKind(106) - -# A floating point number literal. -CursorKind.FLOATING_LITERAL = CursorKind(107) - -# An imaginary number literal. -CursorKind.IMAGINARY_LITERAL = CursorKind(108) - -# A string literal. -CursorKind.STRING_LITERAL = CursorKind(109) - -# A character literal. -CursorKind.CHARACTER_LITERAL = CursorKind(110) - -# A parenthesized expression, e.g. "(1)". -# -# This AST node is only formed if full location information is requested. -CursorKind.PAREN_EXPR = CursorKind(111) - -# This represents the unary-expression's (except sizeof and -# alignof). -CursorKind.UNARY_OPERATOR = CursorKind(112) - -# [C99 6.5.2.1] Array Subscripting. -CursorKind.ARRAY_SUBSCRIPT_EXPR = CursorKind(113) - -# A builtin binary operation expression such as "x + y" or -# "x <= y". -CursorKind.BINARY_OPERATOR = CursorKind(114) - -# Compound assignment such as "+=". -CursorKind.COMPOUND_ASSIGNMENT_OPERATOR = CursorKind(115) - -# The ?: ternary operator. -CursorKind.CONDITIONAL_OPERATOR = CursorKind(116) - -# An explicit cast in C (C99 6.5.4) or a C-style cast in C++ -# (C++ [expr.cast]), which uses the syntax (Type)expr. -# -# For example: (int)f. -CursorKind.CSTYLE_CAST_EXPR = CursorKind(117) - -# [C99 6.5.2.5] -CursorKind.COMPOUND_LITERAL_EXPR = CursorKind(118) - -# Describes an C or C++ initializer list. -CursorKind.INIT_LIST_EXPR = CursorKind(119) - -# The GNU address of label extension, representing &&label. -CursorKind.ADDR_LABEL_EXPR = CursorKind(120) - -# This is the GNU Statement Expression extension: ({int X=4; X;}) -CursorKind.StmtExpr = CursorKind(121) - -# Represents a C11 generic selection. -CursorKind.GENERIC_SELECTION_EXPR = CursorKind(122) - -# Implements the GNU __null extension, which is a name for a null -# pointer constant that has integral type (e.g., int or long) and is the same -# size and alignment as a pointer. 
-# -# The __null extension is typically only used by system headers, which define -# NULL as __null in C++ rather than using 0 (which is an integer that may not -# match the size of a pointer). -CursorKind.GNU_NULL_EXPR = CursorKind(123) - -# C++'s static_cast<> expression. -CursorKind.CXX_STATIC_CAST_EXPR = CursorKind(124) - -# C++'s dynamic_cast<> expression. -CursorKind.CXX_DYNAMIC_CAST_EXPR = CursorKind(125) - -# C++'s reinterpret_cast<> expression. -CursorKind.CXX_REINTERPRET_CAST_EXPR = CursorKind(126) - -# C++'s const_cast<> expression. -CursorKind.CXX_CONST_CAST_EXPR = CursorKind(127) - -# Represents an explicit C++ type conversion that uses "functional" -# notion (C++ [expr.type.conv]). -# -# Example: -# \code -# x = int(0.5); -# \endcode -CursorKind.CXX_FUNCTIONAL_CAST_EXPR = CursorKind(128) - -# A C++ typeid expression (C++ [expr.typeid]). -CursorKind.CXX_TYPEID_EXPR = CursorKind(129) - -# [C++ 2.13.5] C++ Boolean Literal. -CursorKind.CXX_BOOL_LITERAL_EXPR = CursorKind(130) - -# [C++0x 2.14.7] C++ Pointer Literal. -CursorKind.CXX_NULL_PTR_LITERAL_EXPR = CursorKind(131) - -# Represents the "this" expression in C++ -CursorKind.CXX_THIS_EXPR = CursorKind(132) - -# [C++ 15] C++ Throw Expression. -# -# This handles 'throw' and 'throw' assignment-expression. When -# assignment-expression isn't present, Op will be null. -CursorKind.CXX_THROW_EXPR = CursorKind(133) - -# A new expression for memory allocation and constructor calls, e.g: -# "new CXXNewExpr(foo)". -CursorKind.CXX_NEW_EXPR = CursorKind(134) - -# A delete expression for memory deallocation and destructor calls, -# e.g. "delete[] pArray". -CursorKind.CXX_DELETE_EXPR = CursorKind(135) - -# Represents a unary expression. -CursorKind.CXX_UNARY_EXPR = CursorKind(136) - -# ObjCStringLiteral, used for Objective-C string literals i.e. "foo". -CursorKind.OBJC_STRING_LITERAL = CursorKind(137) - -# ObjCEncodeExpr, used for in Objective-C. -CursorKind.OBJC_ENCODE_EXPR = CursorKind(138) - -# ObjCSelectorExpr used for in Objective-C. -CursorKind.OBJC_SELECTOR_EXPR = CursorKind(139) - -# Objective-C's protocol expression. -CursorKind.OBJC_PROTOCOL_EXPR = CursorKind(140) - -# An Objective-C "bridged" cast expression, which casts between -# Objective-C pointers and C pointers, transferring ownership in the process. -# -# \code -# NSString *str = (__bridge_transfer NSString *)CFCreateString(); -# \endcode -CursorKind.OBJC_BRIDGE_CAST_EXPR = CursorKind(141) - -# Represents a C++0x pack expansion that produces a sequence of -# expressions. -# -# A pack expansion expression contains a pattern (which itself is an -# expression) followed by an ellipsis. For example: -CursorKind.PACK_EXPANSION_EXPR = CursorKind(142) - -# Represents an expression that computes the length of a parameter -# pack. -CursorKind.SIZE_OF_PACK_EXPR = CursorKind(143) - -# Represents a C++ lambda expression that produces a local function -# object. -# -# \code -# void abssort(float *x, unsigned N) { -# std::sort(x, x + N, -# [](float a, float b) { -# return std::abs(a) < std::abs(b); -# }); -# } -# \endcode -CursorKind.LAMBDA_EXPR = CursorKind(144) - -# Objective-c Boolean Literal. -CursorKind.OBJ_BOOL_LITERAL_EXPR = CursorKind(145) - -# Represents the "self" expression in a ObjC method. -CursorKind.OBJ_SELF_EXPR = CursorKind(146) - - -# A statement whose specific kind is not exposed via this interface. -# -# Unexposed statements have the same operations as any other kind of statement; -# one can extract their location information, spelling, children, etc. 
However, -# the specific kind of the statement is not reported. -CursorKind.UNEXPOSED_STMT = CursorKind(200) - -# A labelled statement in a function. -CursorKind.LABEL_STMT = CursorKind(201) - -# A compound statement -CursorKind.COMPOUND_STMT = CursorKind(202) - -# A case statement. -CursorKind.CASE_STMT = CursorKind(203) - -# A default statement. -CursorKind.DEFAULT_STMT = CursorKind(204) - -# An if statement. -CursorKind.IF_STMT = CursorKind(205) - -# A switch statement. -CursorKind.SWITCH_STMT = CursorKind(206) - -# A while statement. -CursorKind.WHILE_STMT = CursorKind(207) - -# A do statement. -CursorKind.DO_STMT = CursorKind(208) - -# A for statement. -CursorKind.FOR_STMT = CursorKind(209) - -# A goto statement. -CursorKind.GOTO_STMT = CursorKind(210) - -# An indirect goto statement. -CursorKind.INDIRECT_GOTO_STMT = CursorKind(211) - -# A continue statement. -CursorKind.CONTINUE_STMT = CursorKind(212) - -# A break statement. -CursorKind.BREAK_STMT = CursorKind(213) - -# A return statement. -CursorKind.RETURN_STMT = CursorKind(214) - -# A GNU-style inline assembler statement. -CursorKind.ASM_STMT = CursorKind(215) - -# Objective-C's overall @try-@catch-@finally statement. -CursorKind.OBJC_AT_TRY_STMT = CursorKind(216) - -# Objective-C's @catch statement. -CursorKind.OBJC_AT_CATCH_STMT = CursorKind(217) - -# Objective-C's @finally statement. -CursorKind.OBJC_AT_FINALLY_STMT = CursorKind(218) - -# Objective-C's @throw statement. -CursorKind.OBJC_AT_THROW_STMT = CursorKind(219) - -# Objective-C's @synchronized statement. -CursorKind.OBJC_AT_SYNCHRONIZED_STMT = CursorKind(220) - -# Objective-C's autorealease pool statement. -CursorKind.OBJC_AUTORELEASE_POOL_STMT = CursorKind(221) - -# Objective-C's for collection statement. -CursorKind.OBJC_FOR_COLLECTION_STMT = CursorKind(222) - -# C++'s catch statement. -CursorKind.CXX_CATCH_STMT = CursorKind(223) - -# C++'s try statement. -CursorKind.CXX_TRY_STMT = CursorKind(224) - -# C++'s for (* : *) statement. -CursorKind.CXX_FOR_RANGE_STMT = CursorKind(225) - -# Windows Structured Exception Handling's try statement. -CursorKind.SEH_TRY_STMT = CursorKind(226) - -# Windows Structured Exception Handling's except statement. -CursorKind.SEH_EXCEPT_STMT = CursorKind(227) - -# Windows Structured Exception Handling's finally statement. -CursorKind.SEH_FINALLY_STMT = CursorKind(228) - -# A MS inline assembly statement extension. -CursorKind.MS_ASM_STMT = CursorKind(229) - -# The null statement. -CursorKind.NULL_STMT = CursorKind(230) - -# Adaptor class for mixing declarations with statements and expressions. -CursorKind.DECL_STMT = CursorKind(231) - -# OpenMP parallel directive. -CursorKind.OMP_PARALLEL_DIRECTIVE = CursorKind(232) - -# OpenMP SIMD directive. -CursorKind.OMP_SIMD_DIRECTIVE = CursorKind(233) - -# OpenMP for directive. -CursorKind.OMP_FOR_DIRECTIVE = CursorKind(234) - -# OpenMP sections directive. -CursorKind.OMP_SECTIONS_DIRECTIVE = CursorKind(235) - -# OpenMP section directive. -CursorKind.OMP_SECTION_DIRECTIVE = CursorKind(236) - -# OpenMP single directive. -CursorKind.OMP_SINGLE_DIRECTIVE = CursorKind(237) - -# OpenMP parallel for directive. -CursorKind.OMP_PARALLEL_FOR_DIRECTIVE = CursorKind(238) - -# OpenMP parallel sections directive. -CursorKind.OMP_PARALLEL_SECTIONS_DIRECTIVE = CursorKind(239) - -# OpenMP task directive. -CursorKind.OMP_TASK_DIRECTIVE = CursorKind(240) - -# OpenMP master directive. -CursorKind.OMP_MASTER_DIRECTIVE = CursorKind(241) - -# OpenMP critical directive. 
-CursorKind.OMP_CRITICAL_DIRECTIVE = CursorKind(242) - -# OpenMP taskyield directive. -CursorKind.OMP_TASKYIELD_DIRECTIVE = CursorKind(243) - -# OpenMP barrier directive. -CursorKind.OMP_BARRIER_DIRECTIVE = CursorKind(244) - -# OpenMP taskwait directive. -CursorKind.OMP_TASKWAIT_DIRECTIVE = CursorKind(245) - -# OpenMP flush directive. -CursorKind.OMP_FLUSH_DIRECTIVE = CursorKind(246) - -# Windows Structured Exception Handling's leave statement. -CursorKind.SEH_LEAVE_STMT = CursorKind(247) - -# OpenMP ordered directive. -CursorKind.OMP_ORDERED_DIRECTIVE = CursorKind(248) - -# OpenMP atomic directive. -CursorKind.OMP_ATOMIC_DIRECTIVE = CursorKind(249) - -# OpenMP for SIMD directive. -CursorKind.OMP_FOR_SIMD_DIRECTIVE = CursorKind(250) - -# OpenMP parallel for SIMD directive. -CursorKind.OMP_PARALLELFORSIMD_DIRECTIVE = CursorKind(251) - -# OpenMP target directive. -CursorKind.OMP_TARGET_DIRECTIVE = CursorKind(252) - -# OpenMP teams directive. -CursorKind.OMP_TEAMS_DIRECTIVE = CursorKind(253) - -# OpenMP taskgroup directive. -CursorKind.OMP_TASKGROUP_DIRECTIVE = CursorKind(254) - -# OpenMP cancellation point directive. -CursorKind.OMP_CANCELLATION_POINT_DIRECTIVE = CursorKind(255) - -# OpenMP cancel directive. -CursorKind.OMP_CANCEL_DIRECTIVE = CursorKind(256) - -# OpenMP target data directive. -CursorKind.OMP_TARGET_DATA_DIRECTIVE = CursorKind(257) - -# OpenMP taskloop directive. -CursorKind.OMP_TASK_LOOP_DIRECTIVE = CursorKind(258) - -# OpenMP taskloop simd directive. -CursorKind.OMP_TASK_LOOP_SIMD_DIRECTIVE = CursorKind(259) - -# OpenMP distribute directive. -CursorKind.OMP_DISTRIBUTE_DIRECTIVE = CursorKind(260) - -# OpenMP target enter data directive. -CursorKind.OMP_TARGET_ENTER_DATA_DIRECTIVE = CursorKind(261) - -# OpenMP target exit data directive. -CursorKind.OMP_TARGET_EXIT_DATA_DIRECTIVE = CursorKind(262) - -# OpenMP target parallel directive. -CursorKind.OMP_TARGET_PARALLEL_DIRECTIVE = CursorKind(263) - -# OpenMP target parallel for directive. -CursorKind.OMP_TARGET_PARALLELFOR_DIRECTIVE = CursorKind(264) - -# OpenMP target update directive. -CursorKind.OMP_TARGET_UPDATE_DIRECTIVE = CursorKind(265) - -# OpenMP distribute parallel for directive. -CursorKind.OMP_DISTRIBUTE_PARALLELFOR_DIRECTIVE = CursorKind(266) - -# OpenMP distribute parallel for simd directive. -CursorKind.OMP_DISTRIBUTE_PARALLEL_FOR_SIMD_DIRECTIVE = CursorKind(267) - -# OpenMP distribute simd directive. -CursorKind.OMP_DISTRIBUTE_SIMD_DIRECTIVE = CursorKind(268) - -# OpenMP target parallel for simd directive. -CursorKind.OMP_TARGET_PARALLEL_FOR_SIMD_DIRECTIVE = CursorKind(269) - -# OpenMP target simd directive. -CursorKind.OMP_TARGET_SIMD_DIRECTIVE = CursorKind(270) - -# OpenMP teams distribute directive. -CursorKind.OMP_TEAMS_DISTRIBUTE_DIRECTIVE = CursorKind(271) - -### -# Other Kinds - -# Cursor that represents the translation unit itself. -# -# The translation unit cursor exists primarily to act as the root cursor for -# traversing the contents of a translation unit. 
-CursorKind.TRANSLATION_UNIT = CursorKind(300) - -### -# Attributes - -# An attribute whoe specific kind is note exposed via this interface -CursorKind.UNEXPOSED_ATTR = CursorKind(400) - -CursorKind.IB_ACTION_ATTR = CursorKind(401) -CursorKind.IB_OUTLET_ATTR = CursorKind(402) -CursorKind.IB_OUTLET_COLLECTION_ATTR = CursorKind(403) - -CursorKind.CXX_FINAL_ATTR = CursorKind(404) -CursorKind.CXX_OVERRIDE_ATTR = CursorKind(405) -CursorKind.ANNOTATE_ATTR = CursorKind(406) -CursorKind.ASM_LABEL_ATTR = CursorKind(407) -CursorKind.PACKED_ATTR = CursorKind(408) -CursorKind.PURE_ATTR = CursorKind(409) -CursorKind.CONST_ATTR = CursorKind(410) -CursorKind.NODUPLICATE_ATTR = CursorKind(411) -CursorKind.CUDACONSTANT_ATTR = CursorKind(412) -CursorKind.CUDADEVICE_ATTR = CursorKind(413) -CursorKind.CUDAGLOBAL_ATTR = CursorKind(414) -CursorKind.CUDAHOST_ATTR = CursorKind(415) -CursorKind.CUDASHARED_ATTR = CursorKind(416) - -CursorKind.VISIBILITY_ATTR = CursorKind(417) - -CursorKind.DLLEXPORT_ATTR = CursorKind(418) -CursorKind.DLLIMPORT_ATTR = CursorKind(419) - -### -# Preprocessing -CursorKind.PREPROCESSING_DIRECTIVE = CursorKind(500) -CursorKind.MACRO_DEFINITION = CursorKind(501) -CursorKind.MACRO_INSTANTIATION = CursorKind(502) -CursorKind.INCLUSION_DIRECTIVE = CursorKind(503) - -### -# Extra declaration - -# A module import declaration. -CursorKind.MODULE_IMPORT_DECL = CursorKind(600) -# A type alias template declaration -CursorKind.TYPE_ALIAS_TEMPLATE_DECL = CursorKind(601) -# A static_assert or _Static_assert node -CursorKind.STATIC_ASSERT = CursorKind(602) -# A friend declaration -CursorKind.FRIEND_DECL = CursorKind(603) - -# A code completion overload candidate. -CursorKind.OVERLOAD_CANDIDATE = CursorKind(700) - -### Template Argument Kinds ### -class TemplateArgumentKind(BaseEnumeration): - """ - A TemplateArgumentKind describes the kind of entity that a template argument - represents. - """ - - # The required BaseEnumeration declarations. - _kinds = [] - _name_map = None - -TemplateArgumentKind.NULL = TemplateArgumentKind(0) -TemplateArgumentKind.TYPE = TemplateArgumentKind(1) -TemplateArgumentKind.DECLARATION = TemplateArgumentKind(2) -TemplateArgumentKind.NULLPTR = TemplateArgumentKind(3) -TemplateArgumentKind.INTEGRAL = TemplateArgumentKind(4) - -### Cursors ### - -class Cursor(Structure): - """ - The Cursor class represents a reference to an element within the AST. It - acts as a kind of iterator. - """ - _fields_ = [("_kind_id", c_int), ("xdata", c_int), ("data", c_void_p * 3)] - - @staticmethod - def from_location(tu, location): - # We store a reference to the TU in the instance so the TU won't get - # collected before the cursor. - cursor = conf.lib.clang_getCursor(tu, location) - cursor._tu = tu - - return cursor - - def __eq__(self, other): - return conf.lib.clang_equalCursors(self, other) - - def __ne__(self, other): - return not self.__eq__(other) - - def is_definition(self): - """ - Returns true if the declaration pointed at by the cursor is also a - definition of that entity. - """ - return conf.lib.clang_isCursorDefinition(self) - - def is_const_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared 'const'. - """ - return conf.lib.clang_CXXMethod_isConst(self) - - def is_converting_constructor(self): - """Returns True if the cursor refers to a C++ converting constructor. 
- """ - return conf.lib.clang_CXXConstructor_isConvertingConstructor(self) - - def is_copy_constructor(self): - """Returns True if the cursor refers to a C++ copy constructor. - """ - return conf.lib.clang_CXXConstructor_isCopyConstructor(self) - - def is_default_constructor(self): - """Returns True if the cursor refers to a C++ default constructor. - """ - return conf.lib.clang_CXXConstructor_isDefaultConstructor(self) - - def is_move_constructor(self): - """Returns True if the cursor refers to a C++ move constructor. - """ - return conf.lib.clang_CXXConstructor_isMoveConstructor(self) - - def is_default_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared '= default'. - """ - return conf.lib.clang_CXXMethod_isDefaulted(self) - - def is_mutable_field(self): - """Returns True if the cursor refers to a C++ field that is declared - 'mutable'. - """ - return conf.lib.clang_CXXField_isMutable(self) - - def is_pure_virtual_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared pure virtual. - """ - return conf.lib.clang_CXXMethod_isPureVirtual(self) - - def is_static_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared 'static'. - """ - return conf.lib.clang_CXXMethod_isStatic(self) - - def is_virtual_method(self): - """Returns True if the cursor refers to a C++ member function or member - function template that is declared 'virtual'. - """ - return conf.lib.clang_CXXMethod_isVirtual(self) - - def get_definition(self): - """ - If the cursor is a reference to a declaration or a declaration of - some entity, return a cursor that points to the definition of that - entity. - """ - # TODO: Should probably check that this is either a reference or - # declaration prior to issuing the lookup. - return conf.lib.clang_getCursorDefinition(self) - - def get_usr(self): - """Return the Unified Symbol Resultion (USR) for the entity referenced - by the given cursor (or None). - - A Unified Symbol Resolution (USR) is a string that identifies a - particular entity (function, class, variable, etc.) within a - program. USRs can be compared across translation units to determine, - e.g., when references in one translation refer to an entity defined in - another translation unit.""" - return conf.lib.clang_getCursorUSR(self) - - @property - def kind(self): - """Return the kind of this cursor.""" - return CursorKind.from_id(self._kind_id) - - @property - def spelling(self): - """Return the spelling of the entity pointed at by the cursor.""" - if not hasattr(self, '_spelling'): - self._spelling = conf.lib.clang_getCursorSpelling(self) - - return self._spelling - - @property - def displayname(self): - """ - Return the display name for the entity referenced by this cursor. - - The display name contains extra information that helps identify the - cursor, such as the parameters of a function or template or the - arguments of a class template specialization. 
- """ - if not hasattr(self, '_displayname'): - self._displayname = conf.lib.clang_getCursorDisplayName(self) - - return self._displayname - - @property - def mangled_name(self): - """Return the mangled name for the entity referenced by this cursor.""" - if not hasattr(self, '_mangled_name'): - self._mangled_name = conf.lib.clang_Cursor_getMangling(self) - - return self._mangled_name - - @property - def location(self): - """ - Return the source location (the starting character) of the entity - pointed at by the cursor. - """ - if not hasattr(self, '_loc'): - self._loc = conf.lib.clang_getCursorLocation(self) - - return self._loc - - @property - def extent(self): - """ - Return the source range (the range of text) occupied by the entity - pointed at by the cursor. - """ - if not hasattr(self, '_extent'): - self._extent = conf.lib.clang_getCursorExtent(self) - - return self._extent - - @property - def storage_class(self): - """ - Retrieves the storage class (if any) of the entity pointed at by the - cursor. - """ - if not hasattr(self, '_storage_class'): - self._storage_class = conf.lib.clang_Cursor_getStorageClass(self) - - return StorageClass.from_id(self._storage_class) - - @property - def access_specifier(self): - """ - Retrieves the access specifier (if any) of the entity pointed at by the - cursor. - """ - if not hasattr(self, '_access_specifier'): - self._access_specifier = conf.lib.clang_getCXXAccessSpecifier(self) - - return AccessSpecifier.from_id(self._access_specifier) - - @property - def type(self): - """ - Retrieve the Type (if any) of the entity pointed at by the cursor. - """ - if not hasattr(self, '_type'): - self._type = conf.lib.clang_getCursorType(self) - - return self._type - - @property - def canonical(self): - """Return the canonical Cursor corresponding to this Cursor. - - The canonical cursor is the cursor which is representative for the - underlying entity. For example, if you have multiple forward - declarations for the same class, the canonical cursor for the forward - declarations will be identical. - """ - if not hasattr(self, '_canonical'): - self._canonical = conf.lib.clang_getCanonicalCursor(self) - - return self._canonical - - @property - def result_type(self): - """Retrieve the Type of the result for this Cursor.""" - if not hasattr(self, '_result_type'): - self._result_type = conf.lib.clang_getResultType(self.type) - - return self._result_type - - @property - def underlying_typedef_type(self): - """Return the underlying type of a typedef declaration. - - Returns a Type for the typedef this cursor is a declaration for. If - the current cursor is not a typedef, this raises. - """ - if not hasattr(self, '_underlying_type'): - assert self.kind.is_declaration() - self._underlying_type = \ - conf.lib.clang_getTypedefDeclUnderlyingType(self) - - return self._underlying_type - - @property - def enum_type(self): - """Return the integer type of an enum declaration. - - Returns a Type corresponding to an integer. If the cursor is not for an - enum, this raises. - """ - if not hasattr(self, '_enum_type'): - assert self.kind == CursorKind.ENUM_DECL - self._enum_type = conf.lib.clang_getEnumDeclIntegerType(self) - - return self._enum_type - - @property - def enum_value(self): - """Return the value of an enum constant.""" - if not hasattr(self, '_enum_value'): - assert self.kind == CursorKind.ENUM_CONSTANT_DECL - # Figure out the underlying type of the enum to know if it - # is a signed or unsigned quantity. 
- underlying_type = self.type - if underlying_type.kind == TypeKind.ENUM: - underlying_type = underlying_type.get_declaration().enum_type - if underlying_type.kind in (TypeKind.CHAR_U, - TypeKind.UCHAR, - TypeKind.CHAR16, - TypeKind.CHAR32, - TypeKind.USHORT, - TypeKind.UINT, - TypeKind.ULONG, - TypeKind.ULONGLONG, - TypeKind.UINT128): - self._enum_value = \ - conf.lib.clang_getEnumConstantDeclUnsignedValue(self) - else: - self._enum_value = conf.lib.clang_getEnumConstantDeclValue(self) - return self._enum_value - - @property - def objc_type_encoding(self): - """Return the Objective-C type encoding as a str.""" - if not hasattr(self, '_objc_type_encoding'): - self._objc_type_encoding = \ - conf.lib.clang_getDeclObjCTypeEncoding(self) - - return self._objc_type_encoding - - @property - def hash(self): - """Returns a hash of the cursor as an int.""" - if not hasattr(self, '_hash'): - self._hash = conf.lib.clang_hashCursor(self) - - return self._hash - - @property - def semantic_parent(self): - """Return the semantic parent for this cursor.""" - if not hasattr(self, '_semantic_parent'): - self._semantic_parent = conf.lib.clang_getCursorSemanticParent(self) - - return self._semantic_parent - - @property - def lexical_parent(self): - """Return the lexical parent for this cursor.""" - if not hasattr(self, '_lexical_parent'): - self._lexical_parent = conf.lib.clang_getCursorLexicalParent(self) - - return self._lexical_parent - - @property - def translation_unit(self): - """Returns the TranslationUnit to which this Cursor belongs.""" - # If this triggers an AttributeError, the instance was not properly - # created. - return self._tu - - @property - def referenced(self): - """ - For a cursor that is a reference, returns a cursor - representing the entity that it references. 
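enum_type and enum_value are easiest to see on a toy enum; a minimal sketch, again assuming the bindings are importable as clang.cindex:

from clang.cindex import Index, CursorKind

tu = Index.create().parse('colors.c', unsaved_files=[
    (b'colors.c', b'enum Color { RED = 1, GREEN = 2, BLUE = 4 };')])

for c in tu.cursor.walk_preorder():
    if c.kind == CursorKind.ENUM_DECL:
        print('underlying integer type:', c.enum_type.spelling)
    elif c.kind == CursorKind.ENUM_CONSTANT_DECL:
        # enum_value picks the signed or unsigned accessor based on the
        # underlying type, as implemented above.
        print(c.spelling, '=', c.enum_value)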
- """ - if not hasattr(self, '_referenced'): - self._referenced = conf.lib.clang_getCursorReferenced(self) - - return self._referenced - - @property - def brief_comment(self): - """Returns the brief comment text associated with that Cursor""" - return conf.lib.clang_Cursor_getBriefCommentText(self) - - @property - def raw_comment(self): - """Returns the raw comment text associated with that Cursor""" - return conf.lib.clang_Cursor_getRawCommentText(self) - - def get_arguments(self): - """Return an iterator for accessing the arguments of this cursor.""" - num_args = conf.lib.clang_Cursor_getNumArguments(self) - for i in range(0, num_args): - yield conf.lib.clang_Cursor_getArgument(self, i) - - def get_num_template_arguments(self): - """Returns the number of template args associated with this cursor.""" - return conf.lib.clang_Cursor_getNumTemplateArguments(self) - - def get_template_argument_kind(self, num): - """Returns the TemplateArgumentKind for the indicated template - argument.""" - return conf.lib.clang_Cursor_getTemplateArgumentKind(self, num) - - def get_template_argument_type(self, num): - """Returns the CXType for the indicated template argument.""" - return conf.lib.clang_Cursor_getTemplateArgumentType(self, num) - - def get_template_argument_value(self, num): - """Returns the value of the indicated arg as a signed 64b integer.""" - return conf.lib.clang_Cursor_getTemplateArgumentValue(self, num) - - def get_template_argument_unsigned_value(self, num): - """Returns the value of the indicated arg as an unsigned 64b integer.""" - return conf.lib.clang_Cursor_getTemplateArgumentUnsignedValue(self, num) - - def get_children(self): - """Return an iterator for accessing the children of this cursor.""" - - # FIXME: Expose iteration from CIndex, PR6125. - def visitor(child, parent, children): - # FIXME: Document this assertion in API. - # FIXME: There should just be an isNull method. - assert child != conf.lib.clang_getNullCursor() - - # Create reference to TU so it isn't GC'd before Cursor. - child._tu = self._tu - children.append(child) - return 1 # continue - children = [] - conf.lib.clang_visitChildren(self, callbacks['cursor_visit'](visitor), - children) - return iter(children) - - def walk_preorder(self): - """Depth-first preorder walk over the cursor and its descendants. - - Yields cursors. - """ - yield self - for child in self.get_children(): - for descendant in child.walk_preorder(): - yield descendant - - def get_tokens(self): - """Obtain Token instances formulating that compose this Cursor. - - This is a generator for Token instances. It returns all tokens which - occupy the extent this cursor occupies. - """ - return TokenGroup.get_tokens(self._tu, self.extent) - - def get_field_offsetof(self): - """Returns the offsetof the FIELD_DECL pointed by this Cursor.""" - return conf.lib.clang_Cursor_getOffsetOfField(self) - - def is_anonymous(self): - """ - Check if the record is anonymous. - """ - if self.kind == CursorKind.FIELD_DECL: - return self.type.get_declaration().is_anonymous() - return conf.lib.clang_Cursor_isAnonymous(self) - - def is_bitfield(self): - """ - Check if the field is a bitfield. - """ - return conf.lib.clang_Cursor_isBitField(self) - - def get_bitfield_width(self): - """ - Retrieve the width of a bitfield. - """ - return conf.lib.clang_getFieldDeclBitWidth(self) - - @staticmethod - def from_result(res, fn, args): - assert isinstance(res, Cursor) - # FIXME: There should just be an isNull method. 
- if res == conf.lib.clang_getNullCursor(): - return None - - # Store a reference to the TU in the Python object so it won't get GC'd - # before the Cursor. - tu = None - for arg in args: - if isinstance(arg, TranslationUnit): - tu = arg - break - - if hasattr(arg, 'translation_unit'): - tu = arg.translation_unit - break - - assert tu is not None - - res._tu = tu - return res - - @staticmethod - def from_cursor_result(res, fn, args): - assert isinstance(res, Cursor) - if res == conf.lib.clang_getNullCursor(): - return None - - res._tu = args[0]._tu - return res - -class StorageClass(object): - """ - Describes the storage class of a declaration - """ - - # The unique kind objects, index by id. - _kinds = [] - _name_map = None - - def __init__(self, value): - if value >= len(StorageClass._kinds): - StorageClass._kinds += [None] * (value - len(StorageClass._kinds) + 1) - if StorageClass._kinds[value] is not None: - raise ValueError('StorageClass already loaded') - self.value = value - StorageClass._kinds[value] = self - StorageClass._name_map = None - - def from_param(self): - return self.value - - @property - def name(self): - """Get the enumeration name of this storage class.""" - if self._name_map is None: - self._name_map = {} - for key,value in list(StorageClass.__dict__.items()): - if isinstance(value,StorageClass): - self._name_map[value] = key - return self._name_map[self] - - @staticmethod - def from_id(id): - if id >= len(StorageClass._kinds) or not StorageClass._kinds[id]: - raise ValueError('Unknown storage class %d' % id) - return StorageClass._kinds[id] - - def __repr__(self): - return 'StorageClass.%s' % (self.name,) - -StorageClass.INVALID = StorageClass(0) -StorageClass.NONE = StorageClass(1) -StorageClass.EXTERN = StorageClass(2) -StorageClass.STATIC = StorageClass(3) -StorageClass.PRIVATEEXTERN = StorageClass(4) -StorageClass.OPENCLWORKGROUPLOCAL = StorageClass(5) -StorageClass.AUTO = StorageClass(6) -StorageClass.REGISTER = StorageClass(7) - - -### C++ access specifiers ### - -class AccessSpecifier(BaseEnumeration): - """ - Describes the access of a C++ class member - """ - - # The unique kind objects, index by id. - _kinds = [] - _name_map = None - - def from_param(self): - return self.value - - def __repr__(self): - return 'AccessSpecifier.%s' % (self.name,) - -AccessSpecifier.INVALID = AccessSpecifier(0) -AccessSpecifier.PUBLIC = AccessSpecifier(1) -AccessSpecifier.PROTECTED = AccessSpecifier(2) -AccessSpecifier.PRIVATE = AccessSpecifier(3) -AccessSpecifier.NONE = AccessSpecifier(4) - -### Type Kinds ### - -class TypeKind(BaseEnumeration): - """ - Describes the kind of type. - """ - - # The unique kind objects, indexed by id. 
- _kinds = [] - _name_map = None - - @property - def spelling(self): - """Retrieve the spelling of this TypeKind.""" - return conf.lib.clang_getTypeKindSpelling(self.value) - - def __repr__(self): - return 'TypeKind.%s' % (self.name,) - -TypeKind.INVALID = TypeKind(0) -TypeKind.UNEXPOSED = TypeKind(1) -TypeKind.VOID = TypeKind(2) -TypeKind.BOOL = TypeKind(3) -TypeKind.CHAR_U = TypeKind(4) -TypeKind.UCHAR = TypeKind(5) -TypeKind.CHAR16 = TypeKind(6) -TypeKind.CHAR32 = TypeKind(7) -TypeKind.USHORT = TypeKind(8) -TypeKind.UINT = TypeKind(9) -TypeKind.ULONG = TypeKind(10) -TypeKind.ULONGLONG = TypeKind(11) -TypeKind.UINT128 = TypeKind(12) -TypeKind.CHAR_S = TypeKind(13) -TypeKind.SCHAR = TypeKind(14) -TypeKind.WCHAR = TypeKind(15) -TypeKind.SHORT = TypeKind(16) -TypeKind.INT = TypeKind(17) -TypeKind.LONG = TypeKind(18) -TypeKind.LONGLONG = TypeKind(19) -TypeKind.INT128 = TypeKind(20) -TypeKind.FLOAT = TypeKind(21) -TypeKind.DOUBLE = TypeKind(22) -TypeKind.LONGDOUBLE = TypeKind(23) -TypeKind.NULLPTR = TypeKind(24) -TypeKind.OVERLOAD = TypeKind(25) -TypeKind.DEPENDENT = TypeKind(26) -TypeKind.OBJCID = TypeKind(27) -TypeKind.OBJCCLASS = TypeKind(28) -TypeKind.OBJCSEL = TypeKind(29) -TypeKind.FLOAT128 = TypeKind(30) -TypeKind.HALF = TypeKind(31) -TypeKind.COMPLEX = TypeKind(100) -TypeKind.POINTER = TypeKind(101) -TypeKind.BLOCKPOINTER = TypeKind(102) -TypeKind.LVALUEREFERENCE = TypeKind(103) -TypeKind.RVALUEREFERENCE = TypeKind(104) -TypeKind.RECORD = TypeKind(105) -TypeKind.ENUM = TypeKind(106) -TypeKind.TYPEDEF = TypeKind(107) -TypeKind.OBJCINTERFACE = TypeKind(108) -TypeKind.OBJCOBJECTPOINTER = TypeKind(109) -TypeKind.FUNCTIONNOPROTO = TypeKind(110) -TypeKind.FUNCTIONPROTO = TypeKind(111) -TypeKind.CONSTANTARRAY = TypeKind(112) -TypeKind.VECTOR = TypeKind(113) -TypeKind.INCOMPLETEARRAY = TypeKind(114) -TypeKind.VARIABLEARRAY = TypeKind(115) -TypeKind.DEPENDENTSIZEDARRAY = TypeKind(116) -TypeKind.MEMBERPOINTER = TypeKind(117) -TypeKind.AUTO = TypeKind(118) -TypeKind.ELABORATED = TypeKind(119) - -class RefQualifierKind(BaseEnumeration): - """Describes a specific ref-qualifier of a type.""" - - # The unique kind objects, indexed by id. - _kinds = [] - _name_map = None - - def from_param(self): - return self.value - - def __repr__(self): - return 'RefQualifierKind.%s' % (self.name,) - -RefQualifierKind.NONE = RefQualifierKind(0) -RefQualifierKind.LVALUE = RefQualifierKind(1) -RefQualifierKind.RVALUE = RefQualifierKind(2) - -class Type(Structure): - """ - The type of an element in the abstract syntax tree. - """ - _fields_ = [("_kind_id", c_int), ("data", c_void_p * 2)] - - @property - def kind(self): - """Return the kind of this type.""" - return TypeKind.from_id(self._kind_id) - - def argument_types(self): - """Retrieve a container for the non-variadic arguments for this type. - - The returned object is iterable and indexable. Each item in the - container is a Type instance. - """ - class ArgumentsIterator(collections.Sequence): - def __init__(self, parent): - self.parent = parent - self.length = None - - def __len__(self): - if self.length is None: - self.length = conf.lib.clang_getNumArgTypes(self.parent) - - return self.length - - def __getitem__(self, key): - # FIXME Support slice objects. 
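With the TypeKind constants above, a function's prototype can be taken apart via Type.argument_types() and get_result(); a minimal sketch with invented source:

from clang.cindex import Index, CursorKind, TypeKind

tu = Index.create().parse('proto.c', unsaved_files=[
    (b'proto.c', b'double scale(double value, int factor);')])

for c in tu.cursor.walk_preorder():
    if c.kind == CursorKind.FUNCTION_DECL:
        ftype = c.type
        if ftype.kind == TypeKind.FUNCTIONPROTO:
            print('result:', ftype.get_result().spelling)
            print('arguments:', [arg.spelling for arg in ftype.argument_types()])
            print('variadic:', ftype.is_function_variadic())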
- if not isinstance(key, int): - raise TypeError("Must supply a non-negative int.") - - if key < 0: - raise IndexError("Only non-negative indexes are accepted.") - - if key >= len(self): - raise IndexError("Index greater than container length: " - "%d > %d" % ( key, len(self) )) - - result = conf.lib.clang_getArgType(self.parent, key) - if result.kind == TypeKind.INVALID: - raise IndexError("Argument could not be retrieved.") - - return result - - assert self.kind == TypeKind.FUNCTIONPROTO - return ArgumentsIterator(self) - - @property - def element_type(self): - """Retrieve the Type of elements within this Type. - - If accessed on a type that is not an array, complex, or vector type, an - exception will be raised. - """ - result = conf.lib.clang_getElementType(self) - if result.kind == TypeKind.INVALID: - raise Exception('Element type not available on this type.') - - return result - - @property - def element_count(self): - """Retrieve the number of elements in this type. - - Returns an int. - - If the Type is not an array or vector, this raises. - """ - result = conf.lib.clang_getNumElements(self) - if result < 0: - raise Exception('Type does not have elements.') - - return result - - @property - def translation_unit(self): - """The TranslationUnit to which this Type is associated.""" - # If this triggers an AttributeError, the instance was not properly - # instantiated. - return self._tu - - @staticmethod - def from_result(res, fn, args): - assert isinstance(res, Type) - - tu = None - for arg in args: - if hasattr(arg, 'translation_unit'): - tu = arg.translation_unit - break - - assert tu is not None - res._tu = tu - - return res - - def get_canonical(self): - """ - Return the canonical type for a Type. - - Clang's type system explicitly models typedefs and all the - ways a specific type can be represented. The canonical type - is the underlying type with all the "sugar" removed. For - example, if 'T' is a typedef for 'int', the canonical type for - 'T' would be 'int'. - """ - return conf.lib.clang_getCanonicalType(self) - - def is_const_qualified(self): - """Determine whether a Type has the "const" qualifier set. - - This does not look through typedefs that may have added "const" - at a different level. - """ - return conf.lib.clang_isConstQualifiedType(self) - - def is_volatile_qualified(self): - """Determine whether a Type has the "volatile" qualifier set. - - This does not look through typedefs that may have added "volatile" - at a different level. - """ - return conf.lib.clang_isVolatileQualifiedType(self) - - def is_restrict_qualified(self): - """Determine whether a Type has the "restrict" qualifier set. - - This does not look through typedefs that may have added "restrict" at - a different level. - """ - return conf.lib.clang_isRestrictQualifiedType(self) - - def is_function_variadic(self): - """Determine whether this function Type is a variadic function type.""" - assert self.kind == TypeKind.FUNCTIONPROTO - - return conf.lib.clang_isFunctionTypeVariadic(self) - - def is_pod(self): - """Determine whether this Type represents plain old data (POD).""" - return conf.lib.clang_isPODType(self) - - def get_pointee(self): - """ - For pointer types, returns the type of the pointee. - """ - return conf.lib.clang_getPointeeType(self) - - def get_declaration(self): - """ - Return the cursor for the declaration of the given type. - """ - return conf.lib.clang_getTypeDeclaration(self) - - def get_result(self): - """ - Retrieve the result type associated with a function type. 
- """ - return conf.lib.clang_getResultType(self) - - def get_array_element_type(self): - """ - Retrieve the type of the elements of the array type. - """ - return conf.lib.clang_getArrayElementType(self) - - def get_array_size(self): - """ - Retrieve the size of the constant array. - """ - return conf.lib.clang_getArraySize(self) - - def get_class_type(self): - """ - Retrieve the class type of the member pointer type. - """ - return conf.lib.clang_Type_getClassType(self) - - def get_named_type(self): - """ - Retrieve the type named by the qualified-id. - """ - return conf.lib.clang_Type_getNamedType(self) - def get_align(self): - """ - Retrieve the alignment of the record. - """ - return conf.lib.clang_Type_getAlignOf(self) - - def get_size(self): - """ - Retrieve the size of the record. - """ - return conf.lib.clang_Type_getSizeOf(self) - - def get_offset(self, fieldname): - """ - Retrieve the offset of a field in the record. - """ - return conf.lib.clang_Type_getOffsetOf(self, c_char_p(fieldname)) - - def get_ref_qualifier(self): - """ - Retrieve the ref-qualifier of the type. - """ - return RefQualifierKind.from_id( - conf.lib.clang_Type_getCXXRefQualifier(self)) - - def get_fields(self): - """Return an iterator for accessing the fields of this type.""" - - def visitor(field, children): - assert field != conf.lib.clang_getNullCursor() - - # Create reference to TU so it isn't GC'd before Cursor. - field._tu = self._tu - fields.append(field) - return 1 # continue - fields = [] - conf.lib.clang_Type_visitFields(self, - callbacks['fields_visit'](visitor), fields) - return iter(fields) - - @property - def spelling(self): - """Retrieve the spelling of this Type.""" - return conf.lib.clang_getTypeSpelling(self) - - def __eq__(self, other): - if type(other) != type(self): - return False - - return conf.lib.clang_equalTypes(self, other) - - def __ne__(self, other): - return not self.__eq__(other) - -## CIndex Objects ## - -# CIndex objects (derived from ClangObject) are essentially lightweight -# wrappers attached to some underlying object, which is exposed via CIndex as -# a void*. - -class ClangObject(object): - """ - A helper for Clang objects. This class helps act as an intermediary for - the ctypes library and the Clang CIndex library. - """ - def __init__(self, obj): - assert isinstance(obj, c_object_p) and obj - self.obj = self._as_parameter_ = obj - - def from_param(self): - return self._as_parameter_ - - -class _CXUnsavedFile(Structure): - """Helper for passing unsaved file arguments.""" - _fields_ = [("name", c_char_p), ("contents", c_char_p), ('length', c_ulong)] - -# Functions calls through the python interface are rather slow. Fortunately, -# for most symboles, we do not need to perform a function call. Their spelling -# never changes and is consequently provided by this spelling cache. 
-SpellingCache = { - # 0: CompletionChunk.Kind("Optional"), - # 1: CompletionChunk.Kind("TypedText"), - # 2: CompletionChunk.Kind("Text"), - # 3: CompletionChunk.Kind("Placeholder"), - # 4: CompletionChunk.Kind("Informative"), - # 5 : CompletionChunk.Kind("CurrentParameter"), - 6: '(', # CompletionChunk.Kind("LeftParen"), - 7: ')', # CompletionChunk.Kind("RightParen"), - 8: '[', # CompletionChunk.Kind("LeftBracket"), - 9: ']', # CompletionChunk.Kind("RightBracket"), - 10: '{', # CompletionChunk.Kind("LeftBrace"), - 11: '}', # CompletionChunk.Kind("RightBrace"), - 12: '<', # CompletionChunk.Kind("LeftAngle"), - 13: '>', # CompletionChunk.Kind("RightAngle"), - 14: ', ', # CompletionChunk.Kind("Comma"), - # 15: CompletionChunk.Kind("ResultType"), - 16: ':', # CompletionChunk.Kind("Colon"), - 17: ';', # CompletionChunk.Kind("SemiColon"), - 18: '=', # CompletionChunk.Kind("Equal"), - 19: ' ', # CompletionChunk.Kind("HorizontalSpace"), - # 20: CompletionChunk.Kind("VerticalSpace") -} - -class CompletionChunk: - class Kind: - def __init__(self, name): - self.name = name - - def __str__(self): - return self.name - - def __repr__(self): - return "" % self - - def __init__(self, completionString, key): - self.cs = completionString - self.key = key - self.__kindNumberCache = -1 - - def __repr__(self): - return "{'" + self.spelling + "', " + str(self.kind) + "}" - - @CachedProperty - def spelling(self): - if self.__kindNumber in SpellingCache: - return SpellingCache[self.__kindNumber] - return conf.lib.clang_getCompletionChunkText(self.cs, self.key).spelling - - # We do not use @CachedProperty here, as the manual implementation is - # apparently still significantly faster. Please profile carefully if you - # would like to add CachedProperty back. - @property - def __kindNumber(self): - if self.__kindNumberCache == -1: - self.__kindNumberCache = \ - conf.lib.clang_getCompletionChunkKind(self.cs, self.key) - return self.__kindNumberCache - - @CachedProperty - def kind(self): - return completionChunkKindMap[self.__kindNumber] - - @CachedProperty - def string(self): - res = conf.lib.clang_getCompletionChunkCompletionString(self.cs, - self.key) - - if (res): - return CompletionString(res) - else: - None - - def isKindOptional(self): - return self.__kindNumber == 0 - - def isKindTypedText(self): - return self.__kindNumber == 1 - - def isKindPlaceHolder(self): - return self.__kindNumber == 3 - - def isKindInformative(self): - return self.__kindNumber == 4 - - def isKindResultType(self): - return self.__kindNumber == 15 - -completionChunkKindMap = { - 0: CompletionChunk.Kind("Optional"), - 1: CompletionChunk.Kind("TypedText"), - 2: CompletionChunk.Kind("Text"), - 3: CompletionChunk.Kind("Placeholder"), - 4: CompletionChunk.Kind("Informative"), - 5: CompletionChunk.Kind("CurrentParameter"), - 6: CompletionChunk.Kind("LeftParen"), - 7: CompletionChunk.Kind("RightParen"), - 8: CompletionChunk.Kind("LeftBracket"), - 9: CompletionChunk.Kind("RightBracket"), - 10: CompletionChunk.Kind("LeftBrace"), - 11: CompletionChunk.Kind("RightBrace"), - 12: CompletionChunk.Kind("LeftAngle"), - 13: CompletionChunk.Kind("RightAngle"), - 14: CompletionChunk.Kind("Comma"), - 15: CompletionChunk.Kind("ResultType"), - 16: CompletionChunk.Kind("Colon"), - 17: CompletionChunk.Kind("SemiColon"), - 18: CompletionChunk.Kind("Equal"), - 19: CompletionChunk.Kind("HorizontalSpace"), - 20: CompletionChunk.Kind("VerticalSpace")} - -class CompletionString(ClangObject): - class Availability: - def __init__(self, name): - self.name = name - - 
def __str__(self): - return self.name - - def __repr__(self): - return "" % self - - def __len__(self): - return self.num_chunks - - @CachedProperty - def num_chunks(self): - return conf.lib.clang_getNumCompletionChunks(self.obj) - - def __getitem__(self, key): - if self.num_chunks <= key: - raise IndexError - return CompletionChunk(self.obj, key) - - @property - def priority(self): - return conf.lib.clang_getCompletionPriority(self.obj) - - @property - def availability(self): - res = conf.lib.clang_getCompletionAvailability(self.obj) - return availabilityKinds[res] - - @property - def briefComment(self): - if conf.function_exists("clang_getCompletionBriefComment"): - return conf.lib.clang_getCompletionBriefComment(self.obj) - return _CXString() - - def __repr__(self): - return " | ".join([str(a) for a in self]) \ - + " || Priority: " + str(self.priority) \ - + " || Availability: " + str(self.availability) \ - + " || Brief comment: " + str(self.briefComment.spelling) - -availabilityKinds = { - 0: CompletionChunk.Kind("Available"), - 1: CompletionChunk.Kind("Deprecated"), - 2: CompletionChunk.Kind("NotAvailable"), - 3: CompletionChunk.Kind("NotAccessible")} - -class CodeCompletionResult(Structure): - _fields_ = [('cursorKind', c_int), ('completionString', c_object_p)] - - def __repr__(self): - return str(CompletionString(self.completionString)) - - @property - def kind(self): - return CursorKind.from_id(self.cursorKind) - - @property - def string(self): - return CompletionString(self.completionString) - -class CCRStructure(Structure): - _fields_ = [('results', POINTER(CodeCompletionResult)), - ('numResults', c_int)] - - def __len__(self): - return self.numResults - - def __getitem__(self, key): - if len(self) <= key: - raise IndexError - - return self.results[key] - -class CodeCompletionResults(ClangObject): - def __init__(self, ptr): - assert isinstance(ptr, POINTER(CCRStructure)) and ptr - self.ptr = self._as_parameter_ = ptr - - def from_param(self): - return self._as_parameter_ - - def __del__(self): - conf.lib.clang_disposeCodeCompleteResults(self) - - @property - def results(self): - return self.ptr.contents - - @property - def diagnostics(self): - class DiagnosticsItr: - def __init__(self, ccr): - self.ccr= ccr - - def __len__(self): - return int(\ - conf.lib.clang_codeCompleteGetNumDiagnostics(self.ccr)) - - def __getitem__(self, key): - return conf.lib.clang_codeCompleteGetDiagnostic(self.ccr, key) - - return DiagnosticsItr(self) - - -class Index(ClangObject): - """ - The Index type provides the primary interface to the Clang CIndex library, - primarily by providing an interface for reading and parsing translation - units. - """ - - @staticmethod - def create(excludeDecls=False): - """ - Create a new Index. - Parameters: - excludeDecls -- Exclude local declarations from translation units. - """ - return Index(conf.lib.clang_createIndex(excludeDecls, 0)) - - def __del__(self): - conf.lib.clang_disposeIndex(self) - - def read(self, path): - """Load a TranslationUnit from the given AST file.""" - return TranslationUnit.from_ast_file(path, self) - - def parse(self, path, args=None, unsaved_files=None, options = 0): - """Load the translation unit from the given source code file by running - clang and generating the AST before loading. Additional command line - parameters can be passed to clang via the args parameter. 
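Index.create() plus parse() is the usual entry point; a minimal sketch, assuming libclang can be loaded by this module's configuration and that the bindings are importable as clang.cindex:

from clang.cindex import Index

index = Index.create(excludeDecls=True)
tu = index.parse('hello.c', args=['-std=c99'],
                 unsaved_files=[(b'hello.c', b'int main(void) { return 0; }')])
print(tu.spelling)       # the original source file name
print(tu.cursor.kind)    # the root cursor: CursorKind.TRANSLATION_UNIT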
- - In-memory contents for files can be provided by passing a list of pairs - to as unsaved_files, the first item should be the filenames to be mapped - and the second should be the contents to be substituted for the - file. The contents may be passed as strings or file objects. - - If an error was encountered during parsing, a TranslationUnitLoadError - will be raised. - """ - return TranslationUnit.from_source(path, args, unsaved_files, options, - self) - -class TranslationUnit(ClangObject): - """Represents a source code translation unit. - - This is one of the main types in the API. Any time you wish to interact - with Clang's representation of a source file, you typically start with a - translation unit. - """ - - # Default parsing mode. - PARSE_NONE = 0 - - # Instruct the parser to create a detailed processing record containing - # metadata not normally retained. - PARSE_DETAILED_PROCESSING_RECORD = 1 - - # Indicates that the translation unit is incomplete. This is typically used - # when parsing headers. - PARSE_INCOMPLETE = 2 - - # Instruct the parser to create a pre-compiled preamble for the translation - # unit. This caches the preamble (included files at top of source file). - # This is useful if the translation unit will be reparsed and you don't - # want to incur the overhead of reparsing the preamble. - PARSE_PRECOMPILED_PREAMBLE = 4 - - # Cache code completion information on parse. This adds time to parsing but - # speeds up code completion. - PARSE_CACHE_COMPLETION_RESULTS = 8 - - # Flags with values 16 and 32 are deprecated and intentionally omitted. - - # Do not parse function bodies. This is useful if you only care about - # searching for declarations/definitions. - PARSE_SKIP_FUNCTION_BODIES = 64 - - # Used to indicate that brief documentation comments should be included - # into the set of code completions returned from this translation unit. - PARSE_INCLUDE_BRIEF_COMMENTS_IN_CODE_COMPLETION = 128 - - @classmethod - def from_source(cls, filename, args=None, unsaved_files=None, options=0, - index=None): - """Create a TranslationUnit by parsing source. - - This is capable of processing source code both from files on the - filesystem as well as in-memory contents. - - Command-line arguments that would be passed to clang are specified as - a list via args. These can be used to specify include paths, warnings, - etc. e.g. ["-Wall", "-I/path/to/include"]. - - In-memory file content can be provided via unsaved_files. This is an - iterable of 2-tuples. The first element is the str filename. The - second element defines the content. Content can be provided as str - source code or as file objects (anything with a read() method). If - a file object is being used, content will be read until EOF and the - read cursor will not be reset to its original position. - - options is a bitwise or of TranslationUnit.PARSE_XXX flags which will - control parsing behavior. - - index is an Index instance to utilize. If not provided, a new Index - will be created for this TranslationUnit. - - To parse source from the filesystem, the filename of the file to parse - is specified by the filename argument. Or, filename could be None and - the args list would contain the filename(s) to parse. - - To parse source from an in-memory buffer, set filename to the virtual - filename you wish to associate with this source (e.g. "test.c"). The - contents of that file are then provided in unsaved_files. - - If an error occurs, a TranslationUnitLoadError is raised. 
- - Please note that a TranslationUnit with parser errors may be returned. - It is the caller's responsibility to check tu.diagnostics for errors. - - Also note that Clang infers the source language from the extension of - the input filename. If you pass in source code containing a C++ class - declaration with the filename "test.c" parsing will fail. - """ - if args is None: - args = [] - - if unsaved_files is None: - unsaved_files = [] - - if index is None: - index = Index.create() - - if isinstance(filename, str): - filename = filename.encode('utf8') - - args_length = len(args) - if args_length > 0: - args = (arg.encode('utf8') if isinstance(arg, str) else arg - for arg in args) - args_array = (c_char_p * args_length)(* args) - - unsaved_array = None - if len(unsaved_files) > 0: - unsaved_array = (_CXUnsavedFile * len(unsaved_files))() - for i, (name, contents) in enumerate(unsaved_files): - if hasattr(contents, "read"): - contents = contents.read() - - unsaved_array[i].name = name - unsaved_array[i].contents = contents - unsaved_array[i].length = len(contents) - - ptr = conf.lib.clang_parseTranslationUnit(index, filename, args_array, - args_length, unsaved_array, - len(unsaved_files), options) - - if not ptr: - raise TranslationUnitLoadError("Error parsing translation unit.") - - return cls(ptr, index=index) - - @classmethod - def from_ast_file(cls, filename, index=None): - """Create a TranslationUnit instance from a saved AST file. - - A previously-saved AST file (provided with -emit-ast or - TranslationUnit.save()) is loaded from the filename specified. - - If the file cannot be loaded, a TranslationUnitLoadError will be - raised. - - index is optional and is the Index instance to use. If not provided, - a default Index will be created. - """ - if index is None: - index = Index.create() - - ptr = conf.lib.clang_createTranslationUnit(index, filename) - if not ptr: - raise TranslationUnitLoadError(filename) - - return cls(ptr=ptr, index=index) - - def __init__(self, ptr, index): - """Create a TranslationUnit instance. - - TranslationUnits should be created using one of the from_* @classmethod - functions above. __init__ is only called internally. - """ - assert isinstance(index, Index) - self.index = index - ClangObject.__init__(self, ptr) - - def __del__(self): - conf.lib.clang_disposeTranslationUnit(self) - - @property - def cursor(self): - """Retrieve the cursor that represents the given translation unit.""" - return conf.lib.clang_getTranslationUnitCursor(self) - - @property - def spelling(self): - """Get the original translation unit source file name.""" - return conf.lib.clang_getTranslationUnitSpelling(self) - - def get_includes(self): - """ - Return an iterable sequence of FileInclusion objects that describe the - sequence of inclusions in a translation unit. The first object in - this sequence is always the input file. Note that this method will not - recursively iterate over header files included through precompiled - headers. 
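from_source(), save() and from_ast_file() round-trip a translation unit through an AST file; a sketch under the assumption that byte-string paths are acceptable to the underlying C entry points (the file names are invented for illustration):

from clang.cindex import TranslationUnit

src = b'#define GREETING "hi"\nint answer(void) { return 42; }\n'
tu = TranslationUnit.from_source(
    'unit.c',
    args=['-std=c11'],
    unsaved_files=[(b'unit.c', src)],
    options=TranslationUnit.PARSE_DETAILED_PROCESSING_RECORD)

# Byte-string paths are used because the underlying libclang calls take
# C strings.
tu.save(b'unit.ast')                                    # like clang -emit-ast
reloaded = TranslationUnit.from_ast_file(b'unit.ast')   # load it back
print(reloaded.spelling)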
- """ - def visitor(fobj, lptr, depth, includes): - if depth > 0: - loc = lptr.contents - includes.append(FileInclusion(loc.file, File(fobj), loc, depth)) - - # Automatically adapt CIndex/ctype pointers to python objects - includes = [] - conf.lib.clang_getInclusions(self, - callbacks['translation_unit_includes'](visitor), includes) - - return iter(includes) - - def get_file(self, filename): - """Obtain a File from this translation unit.""" - - return File.from_name(self, filename) - - def get_location(self, filename, position): - """Obtain a SourceLocation for a file in this translation unit. - - The position can be specified by passing: - - - Integer file offset. Initial file offset is 0. - - 2-tuple of (line number, column number). Initial file position is - (0, 0) - """ - f = self.get_file(filename) - - if isinstance(position, int): - return SourceLocation.from_offset(self, f, position) - - return SourceLocation.from_position(self, f, position[0], position[1]) - - def get_extent(self, filename, locations): - """Obtain a SourceRange from this translation unit. - - The bounds of the SourceRange must ultimately be defined by a start and - end SourceLocation. For the locations argument, you can pass: - - - 2 SourceLocation instances in a 2-tuple or list. - - 2 int file offsets via a 2-tuple or list. - - 2 2-tuple or lists of (line, column) pairs in a 2-tuple or list. - - e.g. - - get_extent('foo.c', (5, 10)) - get_extent('foo.c', ((1, 1), (1, 15))) - """ - f = self.get_file(filename) - - if len(locations) < 2: - raise Exception('Must pass object with at least 2 elements') - - start_location, end_location = locations - - if hasattr(start_location, '__len__'): - start_location = SourceLocation.from_position(self, f, - start_location[0], start_location[1]) - elif isinstance(start_location, int): - start_location = SourceLocation.from_offset(self, f, - start_location) - - if hasattr(end_location, '__len__'): - end_location = SourceLocation.from_position(self, f, - end_location[0], end_location[1]) - elif isinstance(end_location, int): - end_location = SourceLocation.from_offset(self, f, end_location) - - assert isinstance(start_location, SourceLocation) - assert isinstance(end_location, SourceLocation) - - return SourceRange.from_locations(start_location, end_location) - - @property - def diagnostics(self): - """ - Return an iterable (and indexable) object containing the diagnostics. - """ - class DiagIterator: - def __init__(self, tu): - self.tu = tu - - def __len__(self): - return int(conf.lib.clang_getNumDiagnostics(self.tu)) - - def __getitem__(self, key): - diag = conf.lib.clang_getDiagnostic(self.tu, key) - if not diag: - raise IndexError - return Diagnostic(diag) - - return DiagIterator(self) - - def reparse(self, unsaved_files=None, options=0): - """ - Reparse an already parsed translation unit. - - In-memory contents for files can be provided by passing a list of pairs - as unsaved_files, the first items should be the filenames to be mapped - and the second should be the contents to be substituted for the - file. The contents may be passed as strings or file objects. - """ - if unsaved_files is None: - unsaved_files = [] - - unsaved_files_array = 0 - if len(unsaved_files): - unsaved_files_array = (_CXUnsavedFile * len(unsaved_files))() - for i,(name,value) in enumerate(unsaved_files): - if not isinstance(value, str): - # FIXME: It would be great to support an efficient version - # of this, one day. 
- value = value.read() - print(value) - if not isinstance(value, str): - raise TypeError('Unexpected unsaved file contents.') - unsaved_files_array[i].name = name - unsaved_files_array[i].contents = value - unsaved_files_array[i].length = len(value) - ptr = conf.lib.clang_reparseTranslationUnit(self, len(unsaved_files), - unsaved_files_array, options) - - def save(self, filename): - """Saves the TranslationUnit to a file. - - This is equivalent to passing -emit-ast to the clang frontend. The - saved file can be loaded back into a TranslationUnit. Or, if it - corresponds to a header, it can be used as a pre-compiled header file. - - If an error occurs while saving, a TranslationUnitSaveError is raised. - If the error was TranslationUnitSaveError.ERROR_INVALID_TU, this means - the constructed TranslationUnit was not valid at time of save. In this - case, the reason(s) why should be available via - TranslationUnit.diagnostics(). - - filename -- The path to save the translation unit to. - """ - options = conf.lib.clang_defaultSaveOptions(self) - result = int(conf.lib.clang_saveTranslationUnit(self, filename, - options)) - if result != 0: - raise TranslationUnitSaveError(result, - 'Error saving TranslationUnit.') - - def codeComplete(self, path, line, column, unsaved_files=None, - include_macros=False, include_code_patterns=False, - include_brief_comments=False): - """ - Code complete in this translation unit. - - In-memory contents for files can be provided by passing a list of pairs - as unsaved_files, the first items should be the filenames to be mapped - and the second should be the contents to be substituted for the - file. The contents may be passed as strings or file objects. - """ - options = 0 - - if include_macros: - options += 1 - - if include_code_patterns: - options += 2 - - if include_brief_comments: - options += 4 - - if unsaved_files is None: - unsaved_files = [] - - unsaved_files_array = 0 - if len(unsaved_files): - unsaved_files_array = (_CXUnsavedFile * len(unsaved_files))() - for i,(name,value) in enumerate(unsaved_files): - if not isinstance(value, str): - # FIXME: It would be great to support an efficient version - # of this, one day. - value = value.read() - print(value) - if not isinstance(value, str): - raise TypeError('Unexpected unsaved file contents.') - unsaved_files_array[i].name = name - unsaved_files_array[i].contents = value - unsaved_files_array[i].length = len(value) - ptr = conf.lib.clang_codeCompleteAt(self, path, line, column, - unsaved_files_array, len(unsaved_files), options) - if ptr: - return CodeCompletionResults(ptr) - return None - - def get_tokens(self, locations=None, extent=None): - """Obtain tokens in this translation unit. - - This is a generator for Token instances. The caller specifies a range - of source code to obtain tokens for. The range can be specified as a - 2-tuple of SourceLocation or as a SourceRange. If both are defined, - behavior is undefined. - """ - if locations is not None: - extent = SourceRange(start=locations[0], end=locations[1]) - - return TokenGroup.get_tokens(self, extent) - -class File(ClangObject): - """ - The File class represents a particular source file that is part of a - translation unit. 
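codeComplete() drives libclang's completion engine; a sketch that assumes a hypothetical 'api.c' already exists on disk with the shown contents (the byte-string path is passed because clang_codeCompleteAt takes a C string):

from clang.cindex import TranslationUnit

# Hypothetical on-disk file api.c:
#     struct P { int x; int y; };
#     void f(struct P p) { p. }
tu = TranslationUnit.from_source('api.c')

# Ask for completions just after "p." on line 2 (line/column are 1-based).
results = tu.codeComplete(b'api.c', 2, 24, include_macros=True)
if results is not None:
    for result in results.results:
        print(result.kind, result.string)
    print('completion diagnostics:', len(results.diagnostics))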
- """ - - @staticmethod - def from_name(translation_unit, file_name): - """Retrieve a file handle within the given translation unit.""" - return File(conf.lib.clang_getFile(translation_unit, file_name)) - - @property - def name(self): - """Return the complete file and path name of the file.""" - return conf.lib.clang_getCString(conf.lib.clang_getFileName(self)) - - @property - def time(self): - """Return the last modification time of the file.""" - return conf.lib.clang_getFileTime(self) - - def __bytes__(self): - return self.name - - def __repr__(self): - return "" % (self.name) - - @staticmethod - def from_cursor_result(res, fn, args): - assert isinstance(res, File) - - # Copy a reference to the TranslationUnit to prevent premature GC. - res._tu = args[0]._tu - return res - -class FileInclusion(object): - """ - The FileInclusion class represents the inclusion of one source file by - another via a '#include' directive or as the input file for the translation - unit. This class provides information about the included file, the including - file, the location of the '#include' directive and the depth of the included - file in the stack. Note that the input file has depth 0. - """ - - def __init__(self, src, tgt, loc, depth): - self.source = src - self.include = tgt - self.location = loc - self.depth = depth - - @property - def is_input_file(self): - """True if the included file is the input file.""" - return self.depth == 0 - -class CompilationDatabaseError(Exception): - """Represents an error that occurred when working with a CompilationDatabase - - Each error is associated to an enumerated value, accessible under - e.cdb_error. Consumers can compare the value with one of the ERROR_ - constants in this class. - """ - - # An unknown error occurred - ERROR_UNKNOWN = 0 - - # The database could not be loaded - ERROR_CANNOTLOADDATABASE = 1 - - def __init__(self, enumeration, message): - assert isinstance(enumeration, int) - - if enumeration > 1: - raise Exception("Encountered undefined CompilationDatabase error " - "constant: %d. Please file a bug to have this " - "value supported." % enumeration) - - self.cdb_error = enumeration - Exception.__init__(self, 'Error %d: %s' % (enumeration, message)) - -class CompileCommand(object): - """Represents the compile command used to build a file""" - def __init__(self, cmd, ccmds): - self.cmd = cmd - # Keep a reference to the originating CompileCommands - # to prevent garbage collection - self.ccmds = ccmds - - @property - def directory(self): - """Get the working directory for this CompileCommand""" - return conf.lib.clang_CompileCommand_getDirectory(self.cmd) - - @property - def filename(self): - """Get the working filename for this CompileCommand""" - return conf.lib.clang_CompileCommand_getFilename(self.cmd) - - @property - def arguments(self): - """ - Get an iterable object providing each argument in the - command line for the compiler invocation as a _CXString. - - Invariant : the first argument is the compiler executable - """ - length = conf.lib.clang_CompileCommand_getNumArgs(self.cmd) - for i in range(length): - yield conf.lib.clang_CompileCommand_getArg(self.cmd, i) - -class CompileCommands(object): - """ - CompileCommands is an iterable object containing all CompileCommand - that can be used for building a specific file. 
- """ - def __init__(self, ccmds): - self.ccmds = ccmds - - def __del__(self): - conf.lib.clang_CompileCommands_dispose(self.ccmds) - - def __len__(self): - return int(conf.lib.clang_CompileCommands_getSize(self.ccmds)) - - def __getitem__(self, i): - cc = conf.lib.clang_CompileCommands_getCommand(self.ccmds, i) - if not cc: - raise IndexError - return CompileCommand(cc, self) - - @staticmethod - def from_result(res, fn, args): - if not res: - return None - return CompileCommands(res) - -class CompilationDatabase(ClangObject): - """ - The CompilationDatabase is a wrapper class around - clang::tooling::CompilationDatabase - - It enables querying how a specific source file can be built. - """ - - def __del__(self): - conf.lib.clang_CompilationDatabase_dispose(self) - - @staticmethod - def from_result(res, fn, args): - if not res: - raise CompilationDatabaseError(0, - "CompilationDatabase loading failed") - return CompilationDatabase(res) - - @staticmethod - def fromDirectory(buildDir): - """Builds a CompilationDatabase from the database found in buildDir""" - errorCode = c_uint() - try: - cdb = conf.lib.clang_CompilationDatabase_fromDirectory(buildDir, - byref(errorCode)) - except CompilationDatabaseError as e: - raise CompilationDatabaseError(int(errorCode.value), - "CompilationDatabase loading failed") - return cdb - - def getCompileCommands(self, filename): - """ - Get an iterable object providing all the CompileCommands available to - build filename. Returns None if filename is not found in the database. - """ - return conf.lib.clang_CompilationDatabase_getCompileCommands(self, - filename) - - def getAllCompileCommands(self): - """ - Get an iterable object providing all the CompileCommands available from - the database. - """ - return conf.lib.clang_CompilationDatabase_getAllCompileCommands(self) - - -class Token(Structure): - """Represents a single token from the preprocessor. - - Tokens are effectively segments of source code. Source code is first parsed - into tokens before being converted into the AST and Cursors. - - Tokens are obtained from parsed TranslationUnit instances. You currently - can't create tokens manually. - """ - _fields_ = [ - ('int_data', c_uint * 4), - ('ptr_data', c_void_p) - ] - - @property - def spelling(self): - """The spelling of this token. - - This is the textual representation of the token in source. - """ - return conf.lib.clang_getTokenSpelling(self._tu, self) - - @property - def kind(self): - """Obtain the TokenKind of the current token.""" - return TokenKind.from_value(conf.lib.clang_getTokenKind(self)) - - @property - def location(self): - """The SourceLocation this Token occurs at.""" - return conf.lib.clang_getTokenLocation(self._tu, self) - - @property - def extent(self): - """The SourceRange this Token occupies.""" - return conf.lib.clang_getTokenExtent(self._tu, self) - - @property - def cursor(self): - """The Cursor this Token corresponds to.""" - cursor = Cursor() - - conf.lib.clang_annotateTokens(self._tu, byref(self), 1, byref(cursor)) - - return cursor - -# Now comes the plumbing to hook up the C library. - -# Register callback types in common container. -callbacks['translation_unit_includes'] = CFUNCTYPE(None, c_object_p, - POINTER(SourceLocation), c_uint, py_object) -callbacks['cursor_visit'] = CFUNCTYPE(c_int, Cursor, Cursor, py_object) -callbacks['fields_visit'] = CFUNCTYPE(c_int, Cursor, py_object) - -# Functions strictly alphabetical order. 
-functionList = [ - ("clang_annotateTokens", - [TranslationUnit, POINTER(Token), c_uint, POINTER(Cursor)]), - - ("clang_CompilationDatabase_dispose", - [c_object_p]), - - ("clang_CompilationDatabase_fromDirectory", - [c_char_p, POINTER(c_uint)], - c_object_p, - CompilationDatabase.from_result), - - ("clang_CompilationDatabase_getAllCompileCommands", - [c_object_p], - c_object_p, - CompileCommands.from_result), - - ("clang_CompilationDatabase_getCompileCommands", - [c_object_p, c_char_p], - c_object_p, - CompileCommands.from_result), - - ("clang_CompileCommands_dispose", - [c_object_p]), - - ("clang_CompileCommands_getCommand", - [c_object_p, c_uint], - c_object_p), - - ("clang_CompileCommands_getSize", - [c_object_p], - c_uint), - - ("clang_CompileCommand_getArg", - [c_object_p, c_uint], - _CXString, - _CXString.from_result), - - ("clang_CompileCommand_getDirectory", - [c_object_p], - _CXString, - _CXString.from_result), - - ("clang_CompileCommand_getFilename", - [c_object_p], - _CXString, - _CXString.from_result), - - ("clang_CompileCommand_getNumArgs", - [c_object_p], - c_uint), - - ("clang_codeCompleteAt", - [TranslationUnit, c_char_p, c_int, c_int, c_void_p, c_int, c_int], - POINTER(CCRStructure)), - - ("clang_codeCompleteGetDiagnostic", - [CodeCompletionResults, c_int], - Diagnostic), - - ("clang_codeCompleteGetNumDiagnostics", - [CodeCompletionResults], - c_int), - - ("clang_createIndex", - [c_int, c_int], - c_object_p), - - ("clang_createTranslationUnit", - [Index, c_char_p], - c_object_p), - - ("clang_CXXConstructor_isConvertingConstructor", - [Cursor], - bool), - - ("clang_CXXConstructor_isCopyConstructor", - [Cursor], - bool), - - ("clang_CXXConstructor_isDefaultConstructor", - [Cursor], - bool), - - ("clang_CXXConstructor_isMoveConstructor", - [Cursor], - bool), - - ("clang_CXXField_isMutable", - [Cursor], - bool), - - ("clang_CXXMethod_isConst", - [Cursor], - bool), - - ("clang_CXXMethod_isDefaulted", - [Cursor], - bool), - - ("clang_CXXMethod_isPureVirtual", - [Cursor], - bool), - - ("clang_CXXMethod_isStatic", - [Cursor], - bool), - - ("clang_CXXMethod_isVirtual", - [Cursor], - bool), - - ("clang_defaultDiagnosticDisplayOptions", - [], - c_uint), - - ("clang_defaultSaveOptions", - [TranslationUnit], - c_uint), - - ("clang_disposeCodeCompleteResults", - [CodeCompletionResults]), - -# ("clang_disposeCXTUResourceUsage", -# [CXTUResourceUsage]), - - ("clang_disposeDiagnostic", - [Diagnostic]), - - ("clang_disposeIndex", - [Index]), - - ("clang_disposeString", - [_CXString]), - - ("clang_disposeTokens", - [TranslationUnit, POINTER(Token), c_uint]), - - ("clang_disposeTranslationUnit", - [TranslationUnit]), - - ("clang_equalCursors", - [Cursor, Cursor], - bool), - - ("clang_equalLocations", - [SourceLocation, SourceLocation], - bool), - - ("clang_equalRanges", - [SourceRange, SourceRange], - bool), - - ("clang_equalTypes", - [Type, Type], - bool), - - ("clang_formatDiagnostic", - [Diagnostic, c_uint], - _CXString), - - ("clang_getArgType", - [Type, c_uint], - Type, - Type.from_result), - - ("clang_getArrayElementType", - [Type], - Type, - Type.from_result), - - ("clang_getArraySize", - [Type], - c_longlong), - - ("clang_getFieldDeclBitWidth", - [Cursor], - c_int), - - ("clang_getCanonicalCursor", - [Cursor], - Cursor, - Cursor.from_cursor_result), - - ("clang_getCanonicalType", - [Type], - Type, - Type.from_result), - - ("clang_getChildDiagnostics", - [Diagnostic], - c_object_p), - - ("clang_getCompletionAvailability", - [c_void_p], - c_int), - - ("clang_getCompletionBriefComment", 
- [c_void_p], - _CXString), - - ("clang_getCompletionChunkCompletionString", - [c_void_p, c_int], - c_object_p), - - ("clang_getCompletionChunkKind", - [c_void_p, c_int], - c_int), - - ("clang_getCompletionChunkText", - [c_void_p, c_int], - _CXString), - - ("clang_getCompletionPriority", - [c_void_p], - c_int), - - ("clang_getCString", - [_CXString], - c_char_p), - - ("clang_getCursor", - [TranslationUnit, SourceLocation], - Cursor), - - ("clang_getCursorDefinition", - [Cursor], - Cursor, - Cursor.from_result), - - ("clang_getCursorDisplayName", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_getCursorExtent", - [Cursor], - SourceRange), - - ("clang_getCursorLexicalParent", - [Cursor], - Cursor, - Cursor.from_cursor_result), - - ("clang_getCursorLocation", - [Cursor], - SourceLocation), - - ("clang_getCursorReferenced", - [Cursor], - Cursor, - Cursor.from_result), - - ("clang_getCursorReferenceNameRange", - [Cursor, c_uint, c_uint], - SourceRange), - - ("clang_getCursorSemanticParent", - [Cursor], - Cursor, - Cursor.from_cursor_result), - - ("clang_getCursorSpelling", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_getCursorType", - [Cursor], - Type, - Type.from_result), - - ("clang_getCursorUSR", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_Cursor_getMangling", - [Cursor], - _CXString, - _CXString.from_result), - -# ("clang_getCXTUResourceUsage", -# [TranslationUnit], -# CXTUResourceUsage), - - ("clang_getCXXAccessSpecifier", - [Cursor], - c_uint), - - ("clang_getDeclObjCTypeEncoding", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_getDiagnostic", - [c_object_p, c_uint], - c_object_p), - - ("clang_getDiagnosticCategory", - [Diagnostic], - c_uint), - - ("clang_getDiagnosticCategoryText", - [Diagnostic], - _CXString, - _CXString.from_result), - - ("clang_getDiagnosticFixIt", - [Diagnostic, c_uint, POINTER(SourceRange)], - _CXString, - _CXString.from_result), - - ("clang_getDiagnosticInSet", - [c_object_p, c_uint], - c_object_p), - - ("clang_getDiagnosticLocation", - [Diagnostic], - SourceLocation), - - ("clang_getDiagnosticNumFixIts", - [Diagnostic], - c_uint), - - ("clang_getDiagnosticNumRanges", - [Diagnostic], - c_uint), - - ("clang_getDiagnosticOption", - [Diagnostic, POINTER(_CXString)], - _CXString, - _CXString.from_result), - - ("clang_getDiagnosticRange", - [Diagnostic, c_uint], - SourceRange), - - ("clang_getDiagnosticSeverity", - [Diagnostic], - c_int), - - ("clang_getDiagnosticSpelling", - [Diagnostic], - _CXString, - _CXString.from_result), - - ("clang_getElementType", - [Type], - Type, - Type.from_result), - - ("clang_getEnumConstantDeclUnsignedValue", - [Cursor], - c_ulonglong), - - ("clang_getEnumConstantDeclValue", - [Cursor], - c_longlong), - - ("clang_getEnumDeclIntegerType", - [Cursor], - Type, - Type.from_result), - - ("clang_getFile", - [TranslationUnit, c_char_p], - c_object_p), - - ("clang_getFileName", - [File], - _CXString), # TODO go through _CXString.from_result? 
- - ("clang_getFileTime", - [File], - c_uint), - - ("clang_getIBOutletCollectionType", - [Cursor], - Type, - Type.from_result), - - ("clang_getIncludedFile", - [Cursor], - File, - File.from_cursor_result), - - ("clang_getInclusions", - [TranslationUnit, callbacks['translation_unit_includes'], py_object]), - - ("clang_getInstantiationLocation", - [SourceLocation, POINTER(c_object_p), POINTER(c_uint), POINTER(c_uint), - POINTER(c_uint)]), - - ("clang_getLocation", - [TranslationUnit, File, c_uint, c_uint], - SourceLocation), - - ("clang_getLocationForOffset", - [TranslationUnit, File, c_uint], - SourceLocation), - - ("clang_getNullCursor", - None, - Cursor), - - ("clang_getNumArgTypes", - [Type], - c_uint), - - ("clang_getNumCompletionChunks", - [c_void_p], - c_int), - - ("clang_getNumDiagnostics", - [c_object_p], - c_uint), - - ("clang_getNumDiagnosticsInSet", - [c_object_p], - c_uint), - - ("clang_getNumElements", - [Type], - c_longlong), - - ("clang_getNumOverloadedDecls", - [Cursor], - c_uint), - - ("clang_getOverloadedDecl", - [Cursor, c_uint], - Cursor, - Cursor.from_cursor_result), - - ("clang_getPointeeType", - [Type], - Type, - Type.from_result), - - ("clang_getRange", - [SourceLocation, SourceLocation], - SourceRange), - - ("clang_getRangeEnd", - [SourceRange], - SourceLocation), - - ("clang_getRangeStart", - [SourceRange], - SourceLocation), - - ("clang_getResultType", - [Type], - Type, - Type.from_result), - - ("clang_getSpecializedCursorTemplate", - [Cursor], - Cursor, - Cursor.from_cursor_result), - - ("clang_getTemplateCursorKind", - [Cursor], - c_uint), - - ("clang_getTokenExtent", - [TranslationUnit, Token], - SourceRange), - - ("clang_getTokenKind", - [Token], - c_uint), - - ("clang_getTokenLocation", - [TranslationUnit, Token], - SourceLocation), - - ("clang_getTokenSpelling", - [TranslationUnit, Token], - _CXString, - _CXString.from_result), - - ("clang_getTranslationUnitCursor", - [TranslationUnit], - Cursor, - Cursor.from_result), - - ("clang_getTranslationUnitSpelling", - [TranslationUnit], - _CXString, - _CXString.from_result), - - ("clang_getTUResourceUsageName", - [c_uint], - c_char_p), - - ("clang_getTypeDeclaration", - [Type], - Cursor, - Cursor.from_result), - - ("clang_getTypedefDeclUnderlyingType", - [Cursor], - Type, - Type.from_result), - - ("clang_getTypeKindSpelling", - [c_uint], - _CXString, - _CXString.from_result), - - ("clang_getTypeSpelling", - [Type], - _CXString, - _CXString.from_result), - - ("clang_hashCursor", - [Cursor], - c_uint), - - ("clang_isAttribute", - [CursorKind], - bool), - - ("clang_isConstQualifiedType", - [Type], - bool), - - ("clang_isCursorDefinition", - [Cursor], - bool), - - ("clang_isDeclaration", - [CursorKind], - bool), - - ("clang_isExpression", - [CursorKind], - bool), - - ("clang_isFileMultipleIncludeGuarded", - [TranslationUnit, File], - bool), - - ("clang_isFunctionTypeVariadic", - [Type], - bool), - - ("clang_isInvalid", - [CursorKind], - bool), - - ("clang_isPODType", - [Type], - bool), - - ("clang_isPreprocessing", - [CursorKind], - bool), - - ("clang_isReference", - [CursorKind], - bool), - - ("clang_isRestrictQualifiedType", - [Type], - bool), - - ("clang_isStatement", - [CursorKind], - bool), - - ("clang_isTranslationUnit", - [CursorKind], - bool), - - ("clang_isUnexposed", - [CursorKind], - bool), - - ("clang_isVirtualBase", - [Cursor], - bool), - - ("clang_isVolatileQualifiedType", - [Type], - bool), - - ("clang_parseTranslationUnit", - [Index, c_char_p, c_void_p, c_int, c_void_p, c_int, c_int], - c_object_p), - - 
("clang_reparseTranslationUnit", - [TranslationUnit, c_int, c_void_p, c_int], - c_int), - - ("clang_saveTranslationUnit", - [TranslationUnit, c_char_p, c_uint], - c_int), - - ("clang_tokenize", - [TranslationUnit, SourceRange, POINTER(POINTER(Token)), POINTER(c_uint)]), - - ("clang_visitChildren", - [Cursor, callbacks['cursor_visit'], py_object], - c_uint), - - ("clang_Cursor_getNumArguments", - [Cursor], - c_int), - - ("clang_Cursor_getArgument", - [Cursor, c_uint], - Cursor, - Cursor.from_result), - - ("clang_Cursor_getNumTemplateArguments", - [Cursor], - c_int), - - ("clang_Cursor_getTemplateArgumentKind", - [Cursor, c_uint], - TemplateArgumentKind.from_id), - - ("clang_Cursor_getTemplateArgumentType", - [Cursor, c_uint], - Type, - Type.from_result), - - ("clang_Cursor_getTemplateArgumentValue", - [Cursor, c_uint], - c_longlong), - - ("clang_Cursor_getTemplateArgumentUnsignedValue", - [Cursor, c_uint], - c_ulonglong), - - ("clang_Cursor_isAnonymous", - [Cursor], - bool), - - ("clang_Cursor_isBitField", - [Cursor], - bool), - - ("clang_Cursor_getBriefCommentText", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_Cursor_getRawCommentText", - [Cursor], - _CXString, - _CXString.from_result), - - ("clang_Cursor_getOffsetOfField", - [Cursor], - c_longlong), - - ("clang_Type_getAlignOf", - [Type], - c_longlong), - - ("clang_Type_getClassType", - [Type], - Type, - Type.from_result), - - ("clang_Type_getOffsetOf", - [Type, c_char_p], - c_longlong), - - ("clang_Type_getSizeOf", - [Type], - c_longlong), - - ("clang_Type_getCXXRefQualifier", - [Type], - c_uint), - - ("clang_Type_getNamedType", - [Type], - Type, - Type.from_result), - - ("clang_Type_visitFields", - [Type, callbacks['fields_visit'], py_object], - c_uint), -] - -class LibclangError(Exception): - def __init__(self, message): - self.m = message - - def __str__(self): - return self.m - -def register_function(lib, item, ignore_errors): - # A function may not exist, if these bindings are used with an older or - # incompatible version of libclang.so. - try: - func = getattr(lib, item[0]) - except AttributeError as e: - msg = str(e) + ". Please ensure that your python bindings are "\ - "compatible with your libclang.so version." - if ignore_errors: - return - raise LibclangError(msg) - - if len(item) >= 2: - func.argtypes = item[1] - - if len(item) >= 3: - func.restype = item[2] - - if len(item) == 4: - func.errcheck = item[3] - -def register_functions(lib, ignore_errors): - """Register function prototypes with a libclang library instance. - - This must be called as part of library instantiation so Python knows how - to call out to the shared library. 
- """ - - def register(item): - return register_function(lib, item, ignore_errors) - - for f in functionList: - register(f) - -class Config: - library_path = None - library_file = None - compatibility_check = False - loaded = False - - @staticmethod - def set_library_path(path): - """Set the path in which to search for libclang""" - if Config.loaded: - raise Exception("library path must be set before before using " \ - "any other functionalities in libclang.") - - Config.library_path = path - - @staticmethod - def set_library_file(filename): - """Set the exact location of libclang""" - if Config.loaded: - raise Exception("library file must be set before before using " \ - "any other functionalities in libclang.") - - Config.library_file = filename - - @staticmethod - def set_compatibility_check(check_status): - """ Perform compatibility check when loading libclang - - The python bindings are only tested and evaluated with the version of - libclang they are provided with. To ensure correct behavior a (limited) - compatibility check is performed when loading the bindings. This check - will throw an exception, as soon as it fails. - - In case these bindings are used with an older version of libclang, parts - that have been stable between releases may still work. Users of the - python bindings can disable the compatibility check. This will cause - the python bindings to load, even though they are written for a newer - version of libclang. Failures now arise if unsupported or incompatible - features are accessed. The user is required to test themselves if the - features they are using are available and compatible between different - libclang versions. - """ - if Config.loaded: - raise Exception("compatibility_check must be set before before " \ - "using any other functionalities in libclang.") - - Config.compatibility_check = check_status - - @CachedProperty - def lib(self): - lib = self.get_cindex_library() - register_functions(lib, not Config.compatibility_check) - Config.loaded = True - return lib - - def get_filename(self): - if Config.library_file: - return Config.library_file - - import platform - name = platform.system() - - if name == 'Darwin': - file = 'libclang.dylib' - elif name == 'Windows': - file = 'libclang.dll' - else: - file = 'libclang.so' - - if Config.library_path: - file = Config.library_path + '/' + file - - return file - - def get_cindex_library(self): - try: - library = cdll.LoadLibrary(self.get_filename()) - except OSError as e: - msg = str(e) + ". To provide a path to libclang use " \ - "Config.set_library_path() or " \ - "Config.set_library_file()." 
- raise LibclangError(msg) - - return library - - def function_exists(self, name): - try: - getattr(self.lib, name) - except AttributeError: - return False - - return True - -def register_enumerations(): - for name, value in clang.enumerations.TokenKinds: - TokenKind.register(value, name) - -conf = Config() -register_enumerations() - -__all__ = [ - 'Config', - 'CodeCompletionResults', - 'CompilationDatabase', - 'CompileCommands', - 'CompileCommand', - 'CursorKind', - 'Cursor', - 'Diagnostic', - 'File', - 'FixIt', - 'Index', - 'SourceLocation', - 'SourceRange', - 'TokenKind', - 'Token', - 'TranslationUnitLoadError', - 'TranslationUnit', - 'TypeKind', - 'Type', -] diff --git a/spaces/ma-xu/LIVE/thrust/thrust/iterator/transform_output_iterator.h b/spaces/ma-xu/LIVE/thrust/thrust/iterator/transform_output_iterator.h deleted file mode 100644 index 4c6683ae5c9b441d0c31d50d36fcabed60996b8e..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/iterator/transform_output_iterator.h +++ /dev/null @@ -1,163 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file thrust/iterator/transform_output_iterator.h - * \brief An output iterator which adapts another output iterator by applying a - * function to the result of its dereference before writing it. - */ - -#pragma once - -#include -#include - -namespace thrust -{ - -/*! \addtogroup iterators - * \{ - */ - -/*! \addtogroup fancyiterator Fancy Iterators - * \ingroup iterators - * \{ - */ - -/*! \p transform_output_iterator is a special kind of output iterator which - * transforms a value written upon dereference. This iterator is useful - * for transforming an output from algorithms without explicitly storing the - * intermediate result in the memory and applying subsequent transformation, - * thereby avoiding wasting memory capacity and bandwidth. - * Using \p transform_iterator facilitates kernel fusion by deferring execution - * of transformation until the value is written while saving both memory - * capacity and bandwidth. - * - * The following code snippet demonstrated how to create a - * \p transform_output_iterator which applies \c sqrtf to the assigning value. 
- * - * \code - * #include - * #include - * - * // note: functor inherits form unary function - * // note: functor inherits from unary_function - * struct square_root : public thrust::unary_function - * { - * __host__ __device__ - * float operator()(float x) const - * { - * return sqrtf(x); - * } - * }; - * - * int main() - * { - * thrust::device_vector v(4); - * - * typedef thrust::device_vector::iterator FloatIterator; - * thrust::transform_output_iterator iter(v.begin(), square_root()); - * - * iter[0] = 1.0f; // stores sqrtf( 1.0f) - * iter[1] = 4.0f; // stores sqrtf( 4.0f) - * iter[2] = 9.0f; // stores sqrtf( 9.0f) - * iter[3] = 16.0f; // stores sqrtf(16.0f) - * // iter[4] is an out-of-bounds error - * - * v[0]; // returns 1.0f; - * v[1]; // returns 2.0f; - * v[2]; // returns 3.0f; - * v[3]; // returns 4.0f; - * - * } - * \endcode - * - * \see make_transform_output_iterator - */ - -template - class transform_output_iterator - : public detail::transform_output_iterator_base::type -{ - - /*! \cond - */ - - public: - - typedef typename - detail::transform_output_iterator_base::type - super_t; - - friend class thrust::iterator_core_access; - /*! \endcond - */ - - /*! This constructor takes as argument an \c OutputIterator and an \c - * UnaryFunction and copies them to a new \p transform_output_iterator - * - * \param out An \c OutputIterator pointing to the output range whereto the result of - * \p transform_output_iterator's \c UnaryFunction will be written. - * \param fun An \c UnaryFunction used to transform the objects assigned to - * this \p transform_output_iterator. - */ - __host__ __device__ - transform_output_iterator(OutputIterator const& out, UnaryFunction fun) : super_t(out), fun(fun) - { - } - - /*! \cond - */ - private: - - __host__ __device__ - typename super_t::reference dereference() const - { - return detail::transform_output_iterator_proxy< - UnaryFunction, OutputIterator - >(this->base_reference(), fun); - } - - UnaryFunction fun; - - /*! \endcond - */ -}; // end transform_output_iterator - -/*! \p make_transform_output_iterator creates a \p transform_output_iterator from - * an \c OutputIterator and \c UnaryFunction. - * - * \param out The \c OutputIterator pointing to the output range of the newly - * created \p transform_output_iterator - * \param fun The \c UnaryFunction transform the object before assigning it to - * \c out by the newly created \p transform_output_iterator - * \see transform_output_iterator - */ -template -transform_output_iterator -__host__ __device__ -make_transform_output_iterator(OutputIterator out, UnaryFunction fun) -{ - return transform_output_iterator(out, fun); -} // end make_transform_output_iterator - -/*! \} // end fancyiterators - */ - -/*! \} // end iterators - */ - -} // end thrust - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/utils/options.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/utils/options.py deleted file mode 100644 index 09bfa5a5b68bec82902f65179883474f470c8de9..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/utils/options.py +++ /dev/null @@ -1,194 +0,0 @@ -import argparse -import random -import torch -import yaml -from collections import OrderedDict -from os import path as osp - -from basicsr.utils import set_random_seed -from basicsr.utils.dist_util import get_dist_info, init_dist, master_only - - -def ordered_yaml(): - """Support OrderedDict for yaml. - - Returns: - yaml Loader and Dumper. 
- """ - try: - from yaml import CDumper as Dumper - from yaml import CLoader as Loader - except ImportError: - from yaml import Dumper, Loader - - _mapping_tag = yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG - - def dict_representer(dumper, data): - return dumper.represent_dict(data.items()) - - def dict_constructor(loader, node): - return OrderedDict(loader.construct_pairs(node)) - - Dumper.add_representer(OrderedDict, dict_representer) - Loader.add_constructor(_mapping_tag, dict_constructor) - return Loader, Dumper - - -def dict2str(opt, indent_level=1): - """dict to string for printing options. - - Args: - opt (dict): Option dict. - indent_level (int): Indent level. Default: 1. - - Return: - (str): Option string for printing. - """ - msg = '\n' - for k, v in opt.items(): - if isinstance(v, dict): - msg += ' ' * (indent_level * 2) + k + ':[' - msg += dict2str(v, indent_level + 1) - msg += ' ' * (indent_level * 2) + ']\n' - else: - msg += ' ' * (indent_level * 2) + k + ': ' + str(v) + '\n' - return msg - - -def _postprocess_yml_value(value): - # None - if value == '~' or value.lower() == 'none': - return None - # bool - if value.lower() == 'true': - return True - elif value.lower() == 'false': - return False - # !!float number - if value.startswith('!!float'): - return float(value.replace('!!float', '')) - # number - if value.isdigit(): - return int(value) - elif value.replace('.', '', 1).isdigit() and value.count('.') < 2: - return float(value) - # list - if value.startswith('['): - return eval(value) - # str - return value - - -def parse_options(root_path, is_train=True): - parser = argparse.ArgumentParser() - parser.add_argument('-opt', type=str, required=True, help='Path to option YAML file.') - parser.add_argument('--launcher', choices=['none', 'pytorch', 'slurm'], default='none', help='job launcher') - parser.add_argument('--auto_resume', action='store_true') - parser.add_argument('--debug', action='store_true') - parser.add_argument('--local_rank', type=int, default=0) - parser.add_argument( - '--force_yml', nargs='+', default=None, help='Force to update yml files. 
Examples: train:ema_decay=0.999') - args = parser.parse_args() - - # parse yml to dict - with open(args.opt, mode='r') as f: - opt = yaml.load(f, Loader=ordered_yaml()[0]) - - # distributed settings - if args.launcher == 'none': - opt['dist'] = False - print('Disable distributed.', flush=True) - else: - opt['dist'] = True - if args.launcher == 'slurm' and 'dist_params' in opt: - init_dist(args.launcher, **opt['dist_params']) - else: - init_dist(args.launcher) - opt['rank'], opt['world_size'] = get_dist_info() - - # random seed - seed = opt.get('manual_seed') - if seed is None: - seed = random.randint(1, 10000) - opt['manual_seed'] = seed - set_random_seed(seed + opt['rank']) - - # force to update yml options - if args.force_yml is not None: - for entry in args.force_yml: - # now do not support creating new keys - keys, value = entry.split('=') - keys, value = keys.strip(), value.strip() - value = _postprocess_yml_value(value) - eval_str = 'opt' - for key in keys.split(':'): - eval_str += f'["{key}"]' - eval_str += '=value' - # using exec function - exec(eval_str) - - opt['auto_resume'] = args.auto_resume - opt['is_train'] = is_train - - # debug setting - if args.debug and not opt['name'].startswith('debug'): - opt['name'] = 'debug_' + opt['name'] - - if opt['num_gpu'] == 'auto': - opt['num_gpu'] = torch.cuda.device_count() - - # datasets - for phase, dataset in opt['datasets'].items(): - # for multiple datasets, e.g., val_1, val_2; test_1, test_2 - phase = phase.split('_')[0] - dataset['phase'] = phase - if 'scale' in opt: - dataset['scale'] = opt['scale'] - if dataset.get('dataroot_gt') is not None: - dataset['dataroot_gt'] = osp.expanduser(dataset['dataroot_gt']) - if dataset.get('dataroot_lq') is not None: - dataset['dataroot_lq'] = osp.expanduser(dataset['dataroot_lq']) - - # paths - for key, val in opt['path'].items(): - if (val is not None) and ('resume_state' in key or 'pretrain_network' in key): - opt['path'][key] = osp.expanduser(val) - - if is_train: - experiments_root = osp.join(root_path, 'experiments', opt['name']) - opt['path']['experiments_root'] = experiments_root - opt['path']['models'] = osp.join(experiments_root, 'models') - opt['path']['training_states'] = osp.join(experiments_root, 'training_states') - opt['path']['log'] = experiments_root - opt['path']['visualization'] = osp.join(experiments_root, 'visualization') - - # change some options for debug mode - if 'debug' in opt['name']: - if 'val' in opt: - opt['val']['val_freq'] = 8 - opt['logger']['print_freq'] = 1 - opt['logger']['save_checkpoint_freq'] = 8 - else: # test - results_root = osp.join(root_path, 'results', opt['name']) - opt['path']['results_root'] = results_root - opt['path']['log'] = results_root - opt['path']['visualization'] = osp.join(results_root, 'visualization') - - return opt, args - - -@master_only -def copy_opt_file(opt_file, experiments_root): - # copy the yml file to the experiment root - import sys - import time - from shutil import copyfile - cmd = ' '.join(sys.argv) - filename = osp.join(experiments_root, osp.basename(opt_file)) - copyfile(opt_file, filename) - - with open(filename, 'r+') as f: - lines = f.readlines() - lines.insert(0, f'# GENERATE TIME: {time.asctime()}\n# CMD:\n# {cmd}\n\n') - f.seek(0) - f.writelines(lines) diff --git a/spaces/manhkhanhUIT/BOPBTL/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/comm.py b/spaces/manhkhanhUIT/BOPBTL/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/comm.py deleted file mode 100644 
index 922f8c4a3adaa9b32fdcaef09583be03b0d7eb2b..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/BOPBTL/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/comm.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# File : comm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import queue -import collections -import threading - -__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster'] - - -class FutureResult(object): - """A thread-safe future implementation. Used only as one-to-one pipe.""" - - def __init__(self): - self._result = None - self._lock = threading.Lock() - self._cond = threading.Condition(self._lock) - - def put(self, result): - with self._lock: - assert self._result is None, 'Previous result has\'t been fetched.' - self._result = result - self._cond.notify() - - def get(self): - with self._lock: - if self._result is None: - self._cond.wait() - - res = self._result - self._result = None - return res - - -_MasterRegistry = collections.namedtuple('MasterRegistry', ['result']) -_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result']) - - -class SlavePipe(_SlavePipeBase): - """Pipe for master-slave communication.""" - - def run_slave(self, msg): - self.queue.put((self.identifier, msg)) - ret = self.result.get() - self.queue.put(True) - return ret - - -class SyncMaster(object): - """An abstract `SyncMaster` object. - - - During the replication, as the data parallel will trigger an callback of each module, all slave devices should - call `register(id)` and obtain an `SlavePipe` to communicate with the master. - - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected, - and passed to a registered callback. - - After receiving the messages, the master device should gather the information and determine to message passed - back to each slave devices. - """ - - def __init__(self, master_callback): - """ - - Args: - master_callback: a callback to be invoked after having collected messages from slave devices. - """ - self._master_callback = master_callback - self._queue = queue.Queue() - self._registry = collections.OrderedDict() - self._activated = False - - def __getstate__(self): - return {'master_callback': self._master_callback} - - def __setstate__(self, state): - self.__init__(state['master_callback']) - - def register_slave(self, identifier): - """ - Register an slave device. - - Args: - identifier: an identifier, usually is the device id. - - Returns: a `SlavePipe` object which can be used to communicate with the master device. - - """ - if self._activated: - assert self._queue.empty(), 'Queue is not clean before next initialization.' - self._activated = False - self._registry.clear() - future = FutureResult() - self._registry[identifier] = _MasterRegistry(future) - return SlavePipe(identifier, self._queue, future) - - def run_master(self, master_msg): - """ - Main entry for the master device in each forward pass. - The messages were first collected from each devices (including the master device), and then - an callback will be invoked to compute the message to be sent back to each devices - (including the master device). - - Args: - master_msg: the message that the master want to send to itself. 
This will be placed as the first - message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example. - - Returns: the message to be sent back to the master device. - - """ - self._activated = True - - intermediates = [(0, master_msg)] - for i in range(self.nr_slaves): - intermediates.append(self._queue.get()) - - results = self._master_callback(intermediates) - assert results[0][0] == 0, 'The first result should belongs to the master.' - - for i, res in results: - if i == 0: - continue - self._registry[i].result.put(res) - - for i in range(self.nr_slaves): - assert self._queue.get() is True - - return results[0][1] - - @property - def nr_slaves(self): - return len(self._registry) diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/predict.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/predict.py deleted file mode 100644 index 5573cd1a64d8357641299517338011e7e1aa1ac1..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/predict.py +++ /dev/null @@ -1,222 +0,0 @@ -import tempfile -from pathlib import Path -import argparse -import shutil -import os -import glob -import cv2 -import cog -from run import run_cmd - - -class Predictor(cog.Predictor): - def setup(self): - parser = argparse.ArgumentParser() - parser.add_argument( - "--input_folder", type=str, default="input/cog_temp", help="Test images" - ) - parser.add_argument( - "--output_folder", - type=str, - default="output", - help="Restored images, please use the absolute path", - ) - parser.add_argument("--GPU", type=str, default="0", help="0,1,2") - parser.add_argument( - "--checkpoint_name", - type=str, - default="Setting_9_epoch_100", - help="choose which checkpoint", - ) - self.opts = parser.parse_args("") - self.basepath = os.getcwd() - self.opts.input_folder = os.path.join(self.basepath, self.opts.input_folder) - self.opts.output_folder = os.path.join(self.basepath, self.opts.output_folder) - os.makedirs(self.opts.input_folder, exist_ok=True) - os.makedirs(self.opts.output_folder, exist_ok=True) - - @cog.input("image", type=Path, help="input image") - @cog.input( - "HR", - type=bool, - default=False, - help="whether the input image is high-resolution", - ) - @cog.input( - "with_scratch", - type=bool, - default=False, - help="whether the input image is scratched", - ) - def predict(self, image, HR=False, with_scratch=False): - try: - os.chdir(self.basepath) - input_path = os.path.join(self.opts.input_folder, os.path.basename(image)) - shutil.copy(str(image), input_path) - - gpu1 = self.opts.GPU - - ## Stage 1: Overall Quality Improve - print("Running Stage 1: Overall restoration") - os.chdir("./Global") - stage_1_input_dir = self.opts.input_folder - stage_1_output_dir = os.path.join( - self.opts.output_folder, "stage_1_restore_output" - ) - - os.makedirs(stage_1_output_dir, exist_ok=True) - - if not with_scratch: - - stage_1_command = ( - "python test.py --test_mode Full --Quality_restore --test_input " - + stage_1_input_dir - + " --outputs_dir " - + stage_1_output_dir - + " --gpu_ids " - + gpu1 - ) - run_cmd(stage_1_command) - else: - - mask_dir = os.path.join(stage_1_output_dir, "masks") - new_input = os.path.join(mask_dir, "input") - new_mask = os.path.join(mask_dir, "mask") - stage_1_command_1 = ( - "python detection.py --test_path " - + stage_1_input_dir - + " --output_dir " - + mask_dir - + " --input_size full_size" - + " --GPU " - + gpu1 - ) - - if HR: - HR_suffix = " --HR" - else: - HR_suffix = "" - - stage_1_command_2 = ( - 
"python test.py --Scratch_and_Quality_restore --test_input " - + new_input - + " --test_mask " - + new_mask - + " --outputs_dir " - + stage_1_output_dir - + " --gpu_ids " - + gpu1 - + HR_suffix - ) - - run_cmd(stage_1_command_1) - run_cmd(stage_1_command_2) - - ## Solve the case when there is no face in the old photo - stage_1_results = os.path.join(stage_1_output_dir, "restored_image") - stage_4_output_dir = os.path.join(self.opts.output_folder, "final_output") - os.makedirs(stage_4_output_dir, exist_ok=True) - for x in os.listdir(stage_1_results): - img_dir = os.path.join(stage_1_results, x) - shutil.copy(img_dir, stage_4_output_dir) - - print("Finish Stage 1 ...") - print("\n") - - ## Stage 2: Face Detection - - print("Running Stage 2: Face Detection") - os.chdir(".././Face_Detection") - stage_2_input_dir = os.path.join(stage_1_output_dir, "restored_image") - stage_2_output_dir = os.path.join( - self.opts.output_folder, "stage_2_detection_output" - ) - os.makedirs(stage_2_output_dir, exist_ok=True) - - stage_2_command = ( - "python detect_all_dlib_HR.py --url " - + stage_2_input_dir - + " --save_url " - + stage_2_output_dir - ) - - run_cmd(stage_2_command) - print("Finish Stage 2 ...") - print("\n") - - ## Stage 3: Face Restore - print("Running Stage 3: Face Enhancement") - os.chdir(".././Face_Enhancement") - stage_3_input_mask = "./" - stage_3_input_face = stage_2_output_dir - stage_3_output_dir = os.path.join( - self.opts.output_folder, "stage_3_face_output" - ) - - os.makedirs(stage_3_output_dir, exist_ok=True) - - self.opts.checkpoint_name = "FaceSR_512" - stage_3_command = ( - "python test_face.py --old_face_folder " - + stage_3_input_face - + " --old_face_label_folder " - + stage_3_input_mask - + " --tensorboard_log --name " - + self.opts.checkpoint_name - + " --gpu_ids " - + gpu1 - + " --load_size 512 --label_nc 18 --no_instance --preprocess_mode resize --batchSize 1 --results_dir " - + stage_3_output_dir - + " --no_parsing_map" - ) - - run_cmd(stage_3_command) - print("Finish Stage 3 ...") - print("\n") - - ## Stage 4: Warp back - print("Running Stage 4: Blending") - os.chdir(".././Face_Detection") - stage_4_input_image_dir = os.path.join(stage_1_output_dir, "restored_image") - stage_4_input_face_dir = os.path.join(stage_3_output_dir, "each_img") - stage_4_output_dir = os.path.join(self.opts.output_folder, "final_output") - os.makedirs(stage_4_output_dir, exist_ok=True) - - stage_4_command = ( - "python align_warp_back_multiple_dlib_HR.py --origin_url " - + stage_4_input_image_dir - + " --replace_url " - + stage_4_input_face_dir - + " --save_url " - + stage_4_output_dir - ) - - run_cmd(stage_4_command) - print("Finish Stage 4 ...") - print("\n") - - print("All the processing is done. Please check the results.") - - final_output = os.listdir(os.path.join(self.opts.output_folder, "final_output"))[0] - - image_restore = cv2.imread(os.path.join(self.opts.output_folder, "final_output", final_output)) - - out_path = Path(tempfile.mkdtemp()) / "out.png" - - cv2.imwrite(str(out_path), image_restore) - finally: - clean_folder(self.opts.input_folder) - clean_folder(self.opts.output_folder) - return out_path - - -def clean_folder(folder): - for filename in os.listdir(folder): - file_path = os.path.join(folder, filename) - try: - if os.path.isfile(file_path) or os.path.islink(file_path): - os.unlink(file_path) - elif os.path.isdir(file_path): - shutil.rmtree(file_path) - except Exception as e: - print(f"Failed to delete {file_path}. 
Reason:{e}") diff --git a/spaces/matthoffner/AudioCraft_Plus/scripts/templates/base.html b/spaces/matthoffner/AudioCraft_Plus/scripts/templates/base.html deleted file mode 100644 index f74668c19ecb83090a8a2d82c026bf417190ec6d..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/scripts/templates/base.html +++ /dev/null @@ -1,16 +0,0 @@ - - - - {% block head %} - - - AudioCraft — MOS - {% endblock %} - - -
        -

        AudioCraft — MOS

        - {% block content %}{% endblock %} -
        - - diff --git a/spaces/matthoffner/starchat-ui/pages/api/google.ts b/spaces/matthoffner/starchat-ui/pages/api/google.ts deleted file mode 100644 index 12024cbd714db593e33e504e4a96b24180311f3e..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/pages/api/google.ts +++ /dev/null @@ -1,149 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next'; - -import { OPENAI_API_HOST } from '@/utils/app/const'; -import { cleanSourceText } from '@/utils/server/google'; - -import { Message } from '@/types/chat'; -import { GoogleBody, GoogleSource } from '@/types/google'; - -import { Readability } from '@mozilla/readability'; -import endent from 'endent'; -import jsdom, { JSDOM } from 'jsdom'; - -const handler = async (req: NextApiRequest, res: NextApiResponse) => { - try { - const { messages, key, model, googleAPIKey, googleCSEId } = - req.body as GoogleBody; - - const userMessage = messages[messages.length - 1]; - const query = encodeURIComponent(userMessage.content.trim()); - - const googleRes = await fetch( - `https://customsearch.googleapis.com/customsearch/v1?key=${ - googleAPIKey ? googleAPIKey : process.env.GOOGLE_API_KEY - }&cx=${ - googleCSEId ? googleCSEId : process.env.GOOGLE_CSE_ID - }&q=${query}&num=5`, - ); - - const googleData = await googleRes.json(); - - const sources: GoogleSource[] = googleData.items.map((item: any) => ({ - title: item.title, - link: item.link, - displayLink: item.displayLink, - snippet: item.snippet, - image: item.pagemap?.cse_image?.[0]?.src, - text: '', - })); - - const sourcesWithText: any = await Promise.all( - sources.map(async (source) => { - try { - const timeoutPromise = new Promise((_, reject) => - setTimeout(() => reject(new Error('Request timed out')), 5000), - ); - - const res = (await Promise.race([ - fetch(source.link), - timeoutPromise, - ])) as any; - - // if (res) { - const html = await res.text(); - - const virtualConsole = new jsdom.VirtualConsole(); - virtualConsole.on('error', (error) => { - if (!error.message.includes('Could not parse CSS stylesheet')) { - console.error(error); - } - }); - - const dom = new JSDOM(html, { virtualConsole }); - const doc = dom.window.document; - const parsed = new Readability(doc).parse(); - - if (parsed) { - let sourceText = cleanSourceText(parsed.textContent); - - return { - ...source, - // TODO: switch to tokens - text: sourceText.slice(0, 2000), - } as GoogleSource; - } - // } - - return null; - } catch (error) { - console.error(error); - return null; - } - }), - ); - - const filteredSources: GoogleSource[] = sourcesWithText.filter(Boolean); - - const answerPrompt = endent` - Provide me with the information I requested. Use the sources to provide an accurate response. Respond in markdown format. Cite the sources you used as a markdown link as you use them at the end of each sentence by number of the source (ex: [[1]](link.com)). Provide an accurate response and then stop. Today's date is ${new Date().toLocaleDateString()}. - - Example Input: - What's the weather in San Francisco today? - - Example Sources: - [Weather in San Francisco](https://www.google.com/search?q=weather+san+francisco) - - Example Response: - It's 70 degrees and sunny in San Francisco today. 
[[1]](https://www.google.com/search?q=weather+san+francisco) - - Input: - ${userMessage.content.trim()} - - Sources: - ${filteredSources.map((source) => { - return endent` - ${source.title} (${source.link}): - ${source.text} - `; - })} - - Response: - `; - - const answerMessage: Message = { role: 'user', content: answerPrompt }; - - const answerRes = await fetch(`${OPENAI_API_HOST}/v1/chat/completions`, { - headers: { - 'Content-Type': 'application/json', - Authorization: `Bearer ${key ? key : process.env.OPENAI_API_KEY}`, - ...(process.env.OPENAI_ORGANIZATION && { - 'OpenAI-Organization': process.env.OPENAI_ORGANIZATION, - }), - }, - method: 'POST', - body: JSON.stringify({ - model: model.id, - messages: [ - { - role: 'system', - content: `Use the sources to provide an accurate response. Respond in markdown format. Cite the sources you used as [1](link), etc, as you use them. Maximum 4 sentences.`, - }, - answerMessage, - ], - max_tokens: 1000, - temperature: 1, - stream: false, - }), - }); - - const { choices: choices2 } = await answerRes.json(); - const answer = choices2[0].message.content; - - res.status(200).json({ answer }); - } catch (error) { - console.error(error); - res.status(500).json({ error: 'Error'}) - } -}; - -export default handler; diff --git a/spaces/maxjmohr/MSc_02_PDL_A4/README.md b/spaces/maxjmohr/MSc_02_PDL_A4/README.md deleted file mode 100644 index 67a285c8a063f13d70be690a5171641dbb7f23e7..0000000000000000000000000000000000000000 --- a/spaces/maxjmohr/MSc_02_PDL_A4/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MSc 02 PDL A4 -emoji: 👀 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/meluvsguaca/iluvguacastoo/Dockerfile b/spaces/meluvsguaca/iluvguacastoo/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/meluvsguaca/iluvguacastoo/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/menghanxia/disco/models/transformer2d.py b/spaces/menghanxia/disco/models/transformer2d.py deleted file mode 100644 index b494597100fdd631d6edecb4b5feb1b840ddce79..0000000000000000000000000000000000000000 --- a/spaces/menghanxia/disco/models/transformer2d.py +++ /dev/null @@ -1,229 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn -import copy, math -from models.position_encoding import build_position_encoding - - -class TransformerEncoder(nn.Module): - - def __init__(self, enc_layer, num_layers, use_dense_pos=False): - super().__init__() - self.layers = nn.ModuleList([copy.deepcopy(enc_layer) for i in range(num_layers)]) - self.num_layers = num_layers - self.use_dense_pos = use_dense_pos - - def forward(self, src, pos, padding_mask=None): - if self.use_dense_pos: - ## pos encoding at each MH-Attention block (q,k) - output, pos_enc = src, pos - for layer in self.layers: - output, att_map = layer(output, pos_enc, padding_mask) - else: - ## pos encoding at input only (q,k,v) - output, pos_enc = src + pos, None - for layer in self.layers: - output, att_map 
= layer(output, pos_enc, padding_mask) - return output, att_map - - -class EncoderLayer(nn.Module): - - def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1, activation="relu", - use_dense_pos=False): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - - def with_pos_embed(self, tensor, pos): - return tensor if pos is None else tensor + pos - - def forward(self, src, pos, padding_mask): - q = k = self.with_pos_embed(src, pos) - src2, attn = self.self_attn(q, k, value=src, key_padding_mask=padding_mask) - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src, attn - - -class TransformerDecoder(nn.Module): - - def __init__(self, dec_layer, num_layers, use_dense_pos=False, return_intermediate=False): - super().__init__() - self.layers = nn.ModuleList([copy.deepcopy(dec_layer) for i in range(num_layers)]) - self.num_layers = num_layers - self.use_dense_pos = use_dense_pos - self.return_intermediate = return_intermediate - - def forward(self, tgt, tgt_pos, memory, memory_pos, - tgt_padding_mask, src_padding_mask, tgt_attn_mask=None): - intermediate = [] - if self.use_dense_pos: - ## pos encoding at each MH-Attention block (q,k) - output = tgt - tgt_pos_enc, memory_pos_enc = tgt_pos, memory_pos - for layer in self.layers: - output, att_map = layer(output, tgt_pos_enc, memory, memory_pos_enc, - tgt_padding_mask, src_padding_mask, tgt_attn_mask) - if self.return_intermediate: - intermediate.append(output) - else: - ## pos encoding at input only (q,k,v) - output = tgt + tgt_pos - tgt_pos_enc, memory_pos_enc = None, None - for layer in self.layers: - output, att_map = layer(output, tgt_pos_enc, memory, memory_pos_enc, - tgt_padding_mask, src_padding_mask, tgt_attn_mask) - if self.return_intermediate: - intermediate.append(output) - - if self.return_intermediate: - return torch.stack(intermediate) - return output, att_map - - -class DecoderLayer(nn.Module): - - def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1, activation="relu", - use_dense_pos=False): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - self.corr_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.norm3 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - self.dropout3 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - - def with_pos_embed(self, tensor, pos): - return tensor if pos is None else tensor + pos - - def forward(self, tgt, tgt_pos, memory, memory_pos, - tgt_padding_mask, memory_padding_mask, tgt_attn_mask): - q = k = self.with_pos_embed(tgt, tgt_pos) - tgt2, attn = self.self_attn(q, k, value=tgt, key_padding_mask=tgt_padding_mask, - 
attn_mask=tgt_attn_mask) - tgt = tgt + self.dropout1(tgt2) - tgt = self.norm1(tgt) - tgt2, attn = self.corr_attn(query=self.with_pos_embed(tgt, tgt_pos), - key=self.with_pos_embed(memory, memory_pos), - value=memory, key_padding_mask=memory_padding_mask) - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout3(tgt2) - tgt = self.norm3(tgt) - return tgt, attn - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(F"activation should be relu/gelu, not {activation}.") - - - -#----------------------------------------------------------------------------------- -''' -copy from the implementatoin of "attention-is-all-you-need-pytorch-master" by Yu-Hsiang Huang -''' - -class MultiHeadAttention(nn.Module): - ''' Multi-Head Attention module ''' - - def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1): - super().__init__() - - self.n_head = n_head - self.d_k = d_k - self.d_v = d_v - - self.w_qs = nn.Linear(d_model, n_head * d_k, bias=False) - self.w_ks = nn.Linear(d_model, n_head * d_k, bias=False) - self.w_vs = nn.Linear(d_model, n_head * d_v, bias=False) - self.fc = nn.Linear(n_head * d_v, d_model, bias=False) - - self.attention = ScaledDotProductAttention(temperature=d_k ** 0.5) - - self.dropout = nn.Dropout(dropout) - self.layer_norm = nn.LayerNorm(d_model, eps=1e-6) - - - def forward(self, q, k, v, mask=None): - - d_k, d_v, n_head = self.d_k, self.d_v, self.n_head - sz_b, len_q, len_k, len_v = q.size(0), q.size(1), k.size(1), v.size(1) - - residual = q - - # Pass through the pre-attention projection: b x lq x (n*dv) - # Separate different heads: b x lq x n x dv - q = self.w_qs(q).view(sz_b, len_q, n_head, d_k) - k = self.w_ks(k).view(sz_b, len_k, n_head, d_k) - v = self.w_vs(v).view(sz_b, len_v, n_head, d_v) - - # Transpose for attention dot product: b x n x lq x dv - q, k, v = q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2) - - if mask is not None: - mask = mask.unsqueeze(1) # For head axis broadcasting. 
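- # Shape note (assuming mask arrives as (sz_b, len_q, len_k)): unsqueeze(1) yields
- # (sz_b, 1, len_q, len_k), which broadcasts across the n_head axis of the attention
- # scores (sz_b, n_head, len_q, len_k) computed inside ScaledDotProductAttention.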
- - q, attn = self.attention(q, k, v, mask=mask) - - # Transpose to move the head dimension back: b x lq x n x dv - # Combine the last two dimensions to concatenate all the heads together: b x lq x (n*dv) - q = q.transpose(1, 2).contiguous().view(sz_b, len_q, -1) - q = self.dropout(self.fc(q)) - q += residual - - q = self.layer_norm(q) - - return q, attn - - - -class ScaledDotProductAttention(nn.Module): - ''' Scaled Dot-Product Attention ''' - - def __init__(self, temperature, attn_dropout=0.1): - super().__init__() - self.temperature = temperature - self.dropout = nn.Dropout(attn_dropout) - - def forward(self, q, k, v, mask=None): - - attn = torch.matmul(q / self.temperature, k.transpose(2, 3)) - - if mask is not None: - attn = attn.masked_fill(mask == 0, -1e9) - - attn = self.dropout(F.softmax(attn, dim=-1)) - output = torch.matmul(attn, v) - - return output, attn \ No newline at end of file diff --git a/spaces/merve/data-leak/source/private-and-fair/style.css b/spaces/merve/data-leak/source/private-and-fair/style.css deleted file mode 100644 index 420336c2e0c31186e29779935402929f9275b845..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/private-and-fair/style.css +++ /dev/null @@ -1,307 +0,0 @@ -html{ - min-width: 830px; - overflow-x: auto; -} - -.highlight-yellow{ - margin-top: -30px; - margin-bottom: 20px; -} -.highlight-yellow a{ - background: yellow; - padding: 5px; -} - -.tooltip{ - width: 112px; -} - -.tooltip-footnote { - top: -1000px; - position: absolute; - padding: 10px; - background: rgba(255, 255, 255, .8); - border: 0px solid lightgray; - - width: 300px !important; - font-size: 14px; - line-height: 1.4em; - background: rgba(0, 0, 0, .8); - color: #fff; - pointer-events: all !important; -} -.tooltip-footnote a{ - color: #fff !important; - -} -.tooltip-footnote:hover{ -/* opacity: 1; - pointer-events: all !important; -*/} - -.tooltip-footnote-hidden{ - opacity: 0; - transition: opacity .3s; - transition-delay: .2s; - pointer-events: none !important; -} - -.tooltip-hidden{ - pointer-events: none !important; -} - -@media (max-width: 590px){ - .footend{ - margin-left: 0px; - width: 10px; - } - - - div.tooltip-footnote{ - transition: all 0s !important; - transition-delay: 0s !important; - - display: none; - position: fixed; - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -.footstart{ - padding-left: 2px; - height: 8px !important; - /*background: red;*/ - /*display: inline-block;*/ - line-height: 0em; -} - - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -circle.point{ - stroke: #000; - stroke-width: .5; - fill-opacity: .5; - cursor: pointer; -} - -circle.point.swapped{ - stroke-width: 2; -} - -path.boundry-line{ - pointer-events: none; - opacity: .1; -} - -.dragging{ - cursor: pointer; -} - -.sliders{ - position: relative; - top: 10px; - padding-top: 5px; -} - -.slider-container{ - height: 30px; -} - -.graph{ - width: 900px; -} - - -.chart-title{ - font-size: 14px; - font-weight: 600; - text-align: center; - margin-top: 25px; - /*padding-top: 5px;*/ -} - -.epoch-graph{ - max-width: 700px; - margin: 0px auto; -} - -.decision-boundry{ - max-width: 320px; - margin: 0px auto; -} - - - -.digit-button-container{ - max-width: 400px; - margin: 0px auto; - display: flex; - gap: 10px; -} - - -.button{ - text-align: center; - flex-grow: 1; - flex-basis: 0; - padding: 5px; - cursor: pointer; - user-select: none; - - outline: 1px solid #ccc; - - 
position: relative; -} - -@media (hover: hover) and (pointer: fine) { - .button:hover{ - /*border-color: #000;*/ - /*border-left-width: 1px;*/ - outline: 1px solid #000 !important; - z-index: 100; - } -} - - -.button.active{ - background: #000; - color: #fff; - outline: 0px; - /*font-weight: 500;*/ -} - - -.button-row > div{ - display: inline-block; -} - -.accuracy-line{ - stroke: #888; -} -.accuracy-line.active{ - stroke-width: 3px; - stroke: #000; - /*stroke: rgb(219, 61, 17);*/ -} - -.accuracy-circle{ - fill: #888; - /*opacity: .5;*/ -} -.accuracy-circle text{ - pointer-events: none; -} -.accuracy-circle.active{ - opacity: 1; - fill: #000; - - /*fill: rgb(219, 61, 17);*/ -} - -.accuracy-label.active text{ - font-weight: 600 !important; -} - -.digit-button-container{ - margin-bottom: 30px; -} - - - -.slider-native { - -webkit-appearance: none; - /*width: 100%;*/ - width: 180px; - height: 15px; - background: #d3d3d3; - outline: none; - -webkit-transition: .2s; - transition: opacity .2s; - position: relative; - left: 1em; - top: 2px; -} - -.slider-native::-webkit-slider-thumb { - -webkit-appearance: none; - appearance: none; - width: 30px; - height: 30px; - border-radius: 50%; - background: #000; - cursor: pointer; -} -.slider-native:hover { - opacity: 1; -} - -svg{ - user-select: none; -} - - -.axis .tick text{ - fill: #555; -} - -.annotation{ - font-size: 12px; -} - - - -ul{ - margin-top: -1em; - list-style: none; - -} - -li{ - margin-left: 10px; -} - - - -.info-box .post:hover .img{ - outline: 1px solid #333 !important; -} -.info-box .post:hover .title{ - text-decoration: underline !important; -} - -.post-summary{ - display: none; -} - - -.x .tick.active path{ - stroke: rgba(255,255,0,.5) !important; - stroke-width: 9; -} - - -.active circle{ - stroke-width: 2; - stroke: #000; -} - -.accuracy-rect.active rect:first-child{ - stroke: yellow !important; - fill: #ccc !important; - fill-opacity: 1; - stroke-width: 5; - paint-order: stroke; - -} \ No newline at end of file diff --git a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js b/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js deleted file mode 100644 index 8ab520922aa2b8cb8086ca86f5119fc0b46ac433..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/watch-files.js +++ /dev/null @@ -1,83 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -!(function(){ - function watchFile(path){ - var lastStr = '' - - console.log(path) - function check(){ - d3.text(path + '?' 
+ Math.random(), (err, nextStr) => { - if (err){ - console.log(err) - return check() - } - - if (nextStr == lastStr) return - lastStr = nextStr - - if (path.includes('.js')){ - console.log('js', new Date()) - Function(nextStr.replace('\n', ';').replace('\n', ';'))() - } - - if (path.includes('.css')){ - console.log('css', new Date()) - - Array.from(document.querySelectorAll('link')) - .filter(d => d.href.includes(path) || d.href.includes('__hs_placeholder')) - .filter((d, i) => i == 0) - .forEach(d => d.href = path + '?' + Math.random()) - } - }) - - if (python_settings.isDev) setTimeout(check, 100) - } - check() - } - - ;[ - 'list.css', - 'style.css', - '../two-sentences/init-scatter.js', - '../two-sentences/init-util.js', - '../two-sentences/init-pair.js', - 'init.js' - ].forEach(filename => { - var root = document.currentScript.src.replace('watch-files.js', '').split('?')[0] - var path = root + filename - - if (python_settings.isDev){ - watchFile(path) - } else { - if (path.includes('.js')){ - var node = document.createElement('script') - node.setAttribute('src', path) - document.body.appendChild(node) - } - - if (path.includes('.css')){ - Array.from(document.querySelectorAll('link')) - .filter(d => d.href.includes(path) || d.href.includes('__hs_placeholder')) - .filter((d, i) => i == 0) - .forEach(d => d.href = path + '?' + Math.random()) - } - } - }) -})() - - - diff --git a/spaces/merve/measuring-fairness/source/anonymization/make-gs.js b/spaces/merve/measuring-fairness/source/anonymization/make-gs.js deleted file mode 100644 index 4eb1aaeffeb2a69e726a9d452d7eea7b3352b318..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/source/anonymization/make-gs.js +++ /dev/null @@ -1,105 +0,0 @@ -window.makeGS = function(){ - var prevSlideIndex = -1 - function updateSlide(i){ - var slide = slides[i] - if (!slide) return - - d3.select('.tooltip').classed('tooltip-hidden', true) - - var dur = 500 - - sel.student.transition('xKey').duration(dur).delay(dur ? slide.circleDelayFn : 0) - .translate(d => (d.isAdditionalStudent && slide.xKey != 'plagerizedShifted') ? [0,0]: d.pos[slide.xKey]) - - - if (sel.rectAt[slide.xKey]){ - sel.uniqueBox.transition('at').duration(dur) - .delay(d => dur ? slide.circleDelayFn(d.d0) : 0) - .at(sel.rectAt[slide.xKey]) - .translate(d => d.d0.group[slide.xKey].pos) - } - - sel.uniqueBox.transition().duration(dur) - .st({opacity: slide.showUniqueBox ? 1 : 0}) - - sel.uniqueSeasonBox.transition() - .delay((d, i) => slide.showUniqueSeasonBox ? dur*2 + i*40 : 0).duration(slide.showUniqueSeasonBox ? 0 : dur) - .st({opacity: slide.showUniqueSeasonBox ? 1 : 0}) - - - if (sliders.headsProb != slide.headsProbTarget && slide.animateHeadsProbSlider != -1){ - var headI = d3.interpolate(sliders.headsProb, slide.headsProbTarget) - if (window.headSliderTimer) window.headSliderTimer.stop() - window.headSliderTimer = d3.timer(ms => { - var dur = slide.animateHeadsProbSlider ? 2000 : 1 - var t = d3.easeCubicInOut(d3.clamp(0, ms/dur, 1)) - sliders.updateHeadsProb(headI(t)) - if (t == 1) headSliderTimer.stop() - }) - } - - if (sliders.population != slide.populationTarget){ - var popI = d3.interpolate(sliders.population, slide.populationTarget) - if (window.popSliderTimer) window.popSliderTimer.stop() - window.popSliderTimer = d3.timer(ms => { - var dur = slide.animatePopulationSlider ? 
2000 : 1 - var t = d3.easeCubicInOut(d3.clamp(0, ms/dur, 1)) - sliders.updatePopulation(Math.round(popI(t)/2)*2) - if (t == 1) popSliderTimer.stop() - }) - } - - axii.stateAxis.transition().duration(dur/2) - .st({opacity: slide.showStateAxis ? 1 : 0}) - axii.ageAxis.transition().duration(dur/2) - .st({opacity: slide.showAgeAxis ? 1 : 0}) - axii.seasonAxis.transition().duration(dur/2) - .st({opacity: slide.showSeasonAxis ? 1 : 0}) - axii.headAxis.transition().duration(dur/2) - .st({opacity: slide.showHeadAxis ? 1 : 0}) - axii.headCaptionAxis.transition().duration(dur/2) - .st({opacity: slide.showHeadCaptionAxis ? 1 : 0}) - estimates.axisSel.transition().delay(dur).duration(dur/2) - .st({opacity: slide.showHistogramAxis ? 1 : 0}) - estimates.activeSel.transition().delay(dur).duration(dur/2) - .st({opacity: slide.showHistogramAxis ? 1 : 0}) - // axii.estimateAxis.transition().delay(dur).duration(dur/2) - // .st({opacity: slide.showEstimate && !slide.enterHistogram ? 1 : 0}) - // axii.plagerizedAxis.transition().delay(dur).duration(dur/2) - // .st({opacity: slide.showPlagerizedAxis ? 1 : 0}) - - - annotationSel.transition().duration(dur/2) - .st({opacity: d => i == d.slide ? 1 : 0}) - - estimates.containerSel.transition('xKey').duration(dur/2) - .st({opacity: slide.showHistogram ? 1 : 0}) - - if (slide.enterHistogram){ - estimates.render(true) - } else { - window.flipAllCoinsTimer._time = Infinity - } - if (slide.enterHistogram === 0) estimates.estimateSel.classed('active', 1) - - - // Display the default coin flip state if the histogram is not visible. - sel.flipCircle.transition().duration(dur) - .at({transform: d => { - return slide.showFlipCircle && d.coinVals[estimates.active.index] < sliders.headsProb ? 'scale(1)' : 'scale(.1)'}}) - - prevSlideIndex = i - slides.curSlide = slide - } - - var gs = d3.graphScroll() - .container(d3.select('.container-1')) - .graph(d3.selectAll('container-1 #graph')) - .eventId('uniqueId1') - .sections(d3.selectAll('.container-1 #sections > div')) - .offset(300) - .on('active', updateSlide) -} - - -if (window.init) window.init() diff --git a/spaces/merve/streamlit-dataset-demo/README.md b/spaces/merve/streamlit-dataset-demo/README.md deleted file mode 100644 index d22ab0173862dc6f6717dc792e5558721e6ae7f3..0000000000000000000000000000000000000000 --- a/spaces/merve/streamlit-dataset-demo/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: EDA -emoji: 🧚🏻‍♀️ -colorFrom: purple -colorTo: purple -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/mithril-security/blind_chat/src/lib/utils/share.ts b/spaces/mithril-security/blind_chat/src/lib/utils/share.ts deleted file mode 100644 index 4587669a10164aa7c961429fbddec9cf438c0eca..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/lib/utils/share.ts +++ /dev/null @@ -1,7 +0,0 @@ -export function share(url: string, title: string) { - if (navigator.share) { - navigator.share({ url, title }); - } else { - prompt("Copy this public url to share:", url); - } -} diff --git a/spaces/mms-meta/MMS/uroman/lib/JSON/backportPP/Compat5005.pm b/spaces/mms-meta/MMS/uroman/lib/JSON/backportPP/Compat5005.pm deleted file mode 100644 index 139990edff0a28474e53f882d4c4efeb2ad7d701..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/uroman/lib/JSON/backportPP/Compat5005.pm +++ /dev/null @@ -1,131 +0,0 @@ -package # This is JSON::backportPP - JSON::backportPP5005; - -use 5.005; -use strict; - -my @properties; - -$JSON::PP5005::VERSION = '1.10'; - -BEGIN { - - sub utf8::is_utf8 { - 0; # It is considered that UTF8 flag off for Perl 5.005. - } - - sub utf8::upgrade { - } - - sub utf8::downgrade { - 1; # must always return true. - } - - sub utf8::encode { - } - - sub utf8::decode { - } - - *JSON::PP::JSON_PP_encode_ascii = \&_encode_ascii; - *JSON::PP::JSON_PP_encode_latin1 = \&_encode_latin1; - *JSON::PP::JSON_PP_decode_surrogates = \&_decode_surrogates; - *JSON::PP::JSON_PP_decode_unicode = \&_decode_unicode; - - # missing in B module. - sub B::SVp_IOK () { 0x01000000; } - sub B::SVp_NOK () { 0x02000000; } - sub B::SVp_POK () { 0x04000000; } - - $INC{'bytes.pm'} = 1; # dummy -} - - - -sub _encode_ascii { - join('', map { $_ <= 127 ? chr($_) : sprintf('\u%04x', $_) } unpack('C*', $_[0]) ); -} - - -sub _encode_latin1 { - join('', map { chr($_) } unpack('C*', $_[0]) ); -} - - -sub _decode_surrogates { # from http://homepage1.nifty.com/nomenclator/unicode/ucs_utf.htm - my $uni = 0x10000 + (hex($_[0]) - 0xD800) * 0x400 + (hex($_[1]) - 0xDC00); # from perlunicode - my $bit = unpack('B32', pack('N', $uni)); - - if ( $bit =~ /^00000000000(...)(......)(......)(......)$/ ) { - my ($w, $x, $y, $z) = ($1, $2, $3, $4); - return pack('B*', sprintf('11110%s10%s10%s10%s', $w, $x, $y, $z)); - } - else { - Carp::croak("Invalid surrogate pair"); - } -} - - -sub _decode_unicode { - my ($u) = @_; - my ($utf8bit); - - if ( $u =~ /^00([89a-f][0-9a-f])$/i ) { # 0x80-0xff - return pack( 'H2', $1 ); - } - - my $bit = unpack("B*", pack("H*", $u)); - - if ( $bit =~ /^00000(.....)(......)$/ ) { - $utf8bit = sprintf('110%s10%s', $1, $2); - } - elsif ( $bit =~ /^(....)(......)(......)$/ ) { - $utf8bit = sprintf('1110%s10%s10%s', $1, $2, $3); - } - else { - Carp::croak("Invalid escaped unicode"); - } - - return pack('B*', $utf8bit); -} - - -sub JSON::PP::incr_text { - $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new; - - if ( $_[0]->{_incr_parser}->{incr_parsing} ) { - Carp::croak("incr_text can not be called when the incremental parser already started parsing"); - } - - $_[0]->{_incr_parser}->{incr_text} = $_[1] if ( @_ > 1 ); - $_[0]->{_incr_parser}->{incr_text}; -} - - -1; -__END__ - -=pod - -=head1 NAME - -JSON::PP5005 - Helper module in using JSON::PP in Perl 5.005 - -=head1 DESCRIPTION - -JSON::PP calls internally. 
- -=head1 AUTHOR - -Makamaka Hannyaharamitu, Emakamaka[at]cpan.orgE - - -=head1 COPYRIGHT AND LICENSE - -Copyright 2007-2012 by Makamaka Hannyaharamitu - -This library is free software; you can redistribute it and/or modify -it under the same terms as Perl itself. - -=cut - diff --git a/spaces/mnauf/detect-bees/classify/train.py b/spaces/mnauf/detect-bees/classify/train.py deleted file mode 100644 index 178ebcdfff53a6e0af512f22165ed68031810d8c..0000000000000000000000000000000000000000 --- a/spaces/mnauf/detect-bees/classify/train.py +++ /dev/null @@ -1,331 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Train a YOLOv5 classifier model on a classification dataset - -Usage - Single-GPU training: - $ python classify/train.py --model yolov5s-cls.pt --data imagenette160 --epochs 5 --img 224 - -Usage - Multi-GPU DDP training: - $ python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3 - -Datasets: --data mnist, fashion-mnist, cifar10, cifar100, imagenette, imagewoof, imagenet, or 'path/to/data' -YOLOv5-cls models: --model yolov5n-cls.pt, yolov5s-cls.pt, yolov5m-cls.pt, yolov5l-cls.pt, yolov5x-cls.pt -Torchvision models: --model resnet50, efficientnet_b0, etc. See https://pytorch.org/vision/stable/models.html -""" - -import argparse -import os -import subprocess -import sys -import time -from copy import deepcopy -from datetime import datetime -from pathlib import Path - -import torch -import torch.distributed as dist -import torch.hub as hub -import torch.optim.lr_scheduler as lr_scheduler -import torchvision -from torch.cuda import amp -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from classify import val as validate -from models.experimental import attempt_load -from models.yolo import ClassificationModel, DetectionModel -from utils.dataloaders import create_classification_dataloader -from utils.general import (DATASETS_DIR, LOGGER, WorkingDirectory, check_git_status, check_requirements, colorstr, - download, increment_path, init_seeds, print_args, yaml_save) -from utils.loggers import GenericLogger -from utils.plots import imshow_cls -from utils.torch_utils import (ModelEMA, model_info, reshape_classifier_output, select_device, smart_DDP, - smart_optimizer, smartCrossEntropyLoss, torch_distributed_zero_first) - -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv('RANK', -1)) -WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1)) - - -def train(opt, device): - init_seeds(opt.seed + 1 + RANK, deterministic=True) - save_dir, data, bs, epochs, nw, imgsz, pretrained = \ - opt.save_dir, Path(opt.data), opt.batch_size, opt.epochs, min(os.cpu_count() - 1, opt.workers), \ - opt.imgsz, str(opt.pretrained).lower() == 'true' - cuda = device.type != 'cpu' - - # Directories - wdir = save_dir / 'weights' - wdir.mkdir(parents=True, exist_ok=True) # make dir - last, best = wdir / 'last.pt', wdir / 'best.pt' - - # Save run settings - yaml_save(save_dir / 'opt.yaml', vars(opt)) - - # Logger - logger = GenericLogger(opt=opt, console_logger=LOGGER) if RANK in {-1, 0} else None - - # Download Dataset - with torch_distributed_zero_first(LOCAL_RANK), WorkingDirectory(ROOT): - data_dir = data if data.is_dir() else (DATASETS_DIR / data) - if not 
data_dir.is_dir(): - LOGGER.info(f'\nDataset not found ⚠️, missing path {data_dir}, attempting download...') - t = time.time() - if str(data) == 'imagenet': - subprocess.run(f"bash {ROOT / 'data/scripts/get_imagenet.sh'}", shell=True, check=True) - else: - url = f'https://github.com/ultralytics/yolov5/releases/download/v1.0/{data}.zip' - download(url, dir=data_dir.parent) - s = f"Dataset download success ✅ ({time.time() - t:.1f}s), saved to {colorstr('bold', data_dir)}\n" - LOGGER.info(s) - - # Dataloaders - nc = len([x for x in (data_dir / 'train').glob('*') if x.is_dir()]) # number of classes - trainloader = create_classification_dataloader(path=data_dir / 'train', - imgsz=imgsz, - batch_size=bs // WORLD_SIZE, - augment=True, - cache=opt.cache, - rank=LOCAL_RANK, - workers=nw) - - test_dir = data_dir / 'test' if (data_dir / 'test').exists() else data_dir / 'val' # data/test or data/val - if RANK in {-1, 0}: - testloader = create_classification_dataloader(path=test_dir, - imgsz=imgsz, - batch_size=bs // WORLD_SIZE * 2, - augment=False, - cache=opt.cache, - rank=-1, - workers=nw) - - # Model - with torch_distributed_zero_first(LOCAL_RANK), WorkingDirectory(ROOT): - if Path(opt.model).is_file() or opt.model.endswith('.pt'): - model = attempt_load(opt.model, device='cpu', fuse=False) - elif opt.model in torchvision.models.__dict__: # TorchVision models i.e. resnet50, efficientnet_b0 - model = torchvision.models.__dict__[opt.model](weights='IMAGENET1K_V1' if pretrained else None) - else: - m = hub.list('ultralytics/yolov5') # + hub.list('pytorch/vision') # models - raise ModuleNotFoundError(f'--model {opt.model} not found. Available models are: \n' + '\n'.join(m)) - if isinstance(model, DetectionModel): - LOGGER.warning("WARNING ⚠️ pass YOLOv5 classifier model with '-cls' suffix, i.e. 
'--model yolov5s-cls.pt'") - model = ClassificationModel(model=model, nc=nc, cutoff=opt.cutoff or 10) # convert to classification model - reshape_classifier_output(model, nc) # update class count - for m in model.modules(): - if not pretrained and hasattr(m, 'reset_parameters'): - m.reset_parameters() - if isinstance(m, torch.nn.Dropout) and opt.dropout is not None: - m.p = opt.dropout # set dropout - for p in model.parameters(): - p.requires_grad = True # for training - model = model.to(device) - - # Info - if RANK in {-1, 0}: - model.names = trainloader.dataset.classes # attach class names - model.transforms = testloader.dataset.torch_transforms # attach inference transforms - model_info(model) - if opt.verbose: - LOGGER.info(model) - images, labels = next(iter(trainloader)) - file = imshow_cls(images[:25], labels[:25], names=model.names, f=save_dir / 'train_images.jpg') - logger.log_images(file, name='Train Examples') - logger.log_graph(model, imgsz) # log model - - # Optimizer - optimizer = smart_optimizer(model, opt.optimizer, opt.lr0, momentum=0.9, decay=opt.decay) - - # Scheduler - lrf = 0.01 # final lr (fraction of lr0) - # lf = lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * (1 - lrf) + lrf # cosine - lf = lambda x: (1 - x / epochs) * (1 - lrf) + lrf # linear - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf) - # scheduler = lr_scheduler.OneCycleLR(optimizer, max_lr=lr0, total_steps=epochs, pct_start=0.1, - # final_div_factor=1 / 25 / lrf) - - # EMA - ema = ModelEMA(model) if RANK in {-1, 0} else None - - # DDP mode - if cuda and RANK != -1: - model = smart_DDP(model) - - # Train - t0 = time.time() - criterion = smartCrossEntropyLoss(label_smoothing=opt.label_smoothing) # loss function - best_fitness = 0.0 - scaler = amp.GradScaler(enabled=cuda) - val = test_dir.stem # 'val' or 'test' - LOGGER.info(f'Image sizes {imgsz} train, {imgsz} test\n' - f'Using {nw * WORLD_SIZE} dataloader workers\n' - f"Logging results to {colorstr('bold', save_dir)}\n" - f'Starting {opt.model} training on {data} dataset with {nc} classes for {epochs} epochs...\n\n' - f"{'Epoch':>10}{'GPU_mem':>10}{'train_loss':>12}{f'{val}_loss':>12}{'top1_acc':>12}{'top5_acc':>12}") - for epoch in range(epochs): # loop over the dataset multiple times - tloss, vloss, fitness = 0.0, 0.0, 0.0 # train loss, val loss, fitness - model.train() - if RANK != -1: - trainloader.sampler.set_epoch(epoch) - pbar = enumerate(trainloader) - if RANK in {-1, 0}: - pbar = tqdm(enumerate(trainloader), total=len(trainloader), bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') - for i, (images, labels) in pbar: # progress bar - images, labels = images.to(device, non_blocking=True), labels.to(device) - - # Forward - with amp.autocast(enabled=cuda): # stability issues when enabled - loss = criterion(model(images), labels) - - # Backward - scaler.scale(loss).backward() - - # Optimize - scaler.unscale_(optimizer) # unscale gradients - torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0) # clip gradients - scaler.step(optimizer) - scaler.update() - optimizer.zero_grad() - if ema: - ema.update(model) - - if RANK in {-1, 0}: - # Print - tloss = (tloss * i + loss.item()) / (i + 1) # update mean losses - mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB) - pbar.desc = f"{f'{epoch + 1}/{epochs}':>10}{mem:>10}{tloss:>12.3g}" + ' ' * 36 - - # Test - if i == len(pbar) - 1: # last batch - top1, top5, vloss = validate.run(model=ema.ema, - dataloader=testloader, - criterion=criterion, - 
pbar=pbar) # test accuracy, loss - fitness = top1 # define fitness as top1 accuracy - - # Scheduler - scheduler.step() - - # Log metrics - if RANK in {-1, 0}: - # Best fitness - if fitness > best_fitness: - best_fitness = fitness - - # Log - metrics = { - "train/loss": tloss, - f"{val}/loss": vloss, - "metrics/accuracy_top1": top1, - "metrics/accuracy_top5": top5, - "lr/0": optimizer.param_groups[0]['lr']} # learning rate - logger.log_metrics(metrics, epoch) - - # Save model - final_epoch = epoch + 1 == epochs - if (not opt.nosave) or final_epoch: - ckpt = { - 'epoch': epoch, - 'best_fitness': best_fitness, - 'model': deepcopy(ema.ema).half(), # deepcopy(de_parallel(model)).half(), - 'ema': None, # deepcopy(ema.ema).half(), - 'updates': ema.updates, - 'optimizer': None, # optimizer.state_dict(), - 'opt': vars(opt), - 'date': datetime.now().isoformat()} - - # Save last, best and delete - torch.save(ckpt, last) - if best_fitness == fitness: - torch.save(ckpt, best) - del ckpt - - # Train complete - if RANK in {-1, 0} and final_epoch: - LOGGER.info(f'\nTraining complete ({(time.time() - t0) / 3600:.3f} hours)' - f"\nResults saved to {colorstr('bold', save_dir)}" - f"\nPredict: python classify/predict.py --weights {best} --source im.jpg" - f"\nValidate: python classify/val.py --weights {best} --data {data_dir}" - f"\nExport: python export.py --weights {best} --include onnx" - f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{best}')" - f"\nVisualize: https://netron.app\n") - - # Plot examples - images, labels = (x[:25] for x in next(iter(testloader))) # first 25 images and labels - pred = torch.max(ema.ema(images.to(device)), 1)[1] - file = imshow_cls(images, labels, pred, model.names, verbose=False, f=save_dir / 'test_images.jpg') - - # Log results - meta = {"epochs": epochs, "top1_acc": best_fitness, "date": datetime.now().isoformat()} - logger.log_images(file, name='Test Examples (true-predicted)', epoch=epoch) - logger.log_model(best, epochs, metadata=meta) - - -def parse_opt(known=False): - parser = argparse.ArgumentParser() - parser.add_argument('--model', type=str, default='yolov5s-cls.pt', help='initial weights path') - parser.add_argument('--data', type=str, default='imagenette160', help='cifar10, cifar100, mnist, imagenet, ...') - parser.add_argument('--epochs', type=int, default=10, help='total training epochs') - parser.add_argument('--batch-size', type=int, default=64, help='total batch size for all GPUs') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=224, help='train, val image size (pixels)') - parser.add_argument('--nosave', action='store_true', help='only save final checkpoint') - parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') - parser.add_argument('--project', default=ROOT / 'runs/train-cls', help='save to project/name') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--pretrained', nargs='?', const=True, default=True, help='start from i.e. 
--pretrained False') - parser.add_argument('--optimizer', choices=['SGD', 'Adam', 'AdamW', 'RMSProp'], default='Adam', help='optimizer') - parser.add_argument('--lr0', type=float, default=0.001, help='initial learning rate') - parser.add_argument('--decay', type=float, default=5e-5, help='weight decay') - parser.add_argument('--label-smoothing', type=float, default=0.1, help='Label smoothing epsilon') - parser.add_argument('--cutoff', type=int, default=None, help='Model layer cutoff index for Classify() head') - parser.add_argument('--dropout', type=float, default=None, help='Dropout (fraction)') - parser.add_argument('--verbose', action='store_true', help='Verbose mode') - parser.add_argument('--seed', type=int, default=0, help='Global training seed') - parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify') - return parser.parse_known_args()[0] if known else parser.parse_args() - - -def main(opt): - # Checks - if RANK in {-1, 0}: - print_args(vars(opt)) - check_git_status() - check_requirements() - - # DDP mode - device = select_device(opt.device, batch_size=opt.batch_size) - if LOCAL_RANK != -1: - assert opt.batch_size != -1, 'AutoBatch is coming soon for classification, please pass a valid --batch-size' - assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE' - assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command' - torch.cuda.set_device(LOCAL_RANK) - device = torch.device('cuda', LOCAL_RANK) - dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo") - - # Parameters - opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok) # increment run - - # Train - train(opt, device) - - -def run(**kwargs): - # Usage: from yolov5 import classify; classify.train.run(data=mnist, imgsz=320, model='yolov5m') - opt = parse_opt(True) - for k, v in kwargs.items(): - setattr(opt, k, v) - main(opt) - return opt - - -if __name__ == "__main__": - opt = parse_opt() - main(opt) diff --git a/spaces/msafi04/abstractive_summarization/app.py b/spaces/msafi04/abstractive_summarization/app.py deleted file mode 100644 index 9ce86cf8fdbfd817e6476950405e4961df21e443..0000000000000000000000000000000000000000 --- a/spaces/msafi04/abstractive_summarization/app.py +++ /dev/null @@ -1,14 +0,0 @@ -import gradio as gr -from transformers import PegasusForConditionalGeneration -from transformers import PegasusTokenizer -from transformers import pipeline - -model_name = "google/pegasus-xsum" -pegasus_tokenizer = PegasusTokenizer.from_pretrained(model_name) -def summarize(input_text): - nwords=len(input_text.split(" ")) - # Define summarization pipeline - summarizer = pipeline("summarization", model=model_name, tokenizer=pegasus_tokenizer,min_length=int(nwords/10)+10, max_length=int(nwords/5+10), framework="pt") - summary=summarizer(input_text)[0]['summary_text'] - return(summary) -gr.Interface(fn=summarize,inputs=gr.inputs.Textbox(placeholder="Paste the text to be summarized here..."),outputs="textbox").launch(); \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/data/data_utils.py b/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/data/data_utils.py deleted file mode 100644 index cc4729e63c8ef551b29617d1169a44c24f509ad0..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/data/data_utils.py +++ /dev/null @@ -1,100 +0,0 @@ -# 
Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def calc_mean_invstddev(feature): - if len(feature.size()) != 2: - raise ValueError("We expect the input feature to be 2-D tensor") - mean = feature.mean(0) - var = feature.var(0) - # avoid division by ~zero - eps = 1e-8 - if (var < eps).any(): - return mean, 1.0 / (torch.sqrt(var) + eps) - return mean, 1.0 / torch.sqrt(var) - - -def apply_mv_norm(features): - # If there is less than 2 spectrograms, the variance cannot be computed (is NaN) - # and normalization is not possible, so return the item as it is - if features.size(0) < 2: - return features - mean, invstddev = calc_mean_invstddev(features) - res = (features - mean) * invstddev - return res - - -def lengths_to_encoder_padding_mask(lengths, batch_first=False): - """ - convert lengths (a 1-D Long/Int tensor) to 2-D binary tensor - - Args: - lengths: a (B, )-shaped tensor - - Return: - max_length: maximum length of B sequences - encoder_padding_mask: a (max_length, B) binary mask, where - [t, b] = 0 for t < lengths[b] and 1 otherwise - - TODO: - kernelize this function if benchmarking shows this function is slow - """ - max_lengths = torch.max(lengths).item() - bsz = lengths.size(0) - encoder_padding_mask = torch.arange( - max_lengths - ).to( # a (T, ) tensor with [0, ..., T-1] - lengths.device - ).view( # move to the right device - 1, max_lengths - ).expand( # reshape to (1, T)-shaped tensor - bsz, -1 - ) >= lengths.view( # expand to (B, T)-shaped tensor - bsz, 1 - ).expand( - -1, max_lengths - ) - if not batch_first: - return encoder_padding_mask.t(), max_lengths - else: - return encoder_padding_mask, max_lengths - - -def encoder_padding_mask_to_lengths( - encoder_padding_mask, max_lengths, batch_size, device -): - """ - convert encoder_padding_mask (2-D binary tensor) to a 1-D tensor - - Conventionally, encoder output contains a encoder_padding_mask, which is - a 2-D mask in a shape (T, B), whose (t, b) element indicate whether - encoder_out[t, b] is a valid output (=0) or not (=1). 
Occasionally, we - need to convert this mask tensor to a 1-D tensor in shape (B, ), where - [b] denotes the valid length of b-th sequence - - Args: - encoder_padding_mask: a (T, B)-shaped binary tensor or None; if None, - indicating all are valid - Return: - seq_lengths: a (B,)-shaped tensor, where its (b, )-th element is the - number of valid elements of b-th sequence - - max_lengths: maximum length of all sequence, if encoder_padding_mask is - not None, max_lengths must equal to encoder_padding_mask.size(0) - - batch_size: batch size; if encoder_padding_mask is - not None, max_lengths must equal to encoder_padding_mask.size(1) - - device: which device to put the result on - """ - if encoder_padding_mask is None: - return torch.Tensor([max_lengths] * batch_size).to(torch.int32).to(device) - - assert encoder_padding_mask.size(0) == max_lengths, "max_lengths does not match" - assert encoder_padding_mask.size(1) == batch_size, "batch_size does not match" - - return max_lengths - torch.sum(encoder_padding_mask, dim=0) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/caption/ofa_ratacaption_vqa_caption_stage_1_lr1e5.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/caption/ofa_ratacaption_vqa_caption_stage_1_lr1e5.sh deleted file mode 100644 index d6d893a5c3283c95bae7037ac6a5f3f7b26cf9fe..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/caption/ofa_ratacaption_vqa_caption_stage_1_lr1e5.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=ofa_ratacaption_vqa_caption_stage_1_lr1e5 -#SBATCH --nodes=1 -#SBATCH --ntasks=1 -#SBATCH --gpus=8 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -####SBATCH --nodelist=x1004c4s2b0n0 -#SBATCH --time=24:00:00 -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/ofa_ratacaption_vqa_caption_stage_1_lr1e5.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/ratatouille/caption/ofa_ratacaption_vqa_caption_stage_1_lr1e5.sh - - diff --git a/spaces/mueller-franzes/medfusion-app/tests/models/time_embedders/test.py b/spaces/mueller-franzes/medfusion-app/tests/models/time_embedders/test.py deleted file mode 100644 index c4eb5211f200aba0bacf1d2ac15803e796e7f783..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/tests/models/time_embedders/test.py +++ /dev/null @@ -1,18 +0,0 @@ -import torch -from medical_diffusion.models.embedders import TimeEmbbeding, SinusoidalPosEmb, LabelEmbedder - -cond_emb = LabelEmbedder(10, num_classes=2) -c = torch.tensor([[0,], [1,]]) -v = cond_emb(c) -print(v) - - -tim_emb = SinusoidalPosEmb(20, max_period=10) -t = torch.tensor([1,2,3, 1000]) -v = tim_emb(t) -print(v) - -tim_emb = TimeEmbbeding(4*4, SinusoidalPosEmb, {'max_period':10}) -t = torch.tensor([1,2,3, 1000]) -v = tim_emb(t) -print(v) \ No newline at end of file diff --git a/spaces/nakas/MusicGenDemucs/MODEL_CARD.md 
b/spaces/nakas/MusicGenDemucs/MODEL_CARD.md deleted file mode 100644 index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/MODEL_CARD.md +++ /dev/null @@ -1,81 +0,0 @@ -# MusicGen Model Card - -## Model details - -**Organization developing the model:** The FAIR team of Meta AI. - -**Model date:** MusicGen was trained between April 2023 and May 2023. - -**Model version:** This is the version 1 of the model. - -**Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters ; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation. - -**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv]. - -**Citation details** See [our paper][arxiv] - -**License** Code is released under MIT, model weights are released under CC-BY-NC 4.0. - -**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. - -## Intended use -**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - -- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science -- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs - -**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models. - -**Out-of-scope use cases** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. - -## Metrics - -**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark: - -- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) -- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) -- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model - -Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes: - -- Overall quality of the music samples; -- Text relevance to the provided text input; -- Adherence to the melody for melody-guided music generation. - -More details on performance measures and human studies can be found in the paper. - -**Decision thresholds:** Not applicable. - -## Evaluation datasets - -The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. 
- -## Training datasets - -The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. - -## Quantitative analysis - -More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section. - -## Limitations and biases - -**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model. - -**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). - -**Limitations:** - -- The model is not able to generate realistic vocals. -- The model has been trained with English descriptions and will not perform as well in other languages. -- The model does not perform equally well for all music styles and cultures. -- The model sometimes generates end of songs, collapsing to silence. -- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. - -**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. - -**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data. - -**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. 
- -[arxiv]: https://arxiv.org/abs/2306.05284 diff --git a/spaces/naver/SuperFeatures/how/stages/evaluate.py b/spaces/naver/SuperFeatures/how/stages/evaluate.py deleted file mode 100644 index 931db98fd2e8398fd7f8041be4021fa094c5d47a..0000000000000000000000000000000000000000 --- a/spaces/naver/SuperFeatures/how/stages/evaluate.py +++ /dev/null @@ -1,314 +0,0 @@ -"""Implements evaluation of trained models""" - -import time -import warnings -from pathlib import Path -import pickle -import numpy as np -import torch -from torchvision import transforms -from PIL import ImageFile - -from cirtorch.datasets.genericdataset import ImagesFromList - -from asmk import asmk_method, kernel as kern_pkg -from ..networks import how_net -from ..utils import score_helpers, data_helpers, logging - -ImageFile.LOAD_TRUNCATED_IMAGES = True -warnings.filterwarnings("ignore", r"^Possibly corrupt EXIF data", category=UserWarning) - - -def evaluate_demo(demo_eval, evaluation, globals): - """Demo evaluating a trained network - - :param dict demo_eval: Demo-related options - :param dict evaluation: Evaluation-related options - :param dict globals: Global options - """ - globals["device"] = torch.device("cpu") - if demo_eval['gpu_id'] is not None: - globals["device"] = torch.device(("cuda:%s" % demo_eval['gpu_id'])) - - # Handle net_path when directory - net_path = Path(demo_eval['exp_folder']) / demo_eval['net_path'] - if net_path.is_dir() and (net_path / "epochs/model_best.pth").exists(): - net_path = net_path / "epochs/model_best.pth" - - # Load net - state = _convert_checkpoint(torch.load(net_path, map_location='cpu')) - net = how_net.init_network(**state['net_params']).to(globals['device']) - net.load_state_dict(state['state_dict']) - globals["transform"] = transforms.Compose([transforms.ToTensor(), \ - transforms.Normalize(**dict(zip(["mean", "std"], net.runtime['mean_std'])))]) - - # Eval - if evaluation['global_descriptor']['datasets']: - eval_global(net, evaluation['inference'], globals, **evaluation['global_descriptor']) - - if evaluation['multistep']: - eval_asmk_multistep(net, evaluation['inference'], evaluation['multistep'], globals, **evaluation['local_descriptor']) - elif evaluation['local_descriptor']['datasets']: - eval_asmk(net, evaluation['inference'], globals, **evaluation['local_descriptor']) - - -def eval_global(net, inference, globals, *, datasets): - """Evaluate global descriptors""" - net.eval() - time0 = time.time() - logger = globals["logger"] - logger.info("Starting global evaluation") - - results = {} - for dataset in datasets: - images, qimages, bbxs, gnd = data_helpers.load_dataset(dataset, data_root=globals['root_path']) - logger.info(f"Evaluating {dataset}") - - with logging.LoggingStopwatch("extracting database images", logger.info, logger.debug): - dset = ImagesFromList(root='', images=images, imsize=inference['image_size'], bbxs=None, - transform=globals['transform']) - vecs = how_net.extract_vectors(net, dset, globals["device"], scales=inference['scales']) - with logging.LoggingStopwatch("extracting query images", logger.info, logger.debug): - qdset = ImagesFromList(root='', images=qimages, imsize=inference['image_size'], bbxs=bbxs, - transform=globals['transform']) - qvecs = how_net.extract_vectors(net, qdset, globals["device"], scales=inference['scales']) - - vecs, qvecs = vecs.numpy(), qvecs.numpy() - ranks = np.argsort(-np.dot(vecs, qvecs.T), axis=0) - results[dataset] = score_helpers.compute_map_and_log(dataset, ranks, gnd, logger=logger) - - logger.info(f"Finished global 
evaluation in {int(time.time()-time0) // 60} min") - return results - - -def eval_asmk(net, inference, globals, *, datasets, codebook_training, asmk): - """Evaluate local descriptors with ASMK""" - net.eval() - time0 = time.time() - logger = globals["logger"] - logger.info("Starting asmk evaluation") - - asmk = asmk_method.ASMKMethod.initialize_untrained(asmk) - asmk = asmk_train_codebook(net, inference, globals, logger, codebook_training=codebook_training, - asmk=asmk, cache_path=None) - - results = {} - for dataset in datasets: - dataset_name = dataset if isinstance(dataset, str) else dataset['name'] - images, qimages, bbxs, gnd = data_helpers.load_dataset(dataset, data_root=globals['root_path']) - logger.info(f"Evaluating '{dataset_name}'") - - asmk_dataset = asmk_index_database(net, inference, globals, logger, asmk=asmk, images=images) - asmk_query_ivf(net, inference, globals, logger, dataset=dataset, asmk_dataset=asmk_dataset, - qimages=qimages, bbxs=bbxs, gnd=gnd, results=results, - cache_path=globals["exp_path"] / "query_results.pkl") - - logger.info(f"Finished asmk evaluation in {int(time.time()-time0) // 60} min") - return results - - -def eval_asmk_multistep(net, inference, multistep, globals, *, datasets, codebook_training, asmk): - """Evaluate local descriptors with ASMK""" - valid_steps = ["train_codebook", "aggregate_database", "build_ivf", "query_ivf", "aggregate_build_query"] - assert multistep['step'] in valid_steps, multistep['step'] - - net.eval() - time0 = time.time() - logger = globals["logger"] - (globals["exp_path"] / "eval").mkdir(exist_ok=True) - logger.info(f"Starting asmk evaluation step '{multistep['step']}'") - - # Handle partitioning - partition = {"suffix": "", "norm_start": 0, "norm_end": 1} - if multistep.get("partition"): - total, index = multistep['partition'] - partition = {"suffix": f":{total}_{str(index).zfill(len(str(total-1)))}", - "norm_start": index / total, - "norm_end": (index+1) / total} - if multistep['step'] == "aggregate_database" or multistep['step'] == "query_ivf": - logger.info(f"Processing partition '{total}_{index}'") - - # Handle distractors - distractors_path = None - distractors = multistep.get("distractors") - if distractors: - distractors_path = globals["exp_path"] / f"eval/{distractors}.ivf.pkl" - - # Train codebook - asmk = asmk_method.ASMKMethod.initialize_untrained(asmk) - cdb_path = globals["exp_path"] / "eval/codebook.pkl" - if multistep['step'] == "train_codebook": - asmk_train_codebook(net, inference, globals, logger, codebook_training=codebook_training, - asmk=asmk, cache_path=cdb_path) - return None - - asmk = asmk.train_codebook(None, cache_path=cdb_path) - - results = {} - for dataset in datasets: - dataset_name = database_name = dataset if isinstance(dataset, str) else dataset['name'] - if distractors and multistep['step'] != "aggregate_database": - dataset_name = f"{distractors}_{database_name}" - images, qimages, bbxs, gnd = data_helpers.load_dataset(dataset, data_root=globals['root_path']) - logger.info(f"Processing dataset '{dataset_name}'") - - # Infer database - if multistep['step'] == "aggregate_database": - agg_path = globals["exp_path"] / f"eval/{database_name}.agg{partition['suffix']}.pkl" - asmk_aggregate_database(net, inference, globals, logger, asmk=asmk, images=images, - partition=partition, cache_path=agg_path) - - # Build ivf - elif multistep['step'] == "build_ivf": - ivf_path = globals["exp_path"] / f"eval/{dataset_name}.ivf.pkl" - asmk_build_ivf(globals, logger, asmk=asmk, cache_path=ivf_path, 
database_name=database_name, - distractors=distractors, distractors_path=distractors_path) - - # Query ivf - elif multistep['step'] == "query_ivf": - asmk_dataset = asmk.build_ivf(None, None, cache_path=globals["exp_path"] / f"eval/{dataset_name}.ivf.pkl") - start, end = int(len(qimages)*partition['norm_start']), int(len(qimages)*partition['norm_end']) - bbxs = bbxs[start:end] if bbxs is not None else None - results_path = globals["exp_path"] / f"eval/{dataset_name}.results{partition['suffix']}.pkl" - asmk_query_ivf(net, inference, globals, logger, dataset=dataset, asmk_dataset=asmk_dataset, - qimages=qimages[start:end], bbxs=bbxs, gnd=gnd, results=results, - cache_path=results_path, imid_offset=start) - - # All 3 dataset steps - elif multistep['step'] == "aggregate_build_query": - if multistep.get("partition"): - raise NotImplementedError("Partitions within step 'aggregate_build_query' are not" \ - " supported, use separate steps") - results_path = globals["exp_path"] / "query_results.pkl" - if gnd is None and results_path.exists(): - logger.debug("Step results already exist") - continue - asmk_dataset = asmk_index_database(net, inference, globals, logger, asmk=asmk, images=images, - distractors_path=distractors_path) - asmk_query_ivf(net, inference, globals, logger, dataset=dataset, asmk_dataset=asmk_dataset, - qimages=qimages, bbxs=bbxs, gnd=gnd, results=results, cache_path=results_path) - - logger.info(f"Finished asmk evaluation step '{multistep['step']}' in {int(time.time()-time0) // 60} min") - return results - -# -# Separate steps -# - -def asmk_train_codebook(net, inference, globals, logger, *, codebook_training, asmk, cache_path): - """Asmk evaluation step 'train_codebook'""" - if cache_path and cache_path.exists(): - return asmk.train_codebook(None, cache_path=cache_path) - - images = data_helpers.load_dataset('train', data_root=globals['root_path'])[0] - images = images[:codebook_training['images']] - dset = ImagesFromList(root='', images=images, imsize=inference['image_size'], bbxs=None, - transform=globals['transform']) - infer_opts = {"scales": codebook_training['scales'], "features_num": inference['features_num']} - des_train = how_net.extract_vectors_local(net, dset, globals["device"], **infer_opts)[0] - asmk = asmk.train_codebook(des_train, cache_path=cache_path) - logger.info(f"Codebook trained in {asmk.metadata['train_codebook']['train_time']:.1f}s") - return asmk - -def asmk_aggregate_database(net, inference, globals, logger, *, asmk, images, partition, cache_path): - """Asmk evaluation step 'aggregate_database'""" - if cache_path.exists(): - logger.debug("Step results already exist") - return - codebook = asmk.codebook - kernel = kern_pkg.ASMKKernel(codebook, **asmk.params['build_ivf']['kernel']) - start, end = int(len(images)*partition['norm_start']), int(len(images)*partition['norm_end']) - data_opts = {"imsize": inference['image_size'], "transform": globals['transform']} - infer_opts = {"scales": inference['scales'], "features_num": inference['features_num']} - # Aggregate database - dset = ImagesFromList(root='', images=images[start:end], bbxs=None, **data_opts) - vecs, imids, *_ = how_net.extract_vectors_local(net, dset, globals["device"], **infer_opts) - imids += start - quantized = codebook.quantize(vecs, imids, **asmk.params["build_ivf"]["quantize"]) - aggregated = kernel.aggregate(*quantized, **asmk.params["build_ivf"]["aggregate"]) - with cache_path.open("wb") as handle: - pickle.dump(dict(zip(["des", "word_ids", "image_ids"], aggregated)), handle) - -def 
asmk_build_ivf(globals, logger, *, asmk, cache_path, database_name, distractors, distractors_path): - """Asmk evaluation step 'build_ivf'""" - if cache_path.exists(): - logger.debug("Step results already exist") - return asmk.build_ivf(None, None, cache_path=cache_path) - builder = asmk.create_ivf_builder(cache_path=cache_path) - # Build ivf - if not builder.loaded_from_cache: - if distractors: - builder.initialize_with_distractors(distractors_path) - logger.debug(f"Loaded ivf with distractors '{distractors}'") - for path in sorted(globals["exp_path"].glob(f"eval/{database_name}.agg*.pkl")): - with path.open("rb") as handle: - des = pickle.load(handle) - builder.ivf.add(des['des'], des['word_ids'], des['image_ids']) - logger.info(f"Indexed '{path.name}'") - asmk_dataset = asmk.add_ivf_builder(builder) - logger.debug(f"IVF stats: {asmk_dataset.metadata['build_ivf']['ivf_stats']}") - return asmk_dataset - -def asmk_index_database(net, inference, globals, logger, *, asmk, images, distractors_path=None): - """Asmk evaluation step 'aggregate_database' and 'build_ivf'""" - data_opts = {"imsize": inference['image_size'], "transform": globals['transform']} - infer_opts = {"scales": inference['scales'], "features_num": inference['features_num']} - # Index database vectors - dset = ImagesFromList(root='', images=images, bbxs=None, **data_opts) - vecs, imids, *_ = how_net.extract_vectors_local(net, dset, globals["device"], **infer_opts) - asmk_dataset = asmk.build_ivf(vecs, imids, distractors_path=distractors_path) - logger.info(f"Indexed images in {asmk_dataset.metadata['build_ivf']['index_time']:.2f}s") - logger.debug(f"IVF stats: {asmk_dataset.metadata['build_ivf']['ivf_stats']}") - return asmk_dataset - -def asmk_query_ivf(net, inference, globals, logger, *, dataset, asmk_dataset, qimages, bbxs, gnd, - results, cache_path, imid_offset=0): - """Asmk evaluation step 'query_ivf'""" - if gnd is None and cache_path and cache_path.exists(): - logger.debug("Step results already exist") - return - data_opts = {"imsize": inference['image_size'], "transform": globals['transform']} - infer_opts = {"scales": inference['scales'], "features_num": inference['features_num']} - # Query vectors - qdset = ImagesFromList(root='', images=qimages, bbxs=bbxs, **data_opts) - qvecs, qimids, *_ = how_net.extract_vectors_local(net, qdset, globals["device"], **infer_opts) - qimids += imid_offset - metadata, query_ids, ranks, scores = asmk_dataset.query_ivf(qvecs, qimids) - logger.debug(f"Average query time (quant+aggr+search) is {metadata['query_avg_time']:.3f}s") - # Evaluate - if gnd is not None: - results[dataset] = score_helpers.compute_map_and_log(dataset, ranks.T, gnd, logger=logger) - with cache_path.open("wb") as handle: - pickle.dump({"metadata": metadata, "query_ids": query_ids, "ranks": ranks, "scores": scores}, handle) - -# -# Helpers -# - -def _convert_checkpoint(state): - """Enable loading checkpoints in the old format""" - if "_version" not in state: - # Old checkpoint format - meta = state['meta'] - state['net_params'] = { - "architecture": meta['architecture'], - "pretrained": True, - "skip_layer": meta['skip_layer'], - "dim_reduction": {"dim": meta["dim"]}, - "smoothing": {"kernel_size": meta["feat_pool_k"]}, - "runtime": { - "mean_std": [meta['mean'], meta['std']], - "image_size": 1024, - "features_num": 1000, - "scales": [2.0, 1.414, 1.0, 0.707, 0.5, 0.353, 0.25], - "training_scales": [1], - }, - } - - state_dict = state['state_dict'] - state_dict['dim_reduction.weight'] = state_dict.pop("whiten.weight") 
- state_dict['dim_reduction.bias'] = state_dict.pop("whiten.bias") - - state['_version'] = "how/2020" - - return state diff --git a/spaces/nbeuchat/actors_matching/pipeline/__init__.py b/spaces/nbeuchat/actors_matching/pipeline/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/A Court Of Thorns And Roses Epub Torrent.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/A Court Of Thorns And Roses Epub Torrent.md deleted file mode 100644 index 7bd94bcd255daeaa92e87e101ec34ed13f7ea3f3..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/A Court Of Thorns And Roses Epub Torrent.md +++ /dev/null @@ -1,13 +0,0 @@ - -

        How to Download A Court of Thorns and Roses by Sarah J. Maas in EPUB Format

        -

        A Court of Thorns and Roses by Sarah J. Maas is a bestselling fantasy romance novel that follows the story of Feyre, a young huntress who is taken captive by a faerie lord after killing a wolf in the woods. The novel is the first book in the Court of Thorns and Roses series, which has been praised for its rich world-building, complex characters, and steamy romance.

        -

        A Court Of Thorns And Roses Epub Torrent


        Download Zip ✓✓✓ https://urlcod.com/2uIaGV



        -

        If you are looking for a way to download A Court of Thorns and Roses by Sarah J. Maas in EPUB format, you have come to the right place. EPUB is a popular e-book format that can be read on various devices, such as smartphones, tablets, e-readers, and computers. EPUB files are also smaller than PDF files, which means they take up less space and load faster.

        -

        There are several ways to download A Court of Thorns and Roses by Sarah J. Maas in EPUB format, but not all of them are legal or safe. Some websites may offer free downloads of pirated copies, which can expose you to malware, viruses, or legal issues. Therefore, it is always advisable to download e-books from reputable sources that respect the author's rights and offer high-quality files.

        -

        One of the best ways to download A Court of Thorns and Roses by Sarah J. Maas in EPUB format is to purchase it from an online bookstore, such as Amazon, Barnes & Noble, or Kobo. These websites offer secure payment methods and allow you to download the e-book instantly after purchase. You can also access your e-book library from any device with your account.

        -

        Another way to download A Court of Thorns and Roses by Sarah J. Maas in EPUB format is to borrow it from an online library, such as OverDrive or Libby. These websites allow you to access thousands of e-books for free with your library card or student ID. You can also browse by genre, popularity, or availability. However, you may have to wait for a copy if the e-book is in high demand or has a limited number of loans.

        -

        -

        A third way to download A Court of Thorns and Roses by Sarah J. Maas in EPUB format is to use a torrent client, such as BitTorrent or uTorrent. Torrents are peer-to-peer file-sharing networks that allow users to download files from each other. However, this method is not recommended for several reasons. First, downloading torrents may be illegal in some countries or regions, depending on the copyright laws and regulations. Second, downloading torrents may be risky for your device and personal information, as some files may contain malware, viruses, or spyware. Third, downloading torrents may be unethical and disrespectful to the author and publisher, as they do not receive any compensation for their work.

        -

        Therefore, if you want to download A Court of Thorns and Roses by Sarah J. Maas in EPUB format, the best option is to buy it from an online bookstore or borrow it from an online library. This way, you can enjoy reading this amazing novel without any hassle or guilt.

        7b8c122e87
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Aqvox Asio License Key.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Aqvox Asio License Key.md deleted file mode 100644 index a765714ca5de4a2961588130d19542bf70ba4028..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Aqvox Asio License Key.md +++ /dev/null @@ -1,27 +0,0 @@ - -Here is a possible title and article with seo optimization and html formatting for the keyword "Aqvox Asio License Key": - -

        How to Get the Best Sound Quality from Your USB DAC with Aqvox Asio License Key

        - -

        If you are an audiophile who owns a USB DAC or other audio device with a USB input, you might be wondering how to get the best sound quality from your computer. One of the most important factors is the software that handles the audio stream between your computer and your USB DAC. The default Windows or Mac drivers are not optimized for high-resolution audio and can introduce latency, jitter, noise, and other distortions that degrade the sound quality.

        - -

        Fortunately, there is a solution: Aqvox Asio License Key. This is a software that allows you to use the ASIO (Audio Stream Input/Output) protocol with your USB DAC. ASIO is a standard that bypasses the operating system's audio mixer and delivers the audio stream directly to your USB DAC, resulting in lower latency, higher bit depth, higher sample rate, and better synchronization. ASIO also supports multiple channels and devices, so you can use your USB DAC with other ASIO-compatible software and hardware.

        -

        Aqvox Asio License Key


        Download ===== https://urlcod.com/2uIclz



        - -

        Aqvox Asio License Key is a must-try for every owner of a USB DAC or other audio device with a USB input. It is compatible with most USB DACs and media players that support ASIO output. It is easy to install and use, and it offers a huge improvement in sound quality. You will hear more open and transparent sound, with better dynamics and more defined low-end. Of course, you will also need a good hifi set and audiophile music material to appreciate the difference.

        - -

        To get started with Aqvox Asio License Key, you need to follow these steps:

        - -
          -
        1. Check that your media player program has an ASIO output option. You can find a list of compatible media players here.
        2. Connect your USB DAC to your PC and start the Aqvox Asio Driver installation. You can download the free trial version here. Note that the trial version will produce a short disturbing signal every 60 seconds, which can be removed by entering a license key.
        3. The installation will take up to 2 minutes. Ignore any messages about missing signatures and reboot after finishing.
        4. Set the audio output options of your media player to Aqvox.com USB ASIO.
        5. Enjoy the improved sound quality!
        - -

        If you want to purchase the Aqvox Asio License Key, you can do so here. The license key will allow you to use the Aqvox Asio Driver without any limitations or interruptions. It will also support the development of this software and ensure its compatibility with future updates and devices.

        - -

        Aqvox Asio License Key is the best way to enhance your listening experience with your USB DAC or other audio device with a USB input. It will make your computer sound like a high-end audio system and bring out the best in your music. Don't settle for less than optimal sound quality, try Aqvox Asio License Key today!

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Macro Express Pro 4.2.1.1 Portable ((FULL)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Macro Express Pro 4.2.1.1 Portable ((FULL)).md deleted file mode 100644 index 06255608b28dbccd4101b58fff885fe274ec56bc..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Macro Express Pro 4.2.1.1 Portable ((FULL)).md +++ /dev/null @@ -1,8 +0,0 @@ - -

        Macro Express Pro 4.2.1.1 Portable: A Powerful Tool for Automating Tasks on Your Computer

        -

        Do you want to save time and effort by automating repetitive tasks on your computer? Do you want to create custom shortcuts for your favorite programs, websites, or commands? Do you want a portable tool that you can use on any computer without installing anything? If you answered yes to any of these questions, then you should try Macro Express Pro 4.2.1.1 Portable.

        -

        Macro Express Pro is a piece of software that lets you create and run macros on your computer.
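        To illustrate the general idea of a macro (a predefined sequence of actions fired by a single hotkey), here is a minimal Python sketch using the third-party pynput library. This is only a generic illustration and has nothing to do with Macro Express Pro itself or its scripting language.

```python
# Minimal sketch of the general idea behind a macro (assumes the third-party
# "pynput" library is installed; this is a generic illustration, not Macro
# Express Pro). Pressing Ctrl+Alt+H triggers a small, predefined sequence.
from pynput import keyboard

def run_macro():
    # The "macro": in a real tool this could type text, launch a program,
    # open a website, and so on.
    print("Hotkey pressed - running the recorded sequence of actions")

with keyboard.GlobalHotKeys({"<ctrl>+<alt>+h": run_macro}) as listener:
    listener.join()
```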

        -

        Macro Express Pro 4.2.1.1 Portable


        Download ⇒⇒⇒ https://urlcod.com/2uIbBP



        -

        A macro is a sequence of actions that can be triggered by a single keystroke, mouse click

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Minecraft Republic City Map Download !!BETTER!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Minecraft Republic City Map Download !!BETTER!!.md deleted file mode 100644 index 3df3c64211c6bd76a421d76a546f5b2a737aa71b..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Minecraft Republic City Map Download !!BETTER!!.md +++ /dev/null @@ -1,24 +0,0 @@ -
        -

        Minecraft Republic City Map Download: How to Explore the World of Avatar in Minecraft

        -

        If you are a fan of the Avatar: The Last Airbender and The Legend of Korra animated series, you might be interested in downloading the Minecraft Republic City map. This map is a faithful recreation of the capital city of the United Republic of Nations, a modern metropolis that blends elements of different cultures and bending styles. In this article, we will show you how to download and install the Minecraft Republic City map, as well as give you some tips on how to enjoy it.

        -

        What is the Minecraft Republic City Map?

        -

        The Minecraft Republic City map is a project created by Leozaur, a talented Minecraft builder and Avatar fan. The map is based on the official concept art and screenshots from The Legend of Korra, as well as some fan-made additions and modifications. The map features many iconic locations from the show, such as Air Temple Island, Central City Station, Pro-bending Arena, City Hall, Avatar Korra Park, Future Industries Tower, and more. The map also includes some hidden secrets and easter eggs for fans to discover.

        -

        Minecraft Republic City Map Download


        Download https://urlcod.com/2uI9D6



        -Minecraft Republic City Map -

        How to Download and Install the Minecraft Republic City Map?

        -

        To download and install the Minecraft Republic City map, you will need to follow these steps:

        -
          -
        1. Go to the official page of the map on Planet Minecraft and click on the "Download Minecraft Map" button.
        2. Extract the downloaded zip file and copy the folder named "Republic City 1.8" into your Minecraft saves folder (a minimal helper script for this step is sketched after this list). You can find this folder by typing %appdata%\.minecraft\saves in your Windows search bar or by following this guide for other operating systems.
        3. Launch Minecraft and select "Singleplayer". You should see the "Republic City 1.8" map in your list of worlds. Click on it and press "Play Selected World".
        4. Enjoy exploring the world of Avatar in Minecraft!
        -
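        As a convenience for step 2, here is a minimal Python sketch that copies the extracted map folder into the Windows saves folder mentioned above. The folder name and the %appdata%\.minecraft\saves path come from the steps; the script itself is just an illustration and assumes you are on Windows with the zip already extracted.

```python
# Minimal helper for step 2 (assumes Windows and that the zip has already been
# extracted; the folder name "Republic City 1.8" and the saves path
# %appdata%\.minecraft\saves come from the installation steps above).
import os
import shutil
from pathlib import Path

extracted_map = Path("Republic City 1.8")                 # folder from the extracted zip
saves_dir = Path(os.environ["APPDATA"]) / ".minecraft" / "saves"

destination = saves_dir / extracted_map.name
if destination.exists():
    print(f"{destination} already exists - nothing to do")
else:
    shutil.copytree(extracted_map, destination)
    print(f"Copied map to {destination}")
```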

        How to Enjoy the Minecraft Republic City Map?

        -

        There are many ways to enjoy the Minecraft Republic City map, depending on your preferences and play style. Here are some suggestions:

        -
          -
        • If you want to experience the map as a tourist, you can use the /gamemode 3 command to enter spectator mode and fly around the city. You can also use the /tp command to teleport to specific coordinates or landmarks. You can find a list of coordinates and landmarks on the official page of the map.
        • If you want to roleplay as an Avatar character or create your own story, you can use the /gamemode 0 command to enter survival mode and interact with the environment. You can also use mods or plugins to add bending abilities, NPCs, quests, and more. Some examples of mods or plugins that are compatible with the map are Avatar Mod 2: Out of the Iceberg, ProjectKorra, Custom NPCs, and Citizens.
        • If you want to challenge yourself or compete with others, you can use the /gamemode 2 command to enter adventure mode and try

          -

          cec2833e83
          -
          -
          \ No newline at end of file diff --git a/spaces/nikhil5678/turkey-syria-earthquake-tweets/app.py b/spaces/nikhil5678/turkey-syria-earthquake-tweets/app.py deleted file mode 100644 index dab19fcf9340fdbb3f8430f8f21258a91763e540..0000000000000000000000000000000000000000 --- a/spaces/nikhil5678/turkey-syria-earthquake-tweets/app.py +++ /dev/null @@ -1,168 +0,0 @@ -import streamlit as st -import pandas as pd -import pickle -import streamlit as st -import matplotlib.pyplot as plt -import helper -import seaborn as sns - - -df = pickle.load(open('tweets.pkl', 'rb')) - -st.sidebar.image('cover.jpg') -st.sidebar.header("Turkey-Syria Earthquake Tweet's Analysis") - -selected = st.sidebar.radio( - 'select an option', - ('Overall', 'Language-Based Analysis', 'Source-Based Analysis') -) - -language_tweets = df['language'].value_counts().head(20).reset_index() -language_tweets.rename(columns={'language': 'Tweet Count', 'index': 'Language'}, inplace=True) -source_tweets = df['source'].value_counts().head(30).reset_index() -source_tweets.rename(columns={'source': 'Tweet Count', 'index': 'Source'}, inplace=True) - -if selected: - if selected == 'Overall': - st.header("Overall Analysis") - - df['isVerified'] = df['isVerified'].astype(int) - pie_plot_verified = df['isVerified'].value_counts() - pie_plot_verified.rename(index={0:'Unverified', 1:'Verified'}, inplace=True) - labels = 'Unverified', 'Verified' - st.subheader('Verified Handles') - - col1, col2 = st.columns(2) - with col1: - chart_data = pd.DataFrame(data=pie_plot_verified) - st.bar_chart(chart_data) - with col2: - # st.write('User-Ratio') - helper.plot_pie(pie_plot_verified, labels) - - st.subheader("# Trending Hash Tags") - hash_for_word_cloud = df.sort_values(by='followers_count', ascending=False).head(200)[ - 'hashtags'].reset_index() - df_wc = helper.word_cloud(hash_for_word_cloud, 'hashtags') - fig, ax = plt.subplots() - ax.imshow(df_wc) - st.pyplot(fig) - - tweets_per_day = df['day'].value_counts().reset_index() - tweets_per_day.rename(columns={'index': 'date', 'day': 'tweets'}, inplace=True) - - st.subheader('Tweets Everyday') - fig, (line_chart, freq_chart) = plt.subplots(figsize=(9, 6), ncols=2) - g = sns.lineplot(x="date", y="tweets", data=tweets_per_day, ax=line_chart) - g.set(xticks=list(range(6, 22))) - sns.heatmap(tweets_per_day, annot=True, cmap="Reds_r", - linewidths=2, ax=freq_chart) - st.pyplot(fig) - - st.subheader('Hashtag Trends Each Day (Heatmap)') - helper.plot_heatmap() - - col3, col4 = st.columns(2) - with col3: - st.subheader('Most Used Languages') - helper.plot_bar_chart(language_tweets.head(10)) - with col4: - st.subheader('Most Used Sources') - helper.plot_bar_chart(source_tweets.head(10)) - - if selected == 'Language-Based Analysis': - st.header("Language-Based Analysis") - - unique_lang = df['language'].value_counts().head(10).reset_index() - option = st.sidebar.selectbox( - 'select the language', - unique_lang['index'] - ) - - st.subheader('Tweets per Language (Top 20)') - helper.plot_bar_chart(language_tweets) - - lang_df = df[df['language'] == option] - cnt_lang_df = lang_df['day'].value_counts().reset_index() - cnt_lang_df.rename(columns={'index': 'date', 'day': 'freq'}, inplace=True) - - st.subheader('Tweets Everyday') - st.write(option) - fig, (line_chart, freq_chart) = plt.subplots(figsize=(9, 6), ncols=2) - g = sns.lineplot(x="date", y="freq", data=cnt_lang_df, ax=line_chart) - g.set(xticks=list(range(6, 22))) - sns.heatmap(cnt_lang_df, annot=True, cmap="Blues", - linewidths=2, ax=freq_chart) - 
st.pyplot(fig) - - verified_lang_users = lang_df['isVerified'].astype('int').value_counts() - verified_lang_users.rename(index={0: 'Unverified', 1: 'Verified'}, inplace=True) - verified_df_lang = pd.DataFrame(verified_lang_users) - temp = verified_df_lang.rename(columns={'index':'Users', 'isVerified':'Tweets'}) - lang_users = temp.reset_index() - labels = 'Unverified', 'Verified' - st.subheader('Verified Handles') - st.write(option) - col5, col6 = st.columns(2) - with col5: - helper.plot_bar_chart(lang_users) - with col6: - # st.write('User-Ratio') - helper.plot_pie(verified_lang_users, labels) - - st.subheader("Most Occured Words") - hash_for_word_cloud = lang_df.sort_values(by='followers_count', ascending=False).head(200)[ - 'content'].reset_index() - df_wc = helper.word_cloud(hash_for_word_cloud, 'content') - fig, ax = plt.subplots() - ax.imshow(df_wc) - st.pyplot(fig) - - if selected == 'Source-Based Analysis': - st.header("Source-Based Analysis") - - unique_source = df['source'].value_counts().head(10).reset_index() - option = st.sidebar.selectbox( - 'select the language', - unique_source['index'] - ) - - st.subheader('Tweets per Source (Top 30)') - helper.plot_bar_chart(source_tweets) - - source_df = df[df['source'] == option] - cnt_src_df = source_df['day'].value_counts().reset_index() - cnt_src_df.rename(columns={'index': 'date', 'day': 'freq'}, inplace=True) - - st.subheader('Tweets Everyday') - st.write(option) - fig, (line_chart, freq_chart) = plt.subplots(figsize=(9, 6), ncols=2) - g = sns.lineplot(x="date", y="freq", data=cnt_src_df, ax=line_chart) - g.set(xticks=list(range(6, 22))) - sns.heatmap(cnt_src_df, annot=True, cmap="Blues", - linewidths=2, ax=freq_chart) - st.pyplot(fig) - - verified_src_users = source_df['isVerified'].astype('int').value_counts() - verified_src_users.rename(index={0: 'Unverified', 1: 'Verified'}, inplace=True) - verified_df_src = pd.DataFrame(verified_src_users) - temp = verified_df_src.rename(columns={'index': 'Users', 'isVerified': 'Tweets'}) - src_users = temp.reset_index() - labels = 'Unverified', 'Verified' - st.subheader('Verified Handles') - st.write(option) - col5, col6 = st.columns(2) - with col5: - helper.plot_bar_chart(src_users) - with col6: - # st.write('User-Ratio') - helper.plot_pie(verified_src_users, labels) - - st.subheader("Most Occured Words") - hash_for_word_cloud = source_df.sort_values(by='followers_count', ascending=False).head(200)[ - 'content'].reset_index() - df_wc = helper.word_cloud(hash_for_word_cloud, 'content') - fig, ax = plt.subplots() - ax.imshow(df_wc) - st.pyplot(fig) - diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/engine/__init__.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/engine/__init__.py deleted file mode 100644 index e6e4d673dedd10419b612755cfcb9744fc4999f8..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/engine/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -from .launch import * -from .train_loop import * - -__all__ = [k for k in globals().keys() if not k.startswith("_")] - - -# prefer to let hooks and defaults live in separate namespaces (therefore not in __all__) -# but still make them available here -from .hooks import * -from .defaults import ( - create_ddp_model, - default_argument_parser, - default_setup, - default_writers, - DefaultPredictor, - DefaultTrainer, -) diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/styles.js b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/styles.js deleted file mode 100644 index eb9ee9a49b4eeb07bd7098772b15558fb30b20be..0000000000000000000000000000000000000000 --- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/styles.js +++ /dev/null @@ -1,137 +0,0 @@ -/** - * Copyright (c) 2003-2020, CKSource - Frederico Knabben. All rights reserved. - * For licensing, see LICENSE.md or https://ckeditor.com/legal/ckeditor-oss-license - */ - -// This file contains style definitions that can be used by CKEditor plugins. -// -// The most common use for it is the "stylescombo" plugin which shows the Styles drop-down -// list containing all styles in the editor toolbar. Other plugins, like -// the "div" plugin, use a subset of the styles for their features. -// -// If you do not have plugins that depend on this file in your editor build, you can simply -// ignore it. Otherwise it is strongly recommended to customize this file to match your -// website requirements and design properly. -// -// For more information refer to: https://ckeditor.com/docs/ckeditor4/latest/guide/dev_styles.html#style-rules - -CKEDITOR.stylesSet.add( 'default', [ - /* Block styles */ - - // These styles are already available in the "Format" drop-down list ("format" plugin), - // so they are not needed here by default. You may enable them to avoid - // placing the "Format" combo in the toolbar, maintaining the same features. - /**/ - { name: 'Paragraph', element: 'p' }, - - //{ name: 'Heading 1', element: 'h1' }, - { name: 'Heading 1', element: 'h2' }, - { name: 'Heading 2', element: 'h3' }, - { name: 'Heading 3', element: 'h4' }, - //{ name: 'Heading 5', element: 'h5' }, - //{ name: 'Heading 6', element: 'h6' }, - { name: 'Preformatted',element: 'pre' }, - { name: 'Div', element: 'div' }, - - - //{ name: 'Italic Title', element: 'h2', styles: { 'font-style': 'italic' } }, - //{ name: 'Subtitle', element: 'h3', styles: { 'color': '#aaa', 'font-style': 'italic' } }, - { - name: 'Container', - element: 'div', - styles: { - padding: '5px 10px', - background: '#eee', - border: '1px solid #ccc' - } - }, - //{ name: 'Code', element: 'codepre' }, - - { name: 'Monospace', element: 'code' }, - /* Inline styles */ - - // These are core styles available as toolbar buttons. You may opt enabling - // some of them in the Styles drop-down list, removing them from the toolbar. - // (This requires the "stylescombo" plugin.) 
- /* - { name: 'Strong', element: 'strong', overrides: 'b' }, - { name: 'Emphasis', element: 'em' , overrides: 'i' }, - { name: 'Underline', element: 'u' }, - { name: 'Strikethrough', element: 'strike' }, - { name: 'Subscript', element: 'sub' }, - { name: 'Superscript', element: 'sup' }, - */ - -// { name: 'Marker', element: 'span', attributes: { 'class': 'marker' } }, - -// { name: 'Big', element: 'big' }, -// { name: 'Small', element: 'small' }, - //{ name: 'Keyboard Phrase', element: 'kbd' }, - //{ name: 'Sample Text', element: 'samp' }, - //{ name: 'Variable', element: 'var' }, - - //{ name: 'Deleted Text', element: 'del' }, - //{ name: 'Inserted Text', element: 'ins' }, - - //{ name: 'Cited Work', element: 'cite' }, - //{ name: 'Inline Quotation', element: 'q' }, - - //{ name: 'Language: RTL', element: 'span', attributes: { 'dir': 'rtl' } }, - //{ name: 'Language: LTR', element: 'span', attributes: { 'dir': 'ltr' } }, - - /* Object styles */ - - { - name: 'Styled Image (left)', - element: 'img', - attributes: { 'class': 'left' } - }, - - { - name: 'Styled Image (right)', - element: 'img', - attributes: { 'class': 'right' } - }, - - { - name: 'Compact Table', - element: 'table', - attributes: { - cellpadding: '5', - cellspacing: '0', - border: '1', - bordercolor: '#ccc' - }, - styles: { - 'border-collapse': 'collapse' - } - }, - - { name: 'Borderless Table', element: 'table', styles: { 'border-style': 'hidden', 'background-color': '#E6E6FA' } }, - { name: 'Square Bulleted List', element: 'ul', styles: { 'list-style-type': 'square' } }, - - /* Widget styles */ - - { name: 'Clean Image', type: 'widget', widget: 'image', attributes: { 'class': 'image-clean' } }, - { name: 'Grayscale Image', type: 'widget', widget: 'image', attributes: { 'class': 'image-grayscale' } }, - - { name: 'Featured Snippet', type: 'widget', widget: 'codeSnippet', attributes: { 'class': 'code-featured' } }, - - { name: 'Featured Formula', type: 'widget', widget: 'mathjax', attributes: { 'class': 'math-featured' } }, - - { name: '240p', type: 'widget', widget: 'embedSemantic', attributes: { 'class': 'embed-240p' }, group: 'size' }, - { name: '360p', type: 'widget', widget: 'embedSemantic', attributes: { 'class': 'embed-360p' }, group: 'size' }, - { name: '480p', type: 'widget', widget: 'embedSemantic', attributes: { 'class': 'embed-480p' }, group: 'size' }, - { name: '720p', type: 'widget', widget: 'embedSemantic', attributes: { 'class': 'embed-720p' }, group: 'size' }, - { name: '1080p', type: 'widget', widget: 'embedSemantic', attributes: { 'class': 'embed-1080p' }, group: 'size' }, - - // Adding space after the style name is an intended workaround. For now, there - // is no option to create two styles with the same name for different widget types. See https://dev.ckeditor.com/ticket/16664. 
- { name: '240p ', type: 'widget', widget: 'embed', attributes: { 'class': 'embed-240p' }, group: 'size' }, - { name: '360p ', type: 'widget', widget: 'embed', attributes: { 'class': 'embed-360p' }, group: 'size' }, - { name: '480p ', type: 'widget', widget: 'embed', attributes: { 'class': 'embed-480p' }, group: 'size' }, - { name: '720p ', type: 'widget', widget: 'embed', attributes: { 'class': 'embed-720p' }, group: 'size' }, - { name: '1080p ', type: 'widget', widget: 'embed', attributes: { 'class': 'embed-1080p' }, group: 'size' } - -] ); - diff --git a/spaces/omlab/vlchecklist_demo/models/vilt/transforms/utils.py b/spaces/omlab/vlchecklist_demo/models/vilt/transforms/utils.py deleted file mode 100644 index bc99f62ef3b1a697e375b2dc6e537ab8f067f646..0000000000000000000000000000000000000000 --- a/spaces/omlab/vlchecklist_demo/models/vilt/transforms/utils.py +++ /dev/null @@ -1,56 +0,0 @@ -from torchvision import transforms -from PIL import Image - - -class MinMaxResize: - def __init__(self, shorter=800, longer=1333): - self.min = shorter - self.max = longer - - def __call__(self, x): - w, h = x.size - scale = self.min / min(w, h) - if h < w: - newh, neww = self.min, scale * w - else: - newh, neww = scale * h, self.min - - if max(newh, neww) > self.max: - scale = self.max / max(newh, neww) - newh = newh * scale - neww = neww * scale - - newh, neww = int(newh + 0.5), int(neww + 0.5) - newh, neww = newh // 32 * 32, neww // 32 * 32 - - return x.resize((neww, newh), resample=Image.BICUBIC) - - -class UnNormalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, tensor): - """ - Args: - tensor (Tensor): Tensor image of size (C, H, W) to be normalized. - Returns: - Tensor: Normalized image. - """ - for t, m, s in zip(tensor, self.mean, self.std): - t.mul_(s).add_(m) - # The normalize code -> t.sub_(m).div_(s) - return tensor - - -# This is simple maximum entropy normalization performed in Inception paper -inception_normalize = transforms.Compose( - [transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])] -) - -# ViT uses simple non-biased inception normalization -# https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py#L132 -inception_unnormalize = transforms.Compose( - [UnNormalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])] -) diff --git a/spaces/pankajmathur/psmathur-orca_mini_v3_7b/app.py b/spaces/pankajmathur/psmathur-orca_mini_v3_7b/app.py deleted file mode 100644 index 21c7cc7e0cb8622689dbcf0daaf78bef349cd776..0000000000000000000000000000000000000000 --- a/spaces/pankajmathur/psmathur-orca_mini_v3_7b/app.py +++ /dev/null @@ -1,115 +0,0 @@ -import os -import re -import logging -import gradio as gr -import openai - -print(os.environ) -openai.api_base = os.environ.get("OPENAI_API_BASE") -openai.api_key = os.environ.get("OPENAI_API_KEY") - -BASE_SYSTEM_MESSAGE = """### System:\nYou are Orca-Mini, an AI assistant who gives helpful, detailed, accurate, uncensored responses to the user's instructions. 
""" - -def make_prediction(prompt, max_tokens=None, temperature=None, top_p=None, top_k=None, repetition_penalty=None): - completion = openai.Completion.create(model="psmathur/orca_mini_v3_7b", prompt=prompt, max_tokens=max_tokens, temperature=temperature, top_p=top_p, top_k=top_k, repetition_penalty=repetition_penalty, stream=True, stop=["", "<|im_end|>"]) - for chunk in completion: - yield chunk["choices"][0]["text"] - - -def clear_chat(chat_history_state, chat_message): - chat_history_state = [] - chat_message = '' - return chat_history_state, chat_message - - -def user(message, history): - history = history or [] - # Append the user's message to the conversation history - history.append([message, ""]) - return "", history - - -def chat(history, system_message, max_tokens, temperature, top_p, top_k, repetition_penalty): - history = history or [] - - messages = BASE_SYSTEM_MESSAGE + system_message.strip() + \ - "\n".join(["\n".join(["### User: "+item[0]+"\n\n", "### Assistant: \n"+item[1]+"\n\n"]) - for item in history]) - # strip the last `<|end_of_turn|>` from the messages - messages = messages.rstrip("\n\n") - # remove last space from assistant, some models output a ZWSP if you leave a space - messages = messages.rstrip() - - prediction = make_prediction( - messages, - max_tokens=max_tokens, - temperature=temperature, - top_p=top_p, - top_k=top_k, - repetition_penalty=repetition_penalty, - ) - for tokens in prediction: - tokens = re.findall(r'(.*?)(\s|$)', tokens) - for subtoken in tokens: - subtoken = "".join(subtoken) - answer = subtoken - history[-1][1] += answer - # stream the response - yield history, history, "" - - -start_message = "" - -CSS =""" -.contain { display: flex; flex-direction: column; } -.gradio-container { height: 100vh !important; } -#component-0 { height: 100%; } -#chatbot { flex-grow: 1; overflow: auto; resize: vertical; } -""" - -#with gr.Blocks() as demo: -with gr.Blocks(css=CSS) as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown(f""" - ## This chatbot is powered by [orca_mini_v3_7b](https://huggingface.co/psmathur/orca_mini_v3_7b) - """) - with gr.Row(): - gr.Markdown("# orca-mini chatbot") - with gr.Row(): - #chatbot = gr.Chatbot().style(height=500) - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - message = gr.Textbox( - label="Hello, I am orca-mini, How can I help you today?", - placeholder="Ask me anything! For example: I'm going to cook for my date who claims to be a picky eater. Can you recommend me a dish that's easy to cook?", - lines=3, - ) - with gr.Row(): - submit = gr.Button(value="Send", variant="secondary").style(full_width=True) - clear = gr.Button(value="Clear", variant="secondary").style(full_width=False) - stop = gr.Button(value="Stop", variant="secondary").style(full_width=False) - with gr.Accordion("Show Model Parameters", open=False): - with gr.Row(): - with gr.Column(): - max_tokens = gr.Slider(20, 2000, label="Max Tokens", step=20, value=500) - temperature = gr.Slider(0.2, 2.0, label="Temperature", step=0.1, value=0.8) - top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.95) - top_k = gr.Slider(0, 100, label="Top K", step=1, value=40) - repetition_penalty = gr.Slider(0.0, 2.0, label="Repetition Penalty", step=0.1, value=1.1) - - system_msg = gr.Textbox( - start_message, label="System Message", interactive=True, visible=True, placeholder="System prompt you want chatbot to remember. 
For example: Explain like I am five year old.", lines=5) - - chat_history_state = gr.State() - clear.click(clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False) - clear.click(lambda: None, None, chatbot, queue=False) - - submit_click_event = submit.click( - fn=user, inputs=[message, chat_history_state], outputs=[message, chat_history_state], queue=True - ).then( - fn=chat, inputs=[chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty], outputs=[chatbot, chat_history_state, message], queue=True - ) - stop.click(fn=None, inputs=None, outputs=None, cancels=[submit_click_event], queue=False) - -demo.queue(max_size=48, concurrency_count=16).launch(debug=True, server_name="0.0.0.0", server_port=7860) diff --git a/spaces/passgenau-digital/virtual-assistant-demo-hsb/Dockerfile b/spaces/passgenau-digital/virtual-assistant-demo-hsb/Dockerfile deleted file mode 100644 index 6c61202a4b4019c07faefb243c975735a42688f1..0000000000000000000000000000000000000000 --- a/spaces/passgenau-digital/virtual-assistant-demo-hsb/Dockerfile +++ /dev/null @@ -1,22 +0,0 @@ -FROM python:3.11 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt -RUN python3 -m pip install --no-cache-dir --upgrade pip -RUN python3 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt - -RUN --mount=type=secret,id=OPENAI_API_KEY,mode=0444,required=true - -ENV NLTK_DATA=/usr/local/nltk_data -RUN pip3 install nltk -RUN [ "python3", "-c", "import nltk; nltk.download('all', download_dir='/usr/local/nltk_data')" ] - -COPY . . - -CMD ["panel", "serve", "/code/app.py", "--address", "0.0.0.0", "--port", "7860", "--allow-websocket-origin", "*"] - -RUN mkdir /.cache -RUN chmod 777 /.cache -RUN mkdir .chroma -RUN chmod 777 .chroma \ No newline at end of file diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/report.html b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/report.html deleted file mode 100644 index d902a992e37846c7de7794bc4a824eb48f0103bc..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/report.html +++ /dev/null @@ -1,80 +0,0 @@ - - - - - - - - - - - - -
          -
          -

          {{ header.name }}

          - -
          sort by: - iou - label - unit -
          -
          -
          -
          -
          unit {{ r.unit }} ({{ r.label }}, iou {{ r.iou | fixed(4) }})
          - -
          -
          -
          - - - diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/encoding.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/encoding.py deleted file mode 100644 index 008f06a79bf598b149bdccb73e572d13331a1631..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/encoding.py +++ /dev/null @@ -1,36 +0,0 @@ -import codecs -import locale -import re -import sys -from typing import List, Tuple - -BOMS: List[Tuple[bytes, str]] = [ - (codecs.BOM_UTF8, "utf-8"), - (codecs.BOM_UTF16, "utf-16"), - (codecs.BOM_UTF16_BE, "utf-16-be"), - (codecs.BOM_UTF16_LE, "utf-16-le"), - (codecs.BOM_UTF32, "utf-32"), - (codecs.BOM_UTF32_BE, "utf-32-be"), - (codecs.BOM_UTF32_LE, "utf-32-le"), -] - -ENCODING_RE = re.compile(rb"coding[:=]\s*([-\w.]+)") - - -def auto_decode(data: bytes) -> str: - """Check a bytes string for a BOM to correctly detect the encoding - - Fallback to locale.getpreferredencoding(False) like open() on Python3""" - for bom, encoding in BOMS: - if data.startswith(bom): - return data[len(bom) :].decode(encoding) - # Lets check the first two lines as in PEP263 - for line in data.split(b"\n")[:2]: - if line[0:1] == b"#" and ENCODING_RE.search(line): - result = ENCODING_RE.search(line) - assert result is not None - encoding = result.groups()[0].decode("ascii") - return data.decode(encoding) - return data.decode( - locale.getpreferredencoding(False) or sys.getdefaultencoding(), - ) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langbulgarianmodel.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langbulgarianmodel.py deleted file mode 100644 index 994668219dd4def6404e0afd3f538b29a0e50f8b..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langbulgarianmodel.py +++ /dev/null @@ -1,4649 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -BULGARIAN_LANG_MODEL = { - 63: { # 'e' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 1, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 45: { # '\xad' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 1, # 'М' - 36: 0, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 
50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 31: { # 'А' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 1, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 2, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 2, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 0, # 'и' - 26: 2, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 32: { # 'Б' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 1, # 'Щ' - 61: 2, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 35: { # 'В' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 2, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 2, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 43: { # 'Г' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 1, 
# 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 1, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 37: { # 'Д' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 2, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 44: { # 'Е' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 2, # 'Ф' - 49: 1, # 'Х' - 53: 2, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 0, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 55: { # 'Ж' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, 
# 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 47: { # 'З' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 40: { # 'И' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 2, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 2, # 'М' - 36: 2, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 2, # 'Я' - 1: 1, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 3, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 0, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 59: { # 'Й' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 33: { # 'К' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 2, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 1, 
# 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 46: { # 'Л' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 2, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 38: { # 'М' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 36: { # 'Н' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 2, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 41: { # 'О' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 1, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 1, # 'Й' - 33: 2, # 'К' - 46: 2, # 'Л' - 38: 2, # 'М' 
- 36: 2, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 1, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 0, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 2, # 'ч' - 27: 0, # 'ш' - 24: 2, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 30: { # 'П' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 2, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 39: { # 'Р' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 2, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 1, # 'с' - 5: 0, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 28: { # 'С' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 3, # 'А' - 32: 2, # 'Б' - 35: 2, # 'В' - 43: 1, # 'Г' - 37: 2, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 2, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 1, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' 
- 58: 0, # 'є' - 62: 0, # '№' - }, - 34: { # 'Т' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 2, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 2, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 1, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 1, # 'Ъ' - 60: 0, # 'Ю' - 56: 1, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 3, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 51: { # 'У' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 2, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 2, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 48: { # 'Ф' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 2, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 1, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 2, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 49: { # 'Х' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 1, # 'П' - 39: 1, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 
10: 1, # 'л' - 14: 1, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 53: { # 'Ц' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 2, # 'И' - 59: 0, # 'Й' - 33: 2, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 2, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 50: { # 'Ч' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 2, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 2, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 54: { # 'Ш' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 1, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 1, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 2, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 57: { # 'Щ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 1, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 
0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 1, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 1, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 61: { # 'Ъ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 1, # 'Ж' - 47: 1, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 2, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 2, # 'Р' - 28: 1, # 'С' - 34: 1, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 1, # 'Х' - 53: 1, # 'Ц' - 50: 1, # 'Ч' - 54: 1, # 'Ш' - 57: 1, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 1, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 60: { # 'Ю' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 1, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 0, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 0, # 'е' - 23: 2, # 'ж' - 15: 1, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 0, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 56: { # 'Я' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 1, # 'В' - 43: 1, # 'Г' - 37: 1, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 1, # 'Л' - 38: 1, # 'М' - 36: 1, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 2, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 1, # 'и' - 26: 1, # 'й' - 12: 1, # 'к' - 10: 1, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 0, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 1: { # 'а' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' 
- 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 1, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 18: { # 'б' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 0, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 2, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 3, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 9: { # 'в' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 1, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 0, # 'в' - 20: 2, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 20: { # 'г' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' 
- 29: 1, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 11: { # 'д' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 1, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 3: { # 'е' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 2, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 23: { # 'ж' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 15: { # 'з' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' 
- 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 2: { # 'и' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 1, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 1, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 1, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 1, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 26: { # 'й' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 2, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 12: { # 'к' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 1, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 1, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 3, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 10: { # 'л' - 63: 1, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 
0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 1, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 1, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 3, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 14: { # 'м' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 1, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 1, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 6: { # 'н' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 1, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 2, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 2, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 3, # 'ф' - 25: 2, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 4: { # 'о' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 2, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 3, # 'и' - 26: 3, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 2, # 'у' - 29: 3, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 3, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 
1, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 13: { # 'п' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 3, # 'л' - 14: 1, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 7: { # 'р' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 3, # 'е' - 23: 3, # 'ж' - 15: 2, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 3, # 'х' - 22: 3, # 'ц' - 21: 2, # 'ч' - 27: 3, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 1, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 8: { # 'с' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 2, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 2, # 'ш' - 24: 0, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 5: { # 'т' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 
'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 2, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 3, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 2, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 3, # 'ъ' - 52: 2, # 'ь' - 42: 2, # 'ю' - 16: 3, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 19: { # 'у' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 2, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 2, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 3, # 'ш' - 24: 2, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 29: { # 'ф' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 1, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 2, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 2, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 25: { # 'х' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 2, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 1, # 'п' - 7: 3, # 'р' - 8: 1, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 1, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 22: { # 'ц' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' 
- 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 2, # 'в' - 20: 1, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 1, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 2, # 'к' - 10: 1, # 'л' - 14: 1, # 'м' - 6: 1, # 'н' - 4: 2, # 'о' - 13: 1, # 'п' - 7: 1, # 'р' - 8: 1, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 1, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 0, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 21: { # 'ч' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 1, # 'б' - 9: 3, # 'в' - 20: 1, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 1, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 2, # 'р' - 8: 0, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 1, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 27: { # 'ш' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 2, # 'в' - 20: 0, # 'г' - 11: 1, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 3, # 'к' - 10: 2, # 'л' - 14: 1, # 'м' - 6: 3, # 'н' - 4: 2, # 'о' - 13: 2, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 2, # 'у' - 29: 1, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 1, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 2, # 'ъ' - 52: 1, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 24: { # 'щ' - 63: 1, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 3, # 'а' - 18: 0, # 'б' - 9: 1, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 3, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 3, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 2, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 1, # 'р' - 8: 0, # 'с' - 5: 2, # 'т' - 19: 3, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 2, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 17: { # 'ъ' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 
0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 3, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 3, # 'ж' - 15: 3, # 'з' - 2: 1, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 3, # 'о' - 13: 3, # 'п' - 7: 3, # 'р' - 8: 3, # 'с' - 5: 3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 2, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 2, # 'ш' - 24: 3, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 2, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 52: { # 'ь' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 1, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 1, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 1, # 'н' - 4: 3, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 1, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 1, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 1, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 42: { # 'ю' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 1, # 'а' - 18: 2, # 'б' - 9: 1, # 'в' - 20: 2, # 'г' - 11: 2, # 'д' - 3: 1, # 'е' - 23: 2, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 1, # 'й' - 12: 2, # 'к' - 10: 2, # 'л' - 14: 2, # 'м' - 6: 2, # 'н' - 4: 1, # 'о' - 13: 1, # 'п' - 7: 2, # 'р' - 8: 2, # 'с' - 5: 2, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 1, # 'х' - 22: 2, # 'ц' - 21: 3, # 'ч' - 27: 1, # 'ш' - 24: 1, # 'щ' - 17: 1, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 16: { # 'я' - 63: 0, # 'e' - 45: 1, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 3, # 'б' - 9: 3, # 'в' - 20: 2, # 'г' - 11: 3, # 'д' - 3: 2, # 'е' - 23: 1, # 'ж' - 15: 2, # 'з' - 2: 1, # 'и' - 26: 2, # 'й' - 12: 3, # 'к' - 10: 3, # 'л' - 14: 3, # 'м' - 6: 3, # 'н' - 4: 1, # 'о' - 13: 2, # 'п' - 7: 2, # 'р' - 8: 3, # 'с' - 5: 
3, # 'т' - 19: 1, # 'у' - 29: 1, # 'ф' - 25: 3, # 'х' - 22: 2, # 'ц' - 21: 1, # 'ч' - 27: 1, # 'ш' - 24: 2, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 1, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 58: { # 'є' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, - 62: { # '№' - 63: 0, # 'e' - 45: 0, # '\xad' - 31: 0, # 'А' - 32: 0, # 'Б' - 35: 0, # 'В' - 43: 0, # 'Г' - 37: 0, # 'Д' - 44: 0, # 'Е' - 55: 0, # 'Ж' - 47: 0, # 'З' - 40: 0, # 'И' - 59: 0, # 'Й' - 33: 0, # 'К' - 46: 0, # 'Л' - 38: 0, # 'М' - 36: 0, # 'Н' - 41: 0, # 'О' - 30: 0, # 'П' - 39: 0, # 'Р' - 28: 0, # 'С' - 34: 0, # 'Т' - 51: 0, # 'У' - 48: 0, # 'Ф' - 49: 0, # 'Х' - 53: 0, # 'Ц' - 50: 0, # 'Ч' - 54: 0, # 'Ш' - 57: 0, # 'Щ' - 61: 0, # 'Ъ' - 60: 0, # 'Ю' - 56: 0, # 'Я' - 1: 0, # 'а' - 18: 0, # 'б' - 9: 0, # 'в' - 20: 0, # 'г' - 11: 0, # 'д' - 3: 0, # 'е' - 23: 0, # 'ж' - 15: 0, # 'з' - 2: 0, # 'и' - 26: 0, # 'й' - 12: 0, # 'к' - 10: 0, # 'л' - 14: 0, # 'м' - 6: 0, # 'н' - 4: 0, # 'о' - 13: 0, # 'п' - 7: 0, # 'р' - 8: 0, # 'с' - 5: 0, # 'т' - 19: 0, # 'у' - 29: 0, # 'ф' - 25: 0, # 'х' - 22: 0, # 'ц' - 21: 0, # 'ч' - 27: 0, # 'ш' - 24: 0, # 'щ' - 17: 0, # 'ъ' - 52: 0, # 'ь' - 42: 0, # 'ю' - 16: 0, # 'я' - 58: 0, # 'є' - 62: 0, # '№' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -ISO_8859_5_BULGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' 
- 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 77, # 'A' - 66: 90, # 'B' - 67: 99, # 'C' - 68: 100, # 'D' - 69: 72, # 'E' - 70: 109, # 'F' - 71: 107, # 'G' - 72: 101, # 'H' - 73: 79, # 'I' - 74: 185, # 'J' - 75: 81, # 'K' - 76: 102, # 'L' - 77: 76, # 'M' - 78: 94, # 'N' - 79: 82, # 'O' - 80: 110, # 'P' - 81: 186, # 'Q' - 82: 108, # 'R' - 83: 91, # 'S' - 84: 74, # 'T' - 85: 119, # 'U' - 86: 84, # 'V' - 87: 96, # 'W' - 88: 111, # 'X' - 89: 187, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 65, # 'a' - 98: 69, # 'b' - 99: 70, # 'c' - 100: 66, # 'd' - 101: 63, # 'e' - 102: 68, # 'f' - 103: 112, # 'g' - 104: 103, # 'h' - 105: 92, # 'i' - 106: 194, # 'j' - 107: 104, # 'k' - 108: 95, # 'l' - 109: 86, # 'm' - 110: 87, # 'n' - 111: 71, # 'o' - 112: 116, # 'p' - 113: 195, # 'q' - 114: 85, # 'r' - 115: 93, # 's' - 116: 97, # 't' - 117: 113, # 'u' - 118: 196, # 'v' - 119: 197, # 'w' - 120: 198, # 'x' - 121: 199, # 'y' - 122: 200, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 194, # '\x80' - 129: 195, # '\x81' - 130: 196, # '\x82' - 131: 197, # '\x83' - 132: 198, # '\x84' - 133: 199, # '\x85' - 134: 200, # '\x86' - 135: 201, # '\x87' - 136: 202, # '\x88' - 137: 203, # '\x89' - 138: 204, # '\x8a' - 139: 205, # '\x8b' - 140: 206, # '\x8c' - 141: 207, # '\x8d' - 142: 208, # '\x8e' - 143: 209, # '\x8f' - 144: 210, # '\x90' - 145: 211, # '\x91' - 146: 212, # '\x92' - 147: 213, # '\x93' - 148: 214, # '\x94' - 149: 215, # '\x95' - 150: 216, # '\x96' - 151: 217, # '\x97' - 152: 218, # '\x98' - 153: 219, # '\x99' - 154: 220, # '\x9a' - 155: 221, # '\x9b' - 156: 222, # '\x9c' - 157: 223, # '\x9d' - 158: 224, # '\x9e' - 159: 225, # '\x9f' - 160: 81, # '\xa0' - 161: 226, # 'Ё' - 162: 227, # 'Ђ' - 163: 228, # 'Ѓ' - 164: 229, # 'Є' - 165: 230, # 'Ѕ' - 166: 105, # 'І' - 167: 231, # 'Ї' - 168: 232, # 'Ј' - 169: 233, # 'Љ' - 170: 234, # 'Њ' - 171: 235, # 'Ћ' - 172: 236, # 'Ќ' - 173: 45, # '\xad' - 174: 237, # 'Ў' - 175: 238, # 'Џ' - 176: 31, # 'А' - 177: 32, # 'Б' - 178: 35, # 'В' - 179: 43, # 'Г' - 180: 37, # 'Д' - 181: 44, # 'Е' - 182: 55, # 'Ж' - 183: 47, # 'З' - 184: 40, # 'И' - 185: 59, # 'Й' - 186: 33, # 'К' - 187: 46, # 'Л' - 188: 38, # 'М' - 189: 36, # 'Н' - 190: 41, # 'О' - 191: 30, # 'П' - 192: 39, # 'Р' - 193: 28, # 'С' - 194: 34, # 'Т' - 195: 51, # 'У' - 196: 48, # 'Ф' - 197: 49, # 'Х' - 198: 53, # 'Ц' - 199: 50, # 'Ч' - 200: 54, # 'Ш' - 201: 57, # 'Щ' - 202: 61, # 'Ъ' - 203: 239, # 'Ы' - 204: 67, # 'Ь' - 205: 240, # 'Э' - 206: 60, # 'Ю' - 207: 56, # 'Я' - 208: 1, # 'а' - 209: 18, # 'б' - 210: 9, # 'в' - 211: 20, # 'г' - 212: 11, # 'д' - 213: 3, # 'е' - 214: 23, # 'ж' - 215: 15, # 'з' - 216: 2, # 'и' - 217: 26, # 'й' - 218: 12, # 'к' - 219: 10, # 'л' - 220: 14, # 'м' - 221: 6, # 'н' - 222: 4, # 'о' - 223: 13, # 'п' - 224: 7, # 'р' - 225: 8, # 'с' - 226: 5, # 'т' - 227: 19, # 'у' - 228: 29, # 'ф' - 229: 25, # 'х' - 230: 22, # 'ц' - 231: 21, # 'ч' - 232: 27, # 'ш' - 233: 24, # 'щ' - 234: 17, # 'ъ' - 235: 75, # 'ы' - 236: 52, # 'ь' - 237: 241, # 'э' - 238: 42, # 'ю' - 239: 16, # 'я' - 240: 62, # '№' - 241: 242, # 'ё' - 242: 243, # 'ђ' - 243: 244, # 'ѓ' - 244: 58, # 'є' - 245: 245, # 'ѕ' - 246: 98, # 'і' - 
247: 246, # 'ї' - 248: 247, # 'ј' - 249: 248, # 'љ' - 250: 249, # 'њ' - 251: 250, # 'ћ' - 252: 251, # 'ќ' - 253: 91, # '§' - 254: 252, # 'ў' - 255: 253, # 'џ' -} - -ISO_8859_5_BULGARIAN_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-5", - language="Bulgarian", - char_to_order_map=ISO_8859_5_BULGARIAN_CHAR_TO_ORDER, - language_model=BULGARIAN_LANG_MODEL, - typical_positive_ratio=0.969392, - keep_ascii_letters=False, - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", -) - -WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 77, # 'A' - 66: 90, # 'B' - 67: 99, # 'C' - 68: 100, # 'D' - 69: 72, # 'E' - 70: 109, # 'F' - 71: 107, # 'G' - 72: 101, # 'H' - 73: 79, # 'I' - 74: 185, # 'J' - 75: 81, # 'K' - 76: 102, # 'L' - 77: 76, # 'M' - 78: 94, # 'N' - 79: 82, # 'O' - 80: 110, # 'P' - 81: 186, # 'Q' - 82: 108, # 'R' - 83: 91, # 'S' - 84: 74, # 'T' - 85: 119, # 'U' - 86: 84, # 'V' - 87: 96, # 'W' - 88: 111, # 'X' - 89: 187, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 65, # 'a' - 98: 69, # 'b' - 99: 70, # 'c' - 100: 66, # 'd' - 101: 63, # 'e' - 102: 68, # 'f' - 103: 112, # 'g' - 104: 103, # 'h' - 105: 92, # 'i' - 106: 194, # 'j' - 107: 104, # 'k' - 108: 95, # 'l' - 109: 86, # 'm' - 110: 87, # 'n' - 111: 71, # 'o' - 112: 116, # 'p' - 113: 195, # 'q' - 114: 85, # 'r' - 115: 93, # 's' - 116: 97, # 't' - 117: 113, # 'u' - 118: 196, # 'v' - 119: 197, # 'w' - 120: 198, # 'x' - 121: 199, # 'y' - 122: 200, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 206, # 'Ђ' - 129: 207, # 'Ѓ' - 130: 208, # '‚' - 131: 209, # 'ѓ' - 132: 210, # '„' - 133: 211, # '…' - 134: 212, # '†' - 135: 213, # '‡' - 136: 120, # '€' - 137: 214, # '‰' - 138: 215, # 'Љ' - 139: 216, # '‹' - 140: 217, # 'Њ' - 141: 218, # 'Ќ' - 142: 219, # 'Ћ' - 143: 220, # 'Џ' - 144: 221, # 'ђ' - 145: 78, # '‘' - 146: 64, # '’' - 147: 83, # '“' - 148: 121, # '”' - 149: 98, # '•' - 150: 117, # '–' - 151: 105, # '—' - 152: 222, # None - 153: 223, # '™' - 154: 224, # 'љ' - 155: 225, # '›' - 156: 226, # 'њ' - 157: 227, # 'ќ' - 158: 228, # 'ћ' - 159: 229, # 'џ' - 160: 88, # '\xa0' - 161: 230, # 'Ў' - 162: 231, # 'ў' - 163: 232, # 'Ј' - 164: 233, # '¤' - 165: 122, # 'Ґ' - 166: 89, # '¦' - 167: 106, # '§' - 168: 234, # 'Ё' - 169: 
235, # '©' - 170: 236, # 'Є' - 171: 237, # '«' - 172: 238, # '¬' - 173: 45, # '\xad' - 174: 239, # '®' - 175: 240, # 'Ї' - 176: 73, # '°' - 177: 80, # '±' - 178: 118, # 'І' - 179: 114, # 'і' - 180: 241, # 'ґ' - 181: 242, # 'µ' - 182: 243, # '¶' - 183: 244, # '·' - 184: 245, # 'ё' - 185: 62, # '№' - 186: 58, # 'є' - 187: 246, # '»' - 188: 247, # 'ј' - 189: 248, # 'Ѕ' - 190: 249, # 'ѕ' - 191: 250, # 'ї' - 192: 31, # 'А' - 193: 32, # 'Б' - 194: 35, # 'В' - 195: 43, # 'Г' - 196: 37, # 'Д' - 197: 44, # 'Е' - 198: 55, # 'Ж' - 199: 47, # 'З' - 200: 40, # 'И' - 201: 59, # 'Й' - 202: 33, # 'К' - 203: 46, # 'Л' - 204: 38, # 'М' - 205: 36, # 'Н' - 206: 41, # 'О' - 207: 30, # 'П' - 208: 39, # 'Р' - 209: 28, # 'С' - 210: 34, # 'Т' - 211: 51, # 'У' - 212: 48, # 'Ф' - 213: 49, # 'Х' - 214: 53, # 'Ц' - 215: 50, # 'Ч' - 216: 54, # 'Ш' - 217: 57, # 'Щ' - 218: 61, # 'Ъ' - 219: 251, # 'Ы' - 220: 67, # 'Ь' - 221: 252, # 'Э' - 222: 60, # 'Ю' - 223: 56, # 'Я' - 224: 1, # 'а' - 225: 18, # 'б' - 226: 9, # 'в' - 227: 20, # 'г' - 228: 11, # 'д' - 229: 3, # 'е' - 230: 23, # 'ж' - 231: 15, # 'з' - 232: 2, # 'и' - 233: 26, # 'й' - 234: 12, # 'к' - 235: 10, # 'л' - 236: 14, # 'м' - 237: 6, # 'н' - 238: 4, # 'о' - 239: 13, # 'п' - 240: 7, # 'р' - 241: 8, # 'с' - 242: 5, # 'т' - 243: 19, # 'у' - 244: 29, # 'ф' - 245: 25, # 'х' - 246: 22, # 'ц' - 247: 21, # 'ч' - 248: 27, # 'ш' - 249: 24, # 'щ' - 250: 17, # 'ъ' - 251: 75, # 'ы' - 252: 52, # 'ь' - 253: 253, # 'э' - 254: 42, # 'ю' - 255: 16, # 'я' -} - -WINDOWS_1251_BULGARIAN_MODEL = SingleByteCharSetModel( - charset_name="windows-1251", - language="Bulgarian", - char_to_order_map=WINDOWS_1251_BULGARIAN_CHAR_TO_ORDER, - language_model=BULGARIAN_LANG_MODEL, - typical_positive_ratio=0.969392, - keep_ascii_letters=False, - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", -) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/version.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/version.py deleted file mode 100644 index ec253c414474677d3a5977511cfe901bfb786740..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/version.py +++ /dev/null @@ -1,6 +0,0 @@ -from ._importlib import metadata - -try: - __version__ = metadata.version('setuptools') or '0.dev0+unknown' -except Exception: - __version__ = '0.dev0+unknown' diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/jack/pa_jack.c b/spaces/prerna9811/Chord/portaudio/src/hostapi/jack/pa_jack.c deleted file mode 100644 index 124c0f8b2e239f9d19c53daf67bb591a84dcd1b0..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/jack/pa_jack.c +++ /dev/null @@ -1,1826 +0,0 @@ -/* - * $Id$ - * PortAudio Portable Real-Time Audio Library - * Latest Version at: http://www.portaudio.com - * JACK Implementation by Joshua Haberman - * - * Copyright (c) 2004 Stefan Westerfeld - * Copyright (c) 2004 Arve Knudsen - * Copyright (c) 2002 Joshua Haberman - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2002 Ross Bencina, Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the 
Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** - @file - @ingroup hostapi_src -*/ - -#include -#include -#include -#include -#include -#include -#include -#include /* EBUSY */ -#include /* sig_atomic_t */ -#include -#include - -#include -#include - -#include "pa_util.h" -#include "pa_hostapi.h" -#include "pa_stream.h" -#include "pa_process.h" -#include "pa_allocation.h" -#include "pa_cpuload.h" -#include "pa_ringbuffer.h" -#include "pa_debugprint.h" - -#include "pa_jack.h" - -static pthread_t mainThread_; -static char *jackErr_ = NULL; -static const char* clientName_ = "PortAudio"; -static const char* port_regex_suffix = ":.*"; - -#define STRINGIZE_HELPER(expr) #expr -#define STRINGIZE(expr) STRINGIZE_HELPER(expr) - -/* Check PaError */ -#define ENSURE_PA(expr) \ - do { \ - PaError paErr; \ - if( (paErr = (expr)) < paNoError ) \ - { \ - if( (paErr) == paUnanticipatedHostError && pthread_self() == mainThread_ ) \ - { \ - const char *err = jackErr_; \ - if (! 
err ) err = "unknown error"; \ - PaUtil_SetLastHostErrorInfo( paJACK, -1, err ); \ - } \ - PaUtil_DebugPrint(( "Expression '" #expr "' failed in '" __FILE__ "', line: " STRINGIZE( __LINE__ ) "\n" )); \ - result = paErr; \ - goto error; \ - } \ - } while( 0 ) - -#define UNLESS(expr, code) \ - do { \ - if( (expr) == 0 ) \ - { \ - if( (code) == paUnanticipatedHostError && pthread_self() == mainThread_ ) \ - { \ - const char *err = jackErr_; \ - if (!err) err = "unknown error"; \ - PaUtil_SetLastHostErrorInfo( paJACK, -1, err ); \ - } \ - PaUtil_DebugPrint(( "Expression '" #expr "' failed in '" __FILE__ "', line: " STRINGIZE( __LINE__ ) "\n" )); \ - result = (code); \ - goto error; \ - } \ - } while( 0 ) - -#define ASSERT_CALL(expr, success) \ - do { \ - int err = (expr); \ - assert( err == success ); \ - } while( 0 ) - -/* - * Functions that directly map to the PortAudio stream interface - */ - -static void Terminate( struct PaUtilHostApiRepresentation *hostApi ); -static PaError IsFormatSupported( struct PaUtilHostApiRepresentation *hostApi, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate ); -static PaError OpenStream( struct PaUtilHostApiRepresentation *hostApi, - PaStream** s, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate, - unsigned long framesPerBuffer, - PaStreamFlags streamFlags, - PaStreamCallback *streamCallback, - void *userData ); -static PaError CloseStream( PaStream* stream ); -static PaError StartStream( PaStream *stream ); -static PaError StopStream( PaStream *stream ); -static PaError AbortStream( PaStream *stream ); -static PaError IsStreamStopped( PaStream *s ); -static PaError IsStreamActive( PaStream *stream ); -/*static PaTime GetStreamInputLatency( PaStream *stream );*/ -/*static PaTime GetStreamOutputLatency( PaStream *stream );*/ -static PaTime GetStreamTime( PaStream *stream ); -static double GetStreamCpuLoad( PaStream* stream ); - - -/* - * Data specific to this API - */ - -struct PaJackStream; - -typedef struct -{ - PaUtilHostApiRepresentation commonHostApiRep; - PaUtilStreamInterface callbackStreamInterface; - PaUtilStreamInterface blockingStreamInterface; - - PaUtilAllocationGroup *deviceInfoMemory; - - jack_client_t *jack_client; - int jack_buffer_size; - PaHostApiIndex hostApiIndex; - - pthread_mutex_t mtx; - pthread_cond_t cond; - unsigned long inputBase, outputBase; - - /* For dealing with the process thread */ - volatile int xrun; /* Received xrun notification from JACK? */ - struct PaJackStream * volatile toAdd, * volatile toRemove; - struct PaJackStream *processQueue; - volatile sig_atomic_t jackIsDown; -} -PaJackHostApiRepresentation; - -/* PaJackStream - a stream data structure specifically for this implementation */ - -typedef struct PaJackStream -{ - PaUtilStreamRepresentation streamRepresentation; - PaUtilBufferProcessor bufferProcessor; - PaUtilCpuLoadMeasurer cpuLoadMeasurer; - PaJackHostApiRepresentation *hostApi; - - /* our input and output ports */ - jack_port_t **local_input_ports; - jack_port_t **local_output_ports; - - /* the input and output ports of the client we are connecting to */ - jack_port_t **remote_input_ports; - jack_port_t **remote_output_ports; - - int num_incoming_connections; - int num_outgoing_connections; - - jack_client_t *jack_client; - - /* The stream is running if it's still producing samples. - * The stream is active if samples it produced are still being heard. 
- */ - volatile sig_atomic_t is_running; - volatile sig_atomic_t is_active; - /* Used to signal processing thread that stream should start or stop, respectively */ - volatile sig_atomic_t doStart, doStop, doAbort; - - jack_nframes_t t0; - - PaUtilAllocationGroup *stream_memory; - - /* These are useful in the process callback */ - - int callbackResult; - int isSilenced; - int xrun; - - /* These are useful for the blocking API */ - - int isBlockingStream; - PaUtilRingBuffer inFIFO; - PaUtilRingBuffer outFIFO; - volatile sig_atomic_t data_available; - sem_t data_semaphore; - int bytesPerFrame; - int samplesPerFrame; - - struct PaJackStream *next; -} -PaJackStream; - -/* In calls to jack_get_ports() this filter expression is used instead of "" - * to prevent any other types (eg Midi ports etc) being listed */ -#define JACK_PORT_TYPE_FILTER "audio" - -#define TRUE 1 -#define FALSE 0 - -/* - * Functions specific to this API - */ - -static int JackCallback( jack_nframes_t frames, void *userData ); - - -/* - * - * Implementation - * - */ - -/* ---- blocking emulation layer ---- */ - -/* Allocate buffer. */ -static PaError BlockingInitFIFO( PaUtilRingBuffer *rbuf, long numFrames, long bytesPerFrame ) -{ - long numBytes = numFrames * bytesPerFrame; - char *buffer = (char *) malloc( numBytes ); - if( buffer == NULL ) return paInsufficientMemory; - memset( buffer, 0, numBytes ); - return (PaError) PaUtil_InitializeRingBuffer( rbuf, 1, numBytes, buffer ); -} - -/* Free buffer. */ -static PaError BlockingTermFIFO( PaUtilRingBuffer *rbuf ) -{ - if( rbuf->buffer ) free( rbuf->buffer ); - rbuf->buffer = NULL; - return paNoError; -} - -static int -BlockingCallback( const void *inputBuffer, - void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - struct PaJackStream *stream = (PaJackStream *)userData; - long numBytes = stream->bytesPerFrame * framesPerBuffer; - - /* This may get called with NULL inputBuffer during initial setup. */ - if( inputBuffer != NULL ) - { - PaUtil_WriteRingBuffer( &stream->inFIFO, inputBuffer, numBytes ); - } - if( outputBuffer != NULL ) - { - int numRead = PaUtil_ReadRingBuffer( &stream->outFIFO, outputBuffer, numBytes ); - /* Zero out remainder of buffer if we run out of data. */ - memset( (char *)outputBuffer + numRead, 0, numBytes - numRead ); - } - - if( !stream->data_available ) - { - stream->data_available = 1; - sem_post( &stream->data_semaphore ); - } - return paContinue; -} - -static PaError -BlockingBegin( PaJackStream *stream, int minimum_buffer_size ) -{ - long doRead = 0; - long doWrite = 0; - PaError result = paNoError; - long numFrames; - - doRead = stream->local_input_ports != NULL; - doWrite = stream->local_output_ports != NULL; - /* */ - stream->samplesPerFrame = 2; - stream->bytesPerFrame = sizeof(float) * stream->samplesPerFrame; - /* */ - numFrames = 32; - while (numFrames < minimum_buffer_size) - numFrames *= 2; - - if( doRead ) - { - ENSURE_PA( BlockingInitFIFO( &stream->inFIFO, numFrames, stream->bytesPerFrame ) ); - } - if( doWrite ) - { - long numBytes; - - ENSURE_PA( BlockingInitFIFO( &stream->outFIFO, numFrames, stream->bytesPerFrame ) ); - - /* Make Write FIFO appear full initially. 
*/ - numBytes = PaUtil_GetRingBufferWriteAvailable( &stream->outFIFO ); - PaUtil_AdvanceRingBufferWriteIndex( &stream->outFIFO, numBytes ); - } - - stream->data_available = 0; - sem_init( &stream->data_semaphore, 0, 0 ); - -error: - return result; -} - -static void -BlockingEnd( PaJackStream *stream ) -{ - BlockingTermFIFO( &stream->inFIFO ); - BlockingTermFIFO( &stream->outFIFO ); - - sem_destroy( &stream->data_semaphore ); -} - -static PaError BlockingReadStream( PaStream* s, void *data, unsigned long numFrames ) -{ - PaError result = paNoError; - PaJackStream *stream = (PaJackStream *)s; - - long bytesRead; - char *p = (char *) data; - long numBytes = stream->bytesPerFrame * numFrames; - while( numBytes > 0 ) - { - bytesRead = PaUtil_ReadRingBuffer( &stream->inFIFO, p, numBytes ); - numBytes -= bytesRead; - p += bytesRead; - if( numBytes > 0 ) - { - /* see write for an explanation */ - if( stream->data_available ) - stream->data_available = 0; - else - sem_wait( &stream->data_semaphore ); - } - } - - return result; -} - -static PaError BlockingWriteStream( PaStream* s, const void *data, unsigned long numFrames ) -{ - PaError result = paNoError; - PaJackStream *stream = (PaJackStream *)s; - long bytesWritten; - char *p = (char *) data; - long numBytes = stream->bytesPerFrame * numFrames; - while( numBytes > 0 ) - { - bytesWritten = PaUtil_WriteRingBuffer( &stream->outFIFO, p, numBytes ); - numBytes -= bytesWritten; - p += bytesWritten; - if( numBytes > 0 ) - { - /* we use the following algorithm: - * (1) write data - * (2) if some data didn't fit into the ringbuffer, set data_available to 0 - * to indicate to the audio that if space becomes available, we want to know - * (3) retry to write data (because it might be that between (1) and (2) - * new space in the buffer became available) - * (4) if this failed, we are sure that the buffer is really empty and - * we will definitely receive a notification when it becomes available - * thus we can safely sleep - * - * if the algorithm bailed out in step (3) before, it leaks a count of 1 - * on the semaphore; however, it doesn't matter, because if we block in (4), - * we also do it in a loop - */ - if( stream->data_available ) - stream->data_available = 0; - else - sem_wait( &stream->data_semaphore ); - } - } - - return result; -} - -static signed long -BlockingGetStreamReadAvailable( PaStream* s ) -{ - PaJackStream *stream = (PaJackStream *)s; - - int bytesFull = PaUtil_GetRingBufferReadAvailable( &stream->inFIFO ); - return bytesFull / stream->bytesPerFrame; -} - -static signed long -BlockingGetStreamWriteAvailable( PaStream* s ) -{ - PaJackStream *stream = (PaJackStream *)s; - - int bytesEmpty = PaUtil_GetRingBufferWriteAvailable( &stream->outFIFO ); - return bytesEmpty / stream->bytesPerFrame; -} - -static PaError -BlockingWaitEmpty( PaStream *s ) -{ - PaJackStream *stream = (PaJackStream *)s; - - while( PaUtil_GetRingBufferReadAvailable( &stream->outFIFO ) > 0 ) - { - stream->data_available = 0; - sem_wait( &stream->data_semaphore ); - } - return 0; -} - -/* ---- jack driver ---- */ - -/* copy null terminated string source to destination, escaping regex characters with '\\' in the process */ -static void copy_string_and_escape_regex_chars( char *destination, const char *source, size_t destbuffersize ) -{ - assert( destination != source ); - assert( destbuffersize > 0 ); - - char *dest = destination; - /* dest_stop is the last location that we can null-terminate the string */ - char *dest_stop = destination + (destbuffersize - 1); - - const 
char *src = source; - - while ( *src != '\0' && dest != dest_stop ) - { - const char c = *src; - if ( strchr( "\\()[]{}*+?|$^.", c ) != NULL ) - { - if( (dest + 1) == dest_stop ) - break; /* only proceed if we can write both c and the escape */ - - *dest = '\\'; - dest++; - } - *dest = c; - dest++; - - src++; - } - - *dest = '\0'; -} - -/* BuildDeviceList(): - * - * The process of determining a list of PortAudio "devices" from - * JACK's client/port system is fairly involved, so it is separated - * into its own routine. - */ - -static PaError BuildDeviceList( PaJackHostApiRepresentation *jackApi ) -{ - /* Utility macros for the repetitive process of allocating memory */ - - /* JACK has no concept of a device. To JACK, there are clients - * which have an arbitrary number of ports. To make this - * intelligible to PortAudio clients, we will group each JACK client - * into a device, and make each port of that client a channel */ - - PaError result = paNoError; - PaUtilHostApiRepresentation *commonApi = &jackApi->commonHostApiRep; - - const char **jack_ports = NULL; - char **client_names = NULL; - char *port_regex_string = NULL; - // In the worst case scenario, every character would be escaped, doubling the string size. - // Add 1 for null terminator. - size_t device_name_regex_escaped_size = jack_client_name_size() * 2 + 1; - size_t port_regex_size = device_name_regex_escaped_size + strlen(port_regex_suffix); - int port_index, client_index, i; - double globalSampleRate; - regex_t port_regex; - unsigned long numClients = 0, numPorts = 0; - char *tmp_client_name = NULL; - - commonApi->info.defaultInputDevice = paNoDevice; - commonApi->info.defaultOutputDevice = paNoDevice; - commonApi->info.deviceCount = 0; - - /* Parse the list of ports, using a regex to grab the client names */ - ASSERT_CALL( regcomp( &port_regex, "^[^:]*", REG_EXTENDED ), 0 ); - - /* since we are rebuilding the list of devices, free all memory - * associated with the previous list */ - PaUtil_FreeAllAllocations( jackApi->deviceInfoMemory ); - - port_regex_string = PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, port_regex_size ); - tmp_client_name = PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, jack_client_name_size() ); - - /* We can only retrieve the list of clients indirectly, by first - * asking for a list of all ports, then parsing the port names - * according to the client_name:port_name convention (which is - * enforced by jackd) - * A: If jack_get_ports returns NULL, there's nothing for us to do */ - UNLESS( (jack_ports = jack_get_ports( jackApi->jack_client, "", JACK_PORT_TYPE_FILTER, 0 )) && jack_ports[0], paNoError ); - /* Find number of ports */ - while( jack_ports[numPorts] ) - ++numPorts; - /* At least there will be one port per client :) */ - UNLESS( client_names = PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, numPorts * - sizeof (char *) ), paInsufficientMemory ); - - /* Build a list of clients from the list of ports */ - for( numClients = 0, port_index = 0; jack_ports[port_index] != NULL; port_index++ ) - { - int client_seen = FALSE; - regmatch_t match_info; - const char *port = jack_ports[port_index]; - PA_DEBUG(( "JACK port found: %s\n", port )); - - /* extract the client name from the port name, using a regex - * that parses the clientname:portname syntax */ - UNLESS( !regexec( &port_regex, port, 1, &match_info, 0 ), paInternalError ); - assert(match_info.rm_eo - match_info.rm_so < jack_client_name_size()); - memcpy( tmp_client_name, port + match_info.rm_so, - match_info.rm_eo - 
match_info.rm_so ); - tmp_client_name[match_info.rm_eo - match_info.rm_so] = '\0'; - - /* do we know about this port's client yet? */ - for( i = 0; i < numClients; i++ ) - { - if( strcmp( tmp_client_name, client_names[i] ) == 0 ) - client_seen = TRUE; - } - - if (client_seen) - continue; /* A: Nothing to see here, move along */ - - UNLESS( client_names[numClients] = (char*)PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, - strlen(tmp_client_name) + 1), paInsufficientMemory ); - - /* The alsa_pcm client should go in spot 0. If this - * is the alsa_pcm client AND we are NOT about to put - * it in spot 0 put it in spot 0 and move whatever - * was already in spot 0 to the end. */ - if( strcmp( "alsa_pcm", tmp_client_name ) == 0 && numClients > 0 ) - { - /* alsa_pcm goes in spot 0 */ - strcpy( client_names[ numClients ], client_names[0] ); - strcpy( client_names[0], tmp_client_name ); - } - else - { - /* put the new client at the end of the client list */ - strcpy( client_names[ numClients ], tmp_client_name ); - } - ++numClients; - } - - /* Now we have a list of clients, which will become the list of - * PortAudio devices. */ - - /* there is one global sample rate all clients must conform to */ - - globalSampleRate = jack_get_sample_rate( jackApi->jack_client ); - UNLESS( commonApi->deviceInfos = (PaDeviceInfo**)PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, - sizeof(PaDeviceInfo*) * numClients ), paInsufficientMemory ); - - assert( commonApi->info.deviceCount == 0 ); - - /* Create a PaDeviceInfo structure for every client */ - for( client_index = 0; client_index < numClients; client_index++ ) - { - PaDeviceInfo *curDevInfo; - const char **clientPorts = NULL; - - UNLESS( curDevInfo = (PaDeviceInfo*)PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, - sizeof(PaDeviceInfo) ), paInsufficientMemory ); - UNLESS( curDevInfo->name = (char*)PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, - strlen(client_names[client_index]) + 1 ), paInsufficientMemory ); - strcpy( (char *)curDevInfo->name, client_names[client_index] ); - - curDevInfo->structVersion = 2; - curDevInfo->hostApi = jackApi->hostApiIndex; - - /* JACK is very inflexible: there is one sample rate the whole - * system must run at, and all clients must speak IEEE float. */ - curDevInfo->defaultSampleRate = globalSampleRate; - - /* To determine how many input and output channels are available, - * we re-query jackd with more specific parameters. */ - copy_string_and_escape_regex_chars( port_regex_string, - client_names[client_index], - device_name_regex_escaped_size ); - strncat( port_regex_string, port_regex_suffix, port_regex_size ); - - /* ... what are your output ports (that we could input from)? */ - clientPorts = jack_get_ports( jackApi->jack_client, port_regex_string, - JACK_PORT_TYPE_FILTER, JackPortIsOutput); - curDevInfo->maxInputChannels = 0; - curDevInfo->defaultLowInputLatency = 0.; - curDevInfo->defaultHighInputLatency = 0.; - if( clientPorts ) - { - jack_port_t *p = jack_port_by_name( jackApi->jack_client, clientPorts[0] ); - curDevInfo->defaultLowInputLatency = curDevInfo->defaultHighInputLatency = - jack_port_get_latency( p ) / globalSampleRate; - - for( i = 0; clientPorts[i] != NULL; i++) - { - /* The number of ports returned is the number of output channels. - * We don't care what they are, we just care how many */ - curDevInfo->maxInputChannels++; - } - free(clientPorts); - } - - /* ... what are your input ports (that we could output to)? 
*/ - clientPorts = jack_get_ports( jackApi->jack_client, port_regex_string, - JACK_PORT_TYPE_FILTER, JackPortIsInput); - curDevInfo->maxOutputChannels = 0; - curDevInfo->defaultLowOutputLatency = 0.; - curDevInfo->defaultHighOutputLatency = 0.; - if( clientPorts ) - { - jack_port_t *p = jack_port_by_name( jackApi->jack_client, clientPorts[0] ); - curDevInfo->defaultLowOutputLatency = curDevInfo->defaultHighOutputLatency = - jack_port_get_latency( p ) / globalSampleRate; - - for( i = 0; clientPorts[i] != NULL; i++) - { - /* The number of ports returned is the number of input channels. - * We don't care what they are, we just care how many */ - curDevInfo->maxOutputChannels++; - } - free(clientPorts); - } - - PA_DEBUG(( "Adding JACK device %s with %d input channels and %d output channels\n", - client_names[client_index], - curDevInfo->maxInputChannels, - curDevInfo->maxOutputChannels )); - - /* Add this client to the list of devices */ - commonApi->deviceInfos[client_index] = curDevInfo; - ++commonApi->info.deviceCount; - if( commonApi->info.defaultInputDevice == paNoDevice && curDevInfo->maxInputChannels > 0 ) - commonApi->info.defaultInputDevice = client_index; - if( commonApi->info.defaultOutputDevice == paNoDevice && curDevInfo->maxOutputChannels > 0 ) - commonApi->info.defaultOutputDevice = client_index; - } - -error: - regfree( &port_regex ); - free( jack_ports ); - return result; -} - -static void UpdateSampleRate( PaJackStream *stream, double sampleRate ) -{ - /* XXX: Maybe not the cleanest way of going about this? */ - stream->cpuLoadMeasurer.samplingPeriod = stream->bufferProcessor.samplePeriod = 1. / sampleRate; - stream->streamRepresentation.streamInfo.sampleRate = sampleRate; -} - -static void JackErrorCallback( const char *msg ) -{ - if( pthread_self() == mainThread_ ) - { - assert( msg ); - jackErr_ = realloc( jackErr_, strlen( msg ) + 1 ); - strcpy( jackErr_, msg ); - } -} - -static void JackOnShutdown( void *arg ) -{ - PaJackHostApiRepresentation *jackApi = (PaJackHostApiRepresentation *)arg; - PaJackStream *stream = jackApi->processQueue; - - PA_DEBUG(( "%s: JACK server is shutting down\n", __FUNCTION__ )); - for( ; stream; stream = stream->next ) - { - stream->is_active = 0; - } - - /* Make sure that the main thread doesn't get stuck waiting on the condition */ - ASSERT_CALL( pthread_mutex_lock( &jackApi->mtx ), 0 ); - jackApi->jackIsDown = 1; - ASSERT_CALL( pthread_cond_signal( &jackApi->cond ), 0 ); - ASSERT_CALL( pthread_mutex_unlock( &jackApi->mtx ), 0 ); - -} - -static int JackSrCb( jack_nframes_t nframes, void *arg ) -{ - PaJackHostApiRepresentation *jackApi = (PaJackHostApiRepresentation *)arg; - double sampleRate = (double)nframes; - PaJackStream *stream = jackApi->processQueue; - - /* Update all streams in process queue */ - PA_DEBUG(( "%s: Acting on change in JACK samplerate: %f\n", __FUNCTION__, sampleRate )); - for( ; stream; stream = stream->next ) - { - if( stream->streamRepresentation.streamInfo.sampleRate != sampleRate ) - { - PA_DEBUG(( "%s: Updating samplerate\n", __FUNCTION__ )); - UpdateSampleRate( stream, sampleRate ); - } - } - - return 0; -} - -static int JackXRunCb(void *arg) { - PaJackHostApiRepresentation *hostApi = (PaJackHostApiRepresentation *)arg; - assert( hostApi ); - hostApi->xrun = TRUE; - PA_DEBUG(( "%s: JACK signalled xrun\n", __FUNCTION__ )); - return 0; -} - -PaError PaJack_Initialize( PaUtilHostApiRepresentation **hostApi, - PaHostApiIndex hostApiIndex ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *jackHostApi; - 
int activated = 0; - jack_status_t jackStatus = 0; - *hostApi = NULL; /* Initialize to NULL */ - - UNLESS( jackHostApi = (PaJackHostApiRepresentation*) - PaUtil_AllocateMemory( sizeof(PaJackHostApiRepresentation) ), paInsufficientMemory ); - UNLESS( jackHostApi->deviceInfoMemory = PaUtil_CreateAllocationGroup(), paInsufficientMemory ); - - mainThread_ = pthread_self(); - ASSERT_CALL( pthread_mutex_init( &jackHostApi->mtx, NULL ), 0 ); - ASSERT_CALL( pthread_cond_init( &jackHostApi->cond, NULL ), 0 ); - - /* Try to become a client of the JACK server. If we cannot do - * this, then this API cannot be used. - * - * Without the JackNoStartServer option, the jackd server is started - * automatically which we do not want. - */ - - jackHostApi->jack_client = jack_client_open( clientName_, JackNoStartServer, &jackStatus ); - if( !jackHostApi->jack_client ) - { - /* the V19 development docs say that if an implementation - * detects that it cannot be used, it should return a NULL - * interface and paNoError */ - PA_DEBUG(( "%s: Couldn't connect to JACK, status: %d\n", __FUNCTION__, jackStatus )); - result = paNoError; - goto error; - } - - jackHostApi->hostApiIndex = hostApiIndex; - - *hostApi = &jackHostApi->commonHostApiRep; - (*hostApi)->info.structVersion = 1; - (*hostApi)->info.type = paJACK; - (*hostApi)->info.name = "JACK Audio Connection Kit"; - - /* Build a device list by querying the JACK server */ - ENSURE_PA( BuildDeviceList( jackHostApi ) ); - - /* Register functions */ - - (*hostApi)->Terminate = Terminate; - (*hostApi)->OpenStream = OpenStream; - (*hostApi)->IsFormatSupported = IsFormatSupported; - - PaUtil_InitializeStreamInterface( &jackHostApi->callbackStreamInterface, - CloseStream, StartStream, - StopStream, AbortStream, - IsStreamStopped, IsStreamActive, - GetStreamTime, GetStreamCpuLoad, - PaUtil_DummyRead, PaUtil_DummyWrite, - PaUtil_DummyGetReadAvailable, - PaUtil_DummyGetWriteAvailable ); - - PaUtil_InitializeStreamInterface( &jackHostApi->blockingStreamInterface, CloseStream, StartStream, - StopStream, AbortStream, IsStreamStopped, IsStreamActive, - GetStreamTime, PaUtil_DummyGetCpuLoad, - BlockingReadStream, BlockingWriteStream, - BlockingGetStreamReadAvailable, BlockingGetStreamWriteAvailable ); - - jackHostApi->inputBase = jackHostApi->outputBase = 0; - jackHostApi->xrun = 0; - jackHostApi->toAdd = jackHostApi->toRemove = NULL; - jackHostApi->processQueue = NULL; - jackHostApi->jackIsDown = 0; - - jack_on_shutdown( jackHostApi->jack_client, JackOnShutdown, jackHostApi ); - jack_set_error_function( JackErrorCallback ); - jackHostApi->jack_buffer_size = jack_get_buffer_size ( jackHostApi->jack_client ); - /* Don't check for error, may not be supported (deprecated in at least jackdmp) */ - jack_set_sample_rate_callback( jackHostApi->jack_client, JackSrCb, jackHostApi ); - UNLESS( !jack_set_xrun_callback( jackHostApi->jack_client, JackXRunCb, jackHostApi ), paUnanticipatedHostError ); - UNLESS( !jack_set_process_callback( jackHostApi->jack_client, JackCallback, jackHostApi ), paUnanticipatedHostError ); - UNLESS( !jack_activate( jackHostApi->jack_client ), paUnanticipatedHostError ); - activated = 1; - - return result; - -error: - if( activated ) - ASSERT_CALL( jack_deactivate( jackHostApi->jack_client ), 0 ); - - if( jackHostApi ) - { - if( jackHostApi->jack_client ) - ASSERT_CALL( jack_client_close( jackHostApi->jack_client ), 0 ); - - if( jackHostApi->deviceInfoMemory ) - { - PaUtil_FreeAllAllocations( jackHostApi->deviceInfoMemory ); - PaUtil_DestroyAllocationGroup( 
jackHostApi->deviceInfoMemory ); - } - - PaUtil_FreeMemory( jackHostApi ); - } - return result; -} - - -static void Terminate( struct PaUtilHostApiRepresentation *hostApi ) -{ - PaJackHostApiRepresentation *jackHostApi = (PaJackHostApiRepresentation*)hostApi; - - /* note: this automatically disconnects all ports, since a deactivated - * client is not allowed to have any ports connected */ - ASSERT_CALL( jack_deactivate( jackHostApi->jack_client ), 0 ); - - ASSERT_CALL( pthread_mutex_destroy( &jackHostApi->mtx ), 0 ); - ASSERT_CALL( pthread_cond_destroy( &jackHostApi->cond ), 0 ); - - ASSERT_CALL( jack_client_close( jackHostApi->jack_client ), 0 ); - - if( jackHostApi->deviceInfoMemory ) - { - PaUtil_FreeAllAllocations( jackHostApi->deviceInfoMemory ); - PaUtil_DestroyAllocationGroup( jackHostApi->deviceInfoMemory ); - } - - PaUtil_FreeMemory( jackHostApi ); - - free( jackErr_ ); - jackErr_ = NULL; -} - -static PaError IsFormatSupported( struct PaUtilHostApiRepresentation *hostApi, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate ) -{ - int inputChannelCount = 0, outputChannelCount = 0; - PaSampleFormat inputSampleFormat, outputSampleFormat; - - if( inputParameters ) - { - inputChannelCount = inputParameters->channelCount; - inputSampleFormat = inputParameters->sampleFormat; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - - if( inputParameters->device == paUseHostApiSpecificDeviceSpecification ) - return paInvalidDevice; - - /* check that input device can support inputChannelCount */ - if( inputChannelCount > hostApi->deviceInfos[ inputParameters->device ]->maxInputChannels ) - return paInvalidChannelCount; - - /* validate inputStreamInfo */ - if( inputParameters->hostApiSpecificStreamInfo ) - return paIncompatibleHostApiSpecificStreamInfo; /* this implementation doesn't use custom stream info */ - } - else - { - inputChannelCount = 0; - } - - if( outputParameters ) - { - outputChannelCount = outputParameters->channelCount; - outputSampleFormat = outputParameters->sampleFormat; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - - if( outputParameters->device == paUseHostApiSpecificDeviceSpecification ) - return paInvalidDevice; - - /* check that output device can support inputChannelCount */ - if( outputChannelCount > hostApi->deviceInfos[ outputParameters->device ]->maxOutputChannels ) - return paInvalidChannelCount; - - /* validate outputStreamInfo */ - if( outputParameters->hostApiSpecificStreamInfo ) - return paIncompatibleHostApiSpecificStreamInfo; /* this implementation doesn't use custom stream info */ - } - else - { - outputChannelCount = 0; - } - - /* - The following check is not necessary for JACK. - - - if a full duplex stream is requested, check that the combination - of input and output parameters is supported - - - Because the buffer adapter handles conversion between all standard - sample formats, the following checks are only required if paCustomFormat - is implemented, or under some other unusual conditions. 
- - - check that input device can support inputSampleFormat, or that - we have the capability to convert from outputSampleFormat to - a native format - - - check that output device can support outputSampleFormat, or that - we have the capability to convert from outputSampleFormat to - a native format - */ - - /* check that the device supports sampleRate */ - -#define ABS(x) ( (x) > 0 ? (x) : -(x) ) - if( ABS(sampleRate - jack_get_sample_rate(((PaJackHostApiRepresentation *) hostApi)->jack_client )) > 1 ) - return paInvalidSampleRate; -#undef ABS - - return paFormatIsSupported; -} - -/* Basic stream initialization */ -static PaError InitializeStream( PaJackStream *stream, PaJackHostApiRepresentation *hostApi, int numInputChannels, - int numOutputChannels ) -{ - PaError result = paNoError; - assert( stream ); - - memset( stream, 0, sizeof (PaJackStream) ); - UNLESS( stream->stream_memory = PaUtil_CreateAllocationGroup(), paInsufficientMemory ); - stream->jack_client = hostApi->jack_client; - stream->hostApi = hostApi; - - if( numInputChannels > 0 ) - { - UNLESS( stream->local_input_ports = - (jack_port_t**) PaUtil_GroupAllocateMemory( stream->stream_memory, sizeof(jack_port_t*) * numInputChannels ), - paInsufficientMemory ); - memset( stream->local_input_ports, 0, sizeof(jack_port_t*) * numInputChannels ); - UNLESS( stream->remote_output_ports = - (jack_port_t**) PaUtil_GroupAllocateMemory( stream->stream_memory, sizeof(jack_port_t*) * numInputChannels ), - paInsufficientMemory ); - memset( stream->remote_output_ports, 0, sizeof(jack_port_t*) * numInputChannels ); - } - if( numOutputChannels > 0 ) - { - UNLESS( stream->local_output_ports = - (jack_port_t**) PaUtil_GroupAllocateMemory( stream->stream_memory, sizeof(jack_port_t*) * numOutputChannels ), - paInsufficientMemory ); - memset( stream->local_output_ports, 0, sizeof(jack_port_t*) * numOutputChannels ); - UNLESS( stream->remote_input_ports = - (jack_port_t**) PaUtil_GroupAllocateMemory( stream->stream_memory, sizeof(jack_port_t*) * numOutputChannels ), - paInsufficientMemory ); - memset( stream->remote_input_ports, 0, sizeof(jack_port_t*) * numOutputChannels ); - } - - stream->num_incoming_connections = numInputChannels; - stream->num_outgoing_connections = numOutputChannels; - -error: - return result; -} - -/*! - * Free resources associated with stream, and eventually stream itself. - * - * Frees allocated memory, and closes opened pcms. 
- */ -static void CleanUpStream( PaJackStream *stream, int terminateStreamRepresentation, int terminateBufferProcessor ) -{ - int i; - assert( stream ); - - if( stream->isBlockingStream ) - BlockingEnd( stream ); - - for( i = 0; i < stream->num_incoming_connections; ++i ) - { - if( stream->local_input_ports[i] ) - ASSERT_CALL( jack_port_unregister( stream->jack_client, stream->local_input_ports[i] ), 0 ); - } - for( i = 0; i < stream->num_outgoing_connections; ++i ) - { - if( stream->local_output_ports[i] ) - ASSERT_CALL( jack_port_unregister( stream->jack_client, stream->local_output_ports[i] ), 0 ); - } - - if( terminateStreamRepresentation ) - PaUtil_TerminateStreamRepresentation( &stream->streamRepresentation ); - if( terminateBufferProcessor ) - PaUtil_TerminateBufferProcessor( &stream->bufferProcessor ); - - if( stream->stream_memory ) - { - PaUtil_FreeAllAllocations( stream->stream_memory ); - PaUtil_DestroyAllocationGroup( stream->stream_memory ); - } - PaUtil_FreeMemory( stream ); -} - -static PaError WaitCondition( PaJackHostApiRepresentation *hostApi ) -{ - PaError result = paNoError; - int err = 0; - PaTime pt = PaUtil_GetTime(); - struct timespec ts; - - ts.tv_sec = (time_t) floor( pt + 10 * 60 /* 10 minutes */ ); - ts.tv_nsec = (long) ((pt - floor( pt )) * 1000000000); - /* XXX: Best enclose in loop, in case of spurious wakeups? */ - err = pthread_cond_timedwait( &hostApi->cond, &hostApi->mtx, &ts ); - - /* Make sure we didn't time out */ - UNLESS( err != ETIMEDOUT, paTimedOut ); - UNLESS( !err, paInternalError ); - -error: - return result; -} - -static PaError AddStream( PaJackStream *stream ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *hostApi = stream->hostApi; - /* Add to queue of streams that should be processed */ - ASSERT_CALL( pthread_mutex_lock( &hostApi->mtx ), 0 ); - if( !hostApi->jackIsDown ) - { - hostApi->toAdd = stream; - /* Unlock mutex and await signal from processing thread */ - result = WaitCondition( stream->hostApi ); - } - ASSERT_CALL( pthread_mutex_unlock( &hostApi->mtx ), 0 ); - ENSURE_PA( result ); - - UNLESS( !hostApi->jackIsDown, paDeviceUnavailable ); - -error: - return result; -} - -/* Remove stream from processing queue */ -static PaError RemoveStream( PaJackStream *stream ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *hostApi = stream->hostApi; - - /* Add to queue over streams that should be processed */ - ASSERT_CALL( pthread_mutex_lock( &hostApi->mtx ), 0 ); - if( !hostApi->jackIsDown ) - { - hostApi->toRemove = stream; - /* Unlock mutex and await signal from processing thread */ - result = WaitCondition( stream->hostApi ); - } - ASSERT_CALL( pthread_mutex_unlock( &hostApi->mtx ), 0 ); - ENSURE_PA( result ); - -error: - return result; -} - -/* Add stream to JACK callback processing queue */ -static PaError OpenStream( struct PaUtilHostApiRepresentation *hostApi, - PaStream** s, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate, - unsigned long framesPerBuffer, - PaStreamFlags streamFlags, - PaStreamCallback *streamCallback, - void *userData ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *jackHostApi = (PaJackHostApiRepresentation*)hostApi; - PaJackStream *stream = NULL; - char *port_string = PaUtil_GroupAllocateMemory( jackHostApi->deviceInfoMemory, jack_port_name_size() ); - // In the worst case every character would be escaped which would double the string length. 
- // Add 1 for null terminator - size_t regex_escaped_client_name_size = jack_client_name_size() * 2 + 1; - unsigned long regex_size = regex_escaped_client_name_size + strlen(port_regex_suffix); - char *regex_pattern = PaUtil_GroupAllocateMemory( jackHostApi->deviceInfoMemory, regex_size ); - const char **jack_ports = NULL; - /* int jack_max_buffer_size = jack_get_buffer_size( jackHostApi->jack_client ); */ - int i; - int inputChannelCount, outputChannelCount; - const double jackSr = jack_get_sample_rate( jackHostApi->jack_client ); - PaSampleFormat inputSampleFormat = 0, outputSampleFormat = 0; - int bpInitialized = 0, srInitialized = 0; /* Initialized buffer processor and stream representation? */ - unsigned long ofs; - - /* validate platform specific flags */ - if( (streamFlags & paPlatformSpecificFlags) != 0 ) - return paInvalidFlag; /* unexpected platform specific flag */ - if( (streamFlags & paPrimeOutputBuffersUsingStreamCallback) != 0 ) - { - streamFlags &= ~paPrimeOutputBuffersUsingStreamCallback; - /*return paInvalidFlag;*/ /* This implementation does not support buffer priming */ - } - - if( framesPerBuffer != paFramesPerBufferUnspecified ) - { - /* Jack operates with power of two buffers, and we don't support non-integer buffer adaption (yet) */ - /*UNLESS( !(framesPerBuffer & (framesPerBuffer - 1)), paBufferTooBig );*/ /* TODO: Add descriptive error code? */ - } - - /* Preliminary checks */ - - if( inputParameters ) - { - inputChannelCount = inputParameters->channelCount; - inputSampleFormat = inputParameters->sampleFormat; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - - if( inputParameters->device == paUseHostApiSpecificDeviceSpecification ) - return paInvalidDevice; - - /* check that input device can support inputChannelCount */ - if( inputChannelCount > hostApi->deviceInfos[ inputParameters->device ]->maxInputChannels ) - return paInvalidChannelCount; - - /* validate inputStreamInfo */ - if( inputParameters->hostApiSpecificStreamInfo ) - return paIncompatibleHostApiSpecificStreamInfo; /* this implementation doesn't use custom stream info */ - } - else - { - inputChannelCount = 0; - } - - if( outputParameters ) - { - outputChannelCount = outputParameters->channelCount; - outputSampleFormat = outputParameters->sampleFormat; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - - if( outputParameters->device == paUseHostApiSpecificDeviceSpecification ) - return paInvalidDevice; - - /* check that output device can support inputChannelCount */ - if( outputChannelCount > hostApi->deviceInfos[ outputParameters->device ]->maxOutputChannels ) - return paInvalidChannelCount; - - /* validate outputStreamInfo */ - if( outputParameters->hostApiSpecificStreamInfo ) - return paIncompatibleHostApiSpecificStreamInfo; /* this implementation doesn't use custom stream info */ - } - else - { - outputChannelCount = 0; - } - - /* ... check that the sample rate exactly matches the ONE acceptable rate - * A: This rate isn't necessarily constant though? */ - -#define ABS(x) ( (x) > 0 ? 
(x) : -(x) ) - if( ABS(sampleRate - jackSr) > 1 ) - return paInvalidSampleRate; -#undef ABS - - UNLESS( stream = (PaJackStream*)PaUtil_AllocateMemory( sizeof(PaJackStream) ), paInsufficientMemory ); - ENSURE_PA( InitializeStream( stream, jackHostApi, inputChannelCount, outputChannelCount ) ); - - /* the blocking emulation, if necessary */ - stream->isBlockingStream = !streamCallback; - if( stream->isBlockingStream ) - { - float latency = 0.001; /* 1ms is the absolute minimum we support */ - int minimum_buffer_frames = 0; - - if( inputParameters && inputParameters->suggestedLatency > latency ) - latency = inputParameters->suggestedLatency; - else if( outputParameters && outputParameters->suggestedLatency > latency ) - latency = outputParameters->suggestedLatency; - - /* the latency the user asked for indicates the minimum buffer size in frames */ - minimum_buffer_frames = (int) (latency * jack_get_sample_rate( jackHostApi->jack_client )); - - /* we also need to be able to store at least three full jack buffers to avoid dropouts */ - if( jackHostApi->jack_buffer_size * 3 > minimum_buffer_frames ) - minimum_buffer_frames = jackHostApi->jack_buffer_size * 3; - - /* setup blocking API data structures (FIXME: can fail) */ - BlockingBegin( stream, minimum_buffer_frames ); - - /* install our own callback for the blocking API */ - streamCallback = BlockingCallback; - userData = stream; - - PaUtil_InitializeStreamRepresentation( &stream->streamRepresentation, - &jackHostApi->blockingStreamInterface, streamCallback, userData ); - } - else - { - PaUtil_InitializeStreamRepresentation( &stream->streamRepresentation, - &jackHostApi->callbackStreamInterface, streamCallback, userData ); - } - srInitialized = 1; - PaUtil_InitializeCpuLoadMeasurer( &stream->cpuLoadMeasurer, jackSr ); - - /* create the JACK ports. We cannot connect them until audio - * processing begins */ - - /* Register a unique set of ports for this stream - * TODO: Robust allocation of new port names */ - - ofs = jackHostApi->inputBase; - for( i = 0; i < inputChannelCount; i++ ) - { - snprintf( port_string, jack_port_name_size(), "in_%lu", ofs + i ); - UNLESS( stream->local_input_ports[i] = jack_port_register( - jackHostApi->jack_client, port_string, - JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0 ), paInsufficientMemory ); - } - jackHostApi->inputBase += inputChannelCount; - - ofs = jackHostApi->outputBase; - for( i = 0; i < outputChannelCount; i++ ) - { - snprintf( port_string, jack_port_name_size(), "out_%lu", ofs + i ); - UNLESS( stream->local_output_ports[i] = jack_port_register( - jackHostApi->jack_client, port_string, - JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0 ), paInsufficientMemory ); - } - jackHostApi->outputBase += outputChannelCount; - - /* look up the jack_port_t's for the remote ports. We could do - * this at stream start time, but doing it here ensures the - * name lookup only happens once. 
*/ - - if( inputChannelCount > 0 ) - { - int err = 0; - - /* Get output ports of our capture device */ - copy_string_and_escape_regex_chars( regex_pattern, - hostApi->deviceInfos[ inputParameters->device ]->name, - regex_escaped_client_name_size ); - strncat( regex_pattern, port_regex_suffix, regex_size ); - UNLESS( jack_ports = jack_get_ports( jackHostApi->jack_client, regex_pattern, - JACK_PORT_TYPE_FILTER, JackPortIsOutput ), paUnanticipatedHostError ); - for( i = 0; i < inputChannelCount && jack_ports[i]; i++ ) - { - if( (stream->remote_output_ports[i] = jack_port_by_name( - jackHostApi->jack_client, jack_ports[i] )) == NULL ) - { - err = 1; - break; - } - } - free( jack_ports ); - UNLESS( !err, paInsufficientMemory ); - - /* Fewer ports than expected? */ - UNLESS( i == inputChannelCount, paInternalError ); - } - - if( outputChannelCount > 0 ) - { - int err = 0; - - /* Get input ports of our playback device */ - copy_string_and_escape_regex_chars( regex_pattern, - hostApi->deviceInfos[ outputParameters->device ]->name, - regex_escaped_client_name_size ); - strncat( regex_pattern, port_regex_suffix, regex_size ); - UNLESS( jack_ports = jack_get_ports( jackHostApi->jack_client, regex_pattern, - JACK_PORT_TYPE_FILTER, JackPortIsInput ), paUnanticipatedHostError ); - for( i = 0; i < outputChannelCount && jack_ports[i]; i++ ) - { - if( (stream->remote_input_ports[i] = jack_port_by_name( - jackHostApi->jack_client, jack_ports[i] )) == 0 ) - { - err = 1; - break; - } - } - free( jack_ports ); - UNLESS( !err , paInsufficientMemory ); - - /* Fewer ports than expected? */ - UNLESS( i == outputChannelCount, paInternalError ); - } - - ENSURE_PA( PaUtil_InitializeBufferProcessor( - &stream->bufferProcessor, - inputChannelCount, - inputSampleFormat, - paFloat32 | paNonInterleaved, /* hostInputSampleFormat */ - outputChannelCount, - outputSampleFormat, - paFloat32 | paNonInterleaved, /* hostOutputSampleFormat */ - jackSr, - streamFlags, - framesPerBuffer, - 0, /* Ignored */ - paUtilUnknownHostBufferSize, /* Buffer size may vary on JACK's discretion */ - streamCallback, - userData ) ); - bpInitialized = 1; - - if( stream->num_incoming_connections > 0 ) - stream->streamRepresentation.streamInfo.inputLatency = (jack_port_get_latency( stream->remote_output_ports[0] ) - - jack_get_buffer_size( jackHostApi->jack_client ) /* One buffer is not counted as latency */ - + PaUtil_GetBufferProcessorInputLatencyFrames( &stream->bufferProcessor )) / sampleRate; - if( stream->num_outgoing_connections > 0 ) - stream->streamRepresentation.streamInfo.outputLatency = (jack_port_get_latency( stream->remote_input_ports[0] ) - - jack_get_buffer_size( jackHostApi->jack_client ) /* One buffer is not counted as latency */ - + PaUtil_GetBufferProcessorOutputLatencyFrames( &stream->bufferProcessor )) / sampleRate; - - stream->streamRepresentation.streamInfo.sampleRate = jackSr; - stream->t0 = jack_frame_time( jackHostApi->jack_client ); /* A: Time should run from Pa_OpenStream */ - - /* Add to queue of opened streams */ - ENSURE_PA( AddStream( stream ) ); - - *s = (PaStream*)stream; - - return result; - -error: - if( stream ) - CleanUpStream( stream, srInitialized, bpInitialized ); - - return result; -} - -/* - When CloseStream() is called, the multi-api layer ensures that - the stream has already been stopped or aborted. 
-*/ -static PaError CloseStream( PaStream* s ) -{ - PaError result = paNoError; - PaJackStream *stream = (PaJackStream*)s; - - /* Remove this stream from the processing queue */ - ENSURE_PA( RemoveStream( stream ) ); - -error: - CleanUpStream( stream, 1, 1 ); - return result; -} - -static PaError RealProcess( PaJackStream *stream, jack_nframes_t frames ) -{ - PaError result = paNoError; - PaStreamCallbackTimeInfo timeInfo = {0,0,0}; - int chn; - int framesProcessed; - const double sr = jack_get_sample_rate( stream->jack_client ); /* Shouldn't change during the process callback */ - PaStreamCallbackFlags cbFlags = 0; - - /* If the user has returned !paContinue from the callback we'll want to flush the internal buffers, - * when these are empty we can finally mark the stream as inactive */ - if( stream->callbackResult != paContinue && - PaUtil_IsBufferProcessorOutputEmpty( &stream->bufferProcessor ) ) - { - stream->is_active = 0; - if( stream->streamRepresentation.streamFinishedCallback ) - stream->streamRepresentation.streamFinishedCallback( stream->streamRepresentation.userData ); - PA_DEBUG(( "%s: Callback finished\n", __FUNCTION__ )); - - goto end; - } - - timeInfo.currentTime = (jack_frame_time( stream->jack_client ) - stream->t0) / sr; - if( stream->num_incoming_connections > 0 ) - timeInfo.inputBufferAdcTime = timeInfo.currentTime - jack_port_get_latency( stream->remote_output_ports[0] ) - / sr; - if( stream->num_outgoing_connections > 0 ) - timeInfo.outputBufferDacTime = timeInfo.currentTime + jack_port_get_latency( stream->remote_input_ports[0] ) - / sr; - - PaUtil_BeginCpuLoadMeasurement( &stream->cpuLoadMeasurer ); - - if( stream->xrun ) - { - /* XXX: Any way to tell which of these occurred? */ - cbFlags = paOutputUnderflow | paInputOverflow; - stream->xrun = FALSE; - } - PaUtil_BeginBufferProcessing( &stream->bufferProcessor, &timeInfo, - cbFlags ); - - if( stream->num_incoming_connections > 0 ) - PaUtil_SetInputFrameCount( &stream->bufferProcessor, frames ); - if( stream->num_outgoing_connections > 0 ) - PaUtil_SetOutputFrameCount( &stream->bufferProcessor, frames ); - - for( chn = 0; chn < stream->num_incoming_connections; chn++ ) - { - jack_default_audio_sample_t *channel_buf = (jack_default_audio_sample_t*) - jack_port_get_buffer( stream->local_input_ports[chn], - frames ); - - PaUtil_SetNonInterleavedInputChannel( &stream->bufferProcessor, - chn, - channel_buf ); - } - - for( chn = 0; chn < stream->num_outgoing_connections; chn++ ) - { - jack_default_audio_sample_t *channel_buf = (jack_default_audio_sample_t*) - jack_port_get_buffer( stream->local_output_ports[chn], - frames ); - - PaUtil_SetNonInterleavedOutputChannel( &stream->bufferProcessor, - chn, - channel_buf ); - } - - framesProcessed = PaUtil_EndBufferProcessing( &stream->bufferProcessor, - &stream->callbackResult ); - /* We've specified a host buffer size mode where every frame should be consumed by the buffer processor */ - assert( framesProcessed == frames ); - - PaUtil_EndCpuLoadMeasurement( &stream->cpuLoadMeasurer, framesProcessed ); - -end: - return result; -} - -/* Update the JACK callback's stream processing queue. 
*/ -static PaError UpdateQueue( PaJackHostApiRepresentation *hostApi ) -{ - PaError result = paNoError; - int queueModified = 0; - const double jackSr = jack_get_sample_rate( hostApi->jack_client ); - int err; - - if( (err = pthread_mutex_trylock( &hostApi->mtx )) != 0 ) - { - assert( err == EBUSY ); - return paNoError; - } - - if( hostApi->toAdd ) - { - if( hostApi->processQueue ) - { - PaJackStream *node = hostApi->processQueue; - /* Advance to end of queue */ - while( node->next ) - node = node->next; - - node->next = hostApi->toAdd; - } - else - { - /* The only queue entry. */ - hostApi->processQueue = (PaJackStream *)hostApi->toAdd; - } - - /* If necessary, update stream state */ - if( hostApi->toAdd->streamRepresentation.streamInfo.sampleRate != jackSr ) - UpdateSampleRate( hostApi->toAdd, jackSr ); - - hostApi->toAdd = NULL; - queueModified = 1; - } - if( hostApi->toRemove ) - { - int removed = 0; - PaJackStream *node = hostApi->processQueue, *prev = NULL; - assert( hostApi->processQueue ); - - while( node ) - { - if( node == hostApi->toRemove ) - { - if( prev ) - prev->next = node->next; - else - hostApi->processQueue = (PaJackStream *)node->next; - - removed = 1; - break; - } - - prev = node; - node = node->next; - } - UNLESS( removed, paInternalError ); - hostApi->toRemove = NULL; - PA_DEBUG(( "%s: Removed stream from processing queue\n", __FUNCTION__ )); - queueModified = 1; - } - - if( queueModified ) - { - /* Signal that we've done what was asked of us */ - ASSERT_CALL( pthread_cond_signal( &hostApi->cond ), 0 ); - } - -error: - ASSERT_CALL( pthread_mutex_unlock( &hostApi->mtx ), 0 ); - - return result; -} - -/* Audio processing callback invoked periodically from JACK. */ -static int JackCallback( jack_nframes_t frames, void *userData ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *hostApi = (PaJackHostApiRepresentation *)userData; - PaJackStream *stream = NULL; - int xrun = hostApi->xrun; - hostApi->xrun = 0; - - assert( hostApi ); - - ENSURE_PA( UpdateQueue( hostApi ) ); - - /* Process each stream */ - stream = hostApi->processQueue; - for( ; stream; stream = stream->next ) - { - if( xrun ) /* Don't override if already set */ - stream->xrun = 1; - - /* See if this stream is to be started */ - if( stream->doStart ) - { - /* If we can't obtain a lock, we'll try next time */ - int err = pthread_mutex_trylock( &stream->hostApi->mtx ); - if( !err ) - { - if( stream->doStart ) /* Could potentially change before obtaining the lock */ - { - stream->is_active = 1; - stream->doStart = 0; - PA_DEBUG(( "%s: Starting stream\n", __FUNCTION__ )); - ASSERT_CALL( pthread_cond_signal( &stream->hostApi->cond ), 0 ); - stream->callbackResult = paContinue; - stream->isSilenced = 0; - } - - ASSERT_CALL( pthread_mutex_unlock( &stream->hostApi->mtx ), 0 ); - } - else - assert( err == EBUSY ); - } - else if( stream->doStop || stream->doAbort ) /* Should we stop/abort stream? */ - { - if( stream->callbackResult == paContinue ) /* Ok, make it stop */ - { - PA_DEBUG(( "%s: Stopping stream\n", __FUNCTION__ )); - stream->callbackResult = stream->doStop ? 
paComplete : paAbort; - } - } - - if( stream->is_active ) - ENSURE_PA( RealProcess( stream, frames ) ); - /* If we have just entered inactive state, silence output */ - if( !stream->is_active && !stream->isSilenced ) - { - int i; - - /* Silence buffer after entering inactive state */ - PA_DEBUG(( "Silencing the output\n" )); - for( i = 0; i < stream->num_outgoing_connections; ++i ) - { - jack_default_audio_sample_t *buffer = jack_port_get_buffer( stream->local_output_ports[i], frames ); - memset( buffer, 0, sizeof (jack_default_audio_sample_t) * frames ); - } - - stream->isSilenced = 1; - } - - if( stream->doStop || stream->doAbort ) - { - /* See if RealProcess has acted on the request */ - if( !stream->is_active ) /* Ok, signal to the main thread that we've carried out the operation */ - { - /* If we can't obtain a lock, we'll try next time */ - int err = pthread_mutex_trylock( &stream->hostApi->mtx ); - if( !err ) - { - stream->doStop = stream->doAbort = 0; - ASSERT_CALL( pthread_cond_signal( &stream->hostApi->cond ), 0 ); - ASSERT_CALL( pthread_mutex_unlock( &stream->hostApi->mtx ), 0 ); - } - else - assert( err == EBUSY ); - } - } - } - - return 0; -error: - return -1; -} - -static PaError StartStream( PaStream *s ) -{ - PaError result = paNoError; - PaJackStream *stream = (PaJackStream*)s; - int i; - - /* Ready the processor */ - PaUtil_ResetBufferProcessor( &stream->bufferProcessor ); - - /* Connect the ports. Note that the ports may already have been connected by someone else in - * the meantime, in which case JACK returns EEXIST. */ - - if( stream->num_incoming_connections > 0 ) - { - for( i = 0; i < stream->num_incoming_connections; i++ ) - { - int r = jack_connect( stream->jack_client, jack_port_name( stream->remote_output_ports[i] ), - jack_port_name( stream->local_input_ports[i] ) ); - UNLESS( 0 == r || EEXIST == r, paUnanticipatedHostError ); - } - } - - if( stream->num_outgoing_connections > 0 ) - { - for( i = 0; i < stream->num_outgoing_connections; i++ ) - { - int r = jack_connect( stream->jack_client, jack_port_name( stream->local_output_ports[i] ), - jack_port_name( stream->remote_input_ports[i] ) ); - UNLESS( 0 == r || EEXIST == r, paUnanticipatedHostError ); - } - } - - stream->xrun = FALSE; - - /* Enable processing */ - - ASSERT_CALL( pthread_mutex_lock( &stream->hostApi->mtx ), 0 ); - stream->doStart = 1; - - /* Wait for stream to be started */ - result = WaitCondition( stream->hostApi ); - /* - do - { - err = pthread_cond_timedwait( &stream->hostApi->cond, &stream->hostApi->mtx, &ts ); - } while( !stream->is_active && !err ); - */ - if( result != paNoError ) /* Something went wrong, call off the stream start */ - { - stream->doStart = 0; - stream->is_active = 0; /* Cancel any processing */ - } - ASSERT_CALL( pthread_mutex_unlock( &stream->hostApi->mtx ), 0 ); - - ENSURE_PA( result ); - - stream->is_running = TRUE; - PA_DEBUG(( "%s: Stream started\n", __FUNCTION__ )); - -error: - return result; -} - -static PaError RealStop( PaJackStream *stream, int abort ) -{ - PaError result = paNoError; - int i; - - if( stream->isBlockingStream ) - BlockingWaitEmpty ( stream ); - - ASSERT_CALL( pthread_mutex_lock( &stream->hostApi->mtx ), 0 ); - if( abort ) - stream->doAbort = 1; - else - stream->doStop = 1; - - /* Wait for stream to be stopped */ - result = WaitCondition( stream->hostApi ); - ASSERT_CALL( pthread_mutex_unlock( &stream->hostApi->mtx ), 0 ); - ENSURE_PA( result ); - - UNLESS( !stream->is_active, paInternalError ); - - PA_DEBUG(( "%s: Stream stopped\n", __FUNCTION__ 
)); - -error: - stream->is_running = FALSE; - - /* Disconnect ports belonging to this stream */ - - if( !stream->hostApi->jackIsDown ) /* XXX: Well? */ - { - for( i = 0; i < stream->num_incoming_connections; i++ ) - { - if( jack_port_connected( stream->local_input_ports[i] ) ) - { - UNLESS( !jack_port_disconnect( stream->jack_client, stream->local_input_ports[i] ), - paUnanticipatedHostError ); - } - } - for( i = 0; i < stream->num_outgoing_connections; i++ ) - { - if( jack_port_connected( stream->local_output_ports[i] ) ) - { - UNLESS( !jack_port_disconnect( stream->jack_client, stream->local_output_ports[i] ), - paUnanticipatedHostError ); - } - } - } - - return result; -} - -static PaError StopStream( PaStream *s ) -{ - assert(s); - return RealStop( (PaJackStream *)s, 0 ); -} - -static PaError AbortStream( PaStream *s ) -{ - assert(s); - return RealStop( (PaJackStream *)s, 1 ); -} - -static PaError IsStreamStopped( PaStream *s ) -{ - PaJackStream *stream = (PaJackStream*)s; - return !stream->is_running; -} - - -static PaError IsStreamActive( PaStream *s ) -{ - PaJackStream *stream = (PaJackStream*)s; - return stream->is_active; -} - - -static PaTime GetStreamTime( PaStream *s ) -{ - PaJackStream *stream = (PaJackStream*)s; - - /* A: Is this relevant?? --> TODO: what if we're recording-only? */ - return (jack_frame_time( stream->jack_client ) - stream->t0) / (PaTime)jack_get_sample_rate( stream->jack_client ); -} - - -static double GetStreamCpuLoad( PaStream* s ) -{ - PaJackStream *stream = (PaJackStream*)s; - return PaUtil_GetCpuLoad( &stream->cpuLoadMeasurer ); -} - -PaError PaJack_SetClientName( const char* name ) -{ - if( strlen( name ) > jack_client_name_size() ) - { - /* OK, I don't know any better error code */ - return paInvalidFlag; - } - clientName_ = name; - return paNoError; -} - -PaError PaJack_GetClientName(const char** clientName) -{ - PaError result = paNoError; - PaJackHostApiRepresentation* jackHostApi = NULL; - PaJackHostApiRepresentation** ref = &jackHostApi; - ENSURE_PA( PaUtil_GetHostApiRepresentation( (PaUtilHostApiRepresentation**)ref, paJACK ) ); - *clientName = jack_get_client_name( jackHostApi->jack_client ); - -error: - return result; -} - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/varLib/cff.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/varLib/cff.py deleted file mode 100644 index 52e6a8848d3bd8bf6e7e76f932d90d794004652f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/varLib/cff.py +++ /dev/null @@ -1,712 +0,0 @@ -from collections import namedtuple -from fontTools.cffLib import ( - maxStackLimit, - TopDictIndex, - buildOrder, - topDictOperators, - topDictOperators2, - privateDictOperators, - privateDictOperators2, - FDArrayIndex, - FontDict, - VarStoreData, -) -from io import BytesIO -from fontTools.cffLib.specializer import specializeCommands, commandsToProgram -from fontTools.ttLib import newTable -from fontTools import varLib -from fontTools.varLib.models import allEqual -from fontTools.misc.roundTools import roundFunc -from fontTools.misc.psCharStrings import T2CharString, T2OutlineExtractor -from fontTools.pens.t2CharStringPen import T2CharStringPen -from functools import partial - -from .errors import ( - VarLibCFFDictMergeError, - VarLibCFFPointTypeMergeError, - VarLibCFFHintTypeMergeError, - VarLibMergeError, -) - - -# Backwards compatibility -MergeDictError = VarLibCFFDictMergeError -MergeTypeError = 
VarLibCFFPointTypeMergeError - - -def addCFFVarStore(varFont, varModel, varDataList, masterSupports): - fvarTable = varFont["fvar"] - axisKeys = [axis.axisTag for axis in fvarTable.axes] - varTupleList = varLib.builder.buildVarRegionList(masterSupports, axisKeys) - varStoreCFFV = varLib.builder.buildVarStore(varTupleList, varDataList) - - topDict = varFont["CFF2"].cff.topDictIndex[0] - topDict.VarStore = VarStoreData(otVarStore=varStoreCFFV) - if topDict.FDArray[0].vstore is None: - fdArray = topDict.FDArray - for fontDict in fdArray: - if hasattr(fontDict, "Private"): - fontDict.Private.vstore = topDict.VarStore - - -def lib_convertCFFToCFF2(cff, otFont): - # This assumes a decompiled CFF table. - cff2GetGlyphOrder = cff.otFont.getGlyphOrder - topDictData = TopDictIndex(None, cff2GetGlyphOrder, None) - topDictData.items = cff.topDictIndex.items - cff.topDictIndex = topDictData - topDict = topDictData[0] - if hasattr(topDict, "Private"): - privateDict = topDict.Private - else: - privateDict = None - opOrder = buildOrder(topDictOperators2) - topDict.order = opOrder - topDict.cff2GetGlyphOrder = cff2GetGlyphOrder - if not hasattr(topDict, "FDArray"): - fdArray = topDict.FDArray = FDArrayIndex() - fdArray.strings = None - fdArray.GlobalSubrs = topDict.GlobalSubrs - topDict.GlobalSubrs.fdArray = fdArray - charStrings = topDict.CharStrings - if charStrings.charStringsAreIndexed: - charStrings.charStringsIndex.fdArray = fdArray - else: - charStrings.fdArray = fdArray - fontDict = FontDict() - fontDict.setCFF2(True) - fdArray.append(fontDict) - fontDict.Private = privateDict - privateOpOrder = buildOrder(privateDictOperators2) - if privateDict is not None: - for entry in privateDictOperators: - key = entry[1] - if key not in privateOpOrder: - if key in privateDict.rawDict: - # print "Removing private dict", key - del privateDict.rawDict[key] - if hasattr(privateDict, key): - delattr(privateDict, key) - # print "Removing privateDict attr", key - else: - # clean up the PrivateDicts in the fdArray - fdArray = topDict.FDArray - privateOpOrder = buildOrder(privateDictOperators2) - for fontDict in fdArray: - fontDict.setCFF2(True) - for key in list(fontDict.rawDict.keys()): - if key not in fontDict.order: - del fontDict.rawDict[key] - if hasattr(fontDict, key): - delattr(fontDict, key) - - privateDict = fontDict.Private - for entry in privateDictOperators: - key = entry[1] - if key not in privateOpOrder: - if key in privateDict.rawDict: - # print "Removing private dict", key - del privateDict.rawDict[key] - if hasattr(privateDict, key): - delattr(privateDict, key) - # print "Removing privateDict attr", key - # Now delete up the deprecated topDict operators from CFF 1.0 - for entry in topDictOperators: - key = entry[1] - if key not in opOrder: - if key in topDict.rawDict: - del topDict.rawDict[key] - if hasattr(topDict, key): - delattr(topDict, key) - - # At this point, the Subrs and Charstrings are all still T2Charstring class - # easiest to fix this by compiling, then decompiling again - cff.major = 2 - file = BytesIO() - cff.compile(file, otFont, isCFF2=True) - file.seek(0) - cff.decompile(file, otFont, isCFF2=True) - - -def convertCFFtoCFF2(varFont): - # Convert base font to a single master CFF2 font. 
- cffTable = varFont["CFF "] - lib_convertCFFToCFF2(cffTable.cff, varFont) - newCFF2 = newTable("CFF2") - newCFF2.cff = cffTable.cff - varFont["CFF2"] = newCFF2 - del varFont["CFF "] - - -def conv_to_int(num): - if isinstance(num, float) and num.is_integer(): - return int(num) - return num - - -pd_blend_fields = ( - "BlueValues", - "OtherBlues", - "FamilyBlues", - "FamilyOtherBlues", - "BlueScale", - "BlueShift", - "BlueFuzz", - "StdHW", - "StdVW", - "StemSnapH", - "StemSnapV", -) - - -def get_private(regionFDArrays, fd_index, ri, fd_map): - region_fdArray = regionFDArrays[ri] - region_fd_map = fd_map[fd_index] - if ri in region_fd_map: - region_fdIndex = region_fd_map[ri] - private = region_fdArray[region_fdIndex].Private - else: - private = None - return private - - -def merge_PrivateDicts(top_dicts, vsindex_dict, var_model, fd_map): - """ - I step through the FontDicts in the FDArray of the varfont TopDict. - For each varfont FontDict: - - * step through each key in FontDict.Private. - * For each key, step through each relevant source font Private dict, and - build a list of values to blend. - - The 'relevant' source fonts are selected by first getting the right - submodel using ``vsindex_dict[vsindex]``. The indices of the - ``subModel.locations`` are mapped to source font list indices by - assuming the latter order is the same as the order of the - ``var_model.locations``. I can then get the index of each subModel - location in the list of ``var_model.locations``. - """ - - topDict = top_dicts[0] - region_top_dicts = top_dicts[1:] - if hasattr(region_top_dicts[0], "FDArray"): - regionFDArrays = [fdTopDict.FDArray for fdTopDict in region_top_dicts] - else: - regionFDArrays = [[fdTopDict] for fdTopDict in region_top_dicts] - for fd_index, font_dict in enumerate(topDict.FDArray): - private_dict = font_dict.Private - vsindex = getattr(private_dict, "vsindex", 0) - # At the moment, no PrivateDict has a vsindex key, but let's support - # how it should work. See comment at end of - # merge_charstrings() - still need to optimize use of vsindex. - sub_model, _ = vsindex_dict[vsindex] - master_indices = [] - for loc in sub_model.locations[1:]: - i = var_model.locations.index(loc) - 1 - master_indices.append(i) - pds = [private_dict] - last_pd = private_dict - for ri in master_indices: - pd = get_private(regionFDArrays, fd_index, ri, fd_map) - # If the region font doesn't have this FontDict, just reference - # the last one used. - if pd is None: - pd = last_pd - else: - last_pd = pd - pds.append(pd) - num_masters = len(pds) - for key, value in private_dict.rawDict.items(): - dataList = [] - if key not in pd_blend_fields: - continue - if isinstance(value, list): - try: - values = [pd.rawDict[key] for pd in pds] - except KeyError: - print( - "Warning: {key} in default font Private dict is " - "missing from another font, and was " - "discarded.".format(key=key) - ) - continue - try: - values = zip(*values) - except IndexError: - raise VarLibCFFDictMergeError(key, value, values) - """ - Row 0 contains the first value from each master. - Convert each row from absolute values to relative - values from the previous row. 
- e.g for three masters, a list of values was: - master 0 OtherBlues = [-217,-205] - master 1 OtherBlues = [-234,-222] - master 1 OtherBlues = [-188,-176] - The call to zip() converts this to: - [(-217, -234, -188), (-205, -222, -176)] - and is converted finally to: - OtherBlues = [[-217, 17.0, 46.0], [-205, 0.0, 0.0]] - """ - prev_val_list = [0] * num_masters - any_points_differ = False - for val_list in values: - rel_list = [ - (val - prev_val_list[i]) for (i, val) in enumerate(val_list) - ] - if (not any_points_differ) and not allEqual(rel_list): - any_points_differ = True - prev_val_list = val_list - deltas = sub_model.getDeltas(rel_list) - # For PrivateDict BlueValues, the default font - # values are absolute, not relative to the prior value. - deltas[0] = val_list[0] - dataList.append(deltas) - # If there are no blend values,then - # we can collapse the blend lists. - if not any_points_differ: - dataList = [data[0] for data in dataList] - else: - values = [pd.rawDict[key] for pd in pds] - if not allEqual(values): - dataList = sub_model.getDeltas(values) - else: - dataList = values[0] - - # Convert numbers with no decimal part to an int - if isinstance(dataList, list): - for i, item in enumerate(dataList): - if isinstance(item, list): - for j, jtem in enumerate(item): - dataList[i][j] = conv_to_int(jtem) - else: - dataList[i] = conv_to_int(item) - else: - dataList = conv_to_int(dataList) - - private_dict.rawDict[key] = dataList - - -def _cff_or_cff2(font): - if "CFF " in font: - return font["CFF "] - return font["CFF2"] - - -def getfd_map(varFont, fonts_list): - """Since a subset source font may have fewer FontDicts in their - FDArray than the default font, we have to match up the FontDicts in - the different fonts . We do this with the FDSelect array, and by - assuming that the same glyph will reference matching FontDicts in - each source font. We return a mapping from fdIndex in the default - font to a dictionary which maps each master list index of each - region font to the equivalent fdIndex in the region font.""" - fd_map = {} - default_font = fonts_list[0] - region_fonts = fonts_list[1:] - num_regions = len(region_fonts) - topDict = _cff_or_cff2(default_font).cff.topDictIndex[0] - if not hasattr(topDict, "FDSelect"): - # All glyphs reference only one FontDict. - # Map the FD index for regions to index 0. - fd_map[0] = {ri: 0 for ri in range(num_regions)} - return fd_map - - gname_mapping = {} - default_fdSelect = topDict.FDSelect - glyphOrder = default_font.getGlyphOrder() - for gid, fdIndex in enumerate(default_fdSelect): - gname_mapping[glyphOrder[gid]] = fdIndex - if fdIndex not in fd_map: - fd_map[fdIndex] = {} - for ri, region_font in enumerate(region_fonts): - region_glyphOrder = region_font.getGlyphOrder() - region_topDict = _cff_or_cff2(region_font).cff.topDictIndex[0] - if not hasattr(region_topDict, "FDSelect"): - # All the glyphs share the same FontDict. Pick any glyph. 
- default_fdIndex = gname_mapping[region_glyphOrder[0]] - fd_map[default_fdIndex][ri] = 0 - else: - region_fdSelect = region_topDict.FDSelect - for gid, fdIndex in enumerate(region_fdSelect): - default_fdIndex = gname_mapping[region_glyphOrder[gid]] - region_map = fd_map[default_fdIndex] - if ri not in region_map: - region_map[ri] = fdIndex - return fd_map - - -CVarData = namedtuple("CVarData", "varDataList masterSupports vsindex_dict") - - -def merge_region_fonts(varFont, model, ordered_fonts_list, glyphOrder): - topDict = varFont["CFF2"].cff.topDictIndex[0] - top_dicts = [topDict] + [ - _cff_or_cff2(ttFont).cff.topDictIndex[0] for ttFont in ordered_fonts_list[1:] - ] - num_masters = len(model.mapping) - cvData = merge_charstrings(glyphOrder, num_masters, top_dicts, model) - fd_map = getfd_map(varFont, ordered_fonts_list) - merge_PrivateDicts(top_dicts, cvData.vsindex_dict, model, fd_map) - addCFFVarStore(varFont, model, cvData.varDataList, cvData.masterSupports) - - -def _get_cs(charstrings, glyphName, filterEmpty=False): - if glyphName not in charstrings: - return None - cs = charstrings[glyphName] - - if filterEmpty: - cs.decompile() - if cs.program == []: # CFF2 empty charstring - return None - elif ( - len(cs.program) <= 2 - and cs.program[-1] == "endchar" - and (len(cs.program) == 1 or type(cs.program[0]) in (int, float)) - ): # CFF1 empty charstring - return None - - return cs - - -def _add_new_vsindex( - model, key, masterSupports, vsindex_dict, vsindex_by_key, varDataList -): - varTupleIndexes = [] - for support in model.supports[1:]: - if support not in masterSupports: - masterSupports.append(support) - varTupleIndexes.append(masterSupports.index(support)) - var_data = varLib.builder.buildVarData(varTupleIndexes, None, False) - vsindex = len(vsindex_dict) - vsindex_by_key[key] = vsindex - vsindex_dict[vsindex] = (model, [key]) - varDataList.append(var_data) - return vsindex - - -def merge_charstrings(glyphOrder, num_masters, top_dicts, masterModel): - vsindex_dict = {} - vsindex_by_key = {} - varDataList = [] - masterSupports = [] - default_charstrings = top_dicts[0].CharStrings - for gid, gname in enumerate(glyphOrder): - # interpret empty non-default masters as missing glyphs from a sparse master - all_cs = [ - _get_cs(td.CharStrings, gname, i != 0) for i, td in enumerate(top_dicts) - ] - model, model_cs = masterModel.getSubModel(all_cs) - # create the first pass CFF2 charstring, from - # the default charstring. - default_charstring = model_cs[0] - var_pen = CFF2CharStringMergePen([], gname, num_masters, 0) - # We need to override outlineExtractor because these - # charstrings do have widths in the 'program'; we need to drop these - # values rather than post assertion error for them. - default_charstring.outlineExtractor = MergeOutlineExtractor - default_charstring.draw(var_pen) - - # Add the coordinates from all the other regions to the - # blend lists in the CFF2 charstring. - region_cs = model_cs[1:] - for region_idx, region_charstring in enumerate(region_cs, start=1): - var_pen.restart(region_idx) - region_charstring.outlineExtractor = MergeOutlineExtractor - region_charstring.draw(var_pen) - - # Collapse each coordinate list to a blend operator and its args. 
- new_cs = var_pen.getCharString( - private=default_charstring.private, - globalSubrs=default_charstring.globalSubrs, - var_model=model, - optimize=True, - ) - default_charstrings[gname] = new_cs - - if not region_cs: - continue - - if (not var_pen.seen_moveto) or ("blend" not in new_cs.program): - # If this is not a marking glyph, or if there are no blend - # arguments, then we can use vsindex 0. No need to - # check if we need a new vsindex. - continue - - # If the charstring required a new model, create - # a VarData table to go with, and set vsindex. - key = tuple(v is not None for v in all_cs) - try: - vsindex = vsindex_by_key[key] - except KeyError: - vsindex = _add_new_vsindex( - model, key, masterSupports, vsindex_dict, vsindex_by_key, varDataList - ) - # We do not need to check for an existing new_cs.private.vsindex, - # as we know it doesn't exist yet. - if vsindex != 0: - new_cs.program[:0] = [vsindex, "vsindex"] - - # If there is no variation in any of the charstrings, then vsindex_dict - # never gets built. This could still be needed if there is variation - # in the PrivatDict, so we will build the default data for vsindex = 0. - if not vsindex_dict: - key = (True,) * num_masters - _add_new_vsindex( - masterModel, key, masterSupports, vsindex_dict, vsindex_by_key, varDataList - ) - cvData = CVarData( - varDataList=varDataList, - masterSupports=masterSupports, - vsindex_dict=vsindex_dict, - ) - # XXX To do: optimize use of vsindex between the PrivateDicts and - # charstrings - return cvData - - -class CFFToCFF2OutlineExtractor(T2OutlineExtractor): - """This class is used to remove the initial width from the CFF - charstring without trying to add the width to self.nominalWidthX, - which is None.""" - - def popallWidth(self, evenOdd=0): - args = self.popall() - if not self.gotWidth: - if evenOdd ^ (len(args) % 2): - args = args[1:] - self.width = self.defaultWidthX - self.gotWidth = 1 - return args - - -class MergeOutlineExtractor(CFFToCFF2OutlineExtractor): - """Used to extract the charstring commands - including hints - from a - CFF charstring in order to merge it as another set of region data - into a CFF2 variable font charstring.""" - - def __init__( - self, - pen, - localSubrs, - globalSubrs, - nominalWidthX, - defaultWidthX, - private=None, - blender=None, - ): - super().__init__( - pen, localSubrs, globalSubrs, nominalWidthX, defaultWidthX, private, blender - ) - - def countHints(self): - args = self.popallWidth() - self.hintCount = self.hintCount + len(args) // 2 - return args - - def _hint_op(self, type, args): - self.pen.add_hint(type, args) - - def op_hstem(self, index): - args = self.countHints() - self._hint_op("hstem", args) - - def op_vstem(self, index): - args = self.countHints() - self._hint_op("vstem", args) - - def op_hstemhm(self, index): - args = self.countHints() - self._hint_op("hstemhm", args) - - def op_vstemhm(self, index): - args = self.countHints() - self._hint_op("vstemhm", args) - - def _get_hintmask(self, index): - if not self.hintMaskBytes: - args = self.countHints() - if args: - self._hint_op("vstemhm", args) - self.hintMaskBytes = (self.hintCount + 7) // 8 - hintMaskBytes, index = self.callingStack[-1].getBytes(index, self.hintMaskBytes) - return index, hintMaskBytes - - def op_hintmask(self, index): - index, hintMaskBytes = self._get_hintmask(index) - self.pen.add_hintmask("hintmask", [hintMaskBytes]) - return hintMaskBytes, index - - def op_cntrmask(self, index): - index, hintMaskBytes = self._get_hintmask(index) - 
self.pen.add_hintmask("cntrmask", [hintMaskBytes]) - return hintMaskBytes, index - - -class CFF2CharStringMergePen(T2CharStringPen): - """Pen to merge Type 2 CharStrings.""" - - def __init__( - self, default_commands, glyphName, num_masters, master_idx, roundTolerance=0.01 - ): - # For roundTolerance see https://github.com/fonttools/fonttools/issues/2838 - super().__init__( - width=None, glyphSet=None, CFF2=True, roundTolerance=roundTolerance - ) - self.pt_index = 0 - self._commands = default_commands - self.m_index = master_idx - self.num_masters = num_masters - self.prev_move_idx = 0 - self.seen_moveto = False - self.glyphName = glyphName - self.round = roundFunc(roundTolerance, round=round) - - def add_point(self, point_type, pt_coords): - if self.m_index == 0: - self._commands.append([point_type, [pt_coords]]) - else: - cmd = self._commands[self.pt_index] - if cmd[0] != point_type: - raise VarLibCFFPointTypeMergeError( - point_type, self.pt_index, len(cmd[1]), cmd[0], self.glyphName - ) - cmd[1].append(pt_coords) - self.pt_index += 1 - - def add_hint(self, hint_type, args): - if self.m_index == 0: - self._commands.append([hint_type, [args]]) - else: - cmd = self._commands[self.pt_index] - if cmd[0] != hint_type: - raise VarLibCFFHintTypeMergeError( - hint_type, self.pt_index, len(cmd[1]), cmd[0], self.glyphName - ) - cmd[1].append(args) - self.pt_index += 1 - - def add_hintmask(self, hint_type, abs_args): - # For hintmask, fonttools.cffLib.specializer.py expects - # each of these to be represented by two sequential commands: - # first holding only the operator name, with an empty arg list, - # second with an empty string as the op name, and the mask arg list. - if self.m_index == 0: - self._commands.append([hint_type, []]) - self._commands.append(["", [abs_args]]) - else: - cmd = self._commands[self.pt_index] - if cmd[0] != hint_type: - raise VarLibCFFHintTypeMergeError( - hint_type, self.pt_index, len(cmd[1]), cmd[0], self.glyphName - ) - self.pt_index += 1 - cmd = self._commands[self.pt_index] - cmd[1].append(abs_args) - self.pt_index += 1 - - def _moveTo(self, pt): - if not self.seen_moveto: - self.seen_moveto = True - pt_coords = self._p(pt) - self.add_point("rmoveto", pt_coords) - # I set prev_move_idx here because add_point() - # can change self.pt_index. - self.prev_move_idx = self.pt_index - 1 - - def _lineTo(self, pt): - pt_coords = self._p(pt) - self.add_point("rlineto", pt_coords) - - def _curveToOne(self, pt1, pt2, pt3): - _p = self._p - pt_coords = _p(pt1) + _p(pt2) + _p(pt3) - self.add_point("rrcurveto", pt_coords) - - def _closePath(self): - pass - - def _endPath(self): - pass - - def restart(self, region_idx): - self.pt_index = 0 - self.m_index = region_idx - self._p0 = (0, 0) - - def getCommands(self): - return self._commands - - def reorder_blend_args(self, commands, get_delta_func): - """ - We first re-order the master coordinate values. - For a moveto to lineto, the args are now arranged as:: - - [ [master_0 x,y], [master_1 x,y], [master_2 x,y] ] - - We re-arrange this to:: - - [ [master_0 x, master_1 x, master_2 x], - [master_0 y, master_1 y, master_2 y] - ] - - If the master values are all the same, we collapse the list to - as single value instead of a list. - - We then convert this to:: - - [ [master_0 x] + [x delta tuple] + [numBlends=1] - [master_0 y] + [y delta tuple] + [numBlends=1] - ] - """ - for cmd in commands: - # arg[i] is the set of arguments for this operator from master i. 
- args = cmd[1] - m_args = zip(*args) - # m_args[n] is now all num_master args for the i'th argument - # for this operation. - cmd[1] = list(m_args) - lastOp = None - for cmd in commands: - op = cmd[0] - # masks are represented by two cmd's: first has only op names, - # second has only args. - if lastOp in ["hintmask", "cntrmask"]: - coord = list(cmd[1]) - if not allEqual(coord): - raise VarLibMergeError( - "Hintmask values cannot differ between source fonts." - ) - cmd[1] = [coord[0][0]] - else: - coords = cmd[1] - new_coords = [] - for coord in coords: - if allEqual(coord): - new_coords.append(coord[0]) - else: - # convert to deltas - deltas = get_delta_func(coord)[1:] - coord = [coord[0]] + deltas - coord.append(1) - new_coords.append(coord) - cmd[1] = new_coords - lastOp = op - return commands - - def getCharString( - self, private=None, globalSubrs=None, var_model=None, optimize=True - ): - commands = self._commands - commands = self.reorder_blend_args( - commands, partial(var_model.getDeltas, round=self.round) - ) - if optimize: - commands = specializeCommands( - commands, generalizeFirst=False, maxstack=maxStackLimit - ) - program = commandsToProgram(commands) - charString = T2CharString( - program=program, private=private, globalSubrs=globalSubrs - ) - return charString diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/__vite-browser-external-b25bb000.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/__vite-browser-external-b25bb000.js deleted file mode 100644 index efa8971d2172dd2c1924c07a4e2b2bc18871ccd9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/__vite-browser-external-b25bb000.js +++ /dev/null @@ -1,2 +0,0 @@ -const e={};export{e as default}; -//# sourceMappingURL=__vite-browser-external-b25bb000.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py deleted file mode 100644 index dbcaff1fcf1b1cbb404b3e7367b037942f4e9d03..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py +++ /dev/null @@ -1,356 +0,0 @@ -import ssl -import sys -from types import TracebackType -from typing import Iterable, Iterator, Iterable, List, Optional, Type - -from .._backends.sync import SyncBackend -from .._backends.base import SOCKET_OPTION, NetworkBackend -from .._exceptions import ConnectionNotAvailable, UnsupportedProtocol -from .._models import Origin, Request, Response -from .._synchronization import Event, Lock, ShieldCancellation -from .connection import HTTPConnection -from .interfaces import ConnectionInterface, RequestInterface - - -class RequestStatus: - def __init__(self, request: Request): - self.request = request - self.connection: Optional[ConnectionInterface] = None - self._connection_acquired = Event() - - def set_connection(self, connection: ConnectionInterface) -> None: - assert self.connection is None - self.connection = connection - self._connection_acquired.set() - - def unset_connection(self) -> None: - assert self.connection is not None - self.connection = None - self._connection_acquired = Event() - - def wait_for_connection( - self, timeout: Optional[float] = None - ) -> ConnectionInterface: - if self.connection is None: 
- self._connection_acquired.wait(timeout=timeout) - assert self.connection is not None - return self.connection - - -class ConnectionPool(RequestInterface): - """ - A connection pool for making HTTP requests. - """ - - def __init__( - self, - ssl_context: Optional[ssl.SSLContext] = None, - max_connections: Optional[int] = 10, - max_keepalive_connections: Optional[int] = None, - keepalive_expiry: Optional[float] = None, - http1: bool = True, - http2: bool = False, - retries: int = 0, - local_address: Optional[str] = None, - uds: Optional[str] = None, - network_backend: Optional[NetworkBackend] = None, - socket_options: Optional[Iterable[SOCKET_OPTION]] = None, - ) -> None: - """ - A connection pool for making HTTP requests. - - Parameters: - ssl_context: An SSL context to use for verifying connections. - If not specified, the default `httpcore.default_ssl_context()` - will be used. - max_connections: The maximum number of concurrent HTTP connections that - the pool should allow. Any attempt to send a request on a pool that - would exceed this amount will block until a connection is available. - max_keepalive_connections: The maximum number of idle HTTP connections - that will be maintained in the pool. - keepalive_expiry: The duration in seconds that an idle HTTP connection - may be maintained for before being expired from the pool. - http1: A boolean indicating if HTTP/1.1 requests should be supported - by the connection pool. Defaults to True. - http2: A boolean indicating if HTTP/2 requests should be supported by - the connection pool. Defaults to False. - retries: The maximum number of retries when trying to establish a - connection. - local_address: Local address to connect from. Can also be used to connect - using a particular address family. Using `local_address="0.0.0.0"` - will connect using an `AF_INET` address (IPv4), while using - `local_address="::"` will connect using an `AF_INET6` address (IPv6). - uds: Path to a Unix Domain Socket to use instead of TCP sockets. - network_backend: A backend instance to use for handling network I/O. - socket_options: Socket options that have to be included - in the TCP socket when the connection was established. - """ - self._ssl_context = ssl_context - - self._max_connections = ( - sys.maxsize if max_connections is None else max_connections - ) - self._max_keepalive_connections = ( - sys.maxsize - if max_keepalive_connections is None - else max_keepalive_connections - ) - self._max_keepalive_connections = min( - self._max_connections, self._max_keepalive_connections - ) - - self._keepalive_expiry = keepalive_expiry - self._http1 = http1 - self._http2 = http2 - self._retries = retries - self._local_address = local_address - self._uds = uds - - self._pool: List[ConnectionInterface] = [] - self._requests: List[RequestStatus] = [] - self._pool_lock = Lock() - self._network_backend = ( - SyncBackend() if network_backend is None else network_backend - ) - self._socket_options = socket_options - - def create_connection(self, origin: Origin) -> ConnectionInterface: - return HTTPConnection( - origin=origin, - ssl_context=self._ssl_context, - keepalive_expiry=self._keepalive_expiry, - http1=self._http1, - http2=self._http2, - retries=self._retries, - local_address=self._local_address, - uds=self._uds, - network_backend=self._network_backend, - socket_options=self._socket_options, - ) - - @property - def connections(self) -> List[ConnectionInterface]: - """ - Return a list of the connections currently in the pool. 
- - For example: - - ```python - >>> pool.connections - [ - , - , - , - ] - ``` - """ - return list(self._pool) - - def _attempt_to_acquire_connection(self, status: RequestStatus) -> bool: - """ - Attempt to provide a connection that can handle the given origin. - """ - origin = status.request.url.origin - - # If there are queued requests in front of us, then don't acquire a - # connection. We handle requests strictly in order. - waiting = [s for s in self._requests if s.connection is None] - if waiting and waiting[0] is not status: - return False - - # Reuse an existing connection if one is currently available. - for idx, connection in enumerate(self._pool): - if connection.can_handle_request(origin) and connection.is_available(): - self._pool.pop(idx) - self._pool.insert(0, connection) - status.set_connection(connection) - return True - - # If the pool is currently full, attempt to close one idle connection. - if len(self._pool) >= self._max_connections: - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.is_idle(): - connection.close() - self._pool.pop(idx) - break - - # If the pool is still full, then we cannot acquire a connection. - if len(self._pool) >= self._max_connections: - return False - - # Otherwise create a new connection. - connection = self.create_connection(origin) - self._pool.insert(0, connection) - status.set_connection(connection) - return True - - def _close_expired_connections(self) -> None: - """ - Clean up the connection pool by closing off any connections that have expired. - """ - # Close any connections that have expired their keep-alive time. - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.has_expired(): - connection.close() - self._pool.pop(idx) - - # If the pool size exceeds the maximum number of allowed keep-alive connections, - # then close off idle connections as required. - pool_size = len(self._pool) - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.is_idle() and pool_size > self._max_keepalive_connections: - connection.close() - self._pool.pop(idx) - pool_size -= 1 - - def handle_request(self, request: Request) -> Response: - """ - Send an HTTP request, and return an HTTP response. - - This is the core implementation that is called into by `.request()` or `.stream()`. - """ - scheme = request.url.scheme.decode() - if scheme == "": - raise UnsupportedProtocol( - "Request URL is missing an 'http://' or 'https://' protocol." - ) - if scheme not in ("http", "https", "ws", "wss"): - raise UnsupportedProtocol( - f"Request URL has an unsupported protocol '{scheme}://'." - ) - - status = RequestStatus(request) - - with self._pool_lock: - self._requests.append(status) - self._close_expired_connections() - self._attempt_to_acquire_connection(status) - - while True: - timeouts = request.extensions.get("timeout", {}) - timeout = timeouts.get("pool", None) - try: - connection = status.wait_for_connection(timeout=timeout) - except BaseException as exc: - # If we timeout here, or if the task is cancelled, then make - # sure to remove the request from the queue before bubbling - # up the exception. - with self._pool_lock: - # Ensure only remove when task exists. - if status in self._requests: - self._requests.remove(status) - raise exc - - try: - response = connection.handle_request(request) - except ConnectionNotAvailable: - # The ConnectionNotAvailable exception is a special case, that - # indicates we need to retry the request on a new connection. 
- # - # The most common case where this can occur is when multiple - # requests are queued waiting for a single connection, which - # might end up as an HTTP/2 connection, but which actually ends - # up as HTTP/1.1. - with self._pool_lock: - # Maintain our position in the request queue, but reset the - # status so that the request becomes queued again. - status.unset_connection() - self._attempt_to_acquire_connection(status) - except BaseException as exc: - with ShieldCancellation(): - self.response_closed(status) - raise exc - else: - break - - # When we return the response, we wrap the stream in a special class - # that handles notifying the connection pool once the response - # has been released. - assert isinstance(response.stream, Iterable) - return Response( - status=response.status, - headers=response.headers, - content=ConnectionPoolByteStream(response.stream, self, status), - extensions=response.extensions, - ) - - def response_closed(self, status: RequestStatus) -> None: - """ - This method acts as a callback once the request/response cycle is complete. - - It is called into from the `ConnectionPoolByteStream.close()` method. - """ - assert status.connection is not None - connection = status.connection - - with self._pool_lock: - # Update the state of the connection pool. - if status in self._requests: - self._requests.remove(status) - - if connection.is_closed() and connection in self._pool: - self._pool.remove(connection) - - # Since we've had a response closed, it's possible we'll now be able - # to service one or more requests that are currently pending. - for status in self._requests: - if status.connection is None: - acquired = self._attempt_to_acquire_connection(status) - # If we could not acquire a connection for a queued request - # then we don't need to check anymore requests that are - # queued later behind it. - if not acquired: - break - - # Housekeeping. - self._close_expired_connections() - - def close(self) -> None: - """ - Close any connections in the pool. - """ - with self._pool_lock: - for connection in self._pool: - connection.close() - self._pool = [] - self._requests = [] - - def __enter__(self) -> "ConnectionPool": - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - traceback: Optional[TracebackType] = None, - ) -> None: - self.close() - - -class ConnectionPoolByteStream: - """ - A wrapper around the response byte stream, that additionally handles - notifying the connection pool when the response has been closed. 
- """ - - def __init__( - self, - stream: Iterable[bytes], - pool: ConnectionPool, - status: RequestStatus, - ) -> None: - self._stream = stream - self._pool = pool - self._status = status - - def __iter__(self) -> Iterator[bytes]: - for part in self._stream: - yield part - - def close(self) -> None: - try: - if hasattr(self._stream, "close"): - self._stream.close() - finally: - with ShieldCancellation(): - self._pool.response_closed(self._status) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/src/fortranobject.c b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/src/fortranobject.c deleted file mode 100644 index 072392bb665140044c604f1a6b391fa0588fa16f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/src/fortranobject.c +++ /dev/null @@ -1,1423 +0,0 @@ -#define FORTRANOBJECT_C -#include "fortranobject.h" - -#ifdef __cplusplus -extern "C" { -#endif - -#include -#include -#include - -/* - This file implements: FortranObject, array_from_pyobj, copy_ND_array - - Author: Pearu Peterson - $Revision: 1.52 $ - $Date: 2005/07/11 07:44:20 $ -*/ - -int -F2PyDict_SetItemString(PyObject *dict, char *name, PyObject *obj) -{ - if (obj == NULL) { - fprintf(stderr, "Error loading %s\n", name); - if (PyErr_Occurred()) { - PyErr_Print(); - PyErr_Clear(); - } - return -1; - } - return PyDict_SetItemString(dict, name, obj); -} - -/* - * Python-only fallback for thread-local callback pointers - */ -void * -F2PySwapThreadLocalCallbackPtr(char *key, void *ptr) -{ - PyObject *local_dict, *value; - void *prev; - - local_dict = PyThreadState_GetDict(); - if (local_dict == NULL) { - Py_FatalError( - "F2PySwapThreadLocalCallbackPtr: PyThreadState_GetDict " - "failed"); - } - - value = PyDict_GetItemString(local_dict, key); - if (value != NULL) { - prev = PyLong_AsVoidPtr(value); - if (PyErr_Occurred()) { - Py_FatalError( - "F2PySwapThreadLocalCallbackPtr: PyLong_AsVoidPtr failed"); - } - } - else { - prev = NULL; - } - - value = PyLong_FromVoidPtr((void *)ptr); - if (value == NULL) { - Py_FatalError( - "F2PySwapThreadLocalCallbackPtr: PyLong_FromVoidPtr failed"); - } - - if (PyDict_SetItemString(local_dict, key, value) != 0) { - Py_FatalError( - "F2PySwapThreadLocalCallbackPtr: PyDict_SetItemString failed"); - } - - Py_DECREF(value); - - return prev; -} - -void * -F2PyGetThreadLocalCallbackPtr(char *key) -{ - PyObject *local_dict, *value; - void *prev; - - local_dict = PyThreadState_GetDict(); - if (local_dict == NULL) { - Py_FatalError( - "F2PyGetThreadLocalCallbackPtr: PyThreadState_GetDict failed"); - } - - value = PyDict_GetItemString(local_dict, key); - if (value != NULL) { - prev = PyLong_AsVoidPtr(value); - if (PyErr_Occurred()) { - Py_FatalError( - "F2PyGetThreadLocalCallbackPtr: PyLong_AsVoidPtr failed"); - } - } - else { - prev = NULL; - } - - return prev; -} - -static PyArray_Descr * -get_descr_from_type_and_elsize(const int type_num, const int elsize) { - PyArray_Descr * descr = PyArray_DescrFromType(type_num); - if (type_num == NPY_STRING) { - // PyArray_DescrFromType returns descr with elsize = 0. 
- PyArray_DESCR_REPLACE(descr); - if (descr == NULL) { - return NULL; - } - descr->elsize = elsize; - } - return descr; -} - -/************************* FortranObject *******************************/ - -typedef PyObject *(*fortranfunc)(PyObject *, PyObject *, PyObject *, void *); - -PyObject * -PyFortranObject_New(FortranDataDef *defs, f2py_void_func init) -{ - int i; - PyFortranObject *fp = NULL; - PyObject *v = NULL; - if (init != NULL) { /* Initialize F90 module objects */ - (*(init))(); - } - fp = PyObject_New(PyFortranObject, &PyFortran_Type); - if (fp == NULL) { - return NULL; - } - if ((fp->dict = PyDict_New()) == NULL) { - Py_DECREF(fp); - return NULL; - } - fp->len = 0; - while (defs[fp->len].name != NULL) { - fp->len++; - } - if (fp->len == 0) { - goto fail; - } - fp->defs = defs; - for (i = 0; i < fp->len; i++) { - if (fp->defs[i].rank == -1) { /* Is Fortran routine */ - v = PyFortranObject_NewAsAttr(&(fp->defs[i])); - if (v == NULL) { - goto fail; - } - PyDict_SetItemString(fp->dict, fp->defs[i].name, v); - Py_XDECREF(v); - } - else if ((fp->defs[i].data) != - NULL) { /* Is Fortran variable or array (not allocatable) */ - PyArray_Descr * - descr = get_descr_from_type_and_elsize(fp->defs[i].type, - fp->defs[i].elsize); - if (descr == NULL) { - goto fail; - } - v = PyArray_NewFromDescr(&PyArray_Type, descr, fp->defs[i].rank, - fp->defs[i].dims.d, NULL, fp->defs[i].data, - NPY_ARRAY_FARRAY, NULL); - if (v == NULL) { - Py_DECREF(descr); - goto fail; - } - PyDict_SetItemString(fp->dict, fp->defs[i].name, v); - Py_XDECREF(v); - } - } - return (PyObject *)fp; -fail: - Py_XDECREF(fp); - return NULL; -} - -PyObject * -PyFortranObject_NewAsAttr(FortranDataDef *defs) -{ /* used for calling F90 module routines */ - PyFortranObject *fp = NULL; - fp = PyObject_New(PyFortranObject, &PyFortran_Type); - if (fp == NULL) - return NULL; - if ((fp->dict = PyDict_New()) == NULL) { - PyObject_Del(fp); - return NULL; - } - fp->len = 1; - fp->defs = defs; - if (defs->rank == -1) { - PyDict_SetItemString(fp->dict, "__name__", PyUnicode_FromFormat("function %s", defs->name)); - } else if (defs->rank == 0) { - PyDict_SetItemString(fp->dict, "__name__", PyUnicode_FromFormat("scalar %s", defs->name)); - } else { - PyDict_SetItemString(fp->dict, "__name__", PyUnicode_FromFormat("array %s", defs->name)); - } - return (PyObject *)fp; -} - -/* Fortran methods */ - -static void -fortran_dealloc(PyFortranObject *fp) -{ - Py_XDECREF(fp->dict); - PyObject_Del(fp); -} - -/* Returns number of bytes consumed from buf, or -1 on error. 
*/ -static Py_ssize_t -format_def(char *buf, Py_ssize_t size, FortranDataDef def) -{ - char *p = buf; - int i; - npy_intp n; - - n = PyOS_snprintf(p, size, "array(%" NPY_INTP_FMT, def.dims.d[0]); - if (n < 0 || n >= size) { - return -1; - } - p += n; - size -= n; - - for (i = 1; i < def.rank; i++) { - n = PyOS_snprintf(p, size, ",%" NPY_INTP_FMT, def.dims.d[i]); - if (n < 0 || n >= size) { - return -1; - } - p += n; - size -= n; - } - - if (size <= 0) { - return -1; - } - - *p++ = ')'; - size--; - - if (def.data == NULL) { - static const char notalloc[] = ", not allocated"; - if ((size_t)size < sizeof(notalloc)) { - return -1; - } - memcpy(p, notalloc, sizeof(notalloc)); - p += sizeof(notalloc); - size -= sizeof(notalloc); - } - - return p - buf; -} - -static PyObject * -fortran_doc(FortranDataDef def) -{ - char *buf, *p; - PyObject *s = NULL; - Py_ssize_t n, origsize, size = 100; - - if (def.doc != NULL) { - size += strlen(def.doc); - } - origsize = size; - buf = p = (char *)PyMem_Malloc(size); - if (buf == NULL) { - return PyErr_NoMemory(); - } - - if (def.rank == -1) { - if (def.doc) { - n = strlen(def.doc); - if (n > size) { - goto fail; - } - memcpy(p, def.doc, n); - p += n; - size -= n; - } - else { - n = PyOS_snprintf(p, size, "%s - no docs available", def.name); - if (n < 0 || n >= size) { - goto fail; - } - p += n; - size -= n; - } - } - else { - PyArray_Descr *d = PyArray_DescrFromType(def.type); - n = PyOS_snprintf(p, size, "%s : '%c'-", def.name, d->type); - Py_DECREF(d); - if (n < 0 || n >= size) { - goto fail; - } - p += n; - size -= n; - - if (def.data == NULL) { - n = format_def(p, size, def); - if (n < 0) { - goto fail; - } - p += n; - size -= n; - } - else if (def.rank > 0) { - n = format_def(p, size, def); - if (n < 0) { - goto fail; - } - p += n; - size -= n; - } - else { - n = strlen("scalar"); - if (size < n) { - goto fail; - } - memcpy(p, "scalar", n); - p += n; - size -= n; - } - } - if (size <= 1) { - goto fail; - } - *p++ = '\n'; - size--; - - /* p now points one beyond the last character of the string in buf */ - s = PyUnicode_FromStringAndSize(buf, p - buf); - - PyMem_Free(buf); - return s; - -fail: - fprintf(stderr, - "fortranobject.c: fortran_doc: len(p)=%zd>%zd=size:" - " too long docstring required, increase size\n", - p - buf, origsize); - PyMem_Free(buf); - return NULL; -} - -static FortranDataDef *save_def; /* save pointer of an allocatable array */ -static void -set_data(char *d, npy_intp *f) -{ /* callback from Fortran */ - if (*f) /* In fortran f=allocated(d) */ - save_def->data = d; - else - save_def->data = NULL; - /* printf("set_data: d=%p,f=%d\n",d,*f); */ -} - -static PyObject * -fortran_getattr(PyFortranObject *fp, char *name) -{ - int i, j, k, flag; - if (fp->dict != NULL) { - PyObject *v = _PyDict_GetItemStringWithError(fp->dict, name); - if (v == NULL && PyErr_Occurred()) { - return NULL; - } - else if (v != NULL) { - Py_INCREF(v); - return v; - } - } - for (i = 0, j = 1; i < fp->len && (j = strcmp(name, fp->defs[i].name)); - i++) - ; - if (j == 0) - if (fp->defs[i].rank != -1) { /* F90 allocatable array */ - if (fp->defs[i].func == NULL) - return NULL; - for (k = 0; k < fp->defs[i].rank; ++k) fp->defs[i].dims.d[k] = -1; - save_def = &fp->defs[i]; - (*(fp->defs[i].func))(&fp->defs[i].rank, fp->defs[i].dims.d, - set_data, &flag); - if (flag == 2) - k = fp->defs[i].rank + 1; - else - k = fp->defs[i].rank; - if (fp->defs[i].data != NULL) { /* array is allocated */ - PyObject *v = PyArray_New( - &PyArray_Type, k, fp->defs[i].dims.d, 
fp->defs[i].type, - NULL, fp->defs[i].data, 0, NPY_ARRAY_FARRAY, NULL); - if (v == NULL) - return NULL; - /* Py_INCREF(v); */ - return v; - } - else { /* array is not allocated */ - Py_RETURN_NONE; - } - } - if (strcmp(name, "__dict__") == 0) { - Py_INCREF(fp->dict); - return fp->dict; - } - if (strcmp(name, "__doc__") == 0) { - PyObject *s = PyUnicode_FromString(""), *s2, *s3; - for (i = 0; i < fp->len; i++) { - s2 = fortran_doc(fp->defs[i]); - s3 = PyUnicode_Concat(s, s2); - Py_DECREF(s2); - Py_DECREF(s); - s = s3; - } - if (PyDict_SetItemString(fp->dict, name, s)) - return NULL; - return s; - } - if ((strcmp(name, "_cpointer") == 0) && (fp->len == 1)) { - PyObject *cobj = - F2PyCapsule_FromVoidPtr((void *)(fp->defs[0].data), NULL); - if (PyDict_SetItemString(fp->dict, name, cobj)) - return NULL; - return cobj; - } - PyObject *str, *ret; - str = PyUnicode_FromString(name); - ret = PyObject_GenericGetAttr((PyObject *)fp, str); - Py_DECREF(str); - return ret; -} - -static int -fortran_setattr(PyFortranObject *fp, char *name, PyObject *v) -{ - int i, j, flag; - PyArrayObject *arr = NULL; - for (i = 0, j = 1; i < fp->len && (j = strcmp(name, fp->defs[i].name)); - i++) - ; - if (j == 0) { - if (fp->defs[i].rank == -1) { - PyErr_SetString(PyExc_AttributeError, - "over-writing fortran routine"); - return -1; - } - if (fp->defs[i].func != NULL) { /* is allocatable array */ - npy_intp dims[F2PY_MAX_DIMS]; - int k; - save_def = &fp->defs[i]; - if (v != Py_None) { /* set new value (reallocate if needed -- - see f2py generated code for more - details ) */ - for (k = 0; k < fp->defs[i].rank; k++) dims[k] = -1; - if ((arr = array_from_pyobj(fp->defs[i].type, dims, - fp->defs[i].rank, F2PY_INTENT_IN, - v)) == NULL) - return -1; - (*(fp->defs[i].func))(&fp->defs[i].rank, PyArray_DIMS(arr), - set_data, &flag); - } - else { /* deallocate */ - for (k = 0; k < fp->defs[i].rank; k++) dims[k] = 0; - (*(fp->defs[i].func))(&fp->defs[i].rank, dims, set_data, - &flag); - for (k = 0; k < fp->defs[i].rank; k++) dims[k] = -1; - } - memcpy(fp->defs[i].dims.d, dims, - fp->defs[i].rank * sizeof(npy_intp)); - } - else { /* not allocatable array */ - if ((arr = array_from_pyobj(fp->defs[i].type, fp->defs[i].dims.d, - fp->defs[i].rank, F2PY_INTENT_IN, - v)) == NULL) - return -1; - } - if (fp->defs[i].data != - NULL) { /* copy Python object to Fortran array */ - npy_intp s = PyArray_MultiplyList(fp->defs[i].dims.d, - PyArray_NDIM(arr)); - if (s == -1) - s = PyArray_MultiplyList(PyArray_DIMS(arr), PyArray_NDIM(arr)); - if (s < 0 || (memcpy(fp->defs[i].data, PyArray_DATA(arr), - s * PyArray_ITEMSIZE(arr))) == NULL) { - if ((PyObject *)arr != v) { - Py_DECREF(arr); - } - return -1; - } - if ((PyObject *)arr != v) { - Py_DECREF(arr); - } - } - else - return (fp->defs[i].func == NULL ? 
-1 : 0); - return 0; /* successful */ - } - if (fp->dict == NULL) { - fp->dict = PyDict_New(); - if (fp->dict == NULL) - return -1; - } - if (v == NULL) { - int rv = PyDict_DelItemString(fp->dict, name); - if (rv < 0) - PyErr_SetString(PyExc_AttributeError, - "delete non-existing fortran attribute"); - return rv; - } - else - return PyDict_SetItemString(fp->dict, name, v); -} - -static PyObject * -fortran_call(PyFortranObject *fp, PyObject *arg, PyObject *kw) -{ - int i = 0; - /* printf("fortran call - name=%s,func=%p,data=%p,%p\n",fp->defs[i].name, - fp->defs[i].func,fp->defs[i].data,&fp->defs[i].data); */ - if (fp->defs[i].rank == -1) { /* is Fortran routine */ - if (fp->defs[i].func == NULL) { - PyErr_Format(PyExc_RuntimeError, "no function to call"); - return NULL; - } - else if (fp->defs[i].data == NULL) - /* dummy routine */ - return (*((fortranfunc)(fp->defs[i].func)))((PyObject *)fp, arg, - kw, NULL); - else - return (*((fortranfunc)(fp->defs[i].func)))( - (PyObject *)fp, arg, kw, (void *)fp->defs[i].data); - } - PyErr_Format(PyExc_TypeError, "this fortran object is not callable"); - return NULL; -} - -static PyObject * -fortran_repr(PyFortranObject *fp) -{ - PyObject *name = NULL, *repr = NULL; - name = PyObject_GetAttrString((PyObject *)fp, "__name__"); - PyErr_Clear(); - if (name != NULL && PyUnicode_Check(name)) { - repr = PyUnicode_FromFormat("", name); - } - else { - repr = PyUnicode_FromString(""); - } - Py_XDECREF(name); - return repr; -} - -PyTypeObject PyFortran_Type = { - PyVarObject_HEAD_INIT(NULL, 0).tp_name = "fortran", - .tp_basicsize = sizeof(PyFortranObject), - .tp_dealloc = (destructor)fortran_dealloc, - .tp_getattr = (getattrfunc)fortran_getattr, - .tp_setattr = (setattrfunc)fortran_setattr, - .tp_repr = (reprfunc)fortran_repr, - .tp_call = (ternaryfunc)fortran_call, -}; - -/************************* f2py_report_atexit *******************************/ - -#ifdef F2PY_REPORT_ATEXIT -static int passed_time = 0; -static int passed_counter = 0; -static int passed_call_time = 0; -static struct timeb start_time; -static struct timeb stop_time; -static struct timeb start_call_time; -static struct timeb stop_call_time; -static int cb_passed_time = 0; -static int cb_passed_counter = 0; -static int cb_passed_call_time = 0; -static struct timeb cb_start_time; -static struct timeb cb_stop_time; -static struct timeb cb_start_call_time; -static struct timeb cb_stop_call_time; - -extern void -f2py_start_clock(void) -{ - ftime(&start_time); -} -extern void -f2py_start_call_clock(void) -{ - f2py_stop_clock(); - ftime(&start_call_time); -} -extern void -f2py_stop_clock(void) -{ - ftime(&stop_time); - passed_time += 1000 * (stop_time.time - start_time.time); - passed_time += stop_time.millitm - start_time.millitm; -} -extern void -f2py_stop_call_clock(void) -{ - ftime(&stop_call_time); - passed_call_time += 1000 * (stop_call_time.time - start_call_time.time); - passed_call_time += stop_call_time.millitm - start_call_time.millitm; - passed_counter += 1; - f2py_start_clock(); -} - -extern void -f2py_cb_start_clock(void) -{ - ftime(&cb_start_time); -} -extern void -f2py_cb_start_call_clock(void) -{ - f2py_cb_stop_clock(); - ftime(&cb_start_call_time); -} -extern void -f2py_cb_stop_clock(void) -{ - ftime(&cb_stop_time); - cb_passed_time += 1000 * (cb_stop_time.time - cb_start_time.time); - cb_passed_time += cb_stop_time.millitm - cb_start_time.millitm; -} -extern void -f2py_cb_stop_call_clock(void) -{ - ftime(&cb_stop_call_time); - cb_passed_call_time += - 1000 * (cb_stop_call_time.time 
- cb_start_call_time.time); - cb_passed_call_time += - cb_stop_call_time.millitm - cb_start_call_time.millitm; - cb_passed_counter += 1; - f2py_cb_start_clock(); -} - -static int f2py_report_on_exit_been_here = 0; -extern void -f2py_report_on_exit(int exit_flag, void *name) -{ - if (f2py_report_on_exit_been_here) { - fprintf(stderr, " %s\n", (char *)name); - return; - } - f2py_report_on_exit_been_here = 1; - fprintf(stderr, " /-----------------------\\\n"); - fprintf(stderr, " < F2PY performance report >\n"); - fprintf(stderr, " \\-----------------------/\n"); - fprintf(stderr, "Overall time spent in ...\n"); - fprintf(stderr, "(a) wrapped (Fortran/C) functions : %8d msec\n", - passed_call_time); - fprintf(stderr, "(b) f2py interface, %6d calls : %8d msec\n", - passed_counter, passed_time); - fprintf(stderr, "(c) call-back (Python) functions : %8d msec\n", - cb_passed_call_time); - fprintf(stderr, "(d) f2py call-back interface, %6d calls : %8d msec\n", - cb_passed_counter, cb_passed_time); - - fprintf(stderr, - "(e) wrapped (Fortran/C) functions (actual) : %8d msec\n\n", - passed_call_time - cb_passed_call_time - cb_passed_time); - fprintf(stderr, - "Use -DF2PY_REPORT_ATEXIT_DISABLE to disable this message.\n"); - fprintf(stderr, "Exit status: %d\n", exit_flag); - fprintf(stderr, "Modules : %s\n", (char *)name); -} -#endif - -/********************** report on array copy ****************************/ - -#ifdef F2PY_REPORT_ON_ARRAY_COPY -static void -f2py_report_on_array_copy(PyArrayObject *arr) -{ - const npy_intp arr_size = PyArray_Size((PyObject *)arr); - if (arr_size > F2PY_REPORT_ON_ARRAY_COPY) { - fprintf(stderr, - "copied an array: size=%ld, elsize=%" NPY_INTP_FMT "\n", - arr_size, (npy_intp)PyArray_ITEMSIZE(arr)); - } -} -static void -f2py_report_on_array_copy_fromany(void) -{ - fprintf(stderr, "created an array from object\n"); -} - -#define F2PY_REPORT_ON_ARRAY_COPY_FROMARR \ - f2py_report_on_array_copy((PyArrayObject *)arr) -#define F2PY_REPORT_ON_ARRAY_COPY_FROMANY f2py_report_on_array_copy_fromany() -#else -#define F2PY_REPORT_ON_ARRAY_COPY_FROMARR -#define F2PY_REPORT_ON_ARRAY_COPY_FROMANY -#endif - -/************************* array_from_obj *******************************/ - -/* - * File: array_from_pyobj.c - * - * Description: - * ------------ - * Provides array_from_pyobj function that returns a contiguous array - * object with the given dimensions and required storage order, either - * in row-major (C) or column-major (Fortran) order. The function - * array_from_pyobj is very flexible about its Python object argument - * that can be any number, list, tuple, or array. - * - * array_from_pyobj is used in f2py generated Python extension - * modules. 
- * - * Author: Pearu Peterson - * Created: 13-16 January 2002 - * $Id: fortranobject.c,v 1.52 2005/07/11 07:44:20 pearu Exp $ - */ - -static int check_and_fix_dimensions(const PyArrayObject* arr, - const int rank, - npy_intp *dims, - const char *errmess); - -static int -find_first_negative_dimension(const int rank, const npy_intp *dims) -{ - int i; - for (i = 0; i < rank; ++i) { - if (dims[i] < 0) { - return i; - } - } - return -1; -} - -#ifdef DEBUG_COPY_ND_ARRAY -void -dump_dims(int rank, npy_intp const *dims) -{ - int i; - printf("["); - for (i = 0; i < rank; ++i) { - printf("%3" NPY_INTP_FMT, dims[i]); - } - printf("]\n"); -} -void -dump_attrs(const PyArrayObject *obj) -{ - const PyArrayObject_fields *arr = (const PyArrayObject_fields *)obj; - int rank = PyArray_NDIM(arr); - npy_intp size = PyArray_Size((PyObject *)arr); - printf("\trank = %d, flags = %d, size = %" NPY_INTP_FMT "\n", rank, - arr->flags, size); - printf("\tstrides = "); - dump_dims(rank, arr->strides); - printf("\tdimensions = "); - dump_dims(rank, arr->dimensions); -} -#endif - -#define SWAPTYPE(a, b, t) \ - { \ - t c; \ - c = (a); \ - (a) = (b); \ - (b) = c; \ - } - -static int -swap_arrays(PyArrayObject *obj1, PyArrayObject *obj2) -{ - PyArrayObject_fields *arr1 = (PyArrayObject_fields *)obj1, - *arr2 = (PyArrayObject_fields *)obj2; - SWAPTYPE(arr1->data, arr2->data, char *); - SWAPTYPE(arr1->nd, arr2->nd, int); - SWAPTYPE(arr1->dimensions, arr2->dimensions, npy_intp *); - SWAPTYPE(arr1->strides, arr2->strides, npy_intp *); - SWAPTYPE(arr1->base, arr2->base, PyObject *); - SWAPTYPE(arr1->descr, arr2->descr, PyArray_Descr *); - SWAPTYPE(arr1->flags, arr2->flags, int); - /* SWAPTYPE(arr1->weakreflist,arr2->weakreflist,PyObject*); */ - return 0; -} - -#define ARRAY_ISCOMPATIBLE(arr,type_num) \ - ((PyArray_ISINTEGER(arr) && PyTypeNum_ISINTEGER(type_num)) || \ - (PyArray_ISFLOAT(arr) && PyTypeNum_ISFLOAT(type_num)) || \ - (PyArray_ISCOMPLEX(arr) && PyTypeNum_ISCOMPLEX(type_num)) || \ - (PyArray_ISBOOL(arr) && PyTypeNum_ISBOOL(type_num)) || \ - (PyArray_ISSTRING(arr) && PyTypeNum_ISSTRING(type_num))) - -static int -get_elsize(PyObject *obj) { - /* - get_elsize determines array itemsize from a Python object. Returns - elsize if successful, -1 otherwise. - - Supported types of the input are: numpy.ndarray, bytes, str, tuple, - list. - */ - - if (PyArray_Check(obj)) { - return PyArray_DESCR((PyArrayObject *)obj)->elsize; - } else if (PyBytes_Check(obj)) { - return PyBytes_GET_SIZE(obj); - } else if (PyUnicode_Check(obj)) { - return PyUnicode_GET_LENGTH(obj); - } else if (PySequence_Check(obj)) { - PyObject* fast = PySequence_Fast(obj, "f2py:fortranobject.c:get_elsize"); - if (fast != NULL) { - Py_ssize_t i, n = PySequence_Fast_GET_SIZE(fast); - int sz, elsize = 0; - for (i=0; i elsize) { - elsize = sz; - } - } - Py_DECREF(fast); - return elsize; - } - } - return -1; -} - -extern PyArrayObject * -ndarray_from_pyobj(const int type_num, - const int elsize_, - npy_intp *dims, - const int rank, - const int intent, - PyObject *obj, - const char *errmess) { - /* - * Return an array with given element type and shape from a Python - * object while taking into account the usage intent of the array. - * - * - element type is defined by type_num and elsize - * - shape is defined by dims and rank - * - * ndarray_from_pyobj is used to convert Python object arguments - * to numpy ndarrays with given type and shape that data is passed - * to interfaced Fortran or C functions. 
- * - * errmess (if not NULL), contains a prefix of an error message - * for an exception to be triggered within this function. - * - * Negative elsize value means that elsize is to be determined - * from the Python object in runtime. - * - * Note on strings - * --------------- - * - * String type (type_num == NPY_STRING) does not have fixed - * element size and, by default, the type object sets it to - * 0. Therefore, for string types, one has to use elsize - * argument. For other types, elsize value is ignored. - * - * NumPy defines the type of a fixed-width string as - * dtype('S'). In addition, there is also dtype('c'), that - * appears as dtype('S1') (these have the same type_num value), - * but is actually different (.char attribute is either 'S' or - * 'c', respecitely). - * - * In Fortran, character arrays and strings are different - * concepts. The relation between Fortran types, NumPy dtypes, - * and type_num-elsize pairs, is defined as follows: - * - * character*5 foo | dtype('S5') | elsize=5, shape=() - * character(5) foo | dtype('S1') | elsize=1, shape=(5) - * character*5 foo(n) | dtype('S5') | elsize=5, shape=(n,) - * character(5) foo(n) | dtype('S1') | elsize=1, shape=(5, n) - * character*(*) foo | dtype('S') | elsize=-1, shape=() - * - * Note about reference counting - * ----------------------------- - * - * If the caller returns the array to Python, it must be done with - * Py_BuildValue("N",arr). Otherwise, if obj!=arr then the caller - * must call Py_DECREF(arr). - * - * Note on intent(cache,out,..) - * ---------------------------- - * Don't expect correct data when returning intent(cache) array. - * - */ - char mess[F2PY_MESSAGE_BUFFER_SIZE]; - PyArrayObject *arr = NULL; - int elsize = (elsize_ < 0 ? get_elsize(obj) : elsize_); - if (elsize < 0) { - if (errmess != NULL) { - strcpy(mess, errmess); - } - sprintf(mess + strlen(mess), - " -- failed to determine element size from %s", - Py_TYPE(obj)->tp_name); - PyErr_SetString(PyExc_SystemError, mess); - return NULL; - } - PyArray_Descr * descr = get_descr_from_type_and_elsize(type_num, elsize); // new reference - if (descr == NULL) { - return NULL; - } - elsize = descr->elsize; - if ((intent & F2PY_INTENT_HIDE) - || ((intent & F2PY_INTENT_CACHE) && (obj == Py_None)) - || ((intent & F2PY_OPTIONAL) && (obj == Py_None)) - ) { - /* intent(cache), optional, intent(hide) */ - int ineg = find_first_negative_dimension(rank, dims); - if (ineg >= 0) { - int i; - strcpy(mess, "failed to create intent(cache|hide)|optional array" - "-- must have defined dimensions but got ("); - for(i = 0; i < rank; ++i) - sprintf(mess + strlen(mess), "%" NPY_INTP_FMT ",", dims[i]); - strcat(mess, ")"); - PyErr_SetString(PyExc_ValueError, mess); - Py_DECREF(descr); - return NULL; - } - arr = (PyArrayObject *) \ - PyArray_NewFromDescr(&PyArray_Type, descr, rank, dims, - NULL, NULL, !(intent & F2PY_INTENT_C), NULL); - if (arr == NULL) { - Py_DECREF(descr); - return NULL; - } - if (PyArray_ITEMSIZE(arr) != elsize) { - strcpy(mess, "failed to create intent(cache|hide)|optional array"); - sprintf(mess+strlen(mess)," -- expected elsize=%d got %" NPY_INTP_FMT, elsize, (npy_intp)PyArray_ITEMSIZE(arr)); - PyErr_SetString(PyExc_ValueError,mess); - Py_DECREF(arr); - return NULL; - } - if (!(intent & F2PY_INTENT_CACHE)) { - PyArray_FILLWBYTE(arr, 0); - } - return arr; - } - - if (PyArray_Check(obj)) { - arr = (PyArrayObject *)obj; - if (intent & F2PY_INTENT_CACHE) { - /* intent(cache) */ - if (PyArray_ISONESEGMENT(arr) - && PyArray_ITEMSIZE(arr) >= elsize) { - if 
(check_and_fix_dimensions(arr, rank, dims, errmess)) { - Py_DECREF(descr); - return NULL; - } - if (intent & F2PY_INTENT_OUT) - Py_INCREF(arr); - Py_DECREF(descr); - return arr; - } - strcpy(mess, "failed to initialize intent(cache) array"); - if (!PyArray_ISONESEGMENT(arr)) - strcat(mess, " -- input must be in one segment"); - if (PyArray_ITEMSIZE(arr) < elsize) - sprintf(mess + strlen(mess), - " -- expected at least elsize=%d but got " - "%" NPY_INTP_FMT, - elsize, (npy_intp)PyArray_ITEMSIZE(arr)); - PyErr_SetString(PyExc_ValueError, mess); - Py_DECREF(descr); - return NULL; - } - - /* here we have always intent(in) or intent(inout) or intent(inplace) - */ - - if (check_and_fix_dimensions(arr, rank, dims, errmess)) { - Py_DECREF(descr); - return NULL; - } - /* - printf("intent alignment=%d\n", F2PY_GET_ALIGNMENT(intent)); - printf("alignment check=%d\n", F2PY_CHECK_ALIGNMENT(arr, intent)); - int i; - for (i=1;i<=16;i++) - printf("i=%d isaligned=%d\n", i, ARRAY_ISALIGNED(arr, i)); - */ - if ((! (intent & F2PY_INTENT_COPY)) && - PyArray_ITEMSIZE(arr) == elsize && - ARRAY_ISCOMPATIBLE(arr,type_num) && - F2PY_CHECK_ALIGNMENT(arr, intent)) { - if ((intent & F2PY_INTENT_INOUT || intent & F2PY_INTENT_INPLACE) - ? ((intent & F2PY_INTENT_C) ? PyArray_ISCARRAY(arr) : PyArray_ISFARRAY(arr)) - : ((intent & F2PY_INTENT_C) ? PyArray_ISCARRAY_RO(arr) : PyArray_ISFARRAY_RO(arr))) { - if ((intent & F2PY_INTENT_OUT)) { - Py_INCREF(arr); - } - /* Returning input array */ - Py_DECREF(descr); - return arr; - } - } - if (intent & F2PY_INTENT_INOUT) { - strcpy(mess, "failed to initialize intent(inout) array"); - /* Must use PyArray_IS*ARRAY because intent(inout) requires - * writable input */ - if ((intent & F2PY_INTENT_C) && !PyArray_ISCARRAY(arr)) - strcat(mess, " -- input not contiguous"); - if (!(intent & F2PY_INTENT_C) && !PyArray_ISFARRAY(arr)) - strcat(mess, " -- input not fortran contiguous"); - if (PyArray_ITEMSIZE(arr) != elsize) - sprintf(mess + strlen(mess), - " -- expected elsize=%d but got %" NPY_INTP_FMT, - elsize, - (npy_intp)PyArray_ITEMSIZE(arr) - ); - if (!(ARRAY_ISCOMPATIBLE(arr, type_num))) { - sprintf(mess + strlen(mess), - " -- input '%c' not compatible to '%c'", - PyArray_DESCR(arr)->type, descr->type); - } - if (!(F2PY_CHECK_ALIGNMENT(arr, intent))) - sprintf(mess + strlen(mess), " -- input not %d-aligned", - F2PY_GET_ALIGNMENT(intent)); - PyErr_SetString(PyExc_ValueError, mess); - Py_DECREF(descr); - return NULL; - } - - /* here we have always intent(in) or intent(inplace) */ - - { - PyArrayObject * retarr = (PyArrayObject *) \ - PyArray_NewFromDescr(&PyArray_Type, descr, PyArray_NDIM(arr), PyArray_DIMS(arr), - NULL, NULL, !(intent & F2PY_INTENT_C), NULL); - if (retarr==NULL) { - Py_DECREF(descr); - return NULL; - } - F2PY_REPORT_ON_ARRAY_COPY_FROMARR; - if (PyArray_CopyInto(retarr, arr)) { - Py_DECREF(retarr); - return NULL; - } - if (intent & F2PY_INTENT_INPLACE) { - if (swap_arrays(arr,retarr)) { - Py_DECREF(retarr); - return NULL; /* XXX: set exception */ - } - Py_XDECREF(retarr); - if (intent & F2PY_INTENT_OUT) - Py_INCREF(arr); - } else { - arr = retarr; - } - } - return arr; - } - - if ((intent & F2PY_INTENT_INOUT) || (intent & F2PY_INTENT_INPLACE) || - (intent & F2PY_INTENT_CACHE)) { - PyErr_Format(PyExc_TypeError, - "failed to initialize intent(inout|inplace|cache) " - "array, input '%s' object is not an array", - Py_TYPE(obj)->tp_name); - Py_DECREF(descr); - return NULL; - } - - { - F2PY_REPORT_ON_ARRAY_COPY_FROMANY; - arr = (PyArrayObject *)PyArray_FromAny( - obj, descr, 0, 
0, - ((intent & F2PY_INTENT_C) ? NPY_ARRAY_CARRAY - : NPY_ARRAY_FARRAY) | - NPY_ARRAY_FORCECAST, - NULL); - // Warning: in the case of NPY_STRING, PyArray_FromAny may - // reset descr->elsize, e.g. dtype('S0') becomes dtype('S1'). - if (arr == NULL) { - Py_DECREF(descr); - return NULL; - } - if (type_num != NPY_STRING && PyArray_ITEMSIZE(arr) != elsize) { - // This is internal sanity tests: elsize has been set to - // descr->elsize in the beginning of this function. - strcpy(mess, "failed to initialize intent(in) array"); - sprintf(mess + strlen(mess), - " -- expected elsize=%d got %" NPY_INTP_FMT, elsize, - (npy_intp)PyArray_ITEMSIZE(arr)); - PyErr_SetString(PyExc_ValueError, mess); - Py_DECREF(arr); - return NULL; - } - if (check_and_fix_dimensions(arr, rank, dims, errmess)) { - Py_DECREF(arr); - return NULL; - } - return arr; - } -} - -extern PyArrayObject * -array_from_pyobj(const int type_num, - npy_intp *dims, - const int rank, - const int intent, - PyObject *obj) { - /* - Same as ndarray_from_pyobj but with elsize determined from type, - if possible. Provided for backward compatibility. - */ - PyArray_Descr* descr = PyArray_DescrFromType(type_num); - int elsize = descr->elsize; - Py_DECREF(descr); - return ndarray_from_pyobj(type_num, elsize, dims, rank, intent, obj, NULL); -} - -/*****************************************/ -/* Helper functions for array_from_pyobj */ -/*****************************************/ - -static int -check_and_fix_dimensions(const PyArrayObject* arr, const int rank, - npy_intp *dims, const char *errmess) -{ - /* - * This function fills in blanks (that are -1's) in dims list using - * the dimensions from arr. It also checks that non-blank dims will - * match with the corresponding values in arr dimensions. - * - * Returns 0 if the function is successful. - * - * If an error condition is detected, an exception is set and 1 is - * returned. - */ - char mess[F2PY_MESSAGE_BUFFER_SIZE]; - const npy_intp arr_size = - (PyArray_NDIM(arr)) ? PyArray_Size((PyObject *)arr) : 1; -#ifdef DEBUG_COPY_ND_ARRAY - dump_attrs(arr); - printf("check_and_fix_dimensions:init: dims="); - dump_dims(rank, dims); -#endif - if (rank > PyArray_NDIM(arr)) { /* [1,2] -> [[1],[2]]; 1 -> [[1]] */ - npy_intp new_size = 1; - int free_axe = -1; - int i; - npy_intp d; - /* Fill dims where -1 or 0; check dimensions; calc new_size; */ - for (i = 0; i < PyArray_NDIM(arr); ++i) { - d = PyArray_DIM(arr, i); - if (dims[i] >= 0) { - if (d > 1 && dims[i] != d) { - PyErr_Format( - PyExc_ValueError, - "%d-th dimension must be fixed to %" NPY_INTP_FMT - " but got %" NPY_INTP_FMT "\n", - i, dims[i], d); - return 1; - } - if (!dims[i]) - dims[i] = 1; - } - else { - dims[i] = d ? 
d : 1; - } - new_size *= dims[i]; - } - for (i = PyArray_NDIM(arr); i < rank; ++i) - if (dims[i] > 1) { - PyErr_Format(PyExc_ValueError, - "%d-th dimension must be %" NPY_INTP_FMT - " but got 0 (not defined).\n", - i, dims[i]); - return 1; - } - else if (free_axe < 0) - free_axe = i; - else - dims[i] = 1; - if (free_axe >= 0) { - dims[free_axe] = arr_size / new_size; - new_size *= dims[free_axe]; - } - if (new_size != arr_size) { - PyErr_Format(PyExc_ValueError, - "unexpected array size: new_size=%" NPY_INTP_FMT - ", got array with arr_size=%" NPY_INTP_FMT - " (maybe too many free indices)\n", - new_size, arr_size); - return 1; - } - } - else if (rank == PyArray_NDIM(arr)) { - npy_intp new_size = 1; - int i; - npy_intp d; - for (i = 0; i < rank; ++i) { - d = PyArray_DIM(arr, i); - if (dims[i] >= 0) { - if (d > 1 && d != dims[i]) { - if (errmess != NULL) { - strcpy(mess, errmess); - } - sprintf(mess + strlen(mess), - " -- %d-th dimension must be fixed to %" - NPY_INTP_FMT " but got %" NPY_INTP_FMT, - i, dims[i], d); - PyErr_SetString(PyExc_ValueError, mess); - return 1; - } - if (!dims[i]) - dims[i] = 1; - } - else - dims[i] = d; - new_size *= dims[i]; - } - if (new_size != arr_size) { - PyErr_Format(PyExc_ValueError, - "unexpected array size: new_size=%" NPY_INTP_FMT - ", got array with arr_size=%" NPY_INTP_FMT "\n", - new_size, arr_size); - return 1; - } - } - else { /* [[1,2]] -> [[1],[2]] */ - int i, j; - npy_intp d; - int effrank; - npy_intp size; - for (i = 0, effrank = 0; i < PyArray_NDIM(arr); ++i) - if (PyArray_DIM(arr, i) > 1) - ++effrank; - if (dims[rank - 1] >= 0) - if (effrank > rank) { - PyErr_Format(PyExc_ValueError, - "too many axes: %d (effrank=%d), " - "expected rank=%d\n", - PyArray_NDIM(arr), effrank, rank); - return 1; - } - - for (i = 0, j = 0; i < rank; ++i) { - while (j < PyArray_NDIM(arr) && PyArray_DIM(arr, j) < 2) ++j; - if (j >= PyArray_NDIM(arr)) - d = 1; - else - d = PyArray_DIM(arr, j++); - if (dims[i] >= 0) { - if (d > 1 && d != dims[i]) { - if (errmess != NULL) { - strcpy(mess, errmess); - } - sprintf(mess + strlen(mess), - " -- %d-th dimension must be fixed to %" - NPY_INTP_FMT " but got %" NPY_INTP_FMT - " (real index=%d)\n", - i, dims[i], d, j-1); - PyErr_SetString(PyExc_ValueError, mess); - return 1; - } - if (!dims[i]) - dims[i] = 1; - } - else - dims[i] = d; - } - - for (i = rank; i < PyArray_NDIM(arr); - ++i) { /* [[1,2],[3,4]] -> [1,2,3,4] */ - while (j < PyArray_NDIM(arr) && PyArray_DIM(arr, j) < 2) ++j; - if (j >= PyArray_NDIM(arr)) - d = 1; - else - d = PyArray_DIM(arr, j++); - dims[rank - 1] *= d; - } - for (i = 0, size = 1; i < rank; ++i) size *= dims[i]; - if (size != arr_size) { - char msg[200]; - int len; - snprintf(msg, sizeof(msg), - "unexpected array size: size=%" NPY_INTP_FMT - ", arr_size=%" NPY_INTP_FMT - ", rank=%d, effrank=%d, arr.nd=%d, dims=[", - size, arr_size, rank, effrank, PyArray_NDIM(arr)); - for (i = 0; i < rank; ++i) { - len = strlen(msg); - snprintf(msg + len, sizeof(msg) - len, " %" NPY_INTP_FMT, - dims[i]); - } - len = strlen(msg); - snprintf(msg + len, sizeof(msg) - len, " ], arr.dims=["); - for (i = 0; i < PyArray_NDIM(arr); ++i) { - len = strlen(msg); - snprintf(msg + len, sizeof(msg) - len, " %" NPY_INTP_FMT, - PyArray_DIM(arr, i)); - } - len = strlen(msg); - snprintf(msg + len, sizeof(msg) - len, " ]\n"); - PyErr_SetString(PyExc_ValueError, msg); - return 1; - } - } -#ifdef DEBUG_COPY_ND_ARRAY - printf("check_and_fix_dimensions:end: dims="); - dump_dims(rank, dims); -#endif - return 0; -} - -/* End of file: 
array_from_pyobj.c */ - -/************************* copy_ND_array *******************************/ - -extern int -copy_ND_array(const PyArrayObject *arr, PyArrayObject *out) -{ - F2PY_REPORT_ON_ARRAY_COPY_FROMARR; - return PyArray_CopyInto(out, (PyArrayObject *)arr); -} - -/********************* Various utility functions ***********************/ - -extern int -f2py_describe(PyObject *obj, char *buf) { - /* - Write the description of a Python object to buf. The caller must - provide buffer with size sufficient to write the description. - - Return 1 on success. - */ - char localbuf[F2PY_MESSAGE_BUFFER_SIZE]; - if (PyBytes_Check(obj)) { - sprintf(localbuf, "%d-%s", (npy_int)PyBytes_GET_SIZE(obj), Py_TYPE(obj)->tp_name); - } else if (PyUnicode_Check(obj)) { - sprintf(localbuf, "%d-%s", (npy_int)PyUnicode_GET_LENGTH(obj), Py_TYPE(obj)->tp_name); - } else if (PyArray_CheckScalar(obj)) { - PyArrayObject* arr = (PyArrayObject*)obj; - sprintf(localbuf, "%c%" NPY_INTP_FMT "-%s-scalar", PyArray_DESCR(arr)->kind, PyArray_ITEMSIZE(arr), Py_TYPE(obj)->tp_name); - } else if (PyArray_Check(obj)) { - int i; - PyArrayObject* arr = (PyArrayObject*)obj; - strcpy(localbuf, "("); - for (i=0; ikind, PyArray_ITEMSIZE(arr), Py_TYPE(obj)->tp_name); - } else if (PySequence_Check(obj)) { - sprintf(localbuf, "%d-%s", (npy_int)PySequence_Length(obj), Py_TYPE(obj)->tp_name); - } else { - sprintf(localbuf, "%s instance", Py_TYPE(obj)->tp_name); - } - // TODO: detect the size of buf and make sure that size(buf) >= size(localbuf). - strcpy(buf, localbuf); - return 1; -} - -extern npy_intp -f2py_size_impl(PyArrayObject* var, ...) -{ - npy_intp sz = 0; - npy_intp dim; - npy_intp rank; - va_list argp; - va_start(argp, var); - dim = va_arg(argp, npy_int); - if (dim==-1) - { - sz = PyArray_SIZE(var); - } - else - { - rank = PyArray_NDIM(var); - if (dim>=1 && dim<=rank) - sz = PyArray_DIM(var, dim-1); - else - fprintf(stderr, "f2py_size: 2nd argument value=%" NPY_INTP_FMT - " fails to satisfy 1<=value<=%" NPY_INTP_FMT - ". 
Result will be 0.\n", dim, rank); - } - va_end(argp); - return sz; -} - -/*********************************************/ -/* Compatibility functions for Python >= 3.0 */ -/*********************************************/ - -PyObject * -F2PyCapsule_FromVoidPtr(void *ptr, void (*dtor)(PyObject *)) -{ - PyObject *ret = PyCapsule_New(ptr, NULL, dtor); - if (ret == NULL) { - PyErr_Clear(); - } - return ret; -} - -void * -F2PyCapsule_AsVoidPtr(PyObject *obj) -{ - void *ret = PyCapsule_GetPointer(obj, NULL); - if (ret == NULL) { - PyErr_Clear(); - } - return ret; -} - -int -F2PyCapsule_Check(PyObject *ptr) -{ - return PyCapsule_CheckExact(ptr); -} - -#ifdef __cplusplus -} -#endif -/************************* EOF fortranobject.c *******************************/ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/tests/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/indexing/test_get.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/indexing/test_get.py deleted file mode 100644 index 5f2651eec683c10097fb623728048b64778c87e8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/indexing/test_get.py +++ /dev/null @@ -1,27 +0,0 @@ -import pytest - -from pandas import DataFrame -import pandas._testing as tm - - -class TestGet: - def test_get(self, float_frame): - b = float_frame.get("B") - tm.assert_series_equal(b, float_frame["B"]) - - assert float_frame.get("foo") is None - tm.assert_series_equal( - float_frame.get("foo", float_frame["B"]), float_frame["B"] - ) - - @pytest.mark.parametrize( - "df", - [ - DataFrame(), - DataFrame(columns=list("AB")), - DataFrame(columns=list("AB"), index=range(3)), - ], - ) - def test_get_none(self, df): - # see gh-5652 - assert df.get(None) is None diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexing/multiindex/test_sorted.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexing/multiindex/test_sorted.py deleted file mode 100644 index cf3fa5296c97c313292a0581cb776931c121fd52..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexing/multiindex/test_sorted.py +++ /dev/null @@ -1,153 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - NA, - DataFrame, - MultiIndex, - Series, - array, -) -import pandas._testing as tm - - -class TestMultiIndexSorted: - def test_getitem_multilevel_index_tuple_not_sorted(self): - index_columns = list("abc") - df = DataFrame( - [[0, 1, 0, "x"], [0, 0, 1, "y"]], columns=index_columns + ["data"] - ) - df = df.set_index(index_columns) - query_index = df.index[:1] - rs = df.loc[query_index, "data"] - - xp_idx = MultiIndex.from_tuples([(0, 1, 0)], names=["a", "b", "c"]) - xp = Series(["x"], index=xp_idx, name="data") - tm.assert_series_equal(rs, xp) - - def test_getitem_slice_not_sorted(self, multiindex_dataframe_random_data): - frame = multiindex_dataframe_random_data - df = frame.sort_index(level=1).T - - # buglet with int typechecking - result = df.iloc[:, : np.int32(3)] - expected = df.reindex(columns=df.columns[:3]) - tm.assert_frame_equal(result, expected) - - 
@pytest.mark.parametrize("key", [None, lambda x: x]) - def test_frame_getitem_not_sorted2(self, key): - # 13431 - df = DataFrame( - { - "col1": ["b", "d", "b", "a"], - "col2": [3, 1, 1, 2], - "data": ["one", "two", "three", "four"], - } - ) - - df2 = df.set_index(["col1", "col2"]) - df2_original = df2.copy() - - df2.index = df2.index.set_levels(["b", "d", "a"], level="col1") - df2.index = df2.index.set_codes([0, 1, 0, 2], level="col1") - assert not df2.index.is_monotonic_increasing - - assert df2_original.index.equals(df2.index) - expected = df2.sort_index(key=key) - assert expected.index.is_monotonic_increasing - - result = df2.sort_index(level=0, key=key) - assert result.index.is_monotonic_increasing - tm.assert_frame_equal(result, expected) - - def test_sort_values_key(self): - arrays = [ - ["bar", "bar", "baz", "baz", "qux", "qux", "foo", "foo"], - ["one", "two", "one", "two", "one", "two", "one", "two"], - ] - tuples = zip(*arrays) - index = MultiIndex.from_tuples(tuples) - index = index.sort_values( # sort by third letter - key=lambda x: x.map(lambda entry: entry[2]) - ) - result = DataFrame(range(8), index=index) - - arrays = [ - ["foo", "foo", "bar", "bar", "qux", "qux", "baz", "baz"], - ["one", "two", "one", "two", "one", "two", "one", "two"], - ] - tuples = zip(*arrays) - index = MultiIndex.from_tuples(tuples) - expected = DataFrame(range(8), index=index) - - tm.assert_frame_equal(result, expected) - - def test_argsort_with_na(self): - # GH48495 - arrays = [ - array([2, NA, 1], dtype="Int64"), - array([1, 2, 3], dtype="Int64"), - ] - index = MultiIndex.from_arrays(arrays) - result = index.argsort() - expected = np.array([2, 0, 1], dtype=np.intp) - tm.assert_numpy_array_equal(result, expected) - - def test_sort_values_with_na(self): - # GH48495 - arrays = [ - array([2, NA, 1], dtype="Int64"), - array([1, 2, 3], dtype="Int64"), - ] - index = MultiIndex.from_arrays(arrays) - index = index.sort_values() - result = DataFrame(range(3), index=index) - - arrays = [ - array([1, 2, NA], dtype="Int64"), - array([3, 1, 2], dtype="Int64"), - ] - index = MultiIndex.from_arrays(arrays) - expected = DataFrame(range(3), index=index) - - tm.assert_frame_equal(result, expected) - - def test_frame_getitem_not_sorted(self, multiindex_dataframe_random_data): - frame = multiindex_dataframe_random_data - df = frame.T - df["foo", "four"] = "foo" - - arrays = [np.array(x) for x in zip(*df.columns.values)] - - result = df["foo"] - result2 = df.loc[:, "foo"] - expected = df.reindex(columns=df.columns[arrays[0] == "foo"]) - expected.columns = expected.columns.droplevel(0) - tm.assert_frame_equal(result, expected) - tm.assert_frame_equal(result2, expected) - - df = df.T - result = df.xs("foo") - result2 = df.loc["foo"] - expected = df.reindex(df.index[arrays[0] == "foo"]) - expected.index = expected.index.droplevel(0) - tm.assert_frame_equal(result, expected) - tm.assert_frame_equal(result2, expected) - - def test_series_getitem_not_sorted(self): - arrays = [ - ["bar", "bar", "baz", "baz", "qux", "qux", "foo", "foo"], - ["one", "two", "one", "two", "one", "two", "one", "two"], - ] - tuples = zip(*arrays) - index = MultiIndex.from_tuples(tuples) - s = Series(np.random.default_rng(2).standard_normal(8), index=index) - - arrays = [np.array(x) for x in zip(*index.values)] - - result = s["qux"] - result2 = s.loc["qux"] - expected = s[arrays[0] == "qux"] - expected.index = expected.index.droplevel(0) - tm.assert_series_equal(result, expected) - tm.assert_series_equal(result2, expected) diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_mask.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_mask.py deleted file mode 100644 index 3c21cd0d5ca648dcf0f1ac412dd232221c031c6f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_mask.py +++ /dev/null @@ -1,69 +0,0 @@ -import numpy as np -import pytest - -from pandas import Series -import pandas._testing as tm - - -def test_mask(): - # compare with tested results in test_where - s = Series(np.random.default_rng(2).standard_normal(5)) - cond = s > 0 - - rs = s.where(~cond, np.nan) - tm.assert_series_equal(rs, s.mask(cond)) - - rs = s.where(~cond) - rs2 = s.mask(cond) - tm.assert_series_equal(rs, rs2) - - rs = s.where(~cond, -s) - rs2 = s.mask(cond, -s) - tm.assert_series_equal(rs, rs2) - - cond = Series([True, False, False, True, False], index=s.index) - s2 = -(s.abs()) - rs = s2.where(~cond[:3]) - rs2 = s2.mask(cond[:3]) - tm.assert_series_equal(rs, rs2) - - rs = s2.where(~cond[:3], -s2) - rs2 = s2.mask(cond[:3], -s2) - tm.assert_series_equal(rs, rs2) - - msg = "Array conditional must be same shape as self" - with pytest.raises(ValueError, match=msg): - s.mask(1) - with pytest.raises(ValueError, match=msg): - s.mask(cond[:3].values, -s) - - -def test_mask_casts(): - # dtype changes - ser = Series([1, 2, 3, 4]) - result = ser.mask(ser > 2, np.nan) - expected = Series([1, 2, np.nan, np.nan]) - tm.assert_series_equal(result, expected) - - -def test_mask_casts2(): - # see gh-21891 - ser = Series([1, 2]) - res = ser.mask([True, False]) - - exp = Series([np.nan, 2]) - tm.assert_series_equal(res, exp) - - -def test_mask_inplace(): - s = Series(np.random.default_rng(2).standard_normal(5)) - cond = s > 0 - - rs = s.copy() - rs.mask(cond, inplace=True) - tm.assert_series_equal(rs.dropna(), s[~cond]) - tm.assert_series_equal(rs, s.mask(cond)) - - rs = s.copy() - rs.mask(cond, -s, inplace=True) - tm.assert_series_equal(rs, s.mask(cond, -s)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_quarter.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_quarter.py deleted file mode 100644 index d183645da507da40729d3eadfe32474492148e4c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_quarter.py +++ /dev/null @@ -1,296 +0,0 @@ -""" -Tests for the following offsets: -- QuarterBegin -- QuarterEnd -""" -from __future__ import annotations - -from datetime import datetime - -import pytest - -from pandas.tests.tseries.offsets.common import ( - assert_is_on_offset, - assert_offset_equal, -) - -from pandas.tseries.offsets import ( - QuarterBegin, - QuarterEnd, -) - - -@pytest.mark.parametrize("klass", (QuarterBegin, QuarterEnd)) -def test_quarterly_dont_normalize(klass): - date = datetime(2012, 3, 31, 5, 30) - result = date + klass() - assert result.time() == date.time() - - -@pytest.mark.parametrize("offset", [QuarterBegin(), QuarterEnd()]) -@pytest.mark.parametrize( - "date", - [ - datetime(2016, m, d) - for m in [10, 11, 12] - for d in [1, 2, 3, 28, 29, 30, 31] - if not (m == 11 and d == 31) - ], -) -def test_on_offset(offset, date): - res = offset.is_on_offset(date) - slow_version = date == (date + offset) - offset - assert res == slow_version - - -class 
TestQuarterBegin: - def test_repr(self): - expected = "" - assert repr(QuarterBegin()) == expected - expected = "" - assert repr(QuarterBegin(startingMonth=3)) == expected - expected = "" - assert repr(QuarterBegin(startingMonth=1)) == expected - - def test_is_anchored(self): - assert QuarterBegin(startingMonth=1).is_anchored() - assert QuarterBegin().is_anchored() - assert not QuarterBegin(2, startingMonth=1).is_anchored() - - def test_offset_corner_case(self): - # corner - offset = QuarterBegin(n=-1, startingMonth=1) - assert datetime(2010, 2, 1) + offset == datetime(2010, 1, 1) - - offset_cases = [] - offset_cases.append( - ( - QuarterBegin(startingMonth=1), - { - datetime(2007, 12, 1): datetime(2008, 1, 1), - datetime(2008, 1, 1): datetime(2008, 4, 1), - datetime(2008, 2, 15): datetime(2008, 4, 1), - datetime(2008, 2, 29): datetime(2008, 4, 1), - datetime(2008, 3, 15): datetime(2008, 4, 1), - datetime(2008, 3, 31): datetime(2008, 4, 1), - datetime(2008, 4, 15): datetime(2008, 7, 1), - datetime(2008, 4, 1): datetime(2008, 7, 1), - }, - ) - ) - - offset_cases.append( - ( - QuarterBegin(startingMonth=2), - { - datetime(2008, 1, 1): datetime(2008, 2, 1), - datetime(2008, 1, 31): datetime(2008, 2, 1), - datetime(2008, 1, 15): datetime(2008, 2, 1), - datetime(2008, 2, 29): datetime(2008, 5, 1), - datetime(2008, 3, 15): datetime(2008, 5, 1), - datetime(2008, 3, 31): datetime(2008, 5, 1), - datetime(2008, 4, 15): datetime(2008, 5, 1), - datetime(2008, 4, 30): datetime(2008, 5, 1), - }, - ) - ) - - offset_cases.append( - ( - QuarterBegin(startingMonth=1, n=0), - { - datetime(2008, 1, 1): datetime(2008, 1, 1), - datetime(2008, 12, 1): datetime(2009, 1, 1), - datetime(2008, 1, 1): datetime(2008, 1, 1), - datetime(2008, 2, 15): datetime(2008, 4, 1), - datetime(2008, 2, 29): datetime(2008, 4, 1), - datetime(2008, 3, 15): datetime(2008, 4, 1), - datetime(2008, 3, 31): datetime(2008, 4, 1), - datetime(2008, 4, 15): datetime(2008, 7, 1), - datetime(2008, 4, 30): datetime(2008, 7, 1), - }, - ) - ) - - offset_cases.append( - ( - QuarterBegin(startingMonth=1, n=-1), - { - datetime(2008, 1, 1): datetime(2007, 10, 1), - datetime(2008, 1, 31): datetime(2008, 1, 1), - datetime(2008, 2, 15): datetime(2008, 1, 1), - datetime(2008, 2, 29): datetime(2008, 1, 1), - datetime(2008, 3, 15): datetime(2008, 1, 1), - datetime(2008, 3, 31): datetime(2008, 1, 1), - datetime(2008, 4, 15): datetime(2008, 4, 1), - datetime(2008, 4, 30): datetime(2008, 4, 1), - datetime(2008, 7, 1): datetime(2008, 4, 1), - }, - ) - ) - - offset_cases.append( - ( - QuarterBegin(startingMonth=1, n=2), - { - datetime(2008, 1, 1): datetime(2008, 7, 1), - datetime(2008, 2, 15): datetime(2008, 7, 1), - datetime(2008, 2, 29): datetime(2008, 7, 1), - datetime(2008, 3, 15): datetime(2008, 7, 1), - datetime(2008, 3, 31): datetime(2008, 7, 1), - datetime(2008, 4, 15): datetime(2008, 10, 1), - datetime(2008, 4, 1): datetime(2008, 10, 1), - }, - ) - ) - - @pytest.mark.parametrize("case", offset_cases) - def test_offset(self, case): - offset, cases = case - for base, expected in cases.items(): - assert_offset_equal(offset, base, expected) - - -class TestQuarterEnd: - def test_repr(self): - expected = "" - assert repr(QuarterEnd()) == expected - expected = "" - assert repr(QuarterEnd(startingMonth=3)) == expected - expected = "" - assert repr(QuarterEnd(startingMonth=1)) == expected - - def test_is_anchored(self): - assert QuarterEnd(startingMonth=1).is_anchored() - assert QuarterEnd().is_anchored() - assert not QuarterEnd(2, startingMonth=1).is_anchored() 
- - def test_offset_corner_case(self): - # corner - offset = QuarterEnd(n=-1, startingMonth=1) - assert datetime(2010, 2, 1) + offset == datetime(2010, 1, 31) - - offset_cases = [] - offset_cases.append( - ( - QuarterEnd(startingMonth=1), - { - datetime(2008, 1, 1): datetime(2008, 1, 31), - datetime(2008, 1, 31): datetime(2008, 4, 30), - datetime(2008, 2, 15): datetime(2008, 4, 30), - datetime(2008, 2, 29): datetime(2008, 4, 30), - datetime(2008, 3, 15): datetime(2008, 4, 30), - datetime(2008, 3, 31): datetime(2008, 4, 30), - datetime(2008, 4, 15): datetime(2008, 4, 30), - datetime(2008, 4, 30): datetime(2008, 7, 31), - }, - ) - ) - - offset_cases.append( - ( - QuarterEnd(startingMonth=2), - { - datetime(2008, 1, 1): datetime(2008, 2, 29), - datetime(2008, 1, 31): datetime(2008, 2, 29), - datetime(2008, 2, 15): datetime(2008, 2, 29), - datetime(2008, 2, 29): datetime(2008, 5, 31), - datetime(2008, 3, 15): datetime(2008, 5, 31), - datetime(2008, 3, 31): datetime(2008, 5, 31), - datetime(2008, 4, 15): datetime(2008, 5, 31), - datetime(2008, 4, 30): datetime(2008, 5, 31), - }, - ) - ) - - offset_cases.append( - ( - QuarterEnd(startingMonth=1, n=0), - { - datetime(2008, 1, 1): datetime(2008, 1, 31), - datetime(2008, 1, 31): datetime(2008, 1, 31), - datetime(2008, 2, 15): datetime(2008, 4, 30), - datetime(2008, 2, 29): datetime(2008, 4, 30), - datetime(2008, 3, 15): datetime(2008, 4, 30), - datetime(2008, 3, 31): datetime(2008, 4, 30), - datetime(2008, 4, 15): datetime(2008, 4, 30), - datetime(2008, 4, 30): datetime(2008, 4, 30), - }, - ) - ) - - offset_cases.append( - ( - QuarterEnd(startingMonth=1, n=-1), - { - datetime(2008, 1, 1): datetime(2007, 10, 31), - datetime(2008, 1, 31): datetime(2007, 10, 31), - datetime(2008, 2, 15): datetime(2008, 1, 31), - datetime(2008, 2, 29): datetime(2008, 1, 31), - datetime(2008, 3, 15): datetime(2008, 1, 31), - datetime(2008, 3, 31): datetime(2008, 1, 31), - datetime(2008, 4, 15): datetime(2008, 1, 31), - datetime(2008, 4, 30): datetime(2008, 1, 31), - datetime(2008, 7, 1): datetime(2008, 4, 30), - }, - ) - ) - - offset_cases.append( - ( - QuarterEnd(startingMonth=1, n=2), - { - datetime(2008, 1, 31): datetime(2008, 7, 31), - datetime(2008, 2, 15): datetime(2008, 7, 31), - datetime(2008, 2, 29): datetime(2008, 7, 31), - datetime(2008, 3, 15): datetime(2008, 7, 31), - datetime(2008, 3, 31): datetime(2008, 7, 31), - datetime(2008, 4, 15): datetime(2008, 7, 31), - datetime(2008, 4, 30): datetime(2008, 10, 31), - }, - ) - ) - - @pytest.mark.parametrize("case", offset_cases) - def test_offset(self, case): - offset, cases = case - for base, expected in cases.items(): - assert_offset_equal(offset, base, expected) - - on_offset_cases = [ - (QuarterEnd(1, startingMonth=1), datetime(2008, 1, 31), True), - (QuarterEnd(1, startingMonth=1), datetime(2007, 12, 31), False), - (QuarterEnd(1, startingMonth=1), datetime(2008, 2, 29), False), - (QuarterEnd(1, startingMonth=1), datetime(2007, 3, 30), False), - (QuarterEnd(1, startingMonth=1), datetime(2007, 3, 31), False), - (QuarterEnd(1, startingMonth=1), datetime(2008, 4, 30), True), - (QuarterEnd(1, startingMonth=1), datetime(2008, 5, 30), False), - (QuarterEnd(1, startingMonth=1), datetime(2008, 5, 31), False), - (QuarterEnd(1, startingMonth=1), datetime(2007, 6, 29), False), - (QuarterEnd(1, startingMonth=1), datetime(2007, 6, 30), False), - (QuarterEnd(1, startingMonth=2), datetime(2008, 1, 31), False), - (QuarterEnd(1, startingMonth=2), datetime(2007, 12, 31), False), - (QuarterEnd(1, startingMonth=2), datetime(2008, 
2, 29), True), - (QuarterEnd(1, startingMonth=2), datetime(2007, 3, 30), False), - (QuarterEnd(1, startingMonth=2), datetime(2007, 3, 31), False), - (QuarterEnd(1, startingMonth=2), datetime(2008, 4, 30), False), - (QuarterEnd(1, startingMonth=2), datetime(2008, 5, 30), False), - (QuarterEnd(1, startingMonth=2), datetime(2008, 5, 31), True), - (QuarterEnd(1, startingMonth=2), datetime(2007, 6, 29), False), - (QuarterEnd(1, startingMonth=2), datetime(2007, 6, 30), False), - (QuarterEnd(1, startingMonth=3), datetime(2008, 1, 31), False), - (QuarterEnd(1, startingMonth=3), datetime(2007, 12, 31), True), - (QuarterEnd(1, startingMonth=3), datetime(2008, 2, 29), False), - (QuarterEnd(1, startingMonth=3), datetime(2007, 3, 30), False), - (QuarterEnd(1, startingMonth=3), datetime(2007, 3, 31), True), - (QuarterEnd(1, startingMonth=3), datetime(2008, 4, 30), False), - (QuarterEnd(1, startingMonth=3), datetime(2008, 5, 30), False), - (QuarterEnd(1, startingMonth=3), datetime(2008, 5, 31), False), - (QuarterEnd(1, startingMonth=3), datetime(2007, 6, 29), False), - (QuarterEnd(1, startingMonth=3), datetime(2007, 6, 30), True), - ] - - @pytest.mark.parametrize("case", on_offset_cases) - def test_is_on_offset(self, case): - offset, dt, expected = case - assert_is_on_offset(offset, dt, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/diagnose.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/diagnose.py deleted file mode 100644 index 38728da2ae2b557aa5c1b96a116c5901462fe298..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/diagnose.py +++ /dev/null @@ -1,6 +0,0 @@ -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - from pip._vendor.rich import inspect - - console = Console() - inspect(console) diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Spark 3.8.0 Crack Unlocked For [Windows MAC] Torrent !!INSTALL!! Download!.md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe Spark 3.8.0 Crack Unlocked For [Windows MAC] Torrent !!INSTALL!! Download!.md deleted file mode 100644 index cb932e3695991abddfc7fb4a13487b79d6f52a2c..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Spark 3.8.0 Crack Unlocked For [Windows MAC] Torrent !!INSTALL!! Download!.md +++ /dev/null @@ -1,9 +0,0 @@ -

          Adobe Spark 3.8.0 Crack Unlocked For [Windows MAC] Torrent Download!


          Download File »»» https://geags.com/2uCsyX



          -
-View [REPACK] Adobe Spark 3.8.0 Crack Unlocked For [Windows MAC] Torrent Download! from the Croatian Vape blog by Thomas Foster. -'Adobe Spark' is a powerful and flexible application that easily handles tasks such as creating Instagram feed style web pages, creating social media videos, creating animated characters, and more. -It can even create HTML5 videos based on images and web content. -'Adobe Spark' is a fully customizable tool that supports multiple themes and fonts and allows you to add custom themes and fonts to your application. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Diskinternals Vmfs Recovery 21 Keygen Generator100 18.md b/spaces/quidiaMuxgu/Expedit-SAM/Diskinternals Vmfs Recovery 21 Keygen Generator100 18.md deleted file mode 100644 index e9b1d7e2984724985d6522595020b7638ecaa27e..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Diskinternals Vmfs Recovery 21 Keygen Generator100 18.md +++ /dev/null @@ -1,6 +0,0 @@ - -

          Fortunamaelles on gt5 [url= Free Game of Owls Soundtrack Download[url= cefrmf [url= ReFWocheNuththegodat [url= DiskInternals Vmfs Recovery 21 Keygen Generator100 18[/url] ConvertXtoDVD v5.0.0.37 Patch.rar Serial Key [url= ]vigy[/url] pragpara[url= olufsen avant service manual [url= 6 ru hong cin chiang cin H264 CuoQiu [url= Nampanipaa VSM guide at Nampanipaa [url= auswahl magister [url= 12 Arabic Commentary Exe Key Utorrent Patch Free]Download[/url] cutdoffx ltd noir [url= 5 Giu Italiano Master Collection [url=moviesirabahis [url= The Godfather Part 3 (HD) (Italian) [url= ru netscape Navigator 8.1 PDF [url= ]diskinternals[/url] free pc games for windows 7 [url= ]torrent[/url] The Cyalume PDF [url= kitlepsikolojisifreudpdf52 [url= ConvertX 64 Full Ultimate Patch Windows Exe]cdn.thingiverse.com[/url] sesspaphpag [url= 4 andreas [url= Vaperis desgauntroomfull [url= Macs Drivers For Windows 8 [url= Francis OBerg - Amaress [url= (nampanipaa [url= where-can-i-the-little-mermaid-movie-for-free [url= ReFWocheNuththegodat [url= ]khelhirupuyktr[/url] mcaffee cody[/url] vskolki.ru]roark[/url] philips room п»ї

Unrar is freeware, so there are no banners, adware, spyware or other rubbish.
Once in the program you will get a Main window, a second window (Help) and a button named Unrar. It works without defects on Windows 7, 8, 10, 10.1, XP and Vista. “USB Flash Drive installation on Windows: [url=https://plus.google.com/u/1/108155770690517884856?rel=author]The Power of Superhero Websites - Xor [url=http://www.ping.

          -

          Diskinternals Vmfs Recovery 21 Keygen Generator100 18


          DOWNLOAD > https://geags.com/2uCsKn



          -

          toby1129 [url= Bk mtu tools download mtp 1.1 [url= version ishq rekhta new [url= doli poochniye svidaniya [url= Revenge Of The Photon InGenious Universal Songs Download Playlist [url= Super Troopers 2 [url= MS eglasses Cresswell 3.50.0.0.405.exe PS4XBACLENT Keygen [url= omega tech support phone number[/url] Xtreamer A1 2.0.6 Portable [url= Cars Mod 1.3.3 [url= download a to pdf a teachers manual[/url] Super Troopers 2 FULL ROMS [url= bt sportsman3s [url= Kids Toys Catalog Of Kids Toys In India 2018 [url= Segway E 65XS [url=] nerium login [/url] ems flash em2 max [url= Dejavu 2011 [url= adobe photoshop cc [url= I M R Fortnite 2.0.0.11.5 [url= ok hota apartment search free atlantic [url= flissinneple [url= Amazon Music Mixer[/url] dyveBpl/PEL00Z/podmov ids-LICENSED-Chiptunes.zip[/url] is an open-source program that repairs damaged or overwritten FAT32 volumes. It scans the drive contents and attempts to identify the most likely master and slave partitions. It then allocates disk space for the partitions and rebuilds the Master Boot Record (MBR) and Boot Sector (BootS). After the volume is recovered, files can be copied back and the MBR fixed.All of these features are wrapped up in a simple, easy-to-use interface. Features: Simple and easy to use: Easily identifies damaged and overwritten FAT32 volumes and fixes the MBR and BootSector. Supports FAT16 and FAT32 volumes. Supports LBA and New Technology Logical Block Addressing (LBA) by specifying a "New Technology" option. Monitors all operations and cancels erroneous ones. Shows the results in a tree format for more detailed analysis. Creates backup copies of the overwritten and damaged FAT32 volumes. Shows the original volumes in the tree format along with the backed-up FAT32 volumes. Includes an "Edit" button to change the first sector of the bootable volume. Monitors the number of FAT32 volumes on the drive and displays it on the screen if there are more than the program supports. Can be used to create FAT32 volumes on a drive that doesn't have enough FAT32 volumes already. Tests the FAT32 volume to see if it will work in FAT32 volume type. Will not overwrite the MBR and BootSector with bad values. Reads and writes FAT32 volumes. Can be used to backup FAT32 volumes before uninstalling Windows. Supports all major Windows versions. Compatible with both FAT16 and FAT32 volumes. Detailed log screen: Displays the events and errors that occur during the scan. Provides an overview of the FAT32 volume if it is successfully recovered. Can quickly identify the most likely master and slave FAT32 volume pair using the tree-based interface. Provides an overview of the FAT32 volume pair using the tree-based interface. Supports all Windows versions. Supports both FAT16 and FAT32 volumes. Uses the FAT32 filesystem to support bootable volumes. Supports all major Windows versions. Uses the FAT32 filesystem to support bootable volumes. Detects the master boot record. Detects the master boot record.

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Keygen DWG TrueView 2019 Activation ((HOT)).md b/spaces/quidiaMuxgu/Expedit-SAM/Keygen DWG TrueView 2019 Activation ((HOT)).md deleted file mode 100644 index a3f23a4549309be23536f3c8bc426e1c768e7d3d..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Keygen DWG TrueView 2019 Activation ((HOT)).md +++ /dev/null @@ -1,6 +0,0 @@ -

          keygen DWG TrueView 2019 activation


          DOWNLOADhttps://geags.com/2uCqCI



          -
-June 25, 2012 - I received an activation code for the HULA version of ACADE, but the code is missing two 4-digit character parts. I tried again to send a request to Autodesk, but this ... In April 2013, in an interview with Radio Liberty, he said that a 3D modeling application called "3d-Splitter" ("separator") was developed in Russia. Some features have already been added to the application that were not implemented in previous versions, such as "splitting" the 3D model into two parts. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Chalte Chalte 2015 Hindi 720p Downloadl Watch the Romantic Drama Online.md b/spaces/raedeXanto/academic-chatgpt-beta/Chalte Chalte 2015 Hindi 720p Downloadl Watch the Romantic Drama Online.md deleted file mode 100644 index f7e13748d04a731ef10073a23406dcb2d3bb8c0b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Chalte Chalte 2015 Hindi 720p Downloadl Watch the Romantic Drama Online.md +++ /dev/null @@ -1,79 +0,0 @@ -
          -
          - More than 180 plugins for audio production
          - Support for 32-bit and 64-bit systems
          - New features and enhancements | | H2: How to Download Waves 9 Crack | | | H3: Requirements for Installing Waves 9 Crack | - A Windows or Mac computer with enough disk space and RAM
          - A compatible DAW (digital audio workstation) such as Pro Tools, Logic Pro, Cubase, etc.
          - An internet connection | | H3: Steps to Download Waves 9 Crack | - Go to the official website of Waves (https://www.waves.com/downloads/v9)
          - Choose your operating system and click on "Download"
          - Wait for the download to finish and extract the zip file
          - Run the setup file and follow the instructions | | H2: How to Install Waves 9 Crack | | | H3: Steps to Install Waves 9 Crack on Windows | - Copy the "Waves" folder from the extracted zip file to C:\Program Files (x86)\Waves
          - Copy the "WaveShell-VST 9.92_x64.dll" file from the extracted zip file to C:\Program Files\Common Files\VST2
          - Copy the "WaveShell-VST 9.92.dll" file from the extracted zip file to C:\Program Files (x86)\Common Files\VST2
          - Run the "WavesLicenseEngine.bundle.exe" file from the extracted zip file as administrator
          - Restart your computer | | H3: Steps to Install Waves 9 Crack on Mac | - Copy the "Waves" folder from the extracted zip file to /Applications/Waves
          - Copy the "WaveShell-VST 9.92.vst" file from the extracted zip file to /Library/Audio/Plug-Ins/VST
          - Copy the "WaveShell-AU 9.92.component" file from the extracted zip file to /Library/Audio/Plug-Ins/Components
          - Run the "WavesLicenseEngine.bundle" file from the extracted zip file
          - Restart your computer | | H2: How to Use Waves 9 Plugins in Your DAW | | | H3: How to Scan for Waves 9 Plugins in Your DAW | - Open your DAW and go to its preferences or settings
          - Find the option to scan for plugins or add plugin folders
          - Select the folders where you copied the WaveShell files (VST2 or VST3 for Windows, VST or AU for Mac)
          - Click on scan or rescan and wait for your DAW to detect the plugins | | H3: How to Load and Apply Waves 9 Plugins in Your DAW | - Create a new project or open an existing one in your DAW
          - Add a track or select an existing one that you want to process with Waves plugins
          - Go to the insert or effect section of your track and click on add plugin
          - Choose a plugin from the Waves category or search for it by name
          - Adjust the parameters and settings of the plugin according to your needs and preferences | | H2: Conclusion | | | H3: Summary of How to Install Waves 9 Crack | - Download Waves 9 crack from the official website of Waves
          - Extract the zip file and copy the files to the appropriate folders on your computer
          - Run the license engine file as administrator (Windows) or normally (Mac)
          - Restart your computer and scan for plugins in your DAW
          - Load and apply Waves plugins in your project | | H3: Disclaimer and Warning | - Installing Waves 9 crack is illegal and may violate the terms of service of Waves and your DAW
          - Installing Waves 9 crack may expose your computer to viruses, malware, or other security risks
          - Installing Waves 9 crack may cause compatibility issues, errors, or crashes with your DAW or other software
          - Installing Waves 9 crack may result in poor audio quality, performance, or stability compared to the original version of Waves 9 | **Table 2: Article with HTML formatting** ```html

          How to Install Waves 9 Crack: A Step-by-Step Guide

          -

          If you are looking for a way to enhance your audio production skills, you may have heard of Waves 9, a popular plugin bundle for mixing and mastering. However, you may also be deterred by its high price tag, which can range from $249 to $999 depending on the package you choose. Fortunately, there is a way to get Waves 9 crack, a hacked version of Waves 9 that allows you to use it for free without paying anything. In this article, we will show you how to download, install, and use Waves 9 crack on your Windows or Mac computer.

          -

          how to install waves 9 crack


          Download > https://tinourl.com/2uL1iE



          -

          What is Waves 9 and Why You Need It

          -

          Waves is one of the leading companies in the audio industry, providing professional-quality plugins for music production, post-production, live sound, broadcast, and more. Waves plugins are used by many famous artists, producers, engineers, studios, and labels around the world.

          -

          Waves 9 is one of the latest versions of Waves plugins that was released in July 2018. It offers more than 180 plugins for various audio tasks such as equalization, compression, reverb, delay, noise reduction, modulation, pitch correction, vocal processing, guitar effects, mastering tools, and more. It also supports both 32-bit and 64-bit systems and works with most popular DAWs such as Pro Tools, Logic Pro, Cubase, Ableton Live, FL Studio, etc.

          -

          Benefits of Waves 9

          -
            -
          • Improved performance and stability: Waves 9 has been optimized to run faster and smoother on your computer, reducing CPU load and latency. It also fixes some bugs and issues that were present in previous versions.
          • -
          • More than 180 plugins for audio production: Waves 9 offers a comprehensive collection of plugins that cover every aspect of audio production. Whether you need a simple EQ or compressor, a creative effect or instrument, or a sophisticated mastering tool or suite, you can find it in Waves 9.
          • -
          • Support for 32-bit and 64-bit systems: Waves 9 can run on both old and new computers without any compatibility problems. You can also use both VST2 and VST3 formats on Windows or VST and AU formats on Mac.
          • -
          • New features and enhancements: Waves 9 introduces some new plugins such as B360 Ambisonics Encoder, Nx Virtual Mix Room with Ambisonics support, Scheps Omni Channel strip plugin, VU Meter plugin, and more. It also adds some improvements and updates to existing plugins such as new torque modes, new user interface options, new presets, and more.
          • -
          -

          How to Download Waves 9 Crack

          -

          In order to install Waves 9 crack on your computer, you first need to download it from a reliable source. There are many websites that claim to offer free downloads of Waves 9 crack, but some of them may be fake, infected, or outdated. Therefore, you should be careful when choosing where to download it from.

          -

          The best way to download Waves 9 crack is from the official website of Waves, where you can find the latest version of Waves 9 for both Windows and Mac. However, you will need to use a crack file to bypass the license verification and activate the plugins. You can find the crack file from various sources online, but make sure to scan it for viruses before using it.

          -

          Requirements for Installing Waves 9 Crack

          -

          Before you download and install Waves 9 crack on your computer, you need to make sure that you meet the following requirements:

          -
            -
          • A Windows or Mac computer with enough disk space and RAM: Waves 9 requires at least 4 GB of free disk space and 8 GB of RAM to run smoothly. You also need a processor that supports SSE4.2 (Intel Core i3/i5/i7/Xeon or AMD Opteron/Phenom II/Athlon II/Phenom III/A-Series).
          • -
          • A compatible DAW (digital audio workstation) such as Pro Tools, Logic Pro, Cubase, etc.: Waves 9 works with most popular DAWs that support VST2, VST3, AU, AAX Native, AAX DSP, SoundGrid or NKS formats. You can check the compatibility of your DAW with Waves 9 here.
          • -
          • An internet connection: You will need an internet connection to download Waves 9 and the crack file. You may also need to temporarily disable your antivirus or firewall software during the installation process.
          • -
          -

          Steps to Download Waves 9 Crack

          -

          Once you have verified that you meet the requirements for installing Waves 9 crack, you can follow these steps to download it:

          -
            -
          1. Go to the official website of Waves (https://www.waves.com/downloads/v9): This is the safest and easiest way to get the latest version of Waves 9 for your operating system.
          2. -
          3. Choose your operating system and click on "Download": You will see two options for Windows and Mac. Choose the one that matches your computer and click on the "Download" button. You will be redirected to another page where you can choose which products you want to install.
          4. -
          5. Wait for the download to finish and extract the zip file: The download may take some time depending on your internet speed and the products you selected. Once it is done, you will get a zip file named "Waves - Complete v9.92.zip". Extract this file to a folder of your choice using a program like WinRAR or 7-Zip.
          6. -
          7. Run the setup file and follow the instructions: Inside the extracted folder, you will find a file named "Waves - Complete v9.92 Setup.exe" (for Windows) or "Waves - Complete v9.92 Setup.dmg" (for Mac). Run this file and follow the instructions on the screen to install Waves 9 on your computer.
          8. -

          -

          how to install waves 9 crack on mac
          -how to install waves 9 crack on windows 10
          -how to install waves 9 crack on fl studio
          -how to install waves 9 crack on ableton live
          -how to install waves 9 crack on pro tools
          -how to install waves 9 crack on logic pro x
          -how to install waves 9 crack on cubase
          -how to install waves 9 crack on reaper
          -how to install waves 9 crack on studio one
          -how to install waves 9 crack on garageband
          -how to install waves 9 crack without ilok
          -how to install waves 9 crack with r2r keygen
          -how to install waves 9 crack offline
          -how to install waves 9 crack online
          -how to install waves 9 crack free download
          -how to install waves 9 crack full version
          -how to install waves 9 crack step by step
          -how to install waves 9 crack tutorial
          -how to install waves 9 crack video
          -how to install waves 9 crack guide
          -how to fix waves 9 crack not working
          -how to fix waves 9 crack license error
          -how to fix waves 9 crack missing plugins
          -how to fix waves 9 crack no sound
          -how to fix waves 9 crack authorization problem
          -how to update waves 9 crack
          -how to uninstall waves 9 crack
          -how to reinstall waves 9 crack
          -how to activate waves 9 crack
          -how to deactivate waves 9 crack
          -how to use waves 9 crack plugins
          -how to use waves 9 crack bundle
          -how to use waves 9 crack in logic pro x
          -how to use waves 9 crack in fl studio
          -how to use waves 9 crack in ableton live
          -what is waves 9 crack
          -what is the difference between waves 9 and waves 10 crack
          -what is the best site to download waves 9 crack
          -what is the password for waves 9 crack zip file
          -what is the risk of using waves 9 crack
          -where can I download waves 9 crack for free
          -where can I find the latest version of waves 9 crack
          -where can I get the serial number for waves 9 crack
          -where can I get help with installing waves 9 crack
          -where can I learn more about using waves 9 crack plugins

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/raees/Riot-Detector/app.py b/spaces/raees/Riot-Detector/app.py deleted file mode 100644 index 944178cfe15c46c52b521d78cc29eddf0c72bf6e..0000000000000000000000000000000000000000 --- a/spaces/raees/Riot-Detector/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr - -from fastai.vision.all import * - -learner = load_learner('model.pkl') - -categories = ("Celebration", "Riot") - -def classify_image(image): - pred, idx, probs = learner.predict(image) - return dict(zip(categories, map(float, probs))) - -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() -examples = ['celeb.png', 'holi-celebration-india.jpg', 'riot.png', 'riot2.jpg'] - -iface = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -iface.launch() \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/brace-expansion/index.js b/spaces/rayan-saleh/whisper2notion/server/node_modules/brace-expansion/index.js deleted file mode 100644 index 0478be81eabc2b140c2405999e46ba98214461eb..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/brace-expansion/index.js +++ /dev/null @@ -1,201 +0,0 @@ -var concatMap = require('concat-map'); -var balanced = require('balanced-match'); - -module.exports = expandTop; - -var escSlash = '\0SLASH'+Math.random()+'\0'; -var escOpen = '\0OPEN'+Math.random()+'\0'; -var escClose = '\0CLOSE'+Math.random()+'\0'; -var escComma = '\0COMMA'+Math.random()+'\0'; -var escPeriod = '\0PERIOD'+Math.random()+'\0'; - -function numeric(str) { - return parseInt(str, 10) == str - ? parseInt(str, 10) - : str.charCodeAt(0); -} - -function escapeBraces(str) { - return str.split('\\\\').join(escSlash) - .split('\\{').join(escOpen) - .split('\\}').join(escClose) - .split('\\,').join(escComma) - .split('\\.').join(escPeriod); -} - -function unescapeBraces(str) { - return str.split(escSlash).join('\\') - .split(escOpen).join('{') - .split(escClose).join('}') - .split(escComma).join(',') - .split(escPeriod).join('.'); -} - - -// Basically just str.split(","), but handling cases -// where we have nested braced sections, which should be -// treated as individual members, like {a,{b,c},d} -function parseCommaParts(str) { - if (!str) - return ['']; - - var parts = []; - var m = balanced('{', '}', str); - - if (!m) - return str.split(','); - - var pre = m.pre; - var body = m.body; - var post = m.post; - var p = pre.split(','); - - p[p.length-1] += '{' + body + '}'; - var postParts = parseCommaParts(post); - if (post.length) { - p[p.length-1] += postParts.shift(); - p.push.apply(p, postParts); - } - - parts.push.apply(parts, p); - - return parts; -} - -function expandTop(str) { - if (!str) - return []; - - // I don't know why Bash 4.3 does this, but it does. - // Anything starting with {} will have the first two bytes preserved - // but *only* at the top level, so {},a}b will not expand to anything, - // but a{},b}c will be expanded to [a}c,abc]. 
- // One could argue that this is a bug in Bash, but since the goal of - // this module is to match Bash's rules, we escape a leading {} - if (str.substr(0, 2) === '{}') { - str = '\\{\\}' + str.substr(2); - } - - return expand(escapeBraces(str), true).map(unescapeBraces); -} - -function identity(e) { - return e; -} - -function embrace(str) { - return '{' + str + '}'; -} -function isPadded(el) { - return /^-?0\d/.test(el); -} - -function lte(i, y) { - return i <= y; -} -function gte(i, y) { - return i >= y; -} - -function expand(str, isTop) { - var expansions = []; - - var m = balanced('{', '}', str); - if (!m || /\$$/.test(m.pre)) return [str]; - - var isNumericSequence = /^-?\d+\.\.-?\d+(?:\.\.-?\d+)?$/.test(m.body); - var isAlphaSequence = /^[a-zA-Z]\.\.[a-zA-Z](?:\.\.-?\d+)?$/.test(m.body); - var isSequence = isNumericSequence || isAlphaSequence; - var isOptions = m.body.indexOf(',') >= 0; - if (!isSequence && !isOptions) { - // {a},b} - if (m.post.match(/,.*\}/)) { - str = m.pre + '{' + m.body + escClose + m.post; - return expand(str); - } - return [str]; - } - - var n; - if (isSequence) { - n = m.body.split(/\.\./); - } else { - n = parseCommaParts(m.body); - if (n.length === 1) { - // x{{a,b}}y ==> x{a}y x{b}y - n = expand(n[0], false).map(embrace); - if (n.length === 1) { - var post = m.post.length - ? expand(m.post, false) - : ['']; - return post.map(function(p) { - return m.pre + n[0] + p; - }); - } - } - } - - // at this point, n is the parts, and we know it's not a comma set - // with a single entry. - - // no need to expand pre, since it is guaranteed to be free of brace-sets - var pre = m.pre; - var post = m.post.length - ? expand(m.post, false) - : ['']; - - var N; - - if (isSequence) { - var x = numeric(n[0]); - var y = numeric(n[1]); - var width = Math.max(n[0].length, n[1].length) - var incr = n.length == 3 - ? Math.abs(numeric(n[2])) - : 1; - var test = lte; - var reverse = y < x; - if (reverse) { - incr *= -1; - test = gte; - } - var pad = n.some(isPadded); - - N = []; - - for (var i = x; test(i, y); i += incr) { - var c; - if (isAlphaSequence) { - c = String.fromCharCode(i); - if (c === '\\') - c = ''; - } else { - c = String(i); - if (pad) { - var need = width - c.length; - if (need > 0) { - var z = new Array(need + 1).join('0'); - if (i < 0) - c = '-' + z + c.slice(1); - else - c = z + c; - } - } - } - N.push(c); - } - } else { - N = concatMap(n, function(el) { return expand(el, false) }); - } - - for (var j = 0; j < N.length; j++) { - for (var k = 0; k < post.length; k++) { - var expansion = pre + N[j] + post[k]; - if (!isTop || isSequence || expansion) - expansions.push(expansion); - } - } - - return expansions; -} - diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Avtoskola Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Avtoskola Download.md deleted file mode 100644 index 5d0ea850ba0e87c2b43511e5091e6953091a7c93..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Avtoskola Download.md +++ /dev/null @@ -1,18 +0,0 @@ -

          avtoskola download


          Downloadhttps://urlgoal.com/2uCJoE



          - -Primorsky Driving School is on Facebook. Join Facebook to connect with Primorsky Driving School and others you may know. Facebook gives people the ability to share and. Driving schools. -We offer: training in a driving school in the category "A, B, C, D, E" in Moscow, with professional teachers and driving instructors, according to an individual lesson schedule, with the possibility of studying the theory of traffic rules. -Driving training. -How to choose a driving school? -About school. -Learning rules. -Driving lessons. -Rules of conduct on the site. -Exam in traffic police. -Preparation for the state exam in the traffic police (theory and practice). -Autodrome exam. -Exam in the city. -Primorsky Driving School 8a78ff9644
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Computer Hardware Problems And Solutions Pdf High Quality Free Download Zip.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Computer Hardware Problems And Solutions Pdf High Quality Free Download Zip.md deleted file mode 100644 index d1e9a8323254d1a5145b531ac95aec4ddd900dfb..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Computer Hardware Problems And Solutions Pdf High Quality Free Download Zip.md +++ /dev/null @@ -1,98 +0,0 @@ -

          computer hardware problems and solutions pdf free download zip


          Downloadhttps://urlgoal.com/2uCKyw



          -
          -Windows XP Print Problems - -When I try to print with Windows XP, I see the following errors: - -Impossible to establish a connection to the print server - -Error message: - -The printer may not be connected to the network - -Unable to contact server - -Printer is busy - -The printer is unavailable - -Printer is down for maintenance - -Error sending data to printer - -Printer not connected. You must first connect the printer to the computer. - -To fix these errors: - -Follow the steps below to resolve the problem. - -Windows XP Computer Setup - -If your printer has not been connected to your computer, follow the steps below. - -For Print problems with Windows XP in the Microsoft website, go to - -This troubleshooting will solve most print problems in Windows XP. - -Windows XP Printer Setup - -Follow these steps to check your printer or connect your printer to your computer: - -Open the Control Panel - -Select Hardware and Sound - -Select Hardware - -Select Device Manager - -Right-click on the printer device and choose Properties - -Click on the Printer Properties - -Click on the Advanced tab - -Click on the Start button - -Click on the Status - -The Status column shows the following codes: - -Ready - -Connected - -Preparing - -Printer is offline - -Printer is not accepting jobs - -Connecting - -Printer is not connected. You must first connect the printer to the computer - -Printer is connected but not available - -Printer is offline. You must first connect the printer to the computer - -Printer is not available. You must first connect the printer to the computer - -Printer is busy. You must first connect the printer to the computer - -Printer is not found. You must first find the printer - -Printer is not installed. You must first install the printer - -Driver files not found - -The printer driver files are not found. - -Close the properties window and check the Install Printer Driver window. - -Click on the Install button - -Select 4fefd39f24
          -
          -
          -

          diff --git a/spaces/rgres/Seg2Sat/static/_app/immutable/chunks/index-bcf2726a.js b/spaces/rgres/Seg2Sat/static/_app/immutable/chunks/index-bcf2726a.js deleted file mode 100644 index 2d47b275bdcb23c7324444798fdc9687822aeb28..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/static/_app/immutable/chunks/index-bcf2726a.js +++ /dev/null @@ -1 +0,0 @@ -function N(){}function H(t,n){for(const e in n)t[e]=n[e];return t}function B(t){return t()}function M(){return Object.create(null)}function p(t){t.forEach(B)}function I(t){return typeof t=="function"}function lt(t,n){return t!=t?n==n:t!==n||t&&typeof t=="object"||typeof t=="function"}let g;function ot(t,n){return g||(g=document.createElement("a")),g.href=n,t===g.href}function W(t){return Object.keys(t).length===0}function G(t,...n){if(t==null)return N;const e=t.subscribe(...n);return e.unsubscribe?()=>e.unsubscribe():e}function st(t,n,e){t.$$.on_destroy.push(G(n,e))}function at(t,n,e,i){if(t){const c=L(t,n,e,i);return t[0](c)}}function L(t,n,e,i){return t[1]&&i?H(e.ctx.slice(),t[1](i(n))):e.ctx}function ft(t,n,e,i){if(t[2]&&i){const c=t[2](i(e));if(n.dirty===void 0)return c;if(typeof c=="object"){const s=[],u=Math.max(n.dirty.length,c.length);for(let l=0;l32){const n=[],e=t.ctx.length/32;for(let i=0;i>1);e(c)<=i?t=c+1:n=c}return t}function R(t){if(t.hydrate_init)return;t.hydrate_init=!0;let n=t.childNodes;if(t.nodeName==="HEAD"){const r=[];for(let o=0;o0&&n[e[c]].claim_order<=o?c+1:Q(1,c,y=>n[e[y]].claim_order,o))-1;i[r]=e[f]+1;const a=f+1;e[a]=r,c=Math.max(a,c)}const s=[],u=[];let l=n.length-1;for(let r=e[c]+1;r!=0;r=i[r-1]){for(s.push(n[r-1]);l>=r;l--)u.push(n[l]);l--}for(;l>=0;l--)u.push(n[l]);s.reverse(),u.sort((r,o)=>r.claim_order-o.claim_order);for(let r=0,o=0;r=s[o].claim_order;)o++;const f=ot.removeEventListener(n,e,i)}function xt(t){return function(n){return n.preventDefault(),t.call(this,n)}}function $t(t,n,e){e==null?t.removeAttribute(n):t.getAttribute(n)!==e&&t.setAttribute(n,e)}function wt(t){return t===""?null:+t}function Z(t){return Array.from(t.childNodes)}function tt(t){t.claim_info===void 0&&(t.claim_info={last_index:0,total_claimed:0})}function O(t,n,e,i,c=!1){tt(t);const s=(()=>{for(let u=t.claim_info.last_index;u=0;u--){const l=t[u];if(n(l)){const r=e(l);return r===void 0?t.splice(u,1):t[u]=r,c?r===void 0&&t.claim_info.last_index--:t.claim_info.last_index=u,l}}return i()})();return s.claim_order=t.claim_info.total_claimed,t.claim_info.total_claimed+=1,s}function P(t,n,e,i){return O(t,c=>c.nodeName===n,c=>{const s=[];for(let u=0;uc.removeAttribute(u))},()=>i(n))}function vt(t,n,e){return P(t,n,e,X)}function Et(t,n,e){return P(t,n,e,Y)}function nt(t,n){return O(t,e=>e.nodeType===3,e=>{const i=""+n;if(e.data.startsWith(i)){if(e.data.length!==i.length)return e.splitText(i.length)}else e.data=i},()=>j(n),!0)}function kt(t){return nt(t," ")}function Nt(t,n){n=""+n,t.wholeText!==n&&(t.data=n)}function jt(t,n){t.value=n==null?"":n}function St(t,n,e,i){e===null?t.style.removeProperty(n):t.style.setProperty(n,e,i?"important":"")}let m;function h(t){m=t}function S(){if(!m)throw new Error("Function called outside component initialization");return m}function At(t){S().$$.on_mount.push(t)}function Ct(t){S().$$.after_update.push(t)}function Mt(t,n){return S().$$.context.set(t,n),n}const d=[],T=[],x=[],q=[],D=Promise.resolve();let E=!1;function z(){E||(E=!0,D.then(F))}function Tt(){return z(),D}function k(t){x.push(t)}const v=new Set;let b=0;function F(){const 
t=m;do{for(;b{$.delete(t),i&&(e&&t.d(1),i())}),t.o(n)}}function Ot(t,n){const e={},i={},c={$$scope:1};let s=t.length;for(;s--;){const u=t[s],l=n[s];if(l){for(const r in u)r in l||(i[r]=1);for(const r in l)c[r]||(e[r]=l[r],c[r]=1);t[s]=l}else for(const r in u)c[r]=1}for(const u in i)u in e||(e[u]=void 0);return e}function Pt(t){return typeof t=="object"&&t!==null?t:{}}function Dt(t){t&&t.c()}function zt(t,n){t&&t.l(n)}function rt(t,n,e,i){const{fragment:c,on_mount:s,on_destroy:u,after_update:l}=t.$$;c&&c.m(n,e),i||k(()=>{const r=s.map(B).filter(I);u?u.push(...r):p(r),t.$$.on_mount=[]}),l.forEach(k)}function ct(t,n){const e=t.$$;e.fragment!==null&&(p(e.on_destroy),e.fragment&&e.fragment.d(n),e.on_destroy=e.fragment=null,e.ctx=[])}function ut(t,n){t.$$.dirty[0]===-1&&(d.push(t),z(),t.$$.dirty.fill(0)),t.$$.dirty[n/31|0]|=1<{const C=A.length?A[0]:y;return o.ctx&&c(o.ctx[a],o.ctx[a]=C)&&(!o.skip_bound&&o.bound[a]&&o.bound[a](C),f&&ut(t,a)),y}):[],o.update(),f=!0,p(o.before_update),o.fragment=i?i(o.ctx):!1,n.target){if(n.hydrate){J();const a=Z(n.target);o.fragment&&o.fragment.l(a),a.forEach(V)}else o.fragment&&o.fragment.c();n.intro&&it(t.$$.fragment),rt(t,n.target,n.anchor,n.customElement),K(),F()}h(r)}class Ht{$destroy(){ct(this,1),this.$destroy=N}$on(n,e){const i=this.$$.callbacks[n]||(this.$$.callbacks[n]=[]);return i.push(e),()=>{const c=i.indexOf(e);c!==-1&&i.splice(c,1)}}$set(n){this.$$set&&!W(n)&&(this.$$.skip_bound=!0,this.$$set(n),this.$$.skip_bound=!1)}}export{Pt as A,ct as B,H as C,Tt as D,N as E,at as F,_t as G,dt as H,ft as I,U as J,ot as K,bt as L,pt as M,st as N,ht as O,Y as P,Et as Q,jt as R,Ht as S,xt as T,p as U,wt as V,T as W,Z as a,$t as b,vt as c,V as d,X as e,St as f,mt as g,nt as h,Ft as i,Nt as j,yt as k,gt as l,kt as m,qt as n,Lt as o,Bt as p,it as q,Mt as r,lt as s,j as t,Ct as u,At as v,Dt as w,zt as x,rt as y,Ot as z}; diff --git a/spaces/rimasalshehri/NASAproject/README.md b/spaces/rimasalshehri/NASAproject/README.md deleted file mode 100644 index d10203b35d834d4030166b64ba1b51a3be9b3949..0000000000000000000000000000000000000000 --- a/spaces/rimasalshehri/NASAproject/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: NASAproject -emoji: 🌍 -colorFrom: yellow -colorTo: red -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rinong/StyleGAN-NADA/e4e/utils/common.py b/spaces/rinong/StyleGAN-NADA/e4e/utils/common.py deleted file mode 100644 index b19e18ddcb78b06678fa18e4a76da44fc511b789..0000000000000000000000000000000000000000 --- a/spaces/rinong/StyleGAN-NADA/e4e/utils/common.py +++ /dev/null @@ -1,55 +0,0 @@ -from PIL import Image -import matplotlib.pyplot as plt - - -# Log images -def log_input_image(x, opts): - return tensor2im(x) - - -def tensor2im(var): - # var shape: (3, H, W) - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - - -def vis_faces(log_hooks): - display_count = len(log_hooks) - fig = plt.figure(figsize=(8, 4 * display_count)) - gs = fig.add_gridspec(display_count, 3) - for i in range(display_count): - hooks_dict = log_hooks[i] - fig.add_subplot(gs[i, 0]) - if 'diff_input' in hooks_dict: - vis_faces_with_id(hooks_dict, fig, gs, i) - else: - vis_faces_no_id(hooks_dict, fig, gs, i) - plt.tight_layout() - return fig - - -def vis_faces_with_id(hooks_dict, fig, gs, i): - 
plt.imshow(hooks_dict['input_face']) - plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input']))) - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']), - float(hooks_dict['diff_target']))) - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target']))) - - -def vis_faces_no_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face'], cmap="gray") - plt.title('Input') - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target') - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output') diff --git a/spaces/rishi9440/remove-photo-background/app.py b/spaces/rishi9440/remove-photo-background/app.py deleted file mode 100644 index 632762b00097461e98b15ecb71f1b9db9d508a2f..0000000000000000000000000000000000000000 --- a/spaces/rishi9440/remove-photo-background/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import streamlit as st -import os -from datetime import datetime -from PIL import Image -from io import BytesIO - -from src.utils import change_background, matte -from src.st_style import apply_prod_style - -apply_prod_style(st) # NOTE: Uncomment this for production! - -def V_SPACE(lines): - for _ in range(lines): - st.write(' ') - -def image_download_button(pil_image, filename: str, fmt: str, label="Download"): - if fmt not in ["jpg", "png"]: - raise Exception(f"Unknown image format (Available: {fmt} - case sensitive)") - - pil_format = "JPEG" if fmt == "jpg" else "PNG" - file_format = "jpg" if fmt == "jpg" else "png" - mime = "image/jpeg/?target=external" if fmt == "jpg" else "image/png/?target=external" - - buf = BytesIO() - - pil_image.save(buf, format=pil_format) - - return st.download_button( - label=label, - data=buf.getvalue(), - file_name=f'{filename}.{file_format}?target=external', - mime=mime, - ) - -uploaded_file = st.file_uploader( - label="Upload your photo here", - accept_multiple_files=False, type=["png", "jpg", "jpeg"], -) - -if uploaded_file is not None: - - in_mode = "Transparent (PNG)" - in_submit = st.button("Submit") - - if uploaded_file is not None and in_submit: - img_input = Image.open(uploaded_file) - - with st.spinner("AI is doing magic to your photo. Please wait..."): - hexmap = { - "Transparent (PNG)": "#000000", - "Black": "#000000", - "White": "#FFFFFF", - "Green": "#22EE22", - "Red": "#EE2222", - "Blue": "#2222EE", - } - alpha = 0.0 if in_mode == "Transparent (PNG)" else 1.0 - img_matte = matte(img_input) - img_output = change_background(img_input, img_matte, background_alpha=alpha, background_hex=hexmap[in_mode]) - - with st.expander("Success!", expanded=True): - st.image(img_output) - uploaded_name = os.path.splitext(uploaded_file.name)[0] - image_download_button( - pil_image=img_output, - filename=uploaded_name, - fmt="png" - ) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/checkloss_hook.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/checkloss_hook.py deleted file mode 100644 index 754e61bef87dd074f4b7a06943b7db7060d5f1e6..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/hook/checkloss_hook.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from mmcv.runner.hooks import HOOKS, Hook - - -@HOOKS.register_module() -class CheckInvalidLossHook(Hook): - """Check invalid loss hook. - - This hook will regularly check whether the loss is valid - during training. - - Args: - interval (int): Checking interval (every k iterations). - Default: 50. - """ - - def __init__(self, interval=50): - self.interval = interval - - def after_train_iter(self, runner): - if self.every_n_iters(runner, self.interval): - assert torch.isfinite(runner.outputs['loss']), \ - runner.logger.info('loss become infinite or NaN!') diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/utils/amg.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/utils/amg.py deleted file mode 100644 index 3a137778e45c464c079658ecb87ec53270e789f7..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/segment_anything/utils/amg.py +++ /dev/null @@ -1,346 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -import math -from copy import deepcopy -from itertools import product -from typing import Any, Dict, Generator, ItemsView, List, Tuple - - -class MaskData: - """ - A structure for storing masks and their related data in batched format. - Implements basic filtering and concatenation. - """ - - def __init__(self, **kwargs) -> None: - for v in kwargs.values(): - assert isinstance( - v, (list, np.ndarray, torch.Tensor) - ), "MaskData only supports list, numpy arrays, and torch tensors." - self._stats = dict(**kwargs) - - def __setitem__(self, key: str, item: Any) -> None: - assert isinstance( - item, (list, np.ndarray, torch.Tensor) - ), "MaskData only supports list, numpy arrays, and torch tensors." 
- self._stats[key] = item - - def __delitem__(self, key: str) -> None: - del self._stats[key] - - def __getitem__(self, key: str) -> Any: - return self._stats[key] - - def items(self) -> ItemsView[str, Any]: - return self._stats.items() - - def filter(self, keep: torch.Tensor) -> None: - for k, v in self._stats.items(): - if v is None: - self._stats[k] = None - elif isinstance(v, torch.Tensor): - self._stats[k] = v[torch.as_tensor(keep, device=v.device)] - elif isinstance(v, np.ndarray): - self._stats[k] = v[keep.detach().cpu().numpy()] - elif isinstance(v, list) and keep.dtype == torch.bool: - self._stats[k] = [a for i, a in enumerate(v) if keep[i]] - elif isinstance(v, list): - self._stats[k] = [v[i] for i in keep] - else: - raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") - - def cat(self, new_stats: "MaskData") -> None: - for k, v in new_stats.items(): - if k not in self._stats or self._stats[k] is None: - self._stats[k] = deepcopy(v) - elif isinstance(v, torch.Tensor): - self._stats[k] = torch.cat([self._stats[k], v], dim=0) - elif isinstance(v, np.ndarray): - self._stats[k] = np.concatenate([self._stats[k], v], axis=0) - elif isinstance(v, list): - self._stats[k] = self._stats[k] + deepcopy(v) - else: - raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") - - def to_numpy(self) -> None: - for k, v in self._stats.items(): - if isinstance(v, torch.Tensor): - self._stats[k] = v.detach().cpu().numpy() - - -def is_box_near_crop_edge( - boxes: torch.Tensor, crop_box: List[int], orig_box: List[int], atol: float = 20.0 -) -> torch.Tensor: - """Filter masks at the edge of a crop, but not at the edge of the original image.""" - crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device) - orig_box_torch = torch.as_tensor(orig_box, dtype=torch.float, device=boxes.device) - boxes = uncrop_boxes_xyxy(boxes, crop_box).float() - near_crop_edge = torch.isclose(boxes, crop_box_torch[None, :], atol=atol, rtol=0) - near_image_edge = torch.isclose(boxes, orig_box_torch[None, :], atol=atol, rtol=0) - near_crop_edge = torch.logical_and(near_crop_edge, ~near_image_edge) - return torch.any(near_crop_edge, dim=1) - - -def box_xyxy_to_xywh(box_xyxy: torch.Tensor) -> torch.Tensor: - box_xywh = deepcopy(box_xyxy) - box_xywh[2] = box_xywh[2] - box_xywh[0] - box_xywh[3] = box_xywh[3] - box_xywh[1] - return box_xywh - - -def batch_iterator(batch_size: int, *args) -> Generator[List[Any], None, None]: - assert len(args) > 0 and all( - len(a) == len(args[0]) for a in args - ), "Batched iteration must have inputs of all the same size." - n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0) - for b in range(n_batches): - yield [arg[b * batch_size : (b + 1) * batch_size] for arg in args] - - -def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]: - """ - Encodes masks to an uncompressed RLE, in the format expected by - pycoco tools. 
- """ - # Put in fortran order and flatten h,w - b, h, w = tensor.shape - tensor = tensor.permute(0, 2, 1).flatten(1) - - # Compute change indices - diff = tensor[:, 1:] ^ tensor[:, :-1] - change_indices = diff.nonzero() - - # Encode run length - out = [] - for i in range(b): - cur_idxs = change_indices[change_indices[:, 0] == i, 1] - cur_idxs = torch.cat( - [ - torch.tensor([0], dtype=cur_idxs.dtype, device=cur_idxs.device), - cur_idxs + 1, - torch.tensor([h * w], dtype=cur_idxs.dtype, device=cur_idxs.device), - ] - ) - btw_idxs = cur_idxs[1:] - cur_idxs[:-1] - counts = [] if tensor[i, 0] == 0 else [0] - counts.extend(btw_idxs.detach().cpu().tolist()) - out.append({"size": [h, w], "counts": counts}) - return out - - -def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray: - """Compute a binary mask from an uncompressed RLE.""" - h, w = rle["size"] - mask = np.empty(h * w, dtype=bool) - idx = 0 - parity = False - for count in rle["counts"]: - mask[idx : idx + count] = parity - idx += count - parity ^= True - mask = mask.reshape(w, h) - return mask.transpose() # Put in C order - - -def area_from_rle(rle: Dict[str, Any]) -> int: - return sum(rle["counts"][1::2]) - - -def calculate_stability_score( - masks: torch.Tensor, mask_threshold: float, threshold_offset: float -) -> torch.Tensor: - """ - Computes the stability score for a batch of masks. The stability - score is the IoU between the binary masks obtained by thresholding - the predicted mask logits at high and low values. - """ - # One mask is always contained inside the other. - # Save memory by preventing unnecesary cast to torch.int64 - intersections = ( - (masks > (mask_threshold + threshold_offset)) - .sum(-1, dtype=torch.int16) - .sum(-1, dtype=torch.int32) - ) - unions = ( - (masks > (mask_threshold - threshold_offset)) - .sum(-1, dtype=torch.int16) - .sum(-1, dtype=torch.int32) - ) - return intersections / unions - - -def build_point_grid(n_per_side: int) -> np.ndarray: - """Generates a 2D grid of points evenly spaced in [0,1]x[0,1].""" - offset = 1 / (2 * n_per_side) - points_one_side = np.linspace(offset, 1 - offset, n_per_side) - points_x = np.tile(points_one_side[None, :], (n_per_side, 1)) - points_y = np.tile(points_one_side[:, None], (1, n_per_side)) - points = np.stack([points_x, points_y], axis=-1).reshape(-1, 2) - return points - - -def build_all_layer_point_grids( - n_per_side: int, n_layers: int, scale_per_layer: int -) -> List[np.ndarray]: - """Generates point grids for all crop layers.""" - points_by_layer = [] - for i in range(n_layers + 1): - n_points = int(n_per_side / (scale_per_layer**i)) - points_by_layer.append(build_point_grid(n_points)) - return points_by_layer - - -def generate_crop_boxes( - im_size: Tuple[int, ...], n_layers: int, overlap_ratio: float -) -> Tuple[List[List[int]], List[int]]: - """ - Generates a list of crop boxes of different sizes. Each layer - has (2**i)**2 boxes for the ith layer. 
- """ - crop_boxes, layer_idxs = [], [] - im_h, im_w = im_size - short_side = min(im_h, im_w) - - # Original image - crop_boxes.append([0, 0, im_w, im_h]) - layer_idxs.append(0) - - def crop_len(orig_len, n_crops, overlap): - return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops)) - - for i_layer in range(n_layers): - n_crops_per_side = 2 ** (i_layer + 1) - overlap = int(overlap_ratio * short_side * (2 / n_crops_per_side)) - - crop_w = crop_len(im_w, n_crops_per_side, overlap) - crop_h = crop_len(im_h, n_crops_per_side, overlap) - - crop_box_x0 = [int((crop_w - overlap) * i) for i in range(n_crops_per_side)] - crop_box_y0 = [int((crop_h - overlap) * i) for i in range(n_crops_per_side)] - - # Crops in XYWH format - for x0, y0 in product(crop_box_x0, crop_box_y0): - box = [x0, y0, min(x0 + crop_w, im_w), min(y0 + crop_h, im_h)] - crop_boxes.append(box) - layer_idxs.append(i_layer + 1) - - return crop_boxes, layer_idxs - - -def uncrop_boxes_xyxy(boxes: torch.Tensor, crop_box: List[int]) -> torch.Tensor: - x0, y0, _, _ = crop_box - offset = torch.tensor([[x0, y0, x0, y0]], device=boxes.device) - # Check if boxes has a channel dimension - if len(boxes.shape) == 3: - offset = offset.unsqueeze(1) - return boxes + offset - - -def uncrop_points(points: torch.Tensor, crop_box: List[int]) -> torch.Tensor: - x0, y0, _, _ = crop_box - offset = torch.tensor([[x0, y0]], device=points.device) - # Check if points has a channel dimension - if len(points.shape) == 3: - offset = offset.unsqueeze(1) - return points + offset - - -def uncrop_masks( - masks: torch.Tensor, crop_box: List[int], orig_h: int, orig_w: int -) -> torch.Tensor: - x0, y0, x1, y1 = crop_box - if x0 == 0 and y0 == 0 and x1 == orig_w and y1 == orig_h: - return masks - # Coordinate transform masks - pad_x, pad_y = orig_w - (x1 - x0), orig_h - (y1 - y0) - pad = (x0, pad_x - x0, y0, pad_y - y0) - return torch.nn.functional.pad(masks, pad, value=0) - - -def remove_small_regions( - mask: np.ndarray, area_thresh: float, mode: str -) -> Tuple[np.ndarray, bool]: - """ - Removes small disconnected regions and holes in a mask. Returns the - mask and an indicator of if the mask has been modified. - """ - import cv2 # type: ignore - - assert mode in ["holes", "islands"] - correct_holes = mode == "holes" - working_mask = (correct_holes ^ mask).astype(np.uint8) - n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8) - sizes = stats[:, -1][1:] # Row 0 is background label - small_regions = [i + 1 for i, s in enumerate(sizes) if s < area_thresh] - if len(small_regions) == 0: - return mask, False - fill_labels = [0] + small_regions - if not correct_holes: - fill_labels = [i for i in range(n_labels) if i not in fill_labels] - # If every region is below threshold, keep largest - if len(fill_labels) == 0: - fill_labels = [int(np.argmax(sizes)) + 1] - mask = np.isin(regions, fill_labels) - return mask, True - - -def coco_encode_rle(uncompressed_rle: Dict[str, Any]) -> Dict[str, Any]: - from pycocotools import mask as mask_utils # type: ignore - - h, w = uncompressed_rle["size"] - rle = mask_utils.frPyObjects(uncompressed_rle, h, w) - rle["counts"] = rle["counts"].decode("utf-8") # Necessary to serialize with json - return rle - - -def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor: - """ - Calculates boxes in XYXY format around masks. Return [0,0,0,0] for - an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4. 
- """ - # torch.max below raises an error on empty inputs, just skip in this case - if torch.numel(masks) == 0: - return torch.zeros(*masks.shape[:-2], 4, device=masks.device) - - # Normalize shape to CxHxW - shape = masks.shape - h, w = shape[-2:] - if len(shape) > 2: - masks = masks.flatten(0, -3) - else: - masks = masks.unsqueeze(0) - - # Get top and bottom edges - in_height, _ = torch.max(masks, dim=-1) - in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :] - bottom_edges, _ = torch.max(in_height_coords, dim=-1) - in_height_coords = in_height_coords + h * (~in_height) - top_edges, _ = torch.min(in_height_coords, dim=-1) - - # Get left and right edges - in_width, _ = torch.max(masks, dim=-2) - in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :] - right_edges, _ = torch.max(in_width_coords, dim=-1) - in_width_coords = in_width_coords + w * (~in_width) - left_edges, _ = torch.min(in_width_coords, dim=-1) - - # If the mask is empty the right edge will be to the left of the left edge. - # Replace these boxes with [0, 0, 0, 0] - empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges) - out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1) - out = out * (~empty_filter).unsqueeze(-1) - - # Return to original shape - if len(shape) > 2: - out = out.reshape(*shape[:-2], 4) - else: - out = out[0] - - return out diff --git a/spaces/rorallitri/biomedical-language-models/logs/Engal Thanga Raja Movie Download.md b/spaces/rorallitri/biomedical-language-models/logs/Engal Thanga Raja Movie Download.md deleted file mode 100644 index 9d654932a47bcb0391ebf4c8900fb4b1f98806d9..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Engal Thanga Raja Movie Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Engal Thanga Raja Movie Download


Download Zip: https://tinurll.com/2uznen



          - - aaccfb2cb3
          -
          -
          -

          diff --git a/spaces/russellc/BLIP/data/utils.py b/spaces/russellc/BLIP/data/utils.py deleted file mode 100644 index 628894844becd462d444584b8b2b01a84ee4b8f7..0000000000000000000000000000000000000000 --- a/spaces/russellc/BLIP/data/utils.py +++ /dev/null @@ -1,112 +0,0 @@ -import re -import json -import os - -import torch -import torch.distributed as dist - -import utils - -def pre_caption(caption,max_words=50): - caption = re.sub( - r"([.!\"()*#:;~])", - ' ', - caption.lower(), - ) - caption = re.sub( - r"\s{2,}", - ' ', - caption, - ) - caption = caption.rstrip('\n') - caption = caption.strip(' ') - - #truncate caption - caption_words = caption.split(' ') - if len(caption_words)>max_words: - caption = ' '.join(caption_words[:max_words]) - - return caption - -def pre_question(question,max_ques_words=50): - question = re.sub( - r"([.!\"()*#:;~])", - '', - question.lower(), - ) - question = question.rstrip(' ') - - #truncate question - question_words = question.split(' ') - if len(question_words)>max_ques_words: - question = ' '.join(question_words[:max_ques_words]) - - return question - - -def save_result(result, result_dir, filename, remove_duplicate=''): - result_file = os.path.join(result_dir, '%s_rank%d.json'%(filename,utils.get_rank())) - final_result_file = os.path.join(result_dir, '%s.json'%filename) - - json.dump(result,open(result_file,'w')) - - dist.barrier() - - if utils.is_main_process(): - # combine results from all processes - result = [] - - for rank in range(utils.get_world_size()): - result_file = os.path.join(result_dir, '%s_rank%d.json'%(filename,rank)) - res = json.load(open(result_file,'r')) - result += res - - if remove_duplicate: - result_new = [] - id_list = [] - for res in result: - if res[remove_duplicate] not in id_list: - id_list.append(res[remove_duplicate]) - result_new.append(res) - result = result_new - - json.dump(result,open(final_result_file,'w')) - print('result file saved to %s'%final_result_file) - - return final_result_file - - - -from pycocotools.coco import COCO -from pycocoevalcap.eval import COCOEvalCap -from torchvision.datasets.utils import download_url - -def coco_caption_eval(coco_gt_root, results_file, split): - urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val_gt.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test_gt.json'} - filenames = {'val':'coco_karpathy_val_gt.json','test':'coco_karpathy_test_gt.json'} - - download_url(urls[split],coco_gt_root) - annotation_file = os.path.join(coco_gt_root,filenames[split]) - - # create coco object and coco_result object - coco = COCO(annotation_file) - coco_result = coco.loadRes(results_file) - - # create coco_eval object by taking coco and coco_result - coco_eval = COCOEvalCap(coco, coco_result) - - # evaluate on a subset of images by setting - # coco_eval.params['image_id'] = coco_result.getImgIds() - # please remove this line when evaluating the full validation set - # coco_eval.params['image_id'] = coco_result.getImgIds() - - # evaluate results - # SPICE will take a few minutes the first time, but speeds up due to caching - coco_eval.evaluate() - - # print output evaluation scores - for metric, score in coco_eval.eval.items(): - print(f'{metric}: {score:.3f}') - - return coco_eval \ No newline at end of file diff --git a/spaces/russellc/BLIP/transform/randaugment.py b/spaces/russellc/BLIP/transform/randaugment.py deleted file mode 100644 index 
094d9f4cacc93146d2bab7311d9dc04feb07032c..0000000000000000000000000000000000000000 --- a/spaces/russellc/BLIP/transform/randaugment.py +++ /dev/null @@ -1,340 +0,0 @@ -import cv2 -import numpy as np - - -## aug functions -def identity_func(img): - return img - - -def autocontrast_func(img, cutoff=0): - ''' - same output as PIL.ImageOps.autocontrast - ''' - n_bins = 256 - - def tune_channel(ch): - n = ch.size - cut = cutoff * n // 100 - if cut == 0: - high, low = ch.max(), ch.min() - else: - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - low = np.argwhere(np.cumsum(hist) > cut) - low = 0 if low.shape[0] == 0 else low[0] - high = np.argwhere(np.cumsum(hist[::-1]) > cut) - high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0] - if high <= low: - table = np.arange(n_bins) - else: - scale = (n_bins - 1) / (high - low) - offset = -low * scale - table = np.arange(n_bins) * scale + offset - table[table < 0] = 0 - table[table > n_bins - 1] = n_bins - 1 - table = table.clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def equalize_func(img): - ''' - same output as PIL.ImageOps.equalize - PIL's implementation is different from cv2.equalize - ''' - n_bins = 256 - - def tune_channel(ch): - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - non_zero_hist = hist[hist != 0].reshape(-1) - step = np.sum(non_zero_hist[:-1]) // (n_bins - 1) - if step == 0: return ch - n = np.empty_like(hist) - n[0] = step // 2 - n[1:] = hist[:-1] - table = (np.cumsum(n) // step).clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def rotate_func(img, degree, fill=(0, 0, 0)): - ''' - like PIL, rotate by degree, not radians - ''' - H, W = img.shape[0], img.shape[1] - center = W / 2, H / 2 - M = cv2.getRotationMatrix2D(center, degree, 1) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill) - return out - - -def solarize_func(img, thresh=128): - ''' - same output as PIL.ImageOps.posterize - ''' - table = np.array([el if el < thresh else 255 - el for el in range(256)]) - table = table.clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def color_func(img, factor): - ''' - same output as PIL.ImageEnhance.Color - ''' - ## implementation according to PIL definition, quite slow - # degenerate = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[:, :, np.newaxis] - # out = blend(degenerate, img, factor) - # M = ( - # np.eye(3) * factor - # + np.float32([0.114, 0.587, 0.299]).reshape(3, 1) * (1. 
- factor) - # )[np.newaxis, np.newaxis, :] - M = ( - np.float32([ - [0.886, -0.114, -0.114], - [-0.587, 0.413, -0.587], - [-0.299, -0.299, 0.701]]) * factor - + np.float32([[0.114], [0.587], [0.299]]) - ) - out = np.matmul(img, M).clip(0, 255).astype(np.uint8) - return out - - -def contrast_func(img, factor): - """ - same output as PIL.ImageEnhance.Contrast - """ - mean = np.sum(np.mean(img, axis=(0, 1)) * np.array([0.114, 0.587, 0.299])) - table = np.array([( - el - mean) * factor + mean - for el in range(256) - ]).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def brightness_func(img, factor): - ''' - same output as PIL.ImageEnhance.Contrast - ''' - table = (np.arange(256, dtype=np.float32) * factor).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def sharpness_func(img, factor): - ''' - The differences the this result and PIL are all on the 4 boundaries, the center - areas are same - ''' - kernel = np.ones((3, 3), dtype=np.float32) - kernel[1][1] = 5 - kernel /= 13 - degenerate = cv2.filter2D(img, -1, kernel) - if factor == 0.0: - out = degenerate - elif factor == 1.0: - out = img - else: - out = img.astype(np.float32) - degenerate = degenerate.astype(np.float32)[1:-1, 1:-1, :] - out[1:-1, 1:-1, :] = degenerate + factor * (out[1:-1, 1:-1, :] - degenerate) - out = out.astype(np.uint8) - return out - - -def shear_x_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, factor, 0], [0, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def translate_x_func(img, offset, fill=(0, 0, 0)): - ''' - same output as PIL.Image.transform - ''' - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, -offset], [0, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def translate_y_func(img, offset, fill=(0, 0, 0)): - ''' - same output as PIL.Image.transform - ''' - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [0, 1, -offset]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def posterize_func(img, bits): - ''' - same output as PIL.ImageOps.posterize - ''' - out = np.bitwise_and(img, np.uint8(255 << (8 - bits))) - return out - - -def shear_y_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [factor, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def cutout_func(img, pad_size, replace=(0, 0, 0)): - replace = np.array(replace, dtype=np.uint8) - H, W = img.shape[0], img.shape[1] - rh, rw = np.random.random(2) - pad_size = pad_size // 2 - ch, cw = int(rh * H), int(rw * W) - x1, x2 = max(ch - pad_size, 0), min(ch + pad_size, H) - y1, y2 = max(cw - pad_size, 0), min(cw + pad_size, W) - out = img.copy() - out[x1:x2, y1:y2, :] = replace - return out - - -### level to args -def enhance_level_to_args(MAX_LEVEL): - def level_to_args(level): - return ((level / MAX_LEVEL) * 1.8 + 0.1,) - return level_to_args - - -def shear_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 0.3 - if np.random.random() > 0.5: level = -level - return (level, replace_value) - - return level_to_args - - -def translate_level_to_args(translate_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * float(translate_const) 
- if np.random.random() > 0.5: level = -level - return (level, replace_value) - - return level_to_args - - -def cutout_level_to_args(cutout_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = int((level / MAX_LEVEL) * cutout_const) - return (level, replace_value) - - return level_to_args - - -def solarize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 256) - return (level, ) - return level_to_args - - -def none_level_to_args(level): - return () - - -def posterize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 4) - return (level, ) - return level_to_args - - -def rotate_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 30 - if np.random.random() < 0.5: - level = -level - return (level, replace_value) - - return level_to_args - - -func_dict = { - 'Identity': identity_func, - 'AutoContrast': autocontrast_func, - 'Equalize': equalize_func, - 'Rotate': rotate_func, - 'Solarize': solarize_func, - 'Color': color_func, - 'Contrast': contrast_func, - 'Brightness': brightness_func, - 'Sharpness': sharpness_func, - 'ShearX': shear_x_func, - 'TranslateX': translate_x_func, - 'TranslateY': translate_y_func, - 'Posterize': posterize_func, - 'ShearY': shear_y_func, -} - -translate_const = 10 -MAX_LEVEL = 10 -replace_value = (128, 128, 128) -arg_dict = { - 'Identity': none_level_to_args, - 'AutoContrast': none_level_to_args, - 'Equalize': none_level_to_args, - 'Rotate': rotate_level_to_args(MAX_LEVEL, replace_value), - 'Solarize': solarize_level_to_args(MAX_LEVEL), - 'Color': enhance_level_to_args(MAX_LEVEL), - 'Contrast': enhance_level_to_args(MAX_LEVEL), - 'Brightness': enhance_level_to_args(MAX_LEVEL), - 'Sharpness': enhance_level_to_args(MAX_LEVEL), - 'ShearX': shear_level_to_args(MAX_LEVEL, replace_value), - 'TranslateX': translate_level_to_args( - translate_const, MAX_LEVEL, replace_value - ), - 'TranslateY': translate_level_to_args( - translate_const, MAX_LEVEL, replace_value - ), - 'Posterize': posterize_level_to_args(MAX_LEVEL), - 'ShearY': shear_level_to_args(MAX_LEVEL, replace_value), -} - - -class RandomAugment(object): - - def __init__(self, N=2, M=10, isPIL=False, augs=[]): - self.N = N - self.M = M - self.isPIL = isPIL - if augs: - self.augs = augs - else: - self.augs = list(arg_dict.keys()) - - def get_random_ops(self): - sampled_ops = np.random.choice(self.augs, self.N) - return [(op, 0.5, self.M) for op in sampled_ops] - - def __call__(self, img): - if self.isPIL: - img = np.array(img) - ops = self.get_random_ops() - for name, prob, level in ops: - if np.random.random() > prob: - continue - args = arg_dict[name](level) - img = func_dict[name](img, *args) - return img - - -if __name__ == '__main__': - a = RandomAugment() - img = np.random.randn(32, 32, 3) - a(img) \ No newline at end of file diff --git a/spaces/sai22/vits-models/text/cleaners.py b/spaces/sai22/vits-models/text/cleaners.py deleted file mode 100644 index 68c9ad24d5a303b68a521fba2e8776c8cc867356..0000000000000000000000000000000000000000 --- a/spaces/sai22/vits-models/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. 
"english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - 
('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i - -namespace groundingdino { - -at::Tensor ms_deform_attn_cuda_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector ms_deform_attn_cuda_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/samuelinferences/TabPFN/TabPFN/scripts/baseline_prediction_interface.py b/spaces/samuelinferences/TabPFN/TabPFN/scripts/baseline_prediction_interface.py deleted file mode 100644 index 298a046c4c3c39cbddbcdc5ee47c68606c706b2c..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/TabPFN/TabPFN/scripts/baseline_prediction_interface.py +++ /dev/null @@ -1,38 +0,0 @@ -import tqdm -import numpy as np - -def baseline_predict(metric_function, eval_xs, eval_ys, categorical_feats, metric_used=None, eval_pos=2, max_time=300, **kwargs): - """ - Baseline prediction interface. 
- :param metric_function: - :param eval_xs: - :param eval_ys: - :param categorical_feats: - :param metric_used: - :param eval_pos: - :param max_time: Scheduled maximum time - :param kwargs: - :return: list [np.array(metrics), np.array(outputs), best_configs] or [None, None, None] if failed - """ - - metrics = [] - outputs = [] - best_configs = [] - eval_splits = list(zip(eval_xs.transpose(0, 1), eval_ys.transpose(0, 1))) - for eval_x, eval_y in tqdm.tqdm(eval_splits, desc='Calculating splits'+str(metric_function)+' '+str(eval_pos)): - try: - metric, output, best_config = metric_function(eval_x[:eval_pos], - eval_y[:eval_pos], - eval_x[eval_pos:], - eval_y[eval_pos:], - categorical_feats, - metric_used=metric_used - , max_time=max_time) - metrics += [metric] - outputs += [output] - best_configs += [best_config] - return np.array(metrics), np.array(outputs), best_configs - except Exception as e: - print(f'There was an exception in {metric_function}') - print(e) - return None, None, None \ No newline at end of file diff --git a/spaces/sandeepmajumdar/nlp-sorcery/app.py b/spaces/sandeepmajumdar/nlp-sorcery/app.py deleted file mode 100644 index b6ab713ca477340c3cdaffb6e3c1363e3afad954..0000000000000000000000000000000000000000 --- a/spaces/sandeepmajumdar/nlp-sorcery/app.py +++ /dev/null @@ -1,120 +0,0 @@ -from transformers import pipeline -import gradio as gr - -# Sentiment analysis interface -sa_examples = [ - 'the food delivered was stale', - 'i like your shirt', - 'this is not the way to work', -] - -sa_app = gr.Interface.load( - 'huggingface/distilbert-base-uncased-finetuned-sst-2-english', - title='Sentiment Analysis', - examples=sa_examples, - description='Type your sentence here and click Submit', -) - -# Text Generation -tg_examples = [ - 'I want to feel', - 'It is possible to', - 'When the world seems' -] - -tg_app = gr.Interface.load( - 'huggingface/distilgpt2', - title='Text Generation', - examples=tg_examples, - description="Write an incomplete sentence and submit", -) - -# Fill Mask -fm_examples = [ - 'Do you know how much I you?', - 'When we went to the forest, it raining', -] - -fm_app = gr.Interface.load( - 'huggingface/distilroberta-base', - title='Fill In The Blank', - examples=fm_examples, - description="Write a sentence with a missing word using \", -) - -# Named Entity Recognition -ner_examples = [ - 'My name is Doug and I live in Delhi', - 'Vishal works at Google', -] - -ner_app = gr.Interface.load( - 'huggingface/dbmdz/bert-large-cased-finetuned-conll03-english', - title='Named Entity Recognition', - examples=ner_examples, - description="Write a sentence with a name, place, organization, etc and I'll try to reveal them", -) - -# Summarization -sum_examples = [ - '''Television is one of the many wonders of modern science and technology. It was invented in England by the Scottish scientist J.N. Baird -in 1928 and the British Broadcasting Corporation was the first to broadcast television images in 1929. Previously the radio helped us -hear things from far and near. spread information and knowledge from one corner of the globe to another. But all this was done through -sound only. But television combined visual images with sound. Today we can watch games, shows, and song and dance programs from all -corners of the world while sitting at our own homes. TV can be used for educating the masses, for bringing to us the latest pieces of -information audio-visually and can provide us all kinds of entertainment even in color. 
But as in all things, too much televiewing may -prove harmful. In many cases, the habit of watching TV has an adverse effect on the study habits of the young. When we read books, we -have to use our intelligence and imagination. But in most cases, TV watching is a passive thing. It may dull our imagination and -intelligence.''', -] - -sum_app = gr.Interface.load( - 'huggingface/sshleifer/distilbart-cnn-12-6', - title='Text Summarization', - examples=sum_examples, - description="Copy and dump a long paragraph here for summarization, or click the example below", -) - -# Translation to Hindi -trans_examples = [ - 'I want to go home', - "i will go to the station tomorrow", -] - -trans_app = gr.Interface.load( - 'huggingface/Helsinki-NLP/opus-mt-en-hi', - title='Translate From English to Hindi', - examples=trans_examples, - description="Write a sentence to translate from English to Hindi", -) - -# Text to Speech -tts_examples = [ - "How do you do?", - 'i thought we were supposed to go to the park' -] - -tts_app = gr.Interface.load( - "huggingface/facebook/fastspeech2-en-ljspeech", - title='Text to Speech', - examples=tts_examples, - description="Give me something to say!", -) - -# Speech to Text -stt_app = gr.Interface.load( - "huggingface/facebook/wav2vec2-base-960h", - title='Speech to Text', - inputs="mic", - description="Let me try to guess what you're saying! Stop the recording button before submitting.", -) - -with gr.Blocks() as demo: - gr.Markdown("# App For Various NLP Tasks (use landscape on phone)") - gr.TabbedInterface([sa_app, tg_app, fm_app, ner_app, sum_app, trans_app, tts_app, stt_app], - ["Sentiment Analysis", "Text Generation", "Fill Blank", "Named Entity", "Summary", - "Translation", "Text to Speech", "Speech to Text"] - ) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/sarinam/speaker-anonymization-gan/demo_inference/__init__.py b/spaces/sarinam/speaker-anonymization-gan/demo_inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sasha/Draw-Me-An-Insect/app.py b/spaces/sasha/Draw-Me-An-Insect/app.py deleted file mode 100644 index 74886265a182b87b1cb7fb9f85dbbf35f6440991..0000000000000000000000000000000000000000 --- a/spaces/sasha/Draw-Me-An-Insect/app.py +++ /dev/null @@ -1,450 +0,0 @@ -import gradio as gr -#import torch -import whisper -from datetime import datetime -from PIL import Image -import flag -import os -#MY_SECRET_TOKEN=os.environ.get('HF_TOKEN_SD') - -#from diffusers import StableDiffusionPipeline - -whisper = gr.Interface.load(name="spaces/sanchit-gandhi/whisper-large-v2") -stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5") -### ———————————————————————————————————————— - -title="Draw Me an Insect 🐞 /Dessine-moi un insecte 🐞" - -### ———————————————————————————————————————— - - -#device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - -#pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=MY_SECRET_TOKEN) -#pipe.to(device) - -### ———————————————————————————————————————— - -def get_images(prompt): - gallery_dir = stable_diffusion(prompt, fn_index=2) - return [os.path.join(gallery_dir, img) for img in os.listdir(gallery_dir)] - - -def magic_whisper_to_sd(audio, guidance_scale, nb_iterations, seed): - - whisper_results = translate_better(audio) - prompt = whisper_results[1] - images = get_images(prompt) - - return whisper_results[0], 
whisper_results[1], images - -#def diffuse(prompt, guidance_scale, nb_iterations, seed): -# -# generator = torch.Generator(device=device).manual_seed(int(seed)) -# -# print(""" -# — -# Sending prompt to Stable Diffusion ... -# — -# """) -# print("prompt: " + prompt) -# print("guidance scale: " + str(guidance_scale)) -# print("inference steps: " + str(nb_iterations)) -# print("seed: " + str(seed)) -# -# images_list = pipe( -# [prompt] * 2, -# guidance_scale=guidance_scale, -# num_inference_steps=nb_iterations, -# generator=generator -# ) -# -# images = [] -# -# safe_image = Image.open(r"unsafe.png") -# -# for i, image in enumerate(images_list["sample"]): -# if(images_list["nsfw_content_detected"][i]): -# images.append(safe_image) -# else: -# images.append(image) -# -# -# print("Stable Diffusion has finished") -# print("———————————————————————————————————————————") -# -# return images - -def translate_better(audio): - print(""" - — - Sending audio to Whisper ... - — - """) - transcribe_text_result = whisper(audio, None, "transcribe", fn_index=0) - translate_text_result = whisper(audio, None, "translate", fn_index=0) - print("transcript: " + transcribe_text_result) - print("———————————————————————————————————————————") - print("translated: " + translate_text_result) - - return transcribe_text_result, translate_text_result - -### ———————————————————————————————————————— - -css = """ - .container { - max-width: 880px; - margin: auto; - padding-top: 1.5rem; - } - a { - text-decoration: underline; - } - h1 { - font-weight: 900; - margin-bottom: 7px; - text-align: center; - font-size: 2em; - margin-bottom: 1em; - } - #w2sd_container{ - margin-top: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .tabitem { - border-bottom-left-radius: 10px; - border-bottom-right-radius: 10px; - } - #record_tab, #upload_tab { - font-size: 1.2em; - } - #record_btn{ - - } - #record_btn > div > button > span { - width: 2.375rem; - height: 2.375rem; - } - #record_btn > div > button > span > span { - width: 2.375rem; - height: 2.375rem; - } - audio { - margin-bottom: 10px; - } - div#record_btn > .mt-6{ - margin-top: 0!important; - } - div#record_btn > .mt-6 button { - font-size: 2em; - width: 100%; - padding: 20px; - height: 160px; - } - div#upload_area { - height: 11.1rem; - } - div#upload_area > div.w-full > div { - min-height: 9rem; - } - #check_btn_1, #check_btn_2{ - color: #fff; - --tw-gradient-from: #4caf50; - --tw-gradient-stops: var(--tw-gradient-from), var(--tw-gradient-to); - --tw-gradient-to: #4caf50; - border-color: #8bc34a; - } - #magic_btn_1, #magic_btn_2{ - color: #fff; - --tw-gradient-from: #f44336; - --tw-gradient-stops: var(--tw-gradient-from), var(--tw-gradient-to); - --tw-gradient-to: #ff9800; - border-color: #ff9800; - } - input::-webkit-inner-spin-button, input::-webkit-outer-spin-button { - -webkit-appearance: none; - } - input[type=number] { - -moz-appearance: textfield; - } - input[type=range] { - -webkit-appearance: none; - cursor: pointer; - height: 1px; - background: currentColor; - } - input[type=range]::-webkit-slider-thumb { - -webkit-appearance: none; - width: 0.5em; - height: 1.2em; - border-radius: 10px; - background: currentColor; - } - input[type=range]::-moz-range-thumb{ - width: 
0.5em; - height: 1.2em; - border-radius: 10px; - background: currentColor; - } - div#spoken_lang textarea { - font-size: 4em; - line-height: 1em; - text-align: center; - } - div#transcripted { - flex: 4; - } - div#translated textarea { - font-size: 1.5em; - line-height: 1.25em; - } - #sd_settings { - margin-bottom: 20px; - } - #diffuse_btn { - color: #fff; - font-size: 1em; - margin-bottom: 20px; - --tw-gradient-from: #4caf50; - --tw-gradient-stops: var(--tw-gradient-from), var(--tw-gradient-to); - --tw-gradient-to: #4caf50; - border-color: #8bc34a; - } - #notice { - padding: 20px 14px 10px; - display: flex; - align-content: space-evenly; - gap: 20px; - line-height: 1em; - font-size: .8em; - border: 1px solid #374151; - border-radius: 10px; - } - #about { - padding: 20px; - } - #notice > div { - flex: 1; - } - -""" - -### ———————————————————————————————————————— - -with gr.Blocks(css=css) as demo: - with gr.Column(): - gr.HTML(''' -

          - Draw Me an Insect 🐞 - Dessine-moi un insecte 🐞 -

          -

          - Tell the AI the story of your first insect encounter and it will generate an image to illustrate it! - -

          - -

          - Raconte à l'IA l'histoire de ta première rencontre avec les insectes et ça va génerer une image pour l'illustrer!

          - ''') -# with gr.Row(elem_id="w2sd_container"): -# with gr.Column(): - - gr.Markdown( - """ - - ## 1. Record audio or Upload an audio file/ Enregistrer de l'audio ou téléverser un fichier audio : - - """ - ) - with gr.Tab(label="Record/Enregistrer", elem_id="record_tab"): - with gr.Column(): - record_input = gr.Audio( - source="microphone", - type="filepath", - show_label=False, - elem_id="record_btn" - ) - with gr.Row(): - audio_r_translate = gr.Button("Check the transcription/Vérifier la transcription 👍", elem_id="check_btn_1") - audio_r_direct_sd = gr.Button("Generate the image right now! / Génerer l'image directement! 🖌️", elem_id="magic_btn_1") - - with gr.Tab(label="Upload audio/Téléverser audio", elem_id="upload_tab"): - with gr.Column(): - upload_input = gr.Audio( - source="upload", - type="filepath", - show_label=False, - elem_id="upload_area" - ) - with gr.Row(): - audio_u_translate = gr.Button("Check the transcription/Vérifier la transcription 👍", elem_id="check_btn_2") - audio_u_direct_sd = gr.Button("Generate the image right now! / Génerer l'image directement! 🖌️", elem_id="magic_btn_2") - - - with gr.Accordion(label="Image generation Settings/Configuration de génération d'image", elem_id="sd_settings", visible=False): - with gr.Row(): - guidance_scale = gr.Slider(2, 15, value = 7, label = 'Guidance Scale') - nb_iterations = gr.Slider(10, 50, value = 25, step = 1, label = 'Steps') - seed = gr.Slider(label = "Seed", minimum = 0, maximum = 2147483647, step = 1, randomize = True) - - gr.Markdown( - """ - ## 2. Check the text output, correct it if necessary/Vérifier la transcription, corriger si nécessaire: - """ - ) - - with gr.Row(): - - transcripted_output = gr.Textbox( - label="Transcription in your detected spoken language", - lines=3, - elem_id="transcripted" - ) - #language_detected_output = gr.Textbox(label="Native language", elem_id="spoken_lang",lines=3) - - with gr.Column(): - translated_output = gr.Textbox( - label="Transcript translated in English by Whisper", - lines=4, - elem_id="translated" - ) - with gr.Row(): - clear_btn = gr.Button(value="Clear") - diffuse_btn = gr.Button(value="OK, Diffuse this prompt !", elem_id="diffuse_btn") - - clear_btn.click(fn=lambda value: gr.update(value=""), inputs=clear_btn, outputs=translated_output) - - - - - -# with gr.Column(): - - - - gr.Markdown(""" - ## 3. Wait for your image/Attendre ton image ☕️ - This can take ~20-30 seconds/ Ceci peut prendre jusqu'à 20-30 secondes. - - """ - ) - - sd_output = gr.Gallery().style(grid=2, height="auto") - - - gr.Markdown(""" - ### 📌 About the models -

          - Whisper is a general-purpose speech recognition model.

          - It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
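As a rough illustration (not an official snippet), this Space reaches Whisper by loading the hosted demo with `gr.Interface.load`, exactly as the `translate_better` helper above does. The sketch below mirrors that call; the audio path is only a placeholder and behaviour depends on the gradio version pinned by this Space.

```python
import gradio as gr

# Load the hosted Whisper Space, as done at the top of this app
whisper = gr.Interface.load(name="spaces/sanchit-gandhi/whisper-large-v2")

# "my_recording.wav" is a placeholder path; fn_index=0 matches the call used by translate_better()
transcript = whisper("my_recording.wav", None, "transcribe", fn_index=0)
translation = whisper("my_recording.wav", None, "translate", fn_index=0)
print(transcript, translation)
```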
          - — -

          -

          - Stable Diffusion is a state of the art text-to-image model that generates images from text. -

          -
          -
          - LICENSE -

          - The model is licensed with a CreativeML Open RAIL-M license.

          -

          - The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license.

          -

          - The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups.

          -

          - For the full list of restrictions please read the license. -

          -
          -
          - Biases and content acknowledgment -

- Despite how impressive being able to turn text into image is, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence.

          -

          - The model was trained on the LAION-5B dataset, which scraped non-curated image-text-pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes.

          -

          You can read more in the model card. -

          -
          -
          - - """, elem_id="about") - - audio_r_translate.click(translate_better, - inputs = record_input, - outputs = [ - #language_detected_output, - transcripted_output, - translated_output - ]) - - audio_u_translate.click(translate_better, - inputs = upload_input, - outputs = [ - #language_detected_output, - transcripted_output, - translated_output - ]) - - audio_r_direct_sd.click(magic_whisper_to_sd, - inputs = [ - record_input, - guidance_scale, - nb_iterations, - seed - ], - outputs = [ - #language_detected_output, - transcripted_output, - translated_output, - sd_output - ]) - - audio_u_direct_sd.click(magic_whisper_to_sd, - inputs = [ - upload_input, - guidance_scale, - nb_iterations, - seed - ], - outputs = [ - #language_detected_output, - transcripted_output, - translated_output, - sd_output - ]) - - diffuse_btn.click(get_images, - inputs = [ - translated_output - ], - outputs = sd_output - ) - gr.HTML(''' - - ''') - - -if __name__ == "__main__": - demo.queue(max_size=32, concurrency_count=20).launch() \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Daens Um Grito De Justica ((TOP)) Download Do Filme Dublado.md b/spaces/scedlatioru/img-to-music/example/Daens Um Grito De Justica ((TOP)) Download Do Filme Dublado.md deleted file mode 100644 index 2ae7c01be0f360d2d440b9f2464565150c081885..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Daens Um Grito De Justica ((TOP)) Download Do Filme Dublado.md +++ /dev/null @@ -1,127 +0,0 @@ -
          -

          Daens Um Grito de Justiça Download do Filme Dublado: Saiba Como Assistir Esse Clássico do Cinema

          - -

Daens Um Grito de Justiça is a 1992 film directed by Stijn Coninx, based on the biography of the Belgian priest Adolf Daens, who fought for workers' rights in the city of Aalst at the end of the 19th century. The film was nominated for the Academy Award for Best Foreign Language Film and received several prizes and critical acclaim.

          -

          daens um grito de justica download do filme dublado


Download Zip: https://gohhs.com/2uEyWy



          - -

The film tells the story of Daens (Jan Decleir), a Catholic priest who is transferred to Aalst, an industrial city where the workers live in miserable conditions, exploited by the textile industry. Daens is moved by the workers' situation and begins publishing texts denouncing the injustices and defending their rights. He also becomes involved in politics and turns into a leader of the workers, coming into conflict with the Church, which sides with the bourgeoisie, and with the government, which represses the social movements.

          - -

Daens Um Grito de Justiça is a moving and inspiring film that shows one man's fight for human dignity and social justice. It also portrays the historical reality of the Second Industrial Revolution in Belgium, with its economic, social and cultural transformations.

          - -

If you want to watch this incredible film but do not know how to download it or where to find the dubbed version, do not worry. In this article we will show you everything you need to know about downloading the dubbed version of Daens Um Grito de Justiça, including where to download it, how to download it, how to watch it and more.

          - -

Where to download the dubbed version of Daens Um Grito de Justiça?

          - -

One of the easiest and fastest ways to download the dubbed version of Daens Um Grito de Justiça is through sites that offer free or paid movie downloads. These sites make the film available in different formats and qualities, such as AVI, MP4, MKV, 720p, 1080p, etc.

          - -

However, not all sites are trustworthy or safe. Some may contain viruses, malware or other harmful programs that can damage your computer or steal your personal information. Some may offer fake or corrupted files that do not work or cause playback problems. Some may also violate the film's copyright and expose you to legal risks.

          - -

That is why it is important to choose a site that guarantees the quality and authenticity of the film, as well as your safety and privacy. One of the most reliable and recommended sites for downloading the dubbed version of Daens Um Grito de Justiça is Vimeo.

          -

          - -

How to download the dubbed version of Daens Um Grito de Justiça on Vimeo?

          - -

Vimeo is a site that offers free, unlimited downloads of subtitled or dubbed HD videos. It has a huge collection of videos of all genres and eras, including the dubbed version of Daens Um Grito de Justiça.

          - -

The site is also safe and protected against viruses, malware and other harmful programs. It respects the copyright of the videos and does not break any law. It is also easy to use and requires no registration or payment.

          - -

To download the dubbed version of Daens Um Grito de Justiça on Vimeo, you just need to follow these simple steps:

          - -
            -
1. Open the Vimeo site in your browser.

2. Type Daens - 1992 (Legendado) in the search box or browse the categories until you find the video.

3. Click the video title to open the video page.

4. Click the Download button to start downloading the video.

5. Choose the format and quality in which you want to download the video.

6. Wait for the download to finish and save the file to your computer.
          12. -
          - -

Done! You can now watch the dubbed version of Daens Um Grito de Justiça on your computer or on any other compatible device.

          - -

How to watch the dubbed version of Daens Um Grito de Justiça?

          - -

After downloading the dubbed version of Daens Um Grito de Justiça from Vimeo, you can watch the video on your computer or on any other device compatible with the file's format and quality.

          - -

To watch the video on your computer, just open the file with a video player such as VLC Media Player, Windows Media Player or KMPlayer. You can adjust the audio, video and subtitle settings to your preference.

          - -

To watch the video on another device, such as a smart TV, a phone or a tablet, just transfer the file to the device via a USB cable or a Wi-Fi network. You can use a video player app such as MX Player, VLC for Android or VLC for iOS, and adjust the audio, video and subtitle settings to your preference.

          - -

Conclusion

          - -

Daens Um Grito de Justiça is an incredible film that tells the true story of the Belgian priest Adolf Daens, who fought for workers' rights in the city of Aalst at the end of the 19th century. The film was nominated for the Academy Award for Best Foreign Language Film and received several prizes and critical acclaim.

          - -

If you want to watch this moving and inspiring film but do not know how to download it or where to find the dubbed version, do not worry. In this article we have shown you everything you need to know about downloading the dubbed version of Daens Um Grito de Justiça, including where to download it, how to download it, how to watch it and more.

          - -

Now you can download and watch the dubbed version of Daens Um Grito de Justiça on your computer or on any other device compatible with the file's format and quality. You will be impressed by this remarkable film that shows one man's fight for human dignity and social justice.

          - -

If you liked this article or have any questions or suggestions about this topic or anything else related to audio-digital.net, please feel free to contact us at any time. We are always happy to hear from you and to help in any way we can.

          -

What are the benefits of watching the dubbed version of Daens Um Grito de Justiça?

          - -

          Assistir Daens Um Grito de Justiça download do filme dublado pode trazer vários benefícios para você, tanto pessoais quanto profissionais. Veja alguns deles:

          - -
            -
          • Você vai aprender mais sobre a história da Bélgica e da Europa no final do século XIX, com suas transformações econômicas, sociais e culturais.
          • -
          • Você vai conhecer a biografia do padre belga Adolf Daens, que foi um exemplo de coragem, fé e compromisso social.
          • -
          • Você vai se emocionar e se inspirar com a luta dos trabalhadores pela dignidade humana e pela justiça social.
          • -
          • Você vai apreciar a qualidade artística do filme, que tem uma direção, um roteiro, uma fotografia, uma trilha sonora e uma atuação excelentes.
          • -
          • You will improve your knowledge and command of the Portuguese language by watching the dubbed film.
          • -
          - -

          Therefore, watching the dubbed download of Daens Um Grito de Justiça can be an enriching and rewarding experience that brings you more culture, information, and entertainment.

          - -

          What are the tips for getting the most out of the dubbed download of Daens Um Grito de Justiça?

          - -

          To get the most out of the dubbed download of Daens Um Grito de Justiça, you can follow a few simple tips that can make a real difference to your experience. Here are some of them:

          - -
            -
          • Choose a suitable time to watch the film, without interruptions or distractions.
          • -
          • Prepare a comfortable and pleasant environment for watching the film, with good lighting, ventilation, and sound.
          • -
          • Watch the film attentively and with interest, trying to take in the details of the story, the characters, and the scenes.
          • -
          • Watch the film with an open and critical mind, trying to understand its historical, social, and cultural context.
          • -
          • Watch the film with a reflective and ethical outlook, trying to draw out its lessons and values.
          • -
          • Watch the film with a participatory and collaborative attitude, sharing your impressions and opinions about it with other people.
          • -
          - -

          By following these tips, you can get more out of the dubbed download of Daens Um Grito de Justiça and have a more satisfying and worthwhile experience.

          -

          What are some interesting facts about the dubbed download of Daens Um Grito de Justiça?

          - -

          The dubbed download of Daens Um Grito de Justiça is a film full of curiosities and interesting facts worth knowing. Here are some of them:

          - -
            -
          • The film is based on the biography of the Belgian priest Adolf Daens written by the Belgian author Louis Paul Boon.
          • -
          • The film was nominated for the Academy Award for Best Foreign Language Film in 1993, but lost to the French film Indochine.
          • -
          • The film was a great success with audiences and critics in Belgium, where it was seen by more than one million viewers.
          • -
          • The film was shot in the city of Aalst, where Father Daens lived and worked, and featured several local residents as extras.
          • -
          • The film was released on DVD in Brazil in 2007 by the distributor Versátil Home Video.
          • -
          - -

          These are some of the interesting facts about the dubbed download of Daens Um Grito de Justiça that you can discover by watching the film.

          - -

          What do the critics say about the dubbed download of Daens Um Grito de Justiça?

          - -

          The dubbed download of Daens Um Grito de Justiça received many positive, enthusiastic reviews from outlets and professionals in the specialized press. Here are some of them:

          - -
          -

          "Daens é um filme poderoso e comovente, que retrata com fidelidade e sensibilidade a vida e a obra do padre belga Adolf Daens, um dos grandes defensores dos direitos humanos da história. O filme tem uma direção impecável, um roteiro envolvente, uma fotografia belíssima e uma atuação magistral de Jan Decleir no papel principal. Um filme que merece ser visto e aplaudido." - Revista Veja

          -
          - -
          -

          "Daens é um filme inspirador e emocionante, que conta a história real do padre belga Adolf Daens, que lutou pelos direitos dos trabalhadores na cidade de Aalst, no final do século XIX. O filme é uma obra-prima do cinema histórico e social, que mostra com realismo e profundidade as transformações econômicas, sociais e culturais da Bélgica e da Europa na época. O filme tem uma direção soberba, um roteiro inteligente, uma fotografia deslumbrante e uma atuação extraordinária de Jan Decleir no papel principal. Um filme que merece ser visto e admirado." - Jornal O Globo

          -
          - -
          -

          "Daens é um filme espetacular e emocionante, que narra a história real do padre belga Adolf Daens, que lutou pelos direitos dos trabalhadores na cidade de Aalst, no final do século XIX. O filme é uma obra-prima do cinema histórico e social, que mostra com realismo e profundidade as transformações econômicas, sociais e culturais da Bélgica e da Europa na época. O filme tem uma direção magnífica, um roteiro brilhante, uma fotografia maravilhosa e uma atuação excepcional de Jan Decleir no papel principal. Um filme que merece ser visto e reverenciado." - Revista Época

          -
          - -

          These are some of the reviews of the dubbed download of Daens Um Grito de Justiça that you can keep in mind when watching the film.

          -

          Conclusion

          - -

          Daens Um Grito de Justiça is an incredible film that tells the true story of the Belgian priest Adolf Daens, who fought for workers' rights in the city of Aalst at the end of the 19th century. The film was nominated for the Academy Award for Best Foreign Language Film and received several awards and critical praise.

          - -

          If you want to watch this moving and inspiring film but don't know how to download it or where to find the dubbed version, don't worry. In this article, we have covered everything you need to know about the dubbed download of Daens Um Grito de Justiça, including where to download it, how to download it, how to watch it, and much more.

          - -

          Now you can download and watch the dubbed version of Daens Um Grito de Justiça on your computer or on any other device that supports the file's format and quality. You will be amazed by this incredible film, which shows one man's fight for human dignity and social justice.

          - -

          If you liked this article or have any questions or suggestions about this topic or anything else related to audio-digital.net, please feel free to contact us at any time. We are always happy to hear from you and to help you in any way we can.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/PATCHEDAUTODATA451CrackFULL.md b/spaces/scedlatioru/img-to-music/example/PATCHEDAUTODATA451CrackFULL.md deleted file mode 100644 index be3eef6219c13ebe9399a96359924c5792a32c53..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/PATCHEDAUTODATA451CrackFULL.md +++ /dev/null @@ -1,10 +0,0 @@ -
          -

          PATCHEDAUTODATA451CrackFULL QuarkXPress 2020 15.1.3 Crack Latest Full Download Gemvision Countersketch Studio. I'm going to purchase a new video card. I was surprised by the price. Stay Connected. The best thing about these plans is that they automatically renew at the end of their initial term.

          -

          PATCHEDAUTODATA451CrackFULL


          Download File » https://gohhs.com/2uEzJj



          -

          PATCHEDAUTODATA451CrackFULL QuarkXPress 2020 15.1.3 Crack Latest Full Download Gemvision Countersketch Studio. This email address is being protected from spambots.
          Continue to shopping. At this time, the company had not yet released any information regarding the phone.

          -

          PATCHEDAUTODATA451CrackFULL QuarkXPress 2020 15.1.3 Crack Latest Full Download Gemvision Countersketch Studio. However, my will and a lot of my family was in Canada and they were finding it hard to figure out how to get back.
          This product will not work without the training. Customer Support.

          -

          Revista Sensacional De Traileros Pdf Download
          the crew crack only skidrow 114
          Korg M1 Serial Number
          tacx trainer software 4 0 crack added
          gpu shader 3.0 free download pes 2012
          Sugar Bytes Turnado 1.6.1 Crack FREE Download
          quickbooks pos 2013 beast 62
          PATCHEDAUTODATA451CrackFULL
          QuarkXPress 2020 15.1.3 Crack Latest Full Download
          Gemvision Countersketch Studio

          -

          -

          QuarkXpress 2020 15.1.3 Crack Latest Full Download
          the crew crack only skidrow 114
          Korg M1 Serial Number
          tacx trainer software 4 0 crack added
          gpu shader 3.0 free download pes 2012
          Sugar Bytes Turnado 1.6.1 Crack FREE Download
          quickbooks pos 2013 beast 62
          PATCHEDAUTODATA451CrackFULL
          QuarkXpress 2020 15.1.3 Crack Latest Full Download
          Gemvision Countersketch Studio

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Wicresetutilitycrackkeygen20.md b/spaces/scedlatioru/img-to-music/example/Wicresetutilitycrackkeygen20.md deleted file mode 100644 index 23d7c0e62c89b93c73e94f504e7c72b774383898..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Wicresetutilitycrackkeygen20.md +++ /dev/null @@ -1,12 +0,0 @@ -

          wicresetutilitycrackkeygen20


          Download Zip ✯✯✯ https://gohhs.com/2uEzrC



          -
          -INKCHIP Adjustment Program - Utility to reset the waste ink counters (WIC) for the Epson printer. · 3 KEY. $6.99 Per Key Save - $12 30% OFF ONLY $20.97 Buy Now. Epson Easy Photo Print software. -Epson Easy Photo Print is a simple program that helps you print your photos in full color. -Download Driver For Notebook Asus X552v Download on this page. -Download Epson Easy Photo Print. -Epson Easy Photo Print. -EPSON Easy Photo Print is a program that helps you customize and print your photos in full color. -You can download it from the EPSON software catalog. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/schogini/toys/app-i.py b/spaces/schogini/toys/app-i.py deleted file mode 100644 index 2a9ea2a52cdc5bf95b4b443669538295064c21ae..0000000000000000000000000000000000000000 --- a/spaces/schogini/toys/app-i.py +++ /dev/null @@ -1,412 +0,0 @@ -import gradio as gr -from datetime import datetime -import smtplib -import requests -import re -import time -#import examples -import os - -THANKS = """ - -# THANKS FOR YOUR QUERY - -### You have two more options to get AI assistance - -### 1) Use ChatGPT with the below prompt -#### === MicroPython Script Guidelines === -#### - Import: Use 'from schoginitoys import *' -#### - Joystick: Use Config.xValue, Config.yValue, Config.up, Config.down, Config.right, Config.left, Config.button_pressed -#### - Buzzer: Use beep() for 0.1s beep, beep(2) for 2s beep -#### - Display: Use show("text"), scroll("text"), display_bitmap(bitmap_data, col), display.set_pixel(col, row, value) -#### - Exit: Use Config.left or Config.button_pressed to exit -#### - Libraries: No need to import random, time, urandom, string; we handle it -#### - Output: Use show() for strings <= 4 chars, scroll() for longer strings -#### - Reset: No need to call display_reset(); it's handled in our library -#### - Formatting: Ensure all code is formatted -#### - Explanations: Include kid-friendly explanations below each code snippet -#### - Hyperlinks: Add a link to https://schoginitoys.com for more info -#### - LED Specs: 32 columns x 8 rows; use display.show() after display.set_pixel() -- LED Connections: GP12=Red, GP13=Yellow, GP14=Green; use these for LED-based projects -- Ask your question here. - -### 2) Register at https://schoginitoys.com/p01 to gain access to -### our AI Chatbot: Coding Assistance - -## Schogini Toys Tech Team! - -""" - -def post_it(q=''): - # Define the URL and parameters - if q=='': - return - url = os.environ.get('BLUEHOST_ENDPOINT')+"/index-cb.php" - params = { - "password": os.environ.get('BLUEHOST_API_PASS'), - "q": q - } - headers = { - 'Accept': 'application/json', - 'Content-Type': 'application/json', - 'User-Agent': 'my-app' - } - # print(url) - # Make the POST request - # response = requests.post(url, params=params) - response = requests.get(url, params=params, headers=headers) - - # Check if the request was successful - if response.status_code == 200: - #print("Successfully posted data.") - #print("Response:", response.text) - pass - else: - #print(response) - print(f"Failed to post data. 
Status code: {response.status_code}") - -def insert_newlines(text, max_line_length=80): - new_text = "" - for line in text.split('\n'): - while len(line) > max_line_length: - # Find the last space before the max_line_length - last_space = line[:max_line_length].rfind(' ') - if last_space == -1: # No space found, just break at max_line_length - last_space = max_line_length - new_text += line[:last_space] + '\n' - line = line[last_space:].strip() - new_text += line + '\n' - return new_text - -def get_menu_item(m=''): - url = os.environ.get('BLUEHOST_ENDPOINT')+"/index-e.php" - password_token = os.environ.get('BLUEHOST_API_PASS') - headers = { - 'Accept': 'application/json', - 'Content-Type': 'application/json', - 'User-Agent': 'my-app', - 'Cache-Control': 'no-cache, no-store, must-revalidate', - 'Pragma': 'no-cache', - 'Expires': '0', - } - params = { - 'password_token': password_token, - 'menu': m - } - # Make an HTTP GET request with the password token as a query parameter - response = requests.get(url, params=params, headers=headers) - # Check if the request was successful - if response.status_code == 200: - output_text = response.text - # Now, output_text contains the content fetched from the PHP script - #print(f"Received output:\n{output_text}") - return insert_newlines(output_text) - else: - #print(f"Failed to fetch data. Status code: {response.status_code}") - return "" - - -def get_buttons(): - url = os.environ.get('BLUEHOST_ENDPOINT')+"/index-e.php" - password_token = os.environ.get('BLUEHOST_API_PASS') - headers = { - 'Accept': 'application/json', - 'Content-Type': 'application/json', - 'User-Agent': 'my-app', - 'Cache-Control': 'no-cache, no-store, must-revalidate', - 'Pragma': 'no-cache', - 'Expires': '0', - } - # Make an HTTP GET request with the password token as a query parameter - response = requests.get(url, params={'password_token': password_token}, headers=headers) - # Check if the request was successful - list_values = [] - if response.status_code == 200: - # Extract the plain text from the response - - text_content = response.text - # print("RAW1") - # print(response) - # print("RAW2") - # print(text_content) - # Use regular expression to find all the list values, assuming they're always formatted as "Lxxx" - #list_values = re.findall(r'"L\d+"', text_content) - - # list_values = re.findall(r'"(L\d+|L.*)"', text_content) - list_values = re.findall(r'"(.*)"', text_content) - - - #list_values = re.findall(r'"L(?:\d+|-MENU)"', text_content)) - #list_values = re.findall(r'"L"', text_content) - #list_values = text_content - #print(list_values) - # Remove the quotes to get the actual values - list_values = [value.strip('"') for value in list_values] - #list_values = [value.strip("'") for value in list_values] - - #print(list_values) - # Print or use the list - #print("Extracted list values:", list_values) - - # else: - # print(f"Failed to fetch data. Status code: {response.status_code}") - # pass - return list_values - -def greet(name): - # return "Hello " + name + "!!" 
- - # Get the current date and time - #now = datetime.now() - - # Format the datetime object as a string in the format YYYYMMDD_HHMMSS - #timestamp_str = now.strftime("%Y%m%d_%H%M%S") - - # Create a unique filename by appending the timestamp to the base filename - #unique_filename = f"query_{timestamp_str}" - - #print(f"Unique Filename: {unique_filename}") - #In this example, unique_filename will contain a - #filename like myfile_20231001_123456.txt, where 20231001 represents the date (YYYYMMDD) - - - - - # # Open a file called 'example.txt' in write mode ('w') - # with open(unique_filename, 'w') as file: - # # Write the string "Hello, world!" to the file - # file.write(name) - - #send_email(name, unique_filename) - - - # f"Failed to post data. Status code: {response.status_code}" - - - - time.sleep(2) - - - - - - - - if name=='': - msg="Please click any button below or enter your query." - else: - menu_code = get_menu_item(name) - #print("menu_code: " + menu_code) - # if "not found" in menu_code: - if re.search(r"not found", menu_code): - tpl=THANKS #examples.PROJECT_TEMPLATE - post_it(name) # Save the query in bluehost - msg = tpl - #menu_code = tpl - return msg - if menu_code =='': - try: - tpl=eval("examples." + name) - # print("examples." + name) - msg = "```python\n" + tpl + "\n```\n" - except: - tpl=THANKS #PROJECT_TEMPLATE - post_it(name) # Save the query in bluehost - msg = tpl - else: - msg = menu_code - # msg = "```python\n"+menu_code+"\n```\n" - - - - # return "\n```python\n" + "\n\nimport schoginitoys\n\nprint(\"abcd\")\n" + "\n\n```" - return msg - - -# iface = gr.Interface(fn=greet, inputs="text", outputs="text") -# iface.launch() - - -import openai -import gradio as gr - - -messages = [{"role": "system", -"content": ''' -=== MicroPython Script Guidelines === - -- Import: Use 'from schoginitoys import *' -- Joystick: Use Config.xValue, Config.yValue, Config.up, Config.down, Config.right, Config.left, Config.button_pressed -- Buzzer: Use beep() for 0.1s beep, beep(2) for 2s beep -- Display: Use show("text"), scroll("text"), display_bitmap(bitmap_data, col), display.set_pixel(col, row, value) -- Exit: Use Config.left or Config.button_pressed to exit -- Libraries: No need to import random, time, urandom, string; we handle it -- Output: Use show() for strings <= 4 chars, scroll() for longer strings -- Reset: No need to call display_reset(); it's handled in our library -- Formatting: Ensure all code is formatted -- Explanations: Include kid-friendly explanations below each code snippet -- Hyperlinks: Add a link to https://schoginitoys.com for more info -- LED Specs: 32 columns x 8 rows; use display.show() after display.set_pixel() -- LED Connections: GP12=Red, GP13=Yellow, GP14=Green; use these for LED-based projects -''' -}] -# messages = [{"role": "system", -# "content": """You are MicroPython expert. -# Make sure that the python scripts you output contains import line from schoginitoys import *. -# All code sections should be code formatted. - -# Our DIY Kit uses Raspberry Pi PICO. - -# Our DIY kit has a joystick and the user inputs are mapped as below. - -# Config.xValue has the x value. -# Config.yValue has the y value. - -# Config.up will be True if user moves joystick fully up. -# Config.down will be True if user moves joystick fully down. -# Config.right will be True if user moves joystick fully right. -# Config.left will be True if user moves joystick fully left. - -# Config.button_pressed will be True if the user pushes the joystick middle button. 
- -# Our DIY kit has a buzzer with these functions. - -# beep() function will beep for 0.1 second. -# beep(2) function will beep for 2 seconds. - -# Our DIY kit has a 4 digit Max7219 display, but do not use any setup initializations, -# we have these fucntions. -# display_reset() will initialize and erases the display. -# show("good") will show good on the display. -# scroll("good morning") will scroll good morning on the display. -# In addtion you can use these functions. -# display_bitmap(bitmap_data,col) will show the bitmap at the col position. -# display.set_pixel(col, row, value) will set the col, row with value. - -# Normally when the user moves the joystick to left, Config.left becomes true and we use -# this to exit the script. Certain cases if your code needs the Config.left flag you may -# instead Config.button_pressed to exit the script. - -# Please note that you don't need to import these libraries we are doing it. - -# import random -# import time -# import urandom -# import string - -# Anytime you need to output a result using the print statement, please follow this condition. -# If the string is 4 characters or less use show(string), if the string is more than -# 4 characters use scroll(string), this is instead of print(string). - -# You don't need to use display_reset() as we are already doing it in schoginitoys library. -# Use display_reset() only when you want to explicitly fill the display with zeros. - -# Again, remember that never user the print() statement, use only show() or scroll() functions. -# You don't need to call display_reset() before show() or scroll() as we are calling that anyway -# in these functions. - - -# Please don't import random, we are doing it via schoginitoys import. - -# Again, please remember to code format the output scripts. - -# Please provide a kid friendly explanation below the code snippet to explain each and every -# this so that your responses are educative for kids learning python. - -# All responses should include a well formatted explanation outside on the code block -# with heading and subheadings. - -# All responses should include a hyperlink to https://schoginitoys.com saying for more info. - -# Display has 32 horizontal led columns and 8 led rows. - -# Remember to add display.show() after each display.set_pixel(). - -# Please these connection provided in the Kit. - -# Raspberry Pi PICO Port GP12 is connected to Red LED. -# Raspberry Pi PICO Port GP13 is connected to Yellow LED. -# Raspberry Pi PICO Port GP14 is connected to Green LED. - -# For LED signal based projects like traffic signal etc. use the above LEDs using the Raspberry Pi PICO Machine library PIN class -# instead of the matrix display. - -# """}] - -# display_row(value, row) will fill the whole row with value. -# display_col(value, col) will fill the whole col with value. 
- -def CustomChatGPT(user_input): - messages.append({"role": "user", "content": user_input}) - response = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", #"gpt-4", #https://platform.openai.com/docs/models/gpt-4 - messages = messages - ) - ChatGPT_reply = response["choices"][0]["message"]["content"] - messages.append({"role": "assistant", "content": ChatGPT_reply}) - return ChatGPT_reply - -# demo = gradio.Interface( -# fn=CustomChatGPT, -# inputs = "text", -# outputs = "markdown", -# title = "Schogini Toys: AI Chatbot Your Coding Companion!") - -gr.close_all() -# demo = gr.Interface(fn=summarize, -# inputs=[gr.Textbox(label="Text to summarize", lines=6)], -# outputs=[gr.Textbox(label="Result", lines=3)], -# title="Text summarization with distilbart-cnn", -# description="Summarize any text using the `shleifer/distilbart-cnn-12-6` model under the hood!" -# ) -# demo.launch(share=True, server_port=int(os.environ['PORT2'])) - -#css_code='body{background-image:url("https://picsum.photos/seed/picsum/200/300");}' - -def sree_auth(username='', password=''): - # print(os.environ.get('ABHI_PASS')) - # print(os.environ.get('SREE_PASS')) - - if username=='abhi' and password==os.environ.get('ABHI_PASS'): - return True - if username=='sree' and password==os.environ.get('SREE_PASS'): - return True - return False - -# print(os.environ.get('ABHI_PASS')) -# print(os.environ.get('SREE_PASS')) -examples_list = get_buttons() - -demo = gr.Interface( - # fn=CustomChatGPT, - fn=greet, - inputs = [gr.Textbox(label="Ask your questions!", lines=6)], - outputs = "markdown", - #outputs = [gr.Textbox(label="Result", lines=8)], # - title = "Schogini Toys - AI Chatbot V1.03", - description="Your Python Projects Coding Companion!", - allow_flagging="never", - examples = examples_list, - examples_per_page = 50, - # examples = gr.Examples( - # examples = examples_list, - # examples_per_page = 20, - # run_on_click = True, - # inputs = 0, - # #fn=mirror, - # #cache_examples=True, - # ), - # examples=[ - # "L201", - # "L202", - # "L203", - # "L204", - # ], - # theme=gr.themes.Soft(), - theme=gr.themes.Default(), - ) - -# demo.launch() -demo.launch(auth=sree_auth) -# demo.launch(share=True) - diff --git a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/inference/inference_core.py b/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/inference/inference_core.py deleted file mode 100644 index d7d5b24cf9aaed19656a194b1d6dfc0aaffe91c6..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/inference/inference_core.py +++ /dev/null @@ -1,381 +0,0 @@ -from typing import List, Optional, Iterable, Dict -import logging -from omegaconf import DictConfig - -import numpy as np -import torch -import torch.nn.functional as F - -from tracker.inference.memory_manager import MemoryManager -from tracker.inference.object_manager import ObjectManager -from tracker.inference.image_feature_store import ImageFeatureStore -from tracker.model.cutie import CUTIE -from tracker.utils.tensor_utils import pad_divide_by, unpad, aggregate - -log = logging.getLogger() - - -class InferenceCore: - def __init__(self, - network: CUTIE, - cfg: DictConfig, - *, - image_feature_store: ImageFeatureStore = None): - self.network = network - self.cfg = cfg - self.mem_every = cfg.mem_every - stagger_updates = cfg.stagger_updates - self.chunk_size = cfg.chunk_size - self.save_aux = cfg.save_aux - self.max_internal_size = cfg.max_internal_size - self.flip_aug = cfg.flip_aug - - self.curr_ti = -1 - 
self.last_mem_ti = 0 - # at which time indices should we update the sensory memory - if stagger_updates >= self.mem_every: - self.stagger_ti = set(range(1, self.mem_every + 1)) - else: - self.stagger_ti = set( - np.round(np.linspace(1, self.mem_every, stagger_updates)).astype(int)) - self.object_manager = ObjectManager() - self.memory = MemoryManager(cfg=cfg, object_manager=self.object_manager) - - if image_feature_store is None: - self.image_feature_store = ImageFeatureStore(self.network) - else: - self.image_feature_store = image_feature_store - - self.last_mask = None - - def clear_memory(self): - self.curr_ti = -1 - self.last_mem_ti = 0 - self.memory = MemoryManager(cfg=self.cfg, object_manager=self.object_manager) - - def clear_non_permanent_memory(self): - self.curr_ti = -1 - self.last_mem_ti = 0 - self.memory.clear_non_permanent_memory() - - def clear_sensory_memory(self): - self.curr_ti = -1 - self.last_mem_ti = 0 - self.memory.clear_sensory_memory() - - def update_config(self, cfg): - self.mem_every = cfg['mem_every'] - self.memory.update_config(cfg) - - def _add_memory(self, - image: torch.Tensor, - pix_feat: torch.Tensor, - prob: torch.Tensor, - key: torch.Tensor, - shrinkage: torch.Tensor, - selection: torch.Tensor, - *, - is_deep_update: bool = True, - force_permanent: bool = False) -> None: - """ - Memorize the given segmentation in all memory stores. - - The batch dimension is 1 if flip augmentation is not used. - image: RGB image, (1/2)*3*H*W - pix_feat: from the key encoder, (1/2)*_*H*W - prob: (1/2)*num_objects*H*W, in [0, 1] - key/shrinkage/selection: for anisotropic l2, (1/2)*_*H*W - selection can be None if not using long-term memory - is_deep_update: whether to use deep update (e.g. with the mask encoder) - force_permanent: whether to force the memory to be permanent - """ - if prob.shape[1] == 0: - # nothing to add - log.warn('Trying to add an empty object mask to memory!') - return - - if force_permanent: - as_permanent = 'all' - else: - as_permanent = 'first' - - self.memory.initialize_sensory_if_needed(key, self.object_manager.all_obj_ids) - msk_value, sensory, obj_value, self.obj_logits = self.network.encode_mask( - image, - pix_feat, - self.memory.get_sensory(self.object_manager.all_obj_ids), - prob, - deep_update=is_deep_update, - chunk_size=self.chunk_size, - need_weights=self.save_aux) - self.memory.add_memory(key, - shrinkage, - msk_value, - obj_value, - self.object_manager.all_obj_ids, - selection=selection, - as_permanent=as_permanent) - self.last_mem_ti = self.curr_ti - if is_deep_update: - self.memory.update_sensory(sensory, self.object_manager.all_obj_ids) - - def _segment(self, - key: torch.Tensor, - selection: torch.Tensor, - pix_feat: torch.Tensor, - ms_features: Iterable[torch.Tensor], - update_sensory: bool = True) -> torch.Tensor: - """ - Produce a segmentation using the given features and the memory - - The batch dimension is 1 if flip augmentation is not used. 
- key/selection: for anisotropic l2: (1/2) * _ * H * W - pix_feat: from the key encoder, (1/2) * _ * H * W - ms_features: an iterable of multiscale features from the encoder, each is (1/2)*_*H*W - with strides 16, 8, and 4 respectively - update_sensory: whether to update the sensory memory - - Returns: (num_objects+1)*H*W normalized probability; the first channel is the background - """ - bs = key.shape[0] - if self.flip_aug: - assert bs == 2 - else: - assert bs == 1 - - if not self.memory.engaged: - log.warn('Trying to segment without any memory!') - return torch.zeros((1, key.shape[-2] * 16, key.shape[-1] * 16), - device=key.device, - dtype=key.dtype) - - memory_readout = self.memory.read(pix_feat, key, selection, self.last_mask, self.network) - memory_readout = self.object_manager.realize_dict(memory_readout) - sensory, _, pred_prob_with_bg = self.network.segment(ms_features, - memory_readout, - self.memory.get_sensory( - self.object_manager.all_obj_ids), - chunk_size=self.chunk_size, - update_sensory=update_sensory) - # remove batch dim - if self.flip_aug: - # average predictions of the non-flipped and flipped version - pred_prob_with_bg = (pred_prob_with_bg[0] + - torch.flip(pred_prob_with_bg[1], dims=[-1])) / 2 - else: - pred_prob_with_bg = pred_prob_with_bg[0] - if update_sensory: - self.memory.update_sensory(sensory, self.object_manager.all_obj_ids) - return pred_prob_with_bg - - def step(self, - image: torch.Tensor, - mask: Optional[torch.Tensor] = None, - objects: Optional[List[int]] = None, - *, - idx_mask: bool = True, - end: bool = False, - delete_buffer: bool = True, - force_permanent: bool = False) -> torch.Tensor: - """ - Take a step with a new incoming image. - If there is an incoming mask with new objects, we will memorize them. - If there is no incoming mask, we will segment the image using the memory. - In both cases, we will update the memory and return a segmentation. - - image: 3*H*W - mask: H*W (if idx mask) or len(objects)*H*W or None - objects: list of object ids that are valid in the mask Tensor. - The ids themselves do not need to be consecutive/in order, but they need to be - in the same position in the list as the corresponding mask - in the tensor in non-idx-mask mode. - objects is ignored if the mask is None. - If idx_mask is False and objects is None, we sequentially infer the object ids. - idx_mask: if True, mask is expected to contain an object id at every pixel. - If False, mask should have multiple channels with each channel representing one object. 
- end: if we are at the end of the sequence, we do not need to update memory - if unsure just set it to False - delete_buffer: whether to delete the image feature buffer after this step - force_permanent: the memory recorded this frame will be added to the permanent memory - """ - if objects is None and mask is not None: - assert not idx_mask - objects = list(range(1, mask.shape[0] + 1)) - - # resize input if needed -- currently only used for the GUI - resize_needed = False - if self.max_internal_size > 0: - h, w = image.shape[-2:] - min_side = min(h, w) - if min_side > self.max_internal_size: - resize_needed = True - new_h = int(h / min_side * self.max_internal_size) - new_w = int(w / min_side * self.max_internal_size) - image = F.interpolate(image.unsqueeze(0), - size=(new_h, new_w), - mode='bilinear', - align_corners=False)[0] - if mask is not None: - if idx_mask: - mask = F.interpolate(mask.unsqueeze(0).unsqueeze(0).float(), - size=(new_h, new_w), - mode='nearest', - align_corners=False)[0, 0].round().long() - else: - mask = F.interpolate(mask.unsqueeze(0), - size=(new_h, new_w), - mode='bilinear', - align_corners=False)[0] - - self.curr_ti += 1 - - image, self.pad = pad_divide_by(image, 16) - image = image.unsqueeze(0) # add the batch dimension - if self.flip_aug: - image = torch.cat([image, torch.flip(image, dims=[-1])], dim=0) - - # whether to update the working memory - is_mem_frame = ((self.curr_ti - self.last_mem_ti >= self.mem_every) or - (mask is not None)) and (not end) - # segment when there is no input mask or when the input mask is incomplete - need_segment = (mask is None) or (self.object_manager.num_obj > 0 - and not self.object_manager.has_all(objects)) - update_sensory = ((self.curr_ti - self.last_mem_ti) in self.stagger_ti) and (not end) - - # encoding the image - ms_feat, pix_feat = self.image_feature_store.get_features(self.curr_ti, image) - key, shrinkage, selection = self.image_feature_store.get_key(self.curr_ti, image) - - # segmentation from memory if needed - if need_segment: - pred_prob_with_bg = self._segment(key, - selection, - pix_feat, - ms_feat, - update_sensory=update_sensory) - - # use the input mask if provided - if mask is not None: - # inform the manager of the new objects, and get a list of temporary id - # temporary ids -- indicates the position of objects in the tensor - # (starts with 1 due to the background channel) - corresponding_tmp_ids, _ = self.object_manager.add_new_objects(objects) - - mask, _ = pad_divide_by(mask, 16) - if need_segment: - # merge predicted mask with the incomplete input mask - pred_prob_no_bg = pred_prob_with_bg[1:] - # use the mutual exclusivity of segmentation - if idx_mask: - pred_prob_no_bg[:, mask > 0] = 0 - else: - pred_prob_no_bg[:, mask.max(0) > 0.5] = 0 - - new_masks = [] - for mask_id, tmp_id in enumerate(corresponding_tmp_ids): - if idx_mask: - this_mask = (mask == objects[mask_id]).type_as(pred_prob_no_bg) - else: - this_mask = mask[tmp_id] - if tmp_id >= pred_prob_no_bg.shape[0]: - new_masks.append(this_mask.unsqueeze(0)) - else: - # +1 for padding the background channel - pred_prob_no_bg[tmp_id + 1] = this_mask - # new_masks are always in the order of tmp_id - mask = torch.cat([pred_prob_no_bg, *new_masks], dim=0) - elif idx_mask: - # simply convert cls to one-hot representation - if len(objects) == 0: - if delete_buffer: - self.image_feature_store.delete(self.curr_ti) - log.warn('Trying to insert an empty mask as memory!') - return torch.zeros((1, key.shape[-2] * 16, key.shape[-1] * 16), - 
device=key.device, - dtype=key.dtype) - mask = torch.stack( - [mask == objects[mask_id] for mask_id, _ in enumerate(corresponding_tmp_ids)], - dim=0) - pred_prob_with_bg = aggregate(mask, dim=0) - pred_prob_with_bg = torch.softmax(pred_prob_with_bg, dim=0) - - self.last_mask = pred_prob_with_bg[1:].unsqueeze(0) - if self.flip_aug: - self.last_mask = torch.cat( - [self.last_mask, torch.flip(self.last_mask, dims=[-1])], dim=0) - - # save as memory if needed - if is_mem_frame or force_permanent: - self._add_memory(image, - pix_feat, - self.last_mask, - key, - shrinkage, - selection, - force_permanent=force_permanent) - - if delete_buffer: - self.image_feature_store.delete(self.curr_ti) - - output_prob = unpad(pred_prob_with_bg, self.pad) - if resize_needed: - # restore output to the original size - output_prob = F.interpolate(output_prob.unsqueeze(0), - size=(h, w), - mode='bilinear', - align_corners=False)[0] - - return output_prob - - def get_aux_outputs(self, image: torch.Tensor) -> Dict[str, torch.Tensor]: - image, pads = pad_divide_by(image, 16) - image = image.unsqueeze(0) # add the batch dimension - _, pix_feat = self.image_feature_store.get_features(self.curr_ti, image) - - aux_inputs = self.memory.aux - aux_outputs = self.network.compute_aux(pix_feat, aux_inputs, selector=None) - aux_outputs['q_weights'] = aux_inputs['q_weights'] - aux_outputs['p_weights'] = aux_inputs['p_weights'] - - for k, v in aux_outputs.items(): - if len(v.shape) == 5: - aux_outputs[k] = F.interpolate(v[0], - size=image.shape[-2:], - mode='bilinear', - align_corners=False) - elif 'weights' in k: - b, num_objects, num_heads, num_queries, h, w = v.shape - v = v.view(num_objects * num_heads, num_queries, h, w) - v = F.interpolate(v, size=image.shape[-2:], mode='bilinear', align_corners=False) - aux_outputs[k] = v.view(num_objects, num_heads, num_queries, *image.shape[-2:]) - else: - aux_outputs[k] = F.interpolate(v, - size=image.shape[-2:], - mode='bilinear', - align_corners=False)[0] - aux_outputs[k] = unpad(aux_outputs[k], pads) - if 'weights' in k: - weights = aux_outputs[k] - weights = weights / (weights.max(-1, keepdim=True)[0].max(-2, keepdim=True)[0] + - 1e-8) - aux_outputs[k] = (weights * 255).cpu().numpy() - else: - aux_outputs[k] = (aux_outputs[k].softmax(dim=0) * 255).cpu().numpy() - - self.image_feature_store.delete(self.curr_ti) - return aux_outputs - - def get_aux_object_weights(self, image: torch.Tensor) -> np.ndarray: - image, pads = pad_divide_by(image, 16) - # B*num_objects*H*W*num_queries -> num_objects*num_queries*H*W - # weights = F.softmax(self.obj_logits, dim=-1)[0] - weights = F.sigmoid(self.obj_logits)[0] - weights = weights.permute(0, 3, 1, 2).contiguous() - weights = F.interpolate(weights, - size=image.shape[-2:], - mode='bilinear', - align_corners=False) - # weights = weights / (weights.max(-1, keepdim=True)[0].max(-2, keepdim=True)[0]) - weights = unpad(weights, pads) - weights = (weights * 255).cpu().numpy() - return weights diff --git a/spaces/seawolf2357/sd-prompt-gen/README.md b/spaces/seawolf2357/sd-prompt-gen/README.md deleted file mode 100644 index 9cb3a8b4b866c6ff87178ef3a85be419b823b5f1..0000000000000000000000000000000000000000 --- a/spaces/seawolf2357/sd-prompt-gen/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: MagicPrompt Stable Diffusion -emoji: 😻😻 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -license: mit -duplicated_from: doevent/Stable-Diffusion-prompt-generator ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/segments-tobias/conex/espnet2/asr/encoder/vgg_rnn_encoder.py b/spaces/segments-tobias/conex/espnet2/asr/encoder/vgg_rnn_encoder.py deleted file mode 100644 index 8c36c8cf4f2bef3cc1db7f9c0cba3ea9ad902024..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/asr/encoder/vgg_rnn_encoder.py +++ /dev/null @@ -1,106 +0,0 @@ -from typing import Tuple - -import numpy as np -import torch -from typeguard import check_argument_types - -from espnet.nets.e2e_asr_common import get_vgg2l_odim -from espnet.nets.pytorch_backend.nets_utils import make_pad_mask -from espnet.nets.pytorch_backend.rnn.encoders import RNN -from espnet.nets.pytorch_backend.rnn.encoders import RNNP -from espnet.nets.pytorch_backend.rnn.encoders import VGG2L -from espnet2.asr.encoder.abs_encoder import AbsEncoder - - -class VGGRNNEncoder(AbsEncoder): - """VGGRNNEncoder class. - - Args: - input_size: The number of expected features in the input - bidirectional: If ``True`` becomes a bidirectional LSTM - use_projection: Use projection layer or not - num_layers: Number of recurrent layers - hidden_size: The number of hidden features - output_size: The number of output features - dropout: dropout probability - - """ - - def __init__( - self, - input_size: int, - rnn_type: str = "lstm", - bidirectional: bool = True, - use_projection: bool = True, - num_layers: int = 4, - hidden_size: int = 320, - output_size: int = 320, - dropout: float = 0.0, - in_channel: int = 1, - ): - assert check_argument_types() - super().__init__() - self._output_size = output_size - self.rnn_type = rnn_type - self.bidirectional = bidirectional - self.use_projection = use_projection - if rnn_type not in {"lstm", "gru"}: - raise ValueError(f"Not supported rnn_type={rnn_type}") - - # Subsample is not used for VGGRNN - subsample = np.ones(num_layers + 1, dtype=np.int) - rnn_type = ("b" if bidirectional else "") + rnn_type - if use_projection: - self.enc = torch.nn.ModuleList( - [ - VGG2L(in_channel), - RNNP( - get_vgg2l_odim(input_size, in_channel=in_channel), - num_layers, - hidden_size, - output_size, - subsample, - dropout, - typ=rnn_type, - ), - ] - ) - - else: - self.enc = torch.nn.ModuleList( - [ - VGG2L(in_channel), - RNN( - get_vgg2l_odim(input_size, in_channel=in_channel), - num_layers, - hidden_size, - output_size, - dropout, - typ=rnn_type, - ), - ] - ) - - def output_size(self) -> int: - return self._output_size - - def forward( - self, - xs_pad: torch.Tensor, - ilens: torch.Tensor, - prev_states: torch.Tensor = None, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - if prev_states is None: - prev_states = [None] * len(self.enc) - assert len(prev_states) == len(self.enc) - - current_states = [] - for module, prev_state in zip(self.enc, prev_states): - xs_pad, ilens, states = module(xs_pad, ilens, prev_state=prev_state) - current_states.append(states) - - if self.use_projection: - xs_pad.masked_fill_(make_pad_mask(ilens, xs_pad, 1), 0.0) - else: - xs_pad = xs_pad.masked_fill(make_pad_mask(ilens, xs_pad, 1), 0.0) - return xs_pad, ilens, current_states diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/visualizer.py b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/visualizer.py deleted file mode 100644 index 7a1b7b101e9b73f75f9136bc67f2063c7c1cf1c1..0000000000000000000000000000000000000000 --- 
a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/visualizer.py +++ /dev/null @@ -1,318 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@File : visualizer.py -@Time : 2022/04/05 11:39:33 -@Author : Shilong Liu -@Contact : slongliu86@gmail.com -""" - -import datetime -import os - -import cv2 -import matplotlib.pyplot as plt -import numpy as np -import torch -from matplotlib import transforms -from matplotlib.collections import PatchCollection -from matplotlib.patches import Polygon -from pycocotools import mask as maskUtils - - -def renorm( - img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] -) -> torch.FloatTensor: - # img: tensor(3,H,W) or tensor(B,3,H,W) - # return: same as img - assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % ( - img.size(0), - str(img.size()), - ) - img_perm = img.permute(1, 2, 0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2, 0, 1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". (%s)' % ( - img.size(1), - str(img.size()), - ) - img_perm = img.permute(0, 2, 3, 1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0, 3, 1, 2) - - -class ColorMap: - def __init__(self, basergb=[255, 255, 0]): - self.basergb = np.array(basergb) - - def __call__(self, attnmap): - # attnmap: h, w. np.uint8. - # return: h, w, 4. np.uint8. - assert attnmap.dtype == np.uint8 - h, w = attnmap.shape - res = self.basergb.copy() - res = res[None][None].repeat(h, 0).repeat(w, 1) # h, w, 3 - attn1 = attnmap.copy()[..., None] # h, w, 1 - res = np.concatenate((res, attn1), axis=-1).astype(np.uint8) - return res - - -def rainbow_text(x, y, ls, lc, **kw): - """ - Take a list of strings ``ls`` and colors ``lc`` and place them next to each - other, with text ls[i] being shown in color lc[i]. - - This example shows how to do both vertical and horizontal text, and will - pass all keyword arguments to plt.text, so you can set the font size, - family, etc. - """ - t = plt.gca().transData - fig = plt.gcf() - plt.show() - - # horizontal version - for s, c in zip(ls, lc): - text = plt.text(x, y, " " + s + " ", color=c, transform=t, **kw) - text.draw(fig.canvas.get_renderer()) - ex = text.get_window_extent() - t = transforms.offset_copy(text._transform, x=ex.width, units="dots") - - # #vertical version - # for s,c in zip(ls,lc): - # text = plt.text(x,y," "+s+" ",color=c, transform=t, - # rotation=90,va='bottom',ha='center',**kw) - # text.draw(fig.canvas.get_renderer()) - # ex = text.get_window_extent() - # t = transforms.offset_copy(text._transform, y=ex.height, units='dots') - - -class COCOVisualizer: - def __init__(self, coco=None, tokenlizer=None) -> None: - self.coco = coco - - def visualize(self, img, tgt, caption=None, dpi=180, savedir="vis"): - """ - img: tensor(3, H, W) - tgt: make sure they are all on cpu. 
- must have items: 'image_id', 'boxes', 'size' - """ - plt.figure(dpi=dpi) - plt.rcParams["font.size"] = "5" - ax = plt.gca() - img = renorm(img).permute(1, 2, 0) - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - ax.imshow(img) - - self.addtgt(tgt) - - if tgt is None: - image_id = 0 - elif "image_id" not in tgt: - image_id = 0 - else: - image_id = tgt["image_id"] - - if caption is None: - savename = "{}/{}-{}.png".format( - savedir, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - else: - savename = "{}/{}-{}-{}.png".format( - savedir, caption, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - print("savename: {}".format(savename)) - os.makedirs(os.path.dirname(savename), exist_ok=True) - plt.savefig(savename) - plt.close() - - def addtgt(self, tgt): - """ """ - if tgt is None or not "boxes" in tgt: - ax = plt.gca() - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - - ax.set_axis_off() - return - - ax = plt.gca() - H, W = tgt["size"] - numbox = tgt["boxes"].shape[0] - - color = [] - polygons = [] - boxes = [] - for box in tgt["boxes"].cpu(): - unnormbbox = box * torch.Tensor([W, H, W, H]) - unnormbbox[:2] -= unnormbbox[2:] / 2 - [bbox_x, bbox_y, bbox_w, bbox_h] = unnormbbox.tolist() - boxes.append([bbox_x, bbox_y, bbox_w, bbox_h]) - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - color.append(c) - - p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.1) - ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - - if "strings_positive" in tgt and len(tgt["strings_positive"]) > 0: - assert ( - len(tgt["strings_positive"]) == numbox - ), f"{len(tgt['strings_positive'])} = {numbox}, " - for idx, strlist in enumerate(tgt["strings_positive"]): - cate_id = int(tgt["labels"][idx]) - _string = str(cate_id) + ":" + " ".join(strlist) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "box_label" in tgt: - assert len(tgt["box_label"]) == numbox, f"{len(tgt['box_label'])} = {numbox}, " - for idx, bl in enumerate(tgt["box_label"]): - _string = str(bl) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - # plt.figure() - # rainbow_text(0.0,0.0,"all unicorns poop rainbows ! ! 
!".split(), - # ['red', 'orange', 'brown', 'green', 'blue', 'purple', 'black']) - - if "attn" in tgt: - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - if isinstance(tgt["attn"], tuple): - tgt["attn"] = [tgt["attn"]] - for item in tgt["attn"]: - attn_map, basergb = item - attn_map = (attn_map - attn_map.min()) / (attn_map.max() - attn_map.min() + 1e-3) - attn_map = (attn_map * 255).astype(np.uint8) - cm = ColorMap(basergb) - heatmap = cm(attn_map) - ax.imshow(heatmap) - ax.set_axis_off() - - def showAnns(self, anns, draw_bbox=False): - """ - Display the specified annotations. - :param anns (array of object): annotations to display - :return: None - """ - if len(anns) == 0: - return 0 - if "segmentation" in anns[0] or "keypoints" in anns[0]: - datasetType = "instances" - elif "caption" in anns[0]: - datasetType = "captions" - else: - raise Exception("datasetType not supported") - if datasetType == "instances": - ax = plt.gca() - ax.set_autoscale_on(False) - polygons = [] - color = [] - for ann in anns: - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - if "segmentation" in ann: - if type(ann["segmentation"]) == list: - # polygon - for seg in ann["segmentation"]: - poly = np.array(seg).reshape((int(len(seg) / 2), 2)) - polygons.append(Polygon(poly)) - color.append(c) - else: - # mask - t = self.imgs[ann["image_id"]] - if type(ann["segmentation"]["counts"]) == list: - rle = maskUtils.frPyObjects( - [ann["segmentation"]], t["height"], t["width"] - ) - else: - rle = [ann["segmentation"]] - m = maskUtils.decode(rle) - img = np.ones((m.shape[0], m.shape[1], 3)) - if ann["iscrowd"] == 1: - color_mask = np.array([2.0, 166.0, 101.0]) / 255 - if ann["iscrowd"] == 0: - color_mask = np.random.random((1, 3)).tolist()[0] - for i in range(3): - img[:, :, i] = color_mask[i] - ax.imshow(np.dstack((img, m * 0.5))) - if "keypoints" in ann and type(ann["keypoints"]) == list: - # turn skeleton into zero-based index - sks = np.array(self.loadCats(ann["category_id"])[0]["skeleton"]) - 1 - kp = np.array(ann["keypoints"]) - x = kp[0::3] - y = kp[1::3] - v = kp[2::3] - for sk in sks: - if np.all(v[sk] > 0): - plt.plot(x[sk], y[sk], linewidth=3, color=c) - plt.plot( - x[v > 0], - y[v > 0], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor="k", - markeredgewidth=2, - ) - plt.plot( - x[v > 1], - y[v > 1], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor=c, - markeredgewidth=2, - ) - - if draw_bbox: - [bbox_x, bbox_y, bbox_w, bbox_h] = ann["bbox"] - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - color.append(c) - - # p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.4) - # ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - elif datasetType == "captions": - for ann in anns: - print(ann["caption"]) diff --git a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/test/__init__.py b/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/test/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sidphbot/Researcher/arxiv_public_data/internal_citations.py b/spaces/sidphbot/Researcher/arxiv_public_data/internal_citations.py deleted file mode 100644 index 
3ab715a7adf5f5031b3ee924779ec60dfe8ef2a4..0000000000000000000000000000000000000000 --- a/spaces/sidphbot/Researcher/arxiv_public_data/internal_citations.py +++ /dev/null @@ -1,128 +0,0 @@ -#! /usr/bin/env python -import time -import re -import sys -import glob -import os -import gzip -import json -import math -from multiprocessing import Pool,cpu_count - -from arxiv_public_data.regex_arxiv import REGEX_ARXIV_FLEXIBLE, clean -from arxiv_public_data.config import DIR_FULLTEXT, DIR_OUTPUT, LOGGER - -log = LOGGER.getChild('fulltext') -RE_FLEX = re.compile(REGEX_ARXIV_FLEXIBLE) -RE_OLDNAME_SPLIT = re.compile(r"([a-z\-]+)(\d+)") - - -def path_to_id(path): - """ Convert filepath name of ArXiv file to ArXiv ID """ - name = os.path.splitext(os.path.basename(path))[0] - if '.' in name: # new ID - return name - split = [a for a in RE_OLDNAME_SPLIT.split(name) if a] - return "/".join(split) - - -def all_articles(directory=DIR_FULLTEXT): - """ Find all *.txt files in directory """ - out = [] - # make sure the path is absolute for os.walk - directory = os.path.abspath(os.path.expanduser(directory)) - - for root, dirs, files in os.walk(directory): - for f in files: - if 'txt' in f: - out.append(os.path.join(root, f)) - - return out - -def extract_references(filename, pattern=RE_FLEX): - """ - Parameters - ---------- - filename : str - name of file to search for pattern - pattern : re pattern object - compiled regex pattern - - Returns - ------- - citations : list - list of found arXiv IDs - """ - out = [] - with open(filename, 'r') as fn: - txt = fn.read() - - for matches in pattern.findall(txt): - out.extend([clean(a) for a in matches if a]) - return list(set(out)) - -def citation_list_inner(articles): - """ Find references in all the input articles - Parameters - ---------- - articles : list of str - list of paths to article text - Returns - ------- - citations : dict[arXiv ID] = list of arXiv IDs - dictionary of articles and their references - """ - cites = {} - for i, article in enumerate(articles): - if i > 0 and i % 1000 == 0: - log.info('Completed {} articles'.format(i)) - try: - refs = extract_references(article) - cites[path_to_id(article)] = refs - except: - log.error("Error in {}".format(article)) - continue - return cites - - -def citation_list_parallel(N=cpu_count(), directory=DIR_FULLTEXT): - """ - Split the task of checking for citations across some number of processes - Parameters - ---------- - N : int - number of processes - directory: str - directory where full text files are stored - Returns - ------- - citations : dict[arXiv ID] = list of arXiv IDs - all arXiv citations in all articles - """ - articles = all_articles(directory) - log.info('Calculating citation network for {} articles'.format(len(articles))) - - pool = Pool(N) - - A = len(articles) - divs = list(range(0, A, math.ceil(A/N))) + [A] - chunks = [articles[s:e] for s, e in zip(divs[:-1], divs[1:])] - - cites = pool.map(citation_list_inner, chunks) - - allcites = {} - for c in cites: - allcites.update(c) - return allcites - - -def default_filename(): - return os.path.join(DIR_OUTPUT, 'internal-citations.json.gz') - - -def save_to_default_location(citations): - filename = default_filename() - - log.info('Saving to "{}"'.format(filename)) - with gzip.open(filename, 'wb') as fn: - fn.write(json.dumps(citations).encode('utf-8')) diff --git a/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/helper_scripts/make_bias_AA.py b/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/helper_scripts/make_bias_AA.py 
deleted file mode 100644 index b32fb6b69dd4754d2ebef4c3baf5d81b6573d5a2..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/helper_scripts/make_bias_AA.py +++ /dev/null @@ -1,27 +0,0 @@ -import argparse - -def main(args): - - import numpy as np - import json - - bias_list = [float(item) for item in args.bias_list.split()] - AA_list = [str(item) for item in args.AA_list.split()] - - my_dict = dict(zip(AA_list, bias_list)) - - with open(args.output_path, 'w') as f: - f.write(json.dumps(my_dict) + '\n') - - -if __name__ == "__main__": - argparser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - argparser.add_argument("--output_path", type=str, help="Path to the output dictionary") - argparser.add_argument("--AA_list", type=str, default='', help="List of AAs to be biased") - argparser.add_argument("--bias_list", type=str, default='', help="AA bias strengths") - - args = argparser.parse_args() - main(args) - -#e.g. output -#{"A": -0.01, "G": 0.02} diff --git a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/data/pipeline.py b/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/data/pipeline.py deleted file mode 100644 index 461bce875ab6f9cad4e2b0897c44a6cf1ef399ae..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/af_backprop/alphafold/data/pipeline.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Functions for building the input features for the AlphaFold model.""" - -import os -from typing import Mapping, Optional, Sequence -from absl import logging -from alphafold.common import residue_constants -from alphafold.data import parsers -from alphafold.data import templates -from alphafold.data.tools import hhblits -from alphafold.data.tools import hhsearch -from alphafold.data.tools import jackhmmer -import numpy as np - -# Internal import (7716). 
- -FeatureDict = Mapping[str, np.ndarray] - - -def make_sequence_features( - sequence: str, description: str, num_res: int) -> FeatureDict: - """Constructs a feature dict of sequence features.""" - features = {} - features['aatype'] = residue_constants.sequence_to_onehot( - sequence=sequence, - mapping=residue_constants.restype_order_with_x, - map_unknown_to_x=True) - features['between_segment_residues'] = np.zeros((num_res,), dtype=np.int32) - features['domain_name'] = np.array([description.encode('utf-8')], - dtype=np.object_) - features['residue_index'] = np.array(range(num_res), dtype=np.int32) - features['seq_length'] = np.array([num_res] * num_res, dtype=np.int32) - features['sequence'] = np.array([sequence.encode('utf-8')], dtype=np.object_) - return features - - -def make_msa_features( - msas: Sequence[Sequence[str]], - deletion_matrices: Sequence[parsers.DeletionMatrix]) -> FeatureDict: - """Constructs a feature dict of MSA features.""" - if not msas: - raise ValueError('At least one MSA must be provided.') - - int_msa = [] - deletion_matrix = [] - seen_sequences = set() - for msa_index, msa in enumerate(msas): - if not msa: - raise ValueError(f'MSA {msa_index} must contain at least one sequence.') - for sequence_index, sequence in enumerate(msa): - if sequence in seen_sequences: - continue - seen_sequences.add(sequence) - int_msa.append( - [residue_constants.HHBLITS_AA_TO_ID[res] for res in sequence]) - deletion_matrix.append(deletion_matrices[msa_index][sequence_index]) - - num_res = len(msas[0][0]) - num_alignments = len(int_msa) - features = {} - features['deletion_matrix_int'] = np.array(deletion_matrix, dtype=np.int32) - features['msa'] = np.array(int_msa, dtype=np.int32) - features['num_alignments'] = np.array( - [num_alignments] * num_res, dtype=np.int32) - return features - - -class DataPipeline: - """Runs the alignment tools and assembles the input features.""" - - def __init__(self, - jackhmmer_binary_path: str, - hhblits_binary_path: str, - hhsearch_binary_path: str, - uniref90_database_path: str, - mgnify_database_path: str, - bfd_database_path: Optional[str], - uniclust30_database_path: Optional[str], - small_bfd_database_path: Optional[str], - pdb70_database_path: str, - template_featurizer: templates.TemplateHitFeaturizer, - use_small_bfd: bool, - mgnify_max_hits: int = 501, - uniref_max_hits: int = 10000): - """Constructs a feature dict for a given FASTA file.""" - self._use_small_bfd = use_small_bfd - self.jackhmmer_uniref90_runner = jackhmmer.Jackhmmer( - binary_path=jackhmmer_binary_path, - database_path=uniref90_database_path) - if use_small_bfd: - self.jackhmmer_small_bfd_runner = jackhmmer.Jackhmmer( - binary_path=jackhmmer_binary_path, - database_path=small_bfd_database_path) - else: - self.hhblits_bfd_uniclust_runner = hhblits.HHBlits( - binary_path=hhblits_binary_path, - databases=[bfd_database_path, uniclust30_database_path]) - self.jackhmmer_mgnify_runner = jackhmmer.Jackhmmer( - binary_path=jackhmmer_binary_path, - database_path=mgnify_database_path) - self.hhsearch_pdb70_runner = hhsearch.HHSearch( - binary_path=hhsearch_binary_path, - databases=[pdb70_database_path]) - self.template_featurizer = template_featurizer - self.mgnify_max_hits = mgnify_max_hits - self.uniref_max_hits = uniref_max_hits - - def process(self, input_fasta_path: str, msa_output_dir: str) -> FeatureDict: - """Runs alignment tools on the input sequence and creates features.""" - with open(input_fasta_path) as f: - input_fasta_str = f.read() - input_seqs, input_descs = 
parsers.parse_fasta(input_fasta_str) - if len(input_seqs) != 1: - raise ValueError( - f'More than one input sequence found in {input_fasta_path}.') - input_sequence = input_seqs[0] - input_description = input_descs[0] - num_res = len(input_sequence) - - jackhmmer_uniref90_result = self.jackhmmer_uniref90_runner.query( - input_fasta_path)[0] - jackhmmer_mgnify_result = self.jackhmmer_mgnify_runner.query( - input_fasta_path)[0] - - uniref90_msa_as_a3m = parsers.convert_stockholm_to_a3m( - jackhmmer_uniref90_result['sto'], max_sequences=self.uniref_max_hits) - hhsearch_result = self.hhsearch_pdb70_runner.query(uniref90_msa_as_a3m) - - uniref90_out_path = os.path.join(msa_output_dir, 'uniref90_hits.sto') - with open(uniref90_out_path, 'w') as f: - f.write(jackhmmer_uniref90_result['sto']) - - mgnify_out_path = os.path.join(msa_output_dir, 'mgnify_hits.sto') - with open(mgnify_out_path, 'w') as f: - f.write(jackhmmer_mgnify_result['sto']) - - pdb70_out_path = os.path.join(msa_output_dir, 'pdb70_hits.hhr') - with open(pdb70_out_path, 'w') as f: - f.write(hhsearch_result) - - uniref90_msa, uniref90_deletion_matrix, _ = parsers.parse_stockholm( - jackhmmer_uniref90_result['sto']) - mgnify_msa, mgnify_deletion_matrix, _ = parsers.parse_stockholm( - jackhmmer_mgnify_result['sto']) - hhsearch_hits = parsers.parse_hhr(hhsearch_result) - mgnify_msa = mgnify_msa[:self.mgnify_max_hits] - mgnify_deletion_matrix = mgnify_deletion_matrix[:self.mgnify_max_hits] - - if self._use_small_bfd: - jackhmmer_small_bfd_result = self.jackhmmer_small_bfd_runner.query( - input_fasta_path)[0] - - bfd_out_path = os.path.join(msa_output_dir, 'small_bfd_hits.a3m') - with open(bfd_out_path, 'w') as f: - f.write(jackhmmer_small_bfd_result['sto']) - - bfd_msa, bfd_deletion_matrix, _ = parsers.parse_stockholm( - jackhmmer_small_bfd_result['sto']) - else: - hhblits_bfd_uniclust_result = self.hhblits_bfd_uniclust_runner.query( - input_fasta_path) - - bfd_out_path = os.path.join(msa_output_dir, 'bfd_uniclust_hits.a3m') - with open(bfd_out_path, 'w') as f: - f.write(hhblits_bfd_uniclust_result['a3m']) - - bfd_msa, bfd_deletion_matrix = parsers.parse_a3m( - hhblits_bfd_uniclust_result['a3m']) - - templates_result = self.template_featurizer.get_templates( - query_sequence=input_sequence, - query_pdb_code=None, - query_release_date=None, - hits=hhsearch_hits) - - sequence_features = make_sequence_features( - sequence=input_sequence, - description=input_description, - num_res=num_res) - - msa_features = make_msa_features( - msas=(uniref90_msa, bfd_msa, mgnify_msa), - deletion_matrices=(uniref90_deletion_matrix, - bfd_deletion_matrix, - mgnify_deletion_matrix)) - - logging.info('Uniref90 MSA size: %d sequences.', len(uniref90_msa)) - logging.info('BFD MSA size: %d sequences.', len(bfd_msa)) - logging.info('MGnify MSA size: %d sequences.', len(mgnify_msa)) - logging.info('Final (deduplicated) MSA size: %d sequences.', - msa_features['num_alignments'][0]) - logging.info('Total number of templates (NB: this can include bad ' - 'templates and is later filtered to top 4): %d.', - templates_result.features['template_domain_names'].shape[0]) - - return {**sequence_features, **msa_features, **templates_result.features} diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stickman One Piece APK and Join the Pirate Adventure.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stickman One Piece APK and Join the Pirate Adventure.md deleted 
file mode 100644 index 21b5e8f6a4ef338f1985108a084ca818510787f3..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stickman One Piece APK and Join the Pirate Adventure.md +++ /dev/null @@ -1,91 +0,0 @@ - -

          Stickman One Piece APK: A Fun and Exciting Game for One Piece Fans

          -

          If you are a fan of One Piece, the popular manga and anime series, you will love Stickman One Piece APK, a fun and exciting game that lets you play as your favorite characters from the show. In this game, you can join Luffy, Zoro, Nami, Sanji, Usopp, Chopper, Robin, Franky, Brook, and many more in their adventures across the Grand Line. You can fight against enemies, explore islands, collect treasures, and unleash powerful skills. Stickman One Piece APK is a game that will keep you entertained for hours.

          -

          stickman one piece apk


          Download >> https://ssurll.com/2uNXTl



          -

          What is Stickman One Piece APK?

          -

          A brief introduction to the game and its features

          -

          Stickman One Piece APK is a game that combines the elements of stickman games and One Piece. It is a side-scrolling action game that lets you control different characters from the Straw Hat Pirates crew. Each character has their own unique skills and abilities that you can use to defeat enemies and bosses. You can also switch between characters during the game to create different combos and strategies.

          -

          The game has several modes to choose from, such as story mode, survival mode, challenge mode, and multiplayer mode. In story mode, you can follow the plot of the original manga and anime series and relive some of the most memorable scenes. In survival mode, you can test your skills and endurance by fighting waves of enemies until you die. In challenge mode, you can complete various tasks and missions to earn rewards. In multiplayer mode, you can team up with other players online or offline and cooperate or compete with them.

          -

          How to download and install the game on your Android device

          -

          Stickman One Piece APK is not available on the Google Play Store, so you will need to download it from a third-party source. Here are the steps to download and install the game on your Android device:

          -
            -
          1. Go to [this link] to download the latest version of Stickman One Piece APK.
          2. Enable unknown sources on your device by going to Settings > Security > Unknown Sources.
          3. Locate the downloaded file on your device and tap on it to install it.
          4. Wait for the installation process to finish and then launch the game.
          5. Enjoy playing Stickman One Piece APK!
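          Because the game comes from a third-party site rather than Google Play, it is a good idea to check the file's integrity before installing it. The snippet below is a minimal sketch of how you could do that with Python's standard hashlib module; the file name stickman-one-piece.apk and the published checksum are placeholders, not values provided by any real download page.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder names -- replace with the real file and the checksum the site publishes.
apk_path = "stickman-one-piece.apk"
published_sha256 = "<checksum published by the download page>"

actual = sha256_of(apk_path)
print("SHA-256:", actual)
print("Matches published checksum:", actual == published_sha256)
```

          If the hashes do not match, or the site does not publish one at all, treat the file with extra suspicion and scan it with your antivirus before installing.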
          -

          Why should you play Stickman One Piece APK?

          -

          The game has amazing graphics and sound effects

          -

          One of the reasons why you should play Stickman One Piece APK is because of its amazing graphics and sound effects. The game has a colorful and vibrant design that captures the essence of One Piece. The characters are well-drawn and animated, and they look like their counterparts from the show. The backgrounds are also detailed and varied, depicting different locations from the series. The game also has a great soundtrack that matches the mood of each scene. The sound effects are realistic and immersive, making you feel like you are part of the action.

          -

          The game has a variety of characters and skills to choose from

          -

          Another reason why you should play Stickman One Piece APK is because of its variety of characters and skills to choose from. The game has over 20 characters from the Straw Hat Pirates crew that you can unlock and play as. Each character has their own personality, appearance, voice, and skill set. For example, Luffy can stretch his limbs and use his rubber powers, Zoro can wield three swords and use his swordsmanship skills, Nami can manipulate the weather and use her clima-tact, and so on. You can also customize your characters by changing their outfits, accessories, and weapons. You can also upgrade their skills by spending coins and gems that you earn from the game.

          -

          The game has a challenging and addictive gameplay

          -

          A third reason why you should play Stickman One Piece APK is because of its challenging and addictive gameplay. The game has a simple and intuitive control system that lets you move, jump, attack, and use skills with ease. You can also combine different skills to create powerful combos and effects. The game has a lot of enemies and bosses that you have to face, each with their own strengths and weaknesses. You have to use your strategy and reflexes to overcome them. The game also has a lot of rewards and achievements that you can unlock by completing various tasks and missions. The game will keep you hooked for hours as you try to progress through the story and improve your characters.

          -

          Tips and tricks to master Stickman One Piece APK

          -

          Upgrade your characters and skills regularly

          -

          One of the tips to master Stickman One Piece APK is to upgrade your characters and skills regularly. Upgrading your characters will increase their stats, such as health, attack, defense, and speed. Upgrading your skills will increase their damage, range, duration, and cooldown. You can upgrade your characters and skills by spending coins and gems that you earn from the game. You can also get more coins and gems by watching ads, completing offers, or buying them with real money.

          -

          Use the right skills at the right time

          -

          Another tip to master Stickman One Piece APK is to use the right skills at the right time. Each skill has its own advantages and disadvantages, depending on the situation. For example, Luffy's gear second skill can increase his speed and attack, but it also drains his health. Zoro's asura skill can deal massive damage to multiple enemies, but it also has a long cooldown. Nami's thunderbolt tempo skill can stun enemies, but it also consumes a lot of energy. You have to be smart and careful when using your skills, as they can make or break your game.

          -

          Collect coins and gems to unlock more items and features

          -

          A third tip to master Stickman One Piece APK is to collect coins and gems to unlock more items and features. Coins and gems are the main currencies in the game that you can use to buy various things, such as new characters, outfits, accessories, weapons, skills, upgrades, and more. You can get coins and gems by playing the game, completing missions, watching ads, or buying them with real money. You should try to collect as many coins and gems as possible, as they will help you enhance your game experience.

          -


          -

          Conclusion

          -

          Stickman One Piece APK is a fun and exciting game for One Piece fans that lets you play as your favorite characters from the show. You can enjoy the amazing graphics and sound effects, the variety of characters and skills, and the challenging and addictive gameplay. You can also follow the story mode, play with other players online or offline, complete various tasks and missions, unlock more items and features, and more. Stickman One Piece APK is a game that will make you feel like you are part of the One Piece world.

          -

          FAQs

          -
            -
          • Q: Is Stickman One Piece APK safe to download?
          • -
          • A: Yes, Stickman One Piece APK is safe to download from a trusted source like [this link]. However, you should always be careful when downloading any file from the internet, as there may be some risks involved.
          • -
          • Q: Is Stickman One Piece APK free to play?
          • -
          • A: Yes, Stickman One Piece APK is free to play. However, there are some optional in-app purchases that you can make with real money if you want to get more coins and gems or remove ads.
          • -
          • Q: How can I play Stickman One Piece APK with my friends?
          • -
          • A: You can play Stickman One Piece APK with your friends by using the multiplayer mode. You can either join an existing room or create your own room. You can also choose whether you want to play online or offline.
          • -
          • Q: How can I contact the developer of Stickman One Piece APK?
          • -
          • A: You can contact the developer of Stickman One Piece APK by sending an email to stickmanonepiece@gmail.com or visiting their Facebook page at [this link].
          • -
          • 401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy Fast and Easy Video Downloading with APK 1DM.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy Fast and Easy Video Downloading with APK 1DM.md deleted file mode 100644 index a8ac509d010a3d39afceefcf27f60d81401d368a..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy Fast and Easy Video Downloading with APK 1DM.md +++ /dev/null @@ -1,125 +0,0 @@ - -

            Download APK 1DM: How to Install and Use One of the Best Download Managers for Android

            -

            If you are looking for a way to download videos, music, movies, torrents, and other files from the internet with blazing-fast speed and pause/resume support, you might want to try the 1DM app. But what is the 1DM app, and how can you install it on your Android device? In this article, we will explain what an APK file is, how to install APK files on Android devices, what the 1DM app is and what its features and benefits are, and how to download the 1DM APK from Google Play or other sources.

            -

            What is an APK file and why you might want to download one

            -

            APK stands for Android Package Kit, also called Android Application Package. It is the file format Android uses to distribute and install apps. An APK file contains everything an app needs to run properly on your device; you can think of it as a zip archive holding the app's code, resources, manifest, certificates, and so on.
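            As a quick illustration of the zip-archive point above, the short sketch below uses Python's standard zipfile module to peek inside an APK. The file name my-app.apk is a placeholder for any APK you happen to have on disk; the snippet only shows that the package really is an ordinary zip container with a manifest inside.

```python
import zipfile

# Placeholder path -- point this at any APK file you have downloaded.
apk_path = "my-app.apk"

with zipfile.ZipFile(apk_path) as apk:
    names = apk.namelist()
    # Every APK ships a binary manifest describing the app's components and permissions.
    print("contains AndroidManifest.xml:", "AndroidManifest.xml" in names)
    # Show a handful of entries to get a feel for the layout (code, resources, certificates).
    for info in apk.infolist()[:15]:
        print(f"{info.file_size:>10}  {info.filename}")
```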

            -

            download apk 1dm


            DOWNLOAD · https://ssurll.com/2uO1fT



            -

            Normally, when you download an app from Google Play Store, it automatically installs the APK file for you. However, sometimes you might want to download an APK file from other sources, such as websites or file-sharing platforms. There are several reasons why you might want to do this:

            -
              -
            • You want to access an app that is not available in your region or country.
            • -
            • You want to try an app that is not yet released or updated on Google Play Store.
            • -
            • You want to use an older version of an app that works better for you.
            • -
            • You want to install a modified or hacked version of an app that offers extra features or removes ads.
            • -
            • You want to backup or share an app with someone else.
            • -
            -

            However, downloading APK files from unknown sources can also be risky. You might end up installing malware or viruses on your device. You might also violate the terms of service or the intellectual property rights of the app developers. Therefore, you should always be careful about where you get your APK files from and what permissions they require.

            -

            How to install APK files on Android devices

            -

            To install an APK file on your Android device, you need to enable the option to install apps from unknown sources. This option varies depending on your Android version and device model. Here are some common ways to find it:

            -
              -
            • Go to Settings > Apps > Special app access > Install unknown apps.
            • -
            • Go to Settings > Apps & notifications > Advanced > Special app access > Install unknown apps.
            • -
            • Go to Settings > Security > Unknown sources.
            • -
            -

            Once you enable this option, you need to choose which apps are allowed to install unknown apps. For example, if you use Chrome as your browser, you need to toggle on Allow from this source for Chrome. This way, you can download APK files using Chrome and install them directly.

            -

            Alternatively, you can use a file manager app to locate and install APK files on your device. You can download a file manager app from Google Play Store or use the one that comes with your device. For example, you can use Cx File Explorer or File Manager.

            -

            After you download an APK file using your browser or transfer it from your computer via USB cable, you can open it with your file manager app and tap Install. You might need to grant some permissions or accept some warnings before the installation completes.
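            If you prefer to install from a computer rather than tapping through a file manager, one common alternative is the adb tool from the Android SDK. The sketch below is a minimal example that assumes adb is installed on your computer, USB debugging is enabled on the phone, and the APK has been saved locally as app.apk (a placeholder name).

```python
import subprocess

# Placeholder path to the APK saved on your computer.
apk_path = "app.apk"

# 'adb install -r' pushes the package to the connected device and installs it,
# reinstalling (updating) the app if it is already present.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

            The same caution applies as with any sideloaded APK: only install files you trust.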

            -

            What is 1DM app and what are its features and benefits

            -

            1DM (formerly IDM) stands for One Download Manager. It is a free adblock and privacy browser for Android with one of the fastest and most advanced download managers available, including a torrent and HD video downloader. It has over 100 million downloads on Google Play Store and a 4.5-star rating from more than 1.5 million users. Here are some of the features and benefits of the 1DM app:

            -

            -
              -
            • It supports all file types, including music, video, documents, archives, programs, torrents, etc.
            • -
            • It can download files from any website or app, including YouTube, Facebook, Instagram, Twitter, TikTok, etc.
            • -
            • It can download multiple files at the same time with up to 32 simultaneous connections per download.
            • -
            • It can resume broken or paused downloads even if the internet connection is lost or the website changes.
            • -
            • It can download files in the background while you use other apps or turn off the screen.
            • -
            • It can download files over Wi-Fi only or use both Wi-Fi and mobile data to boost the speed.
            • -
            • It can schedule downloads to start or stop at a specific time or date.
            • -
            • It can import download links from a text file or clipboard.
            • -
            • It can export download links to share with others or backup.
            • -
            • It can scan and remove duplicate files to save storage space.
            • -
            • It can sort and organize downloaded files by name, size, date, type, etc.
            • -
            • It can play downloaded videos and music with its built-in media player.
            • -
            • It can hide downloaded files from other apps with its private vault feature.
            • -
            • It can block annoying ads and pop-ups on websites with its adblock feature.
            • -
            • It can protect your privacy and security with its incognito mode and VPN support.
            • -
            -

            How to download APK 1DM app from Google Play or other sources

            -

            The easiest way to download APK 1DM app is to install it from Google Play Store. You can search for 1DM app on Google Play Store or use this link: [Download 1DM app from Google Play Store]. You just need to tap Install and wait for the app to be downloaded and installed on your device. You might need to grant some permissions or accept some terms of service before using the app.

            -

            If you want to download APK 1DM app from other sources, you need to follow the steps we mentioned earlier to enable the option to install apps from unknown sources and choose which apps are allowed to do so. Then, you need to find a reliable and trustworthy website that offers APK 1DM app for download. You can use a search engine like Bing or Google to find such websites. For example, you can use this link: [Download APK 1DM app from APKPure]. You just need to tap Download APK and wait for the file to be downloaded on your device. Then, you need to open it with your file manager app and tap Install. You might need to grant some permissions or accept some warnings before the installation completes.

            -

            Conclusion: Summarize the main points and provide some tips for using 1DM app

            -

            In conclusion, APK 1DM app is one of the best download managers for Android devices that allows you to download any file from any website or app with fast speed and pause/resume support. It also offers many features and benefits such as adblock, privacy, media player, private vault, etc. To install APK 1DM app on your device, you can either use Google Play Store or other sources. However, you should always be careful about where you get your APK files from and what permissions they require. Here are some tips for using 1DM app:

            -
              -
            • To start a download, you can either use the built-in browser of 1DM app or copy and paste the download link from another browser or app.
            • -
            • To manage your downloads, you can swipe left or right on the download bar or tap on the download icon on the top right corner of the screen.
            • -
            • To access your downloaded files, you can tap on the folder icon on the bottom right corner of the screen or use your file manager app.
            • -
            • To change your settings, you can tap on the menu icon on the top left corner of the screen and choose Settings.
            • -
            • To get help or support, you can tap on the menu icon on the top left corner of the screen and choose Help & feedback.
            • -
            -

            FAQs: Answer some common questions about APK files and 1DM app

            -

            Here are some common questions and answers about APK files and 1DM app:

            -

            Q: What is the difference between APK and MOD APK?

            -

            A: APK is the original file format that Android uses to install apps. MOD APK is a modified version of an APK file that has been altered or hacked to offer extra features or remove ads. For example, a MOD APK of 1DM app might allow you to download videos from YouTube without any restrictions or watermark. However, using MOD APK files can be risky as they might contain malware or viruses or violate the terms of service or the intellectual property rights of the app developers.

            -

            Q: How can I update APK 1DM app?

            -

            A: If you installed APK 1DM app from Google Play Store, you can update it automatically or manually from there. You just need to open Google Play Store and tap on My apps & games. Then, you can tap on Update all or Update next to 1DM app. If you installed APK 1DM app from other sources, you need to download the latest version of the APK file from the same source and install it over the existing one. You might need to enable the option to install apps from unknown sources and choose which apps are allowed to do so again.

            -

            Q: How can I uninstall APK 1DM app?

            -

            A: To uninstall APK 1DM app from your device, you can either use the built-in uninstaller of 1DM app or use your device's settings. To use the built-in uninstaller of 1DM app, you can tap on the menu icon on the top left corner of the screen and choose Uninstall. Then, you can tap on OK to confirm. To use your device's settings, you can go to Settings > Apps > 1DM app > Uninstall. Then, you can tap on OK to confirm.

            -

            Q: How can I contact the developers of APK 1DM app?

            -

            A: To contact the developers of APK 1DM app, you can use their official website, email address, or social media accounts. Here are some ways to reach them:

            -
              -
            • Website: [https://www.apps2sd.info/idmp/faq.html]
            • -
            • Email: [idmplus.apps2sd@gmail.com]
            • -
            • Facebook: [https://www.facebook.com/idmplus]
            • -
            • Twitter: [https://twitter.com/IDMPlus]
            • -
            -

            Q: How can I support the developers of APK 1DM app?

            -

            A: To support the developers of APK 1DM app, you can buy the pro version of the app, which offers more features and removes ads. You can also rate and review the app on Google Play Store, share it with your friends and family, and provide feedback and suggestions to the developers.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/sinz2002/ChuanhuChatGPT/readme/README_ja.md b/spaces/sinz2002/ChuanhuChatGPT/readme/README_ja.md deleted file mode 100644 index fc56eec0b81c22ff0a49e3960aa52ffd7d6dc5cb..0000000000000000000000000000000000000000 --- a/spaces/sinz2002/ChuanhuChatGPT/readme/README_ja.md +++ /dev/null @@ -1,126 +0,0 @@ -
            - - 简体中文 | English | 日本語 -
            - -

            川虎 Chat 🐯 Chuanhu Chat

            -
            - - Logo - - -

            -

            ChatGPT/ChatGLM/LLaMAなどのLLMのための軽量でユーザーフレンドリーなWeb-UI

            -

            - - Tests Passing - - - GitHub Contributors - - - GitHub pull requests - -

            - ストリーム出力/会話回数無制限/履歴保存/プリセットプロンプト/ファイルへの質問チャット
            - ウェブ検索/LaTeXレンダリング/表レンダリング/コードハイライト
            - オートダークモード/アダプティブ・ウェブ・インターフェイス/WeChatライク・テーマ
            - マルチパラメーターチューニング/マルチAPI-Key対応/マルチユーザー対応
            - GPT-4対応/LLMのローカルデプロイ可能。 -

            - 動画チュートリアル - · - 2.0 イントロダクション - · - 3.0 イントロダクション & チュートリアル - || - オンライントライアル - · - ワンクリックデプロイ -

            -

            - Animation Demo -

            -

            -
            - -## 使う上でのTips - -- ChatGPTをより適切に制御するために、システムプロンプトを使用できます。 -- プロンプトテンプレートを使用するには、プロンプトテンプレートコレクションを選択し、ドロップダウンメニューから特定のプロンプトを選択。回答が不十分な場合は、`🔄再生成`ボタンを使って再試行します。 -- 入力ボックスで改行するには、Shift + Enterキーを押してください。 -- 入力履歴を素早く切り替えるには、入力ボックスで キーを押す。 -- プログラムをサーバにデプロイするには、プログラムの最終行を `demo.launch(server_name="0.0.0.0", server_port=)`に変更します。 -- 共有リンクを取得するには、プログラムの最後の行を `demo.launch(share=True)` に変更してください。なお、公開リンクでアクセスするためには、プログラムが実行されている必要があることに注意してください。 -- Hugging Face Spacesで使用する場合: より速く、より安全に利用するために、**Duplicate Space**を使用し、自分のスペースでプログラムを実行することをお勧めします。 - -## インストール - -```shell -git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git -cd ChuanhuChatGPT -pip install -r requirements.txt -``` - -次に `config_example.json`をコピーして `config.json`にリネームし、そのファイルにAPI-Keyなどの設定を記入する。 - -```shell -python ChuanhuChatbot.py -``` - -ブラウザのウィンドウが開き、ChatGPTとチャットできるようになります。 - -> **Note** -> -> 詳しい手順は[wikiページ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程)をご確認ください。 - -## トラブルシューティング - -問題が発生した場合は、まずこのプロジェクトの最新の変更点を手動で引っ張ってみるのがよいでしょう。その手順は以下の通りです: - -1. ウェブページの `Download ZIP` をクリックして最新のコードアーカイブをダウンロードするか、または - ```shell - git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f - ``` -2. 新しい依存関係が導入されている可能性があるため、依存関係を再度インストールしてみてください。 - ``` - pip install -r requirements.txt - ``` -3. Gradioを更新 - ``` - pip install gradio --upgrade --force-reinstall - ``` - -一般的に、以下の手順でほとんどの問題を解決することができます。 - -それでも問題が解決しない場合は、こちらのページをご参照ください: [よくある質問(FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) - -このページでは、考えられるほぼすべての問題点と解決策を掲載しています。よくお読みください。 - -## More Information - -より詳細な情報は、[wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki) をご覧ください。: - -- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization) -- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南) -- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目) -- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志) -- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可) - -## Starchart - -[![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date) - -## Contributors - - - - - -## Sponsor - -🐯 この企画が役に立ったら、遠慮なくコーラかコーヒーでもおごってください〜。 - -Buy Me A Coffee - -image diff --git a/spaces/skf15963/summary/fengshen/models/transfo_xl_denoise/generate.py b/spaces/skf15963/summary/fengshen/models/transfo_xl_denoise/generate.py deleted file mode 100644 index 5b768ff1baf6477735ac14fec9df58f7cd2724c6..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/transfo_xl_denoise/generate.py +++ /dev/null @@ -1,66 +0,0 @@ -import torch -import torch.nn.functional as F -from fengshen.models.transfo_xl_denoise.tokenization_transfo_xl_denoise import TransfoXLDenoiseTokenizer -from fengshen.models.transfo_xl_denoise.modeling_transfo_xl_denoise import TransfoXLDenoiseModel -from fengshen.utils import top_k_logits, get_masks_and_position_ids - - -def get_batch(context_tokens, mem_length, batch_size=1): - tokens = context_tokens - tokens = tokens.view(batch_size, -1).contiguous() - # Get the masks and postition ids. 
- attention_mask, position_ids = get_masks_and_position_ids(tokens, mem_length=mem_length) - return tokens, attention_mask, position_ids - - -def denoise_generate(model, - tokenizer, - input_text, - device=0, - mem_length=512, - temperature=1., - top_p=0.9, - eod_token=50000): - ''' Generate with fixed prompt pretrained ''' - prompt = f"“{input_text}”改写后是“" - res = [] - counter = 0 - tokens, attention_mask, position_ids = get_batch( - torch.LongTensor(tokenizer.encode(prompt)), mem_length, batch_size=1) - tokens, attention_mask, position_ids = tokens.cuda( - device), attention_mask.cuda(device), position_ids.cuda(device) - org_context_length = tokens.shape[-1] - model = model.cuda(device) - while counter < 100: - if counter == 0: - mems = [] # empty at the begining - output = model(input_ids=tokens, attention_mask=attention_mask, - position_ids=position_ids, hidden_states=mems) - logits, mems = output.logits, output.hidden_states - else: - index = org_context_length + counter - output = model(input_ids=tokens[:, index - 1: index], position_ids=tokens.new_ones((1, 1)) * (index - 1), - attention_mask=tokens.new_ones(1, 1, 1, mem_length + 1, device=device, - dtype=torch.float), hidden_states=mems) - logits, mems = output.logits, output.hidden_states - logits = logits[:, -1] - logits /= temperature - logits = top_k_logits(logits, top_k=0, top_p=top_p) - log_probs = F.softmax(logits, dim=-1) - prev = torch.multinomial(log_probs, num_samples=1)[0] - is_end = prev == eod_token - if is_end: - break - tokens = torch.cat((tokens, prev.view(1, 1)), dim=1) - counter += 1 - res.append(tokenizer.decode(tokens.view(-1).contiguous().tolist())) - return res - - -if __name__ == "__main__": - device = 1 - tokenizer = TransfoXLDenoiseTokenizer.from_pretrained('IDEA-CCNL/Bigan-Transformer-XL-denoise-1.1B') - model = TransfoXLDenoiseModel.from_pretrained('IDEA-CCNL/Bigan-Transformer-XL-denoise-1.1B') - input_text = "凡是有成就的人, 都很严肃地对待生命自己的" - res = denoise_generate(model, tokenizer, input_text) - print(res) diff --git a/spaces/sklearn-docs/text-feature-extraction-evaluation/descriptions/parameter_grid/alpha.md b/spaces/sklearn-docs/text-feature-extraction-evaluation/descriptions/parameter_grid/alpha.md deleted file mode 100644 index 7370bdcb8a93d793134237e9d48b880adb51d9fa..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/text-feature-extraction-evaluation/descriptions/parameter_grid/alpha.md +++ /dev/null @@ -1 +0,0 @@ -The "alpha" parameter adds a constant value to the occurrence counters of features, ensuring that even unobserved feature values have a non-zero probability. Smaller values of "alpha" result in weaker smoothing, while larger values increase the level of smoothing. The default value is 1.0, which applies Laplace smoothing, but it can be adjusted based on the model's requirements. 
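For context, additive smoothing of this kind is most often seen in naive Bayes text classifiers. The sketch below is illustrative only (the corpus, labels, and choice of MultinomialNB are assumptions, not the app's actual setup); it shows how different alpha values plug into scikit-learn on top of simple count features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus -- not real data from the app.
texts = ["free money now", "meeting at noon", "win money fast", "lunch meeting today"]
labels = ["spam", "ham", "spam", "ham"]

for alpha in (0.01, 1.0, 10.0):  # weak smoothing, default Laplace smoothing, strong smoothing
    model = make_pipeline(CountVectorizer(), MultinomialNB(alpha=alpha))
    model.fit(texts, labels)
    print(f"alpha={alpha}: prediction for 'money meeting' ->", model.predict(["money meeting"]))
```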
\ No newline at end of file diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_cuda.cpp b/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_cuda.cpp deleted file mode 100644 index 5d9424908ed2dbd4ac3cdb98d13e09287a4d2f2d..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_cuda.cpp +++ /dev/null @@ -1,685 +0,0 @@ -// modify from -// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda.c - -#include -#include - -#include -#include - -void deformable_im2col(const at::Tensor data_im, const at::Tensor data_offset, - const int channels, const int height, const int width, - const int ksize_h, const int ksize_w, const int pad_h, - const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - at::Tensor data_col); - -void deformable_col2im(const at::Tensor data_col, const at::Tensor data_offset, - const int channels, const int height, const int width, - const int ksize_h, const int ksize_w, const int pad_h, - const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - at::Tensor grad_im); - -void deformable_col2im_coord( - const at::Tensor data_col, const at::Tensor data_im, - const at::Tensor data_offset, const int channels, const int height, - const int width, const int ksize_h, const int ksize_w, const int pad_h, - const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, at::Tensor grad_offset); - -void modulated_deformable_im2col_cuda( - const at::Tensor data_im, const at::Tensor data_offset, - const at::Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kenerl_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - at::Tensor data_col); - -void modulated_deformable_col2im_cuda( - const at::Tensor data_col, const at::Tensor data_offset, - const at::Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kenerl_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - at::Tensor grad_im); - -void modulated_deformable_col2im_coord_cuda( - const at::Tensor data_col, const at::Tensor data_im, - const at::Tensor data_offset, const at::Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kenerl_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, at::Tensor grad_offset, - at::Tensor grad_mask); - -void shape_check(at::Tensor input, at::Tensor offset, at::Tensor *gradOutput, - at::Tensor weight, int kH, int kW, int dH, int dW, int padH, - int padW, int dilationH, int dilationW, int group, - int deformable_group) { - TORCH_CHECK(weight.ndimension() == 4, - "4D weight tensor 
(nOutputPlane,nInputPlane,kH,kW) expected, " - "but got: %s", - weight.ndimension()); - - TORCH_CHECK(weight.is_contiguous(), "weight tensor has to be contiguous"); - - TORCH_CHECK(kW > 0 && kH > 0, - "kernel size should be greater than zero, but got kH: %d kW: %d", kH, - kW); - - TORCH_CHECK((weight.size(2) == kH && weight.size(3) == kW), - "kernel size should be consistent with weight, ", - "but got kH: %d kW: %d weight.size(2): %d, weight.size(3): %d", kH, - kW, weight.size(2), weight.size(3)); - - TORCH_CHECK(dW > 0 && dH > 0, - "stride should be greater than zero, but got dH: %d dW: %d", dH, dW); - - TORCH_CHECK( - dilationW > 0 && dilationH > 0, - "dilation should be greater than 0, but got dilationH: %d dilationW: %d", - dilationH, dilationW); - - int ndim = input.ndimension(); - int dimf = 0; - int dimh = 1; - int dimw = 2; - - if (ndim == 4) { - dimf++; - dimh++; - dimw++; - } - - TORCH_CHECK(ndim == 3 || ndim == 4, "3D or 4D input tensor expected but got: %s", - ndim); - - long nInputPlane = weight.size(1) * group; - long inputHeight = input.size(dimh); - long inputWidth = input.size(dimw); - long nOutputPlane = weight.size(0); - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - - TORCH_CHECK(nInputPlane % deformable_group == 0, - "input channels must divide deformable group size"); - - if (outputWidth < 1 || outputHeight < 1) - AT_ERROR( - "Given input size: (%ld x %ld x %ld). " - "Calculated output size: (%ld x %ld x %ld). Output size is too small", - nInputPlane, inputHeight, inputWidth, nOutputPlane, outputHeight, - outputWidth); - - TORCH_CHECK(input.size(1) == nInputPlane, - "invalid number of input planes, expected: %d, but got: %d", - nInputPlane, input.size(1)); - - TORCH_CHECK((inputHeight >= kH && inputWidth >= kW), - "input image is smaller than kernel"); - - TORCH_CHECK((offset.size(2) == outputHeight && offset.size(3) == outputWidth), - "invalid spatial size of offset, expected height: %d width: %d, but " - "got height: %d width: %d", - outputHeight, outputWidth, offset.size(2), offset.size(3)); - - TORCH_CHECK((offset.size(1) == deformable_group * 2 * kH * kW), - "invalid number of channels of offset"); - - if (gradOutput != NULL) { - TORCH_CHECK(gradOutput->size(dimf) == nOutputPlane, - "invalid number of gradOutput planes, expected: %d, but got: %d", - nOutputPlane, gradOutput->size(dimf)); - - TORCH_CHECK((gradOutput->size(dimh) == outputHeight && - gradOutput->size(dimw) == outputWidth), - "invalid size of gradOutput, expected height: %d width: %d , but " - "got height: %d width: %d", - outputHeight, outputWidth, gradOutput->size(dimh), - gradOutput->size(dimw)); - } -} - -int deform_conv_forward_cuda(at::Tensor input, at::Tensor weight, - at::Tensor offset, at::Tensor output, - at::Tensor columns, at::Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step) { - // todo: resize columns to include im2col: done - // todo: add im2col_step as input - // todo: add new output buffer and transpose it to output (or directly - // transpose output) todo: possibly change data indexing because of - // parallel_imgs - - shape_check(input, offset, NULL, weight, kH, kW, dH, dW, padH, padW, - dilationH, dilationW, group, deformable_group); - at::DeviceGuard guard(input.device()); - - input = input.contiguous(); - offset = offset.contiguous(); - weight = 
weight.contiguous(); - - int batch = 1; - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input.unsqueeze_(0); - offset.unsqueeze_(0); - } - - // todo: assert batchsize dividable by im2col_step - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = weight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), "invalid batch size of offset"); - - output = output.view({batchSize / im2col_step, im2col_step, nOutputPlane, - outputHeight, outputWidth}); - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < outputHeight * outputWidth) { - ones = at::ones({outputHeight, outputWidth}, input.options()); - } - - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - at::Tensor output_buffer = - at::zeros({batchSize / im2col_step, nOutputPlane, - im2col_step * outputHeight, outputWidth}, - output.options()); - - output_buffer = output_buffer.view( - {output_buffer.size(0), group, output_buffer.size(1) / group, - output_buffer.size(2), output_buffer.size(3)}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - deformable_im2col(input[elt], offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, columns); - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - output_buffer[elt][g] = output_buffer[elt][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output_buffer[elt][g]); - } - } - - output_buffer = output_buffer.view( - {output_buffer.size(0), output_buffer.size(1) * output_buffer.size(2), - output_buffer.size(3), output_buffer.size(4)}); - - output_buffer = output_buffer.view({batchSize / im2col_step, nOutputPlane, - im2col_step, outputHeight, outputWidth}); - output_buffer.transpose_(1, 2); - output.copy_(output_buffer); - output = output.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - output = output.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - offset = offset.view({offset.size(1), offset.size(2), offset.size(3)}); - } - - return 1; -} - -int deform_conv_backward_input_cuda(at::Tensor input, at::Tensor offset, - at::Tensor gradOutput, at::Tensor gradInput, - at::Tensor gradOffset, at::Tensor weight, - at::Tensor columns, int kW, int kH, int dW, - int dH, int padW, int padH, int dilationW, - int dilationH, int group, - int deformable_group, int im2col_step) { - shape_check(input, offset, &gradOutput, weight, kH, kW, dH, dW, padH, padW, - dilationH, dilationW, group, deformable_group); - at::DeviceGuard guard(input.device()); - - input = 
input.contiguous(); - offset = offset.contiguous(); - gradOutput = gradOutput.contiguous(); - weight = weight.contiguous(); - - int batch = 1; - - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input = input.view({1, input.size(0), input.size(1), input.size(2)}); - offset = offset.view({1, offset.size(0), offset.size(1), offset.size(2)}); - gradOutput = gradOutput.view( - {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)}); - } - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = weight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), 3, "invalid batch size of offset"); - gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth}); - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - // change order of grad output - gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step, - nOutputPlane, outputHeight, outputWidth}); - gradOutput.transpose_(1, 2); - - gradInput = gradInput.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - gradOffset = gradOffset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, - outputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - // divide into groups - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - gradOutput = gradOutput.view( - {gradOutput.size(0), group, gradOutput.size(1) / group, - gradOutput.size(2), gradOutput.size(3), gradOutput.size(4)}); - - for (int g = 0; g < group; g++) { - columns[g] = columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - gradOutput[elt][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - gradOutput = gradOutput.view( - {gradOutput.size(0), gradOutput.size(1) * gradOutput.size(2), - gradOutput.size(3), gradOutput.size(4), gradOutput.size(5)}); - - deformable_col2im_coord(columns, input[elt], offset[elt], nInputPlane, - inputHeight, inputWidth, kH, kW, padH, padW, dH, dW, - dilationH, dilationW, im2col_step, deformable_group, - gradOffset[elt]); - - deformable_col2im(columns, offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, gradInput[elt]); - } - - gradOutput.transpose_(1, 2); - gradOutput = - gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth}); - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - gradOffset = gradOffset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth}); - 
input = input.view({nInputPlane, inputHeight, inputWidth}); - gradInput = gradInput.view({nInputPlane, inputHeight, inputWidth}); - offset = offset.view({offset.size(1), offset.size(2), offset.size(3)}); - gradOffset = - gradOffset.view({offset.size(1), offset.size(2), offset.size(3)}); - } - - return 1; -} - -int deform_conv_backward_parameters_cuda( - at::Tensor input, at::Tensor offset, at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH, - int padW, int padH, int dilationW, int dilationH, int group, - int deformable_group, float scale, int im2col_step) { - // todo: transpose and reshape outGrad - // todo: reshape columns - // todo: add im2col_step as input - - shape_check(input, offset, &gradOutput, gradWeight, kH, kW, dH, dW, padH, - padW, dilationH, dilationW, group, deformable_group); - at::DeviceGuard guard(input.device()); - - input = input.contiguous(); - offset = offset.contiguous(); - gradOutput = gradOutput.contiguous(); - - int batch = 1; - - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input = input.view( - at::IntList({1, input.size(0), input.size(1), input.size(2)})); - gradOutput = gradOutput.view( - {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)}); - } - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = gradWeight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), "invalid batch size of offset"); - - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step, - nOutputPlane, outputHeight, outputWidth}); - gradOutput.transpose_(1, 2); - - at::Tensor gradOutputBuffer = at::zeros_like(gradOutput); - gradOutputBuffer = - gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, im2col_step, - outputHeight, outputWidth}); - gradOutputBuffer.copy_(gradOutput); - gradOutputBuffer = - gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, - im2col_step * outputHeight, outputWidth}); - - gradOutput.transpose_(1, 2); - gradOutput = - gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - deformable_im2col(input[elt], offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, columns); - - // divide into group - gradOutputBuffer = gradOutputBuffer.view( - {gradOutputBuffer.size(0), group, gradOutputBuffer.size(1) / group, - gradOutputBuffer.size(2), gradOutputBuffer.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - gradWeight = - gradWeight.view({group, gradWeight.size(0) / group, gradWeight.size(1), - gradWeight.size(2), gradWeight.size(3)}); - - for (int g = 0; g < group; g++) { - gradWeight[g] = gradWeight[g] - .flatten(1) - .addmm_(gradOutputBuffer[elt][g].flatten(1), - columns[g].transpose(1, 0), 1.0, scale) - 
.view_as(gradWeight[g]); - } - gradOutputBuffer = gradOutputBuffer.view( - {gradOutputBuffer.size(0), - gradOutputBuffer.size(1) * gradOutputBuffer.size(2), - gradOutputBuffer.size(3), gradOutputBuffer.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - gradWeight = gradWeight.view({gradWeight.size(0) * gradWeight.size(1), - gradWeight.size(2), gradWeight.size(3), - gradWeight.size(4)}); - } - - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - } - - return 1; -} - -void modulated_deform_conv_cuda_forward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns, - int kernel_h, int kernel_w, const int stride_h, const int stride_w, - const int pad_h, const int pad_w, const int dilation_h, - const int dilation_w, const int group, const int deformable_group, - const bool with_bias) { - TORCH_CHECK(input.is_contiguous(), "input tensor has to be contiguous"); - TORCH_CHECK(weight.is_contiguous(), "weight tensor has to be contiguous"); - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_out = weight.size(0); - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape wont match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels wont match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... 
- ones = at::ones({height_out, width_out}, input.options()); - } - - // resize output - output = output.view({batch, channels_out, height_out, width_out}).zero_(); - // resize temporary columns - columns = - at::zeros({channels * kernel_h * kernel_w, 1 * height_out * width_out}, - input.options()); - - output = output.view({output.size(0), group, output.size(1) / group, - output.size(2), output.size(3)}); - - for (int b = 0; b < batch; b++) { - modulated_deformable_im2col_cuda( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - - // divide into group - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - - for (int g = 0; g < group; g++) { - output[b][g] = output[b][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output[b][g]); - } - - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - } - - output = output.view({output.size(0), output.size(1) * output.size(2), - output.size(3), output.size(4)}); - - if (with_bias) { - output += bias.view({1, bias.size(0), 1, 1}); - } -} - -void modulated_deform_conv_cuda_backward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor columns, - at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias, - at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias) { - TORCH_CHECK(input.is_contiguous(), "input tensor has to be contiguous"); - TORCH_CHECK(weight.is_contiguous(), "weight tensor has to be contiguous"); - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape wont match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels wont match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... 
- ones = at::ones({height_out, width_out}, input.options()); - } - - grad_input = grad_input.view({batch, channels, height, width}); - columns = at::zeros({channels * kernel_h * kernel_w, height_out * width_out}, - input.options()); - - grad_output = - grad_output.view({grad_output.size(0), group, grad_output.size(1) / group, - grad_output.size(2), grad_output.size(3)}); - - for (int b = 0; b < batch; b++) { - // divide int group - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - grad_output[b][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - - // gradient w.r.t. input coordinate data - modulated_deformable_col2im_coord_cuda( - columns, input[b], offset[b], mask[b], 1, channels, height, width, - height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, deformable_group, grad_offset[b], - grad_mask[b]); - // gradient w.r.t. input data - modulated_deformable_col2im_cuda( - columns, offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, grad_input[b]); - - // gradient w.r.t. weight, dWeight should accumulate across the batch and - // group - modulated_deformable_im2col_cuda( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - grad_weight = grad_weight.view({group, grad_weight.size(0) / group, - grad_weight.size(1), grad_weight.size(2), - grad_weight.size(3)}); - if (with_bias) - grad_bias = grad_bias.view({group, grad_bias.size(0) / group}); - - for (int g = 0; g < group; g++) { - grad_weight[g] = - grad_weight[g] - .flatten(1) - .addmm_(grad_output[b][g].flatten(1), columns[g].transpose(0, 1)) - .view_as(grad_weight[g]); - if (with_bias) { - grad_bias[g] = - grad_bias[g] - .view({-1, 1}) - .addmm_(grad_output[b][g].flatten(1), ones.view({-1, 1})) - .view(-1); - } - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - grad_weight = grad_weight.view({grad_weight.size(0) * grad_weight.size(1), - grad_weight.size(2), grad_weight.size(3), - grad_weight.size(4)}); - if (with_bias) - grad_bias = grad_bias.view({grad_bias.size(0) * grad_bias.size(1)}); - } - grad_output = grad_output.view({grad_output.size(0) * grad_output.size(1), - grad_output.size(2), grad_output.size(3), - grad_output.size(4)}); -} diff --git a/spaces/smartinezbragado/reddit-topic-modelling/templates/success.html b/spaces/smartinezbragado/reddit-topic-modelling/templates/success.html deleted file mode 100644 index 94b5e86ef5c9db8491f2712b30a68b1248ae3b94..0000000000000000000000000000000000000000 --- a/spaces/smartinezbragado/reddit-topic-modelling/templates/success.html +++ /dev/null @@ -1,38 +0,0 @@ - - - - - - - - Reddit Topic Modelling - - - reddit-logo - -
            -

            Topics

            -
            -
            - {% for t in topics %} - {{titles[loop.index]}} - {{ t|safe }} - {% endfor %} -
            -
            -

            Documents

            -
            -
            - {% for t in docs %} - {{docs_titles[loop.index]}} - {{ t|safe }} - {% endfor %} -
            -
            - - - - \ No newline at end of file diff --git a/spaces/spaces-ci-bot/webhook/app.py b/spaces/spaces-ci-bot/webhook/app.py deleted file mode 100644 index d8d39a588a253d4a01eb7b4f0acabb9c05defe04..0000000000000000000000000000000000000000 --- a/spaces/spaces-ci-bot/webhook/app.py +++ /dev/null @@ -1,235 +0,0 @@ -import os -from pathlib import Path -from typing import Literal - -from fastapi import BackgroundTasks, HTTPException, Response, status -from huggingface_hub import ( - CommitOperationAdd, - CommitOperationDelete, - comment_discussion, - create_commit, - create_repo, - delete_repo, - get_repo_discussions, - snapshot_download, - space_info, -) -from huggingface_hub.repocard import RepoCard -from requests import HTTPError -from huggingface_hub import login -from huggingface_hub import WebhooksServer, WebhookPayload -from huggingface_hub.utils import RepositoryNotFoundError -from ui import generate_ui - -login(token=os.getenv("HF_TOKEN")) - -CI_BOT_NAME = "spaces-ci-bot" - -app = WebhooksServer(ui=generate_ui()) - - -@app.add_webhook -async def trigger_ci_on_pr(payload: WebhookPayload, task_queue: BackgroundTasks): - if payload.repo.type != "space": - raise HTTPException(400, f"Must be a Space, not {payload.repo.type}") - - space_id = payload.repo.name - - has_task = False - if ( - # Means "a new PR has been opened" - payload.event.scope.startswith("discussion") - and payload.event.action == "create" - and payload.discussion is not None - and payload.discussion.isPullRequest - and payload.discussion.status == "open" - ): - if not is_pr_synced(space_id=space_id, pr_num=payload.discussion.num): - # New PR! Sync task scheduled - task_queue.add_task( - sync_ci_space, - space_id=space_id, - pr_num=payload.discussion.num, - private=payload.repo.private, - ) - has_task = True - elif ( - # Means "a PR has been merged or closed" - payload.event.scope.startswith("discussion") - and payload.event.action == "update" - and payload.discussion is not None - and payload.discussion.isPullRequest - and ( - payload.discussion.status == "merged" - or payload.discussion.status == "closed" - ) - ): - task_queue.add_task( - delete_ci_space, - space_id=space_id, - pr_num=payload.discussion.num, - ) - has_task = True - elif ( - # Means "some content has been pushed to the Space" (any branch) - payload.event.scope.startswith("repo.content") - and payload.event.action == "update" - ): - # New repo change. Is it a commit on a PR? - # => loop through all PRs and check if new changes happened - for discussion in get_repo_discussions(repo_id=space_id, repo_type="space"): - if discussion.is_pull_request and discussion.status == "open": - if not is_pr_synced(space_id=space_id, pr_num=discussion.num): - # Found a PR that is not yet synced - task_queue.add_task( - sync_ci_space, - space_id=space_id, - pr_num=discussion.num, - private=payload.repo.private, - ) - has_task = True - - if has_task: - return Response( - "Task scheduled to sync/delete Space", status_code=status.HTTP_202_ACCEPTED - ) - else: - return Response("No task scheduled", status_code=status.HTTP_200_OK) - - -def is_pr_synced(space_id: str, pr_num: int) -> bool: - # What is the last synced commit for this PR? - ci_space_id = _get_ci_space_id(space_id=space_id, pr_num=pr_num) - try: - card = RepoCard.load(repo_id_or_path=ci_space_id, repo_type="space") - last_synced_sha = getattr(card.data, "synced_sha", None) - except HTTPError: - last_synced_sha = None - - # What is the last commit id for this PR? 
- info = space_info(repo_id=space_id, revision=f"refs/pr/{pr_num}") - last_pr_sha = info.sha - - # Is it up to date ? - return last_synced_sha == last_pr_sha - - -def sync_ci_space(space_id: str, pr_num: int, private: bool) -> None: - print(f"New task: sync ephemeral env for {space_id} (PR {pr_num})") - - # Create a temporary space for CI if didn't exist - ci_space_id = _get_ci_space_id(space_id=space_id, pr_num=pr_num) - - try: - create_repo( - ci_space_id, - repo_type="space", - space_sdk="docker", # Will be overwritten by sync - private=private, - ) - is_new = True - except HTTPError as err: - if err.response.status_code == 409: # already exists - is_new = False - else: - raise - - # Download space codebase from PR revision - snapshot_path = Path( - snapshot_download( - repo_id=space_id, revision=f"refs/pr/{pr_num}", repo_type="space" - ) - ) - - # Sync space codebase with PR revision - operations = [ # little aggressive but works - CommitOperationDelete(".", is_folder=True) - ] - for filepath in snapshot_path.glob("**/*"): - if filepath.is_file(): - path_in_repo = str(filepath.relative_to(snapshot_path)) - - # Upload all files without changes except for the README file - if path_in_repo == "README.md": - card = RepoCard.load(filepath) - setattr(card.data, "synced_sha", snapshot_path.name) # latest sha - path_or_fileobj = str(card).encode() - else: - path_or_fileobj = filepath - - operations.append( - CommitOperationAdd( - path_in_repo=path_in_repo, path_or_fileobj=path_or_fileobj - ) - ) - - create_commit( - repo_id=ci_space_id, - repo_type="space", - operations=operations, - commit_message=f"Sync CI Space with PR {pr_num}.", - ) - - # Post a comment on the PR - notify_pr(space_id=space_id, pr_num=pr_num, action="create" if is_new else "update") - - -def delete_ci_space(space_id: str, pr_num: int) -> None: - print(f"New task: delete ephemeral env for {space_id} (PR {pr_num})") - - # Delete - ci_space_id = _get_ci_space_id(space_id=space_id, pr_num=pr_num) - try: - delete_repo(repo_id=ci_space_id, repo_type="space") - except RepositoryNotFoundError: - # Repo did not exist: no need to notify - return - - # Notify about deletion - notify_pr(space_id=space_id, pr_num=pr_num, action="delete") - - -def notify_pr( - space_id: str, pr_num: int, action: Literal["create", "update", "delete"] -) -> None: - ci_space_id = _get_ci_space_id(space_id=space_id, pr_num=pr_num) - if action == "create": - comment = NOTIFICATION_TEMPLATE_CREATE.format(ci_space_id=ci_space_id) - elif action == "update": - comment = NOTIFICATION_TEMPLATE_UPDATE.format(ci_space_id=ci_space_id) - elif action == "delete": - comment = NOTIFICATION_TEMPLATE_DELETE - else: - raise ValueError(f"Status {action} not handled.") - - comment_discussion( - repo_id=space_id, repo_type="space", discussion_num=pr_num, comment=comment - ) - - -def _get_ci_space_id(space_id: str, pr_num: int) -> str: - return f"{CI_BOT_NAME}/{space_id.replace('/', '-')}-ci-pr-{pr_num}" - - -NOTIFICATION_TEMPLATE_CREATE = """\ -Following the creation of this PR, an ephemeral Space [{ci_space_id}](https://huggingface.co/spaces/{ci_space_id}) has been launched. Any changes pushed to this PR will be synced with the test Space. - -If your Space requires configuration (secrets or upgraded hardware), you must duplicate the ephemeral Space to your account and configure the settings by yourself. You are responsible of making sure that the changes introduced in the PR are not harmful (leak secrets, run malicious scripts,...). 
- -_(This is an automated message.)_ -""" - -NOTIFICATION_TEMPLATE_UPDATE = """\ -Following new commits that happened in this PR, the ephemeral Space [{ci_space_id}](https://huggingface.co/spaces/{ci_space_id}) has been updated. - -_(This is an automated message.)_ -""" - -NOTIFICATION_TEMPLATE_DELETE = """\ -PR is now merged/closed. The ephemeral Space has been deleted. - -_(This is an automated message.)_ -""" - - -app.run() diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/cross_entropy.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/cross_entropy.py deleted file mode 100644 index fe461064716b38ecf2eb610daddbb609a1884e6b..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/criterions/cross_entropy.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass - -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - - -@dataclass -class CrossEntropyCriterionConfig(FairseqDataclass): - sentence_avg: bool = II("optimization.sentence_avg") - - -@register_criterion("cross_entropy", dataclass=CrossEntropyCriterionConfig) -class CrossEntropyCriterion(FairseqCriterion): - def __init__(self, task, sentence_avg): - super().__init__(task) - self.sentence_avg = sentence_avg - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - loss, _ = self.compute_loss(model, net_output, sample, reduce=reduce) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - def compute_loss(self, model, net_output, sample, reduce=True): - lprobs = model.get_normalized_probs(net_output, log_probs=True) - lprobs = lprobs.view(-1, lprobs.size(-1)) - target = model.get_targets(sample, net_output).view(-1) - loss = F.nll_loss( - lprobs, - target, - ignore_index=self.padding_idx, - reduction="sum" if reduce else "none", - ) - return loss, loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - # we divide by log(2) to convert the loss from base e to base 2 - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - else: - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - 
@staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/stomexserde/gpt4-ui/Examples/Avg Licence Key 2017.md b/spaces/stomexserde/gpt4-ui/Examples/Avg Licence Key 2017.md deleted file mode 100644 index 083e1566c17272060dfa09555506a4375da0e6d1..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Avg Licence Key 2017.md +++ /dev/null @@ -1,50 +0,0 @@ -
            -

            How to Activate AVG Antivirus 2017 with a Free License Key

            -

            AVG Antivirus 2017 is popular security software that protects your PC from viruses, malware, ransomware, and other threats. It also offers features such as webcam protection, a file shredder, a password manager, and more. If you want to use AVG Antivirus 2017 for free, you can activate it with a free license key that you can find online.

            -

            In this article, we will show you how to activate AVG Antivirus 2017 with a free license key in a few simple steps.

            -

            Avg Licence Key 2017


            Download - https://urlgoal.com/2uI9WI



            -

            Step 1: Download and install AVG Antivirus 2017

            -

            You can download AVG Antivirus 2017 from the official website here. Choose the free version and follow the instructions to install it on your PC.

            -

            Step 2: Find a free license key online

            -

            There are many websites that offer free license keys for AVG Antivirus 2017. You can search for them on Google or use one of these sources:

            -
              -
            • Smart Serials: This website provides serial numbers for various software products, including AVG Antivirus 2017. You can copy and paste one of the serial numbers from this page.
            • -
            • Internet Archive: This website archives various digital content, including software files. You can download a ZIP file that contains AVG Internet Security 2017 with license keys from this page.
            • -
            • Scribd: This website allows users to upload and share documents online. You can download a PDF file that contains AVG serial keys from this page.
            • -
            -

            Make sure to choose a license key that matches your version of AVG Antivirus 2017 (32-bit or 64-bit) and has not expired.

            -

            Step 3: Enter the license key in AVG Antivirus 2017

            -

            Once you have a license key, you can enter it in AVG Antivirus 2017 to activate it. Here's how:

            -
              -
            1. Open AVG Antivirus 2017 and click on the Menu icon at the top right corner.
            2. -
            3. Select About from the drop-down menu.
            4. -
            5. Click on Subscription at the bottom of the window.
            6. -
            7. Click on Enter License Number.
            8. -
            9. Paste the license key that you copied earlier and click on Activate.
            10. -
            11. You should see a confirmation message that your subscription is active.
            12. -
            -

            Congratulations! You have successfully activated AVG Antivirus 2017 with a free license key. You can now enjoy its full features and protection for free.

            - -

            Step 4: Update AVG Antivirus 2017 regularly

            -

            To keep your PC protected from the latest threats, you should update AVG Antivirus 2017 regularly. AVG Antivirus 2017 will automatically check for updates and download them in the background. However, you can also manually check for updates and install them. Here's how:

            -
              -
            1. Open AVG Antivirus 2017 and click on the Menu icon at the top right corner.
            2. -
            3. Select Settings from the drop-down menu.
            4. -
            5. Click on Update on the left panel.
            6. -
            7. Click on Check for Updates under Virus Definitions and Program.
            8. -
            9. If there are any updates available, click on Install Now to install them.
            10. -
            -

            You should see a message that your AVG Antivirus 2017 is up to date.

            -

            -

            Step 5: Scan your PC with AVG Antivirus 2017

            -

            To make sure your PC is free from viruses, malware, and other threats, you should scan your PC with AVG Antivirus 2017 regularly. AVG Antivirus 2017 will automatically scan your PC every day at a scheduled time. However, you can also manually scan your PC or specific files and folders. Here's how:

            -
              -
            1. Open AVG Antivirus 2017 and click on the Scan Computer button at the main screen.
            2. -
            3. Select one of the scan options: Basic Scan, Deep Scan, USB/DVD Scan, or File or Folder Scan.
            4. -
            5. If you choose File or Folder Scan, browse to the file or folder that you want to scan and click on OK.
            6. -
            7. Wait for the scan to complete and review the results.
            8. -
            9. If there are any threats detected, click on Resolve All to remove them.
            10. -
            -

            You should see a message that your PC is clean and safe.

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Busta Rhymes T-pain Hustlers Anthem Zippy Aparece Directas Liv.md b/spaces/stomexserde/gpt4-ui/Examples/Busta Rhymes T-pain Hustlers Anthem Zippy Aparece Directas Liv.md deleted file mode 100644 index 70370b4506f3541940e09f99687c05a8c69bafd4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Busta Rhymes T-pain Hustlers Anthem Zippy Aparece Directas Liv.md +++ /dev/null @@ -1,22 +0,0 @@ - -

            Busta Rhymes and T-Pain Celebrate Success in Hustler's Anthem 09

            -

            Busta Rhymes and T-Pain teamed up for a hit single in 2009 called Hustler's Anthem 09, which was featured on Busta's eighth studio album Back on My B.S. The song is a tribute to all the hustlers who work hard to achieve their goals and enjoy the fruits of their labor.

            -

            The song was produced by Ty Fyffe and features T-Pain on the hook, singing "You should already know when I walk in the door / That it ain't no use in frontin' on me / I'm a hustle I'm a do my thing / You already know what it's gon' be". Busta Rhymes delivers his trademark rapid-fire verses, boasting about his lavish lifestyle and his success in the music industry.

            -

            Busta Rhymes T-pain Hustlers Anthem Zippy aparece directas liv


            Download Ziphttps://urlgoal.com/2uI5G9



            -

            The song was accompanied by a music video directed by Hype Williams, which shows Busta and T-Pain partying in various locations, such as a yacht, a club, and a mansion. The video also features cameo appearances by Tony Yayo, Spliff Star, DJ Khaled, and Rick Ross.

            -

            Hustler's Anthem 09 was well received by critics and fans alike, who praised the catchy chorus and the energetic performance by both artists. The song peaked at number 60 on the Billboard Hot 100 chart and number 11 on the Hot Rap Songs chart. It also charted in several other countries, such as Canada, Germany, Switzerland, and the UK.

            -

            Hustler's Anthem 09 is one of Busta Rhymes' most popular songs and a staple in his live shows. It showcases his versatility as a rapper and his ability to collaborate with different artists. It also reflects his motto of "always staying busy", as he has been releasing music consistently for over three decades.

            -

            If you want to listen to Hustler's Anthem 09 or learn more about Busta Rhymes and T-Pain, you can check out the following sources:

            -

            -
              -
            • [^1^] Busta Rhymes – Hustler's Anthem 09 Lyrics | Genius Lyrics
            • -
            • [^2^] Hustler's Anthem '09 (feat. T-Pain) - Single by Busta Rhymes on Apple Music
            • -
            • [^3^] Busta Rhymes - Hustler's Anthem 09 ft. T-Pain - YouTube
            • -
            - -

            The song also has a remix version, which features additional verses by OJ da Juiceman, Jadakiss, Swizz Beatz, and Ryan Leslie. The remix was released as a digital download on March 17, 2009 and was also included on the deluxe edition of Back on My B.S. The remix has a different beat and a different chorus by T-Pain, who sings "You should already know when I walk in the spot / That it ain't no use in frontin' on me / I'm a baller I'm a do my thing / You already know what it's gon' be".

            -

            The remix was also accompanied by a music video directed by Rik Cordero, which shows Busta and his guests performing in front of a green screen with various backgrounds. The video also features cameo appearances by Ron Browz, DJ Drama, and DJ Envy.

            -

            The remix was well received by critics and fans alike, who praised the new beat and the guest appearances by other rappers. The remix peaked at number 28 on the Hot R&B/Hip-Hop Songs chart and number 18 on the Hot Rap Songs chart. It also charted in several other countries, such as Belgium, France, and Ireland.

            -

            Hustler's Anthem 09 is one of Busta Rhymes' most successful singles and a testament to his longevity and relevance in the hip-hop scene. It also showcases his collaboration with T-Pain, who is one of the most influential singers and producers of the 2000s. The song is a motivational anthem for all the hustlers who strive to make their dreams come true.

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Comprendre Les Femmes Pierre Daco.pdf.md b/spaces/stomexserde/gpt4-ui/Examples/Comprendre Les Femmes Pierre Daco.pdf.md deleted file mode 100644 index 9bab5283c8a0872961454725a0ca052d6c501e14..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Comprendre Les Femmes Pierre Daco.pdf.md +++ /dev/null @@ -1,19 +0,0 @@ - -

            Comprendre Les Femmes: A Classic Book by Pierre Daco on Female Psychology

            -

            If you are looking for a book that will help you understand women better, you might want to check out Comprendre Les Femmes by Pierre Daco. This book, which was first published in 1969, is a classic work on female psychology that explores the deep and eternal aspects of femininity.

            -

            Comprendre Les Femmes Pierre Daco.pdf


            DOWNLOADhttps://urlgoal.com/2uI74S



            -

            Pierre Daco was a Belgian psychoanalyst and author who wrote several books on psychology, sexuality, and spirituality. He was known for his humanistic and holistic approach to psychotherapy, as well as his interest in Eastern philosophy and mysticism.

            -

            In Comprendre Les Femmes, he offers a comprehensive and insightful analysis of the female psyche, based on his clinical experience and his personal observations. He covers topics such as the feminine instinct, the feminine complex, the feminine ideal, the feminine role, the feminine love, and the feminine destiny.

            -

            He also provides practical advice on how to improve the harmony and communication between men and women, and how to foster a more balanced and authentic relationship. He argues that the renewal of the world depends on the renewal of women, who have the potential to bring more peace, wisdom, and creativity to humanity.

            -

            Comprendre Les Femmes is a book that will appeal to anyone who wants to learn more about the nature and essence of women, and who wants to enrich their understanding of themselves and their partners. It is a book that will challenge you to look beyond the superficial and temporary aspects of femininity, and to discover the profound and eternal ones.

            - -

            Comprendre Les Femmes is not only a book about women, but also a book about men. Pierre Daco explains how men and women can complement each other, and how they can overcome their differences and conflicts. He also reveals the secrets of male psychology, and how men can better understand themselves and their partners.

            -

            -

            Pierre Daco was a renowned psychotherapist and author who had extensive clinical experience and a talent for popularizing psychology. He wrote many books for professionals and general readers, covering topics such as stress, dreams, sexuality, spirituality, and psychoanalysis. He was influenced by Charles Baudouin and Carl Gustav Jung, and he was a member of the International Institute of Psychoanalysis and Psychotherapy Charles Baudouin and of the International Foundation for Analytical Psychology.

            -

            Comprendre Les Femmes is a book that has been translated into many languages and has sold millions of copies worldwide. It is a book that has inspired generations of readers who have found in it a source of wisdom, guidance, and inspiration. It is a book that will help you comprehend women, and yourself.

            - -

            Comprendre Les Femmes is a book that has received many positive reviews from readers who have appreciated its depth, clarity, and relevance. Many readers have found in this book a valuable tool to improve their self-knowledge, their relationships, and their well-being. Some readers have also praised the author's style, which is both accessible and engaging.

            -

            Comprendre Les Femmes is also a book that has been criticized by some readers who have found it outdated, stereotypical, or biased. Some readers have argued that the book does not reflect the diversity and complexity of women, and that it relies on generalizations and assumptions that are not supported by scientific evidence. Some readers have also questioned the author's authority and credibility, given his lack of formal training in psychology.

            -

            Comprendre Les Femmes is a book that invites you to form your own opinion and to explore your own experience. It is a book that challenges you to question your beliefs and prejudices, and to open your mind and heart to new perspectives. It is a book that will make you think, feel, and grow.

            e93f5a0c3f
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Hd Movie Golmaal 3 In Hindi REPACK.md b/spaces/stomexserde/gpt4-ui/Examples/Download Hd Movie Golmaal 3 In Hindi REPACK.md deleted file mode 100644 index 931ea29ab03c1ad8f2ceb13af256a99664e25c6b..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Hd Movie Golmaal 3 In Hindi REPACK.md +++ /dev/null @@ -1,19 +0,0 @@ - -

            How to Download HD Movie Golmaal 3 in Hindi for Free

            -

            Golmaal 3 is a 2010 Bollywood comedy film directed by Rohit Shetty and starring Ajay Devgn, Kareena Kapoor, Mithun Chakraborty, Arshad Warsi, Tusshar Kapoor, Kunal Khemu, Shreyas Talpade and Johnny Lever. It is the third installment in the Golmaal series and a sequel to Golmaal Returns (2008). The film follows the hilarious antics of two rival groups of siblings who are forced to live together after their parents get married.

            -

            download hd movie Golmaal 3 in hindi


            Download Filehttps://urlgoal.com/2uI6Cx



            -

            If you are looking for a way to download the HD movie Golmaal 3 in Hindi for free, you have come to the right place. In this article, we will show you some of the best websites and platforms where you can watch or download Golmaal 3 in full HD quality with Hindi audio. You can also find out more about the cast, plot, and reviews of Golmaal 3 here.

            -

            Where to Watch or Download Golmaal 3 in HD Quality with Hindi Audio

            -

            There are many websites and platforms that offer Golmaal 3 for online streaming or downloading. However, not all of them are safe, legal or reliable. Some of them may contain viruses, malware or pop-up ads that can harm your device or compromise your privacy. Some of them may also have low-quality videos, broken links or incomplete downloads. Therefore, it is important to choose a trusted and reputable source for watching or downloading Golmaal 3 in HD quality with Hindi audio.

            -

            Here are some of the best websites and platforms that we recommend for watching or downloading Golmaal 3 in HD quality with Hindi audio:

            -
              -
            • PogoLinks: PogoLinks is a website that provides direct Google Drive download links for Bollywood and Hollywood movies and web series. You can download Golmaal 3 (2010) movie in full HD quality with Hindi audio from PogoLinks in various sizes and formats according to your preference. You can also watch the trailer and read the synopsis of Golmaal 3 on PogoLinks. To download Golmaal 3 from PogoLinks, you need to click on the download button and follow the steps to complete the download[^1^].
            • -
            • JioCinema: JioCinema is an online video streaming platform that offers a wide range of movies, TV shows, music videos and more for Jio users. You can watch Golmaal 3 (2010) movie online on JioCinema in full HD quality with Hindi audio. You can also find out more about the cast, genre and ratings of Golmaal 3 on JioCinema. To watch Golmaal 3 on JioCinema, you need to have a Jio account and a Jio SIM card. You can access JioCinema through the website or the app[^2^] [^3^].
            • -
            • Archive: Archive is a website that offers free access to millions of books, movies, music, software and more. You can download Golmaal (Hindi) movie collection from Archive in various formats and resolutions. The collection includes Golmaal Fun Unlimited (2006), Golmaal Returns (2008) and Golmaal 3 (2010). You can also watch the movies online on Archive. To download or watch Golmaal (Hindi) movie collection from Archive, you need to click on the download options or the play button[^4^].
            • -
            -

            More About Golmaal 3 Movie

            -

            Golmaal 3 is a comedy film that was released on November 5, 2010. It was produced by Dhillin Mehta under the banner of Shree Ashtavinayak Cine Vision Ltd. and distributed by Eros International. The film received mixed reviews from critics and audiences but was a commercial success at the box office. It was one of the highest-grossing Bollywood films of 2010 and won several awards and nominations.

            -

            -

            The plot of Golmaal 3 revolves around two groups of siblings who hate each other but are forced to live together when their parents fall in love and get married. The siblings try to sabotage each other's plans and create chaos in their family.

            81aa517590
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dr.Explain Ultima 5.4.1033 Patch.md b/spaces/stomexserde/gpt4-ui/Examples/Dr.Explain Ultima 5.4.1033 Patch.md deleted file mode 100644 index 936381e0f6cb5e7477f6cee2bd3532ea953c1964..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dr.Explain Ultima 5.4.1033 Patch.md +++ /dev/null @@ -1,30 +0,0 @@ - -

            How to Create Professional Help Systems with Dr.Explain Ultima 5.4.1033 Patch

            -

            If you are looking for a powerful and easy-to-use tool to create help systems, documentation, manuals, or guides for your projects, you should consider Dr.Explain Ultima 5.4.1033 Patch. This software allows you to produce high-quality output in various formats, such as HTML, PDF, CHM, RTF, and more.

            -

            Dr.Explain Ultima 5.4.1033 Patch


            Downloadhttps://urlgoal.com/2uIb0w



            -

            Dr.Explain Ultima 5.4.1033 Patch has a simple and intuitive user interface that lets you work with three main components: the document tree, the page properties, and the WYSIWYG editor. You can add different elements to your pages, such as text, images, tables, links, videos, etc., and see how they will look in the final result.

            -

            One of the most useful features of Dr.Explain Ultima 5.4.1033 Patch is the user interface analyzer, which allows you to capture screenshots of your application and automatically generate annotations for each element. You can also edit the annotations and customize their appearance and behavior.

            -

            Another advantage of Dr.Explain Ultima 5.4.1033 Patch is the collaboration mode, which enables you to work on your project with other people online. You just need to upload your project to the server and invite your team members to join. You can also track the changes and revisions made by each user.

            -

            Dr.Explain Ultima 5.4.1033 Patch is a comprehensive and reliable solution for creating help systems and documentation for any kind of project. It supports multiple languages and has a full Russian localization. You can download it from the official website or use the patch provided by CrackingPatching.com to activate it.

            -

            - -

            In this article, we will show you how to use Dr.Explain Ultima 5.4.1033 Patch to create a help system for a simple calculator application. We will cover the following steps:

            -
              -
            1. Creating a new project and setting up the basic properties.
            2. -
            3. Adding pages and topics to the document tree.
            4. -
            5. Using the user interface analyzer to capture screenshots and generate annotations.
            6. -
            7. Editing the page content and formatting in the WYSIWYG editor.
            8. -
            9. Exporting the project to HTML format and viewing the result.
            10. -
            -

            Let's get started!

            - -

            Creating a new project and setting up the basic properties

            -

            To create a new project in Dr.Explain Ultima 5.4.1033 Patch, you need to click on the File menu and select New Project. You will see a dialog box where you can enter the project name, the output format, the default language, and the project folder. For this example, we will name our project Calculator Help, choose HTML as the output format, select English as the default language, and browse to a folder where we want to save our project files.

            -

            After clicking OK, you will see the main window of Dr.Explain Ultima 5.4.1033 Patch with an empty document tree on the left, a blank page properties panel on the right, and a blank WYSIWYG editor in the center. You can also access the project properties by clicking on the Project menu and selecting Project Properties. Here you can change the title, author, keywords, description, and other settings of your project.

            - -

            Adding pages and topics to the document tree

            -

            The document tree is where you organize the structure and hierarchy of your help system. You can add pages and topics to the document tree by right-clicking on any node and selecting Add Page or Add Topic. A page is a container for one or more topics, while a topic is a unit of information that corresponds to a single web page in HTML format.

            -

            For this example, we will add four pages to our document tree: Introduction, How to Use Calculator, FAQ, and About. We will also add two topics to each page: Overview and Features for Introduction, Basic Operations and Advanced Functions for How to Use Calculator, Common Questions and Troubleshooting for FAQ, and Contact Information and License Agreement for About.
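            To make the page/topic hierarchy concrete, here is a minimal sketch of the document tree described above as a plain Python structure. This is purely illustrative and is not Dr.Explain's own project format; it only mirrors the outline listed in this example.

```python
# Illustrative only: the help-system outline above as a nested Python structure.
# Dr.Explain stores projects in its own format; this simply visualizes the hierarchy.
document_tree = {
    "Introduction": ["Overview", "Features"],
    "How to Use Calculator": ["Basic Operations", "Advanced Functions"],
    "FAQ": ["Common Questions", "Troubleshooting"],
    "About": ["Contact Information", "License Agreement"],
}

for page, topics in document_tree.items():
    print(page)
    for topic in topics:
        # Each topic corresponds to a single web page in the exported HTML help.
        print("  -", topic)
```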

            -

            You can rename any page or topic by double-clicking on its name in the document tree or by editing its properties in the page properties panel. You can also drag and drop any page or topic to change its position or level in the document tree.

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Kabhi Na Kabhi To Miloge Shapit.iso.md b/spaces/stomexserde/gpt4-ui/Examples/Kabhi Na Kabhi To Miloge Shapit.iso.md deleted file mode 100644 index 3a894648240163d5c4236bd3ecc9f42a9ec6b106..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Kabhi Na Kabhi To Miloge Shapit.iso.md +++ /dev/null @@ -1,17 +0,0 @@ - -

            Kabhi Na Kabhi To Miloge: A Romantic Song from Shaapit

            -

            Kabhi Na Kabhi To Miloge is a Hindi song from the 2010 horror film Shaapit, directed by Vikram Bhatt and starring Aditya Narayan, Shweta Agarwal and Rahul Dev. The song is composed by Chirantan Bhatt and sung by Aditya Narayan and Suzanne D'Mello, with lyrics by Sameer. The song expresses the love and longing of a couple who are cursed by an ancient spirit and can never be together.

            -

            kabhi na kabhi to miloge shapit.iso


            Download File ★★★ https://urlgoal.com/2uI7Zg



            -

            The song has a haunting melody and soulful lyrics that convey the emotions of the lovers who are separated by fate. The song also features a rock version that adds more intensity and drama to the situation. The song was well received by the audience and critics alike, and became one of the most popular songs of the year. The song has over 3 million plays on JioSaavn[^1^] and over 3.7 million views on YouTube[^2^].

            -

            The song is also available as an ISO file, which is a disc image file that contains all the data of a CD or DVD. An ISO file can be used to create a backup copy of a disc or to mount it on a virtual drive. The ISO file of Kabhi Na Kabhi To Miloge can be downloaded from various sources on the internet, but one should be careful about the authenticity and legality of the file.
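            Since authenticity is a real concern with disc images from unofficial sources, one common sanity check is to compare the file's checksum against a value published by a source you trust. Below is a minimal Python sketch of that check; the file name and expected hash are placeholders, not real values for this ISO.

```python
import hashlib

# Placeholder values for illustration only; substitute the real file path and a
# checksum published by a source you trust.
iso_path = "kabhi_na_kabhi_to_miloge.iso"
expected_sha256 = "0000000000000000000000000000000000000000000000000000000000000000"

sha256 = hashlib.sha256()
with open(iso_path, "rb") as f:
    # Read in 1 MB chunks so a large disc image does not have to fit in memory.
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)

if sha256.hexdigest() == expected_sha256:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch: do not mount or use this image.")
```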

            -

            Kabhi Na Kabhi To Miloge is a song that will touch your heart and make you feel the pain and passion of love. It is a song that will stay with you for a long time.

            - -

            The song Kabhi Na Kabhi To Miloge is also a part of the soundtrack of Shaapit, which consists of six songs composed by Chirantan Bhatt and Najam Sheraz. The soundtrack was released by T-Series on 4 March 2010. The other songs in the album are Tere Bina Jiya Na Jaye, Ajnabi Hawaayein Bekrara Bahein, Chaahata Dil Tumko Tum Nahin Janate, Hayaati Ye Hayaati Kehati and Shaapit Hua Kya Kya Hota Hain. The soundtrack received positive reviews from critics and listeners, who praised the music for its freshness and variety.

            -

            The film Shaapit is based on the concept of generational curses, where a family is doomed by a vengeful spirit for generations. The film follows the story of Aman (Aditya Narayan) and Kaaya (Shweta Agarwal), who fall in love and decide to get married. However, they discover that Kaaya's family is cursed by a spirit who kills every girl on her 18th birthday. Aman and Kaaya seek the help of a professor (Rahul Dev) who specializes in paranormal phenomena, and embark on a journey to break the curse and save their love.

            -

            -

            The film was released on 19 March 2010 and received mixed reviews from critics. Some praised the film for its horror elements and performances, while others criticized it for its weak script and direction. The film was a moderate success at the box office, earning about 10 crore rupees against a budget of 7 crore rupees.

            - -

            The song Kabhi Na Kabhi To Miloge has also been covered by various artists and singers, who have given their own interpretation and rendition of the song. Some of the notable covers are by Arijit Singh, who sang a live version of the song at a concert in 2014, by Siddharth Slathia, who sang an acoustic version of the song on his YouTube channel in 2016, and by Sanam Puri, who sang a mashup of the song with another popular song Tum Hi Ho from the film Aashiqui 2 in 2017. These covers have also received appreciation and admiration from the fans of the original song.

            -

            The song Kabhi Na Kabhi To Miloge is not only a beautiful song, but also a powerful message of hope and faith. The song inspires us to believe that no matter what obstacles we face in life, we will always find our true love someday. The song also reminds us to cherish and value our loved ones, and to never give up on our dreams. The song is a tribute to the eternal bond of love that transcends time and space.

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/evaluation/swap_generation_from_arranged_result_evaluator.py b/spaces/sunshineatnoon/TextureScraping/swapae/evaluation/swap_generation_from_arranged_result_evaluator.py deleted file mode 100644 index d08a668f5225406fa94675fecd734506409bff36..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/evaluation/swap_generation_from_arranged_result_evaluator.py +++ /dev/null @@ -1,123 +0,0 @@ -import glob -import torchvision.transforms as transforms -import os -import torch -from swapae.evaluation import BaseEvaluator -import swapae.util as util -from PIL import Image - - -class InputDataset(torch.utils.data.Dataset): - def __init__(self, dataroot): - structure_images = sorted(glob.glob(os.path.join(dataroot, "input_structure", "*.png"))) - style_images = sorted(glob.glob(os.path.join(dataroot, "input_style", "*.png"))) - - for structure_path, style_path in zip(structure_images, style_images): - assert structure_path.replace("structure", "style") == style_path, \ - "%s and %s do not match" % (structure_path, style_path) - - assert len(structure_images) == len(style_images) - print("found %d images at %s" % (len(structure_images), dataroot)) - - self.structure_images = structure_images - self.style_images = style_images - self.transform = transforms.Compose( - [ - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - ] - ) - - def __len__(self): - return len(self.structure_images) - - def __getitem__(self, idx): - structure_image = self.transform(Image.open(self.structure_images[idx]).convert('RGB')) - style_image = self.transform(Image.open(self.style_images[idx]).convert('RGB')) - return {'structure': structure_image, - 'style': style_image, - 'path': self.structure_images[idx]} - - -class SwapGenerationFromArrangedResultEvaluator(BaseEvaluator): - """ Given two directories containing input structure and style (texture) - images, respectively, generate reconstructed and swapped images. - The input directories should contain the same set of image filenames. - It differs from StructureStyleGridGenerationEvaluator, which creates - N^2 outputs (i.e. swapping of all possible pairs between the structure and - style images). - """ - @staticmethod - def modify_commandline_options(parser, is_train): - return parser - - def image_save_dir(self, nsteps): - return os.path.join(self.output_dir(), "%s_%s" % (self.target_phase, nsteps), "images") - - def create_webpage(self, nsteps): - if nsteps is None: - nsteps = self.opt.resume_iter - elif isinstance(nsteps, int): - nsteps = str(round(nsteps / 1000)) + "k" - savedir = os.path.join(self.output_dir(), "%s_%s" % (self.target_phase, nsteps)) - os.makedirs(savedir, exist_ok=True) - webpage_title = "%s. iter=%s. 
phase=%s" % \ - (self.opt.name, str(nsteps), self.target_phase) - self.webpage = util.HTML(savedir, webpage_title) - - def add_to_webpage(self, images, filenames, tile=1): - converted_images = [] - for image in images: - if isinstance(image, list): - image = torch.stack(image, dim=0).flatten(0, 1) - image = Image.fromarray(util.tensor2im(image, tile=min(image.size(0), tile))) - converted_images.append(image) - - self.webpage.add_images(converted_images, - filenames) - print("saved %s" % str(filenames)) - #self.webpage.save() - - def set_num_test_images(self, num_images): - self.num_test_images = num_images - - def evaluate(self, model, dataset, nsteps=None): - input_dataset = torch.utils.data.DataLoader( - InputDataset(self.opt.dataroot), - batch_size=1, - shuffle=False, drop_last=False, num_workers=0 - ) - - self.num_test_images = None - self.create_webpage(nsteps) - image_num = 0 - for i, data_i in enumerate(input_dataset): - structure = data_i["structure"].cuda() - style = data_i["style"].cuda() - path = data_i["path"][0] - path = os.path.basename(path) - #if "real_B" in data_i: - # image = torch.cat([image, data_i["real_B"].cuda()], dim=0) - # paths = paths + data_i["path_B"] - sp, gl = model(structure, command="encode") - rec = model(sp, gl, command="decode") - - _, gl = model(style, command="encode") - swapped = model(sp, gl, command="decode") - - self.add_to_webpage([structure, style, rec, swapped], - ["%s_structure.png" % (path), - "%s_style.png" % (path), - "%s_rec.png" % (path), - "%s_swap.png" % (path)], - tile=1) - image_num += 1 - if self.num_test_images is not None and self.num_test_images <= image_num: - self.webpage.save() - return {} - - self.webpage.save() - return {} - - - diff --git a/spaces/supertori/files/stable-diffusion-webui/javascript/imageMaskFix.js b/spaces/supertori/files/stable-diffusion-webui/javascript/imageMaskFix.js deleted file mode 100644 index 9fe7a60309c95b4921360fb09d5bee2b2bd2a73c..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/javascript/imageMaskFix.js +++ /dev/null @@ -1,45 +0,0 @@ -/** - * temporary fix for https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/668 - * @see https://github.com/gradio-app/gradio/issues/1721 - */ -window.addEventListener( 'resize', () => imageMaskResize()); -function imageMaskResize() { - const canvases = gradioApp().querySelectorAll('#img2maskimg .touch-none canvas'); - if ( ! canvases.length ) { - canvases_fixed = false; - window.removeEventListener( 'resize', imageMaskResize ); - return; - } - - const wrapper = canvases[0].closest('.touch-none'); - const previewImage = wrapper.previousElementSibling; - - if ( ! previewImage.complete ) { - previewImage.addEventListener( 'load', () => imageMaskResize()); - return; - } - - const w = previewImage.width; - const h = previewImage.height; - const nw = previewImage.naturalWidth; - const nh = previewImage.naturalHeight; - const portrait = nh > nw; - const factor = portrait; - - const wW = Math.min(w, portrait ? h/nh*nw : w/nw*nw); - const wH = Math.min(h, portrait ? 
h/nh*nh : w/nw*nh); - - wrapper.style.width = `${wW}px`; - wrapper.style.height = `${wH}px`; - wrapper.style.left = `0px`; - wrapper.style.top = `0px`; - - canvases.forEach( c => { - c.style.width = c.style.height = ''; - c.style.maxWidth = '100%'; - c.style.maxHeight = '100%'; - c.style.objectFit = 'contain'; - }); - } - - onUiUpdate(() => imageMaskResize()); diff --git a/spaces/supertori/files/stable-diffusion-webui/style.css b/spaces/supertori/files/stable-diffusion-webui/style.css deleted file mode 100644 index e9f92ecd095d072f8f3b510e4729b574d5ed3b9f..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/style.css +++ /dev/null @@ -1,972 +0,0 @@ -.container { - max-width: 100%; -} - -.token-counter{ - position: absolute; - display: inline-block; - right: 2em; - min-width: 0 !important; - width: auto; - z-index: 100; -} - -.token-counter.error span{ - box-shadow: 0 0 0.0 0.3em rgba(255,0,0,0.15), inset 0 0 0.6em rgba(255,0,0,0.075); - border: 2px solid rgba(255,0,0,0.4) !important; -} - -.token-counter div{ - display: inline; -} - -.token-counter span{ - padding: 0.1em 0.75em; -} - -#sh{ - min-width: 2em; - min-height: 2em; - max-width: 2em; - max-height: 2em; - flex-grow: 0; - padding-left: 0.25em; - padding-right: 0.25em; - margin: 0.1em 0; - opacity: 0%; - cursor: default; -} - -.output-html p {margin: 0 0.5em;} - -.row > *, -.row > .gr-form > * { - min-width: min(120px, 100%); - flex: 1 1 0%; -} - -.performance { - font-size: 0.85em; - color: #444; -} - -.performance p{ - display: inline-block; -} - -.performance .time { - margin-right: 0; -} - -.performance .vram { -} - -#txt2img_generate, #img2img_generate { - min-height: 4.5em; -} - -@media screen and (min-width: 2500px) { - #txt2img_gallery, #img2img_gallery { - min-height: 768px; - } -} - -#txt2img_gallery img, #img2img_gallery img{ - object-fit: scale-down; -} -#txt2img_actions_column, #img2img_actions_column { - margin: 0.35rem 0.75rem 0.35rem 0; -} -#script_list { - padding: .625rem .75rem 0 .625rem; -} -.justify-center.overflow-x-scroll { - justify-content: left; -} - -.justify-center.overflow-x-scroll button:first-of-type { - margin-left: auto; -} - -.justify-center.overflow-x-scroll button:last-of-type { - margin-right: auto; -} - -[id$=_random_seed], [id$=_random_subseed], [id$=_reuse_seed], [id$=_reuse_subseed], #open_folder{ - min-width: 2.3em; - height: 2.5em; - flex-grow: 0; - padding-left: 0.25em; - padding-right: 0.25em; -} - -#hidden_element{ - display: none; -} - -[id$=_seed_row], [id$=_subseed_row]{ - gap: 0.5rem; - padding: 0.6em; -} - -[id$=_subseed_show_box]{ - min-width: auto; - flex-grow: 0; -} - -[id$=_subseed_show_box] > div{ - border: 0; - height: 100%; -} - -[id$=_subseed_show]{ - min-width: auto; - flex-grow: 0; - padding: 0; -} - -[id$=_subseed_show] label{ - height: 100%; -} - -#txt2img_actions_column, #img2img_actions_column{ - gap: 0; - margin-right: .75rem; -} - -#txt2img_tools, #img2img_tools{ - gap: 0.4em; -} - -#interrogate_col{ - min-width: 0 !important; - max-width: 8em !important; - margin-right: 1em; - gap: 0; -} -#interrogate, #deepbooru{ - margin: 0em 0.25em 0.5em 0.25em; - min-width: 8em; - max-width: 8em; -} - -#style_pos_col, #style_neg_col{ - min-width: 8em !important; -} - -#txt2img_styles_row, #img2img_styles_row{ - gap: 0.25em; - margin-top: 0.3em; -} - -#txt2img_styles_row > button, #img2img_styles_row > button{ - margin: 0; -} - -#txt2img_styles, #img2img_styles{ - padding: 0; -} - -#txt2img_styles > label > div, #img2img_styles > label > 
div{ - min-height: 3.2em; -} - -ul.list-none{ - max-height: 35em; - z-index: 2000; -} - -.gr-form{ - background: transparent; -} - -.my-4{ - margin-top: 0; - margin-bottom: 0; -} - -#resize_mode{ - flex: 1.5; -} - -button{ - align-self: stretch !important; -} - -.overflow-hidden, .gr-panel{ - overflow: visible !important; -} - -#x_type, #y_type{ - max-width: 10em; -} - -#txt2img_preview, #img2img_preview, #ti_preview{ - position: absolute; - width: 320px; - left: 0; - right: 0; - margin-left: auto; - margin-right: auto; - margin-top: 34px; - z-index: 100; - border: none; - border-top-left-radius: 0; - border-top-right-radius: 0; -} - -@media screen and (min-width: 768px) { - #txt2img_preview, #img2img_preview, #ti_preview { - position: absolute; - } -} - -@media screen and (max-width: 767px) { - #txt2img_preview, #img2img_preview, #ti_preview { - position: relative; - } -} - -#txt2img_preview div.left-0.top-0, #img2img_preview div.left-0.top-0, #ti_preview div.left-0.top-0{ - display: none; -} - -fieldset span.text-gray-500, .gr-block.gr-box span.text-gray-500, label.block span{ - position: absolute; - top: -0.7em; - line-height: 1.2em; - padding: 0; - margin: 0 0.5em; - - background-color: white; - box-shadow: 6px 0 6px 0px white, -6px 0 6px 0px white; - - z-index: 300; -} - -.dark fieldset span.text-gray-500, .dark .gr-block.gr-box span.text-gray-500, .dark label.block span{ - background-color: rgb(31, 41, 55); - box-shadow: none; - border: 1px solid rgba(128, 128, 128, 0.1); - border-radius: 6px; - padding: 0.1em 0.5em; -} - -#txt2img_column_batch, #img2img_column_batch{ - min-width: min(13.5em, 100%) !important; -} - -#settings fieldset span.text-gray-500, #settings .gr-block.gr-box span.text-gray-500, #settings label.block span{ - position: relative; - border: none; - margin-right: 8em; -} - -#settings .gr-panel div.flex-col div.justify-between div{ - position: relative; - z-index: 200; -} - -#settings{ - display: block; -} - -#settings > div{ - border: none; - margin-left: 10em; -} - -#settings > div.flex-wrap{ - float: left; - display: block; - margin-left: 0; - width: 10em; -} - -#settings > div.flex-wrap button{ - display: block; - border: none; - text-align: left; -} - -#settings_result{ - height: 1.4em; - margin: 0 1.2em; -} - -input[type="range"]{ - margin: 0.5em 0 -0.3em 0; -} - -#mask_bug_info { - text-align: center; - display: block; - margin-top: -0.75em; - margin-bottom: -0.75em; -} - -#txt2img_negative_prompt, #img2img_negative_prompt{ -} - -/* gradio 3.8 adds opacity to progressbar which makes it blink; disable it here */ -.transition.opacity-20 { - opacity: 1 !important; -} - -/* more gradio's garbage cleanup */ -.min-h-\[4rem\] { min-height: unset !important; } -.min-h-\[6rem\] { min-height: unset !important; } - -.progressDiv{ - position: relative; - height: 20px; - background: #b4c0cc; - border-radius: 3px !important; - margin-bottom: -3px; -} - -.dark .progressDiv{ - background: #424c5b; -} - -.progressDiv .progress{ - width: 0%; - height: 20px; - background: #0060df; - color: white; - font-weight: bold; - line-height: 20px; - padding: 0 8px 0 0; - text-align: right; - border-radius: 3px; - overflow: visible; - white-space: nowrap; - padding: 0 0.5em; -} - -.livePreview{ - position: absolute; - z-index: 300; - background-color: white; - margin: -4px; -} - -.dark .livePreview{ - background-color: rgb(17 24 39 / var(--tw-bg-opacity)); -} - -.livePreview img{ - position: absolute; - object-fit: contain; - width: 100%; - height: 100%; -} - -#lightboxModal{ - display: 
none; - position: fixed; - z-index: 1001; - padding-top: 100px; - left: 0; - top: 0; - width: 100%; - height: 100%; - overflow: auto; - background-color: rgba(20, 20, 20, 0.95); - user-select: none; - -webkit-user-select: none; -} - -.modalControls { - display: grid; - grid-template-columns: 32px 32px 32px 1fr 32px; - grid-template-areas: "zoom tile save space close"; - position: absolute; - top: 0; - left: 0; - right: 0; - padding: 16px; - gap: 16px; - background-color: rgba(0,0,0,0.2); -} - -.modalClose { - grid-area: close; -} - -.modalZoom { - grid-area: zoom; -} - -.modalSave { - grid-area: save; -} - -.modalTileImage { - grid-area: tile; -} - -.modalClose, -.modalZoom, -.modalTileImage { - color: white; - font-size: 35px; - font-weight: bold; - cursor: pointer; -} - -.modalSave { - color: white; - font-size: 28px; - margin-top: 8px; - font-weight: bold; - cursor: pointer; -} - -.modalClose:hover, -.modalClose:focus, -.modalSave:hover, -.modalSave:focus, -.modalZoom:hover, -.modalZoom:focus { - color: #999; - text-decoration: none; - cursor: pointer; -} - -#modalImage { - display: block; - margin-left: auto; - margin-right: auto; - margin-top: auto; - width: auto; -} - -.modalImageFullscreen { - object-fit: contain; - height: 90%; -} - -.modalPrev, -.modalNext { - cursor: pointer; - position: absolute; - top: 50%; - width: auto; - padding: 16px; - margin-top: -50px; - color: white; - font-weight: bold; - font-size: 20px; - transition: 0.6s ease; - border-radius: 0 3px 3px 0; - user-select: none; - -webkit-user-select: none; -} - -.modalNext { - right: 0; - border-radius: 3px 0 0 3px; -} - -.modalPrev:hover, -.modalNext:hover { - background-color: rgba(0, 0, 0, 0.8); -} - -#imageARPreview{ - position:absolute; - top:0px; - left:0px; - border:2px solid red; - background:rgba(255, 0, 0, 0.3); - z-index: 900; - pointer-events:none; - display:none -} - -#txt2img_generate_box, #img2img_generate_box{ - position: relative; -} - -#txt2img_interrupt, #img2img_interrupt, #txt2img_skip, #img2img_skip{ - position: absolute; - width: 50%; - height: 100%; - background: #b4c0cc; - display: none; -} - -#txt2img_interrupt, #img2img_interrupt{ - left: 0; - border-radius: 0.5rem 0 0 0.5rem; -} -#txt2img_skip, #img2img_skip{ - right: 0; - border-radius: 0 0.5rem 0.5rem 0; -} - -.red { - color: red; -} - -.gallery-item { - --tw-bg-opacity: 0 !important; -} - -#context-menu{ - z-index:9999; - position:absolute; - display:block; - padding:0px 0; - border:2px solid #a55000; - border-radius:8px; - box-shadow:1px 1px 2px #CE6400; - width: 200px; -} - -.context-menu-items{ - list-style: none; - margin: 0; - padding: 0; -} - -.context-menu-items a{ - display:block; - padding:5px; - cursor:pointer; -} - -.context-menu-items a:hover{ - background: #a55000; -} - -#quicksettings { - width: fit-content; -} - -#quicksettings > div, #quicksettings > fieldset{ - max-width: 24em; - min-width: 24em; - padding: 0; - border: none; - box-shadow: none; - background: none; - margin-right: 10px; -} - -#quicksettings > div > div > div > label > span { - position: relative; - margin-right: 9em; - margin-bottom: -1em; -} - -canvas[key="mask"] { - z-index: 12 !important; - filter: invert(); - mix-blend-mode: multiply; - pointer-events: none; -} - - -/* gradio 3.4.1 stuff for editable scrollbar values */ -.gr-box > div > div > input.gr-text-input{ - position: absolute; - right: 0.5em; - top: -0.6em; - z-index: 400; - width: 6em; -} -#quicksettings .gr-box > div > div > input.gr-text-input { - top: -1.12em; -} - -.row.gr-compact{ - 
overflow: visible; -} - -#img2img_image, #img2img_image > .h-60, #img2img_image > .h-60 > div, #img2img_image > .h-60 > div > img, -#img2img_sketch, #img2img_sketch > .h-60, #img2img_sketch > .h-60 > div, #img2img_sketch > .h-60 > div > img, -#img2maskimg, #img2maskimg > .h-60, #img2maskimg > .h-60 > div, #img2maskimg > .h-60 > div > img, -#inpaint_sketch, #inpaint_sketch > .h-60, #inpaint_sketch > .h-60 > div, #inpaint_sketch > .h-60 > div > img -{ - height: 480px !important; - max-height: 480px !important; - min-height: 480px !important; -} - -/* Extensions */ - -#tab_extensions table{ - border-collapse: collapse; -} - -#tab_extensions table td, #tab_extensions table th{ - border: 1px solid #ccc; - padding: 0.25em 0.5em; -} - -#tab_extensions table input[type="checkbox"]{ - margin-right: 0.5em; -} - -#tab_extensions button{ - max-width: 16em; -} - -#tab_extensions input[disabled="disabled"]{ - opacity: 0.5; -} - -.extension-tag{ - font-weight: bold; - font-size: 95%; -} - -#available_extensions .info{ - margin: 0; -} - -#available_extensions .date_added{ - opacity: 0.85; - font-size: 90%; -} - -#image_buttons_txt2img button, #image_buttons_img2img button, #image_buttons_extras button{ - min-width: auto; - padding-left: 0.5em; - padding-right: 0.5em; -} - -.gr-form{ - background-color: white; -} - -.dark .gr-form{ - background-color: rgb(31 41 55 / var(--tw-bg-opacity)); -} - -.gr-button-tool, .gr-button-tool-top{ - max-width: 2.5em; - min-width: 2.5em !important; - height: 2.4em; -} - -.gr-button-tool{ - margin: 0.6em 0em 0.55em 0; -} - -.gr-button-tool-top, #settings .gr-button-tool{ - margin: 1.6em 0.7em 0.55em 0; -} - - -#modelmerger_results_container{ - margin-top: 1em; - overflow: visible; -} - -#modelmerger_models{ - gap: 0; -} - - -#quicksettings .gr-button-tool{ - margin: 0; - border-color: unset; - background-color: unset; -} - -#modelmerger_interp_description>p { - margin: 0!important; - text-align: center; -} -#modelmerger_interp_description { - margin: 0.35rem 0.75rem 1.23rem; -} -#img2img_settings > div.gr-form, #txt2img_settings > div.gr-form { - padding-top: 0.9em; - padding-bottom: 0.9em; -} -#txt2img_settings { - padding-top: 1.16em; - padding-bottom: 0.9em; -} -#img2img_settings { - padding-bottom: 0.9em; -} - -#img2img_settings div.gr-form .gr-form, #txt2img_settings div.gr-form .gr-form, #train_tabs div.gr-form .gr-form{ - border: none; - padding-bottom: 0.5em; -} - -footer { - display: none !important; -} - -#footer{ - text-align: center; -} - -#footer div{ - display: inline-block; -} - -#footer .versions{ - font-size: 85%; - opacity: 0.85; -} - -#txtimg_hr_finalres{ - min-height: 0 !important; - padding: .625rem .75rem; - margin-left: -0.75em - -} - -#txtimg_hr_finalres .resolution{ - font-weight: bold; -} - -#txt2img_checkboxes, #img2img_checkboxes{ - margin-bottom: 0.5em; - margin-left: 0em; -} -#txt2img_checkboxes > div, #img2img_checkboxes > div{ - flex: 0; - white-space: nowrap; - min-width: auto; -} - -#img2img_copy_to_img2img, #img2img_copy_to_sketch, #img2img_copy_to_inpaint, #img2img_copy_to_inpaint_sketch{ - margin-left: 0em; -} - -#axis_options { - margin-left: 0em; -} - -.inactive{ - opacity: 0.5; -} - -[id*='_prompt_container']{ - gap: 0; -} - -[id*='_prompt_container'] > div{ - margin: -0.4em 0 0 0; -} - -.gr-compact { - border: none; -} - -.dark .gr-compact{ - background-color: rgb(31 41 55 / var(--tw-bg-opacity)); - margin-left: 0; -} - -.gr-compact{ - overflow: visible; -} - -.gr-compact > *{ -} - -.gr-compact .gr-block, .gr-compact .gr-form{ - 
border: none; - box-shadow: none; -} - -.gr-compact .gr-box{ - border-radius: .5rem !important; - border-width: 1px !important; -} - -#mode_img2img > div > div{ - gap: 0 !important; -} - -[id*='img2img_copy_to_'] { - border: none; -} - -[id*='img2img_copy_to_'] > button { -} - -[id*='img2img_label_copy_to_'] { - font-size: 1.0em; - font-weight: bold; - text-align: center; - line-height: 2.4em; -} - -.extra-networks > div > [id *= '_extra_']{ - margin: 0.3em; -} - -.extra-network-subdirs{ - padding: 0.2em 0.35em; -} - -.extra-network-subdirs button{ - margin: 0 0.15em; -} - -#txt2img_extra_networks .search, #img2img_extra_networks .search{ - display: inline-block; - max-width: 16em; - margin: 0.3em; - align-self: center; -} - -#txt2img_extra_view, #img2img_extra_view { - width: auto; -} - -.extra-network-cards .nocards, .extra-network-thumbs .nocards{ - margin: 1.25em 0.5em 0.5em 0.5em; -} - -.extra-network-cards .nocards h1, .extra-network-thumbs .nocards h1{ - font-size: 1.5em; - margin-bottom: 1em; -} - -.extra-network-cards .nocards li, .extra-network-thumbs .nocards li{ - margin-left: 0.5em; -} - -.extra-network-thumbs { - display: flex; - flex-flow: row wrap; - gap: 10px; -} - -.extra-network-thumbs .card { - height: 6em; - width: 6em; - cursor: pointer; - background-image: url('./file=html/card-no-preview.png'); - background-size: cover; - background-position: center center; - position: relative; -} - -.extra-network-thumbs .card:hover .additional a { - display: inline-block; -} - -.extra-network-thumbs .actions .additional a { - background-image: url('./file=html/image-update.svg'); - background-repeat: no-repeat; - background-size: cover; - background-position: center center; - position: absolute; - top: 0; - left: 0; - width: 24px; - height: 24px; - display: none; - font-size: 0; - text-align: -9999; -} - -.extra-network-thumbs .actions .name { - position: absolute; - bottom: 0; - font-size: 10px; - padding: 3px; - width: 100%; - overflow: hidden; - white-space: nowrap; - text-overflow: ellipsis; - background: rgba(0,0,0,.5); - color: white; -} - -.extra-network-thumbs .card:hover .actions .name { - white-space: normal; - word-break: break-all; -} - -.extra-network-cards .card{ - display: inline-block; - margin: 0.5em; - width: 16em; - height: 24em; - box-shadow: 0 0 5px rgba(128, 128, 128, 0.5); - border-radius: 0.2em; - position: relative; - - background-size: auto 100%; - background-position: center; - overflow: hidden; - cursor: pointer; - - background-image: url('./file=html/card-no-preview.png') -} - -.extra-network-cards .card:hover{ - box-shadow: 0 0 2px 0.3em rgba(0, 128, 255, 0.35); -} - -.extra-network-cards .card .actions .additional{ - display: none; -} - -.extra-network-cards .card .actions{ - position: absolute; - bottom: 0; - left: 0; - right: 0; - padding: 0.5em; - color: white; - background: rgba(0,0,0,0.5); - box-shadow: 0 0 0.25em 0.25em rgba(0,0,0,0.5); - text-shadow: 0 0 0.2em black; -} - -.extra-network-cards .card .actions:hover{ - box-shadow: 0 0 0.75em 0.75em rgba(0,0,0,0.5) !important; -} - -.extra-network-cards .card .actions .name{ - font-size: 1.7em; - font-weight: bold; - line-break: anywhere; -} - -.extra-network-cards .card .actions .description { - display: block; - max-height: 3em; - white-space: pre-wrap; - line-height: 1.1; -} - -.extra-network-cards .card .actions .description:hover { - max-height: none; -} - -.extra-network-cards .card .actions:hover .additional{ - display: block; -} - -.extra-network-cards .card ul{ - margin: 0.25em 0 0.75em 
0.25em; - cursor: unset; -} - -.extra-network-cards .card ul a{ - cursor: pointer; -} - -.extra-network-cards .card ul a:hover{ - color: red; -} - -[id*='_prompt_container'] > div { - margin: 0!important; -} diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ram Jaane Full TOP Movie Download In Mp4.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ram Jaane Full TOP Movie Download In Mp4.md deleted file mode 100644 index 322d41d38c47d2f1d0edc6067687cb570807886e..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ram Jaane Full TOP Movie Download In Mp4.md +++ /dev/null @@ -1,8 +0,0 @@ -

            Ram Jaane Full Movie Download In Mp4


            DOWNLOAD: https://cinurl.com/2uEXXR



            - -Tags: Ram jaane full movie Video Songs, Ram jaane full movie HD video, 3gp Ram jaane full movie Download, mp4 Ram jaane full movie movie songs, Ram jaane ...Tags: ram jaane full movies, ram jaane full movie, ram jaane watch online full movie, ram jaane full movie download, ram jaane full movie watch online, ram jaane full movie free download -The film is available in full and in good quality; you can watch it on this site. ... 8a78ff9644
            -
            -
            -

            diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Red Alert 2 Cheats Tool Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Red Alert 2 Cheats Tool Download.md deleted file mode 100644 index 68fa32bc51174c67c4cb364b6e9462cc455b63ee..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Red Alert 2 Cheats Tool Download.md +++ /dev/null @@ -1,16 +0,0 @@ -

            red alert 2 cheats tool download


            Download File ►►► https://cinurl.com/2uEXsy



            -
            -The Trainers, Cheats, Game Patch Fixes are available for Download and are updated daily so check back often to make sure you have the latest version of the game trainers, cheats and game patches for your version of the game. Also click here to download the latest versions of the Trainer and Cheat Utilities for this game if you haven't downloaded it already. - -Find more tips about this game or any other games by reading the 'Game Tips' and 'Game News' articles found on this site. - -Master-level Trainers, Cheats, Game Patches, Game Tweaks, Tricks, Trainers, Resources and Games Patch Fixes for Trainers, Cheats, Game Patches, Game Tweaks, Trainers, Resources and Games Patch Fixes are featured on this page. The Trainers, Cheats, Game Patches, Game Tweaks, Trainers, Resources and Games Patch Fixes are updated regularly so check back often to make sure you have the latest version of the game trainers, cheats and game patches for your version of the game. Also click here to download the latest versions of the Trainer and Cheat Utilities for this game if you haven't downloaded it already. - -Warhammer 40,000: Dawn of War - Retribution Cheats, Trainer & Game Patch Fixes are featured on this page. The Trainers, Cheats, Game Patches, Game Tweaks, Trainers, Resources and Games Patch Fixes are available for Download and are updated daily so check back often to make sure you have the latest version of the game trainers, cheats and game patches for your version of the game. Also click here to download the latest versions of the Trainer and Cheat Utilities for this game if you haven't downloaded it already. - -This page is reserved for Staff and Games Specialists. If you are an Admin and would like to contact a staff member on this page, please use the Contact button in the Main Menu.If you are contacting the Games Specialist team for help with game issues, please use the Contact button in the Main Menu. - -To the right is the list of Cheats, Game Fixes, Game Patches and Trainer releases for Command & Conquer: Red Alert 2 4fefd39f24
            -
            -
            -

            diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/The Fantastic Four (English) Movie Dual Audio Hindi Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/The Fantastic Four (English) Movie Dual Audio Hindi Torrent.md deleted file mode 100644 index d196a51e4b7103457beb52f3611a4d4d1e194c77..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/The Fantastic Four (English) Movie Dual Audio Hindi Torrent.md +++ /dev/null @@ -1,30 +0,0 @@ -

            the Fantastic Four (English) movie dual audio hindi torrent


            Download File »»» https://cinurl.com/2uEXA1



            -
            -Region Code: Region-A. Produced by: Avt. Film. Directed by: Tim Miller. Written by: Eric Pearson, Bill Pearson, David Heyman, Simon Kinberg, Jeph Loeb & Roberto Orci. Story by: Simon Kinberg, Jeph Loeb & Roberto Orci. Featuring Marvel's F*(x) four heroes -- The Thing, Human Torch, The Invisible Woman & The Leader -- in a gripping action adventure that will test their strength and sacrifice to save a world that no longer wants them. - -(Note: The movie's actual running time is 138 minutes.) - -Fantastic Four - -[Action Adventure] - -[Editing: 5 min] - -Storyline: When the Human Torch finds himself shrunk to 4 inches, he discovers the power to shrink other people down to a size to fit in the bottle he carries around. The Fantastic Four soon follow suit. But their venture into the Negative Zone leads to their being captured by the rogue and evil genius, Dr. Doom. - -But when they’re rescued and returned to the Human Torch, they find themselves imprisoned in a place called Latveria. And they’re greeted there by Dr. Doom who is also the one-time ruler of the country. - -The Fantastic Four soon discover that the Invisible Woman is in a relationship with the Black Knight. - -After some hard words with their family, the Fantastic Four decide to leave. But they can’t take anything of value with them. The power rings from the Negative Zone simply disappear. They simply leave them there. - -Suddenly, they find themselves transported back to the Negative Zone where they are pursued by creatures that want to take the power rings for themselves. - -The human race is on the verge of extinction. Mankind is poised on the edge of oblivion. And Doctor Doom, the Nazi scientist who knows about the ancient power of the Fantastic Four is on the verge of conquering them with the Infinity Gem. And he’s working with the Time-Master. - -The leader of the Fantastic Four, The Thing, decides to fight back. But to protect their teammates from Dr. Doom and the Time Master, they must venture into the Negative Zone. There they’re attacked by monsters and near to being killed. But they’re saved by one of their own who is on the brink of oblivion. - -On a mission to save the world, 4fefd39f24
            -
            -
            -

            diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/[MM]-NFO-tools(edit-create-read-nfos) Crack.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/[MM]-NFO-tools(edit-create-read-nfos) Crack.md deleted file mode 100644 index bb2090f47dc4cbe7c9bd3668deba21298b59758c..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/[MM]-NFO-tools(edit-create-read-nfos) Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

            [MM]-NFO-tools(edit-create-read-nfo's) crack


            Download Zip: https://cinurl.com/2uEXuF



            - -[MM]-NFO-tools(edit-create-read-nfo's) Crack lindillan. 2021.01.14 06:24. Related articles. 2020 Edraw Max 7.9.4 Serial Number. 2021.01.14 06:50 · Telerik Ui For ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/syedislamuddin/base_editors/app.py b/spaces/syedislamuddin/base_editors/app.py deleted file mode 100644 index 45f52b7d4203aa2b007151da346044ab1e4188b8..0000000000000000000000000000000000000000 --- a/spaces/syedislamuddin/base_editors/app.py +++ /dev/null @@ -1,531 +0,0 @@ -#from turtle import shape -import streamlit as st -#from st_keyup import st_keyup -import pandas as pd -import numpy as np -from st_aggrid import AgGrid, GridOptionsBuilder,GridUpdateMode,DataReturnMode - - -import time -import os -from PIL import Image - - -#if 'select_method' not in st.session_state: -# st.session_state['select_method'] = 'temp' -#if 'method' not in st.session_state: -# st.session_state['method'] = 'temp' - -def transform(df,str): - # Select columns - #cols = st.multiselect('Please select columns to save current Table as csv file', - cols = st.multiselect(str, - df.columns.tolist(), - df.columns.tolist() - ) - df = df[cols] - return df -@st.cache -def convert_df1(df): - return df.to_csv(index=False).encode('utf-8') - -def convert_df(df): - return df.to_csv().encode('utf-8') - -def display_res(method,sep,rsid): -#if method == 'bystander_ABE8e_mean': - st.header(select_method+' with: '+method+' option') - fnm=cwd+select_method+'/'+select_method+'_'+method+'.csv' - #data = pd.read_csv(fnm, sep=',') - data = pd.read_csv(fnm, sep=sep) - - #get snp data - if len(variant_spl) > 1: #variant_spl has two components - #data_snp = data[data['rs_id'].str.contains(variant_spl[1])] - data_snp = data[data[rsid].str.contains(variant_spl[1])] - data_snp[rsid]=variant_spl[0]+':'+data_snp[rsid] - else: - #data_snp = data[data['rs_id'].str.contains(variant_spl[0])] - data_snp = data[data[rsid].str.contains(variant_spl[0])] - data_snp[rsid]='NaN'+':'+data_snp[rsid] - data_snp.reset_index(drop=True, inplace=True) - - if data_snp.shape[0]>0: - df = transform(data_snp,'Please Select columns to save whole table') - #fname = st_keyup("Please input file name to save Table", value='temp') #st.text_input('Please input file name to save Table', 'temp', live=True) - - csv = convert_df(df) - - st.download_button( - label="Download Table as CSV file", - data=csv, - #file_name=fname+'.csv', - file_name=method+'_'+variant_spl[0]+'.csv', - mime='text/csv', - ) - - if len(variant_spl) > 1: - f""" - **Results for SNP: {variant_spl[1]} on GENE: {variant_spl[0]}** - """ - else: - f""" - **Results for SNP: {variant_spl[0]} on GENE: NAN** - """ - #AgGrid(data_snp) - st.markdown(table_edit,unsafe_allow_html=True) - gb = GridOptionsBuilder.from_dataframe(data_snp) - gb.configure_pagination(enabled=False)#,paginationAutoPageSize=False)#True) #Add pagination - gb.configure_default_column(enablePivot=True, enableValue=True, enableRowGroup=True) - gb.configure_selection(selection_mode="multiple", use_checkbox=True) - - gb.configure_side_bar() - gridOptions = gb.build() - - grid_response = AgGrid( - data_snp, - height=200, - gridOptions=gridOptions, - enable_enterprise_modules=True, - update_mode=GridUpdateMode.MODEL_CHANGED, - data_return_mode=DataReturnMode.FILTERED_AND_SORTED, - fit_columns_on_grid_load=False, - header_checkbox_selection_filtered_only=True, - use_checkbox=True, - width='100%' - ) - - #data = grid_response['data'] - 
selected = grid_response['selected_rows'] - if selected: - st.write('Selected rows') - - dfs = pd.DataFrame(selected) - st.dataframe(dfs[dfs.columns[1:dfs.shape[1]]]) - - dfs1 = transform(dfs[dfs.columns[1:dfs.shape[1]]],'Please select columns to save selected Table') - #csv = convert_df1(dfs[dfs.columns[1:dfs.shape[1]]]) - csv = convert_df1(dfs1) - - - st.download_button( - label="Download data as CSV", - data=csv, - file_name=method+'_'+variant_spl[0]+'.csv', - mime='text/csv', - ) - - - -st. set_page_config(layout="wide") - -cwd=os.getcwd()+'/'+'data/' - -#get snps list -snps = pd.read_csv("SNPS.csv") -variants=snps['GENE:SNP'].unique() -variants_s=sorted(variants,key=len) - -caution = '

            Please note that not all variants are necessarily targeted.

            ' - -tips = '

            Important Tool Tips:

            ' - -table_edit = '

            About Table: Please note that the table can be sorted by clicking on any column, and multiple rows can be selected (by clicking the check box in the first column) to save only those rows.

            ' - - - -st.title('Single Base Editiors') - -st.sidebar.image("logo-card-white.png", use_column_width=True) -#ReadMe = st.sidebar.radio('ReadME',value=True) - -#Calc = st.sidebar.radio('Selection Menu') -Calc = st.sidebar.radio( - "", - ('ReadME', 'Selection Menu')) - -#if Calc: - -#st.sidebar.title("Selection Menu") -if Calc == 'ReadME': - #st.markdown("[Introduction](#Introduction)") - #st.markdown("[How do base editors work](#How-do-base-editors-work)") - st.header('How to use this app') - st.markdown('Please note that all tools were run using Human Genome **(hg38)**. Each tool require **specific input format** (described for each tool selected from the sidebar when **Selection Menue is enabled**) and **output results** in different formats **(with different columns based on method selected as described under each tool)**. Some of these tools also allow selection of various **endonucleases and related options**, their **reulsts are provided as radio controls** in the sidebar of this app under each tool.') - st.markdown('**Requirements:** 1) Python3.4 or higher and 2) streamlit 1.13') - st.markdown('To start this app, **unzip** the base_editor_app.zip in a folder of your choice') - st.markdown('Open shell terminal and **cd to base_editor_app folder**') - st.markdown('Type: **streamlit run baserditorsV3.py**, It will launch baseeditor app in the default browser') - st.markdown('**By default** README radio button is enabled to describe general information about the App and How to use it.') - st.markdown("- Please enable **Selection Menu** radio control in the sidebar **to enable variant, tool and endonuclease options**") - st.markdown("- Select Desired Variant from the dropdown list") - st.markdown("- Select a Tool from the dropdown list") - st.markdown("- Select one of the options **(if available)**") - - - st.header('Introduction:') - st.markdown('This app **reviewes** popular single base quality estimators for a **list [1](https://drive.google.com/file/d/1Sxb-Cc-epbs6vujQaX9wa5acqus0RW3q/view?usp=sharing) of rsIDs** per disease of interest based on CARD’s cross-NDD efforts. We filtered our candidate list of **base edit predictors** for those that are at least **semi-automated and reproducible** (no copy and pasting IDs or sequences one at a time.') - st.markdown('Two categories of DNA base editors **(BEs)** are, a) cytosine base editors **(CBEs: C/G -> T/A converters)** and b) adenine base editors **(ABEs: A/T -> G/C converters)**, as shown in Figure below. While base editors can only introduce 4 edits, **Prime Editors** on the other hand can do all 12 edits using usual Cas9 (and its variants) and a gRNA called prime editing guide RNA (**pegRNA**). We also tested a **prime editor** and an **RNA editor for gene knockdown** for these targets') - image = Image.open('CBE_ABE.webp') - st.image(image, caption='Cytosine and Adenine base editors. 
Figure from: https://www.nature.com/articles/s41573-020-0084-6') - st.header('How do base editors work') - st.markdown("**Base editing requires three elements:**") - st.markdown("- A Cas nickase (Cas9 with mutation in RuvC nuclease domain, which enables it to nick but not cleave DNA) or Cas fused to a deaminase that makes the edit.") - st.markdown("- A single guide RNA (sgRNA which is composed of target-specific CRISPR RNA (crRNA) and an auxiliary trans-activating crRNA (trcrRNA) joined by linker loop) targeting Cas9 to a specific DNA locus") - st.markdown("- A target base for editing within the editing window specified by the Cas9 protein") - st.markdown('The Cas9 protein has six domains, REC I (responsible for binding guide RNA), REC II, Bridge Helix, PAM Interacting (**confers PAM specificity and is responsible for initiating binding to target DNA**), HNH and RuvC (**each cut single-stranded DNA after 3rd base upstream of PAM**). Cas9 and its variants are highly specific to various PAM sequences and have two endonuclease domains: the n-terminal RuvC-like nuclease domain and the HNH-like nuclease domain near the center of the protein') - st.markdown('A whole range of CBEs and ABEs have been developed. Various CEBs ranging from **simple** deactivated Cas9 (dCas9)+cytidine deaminsae+uracil DNA glycosylase inhibitor (UGI) to improved single mutated Cas9 (nCas9)+cytidine deaminsae+uracil DNA glycosylase inhibitor (UGI) called BE3 systems and its variants such as Target-AID editors were developed. 4th generation BEs (called BE4, such as BE4max etc which focus on improving editors delivery to the nucleus) further minimize undesired base conversions that can happen with BE33.') - st.markdown('Similar to CBEs, **Adenine base editors (ABEs)** such as ABEmax, ABE4max, ABE8e and ABE8s were also developed.') - #st.markdown('Similar to CBEs, adenine base editor such as ABEmax, ABE4max, ABE8e and ABE8s were also developed.') - #st.markdown("**Key parameters for a good BE are:**") - #st.markdown("- Editing efficiency: 4th generation base editiors **BE4max and ABE4max [2](https://www.nature.com/articles/nbt.4172), ABE8s [3](https://www.nature.com/articles/s41587-020-0491-6) and Target-AID (dual base) [4](https://www.nature.com/articles/s41587-020-0535-y)**") - #st.markdown("- Editing efficiency") - #st.markdown("- Minimal off-target effects") - st.header('Base editor tools reviewed') - st.markdown('We have reviewed a total of 6 tools in the public domain which are **at least semi-automated and reproducible** (no copy and pasting IDs or sequences one at a time). These tools offer a wide range of options ranging from **HDR** based edits to improved **single base editors** to precise base editing such as **Prime editing**. Furthermore, many of these tools offer variety of PAM sequences expanding the number of available target sites for base editing') - """ - - [BE-DICT](http://130.60.24.130/page-set?actionID=5f8c494b8c854d0029ffa9d3) - - An attention based deep learning algorithm for based editing outcomes prediction [Paper](https://www.nature.com/articles/s41467-021-25375-z). - - Options: ABE8e, ABEmax, BE4max, Target-AID. - - [ChopChop](https://chopchop.cbu.uib.no) - - This tool offers various Endonucleaes (Cas9, nCas9, Cpf1 (also known as Cas12a and **only contains crRNA**), CasX (generates staggered double-stranded break) and **Cas13 (also known as C2c2)) RNA editor**) and PAM options. 
Results for following options are reported in this app: - - Cas13a, CasX_TTCN, Cpf1_TTN, NGG (Cas9), Nickase_NGG, Nickase_NRG. - - [E-CRISP](http://www.e-crisp.org/E-CRISP/) - - This tool offers relaxed, medium and strict options for PAM sequence. - - [GuideScan2](https://guidescan.com) - - This tool offers Cas9 and Cpf1 endonucleases with various options to filter out results. Results based on 30, 40 and 50bp (SNP location = (n/2)bp) input sequence range (for Cas9 and Cpf1) are reported in this app: - - 30bp_cpf1, 30bp_NGG, 40bp_cpf1, 40bp_NGG, 50bp_cpf1, 50bp_NGG - - [PnB Designer](https://fgcz-shiny.uzh.ch/PnBDesigner/) - - This tool allows base editing as well as **prime editing**. Results reported in this app are based on: - - Base_editing_guides, Nicking_guides, pegRNA_oligos - - [SNP_CRISPR](https://www.flyrnai.org/tools/snp_crispr/web/) - - This tool offers guides for NGG and NAG PAM sequences and are reporoted in this app: - - NGG, NAG - """ - - -else: -#if Calc == 'Selection Menu': - #ReadMe = st.sidebar.checkbox('ReadME',value=False) - select_variant = st.sidebar.selectbox( - "Please select variant", - variants_s - ) - - variant_spl=select_variant.split() - select_method = st.sidebar.selectbox( - "Please select a Tool", - ("BE-DICT", "ChopChop","E-CRISP","GuideScan2","PnB Designer","SNP_CRISPR") - ) - - if select_method == "ChopChop": - method = st.sidebar.radio( - "Please select an option", - ('Cas13a', 'CasX_TTCN','Cpf1_TTN','CRISPR-CAS9_NGG', 'Nickase_NGG','Nickase_NRG')) - - if select_method == "BE-DICT": - method = st.sidebar.radio( - "Please select an option", - ('bystander_ABE8e_mean', 'bystander_ABEmax_mean_5','bystander_BE4max_mean','bystander_Target-AID_mean')) - if select_method == "GuideScan2": - method = st.sidebar.radio( - "Please select an option", - ('30bp_cpf1', '30bp_NGG','40bp_cpf1', '40bp_NGG','50bp_cpf1', '50bp_NGG')) - - if select_method == "PnB Designer": - method = st.sidebar.radio( - "Please select an option", - ('Base_editing_guides', 'Nicking_guides_PE3_PE3b','pegRNA_oligos')) - - if select_method == "SNP_CRISPR": - method = st.sidebar.radio( - "Please select an option", - ('NAG', 'NGG')) - - #BE-DICT - if select_method == "BE-DICT": - st.markdown("**Summary**") - st.markdown("BE-DICT predicts base editiong outcomes for **4 commonly used base editors (BEs)**. 
It uses an attention-based deep learning algorithm (based on transformer-encoder architecture) trained on high-throughput target library (of 28,394 target sequences) screens to predict single base editing (Both **Adenine** {A.T -> G.C} and **Cytosine** {C.G -> T.A} **Base Editors**) outcomes.") - """ - - Adenine Base Editors (ABEs) - - Based on Adenine deaminase ecTad7.10 **(ABEmax)** and ecTadA-8e **(ABE8e)** - - Cytosine Base Editors (CBEs) - - Based on Cytosine deaminase rAPOPEC1 **(CBE4max)** and **Target-AID** - """ - st.markdown("Most base editors convert target bases in a ~5- nucleotide region within the protospacer target sequence and undesired **bystander** editing of additional C or A bases in the editing window are common.") - st.markdown("**All results shown here are based on bystander models for each ABE and CBE.**") - st.write("[BE-DICT Web App](http://130.60.24.130/page-set?actionID=5f8c494b8c854d0029ffa9d3)") - st.write("[BE-DICT Paper](https://www.nature.com/articles/s41467-021-25375-z)") - st.markdown(caution,unsafe_allow_html=True) - st.markdown("- Input: 2-column csv: col1=Inp_seq, col2=seq_id, where **Inp_seq is a 20 bp** target sequence and seq_id is an identification.") - st.markdown("- Output: 4 column csv file") - st.markdown("- **Columns of interest**: Output_seq and Pred_score columns **(Higher is better)**.") - st.markdown("**Batch mode** can be run from [here](http://130.60.24.130/page?actionID=607552549609a200293b663f)") - st.markdown("**Please note that: Only one of all possible alleles is used to generate this output.**") - st.markdown(tips,unsafe_allow_html=True) - st.markdown('This tool uses an **attention based deep learning framework** for base predictions and employ four different algorithms/models for prediction: ABEMax, BE4max, ABE8e, Target-AID.') - st.markdown('**Please note that this tool only targets NGG PAM**') - - display_res(method,',','rs_id') - - - - - #ChopChop - if select_method == "ChopChop": - st.markdown("**Summary**") - st.markdown("ChopChop is a versatile tool that identifies CRISPR–Cas single guide RNA (sgRNA) targets for DNA and RNA including targeted enrichment of loci for long-read sequencing for over **200** genomes and **3** transcriptomes. It offers a wide range of selection of **CRISPR effectors** (Cas9, CasX or Cas13), **Species**, and **Purpose** (knockout, knockdown, activation, repression, enrichment) alongside a variety of **Options** including selection of specific region, PAM sequences, various efficiency measures, primers and many more.") - st.markdown("**Please note that not all options results in an efficienc score (0 is reported in efficiency column).**") - st.write("[ChopChop Web App](https://chopchop.cbu.uib.no)") - st.write("[ChopChop Paper](https://academic.oup.com/nar/article/47/W1/W171/5491735)") - st.markdown(caution,unsafe_allow_html=True) - st.markdown("- Input: A text file containing chr:start-end per line for each snp. Ex: chr1:152220450-152220451") - st.markdown("- Output: A tab separated text file") - st.markdown("- **Columns of interest**: Target sequence and Efficiency (**higher the better**). 
Please note that not all options have Efficiency defined.[Ref](https://chopchop.cbu.uib.no/instructions)") - st.markdown("**Instructions to run Batch mode** can be found [here](https://bitbucket.org/valenlab/chopchop/src/master/)") - - st.markdown(tips,unsafe_allow_html=True) - """ - - This tool offers sgRNA design for: - - **DNA using:** - - CRISPR/Cas9 system for knockout, knockin, activation, repression and nanopore enrichment. - - CRISPR/Cpf1 or CasX system for knockout, activation, repression and nanopore enrichment. - - CRISPR/Cas9 nickase system for knockout and knockin. - - TALEN system for knockout. - - **RNA using:** - - CRISPR/Cas13 (c2c2) for knockdown. - **This tool also offers a variety of PAM sequences and other filtering options.** - """ - st.markdown("**Please note that the tool was run for CRISPR/cas9 for NGG (knock-out), CRISPR/cas9-nickase (knock-out) for NGG and NRG (R=A or G), CRISPR/cpf1 for TTN, CRISPR/CasX for TTCN PAM and CRISPR/cas13(c2c2).**") - - display_res(method,'\t','snp_id') - #E-CRISP - if select_method == "E-CRISP": - st.markdown("**Summary**") - st.markdown("E-CRISP is used to design gRNA sequences **(for 12 organisms)**. E-CRISP can also be used to reevaluate CRISPR constructs for on- or off-target sites and targeted genomic loci. It identifies target sequences complementary to the gRNA ending in a 3ʹ protospacer-adjacent motif (PAM), N(G or A)G and uses a fast indexing approach to find binding sites and a binary interval tree for rapid annotation of putative gRNA target sites.") - st.markdown("**Off-target** effects and target-site homology are evaluated using Bowtie2 aligner. Designs are **shown** in the output if the number of **off-targets does not exceed a user-specified threshold**. **More than one** design targeting a desired locus are **ranked** according to on-target specificity and number of off-targets.") - - st.write("[E-CRISP Web App](http://www.e-crisp.org/E-CRISP/)") - st.write("[E-CRISP Paper](https://www.nature.com/articles/nbt.3026)") - st.markdown(caution,unsafe_allow_html=True) - """ - - Input: Multiple lines provided in the Input fasta sequence edit box in the webapp **[here](http://www.e-crisp.org/E-CRISP/index.html)** in the following format - - Line1: rs12726330 - - Line2: CGGGACATGGAAGAGGTCTGGACCAGGGTACTGGGAAGGCGCTCGGAGGA - - Line3: rs76763715 - - Line4: CCAGCCGACCACATGGTACAGGAGGTTCTAGGGTAAGGACAAAGGCAAAG - - and so on - """ - st.markdown("- Output: A tab separated .tab file") - st.markdown("- **Columns of interest**: Efficiency Score (E Score, **Higher the better**) [Ref](https://www.nature.com/articles/nbt.3026) and Specificity Score (S Score, **Higher the better** (max = 100))") - - st.markdown(tips,unsafe_allow_html=True) - """ - - This tool offers single or paired sgRNA and: - - **Options for PAM:** - - **Relaxed** - - **Medium:** - - **Strict** - - **Options for Design:** - - knockdown. - - knockin. - - N/C terminal tagging. - - CRISPRi. - - CRISPRa. 
- - **Other filtering options.** - """ - st.markdown("**Please note that the result reported here are for PAM=NGG**") - - st.header(select_method) - fnm=cwd+select_method+'/'+select_method+'_NGG'+'.csv' - data = pd.read_csv(fnm, sep=',') - #get snp data - #data_snp = data[data['Name'].str.contains(variant_spl[0])] - - if len(variant_spl) > 1: #variant_spl has two components - #data_snp = data[data['rs_id'].str.contains(variant_spl[1])] - data_snp = data[data['Name'].str.contains(variant_spl[1])] - data_snp[rsid]=variant_spl[0]+':'+data_snp[rsid] - else: - #data_snp = data[data['rs_id'].str.contains(variant_spl[0])] - data_snp = data[data['Name'].str.contains(variant_spl[0])] - data_snp['Name']='NaN'+':'+data_snp['Name'] - data_snp.reset_index(drop=True, inplace=True) - - - data_snp.reset_index(drop=True, inplace=True) - if data_snp.shape[0]>0: - df = transform(data_snp,'Please Select columns to save whole table') - #fname = st.text_input('Please input file name to save Table', 'temp') - #fname = st_keyup("Please input file name to save Table", value='temp') - csv = convert_df(df) - - st.download_button( - label="Download Table as CSV file", - data=csv, - file_name=select_method+'_'+variant_spl[0]+'.csv',#fname+'.csv', - mime='text/csv', - ) - - #st.table(data_snp) - if len(variant_spl) > 1: - f""" - **Results for SNP: {variant_spl[0]} on GENE: {variant_spl[1]}** - """ - else: - f""" - **Results for SNP: {variant_spl[0]} on GENE: NAN** - """ - - #AgGrid(data_snp) - st.markdown(table_edit,unsafe_allow_html=True) - gb = GridOptionsBuilder.from_dataframe(data_snp) - gb.configure_pagination(enabled=False)#,paginationAutoPageSize=False)#True) #Add pagination - gb.configure_default_column(enablePivot=True, enableValue=True, enableRowGroup=True) - gb.configure_selection(selection_mode="multiple", use_checkbox=True) - - gb.configure_side_bar() - gridOptions = gb.build() - - grid_response = AgGrid( - data_snp, - height=200, - gridOptions=gridOptions, - enable_enterprise_modules=True, - update_mode=GridUpdateMode.MODEL_CHANGED, - data_return_mode=DataReturnMode.FILTERED_AND_SORTED, - fit_columns_on_grid_load=False, - header_checkbox_selection_filtered_only=True, - use_checkbox=True, - width='100%' - ) - - #data = grid_response['data'] - selected = grid_response['selected_rows'] - if selected: - st.write('Selected rows') - - dfs = pd.DataFrame(selected) - st.dataframe(dfs[dfs.columns[1:dfs.shape[1]]]) - - dfs1 = transform(dfs[dfs.columns[1:dfs.shape[1]]],'Please select columns to save selected Table') - #csv = convert_df1(dfs[dfs.columns[1:dfs.shape[1]]]) - csv = convert_df1(dfs1) - - - st.download_button( - label="Download data as CSV", - data=csv, - file_name=select_method+'_'+variant_spl[0]+'.csv', - mime='text/csv', - ) - - - - #GuideScan2 - if select_method == "GuideScan2": - st.markdown("**Summary**") - st.markdown("GuideScan2 employes Cas9 (tracrRNA and crRNA) and Cas12a, previously known as cpf1, (requires only crRNA) for sgRNA design for 8 organisms. It is a memory efficient and improved version of GuideScan that enables construction of high-specificity gRNA databases with reduced off-target effects.") - st.markdown("CRISPR-Cas9 targets a 20-nucleotide spacer sequence at the end of the gRNA that is complementary to a DNA protospacer sequence followed immediately at the 3’ end by a PAM of the form NGG (more efficient targeting) or NAG (less efficient); here N stands for a ‘wildcard’, i.e. can match any nucleotide. 
Other natural and engineered CRISPR-Cas systems can **vary in PAM sequence, PAM position with respect to the protospacer sequence, and requirements on the level of similarity between gRNA and the target.**") - st.markdown("Given a genomic region, the task of gRNA design is to find gRNAs that can target anywhere in that region. Many potential gRNAs can target at multiple locations in the genome with varying efficiency. Typically a gRNA is designed to target a particular location with **perfect complementarity** with all other targets of this gRNA are being **off-targets**. **Goal** of gRNA design is typically to **maximize gRNA efficiency at the primary target site while minimizing off-targeting.**") - st.markdown("Variants and extensions of the gRNA design task include: paired gRNA design to select two gRNAs targeting flanking sites of a genomic region of interest; saturation experiment design to exhaustively select all gRNAs expected to target a selected region of interest; and library design to select a small number of the most effective gRNAs for each of hundreds or thousands of regions of interest.") - - st.write("[GuideScan2 Web App](https://guidescan.com)") - st.write("[GuideScan2 Paper](https://www.biorxiv.org/content/10.1101/2022.05.02.490368v1)") - st.markdown(caution,unsafe_allow_html=True) - """ - - Input: Line delimited Genomic intervals (or DNA sequence) as a text file in the webapp **[here](https://guidescan.com/)** in the following format (of genomic range 30bp, 40bp etc): - - Line1: chr10:11676698-11676728 - - Line2: chr1:152220435-152220465 - - and so on - """ - st.markdown("- Output: A csv file containing all gRNAs within the genomic regions provided in the input file") - st.markdown("- **Columns of interest**: Cutting efficiency (**Higher the better**), Specificity (**Higher the better**) [Ref](https://www.biorxiv.org/content/10.1101/2022.05.02.490368v1.full.pdf)") - st.markdown(tips,unsafe_allow_html=True) - """ - - This tool offers sgRNA design for: - - **CRISPR/Cas9** - - **CRISPR/Cpf1** - - **Please note that this tool work best for genomic intervals >30bp.** - """ - - st.markdown("**Please note that the software was run with Cas9 (NGG PAM) and cpf1 (TTG PAM) option with all other options left as default.**") - - display_res(method,',','query') - - - #PnB Designer - if select_method == "PnB Designer": - st.markdown("**Summary**") - st.markdown("DNA base editors (BEs), cytosine base editors which employes cytidine-deaminase (CBEs: C/G -> T/A converters) and adenine base editors which employes Adenine-deaminase (ABEs: A/T -> G/C converters) **can only introduce 4 edits** via gRNA, Prime Editors **(PEs)**, employing Cas9 nickase fused to an engineered reverse transcriptase via a gRNA called prime editing guide RNA (pegRNA), on the other hand **can do all 12 edits.** To introduce a modification in the genome, PEs use pegRNA consisting of a 20 nt guide sequence, a primer binding site (PBS) and a reverse transcriptase template (RTT). The guide directs the Cas enzyme to a target site, the PBS hybridizes to the opposite strand to prime the reverse transcriptase, and the RTT integrates the desired genomic alteration.") - st.markdown("Optimized PE2, called PE3 and PE3b with reduced off-targets are used") - st.markdown("PnB Designer allows design of pegRNAs for PEs and guide RNAs for CBE and the most recent ABEs such as ABEmax and ABE8e. 
PnB Designer makes it easy to design targeting guide RNAs for single or multiple targets on a variant or reference genome from organisms (and non-model organisms or synthetic constructs) spanning multiple kingdoms. It has been used PnB Designer to design candidate pegRNAs to model all human mutations in ClinVar") - st.markdown("**PnB Designer enables design of pegRNAs for all known disease causing mutations available in ClinVar**") - st.markdown("Nicking guides for the PE3 and PE3b systems are designed and filtered to provide a suitable selection of gRNAs. For PE3, only nicking guides 40–100 nt up/downstream of the initial nick are considered. For PE3b, only PAM sequences on the complementary strand that partially overlap with the PE2 PAM or protospacer sequence are displayed.") - st.write("[PnB Designer Web App](https://fgcz-shiny.uzh.ch/PnBDesigner/)") - st.write("[PnB Designer Paper](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-021-04034-6)") - st.markdown(caution,unsafe_allow_html=True) - """ - - Input: Multiple lines provided in as a csv file in the webapp **[here](https://fgcz-shiny.uzh.ch/PnBDesigner/)** in the following format: - - Prime editing: - - varinat, chromosome num, genomic location, Edit, gene orientation, OBS, RTT - - Ex: rs7412, 19, 44908822, insA, +, 13, 13 - - Base editing - - varinat, chromosome num, genomic location, SNO, gene orientation, OBS, RTT - - Ex: rs7412, 19, 44908822, C>T, + - """ - st.markdown("- Output: A csv file") - st.markdown(tips,unsafe_allow_html=True) - """ - - This tool can be run in two modes: - - **Base editing mode:** - - Does not allow A>T or G>C, dels, or insertions. - - Only **180**/414 variants could be targeted. - - - **Columns of interest**: Protospacer, PAM and Base Editor (the system for producing the base edit). **There is no score**. - - **Prime editing mode:** - - Requires two guides: detailed in two files. - - pegRNA oligos for cloning. - - **Score: Higer is better**. - - Nicking guides: the corresponding nicking guides - """ - st.markdown("**Please note that the tool was run in Base editing and Prime editing modes. Corresponding nicking guides are also reported here.**") - display_res(method,',','query') - - - #SNP_CRISPR - if select_method == "SNP_CRISPR": - st.markdown("**Summary**") - st.markdown("SNP-CRISPR designs sgRNAs targeting specific SNPs or indels containing loci (for human, mouse, rat, fly and zebrafish genomes) by facilitating the design of sgRNAs that target specific variants and provides all possible CRISPR-Cas9 target sites in the given genomic region with required parameters, allowing users to select an optimal sgRNA. It provides efficiency scores and off-target information for sgRNAs targeting sequences with and without SNPs and/or indels of interest in the same genomic region.") - """ - **Design:** - - SNP-CRISPR validates the input reference sequences and **warn if the submitted reference sequences does not match**, which might reflect a different version of the genome assembly being used in the user input vs. SNP-CRISPR and re-constructs the template sequence, swapping the reference nucleotide with the variant nucleotide for SNPs, while inserting or deleting the corresponding fragment for indel type variants. - - Computes potential variant-targeting sgRNAs based on availability of PAM sequences in the neighboring region since the presence of a PAM sequence (NGG or NAG) is one of the few requirements for binding. 
- - sgRNA designs that contain four or more consecutive thymine residues, which can result in termination of RNA transcription by RNA polymerase III, are filtered out. - - Computes an efficiency score (Housden et al. 2015) and a specificity score calculated based on BLAST results against the reference genome. - - Finally, all possible sgRNAs are provided to the user along with specificity and efficiency scores, without further filtering. - - For identification of the best variant-specific sgRNAs, we provide information about both sgRNAs targeting specific variants and sgRNAs targeting the reference sequence in the same region. The efficiency score and an off-target score are provided, and the positions of relevant SNPs or indels in the sgRNA are included so that users can select the most suitable sgRNA or filter out less optimal ones. - """ - - st.write("[SNP_CRISPR Web App](https://www.flyrnai.org/tools/snp_crispr/web/)") - st.write("[SNP_CRISPR Paper](https://academic.oup.com/g3journal/article/10/2/489/6026318)") - st.markdown(caution,unsafe_allow_html=True) - """ - - Input: Multiple lines provided in as a (6 columns) csv file uploaded to the webapp **[here](https://www.flyrnai.org/tools/snp_crispr/web/)** in the following format: - - varinat, chromosome, position, strand, reference, variant - - Ex: rs7412, 19, 44908822, C, +, T - """ - st.markdown("- Output: A csv file") - st.markdown("- **Columns of interest**: Housden Efficiency Score [Ref](https://www.ncbi.nlm.nih.gov/pubmed/26350902) (Range from 1.47-12.32 **(higher is better, > 5 recommended))** and Off Target Score (Range from 0-5441.73 (lower is better, < 1 recommended))") - st.markdown(tips,unsafe_allow_html=True) - """ - - This tool can design guides for: - - **NGG.** - - **NAG.** - - **Target multiple variants within the same guide.** - - Public variant data sets or user-identified variants. - """ - - st.markdown("**Please note that the software was run for NAG and NGG PAM sequences only.**") - display_res(method,',','Gene') - -st.sidebar.image("DataTecnica_White.png", use_column_width=True) \ No newline at end of file diff --git a/spaces/szukevin/VISOR-GPT/train/scripts/convert_bert_from_huggingface_to_tencentpretrain.py b/spaces/szukevin/VISOR-GPT/train/scripts/convert_bert_from_huggingface_to_tencentpretrain.py deleted file mode 100644 index e23799d9902117f42e5e33b6006f0955550adbf1..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/scripts/convert_bert_from_huggingface_to_tencentpretrain.py +++ /dev/null @@ -1,84 +0,0 @@ -import argparse -import collections -import torch - - -def convert_bert_transformer_encoder_from_huggingface_to_tencentpretrain(input_model, output_model, layers_num): - for i in range(layers_num): - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.0.weight"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.self.query.weight"] - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.0.bias"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.self.query.bias"] - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.1.weight"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.self.key.weight"] - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.1.bias"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.self.key.bias"] - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.2.weight"] = \ - input_model["bert.encoder.layer." 
+ str(i) + ".attention.self.value.weight"] - output_model["encoder.transformer." + str(i) + ".self_attn.linear_layers.2.bias"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.self.value.bias"] - output_model["encoder.transformer." + str(i) + ".self_attn.final_linear.weight"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.output.dense.weight"] - output_model["encoder.transformer." + str(i) + ".self_attn.final_linear.bias"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.output.dense.bias"] - output_model["encoder.transformer." + str(i) + ".layer_norm_1.gamma"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.output.LayerNorm.weight"] - output_model["encoder.transformer." + str(i) + ".layer_norm_1.beta"] = \ - input_model["bert.encoder.layer." + str(i) + ".attention.output.LayerNorm.bias"] - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_1.weight"] = \ - input_model["bert.encoder.layer." + str(i) + ".intermediate.dense.weight"] - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_1.bias"] = \ - input_model["bert.encoder.layer." + str(i) + ".intermediate.dense.bias"] - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_2.weight"] = \ - input_model["bert.encoder.layer." + str(i) + ".output.dense.weight"] - output_model["encoder.transformer." + str(i) + ".feed_forward.linear_2.bias"] = \ - input_model["bert.encoder.layer." + str(i) + ".output.dense.bias"] - output_model["encoder.transformer." + str(i) + ".layer_norm_2.gamma"] = \ - input_model["bert.encoder.layer." + str(i) + ".output.LayerNorm.weight"] - output_model["encoder.transformer." + str(i) + ".layer_norm_2.beta"] = \ - input_model["bert.encoder.layer." + str(i) + ".output.LayerNorm.bias"] - - -def main(): - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument("--input_model_path", type=str, default="models/input_model.bin", - help=".") - parser.add_argument("--output_model_path", type=str, default="models/output_model.bin", - help=".") - parser.add_argument("--layers_num", type=int, default=12, help=".") - parser.add_argument("--type", choices=["bert", "mlm"], default="bert", - help="The training target of the pretraining model.") - - args = parser.parse_args() - - input_model = torch.load(args.input_model_path, map_location="cpu") - - output_model = collections.OrderedDict() - - output_model["embedding.word.embedding.weight"] = input_model["bert.embeddings.word_embeddings.weight"] - output_model["embedding.pos.embedding.weight"] = input_model["bert.embeddings.position_embeddings.weight"] - output_model["embedding.seg.embedding.weight"] = \ - torch.cat((torch.Tensor([[0]*input_model["bert.embeddings.token_type_embeddings.weight"].size()[1]]), - input_model["bert.embeddings.token_type_embeddings.weight"]), dim=0) - output_model["embedding.layer_norm.gamma"] = input_model["bert.embeddings.LayerNorm.weight"] - output_model["embedding.layer_norm.beta"] = input_model["bert.embeddings.LayerNorm.bias"] - - convert_bert_transformer_encoder_from_huggingface_to_tencentpretrain(input_model, output_model, args.layers_num) - - if args.type == "bert": - output_model["target.sp.linear_1.weight"] = input_model["bert.pooler.dense.weight"] - output_model["target.sp.linear_1.bias"] = input_model["bert.pooler.dense.bias"] - output_model["target.sp.linear_2.weight"] = input_model["cls.seq_relationship.weight"] - output_model["target.sp.linear_2.bias"] = 
input_model["cls.seq_relationship.bias"] - output_model["target.mlm.linear_1.weight"] = input_model["cls.predictions.transform.dense.weight"] - output_model["target.mlm.linear_1.bias"] = input_model["cls.predictions.transform.dense.bias"] - output_model["target.mlm.layer_norm.gamma"] = input_model["cls.predictions.transform.LayerNorm.weight"] - output_model["target.mlm.layer_norm.beta"] = input_model["cls.predictions.transform.LayerNorm.bias"] - output_model["target.mlm.linear_2.weight"] = input_model["cls.predictions.decoder.weight"] - output_model["target.mlm.linear_2.bias"] = input_model["cls.predictions.bias"] - - torch.save(output_model, args.output_model_path) - - -if __name__ == "__main__": - main() diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/common/registry.py b/spaces/t110-ai-admin/InspectLens/video_llama/common/registry.py deleted file mode 100644 index d0d0171ceb39123c7402a554eb8543ce55ff6881..0000000000000000000000000000000000000000 --- a/spaces/t110-ai-admin/InspectLens/video_llama/common/registry.py +++ /dev/null @@ -1,329 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - - -class Registry: - mapping = { - "builder_name_mapping": {}, - "task_name_mapping": {}, - "processor_name_mapping": {}, - "model_name_mapping": {}, - "lr_scheduler_name_mapping": {}, - "runner_name_mapping": {}, - "state": {}, - "paths": {}, - } - - @classmethod - def register_builder(cls, name): - r"""Register a dataset builder to registry with key 'name' - - Args: - name: Key with which the builder will be registered. - - Usage: - - from video_llama.common.registry import registry - from video_llama.datasets.base_dataset_builder import BaseDatasetBuilder - """ - - def wrap(builder_cls): - from video_llama.datasets.builders.base_dataset_builder import BaseDatasetBuilder - - assert issubclass( - builder_cls, BaseDatasetBuilder - ), "All builders must inherit BaseDatasetBuilder class, found {}".format( - builder_cls - ) - if name in cls.mapping["builder_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["builder_name_mapping"][name] - ) - ) - cls.mapping["builder_name_mapping"][name] = builder_cls - return builder_cls - - return wrap - - @classmethod - def register_task(cls, name): - r"""Register a task to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from video_llama.common.registry import registry - """ - - def wrap(task_cls): - from video_llama.tasks.base_task import BaseTask - - assert issubclass( - task_cls, BaseTask - ), "All tasks must inherit BaseTask class" - if name in cls.mapping["task_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["task_name_mapping"][name] - ) - ) - cls.mapping["task_name_mapping"][name] = task_cls - return task_cls - - return wrap - - @classmethod - def register_model(cls, name): - r"""Register a task to registry with key 'name' - - Args: - name: Key with which the task will be registered. 
- - Usage: - - from video_llama.common.registry import registry - """ - - def wrap(model_cls): - from video_llama.models import BaseModel - - assert issubclass( - model_cls, BaseModel - ), "All models must inherit BaseModel class" - if name in cls.mapping["model_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["model_name_mapping"][name] - ) - ) - cls.mapping["model_name_mapping"][name] = model_cls - return model_cls - - return wrap - - @classmethod - def register_processor(cls, name): - r"""Register a processor to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from video_llama.common.registry import registry - """ - - def wrap(processor_cls): - from video_llama.processors import BaseProcessor - - assert issubclass( - processor_cls, BaseProcessor - ), "All processors must inherit BaseProcessor class" - if name in cls.mapping["processor_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["processor_name_mapping"][name] - ) - ) - cls.mapping["processor_name_mapping"][name] = processor_cls - return processor_cls - - return wrap - - @classmethod - def register_lr_scheduler(cls, name): - r"""Register a model to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from video_llama.common.registry import registry - """ - - def wrap(lr_sched_cls): - if name in cls.mapping["lr_scheduler_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["lr_scheduler_name_mapping"][name] - ) - ) - cls.mapping["lr_scheduler_name_mapping"][name] = lr_sched_cls - return lr_sched_cls - - return wrap - - @classmethod - def register_runner(cls, name): - r"""Register a model to registry with key 'name' - - Args: - name: Key with which the task will be registered. - - Usage: - - from video_llama.common.registry import registry - """ - - def wrap(runner_cls): - if name in cls.mapping["runner_name_mapping"]: - raise KeyError( - "Name '{}' already registered for {}.".format( - name, cls.mapping["runner_name_mapping"][name] - ) - ) - cls.mapping["runner_name_mapping"][name] = runner_cls - return runner_cls - - return wrap - - @classmethod - def register_path(cls, name, path): - r"""Register a path to registry with key 'name' - - Args: - name: Key with which the path will be registered. - - Usage: - - from video_llama.common.registry import registry - """ - assert isinstance(path, str), "All path must be str." - if name in cls.mapping["paths"]: - raise KeyError("Name '{}' already registered.".format(name)) - cls.mapping["paths"][name] = path - - @classmethod - def register(cls, name, obj): - r"""Register an item to registry with key 'name' - - Args: - name: Key with which the item will be registered. 
- - Usage:: - - from video_llama.common.registry import registry - - registry.register("config", {}) - """ - path = name.split(".") - current = cls.mapping["state"] - - for part in path[:-1]: - if part not in current: - current[part] = {} - current = current[part] - - current[path[-1]] = obj - - # @classmethod - # def get_trainer_class(cls, name): - # return cls.mapping["trainer_name_mapping"].get(name, None) - - @classmethod - def get_builder_class(cls, name): - return cls.mapping["builder_name_mapping"].get(name, None) - - @classmethod - def get_model_class(cls, name): - return cls.mapping["model_name_mapping"].get(name, None) - - @classmethod - def get_task_class(cls, name): - return cls.mapping["task_name_mapping"].get(name, None) - - @classmethod - def get_processor_class(cls, name): - return cls.mapping["processor_name_mapping"].get(name, None) - - @classmethod - def get_lr_scheduler_class(cls, name): - return cls.mapping["lr_scheduler_name_mapping"].get(name, None) - - @classmethod - def get_runner_class(cls, name): - return cls.mapping["runner_name_mapping"].get(name, None) - - @classmethod - def list_runners(cls): - return sorted(cls.mapping["runner_name_mapping"].keys()) - - @classmethod - def list_models(cls): - return sorted(cls.mapping["model_name_mapping"].keys()) - - @classmethod - def list_tasks(cls): - return sorted(cls.mapping["task_name_mapping"].keys()) - - @classmethod - def list_processors(cls): - return sorted(cls.mapping["processor_name_mapping"].keys()) - - @classmethod - def list_lr_schedulers(cls): - return sorted(cls.mapping["lr_scheduler_name_mapping"].keys()) - - @classmethod - def list_datasets(cls): - return sorted(cls.mapping["builder_name_mapping"].keys()) - - @classmethod - def get_path(cls, name): - return cls.mapping["paths"].get(name, None) - - @classmethod - def get(cls, name, default=None, no_warning=False): - r"""Get an item from registry with key 'name' - - Args: - name (string): Key whose value needs to be retrieved. - default: If passed and key is not in registry, default value will - be returned with a warning. Default: None - no_warning (bool): If passed as True, warning when key doesn't exist - will not be generated. Useful for MMF's - internal operations. Default: False - """ - original_name = name - name = name.split(".") - value = cls.mapping["state"] - for subname in name: - value = value.get(subname, default) - if value is default: - break - - if ( - "writer" in cls.mapping["state"] - and value == default - and no_warning is False - ): - cls.mapping["state"]["writer"].warning( - "Key {} is not present in registry, returning default value " - "of {}".format(original_name, default) - ) - return value - - @classmethod - def unregister(cls, name): - r"""Remove an item from registry with key 'name' - - Args: - name: Key which needs to be removed. - Usage:: - - from mmf.common.registry import registry - - config = registry.unregister("config") - """ - return cls.mapping["state"].pop(name, None) - - -registry = Registry() diff --git a/spaces/t110-ai-admin/InspectLens/video_llama/processors/base_processor.py b/spaces/t110-ai-admin/InspectLens/video_llama/processors/base_processor.py deleted file mode 100644 index 39b33cdf8fcd97cfd3e4a5fbece6593357af9d41..0000000000000000000000000000000000000000 --- a/spaces/t110-ai-admin/InspectLens/video_llama/processors/base_processor.py +++ /dev/null @@ -1,26 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from omegaconf import OmegaConf - - -class BaseProcessor: - def __init__(self): - self.transform = lambda x: x - return - - def __call__(self, item): - return self.transform(item) - - @classmethod - def from_config(cls, cfg=None): - return cls() - - def build(self, **kwargs): - cfg = OmegaConf.create(kwargs) - - return self.from_config(cfg) diff --git a/spaces/tareknaous/Empathetic-DialoGPT/README.md b/spaces/tareknaous/Empathetic-DialoGPT/README.md deleted file mode 100644 index d0700de704f2dc04c4bd1765ade9707fe72190a5..0000000000000000000000000000000000000000 --- a/spaces/tareknaous/Empathetic-DialoGPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Empathetic DialoGPT -emoji: 🚀 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 2.8.10 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/Airbus A330 BlackBox.zip SKIDROW !!EXCLUSIVE!!.md b/spaces/terfces0erbo/CollegeProjectV2/Airbus A330 BlackBox.zip SKIDROW !!EXCLUSIVE!!.md deleted file mode 100644 index 9d194aa8ed914c7914724459e843258ef4d9c39a..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Airbus A330 BlackBox.zip SKIDROW !!EXCLUSIVE!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Airbus A330 BlackBox.zip SKIDROW


            Download File >>> https://bytlly.com/2uGlYx



            - - d5da3c52bf
            -
            -
            -

            diff --git a/spaces/terfces0erbo/CollegeProjectV2/Game Maker 8.1.xx Crack LINK.md b/spaces/terfces0erbo/CollegeProjectV2/Game Maker 8.1.xx Crack LINK.md deleted file mode 100644 index 3450bb1fb5014a2cc2d34c93cc37820194e640ac..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Game Maker 8.1.xx Crack LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Game Maker 8.1.xx Crack


            Download File ····· https://bytlly.com/2uGk84



            - -Game Maker 8.1.xx Crack >> http://imgfil.com/18gfi8 56a4c31ff9 6d96f113e3a1776ed7764e05775f2f976f268612 5.39 MiB (5647176 Bytes) ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/terfces0erbo/CollegeProjectV2/Jeepers Creepers 3 French Dvdrip Torrent.md b/spaces/terfces0erbo/CollegeProjectV2/Jeepers Creepers 3 French Dvdrip Torrent.md deleted file mode 100644 index 0bb0deb2bbd2c78fa85bc1f9147555f128d63ee1..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Jeepers Creepers 3 French Dvdrip Torrent.md +++ /dev/null @@ -1,15 +0,0 @@ - -

            Jeepers Creepers 3: The Horror Sequel You Can't Miss

            -

            If you are a fan of horror movies, you might have heard of Jeepers Creepers, the franchise that follows a flesh-eating creature known as the Creeper. The first two movies were released in 2001 and 2003, and became cult classics among horror lovers. But did you know that there is a third movie in the series?

            -

            jeepers creepers 3 french dvdrip torrent


Download: https://bytlly.com/2uGlQM



            -

            Jeepers Creepers 3 is a 2017 horror film that was directed by Victor Salva, who also helmed the previous two installments. The movie is set between the first and second film, and follows a group of people who try to stop the Creeper from killing again. The movie stars Meg Foster, Stan Shaw, Jonathan Breck, and Don Yesso.

            -

            The movie was released in a limited theatrical run in September 2017, and received mixed reviews from critics and audiences. However, if you are curious to see how the story of the Creeper continues, you can download the movie online. There are many websites that offer Jeepers Creepers 3 french dvdrip torrent, which means that you can get a high-quality copy of the movie with French subtitles.

            -

            One of the websites that you can use to download Jeepers Creepers 3 french dvdrip torrent is YTS[^1^], which is a popular torrent site for movies. You can find the movie by searching for its title or using this link: [^1^]. You will need a torrent client like uTorrent or BitTorrent to download the file.

            -

            Another website that you can use to download Jeepers Creepers 3 french dvdrip torrent is quifrijofhongabedc[^2^], which is a blog that posts links to various movies. You can find the movie by scrolling down the page or using this link: [^2^]. You will need to click on the download button and follow the instructions.

            -

            -

            A third website that you can use to download Jeepers Creepers 3 french dvdrip torrent is SoundCloud[^3^], which is a platform for audio streaming. You can find the movie by searching for its title or using this link: [^3^]. You will need to sign up for an account and follow Tinccuero, who posted the link.

            -

            These are some of the websites that you can use to download Jeepers Creepers 3 french dvdrip torrent. However, you should be careful when downloading files from unknown sources, as they might contain viruses or malware. You should also use a VPN service to protect your privacy and avoid legal issues. And remember, downloading copyrighted content without permission is illegal and unethical.

            -

            If you want to watch Jeepers Creepers 3 legally and safely, you can rent or buy it from online platforms like Amazon Prime Video, iTunes, Google Play Movies, or YouTube. You can also check out other horror movies that are available on these platforms.

            -

            Jeepers Creepers 3 is a horror movie that continues the saga of the Creeper, a monstrous creature that hunts every 23 years. If you are interested in watching it, you can download it online using Jeepers Creepers 3 french dvdrip torrent. However, you should be aware of the risks and consequences of doing so. Alternatively, you can watch it legally and safely from online platforms.

            d5da3c52bf
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Binksetsoundtrack8 Download 30 The Solution to Your Game Video Problems.md b/spaces/tialenAdioni/chat-gpt-api/logs/Binksetsoundtrack8 Download 30 The Solution to Your Game Video Problems.md deleted file mode 100644 index 584bef5df0ee06c48f4e75a8b4e5ed145dd9dc9a..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Binksetsoundtrack8 Download 30 The Solution to Your Game Video Problems.md +++ /dev/null @@ -1,151 +0,0 @@ - -

            Binksetsoundtrack@8 Download 30: What Is It and How to Fix It

            -

            If you are a fan of video games, you might have encountered some problems with playing certain games that require a file called Binksetsoundtrack@8. This file is part of a software called Bink, which is used by many game developers to compress and play video files within their games. In this article, we will explain what Binksetsoundtrack@8 is, why it is important for some games, what are the common errors related to it, and how to download and install it correctly.

            -

            Introduction

            -

            Bink is a video codec developed by RAD Game Tools, which allows game developers to create high-quality videos with low file sizes and fast loading times. Bink is used by many popular games, such as Silent Hill 2, Call of Duty, Mass Effect, Fallout, The Elder Scrolls, and many more.

            -

            Binksetsoundtrack@8 Download 30


            Download File ->>> https://urlcod.com/2uK8JG



            -

            Binksetsoundtrack@8 is a function that belongs to the Bink software, which allows the game to set the soundtrack for a video file. This function is essential for games that use Bink to play videos with music or sound effects, such as cutscenes or cinematics.

            -

            However, sometimes Binksetsoundtrack@8 can cause some errors that prevent the game from running properly or at all. These errors usually occur when the file binkw32.dll, which contains the Binksetsoundtrack@8 function, is missing, corrupted, or incompatible with the game. Some of the common errors are:

            -
              -
            • The procedure entry point _BinkSetSoundtrack@8 could not be located in the dynamic link library binkw32.dll
            • -
            • The file binkw32.dll is missing or corrupted
            • -
            • The game crashes or freezes when playing a video or a cutscene
            • -
            -

            Fortunately, these errors can be fixed by downloading and installing Binksetsoundtrack@8 correctly. In this article, we will show you how to do that step by step.

            -


            -

            What Is Binksetsoundtrack@8 and Why It Is Important for Some Games

            -

            As we mentioned before, Binksetsoundtrack@8 is a function that belongs to the Bink software, which allows the game to set the soundtrack for a video file. This function is essential for games that use Bink to play videos with music or sound effects, such as cutscenes or cinematics.

            -

            Bink works by compressing video files into smaller sizes without losing much quality. This way, game developers can include more videos in their games without taking up too much disk space or memory. Bink also allows the game to play videos faster and smoother than other codecs.

            -

Binksetsoundtrack@8 is one of the functions that make Bink so versatile and powerful. It allows the game to change the soundtrack of a video file on the fly, depending on the situation or context. For example, if a game has a cutscene where a character dies, it can use Binksetsoundtrack@8 to switch from a cheerful track to a sad one.

            -

            Many games use Binksetsoundtrack@8 to create immersive and dynamic experiences for their players. Some examples are:

| Game | How It Uses Binksetsoundtrack@8 |
| --- | --- |
| Silent Hill 2 | It uses Binksetsoundtrack@8 to play different soundtracks for different endings of the game. |
| Call of Duty | It uses Binksetsoundtrack@8 to play different soundtracks for different missions or scenarios of the game. |
| Mass Effect | It uses Binksetsoundtrack@8 to play different soundtracks for different choices or outcomes of the game. |
| Fallout | It uses Binksetsoundtrack@8 to play different soundtracks for different locations or factions of the game. |
| The Elder Scrolls | It uses Binksetsoundtrack@8 to play different soundtracks for different regions or races of the game. |
            -

            What Are the Common Errors Related to Binksetsoundtrack@8 and How They Affect the Gameplay

            -

            Despite its advantages, Binksetsoundtrack@8 can also cause some errors that prevent the game from running properly or at all. These errors usually occur when the file binkw32.dll, which contains the Binksetsoundtrack@8 function, is missing, corrupted, or incompatible with the game.

            -

            The most common error related to Binksetsoundtrack@8 is:

            - The procedure entry point _BinkSetSoundtrack@8 could not be located in the dynamic link library binkw32.dll -

            This error means that the game cannot find or access the function _BinkSetSoundtrack@8 in the file binkw32.dll. This can happen for several reasons:

            -
              -
            • The file binkw32.dll is missing from your computer or from your game folder.
            • -
            • The file binkw32.dll is corrupted by a virus or malware.
            • -
            • The file binkw32.dll is outdated or incompatible with your game version.
            • -
            • The file binkw32.dll is overwritten by another program or mod.
            • -
• The file binkw32.dll is in use by another program or game.
            • -
            -

            When these errors occur, the game cannot play the videos properly or at all. This can result in a loss of immersion, enjoyment, or progress. In some cases, the game may not even start or run at all.
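
Before copying files around, you can check which of these is the culprit. As a purely illustrative diagnostic, the short Python script below asks Windows whether the binkw32.dll that the game sees actually exports the function named in the error message; it assumes a 32-bit Python interpreter (binkw32.dll is a 32-bit library), and the game folder shown is only a placeholder path:

```python
import ctypes
import os

# Placeholder path - point this at the binkw32.dll inside your own game folder
dll_path = r"C:\Program Files (x86)\Konami\Silent Hill 2\binkw32.dll"

if not os.path.exists(dll_path):
    print("binkw32.dll is missing from the game folder")
else:
    dll = ctypes.WinDLL(dll_path)  # needs 32-bit Python, since the DLL is 32-bit
    try:
        # getattr triggers GetProcAddress on the decorated name from the error message
        getattr(dll, "_BinkSetSoundtrack@8")
        print("_BinkSetSoundtrack@8 is exported - the DLL looks fine")
    except AttributeError:
        print("_BinkSetSoundtrack@8 is NOT exported - outdated or wrong binkw32.dll")
```

If the function is reported as missing, the copy of binkw32.dll in the game folder is the one to replace.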

            -

            How to Download Binksetsoundtrack@8 and Install It Correctly

            -

            The good news is that you can fix these errors by downloading and installing Binksetsoundtrack@8 correctly. Here are the steps you need to follow:

            -
              -
            1. Find the official download link for Binksetsoundtrack@8. You can do this by visiting the website of RAD Game Tools (https://www.radgametools.com/bnkdown.htm) and clicking on the link that says \"Download Bink now!\". This will download a file called binkw32.zip, which contains the Bink software and the Binksetsoundtrack@8 function.
            2. -
            3. Check the compatibility and integrity of the downloaded file. You can do this by right-clicking on the file and selecting \"Properties\". Then, go to the \"Compatibility\" tab and make sure that the file is compatible with your Windows version. You can also go to the \"Digital Signatures\" tab and make sure that the file is signed by RAD Game Tools, Inc.
            4. -
            5. Copy the file to the game folder or the system folder. You can do this by extracting the file from the zip archive and copying it to one of these locations:
                -
              • The game folder: This is the folder where your game is installed. For example, if you are playing Silent Hill 2, you can copy the file to C:\Program Files (x86)\Konami\Silent Hill 2\.
              • -
              • The system folder: This is the folder where your Windows system files are stored. For example, if you are using Windows 10 64-bit, you can copy the file to C:\Windows\SysWOW64\.
              • -
- Note: You may need to replace or overwrite an existing binkw32.dll file in these locations. Make sure to back up the original file before doing so; a short script that automates this back-up-and-copy step is sketched just after this list.
            6. -
            7. Test if the installation was successful and if the errors are fixed. You can do this by launching your game and checking if it can play the videos without any problems. If you still see any errors, you may need to reinstall your game or update it to the latest version.
            8. -
            -
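
If you prefer to script the back-up-and-copy part of step 5, a minimal Python sketch could look like the following; both paths are examples only and must be changed to match your own game folder and the location of the extracted file:

```python
import shutil
from pathlib import Path

# Example paths only - adjust to your own system
game_dir = Path(r"C:\Program Files (x86)\Konami\Silent Hill 2")
new_dll = Path(r"C:\Downloads\binkw32\binkw32.dll")

target = game_dir / "binkw32.dll"
if target.exists():
    shutil.copy2(target, game_dir / "binkw32.dll.bak")  # back up the original first
shutil.copy2(new_dll, target)
print("Done - any original file was saved as binkw32.dll.bak")
```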

            Conclusion

            -

            Binksetsoundtrack@8 is a function that belongs to the Bink software, which allows the game to set the soundtrack for a video file. This function is essential for games that use Bink to play videos with music or sound effects, such as cutscenes or cinematics.

            -

            However, sometimes Binksetsoundtrack@8 can cause some errors that prevent the game from running properly or at all. These errors usually occur when the file binkw32.dll, which contains the Binksetsoundtrack@8 function, is missing, corrupted, or incompatible with the game.

            -

            The good news is that you can fix these errors by downloading and installing Binksetsoundtrack@8 correctly. You just need to follow these steps:

            -
              -
            1. Find the official download link for Binksetsoundtrack@8.
            2. -
            3. Check the compatibility and integrity of the downloaded file.
            4. -
            5. Copy the file to the game folder or the system folder.
            6. -
            7. Test if the installation was successful and if the errors are fixed.
            8. -
            -

            We hope that this article has helped you understand what Binksetsoundtrack@8 is and how to fix it. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!

            -

            Frequently Asked Questions

            -

            Here are some of the most common questions that people ask about Binksetsoundtrack@8:

            -
              -
            1. What is Bink?
            2. -

              Bink is a video codec developed by RAD Game Tools, which allows game developers to create high-quality videos with low file sizes and fast loading times.

              -
            3. What is Binksetsoundtrack@8?
            4. -

Binksetsoundtrack@8 is a function that belongs to the Bink software, which allows the game to set the soundtrack for a video file. This function is essential for games that use Bink to play videos with music or sound effects, such as cutscenes or cinematics.

              -
            5. Which games use Binksetsoundtrack@8?
            6. -

              Many popular games use Binksetsoundtrack@8 to create immersive and dynamic experiences for their players. Some examples are Silent Hill 2, Call of Duty, Mass Effect, Fallout, The Elder Scrolls, and many more. You can find a full list of games that use Bink on the Epic Games Tools website (https://www.radgametools.com/bnkmain.htm).

              -
            7. How can I fix Binksetsoundtrack@8 errors?
            8. -

              You can fix Binksetsoundtrack@8 errors by downloading and installing Binksetsoundtrack@8 correctly. You just need to follow these steps:

              -
                -
              1. Find the official download link for Binksetsoundtrack@8.
              2. -
              3. Check the compatibility and integrity of the downloaded file.
              4. -
              5. Copy the file to the game folder or the system folder.
              6. -
              7. Test if the installation was successful and if the errors are fixed.
              8. -
              -
            9. Where can I find the official download link for Binksetsoundtrack@8?
            10. -

              You can find the official download link for Binksetsoundtrack@8 by visiting the website of Epic Games Tools (https://www.radgametools.com/bnkdown.htm) and clicking on the link that says \"Download Bink now!\". This will download a file called binkw32.zip, which contains the Bink software and the Binksetsoundtrack@8 function.

              -
            11. How can I check the compatibility and integrity of the downloaded file?
            12. -

              You can check the compatibility and integrity of the downloaded file by right-clicking on the file and selecting \"Properties\". Then, go to the \"Compatibility\" tab and make sure that the file is compatible with your Windows version. You can also go to the \"Digital Signatures\" tab and make sure that the file is signed by Epic Games Tools, Inc.

              -
            -

            We hope that this article has helped you understand what Binksetsoundtrack@8 is and how to fix it. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!

            -

            0a6ba089eb
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download CSI SAFE 2019 for Free and Boost Your Structural Engineering Skills.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download CSI SAFE 2019 for Free and Boost Your Structural Engineering Skills.md deleted file mode 100644 index ab8b71c9344b0d8cf51e067996524fd34727a0d4..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download CSI SAFE 2019 for Free and Boost Your Structural Engineering Skills.md +++ /dev/null @@ -1,43 +0,0 @@ - -

            How to Download and Install CSI SAFE 2019 for Free

            - -

            CSI SAFE is a powerful software for designing concrete floor and foundation systems. It integrates every aspect of the engineering design process, from framing layout to detail drawing production, in one easy and intuitive environment. CSI SAFE can handle complex models with various types of slabs, walls, columns, beams, braces, links, and boundary conditions. It can also perform comprehensive analysis and design of reinforced concrete and post-tensioned structures, including static, dynamic, seismic, and wind loads.

            -

            csi safe 2019 free download


            Download ——— https://urlcod.com/2uK9Rs



            - -

            If you are looking for a reliable and efficient tool for your structural engineering projects, you might be interested in downloading CSI SAFE 2019 for free. CSI SAFE 2019 is the latest version of the software, which offers many new features and enhancements, such as:

            - -
              -
            • Improved user interface and graphics
            • -
            • New design codes and material libraries
            • -
            • Enhanced slab design options and detailing
            • -
            • Advanced punching shear checks and reinforcement
            • -
            • Improved integration with Autodesk Revit and CSI ETABS
            • -
            • And much more!
            • -
            - -

            In this article, we will show you how to download and install CSI SAFE 2019 for free on your Windows PC. Follow these simple steps to get started:

            - -

            Step 1: Download CSI SAFE 2019 from a trusted source

            - -

            The first step is to download CSI SAFE 2019 from a trusted source. There are many websites that offer free downloads of CSI SAFE 2019, but not all of them are safe and reliable. Some of them may contain viruses, malware, or unwanted programs that can harm your computer or compromise your privacy.

            - -

            One of the best sources to download CSI SAFE 2019 for free is FileCR, a website that provides high-quality software downloads for various platforms. FileCR has a large collection of CSI software products, including CSI SAFE 2019. You can download CSI SAFE 2019 from FileCR by clicking on this link: https://filecr.com/windows/csi-safe/

            - -

            Alternatively, you can also download CSI SAFE 2019 from Autodesk App Store, a platform that offers apps and plugins for Autodesk products. Autodesk App Store has a free app called CSI Link, which allows you to transfer the analytical models from Revit to CSI SAFE and CSI ETABS. You can download CSI Link from Autodesk App Store by clicking on this link: https://apps.autodesk.com/RVT/en/Detail/Index?id=6445397651322878159

            -

            - -

            Step 2: Extract the downloaded file

            - -

            The second step is to extract the downloaded file. The file that you downloaded from FileCR or Autodesk App Store is a compressed file with a .zip or .rar extension. You need to extract this file using a software like WinRAR or 7-Zip.

            - -

            To extract the file using WinRAR, right-click on the file and select "Extract Here" or "Extract to csi-safe-2019-free-download/". This will create a new folder with the same name as the file in the same location.

            - -

            To extract the file using 7-Zip, right-click on the file and select "7-Zip" > "Extract Here" or "Extract to csi-safe-2019-free-download/". This will create a new folder with the same name as the file in the same location.

            - -

            Step 3: Run the setup file

            - -

            The third step is to run the setup file. The setup file is an executable file with a .exe extension that will install CSI SAFE 2019 on your computer. You can find this file inside the extracted folder.

            - -

To run the setup file, double-click on it or right-click on it and select "Run as administrator". This will launch the installation wizard of CSI SAFE 2019. Follow the on-screen instructions to complete the installation.

            ddb901b051
            -
            -
            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Geotide Analyzer A Comprehensive Solution for Tidal Analysis and Prediction.md b/spaces/tialenAdioni/chat-gpt-api/logs/Geotide Analyzer A Comprehensive Solution for Tidal Analysis and Prediction.md deleted file mode 100644 index 8a0d678e6dce5322c39e01cf2cb32a97657ad9d2..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Geotide Analyzer A Comprehensive Solution for Tidal Analysis and Prediction.md +++ /dev/null @@ -1,77 +0,0 @@ -
            -

            What is a Geotide Analyzer and How Does It Work?

            -

            A geotide analyzer is a software tool that performs harmonic analysis of tidal data and generates harmonic constants for a specific location. These constants can then be used to predict the tide levels and currents for any date and time in the future or past. Geotide analyzer is designed for professional hydrographers, surveyors, and engineers who need accurate and reliable tidal information for their projects.

            -

            In this article, we will explain what a geotide analyzer does, how it works, and what are its benefits and features.

            -

            geotide analyzer


            Download File ··· https://urlcod.com/2uK1CH



            - -

            What is harmonic analysis of tidal data?

            -

            Harmonic analysis of tidal data is a mathematical technique that decomposes the observed tide signal into a sum of sinusoidal components, each with a known frequency, amplitude, and phase. These components are called harmonic constituents, and they represent the effects of various astronomical and meteorological factors that influence the tide, such as the moon, the sun, the earth's rotation, atmospheric pressure, etc.

            -

            By identifying the harmonic constituents of a tide signal at a given location, one can calculate the harmonic constants for that location. These constants are essentially the coefficients that multiply each harmonic constituent in the sum. They describe how much each constituent contributes to the tide signal at that location.

            -

            Once the harmonic constants are known, one can use them to reconstruct the tide signal for any date and time by adding up the harmonic constituents with their corresponding constants. This is called tidal prediction, and it allows one to estimate the tide levels and currents at any location where harmonic analysis has been performed.
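
To make this concrete, here is a minimal sketch of the prediction step in Python. It is only an illustration: the constituent speeds are the usual approximate values in degrees per hour, but the amplitudes, phases, and mean level are invented numbers rather than real harmonic constants for any location:

```python
import numpy as np

# Invented harmonic constants for one location:
# constituent -> (speed in deg/hour, amplitude in metres, phase lag in degrees)
constituents = {
    "M2": (28.984104, 1.20, 110.0),
    "S2": (30.000000, 0.40, 150.0),
    "K1": (15.041069, 0.15,  60.0),
    "O1": (13.943036, 0.10,  35.0),
}
Z0 = 2.50  # mean water level above datum, in metres (also invented)

def predicted_tide(hours):
    """Predicted tide height (m) at `hours` after the reference epoch."""
    height = Z0
    for speed, amplitude, phase in constituents.values():
        height = height + amplitude * np.cos(np.radians(speed * hours - phase))
    return height

t = np.arange(0, 24)                  # one day of hourly predictions
print(np.round(predicted_tide(t), 2))
```

Each additional constituent in the scheme simply adds one more cosine term to the sum; the harmonic constants file produced by the analysis supplies the amplitude and phase for every term.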

            - -

            How does a geotide analyzer work?

            -

            A geotide analyzer is a software tool that automates the process of harmonic analysis and tidal prediction. It consists of two main modules: GeoTide Analyzer and GeoTide Predictor.

            -

            -

            GeoTide Analyzer takes as input tide gauge data, which are measurements of water level or current at a specific location over a period of time. The data can be imported from various formats, such as comma-delimited or fixed-space ASCII files. The data can also be edited and filtered using graphical tools to remove spikes, outliers, or gaps.

            -

            GeoTide Analyzer then performs harmonic analysis on the tide gauge data using one of the predefined tidal schemes, which are sets of harmonic constituents that are commonly used in different regions or applications. The user can also create their own custom tidal scheme by selecting which constituents to include or exclude. GeoTide Analyzer supports both tidal height and tidal stream analysis.

            -

            The output of GeoTide Analyzer is a file containing the harmonic constants for the location where the tide gauge data was collected. The file also contains information about the tidal levels (e.g., lowest astronomical tide or LAT, highest astronomical tide or HAT) and flood return periods (e.g., how often a certain water level is exceeded) derived from the harmonic constants.

            -

            GeoTide Predictor takes as input the harmonic constants file generated by GeoTide Analyzer and uses it to make tidal predictions for any date and time in the future or past. The user can specify the prediction interval (e.g., hourly, daily, monthly) and output format (e.g., table, graph). GeoTide Predictor can also export the predictions to other formats, such as Microsoft Excel or PDF.

            - -

            What are the benefits and features of a geotide analyzer?

            -

            A geotide analyzer offers several benefits and features for users who need accurate and reliable tidal information for their projects. Some of them are:

            -
              -
            • It is fast and easy to use. It handles each step of harmonic analysis and tidal prediction in an intuitive and user-friendly way.
            • -
            • It is compatible with various data sources and formats. It can import tide gauge data from different devices and formats, as well as export predictions to different formats.
            • -
            • It is flexible and customizable. It allows users to choose from different tidal schemes or create their own custom scheme. It also allows users to adjust various parameters and settings to suit their needs.
            • -
            • It is accurate and reliable. It applies the industry-standard technique of harmonic analysis using Doodson's method. It also incorporates quality control checks and error estimates to ensure the validity of the results.
            • -
            • It is comprehensive and informative. It provides not only tidal predictions but

              e753bf7129
              -
              -
              \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Introductiontoneuralnetworksusingmatlab60snsivanandamsumathideepa.md b/spaces/tialenAdioni/chat-gpt-api/logs/Introductiontoneuralnetworksusingmatlab60snsivanandamsumathideepa.md deleted file mode 100644 index d5b1adfdb60bd9bc2967abe28911c3002950bf60..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Introductiontoneuralnetworksusingmatlab60snsivanandamsumathideepa.md +++ /dev/null @@ -1,26 +0,0 @@ -
              -

              Introduction to Neural Networks Using MATLAB 6.0

              -

Neural networks are computational models that mimic biological neurons and their connections in the brain. They can learn from data and perform tasks such as classification, regression, clustering, pattern recognition, and optimization. Neural networks have applications in various domains such as bioinformatics, robotics, communication, image processing, and healthcare.

              -

              introductiontoneuralnetworksusingmatlab60snsivanandamsumathideepa


              Download File ⚙⚙⚙ https://urlcod.com/2uK52B



              -

              One of the popular tools for developing and implementing neural networks is MATLAB, a high-level programming language and environment that supports numerical computation, visualization, and programming. MATLAB provides a built-in Neural Network Toolbox that offers a comprehensive set of functions and graphical user interfaces for creating, training, validating, and testing different types of neural networks.

              -

              This book is written for undergraduate students in computer science who want to learn the basics of neural networks and how to use MATLAB for implementing them. The book covers the following topics:

              -
                -
              • Fundamental models of artificial neural networks, such as feedforward networks, radial basis function networks, self-organizing maps, and recurrent networks.
              • -
              • Learning algorithms for training neural networks, such as gradient descent, backpropagation, resilient backpropagation, Levenberg-Marquardt algorithm, genetic algorithm, and particle swarm optimization.
              • -
              • Applications of neural networks to areas like bioinformatics, robotics, communication, image processing, and healthcare.
              • -
              • Case studies and examples using MATLAB code and Neural Network Toolbox functions.
              • -
              -

              The book also provides a supplemental set of MATLAB code files that can be downloaded from the publisher's website. The code files contain the implementation of various neural network models and applications discussed in the book.
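
The supplemental code is written in MATLAB with the Neural Network Toolbox. Purely as an illustration of the underlying idea, and not code from the book, a single sigmoid neuron trained by gradient descent on the logical AND problem can be sketched in a few lines of Python/NumPy:

```python
import numpy as np

# Toy data set: inputs and targets for the logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)      # weights
b = 0.0                     # bias
lr = 1.0                    # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    out = sigmoid(X @ w + b)                  # forward pass
    delta = (out - y) * out * (1.0 - out)     # error times sigmoid derivative
    w -= lr * (X.T @ delta) / len(y)          # gradient descent update
    b -= lr * delta.mean()

print(np.round(sigmoid(X @ w + b), 2))        # approaches [0, 0, 0, 1]
```

Backpropagation generalizes this same update rule to networks with hidden layers.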

              -

This article is based on the book "Introduction to Neural Networks Using MATLAB 6.0" by S N Sivanandam, S Sumathi, and S N Deepa[^1^] [^2^], published by McGraw Hill Education (India) Private Limited in 2006. The book is part of the Computer Engineering Series and has received positive reviews from readers[^2^].

              -

              Neural Network Applications

              -

              Neural networks have a wide range of applications in various domains, such as bioinformatics, robotics, communication, image processing, and healthcare. Some of the examples of neural network applications are:

              -
                -
              • Bioinformatics: Neural networks can be used for analyzing biological data, such as DNA sequences, protein structures, gene expression, and drug design. For instance, neural networks can help identify potential drug targets, predict protein functions, and classify diseases based on genomic data[^1^].
              • -
              • Robotics: Neural networks can be used for controlling robots, such as autonomous vehicles, drones, and humanoid robots. For instance, neural networks can help robots navigate complex environments, avoid obstacles, recognize objects, and interact with humans[^1^].
              • -
              • Communication: Neural networks can be used for enhancing communication systems, such as wireless networks, optical networks, and satellite networks. For instance, neural networks can help optimize network performance, improve signal quality, and reduce interference[^1^].
              • -
              • Image processing: Neural networks can be used for processing images, such as photos, videos, and medical images. For instance, neural networks can help enhance image quality, detect faces and objects, segment images, and generate captions[^2^].
              • -
              • Healthcare: Neural networks can be used for improving healthcare services, such as diagnosis, prognosis, treatment, and prevention. For instance, neural networks can help diagnose diseases based on symptoms and test results, predict outcomes based on medical history and risk factors, recommend treatments based on patient preferences and clinical guidelines, and prevent diseases based on lifestyle and genetic factors[^3^].
              • -
              -

              Neural networks are also used for other applications such as finance, marketing, education, gaming, and art. Neural networks are constantly evolving and improving as more data and computing power become available. Neural networks have the potential to revolutionize many fields and industries in the near future.

              81aa517590
              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/3D Sky Models for All Your Creative Projects.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/3D Sky Models for All Your Creative Projects.md deleted file mode 100644 index e837e3c19026b1b30a2e40cddd8cd65f2199dcc0..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/3D Sky Models for All Your Creative Projects.md +++ /dev/null @@ -1,173 +0,0 @@ - -

              3D Sky: What Is It and Why You Should Try It

              -

              Have you ever wondered what it would be like to see the sky in three dimensions? To feel like you are flying among the clouds, stars, and planets? To create your own custom sky scenes with realistic lighting, shadows, and movement? If so, then you might be interested in learning more about 3D sky.

              -

              3D sky is a term that refers to any representation of the sky that uses three-dimensional graphics, models, or effects. Unlike traditional 2D sky images or videos, which are flat and static, 3D sky can create a sense of depth, perspective, and dynamism. You can use 3D sky for various purposes, such as enhancing your visual experience, expressing your creativity, or telling a story.

              -

              3d sky


Download Zip: https://bltlly.com/2uOk3C



              -

              In this article, we will show you some examples of how you can use 3D sky in different ways. We will also teach you how to create and animate your own 3D sky using Adobe After Effects, a popular software for motion graphics and visual effects. Finally, we will share some tips on how to find and use free 3D sky models and wallpapers for your projects.

              -

              How to Create and Animate 3D Sky in After Effects

              -

              One of the easiest and most versatile ways to create your own 3D sky is by using After Effects. After Effects is a software that allows you to create stunning animations, effects, and compositions using layers, keyframes, expressions, and plugins. You can use After Effects to create realistic or stylized clouds, stars, planets, sunsets, rainbows, auroras, and more.

              -

              To create a basic 3D cloud animation in After Effects, you will need to follow these steps:

              -

              Creating Your Shape Mask

              -

              The first step is to create a shape mask that will define the outline of your cloud. To do this, you will need to:

              -
                -
              • Create a new composition with the dimensions and duration of your choice.
              • -
              • Create a new solid layer with any color and name it "Cloud".
              • -
              • Select the Ellipse Tool from the toolbar and draw a large oval on the solid layer. This will create a mask on the layer.
              • -
              • Open the Mask properties of the layer and set the Feather to a high value, such as 200 pixels. This will create a soft edge around the mask.
              • -
              • Adjust the Mask Expansion to a negative value, such as -50 pixels. This will create some space around the edge of the mask.
              • -
              -

              You should now have a shape mask that looks like a fluffy cloud.

              -

              Creating a Moving Cloud

              -

              The next step is to add some texture and movement to your cloud. To do this, you will need to:

              -


              -
                -
              • Select the Cloud layer and go to Effect > Distort > Turbulent Displace. This will add some noise and distortion to the layer.
              • -
              • Adjust the Amount and Size of the effect to create some variation in the shape of the cloud. You can also change the Complexity and Evolution settings to change the look of the noise.
              • -
              • To make the cloud move, you will need to use expressions. Expressions are snippets of code that can control various properties of your layers. To add an expression, hold down Alt (Windows) or Option (Mac) and click on the stopwatch icon next to any property.
              • -
              • To make the cloud move horizontally, you will need to add an expression to the Offset Turbulence property of the Turbulent Displace effect. The expression should look something like this: [time * 50, value[1]]. This means that the x value of the offset will increase by 50 pixels per second, while the y value will remain unchanged.
              • -
              • To make the cloud move vertically, you will need to add an expression to the Position property of the Cloud layer. The expression should look something like this: [value[0], 540 + Math.sin(time) * 50]. This means that the x value of the position will remain unchanged, while the y value will oscillate between 490 and 590 pixels, creating a sinusoidal motion.
              • -
              -

              You should now have a moving cloud that looks more realistic and dynamic.

              Adding Controls for Multiple Clouds

              -

              The final step is to add some controls that will allow you to create and adjust multiple clouds in your composition. To do this, you will need to:

              • Select the Cloud layer and go to Layer > Pre-compose. Name the new composition "Cloud Precomp" and make sure to move all attributes into the new composition.
              • Open the Cloud Precomp and go to Composition > Composition Settings. Change the duration to 10 seconds and click OK.
              • Select the Cloud layer and go to Layer > Time > Enable Time Remapping. This will allow you to change the speed and duration of the layer.
              • Add an expression to the Time Remap property of the Cloud layer. The expression should look something like this: loopOut("cycle"). This means that the layer will loop indefinitely, creating a seamless animation.
              • Go back to the main composition and duplicate the Cloud Precomp layer as many times as you want. You can rename each layer to keep track of them.
              • Select each Cloud Precomp layer and adjust the following properties: Scale, Position, Rotation, and Opacity. You can use different values for each layer to create some variation and depth in your sky scene.

              You should now have a 3D sky with multiple clouds that you can customize and animate as you wish.
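              Two more one-liners can save manual work at this stage. The first is the looping expression mentioned in the steps above; the second is an optional extra (an assumption on my part, not part of the original steps) that gives each duplicated precomp layer a slow, slightly different drift without hand-keyframing it.

              ```javascript
              // Time Remap (on the layer where you enabled time remapping):
              // repeat the 10-second animation indefinitely
              loopOut("cycle")

              // Position (on each duplicated Cloud Precomp layer, optional):
              // wander around the current position, roughly one move every 20 seconds, up to 20 px
              wiggle(0.05, 20)
              ```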


              How to Use 3D Sky Models in Your Projects


              If you want to create more complex or realistic 3D sky scenes, you might want to use 3D sky models instead of 2D layers. 3D sky models are digital representations of the sky that use polygons, vertices, edges, faces, and textures. You can use 3D sky models to create stunning images or videos of the sky with different elements, such as mountains, trees, buildings, birds, planes, etc.


              To use 3D sky models in your projects, you will need to follow these steps:

              Where to Find Free 3D Sky Models Online


              There are many websites that offer free 3D sky models for download in various formats, such as OBJ, FBX, 3DS, STL, etc. Some of these websites are:

              • Free3D: A website that has over 2000 free 3D sky models, ranging from realistic to cartoonish.
              • CGTrader: A website that has over 1000 free 3D sky models, including some low-poly and stylized ones.
              • Sketchfab: A website that has over 800 free 3D sky models, with some interactive and animated ones.
              • TurboSquid: A website that has over 500 free 3D sky models, with some high-quality and detailed ones.
              • Blend Swap: A website that has over 300 free 3D sky models, all made with Blender.

              You can browse these websites and find the 3D sky model that suits your needs and preferences. You can also use the search filters and categories to narrow down your results.


              How to Customize and Animate 3D Sky Models


              Once you have downloaded the 3D sky model of your choice, you will need to import it into your software of choice. Depending on the software you use, the steps may vary slightly, but the general process is similar. You will need to:

              • Open your software and create a new project or scene.
              • Import the 3D sky model file into your project or scene. You may need to adjust the scale, orientation, and position of the model to fit your scene.
              • Apply materials, textures, lighting, and shadows to the model. You can use the default settings or customize them according to your preferences. You can also add other elements, such as fog, haze, or glow, to create different atmospheres.
              • Animate the model using keyframes, curves, and expressions. You can animate the position, rotation, scale, opacity, or any other property of the model. You can also use plugins or scripts to create more complex or realistic animations.
              • Render and export your final image or video. You can choose the resolution, format, quality, and frame rate of your output. You can also add some post-processing effects, such as color correction, blur, or noise reduction, to enhance your result.

              You should now have a stunning image or video of your 3D sky model that you can use for your projects.
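              The animation step in particular depends heavily on the software you picked. As one concrete illustration (an assumed example; the original steps do not name a specific tool), in an expression-based tool such as After Effects a slow, endless rotation of the layer holding a sky dome can be a single line on its Y Rotation property:

              ```javascript
              // Y Rotation (3D layer holding the sky model): 5 degrees per second, forever
              time * 5
              ```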

              How to Use 3D Sky Wallpapers for Your Desktop or Mobile Device


              If you want to enjoy the beauty of 3D sky without creating or animating it yourself, you might want to use 3D sky wallpapers for your desktop or mobile device. 3D sky wallpapers are images or videos of 3D sky scenes that you can set as your background or screensaver. You can use 3D sky wallpapers to decorate your device, relax your eyes, or inspire your mood.


              To use 3D sky wallpapers for your device, you will need to follow these steps:


              Where to Find Free 3D Sky Wallpapers Online


              There are many websites that offer free 3D sky wallpapers for download in various resolutions, such as HD, 4K, 8K, etc. Some of these websites are:

              • WallpaperAccess: A website that has over 1000 free 3D sky wallpapers, with some animated and live ones.
              • WallpaperCave: A website that has over 800 free 3D sky wallpapers, with some realistic and abstract ones.
              • WallpaperSafari: A website that has over 600 free 3D sky wallpapers, with some colorful and dark ones.
              • WallpaperFlare: A website that has over 400 free 3D sky wallpapers, with some artistic and minimalist ones.
              • Unsplash: A website that has over 200 free 3D sky wallpapers, all in high quality and royalty-free.

              You can browse these websites and find the 3D sky wallpaper that matches your taste and style. You can also use the search filters and categories to narrow down your results.


              How to Set Up and Change Your Wallpaper


              Once you have downloaded the 3D sky wallpaper of your choice, you will need to set it as your wallpaper on your device. Depending on the device and operating system you use, the steps may vary slightly, but the general process is similar. You will need to:

              • Download and save the wallpaper on your device. You may need to unzip the file if it is compressed.
              • Access and change your wallpaper settings on your device. You can usually find these settings in the display, personalization, or appearance options of your device.
              • Select the wallpaper file from your device storage and apply it as your wallpaper. You may need to adjust the fit, position, or orientation of the wallpaper to suit your screen size and aspect ratio.
              • Enjoy your new 3D sky wallpaper on your device. You can change it anytime you want by repeating the same steps.

              You should now have a beautiful 3D sky wallpaper on your device that you can admire and enjoy.


              Conclusion


              In this article, we have shown you what 3D sky is and why you should try it. We have also taught you how to create and animate your own 3D sky using After Effects, how to use 3D sky models in your projects, and how to use 3D sky wallpapers for your device. We hope that you have learned something new and useful from this article.


              Now that you know more about 3D sky, why not give it a try yourself? You can use any of the methods or resources we have mentioned in this article to create or use 3D sky for your own purposes. Whether you want to enhance your visual experience, express your creativity, or tell a story, 3D sky can help you achieve your goals.


              So what are you waiting for? Start exploring the wonders of 3D sky today!


              FAQs


              Here are some frequently asked questions about 3D sky:

              1. What is the difference between 3D sky and skybox?

                A skybox is a type of 3D sky that uses a cube or a sphere with six images mapped onto each face. The images are usually taken from different angles of a real or simulated environment. A skybox creates an illusion of a distant background that surrounds the viewer. A skybox is often used in video games or virtual reality applications.
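                As a concrete illustration of that idea (the answer above does not name any particular engine, so using the three.js library here is an assumption), a skybox is often nothing more than six images loaded into a cube texture and set as the scene background:

                ```javascript
                // Assumed three.js sketch: six face images form the distant sky that surrounds the viewer
                import * as THREE from 'three';

                const scene = new THREE.Scene();
                scene.background = new THREE.CubeTextureLoader().load([
                  'sky_px.jpg', 'sky_nx.jpg', // +x / -x faces
                  'sky_py.jpg', 'sky_ny.jpg', // +y / -y faces
                  'sky_pz.jpg', 'sky_nz.jpg', // +z / -z faces (file names are placeholders)
                ]);
                ```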

              2. What are some advantages of using 3D sky over photos or videos of the sky?

                Some advantages of using 3D sky over photos or videos of the sky are:

                • You can create custom and unique sky scenes that match your vision and style.
                • You can animate and control the movement, lighting, and shadows of the sky elements.
                • You can adjust the scale, position, and orientation of the sky elements to create different perspectives and views.
                • You can use 3D sky models or wallpapers that are already made by other artists or designers.

              3. What are some challenges or limitations of creating and using 3D sky?

                Some challenges or limitations of creating and using 3D sky are:

                • You may need to learn how to use software, tools, or plugins that can create or edit 3D sky.
                • You may need to spend more time and resources to create or download high-quality 3D sky models or wallpapers.
                • You may need to consider the compatibility and performance of your device or software when using 3D sky.

              4. What are some tips or best practices for creating realistic and beautiful 3D sky?

                Some tips or best practices for creating realistic and beautiful 3D sky are:

                • Use reference images or videos of real or simulated sky scenes to inspire your design and style.
                • Use a variety of colors, shapes, and textures to create contrast and diversity in your sky scene.
                • Use lighting and shadows to create depth and mood in your sky scene.
                • Use animation and movement to create dynamism and interest in your sky scene.
                • Use composition and framing to create balance and harmony in your sky scene.

              5. What are some tools or resources that can help you learn more about 3D sky?

                Some tools or resources that can help you learn more about 3D sky are:

                • After Effects: A software that can help you create and animate 3D sky using layers, keyframes, expressions, and plugins.
                • Blender: A software that can help you create and edit 3D sky models using polygons, vertices, edges, faces, and textures.
                • YouTube: A website that has many tutorials, examples, and tips on how to create and use 3D sky in different software and applications.
                • Reddit: A website that has many communities, discussions, and feedback on 3D sky topics, such as r/AfterEffects, r/blender, r/3Dmodeling, etc.
                • Pinterest: A website that has many images and videos of 3D sky scenes that can inspire your creativity and style.

                197e85843d
                \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Alchemy of Souls OST - All the Songs You Need to Hear from the Amazing Drama.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Alchemy of Souls OST - All the Songs You Need to Hear from the Amazing Drama.md deleted file mode 100644 index ced972ba528a61681f9c7210cea03066444fe050..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Alchemy of Souls OST - All the Songs You Need to Hear from the Amazing Drama.md +++ /dev/null @@ -1,148 +0,0 @@ - -

                How to Download Lagu OST Alchemy of Souls


                If you are a fan of Korean dramas, you might have heard of Alchemy of Souls, a fantasy romance series that aired in 2022 and 2023. The drama has received rave reviews from critics and viewers alike, thanks to its captivating story, stunning visuals, and amazing soundtrack. In this article, we will tell you everything you need to know about Alchemy of Souls, its original soundtrack (OST), and how to download lagu OST Alchemy of Souls legally and safely.


                download lagu ost alchemy of souls


                Download: https://bltlly.com/2uOl9C




                What is Alchemy of Souls?


                Alchemy of Souls is a South Korean television series that stars Lee Jae-wook, Jung So-min, Go Youn-jung, and Hwang Min-hyun. Written by the Hong sisters, it depicts the stories of young mages dealing with heaven and earth. It aired from June 18, 2022 to January 8, 2023, on tvN's Saturdays and Sundays at 21:10 (KST). It is also available for streaming on Netflix in selected regions.


                A brief introduction to the Korean drama series


                The series is set in a fictional country called Daeho, where magic exists but is forbidden by law. It follows the story of an elite warrior named Nak-su whose soul is accidentally trapped inside the weak body of Mu-deok, a mysterious girl. She becomes entangled with Jang Uk, who is from a noble family; she becomes both his servant and his master, teaching Uk her skills as the two fall in love with each other.


                The main characters and their roles


                Here are some of the main characters and their roles in Alchemy of Souls:


                • Nak-su / Mu-deok / Jin Bu-yeon: Played by Go Youn-jung and Jung So-min, she is a powerful mage who can control fire. She was born as Jin Bu-yeon, the heiress of Jinyowon, a secret organization that protects mages. However, she was kidnapped by Jin Mu, a soul shifter who wanted to use her power for evil. She escaped and lived as Nak-su, a warrior who was loyal to Jang Uk. Due to a forbidden spell called alchemy of souls, she switched bodies with Mu-deok, a girl who was cursed with a weak body. She later regained her original body as Jin Bu-yeon.
                • Jang Uk: Played by Lee Jae-wook, he is a nobleman who is also a mage. He can control ice and water. He was in love with Nak-su, but thought she died after a tragic incident. He later became a hunter of soul shifters who wanted to avenge Nak-su's death. He then met Jin Bu-yeon, who resembled Nak-su.
                • Seo Yul: Played by Hwang Min-hyun, he is a mage who can control wind and lightning. He is the son of Seo Jin, the leader of the Soul Shifters, a group of mages who use alchemy of souls to switch bodies and gain power. He was loyal to his father, but he also had feelings for Jin Bu-yeon.
                • Lee Ha-na: Played by Kim Ji-won, she is a mage who can control earth and plants. She is the daughter of Lee Seung-ho, the head of the Royal Mage Academy, a prestigious institution that trains mages. She was Jang Uk's childhood friend and fiancée, but she also had a crush on Seo Yul.

                The plot and the themes of the story


                The plot of Alchemy of Souls revolves around the conflicts and romance between the main characters, as well as the secrets and mysteries behind their identities and destinies. The story explores the themes of love, loyalty, betrayal, sacrifice, revenge, and redemption. It also touches on the issues of social class, discrimination, corruption, and power in the world of mages.


                Why is the OST of Alchemy of Souls popular?


                One of the reasons why Alchemy of Souls is such a hit is because of its OST, which consists of 16 songs that perfectly match the mood and tone of the drama. The OST features some of the most popular singers and composers in Korea, as well as various genres and styles that suit different scenes and emotions. The OST songs also convey the messages and themes of the drama, such as love, pain, hope, and courage.


                The singers and composers of the OST


                Here are some of the singers and composers who contributed to the OST of Alchemy of Souls:

                • "Alchemy of Souls" by Lee Sun-hee
                • "Fire and Ice" by Baekhyun & Taeyeon
                • "Soul Shifter" by Han Seung-woo
                • "Blossom" by Chungha & Paul Kim
                • "Fate" by Gummy & Kim Jae-hwan
                • "Stay With Me" by Ailee & Ravi (VIXX)

                The genres and styles of the OST songs


                The OST songs of Alchemy of Souls cover various genres and styles that reflect the diverse aspects of the drama. Some of the genres and styles include:

                • Ballad: This genre is characterized by slow tempo, emotional vocals, and sentimental lyrics. It is often used to express the feelings of love, longing, sadness, or regret. Some examples of ballad songs in the OST are Alchemy of Souls by Lee Sun-hee, Fate by Gummy & Kim Jae-hwan, and Stay With Me by Ailee & Ravi (VIXX).
                • R&B: This genre is influenced by rhythm and blues, soul, funk, and hip hop. It features smooth vocals, catchy melodies, and groovy beats. It is often used to convey the feelings of passion, desire, or attraction. Some examples of R&B songs in the OST are Fire and Ice by Baekhyun & Taeyeon, Soul Shifter by Han Seung-woo, and Love Me Like That by Crush & Joy (Red Velvet).
                • Pop: This genre is popular and mainstream, with catchy hooks, upbeat rhythms, and simple lyrics. It is often used to convey the feelings of happiness, excitement, or fun. Some examples of pop songs in the OST are Blossom by Chungha & Paul Kim, Fly High by NCT Dream & Weki Meki, and Magic by TXT & ITZY.
                • Rock: This genre is influenced by rock and roll, hard rock, metal, or punk. It features electric guitars, drums, bass, and powerful vocals.

                The OST songs also convey the emotions and messages of the drama. For example:

                • Hope: The OST songs express the hope that the main characters have, such as finding their true selves, fulfilling their destinies, or saving their world. They show how hope can motivate, empower, and enlighten. Some examples of hope songs in the OST are Blossom by Chungha & Paul Kim, Fly High by NCT Dream & Weki Meki, and Magic by TXT & ITZY.
                • Courage: The OST songs demonstrate the courage that the main characters display, such as fighting for their beliefs, protecting their loved ones, or facing their fears. They show how courage can challenge, change, and inspire. Some examples of courage songs in the OST are Fire and Ice by Baekhyun & Taeyeon, Burn It Up by Stray Kids & (G)I-DLE, and Rise Up by ONEUS & ONEWE.

                How to download lagu OST Alchemy of Souls legally and safely?


                Now that you know more about Alchemy of Souls and its OST, you might be wondering how to download lagu OST Alchemy of Souls legally and safely. There are many ways to enjoy the OST, such as streaming it online, buying it online, or downloading it for free. However, not all methods are legal or safe, so you need to be careful and responsible. Here are some of the official platforms and sources for streaming and downloading the OST, as well as the steps and tips for downloading the OST.


                The official platforms and sources for streaming and downloading the OST


                One of the easiest and safest ways to listen to the OST of Alchemy of Souls is to stream it online from the official platforms and sources. Some of these platforms and sources include:

                • YouTube: You can watch the official music videos and lyric videos of the OST songs on YouTube. You can also find playlists and compilations of the OST songs on YouTube. However, you cannot download the songs from YouTube unless you have a YouTube Premium subscription.
                • Spotify: You can stream the OST songs on Spotify, a popular music streaming service that offers millions of songs and podcasts. You can also create your own playlists and share them with your friends. However, you cannot download the songs from Spotify unless you have a Spotify Premium subscription.
                • Melon: You can stream and download the OST songs on Melon, a leading music platform in Korea that offers various music services and content. You can also access charts, rankings, recommendations, and reviews of the OST songs on Melon. However, you need to have a Melon account and pay for a Melon subscription to stream and download the songs.
                • iTunes: You can buy and download the OST songs on iTunes, a media player and library that allows you to manage your music collection. You can also sync your music across your devices with iCloud. However, you need to have an Apple ID and pay for each song or album you want to buy.

                The steps and tips for downloading the OST


                If you want to download lagu OST Alchemy of Souls for free, you might be tempted to use some unofficial or illegal sources, such as torrent sites, file-sharing sites, or online converters. However, these methods are not recommended because they might violate the intellectual property rights of the artists and producers, expose your device to viruses or malware, or compromise your personal information or security. Therefore, it is better to use the official platforms and sources mentioned above, or other legal alternatives that offer free or discounted downloads.


                Here are some of the steps and tips for downloading lagu OST Alchemy of Souls legally and safely:

                1. Choose your preferred platform or source: Depending on your preferences and needs, you can choose one of the official platforms or sources for streaming and downloading the OST of Alchemy of Souls. For example, if you want to watch the music videos, you can choose YouTube. If you want to access a large library of songs and podcasts, you can choose Spotify. If you want to support the Korean music industry, you can choose Melon. If you want to own the songs and sync them across your devices, you can choose iTunes.
                2. Create an account and subscribe if needed: Depending on the platform or source you choose, you might need to create an account and subscribe to access the OST songs. For example, if you choose YouTube, you need to have a Google account and a YouTube Premium subscription to download the songs. If you choose Spotify, you need to have a Spotify account and a Spotify Premium subscription to download the songs. If you choose Melon, you need to have a Melon account and a Melon subscription to stream and download the songs. If you choose iTunes, you need to have an Apple ID to buy and download the songs.
                3. Search for the OST songs or albums: Once you have chosen your platform or source and created your account and subscription if needed, you can search for the OST songs or albums of Alchemy of Souls. You can use the search bar or the browse feature to find the OST songs or albums. You can also use filters or categories to narrow down your search results.
                4. Select and download the OST songs or albums: After you have found the OST songs or albums of Alchemy of Souls, you can select and download them. You can either download individual songs or entire albums, depending on your preference and availability. You can also preview the songs before downloading them. You can check the download progress and status on your device.

                The benefits and drawbacks of downloading the OST


                Downloading lagu OST Alchemy of Souls has its benefits and drawbacks. Some of the benefits include:

                • You can listen to the OST offline: By downloading the OST songs or albums, you can listen to them anytime and anywhere without an internet connection. This can save your data usage and battery life, as well as avoid buffering or interruptions.
                • You can customize your playlist: By downloading the OST songs or albums, you can create your own playlist and arrange them in any order you like. You can also add other songs from other sources or artists that match your mood or taste.
                • You can support the artists and producers: By downloading the OST songs or albums from official platforms or sources, you can show your appreciation and support for the artists and producers who worked hard to create them. You can also help them earn revenue and recognition for their work.

                Some of the drawbacks include:

                • You might need to pay for some platforms or sources: As mentioned above, some of the official platforms or sources for streaming and downloading the OST of Alchemy of Souls require you to have an account and a subscription, which might cost you some money. For example, if you choose YouTube, you need to pay $11.99 per month for YouTube Premium. If you choose Spotify, you need to pay $9.99 per month for Spotify Premium. If you choose Melon, you need to pay 10,900 won per month for a Melon subscription. If you choose iTunes, you need to pay $1.29 per song or $9.99 per album.
                • You might need to have enough storage space on your device: By downloading the OST songs or albums, you might need to have enough storage space on your device to store them. Depending on the quality and quantity of the songs or albums, they might take up a lot of space on your device. For example, if you download the entire OST album of Alchemy of Souls, which has 16 songs and lasts for 58 minutes, it might take up about 140 MB of space on your device.
                • You might not be able to access the latest updates or features: By downloading the OST songs or albums, you might not be able to access the latest updates or features that the official platforms or sources offer. For example, if you download the OST songs or albums from iTunes, you might not be able to access the lyrics, comments, ratings, or recommendations that other platforms or sources offer. You might also miss out on any new releases or additions that the artists or producers might make.

                Conclusion


                In conclusion, Alchemy of Souls is a Korean drama series that has a captivating story, stunning visuals, and amazing soundtrack. The OST of Alchemy of Souls consists of 16 songs that feature popular singers and composers, various genres and styles, and emotions and messages that match the mood and tone of the drama. You can download lagu OST Alchemy of Souls legally and safely from official platforms and sources, such as YouTube, Spotify, Melon, or iTunes. However, you need to be aware of the benefits and drawbacks of downloading the OST songs or albums.


                If you are interested in Alchemy of Souls and its OST, we encourage you to watch the drama and enjoy the music. You can also share your thoughts and opinions with other fans and listeners online. We hope this article has helped you learn more about Alchemy of Souls and how to download lagu OST Alchemy of Souls legally and safely.


                FAQs


                Where can I watch Alchemy of Souls online?


                You can watch Alchemy of Souls online on tvN's official website or on Netflix in selected regions. You can also find some clips and highlights of the drama on YouTube.


                How many episodes are there in Alchemy of Souls?


                There are 24 episodes in Alchemy of Souls, each lasting for about 70 minutes. The drama aired from June 18, 2022 to January 8, 2023, on tvN's Saturdays and Sundays at 21:10 (KST).


                Who are the actors and actresses in Alchemy of Souls?


                The main cast of Alchemy of Souls includes Lee Jae-wook as Jang Uk, Jung So-min as Mu-deok / Nak-su, Go Youn-jung as Nak-su / Jin Bu-yeon, Hwang Min-hyun as Seo Yul, and Kim Ji-won as Lee Ha-na. The supporting cast includes Kim Sung-oh as Jin Mu, Lee Il-hwa as Seo Jin, Choi Won-young as Lee Seung-ho, and Park Ji-young as Jang Mi-rae.


                What are some of the best OST songs from Alchemy of Souls?


                Some of the best OST songs from Alchemy of Souls are Alchemy of Souls by Lee Sun-hee, Fire and Ice by Baekhyun & Taeyeon, Soul Shifter by Han Seung-woo, Blossom by Chungha & Paul Kim, and Fate by Gummy & Kim Jae-hwan. These songs have received positive feedback from fans and critics, as well as high rankings on music charts.


                Is there a season 2 of Alchemy of Souls?


                As of now, there is no official confirmation or announcement about a season 2 of Alchemy of Souls. However, some fans and media outlets have speculated that there might be a possibility of a season 2, based on the popularity and success of the drama, as well as the open-ended finale that left some questions unanswered.

                197e85843d
                \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/cli/main_parser.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/cli/main_parser.py deleted file mode 100644 index 3666ab04ca6460be9bc6944c0f045be7ff44c365..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/cli/main_parser.py +++ /dev/null @@ -1,87 +0,0 @@ -"""A single place for constructing and exposing the main parser -""" - -import os -import sys -from typing import List, Tuple - -from pip._internal.cli import cmdoptions -from pip._internal.cli.parser import ConfigOptionParser, UpdatingDefaultsHelpFormatter -from pip._internal.commands import commands_dict, get_similar_commands -from pip._internal.exceptions import CommandError -from pip._internal.utils.misc import get_pip_version, get_prog - -__all__ = ["create_main_parser", "parse_command"] - - -def create_main_parser() -> ConfigOptionParser: - """Creates and returns the main parser for pip's CLI""" - - parser = ConfigOptionParser( - usage="\n%prog [options]", - add_help_option=False, - formatter=UpdatingDefaultsHelpFormatter(), - name="global", - prog=get_prog(), - ) - parser.disable_interspersed_args() - - parser.version = get_pip_version() - - # add the general options - gen_opts = cmdoptions.make_option_group(cmdoptions.general_group, parser) - parser.add_option_group(gen_opts) - - # so the help formatter knows - parser.main = True # type: ignore - - # create command listing for description - description = [""] + [ - f"{name:27} {command_info.summary}" - for name, command_info in commands_dict.items() - ] - parser.description = "\n".join(description) - - return parser - - -def parse_command(args: List[str]) -> Tuple[str, List[str]]: - parser = create_main_parser() - - # Note: parser calls disable_interspersed_args(), so the result of this - # call is to split the initial args into the general options before the - # subcommand and everything else. 
- # For example: - # args: ['--timeout=5', 'install', '--user', 'INITools'] - # general_options: ['--timeout==5'] - # args_else: ['install', '--user', 'INITools'] - general_options, args_else = parser.parse_args(args) - - # --version - if general_options.version: - sys.stdout.write(parser.version) - sys.stdout.write(os.linesep) - sys.exit() - - # pip || pip help -> print_help() - if not args_else or (args_else[0] == "help" and len(args_else) == 1): - parser.print_help() - sys.exit() - - # the subcommand name - cmd_name = args_else[0] - - if cmd_name not in commands_dict: - guess = get_similar_commands(cmd_name) - - msg = [f'unknown command "{cmd_name}"'] - if guess: - msg.append(f'maybe you meant "{guess}"') - - raise CommandError(" - ".join(msg)) - - # all the args without the subcommand - cmd_args = args[:] - cmd_args.remove(cmd_name) - - return cmd_name, cmd_args diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/encoding.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/encoding.py deleted file mode 100644 index 008f06a79bf598b149bdccb73e572d13331a1631..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/encoding.py +++ /dev/null @@ -1,36 +0,0 @@ -import codecs -import locale -import re -import sys -from typing import List, Tuple - -BOMS: List[Tuple[bytes, str]] = [ - (codecs.BOM_UTF8, "utf-8"), - (codecs.BOM_UTF16, "utf-16"), - (codecs.BOM_UTF16_BE, "utf-16-be"), - (codecs.BOM_UTF16_LE, "utf-16-le"), - (codecs.BOM_UTF32, "utf-32"), - (codecs.BOM_UTF32_BE, "utf-32-be"), - (codecs.BOM_UTF32_LE, "utf-32-le"), -] - -ENCODING_RE = re.compile(rb"coding[:=]\s*([-\w.]+)") - - -def auto_decode(data: bytes) -> str: - """Check a bytes string for a BOM to correctly detect the encoding - - Fallback to locale.getpreferredencoding(False) like open() on Python3""" - for bom, encoding in BOMS: - if data.startswith(bom): - return data[len(bom) :].decode(encoding) - # Lets check the first two lines as in PEP263 - for line in data.split(b"\n")[:2]: - if line[0:1] == b"#" and ENCODING_RE.search(line): - result = ENCODING_RE.search(line) - assert result is not None - encoding = result.groups()[0].decode("ascii") - return data.decode(encoding) - return data.decode( - locale.getpreferredencoding(False) or sys.getdefaultencoding(), - ) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/__init__.py deleted file mode 100644 index e91ad61822cd36d5c986e74fa8ecfb2d255b4866..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/__init__.py +++ /dev/null @@ -1,93 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .enums import InputState -from .universaldetector import UniversalDetector -from .version import VERSION, __version__ - -__all__ = ["UniversalDetector", "detect", "detect_all", "__version__", "VERSION"] - - -def detect(byte_str): - """ - Detect the encoding of the given byte string. - - :param byte_str: The byte sequence to examine. - :type byte_str: ``bytes`` or ``bytearray`` - """ - if not isinstance(byte_str, bytearray): - if not isinstance(byte_str, bytes): - raise TypeError( - f"Expected object of type bytes or bytearray, got: {type(byte_str)}" - ) - byte_str = bytearray(byte_str) - detector = UniversalDetector() - detector.feed(byte_str) - return detector.close() - - -def detect_all(byte_str, ignore_threshold=False): - """ - Detect all the possible encodings of the given byte string. - - :param byte_str: The byte sequence to examine. - :type byte_str: ``bytes`` or ``bytearray`` - :param ignore_threshold: Include encodings that are below - ``UniversalDetector.MINIMUM_THRESHOLD`` - in results. - :type ignore_threshold: ``bool`` - """ - if not isinstance(byte_str, bytearray): - if not isinstance(byte_str, bytes): - raise TypeError( - f"Expected object of type bytes or bytearray, got: {type(byte_str)}" - ) - byte_str = bytearray(byte_str) - - detector = UniversalDetector() - detector.feed(byte_str) - detector.close() - - if detector.input_state == InputState.HIGH_BYTE: - results = [] - probers = [] - for prober in detector.charset_probers: - if hasattr(prober, "probers"): - probers.extend(p for p in prober.probers) - else: - probers.append(prober) - for prober in probers: - if ignore_threshold or prober.get_confidence() > detector.MINIMUM_THRESHOLD: - charset_name = prober.charset_name or "" - lower_charset_name = charset_name.lower() - # Use Windows encoding name instead of ISO-8859 if we saw any - # extra Windows-specific bytes - if lower_charset_name.startswith("iso-8859") and detector.has_win_bytes: - charset_name = detector.ISO_WIN_MAP.get( - lower_charset_name, charset_name - ) - results.append( - { - "encoding": charset_name, - "confidence": prober.get_confidence(), - "language": prober.language, - } - ) - if len(results) > 0: - return sorted(results, key=lambda result: -result["confidence"]) - - return [detector.result] diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/errors.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/errors.py deleted file mode 100644 index 0bcbe53ef59373c608e62ea285536f8b22b47ecb..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/errors.py +++ /dev/null @@ -1,34 +0,0 @@ -class ConsoleError(Exception): - """An error in console operation.""" - - -class StyleError(Exception): - """An error in styles.""" - - -class StyleSyntaxError(ConsoleError): - """Style was badly formatted.""" - - -class MissingStyle(StyleError): - """No such style.""" - - -class StyleStackError(ConsoleError): - """Style stack is invalid.""" - - -class NotRenderableError(ConsoleError): - """Object is not renderable.""" - - -class MarkupError(ConsoleError): - """Markup was badly formatted.""" - - -class 
LiveError(ConsoleError): - """Error related to Live display.""" - - -class NoAltScreen(ConsoleError): - """Alt screen mode was required.""" diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/_version.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/_version.py deleted file mode 100644 index c8ac29d0824b06eefacf037e153e2edd768cef6d..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/_version.py +++ /dev/null @@ -1,2 +0,0 @@ -# This file is protected via CODEOWNERS -__version__ = "1.26.10" diff --git a/spaces/tjeagle/Subaru/README.md b/spaces/tjeagle/Subaru/README.md deleted file mode 100644 index 5ddb2bf791d24d9c8a509971a4b6eb1ac8e5230e..0000000000000000000000000000000000000000 --- a/spaces/tjeagle/Subaru/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Subaru -emoji: 🏃 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tomg-group-umd/lm-watermarking/homoglyph_data/__init__.py b/spaces/tomg-group-umd/lm-watermarking/homoglyph_data/__init__.py deleted file mode 100644 index 62dc1e2de2de7df37b987b6cd862bf7be4629f7d..0000000000000000000000000000000000000000 --- a/spaces/tomg-group-umd/lm-watermarking/homoglyph_data/__init__.py +++ /dev/null @@ -1,40 +0,0 @@ -# This is data for homoglyph finding - -"""Original package info: - -Homoglyphs -* Get similar letters -* Convert string to ASCII letters -* Detect possible letter languages -* Detect letter UTF-8 group. - -# main package info -__title__ = 'Homoglyphs' -__version__ = '2.0.4' -__author__ = 'Gram Orsinium' -__license__ = 'MIT' - -# License: - -MIT License 2019 orsinium - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice (including the next -paragraph) shall be included in all copies or substantial portions of the -Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. 
- -""" diff --git a/spaces/tomofi/ABINet-OCR/modules/backbone.py b/spaces/tomofi/ABINet-OCR/modules/backbone.py deleted file mode 100644 index 434cc06473c58c9ba9e4b314f25d2e7ca837f944..0000000000000000000000000000000000000000 --- a/spaces/tomofi/ABINet-OCR/modules/backbone.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch -import torch.nn as nn -from fastai.vision import * - -from modules.model import _default_tfmer_cfg -from modules.resnet import resnet45 -from modules.transformer import (PositionalEncoding, - TransformerEncoder, - TransformerEncoderLayer) - - -class ResTranformer(nn.Module): - def __init__(self, config): - super().__init__() - self.resnet = resnet45() - - self.d_model = ifnone(config.model_vision_d_model, _default_tfmer_cfg['d_model']) - nhead = ifnone(config.model_vision_nhead, _default_tfmer_cfg['nhead']) - d_inner = ifnone(config.model_vision_d_inner, _default_tfmer_cfg['d_inner']) - dropout = ifnone(config.model_vision_dropout, _default_tfmer_cfg['dropout']) - activation = ifnone(config.model_vision_activation, _default_tfmer_cfg['activation']) - num_layers = ifnone(config.model_vision_backbone_ln, 2) - - self.pos_encoder = PositionalEncoding(self.d_model, max_len=8*32) - encoder_layer = TransformerEncoderLayer(d_model=self.d_model, nhead=nhead, - dim_feedforward=d_inner, dropout=dropout, activation=activation) - self.transformer = TransformerEncoder(encoder_layer, num_layers) - - def forward(self, images): - feature = self.resnet(images) - n, c, h, w = feature.shape - feature = feature.view(n, c, -1).permute(2, 0, 1) - feature = self.pos_encoder(feature) - feature = self.transformer(feature) - feature = feature.permute(1, 2, 0).view(n, c, h, w) - return feature diff --git a/spaces/tomofi/MMOCR/docs/en/tutorials/config.md b/spaces/tomofi/MMOCR/docs/en/tutorials/config.md deleted file mode 100644 index 41098a02280ec07ce2a73602c056d36b03424283..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/docs/en/tutorials/config.md +++ /dev/null @@ -1,354 +0,0 @@ -# Learn about Configs - -We incorporate modular and inheritance design into our config system, which is convenient to conduct various experiments. -If you wish to inspect the config file, you may run `python tools/misc/print_config.py /PATH/TO/CONFIG` to see the complete config. - -## Modify config through script arguments - -When submitting jobs using "tools/train.py" or "tools/test.py", you may specify `--cfg-options` to in-place modify the config. - -- Update config keys of dict chains. - - The config options can be specified following the order of the dict keys in the original config. - For example, `--cfg-options model.backbone.norm_eval=False` changes the all BN modules in model backbones to `train` mode. - -- Update keys inside a list of configs. - - Some config dicts are composed as a list in your config. For example, the training pipeline `data.train.pipeline` is normally a list - e.g. `[dict(type='LoadImageFromFile'), ...]`. If you want to change `'LoadImageFromFile'` to `'LoadImageFromNdarry'` in the pipeline, - you may specify `--cfg-options data.train.pipeline.0.type=LoadImageFromNdarry`. - -- Update values of list/tuples. - - If the value to be updated is a list or a tuple. For example, the config file normally sets `workflow=[('train', 1)]`. If you want to - change this key, you may specify `--cfg-options workflow="[(train,1),(val,1)]"`. 
Note that the quotation mark \" is necessary to - support list/tuple data types, and that **NO** white space is allowed inside the quotation marks in the specified value. - -## Config Name Style - -We follow the below style to name full config files (`configs/TASK/*.py`). Contributors are advised to follow the same style. - -``` -{model}_[ARCHITECTURE]_[schedule]_{dataset}.py -``` - -`{xxx}` is required field and `[yyy]` is optional. - -- `{model}`: model type like `dbnet`, `crnn`, etc. -- `[ARCHITECTURE]`: expands some invoked modules following the order of data flow, and the content depends on the model framework. The following examples show how it is generally expanded. - - For text detection tasks, key information tasks, and SegOCR in text recognition task: `{model}_[backbone]_[neck]_[schedule]_{dataset}.py` - - For other text recognition tasks, `{model}_[backbone]_[encoder]_[decoder]_[schedule]_{dataset}.py` - Note that `backbone`, `neck`, `encoder`, `decoder` are the names of modules, e.g. `r50`, `fpnocr`, etc. -- `{schedule}`: training schedule. For instance, `1200e` denotes 1200 epochs. -- `{dataset}`: dataset. It can either be the name of a dataset (`icdar2015`), or a collection of datasets for brevity (e.g. `academic` usually refers to a common practice in academia, which uses MJSynth + SynthText as training set, and IIIT5K, SVT, IC13, IC15, SVTP and CT80 as test set). - -Most configs are composed of basic _primitive_ configs in `configs/_base_`, where each _primitive_ config in different subdirectory has a slightly different name style. We present them as follows. - -- det_datasets, recog_datasets: `{dataset_name(s)}_[train|test].py`. If [train|test] is not specified, the config should contain both training and test set. - - There are two exceptions: toy_data.py and seg_toy_data.py. In recog_datasets, the first one works for most while the second one contains character level annotations and works for seg baseline only as of Dec 2021. -- det_models, recog_models: `{model}_[ARCHITECTURE].py`. -- det_pipelines, recog_pipelines: `{model}_pipeline.py`. -- schedules: `schedule_{optimizer}_{num_epochs}e.py`. - -## Config Structure - -For better config reusability, we break many of reusable sections of configs into `configs/_base_`. Now the directory tree of `configs/_base_` is organized as follows: - -``` -_base_ -├── det_datasets -├── det_models -├── det_pipelines -├── recog_datasets -├── recog_models -├── recog_pipelines -└── schedules -``` - -These _primitive_ configs are categorized by their roles in a complete config. Most of model configs are making full use of _primitive_ configs by including them as parts of `_base_` section. For example, [dbnet_r18_fpnc_1200e_icdar2015.py](https://github.com/open-mmlab/mmocr/blob/5a8859fe6666c096b75fa44db4f6c53d81a2ed62/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py) takes five _primitive_ configs from `_base_`: - -```python -_base_ = [ - '../../_base_/runtime_10e.py', - '../../_base_/schedules/schedule_sgd_1200e.py', - '../../_base_/det_models/dbnet_r18_fpnc.py', - '../../_base_/det_datasets/icdar2015.py', - '../../_base_/det_pipelines/dbnet_pipeline.py' -] -``` - -From these configs' names we can roughly know this config trains dbnet_r18_fpnc with sgd optimizer in 1200 epochs. It uses the origin dbnet pipeline and icdar2015 as the dataset. We encourage users to follow and take advantage of this convention to organize the config clearly and facilitate fair comparison across different _primitive_ configurations as well as models. 
- -Please refer to [mmcv](https://mmcv.readthedocs.io/en/latest/understand_mmcv/config.html) for detailed documentation. - -## Config File Structure - -### Model - -The parameter `"model"` is a python dictionary in the configuration file, which mainly includes information such as network structure and loss function. - -```{note} -The 'type' in the configuration file is not a constructed parameter, but a class name. -``` - -```{note} -We can also use models from MMDetection by adding `mmdet.` prefix to type name, or from other OpenMMLab projects in a similar way if their backbones are registered in registries. -``` - -#### Shared Section - -- `type`: Model name. - -#### Text Detection / Text Recognition / Key Information Extraction Model - -- `backbone`: Backbone configs. [Common Backbones](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.common.backbones), [TextRecog Backbones](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textrecog.backbones) -- `neck`: Neck network name. [TextDet Necks](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textdet.necks), [TextRecog Necks](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textrecog.necks). -- `bbox_head`: Head network name. Applicable to text detection, key information models and *some* text recognition models. [TextDet Heads](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textdet.dense_heads), [TextRecog Heads](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textrecog.heads), [KIE Heads](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.kie.heads). - - `loss`: Loss function type. [TextDet Losses](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textdet.losses), [KIE Losses](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.kie.losses) - - `postprocessor`: (TextDet only) Postprocess type. [TextDet Postprocessors](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textdet.postprocess) - -#### Text Recognition / Named Entity Extraction Model - -- `encoder`: Encoder configs. [TextRecog Encoders](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textrecog.encoders) -- `decoder`: Decoder configs. Applicable to text recognition models. [TextRecog Decoders](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textrecog.decoders) -- `loss`: Loss configs. Applicable to some text recognition models. [TextRecog Losses](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textrecog.losses) -- `label_convertor`: Convert outputs between text, index and tensor. Applicable to text recognition models. [Label Convertors](https://mmocr.readthedocs.io/en/latest/api.html#module-mmocr.models.textrecog.convertors) -- `max_seq_len`: The maximum sequence length of recognition results. Applicable to text recognition models. - -### Data & Pipeline - -The parameter `"data"` is a python dictionary in the configuration file, which mainly includes information to construct dataloader: - -- `samples_per_gpu` : the BatchSize of each GPU when building the dataloader -- `workers_per_gpu` : the number of threads per GPU when building dataloader -- `train | val | test` : config to construct dataset - - `type`: Dataset name. Check [dataset types](../dataset_types.md) for supported datasets. 
- -The parameter `evaluation` is also a dictionary, which is the configuration information of `evaluation hook`, mainly including evaluation interval, evaluation index, etc. - -```python -# dataset settings -dataset_type = 'IcdarDataset' # dataset name, -data_root = 'data/icdar2015' # dataset root -img_norm_cfg = dict( # Image normalization config to normalize the input images - mean=[123.675, 116.28, 103.53], # Mean values used to pre-training the pre-trained backbone models - std=[58.395, 57.12, 57.375], # Standard variance used to pre-training the pre-trained backbone models - to_rgb=True) # Whether to invert the color channel, rgb2bgr or bgr2rgb. -# train data pipeline -train_pipeline = [ # Training pipeline - dict(type='LoadImageFromFile'), # First pipeline to load images from file path - dict( - type='LoadAnnotations', # Second pipeline to load annotations for current image - with_bbox=True, # Whether to use bounding box, True for detection - with_mask=True, # Whether to use instance mask, True for instance segmentation - poly2mask=False), # Whether to convert the polygon mask to instance mask, set False for acceleration and to save memory - dict( - type='Resize', # Augmentation pipeline that resize the images and their annotations - img_scale=(1333, 800), # The largest scale of image - keep_ratio=True - ), # whether to keep the ratio between height and width. - dict( - type='RandomFlip', # Augmentation pipeline that flip the images and their annotations - flip_ratio=0.5), # The ratio or probability to flip - dict( - type='Normalize', # Augmentation pipeline that normalize the input images - mean=[123.675, 116.28, 103.53], # These keys are the same of img_norm_cfg since the - std=[58.395, 57.12, 57.375], # keys of img_norm_cfg are used here as arguments - to_rgb=True), - dict( - type='Pad', # Padding config - size_divisor=32), # The number the padded images should be divisible - dict(type='DefaultFormatBundle'), # Default format bundle to gather data in the pipeline - dict( - type='Collect', # Pipeline that decides which keys in the data should be passed to the detector - keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), # First pipeline to load images from file path - dict( - type='MultiScaleFlipAug', # An encapsulation that encapsulates the testing augmentations - img_scale=(1333, 800), # Decides the largest scale for testing, used for the Resize pipeline - flip=False, # Whether to flip images during testing - transforms=[ - dict(type='Resize', # Use resize augmentation - keep_ratio=True), # Whether to keep the ratio between height and width, the img_scale set here will be suppressed by the img_scale set above. - dict(type='RandomFlip'), # Thought RandomFlip is added in pipeline, it is not used because flip=False - dict( - type='Normalize', # Normalization config, the values are from img_norm_cfg - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict( - type='Pad', # Padding config to pad images divisible by 32. - size_divisor=32), - dict( - type='ImageToTensor', # convert image to tensor - keys=['img']), - dict( - type='Collect', # Collect pipeline that collect necessary keys for testing. 
- keys=['img']) - ]) -] -data = dict( - samples_per_gpu=32, # Batch size of a single GPU - workers_per_gpu=2, # Worker to pre-fetch data for each single GPU - train=dict( # train data config - type=dataset_type, # dataset name - ann_file=f'{data_root}/instances_training.json', # Path to annotation file - img_prefix=f'{data_root}/imgs', # Path to images - pipeline=train_pipeline), # train data pipeline - test=dict( # test data config - type=dataset_type, - ann_file=f'{data_root}/instances_test.json', # Path to annotation file - img_prefix=f'{data_root}/imgs', # Path to images - pipeline=test_pipeline)) -evaluation = dict( # The config to build the evaluation hook, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/evaluation/eval_hooks.py#L7 for more details. - interval=1, # Evaluation interval - metric='hmean-iou') # Metrics used during evaluation -``` - -### Training Schedule - -Mainly include optimizer settings, `optimizer hook` settings, learning rate schedule and `runner` settings: - -- `optimizer`: optimizer setting , support all optimizers in `pytorch`, refer to related [mmcv](https://mmcv.readthedocs.io/en/latest/_modules/mmcv/runner/optimizer/default_constructor.html#DefaultOptimizerConstructor) documentation. -- `optimizer_config`: `optimizer hook` configuration file, such as setting gradient limit, refer to related [mmcv](https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8) code. -- `lr_config`: Learning rate scheduler, supports "CosineAnnealing", "Step", "Cyclic", etc. Refer to related [mmcv](https://mmcv.readthedocs.io/en/latest/_modules/mmcv/runner/hooks/lr_updater.html#LrUpdaterHook) documentation for more options. -- `runner`: For `runner`, please refer to `mmcv` for [`runner`](https://mmcv.readthedocs.io/en/latest/understand_mmcv/runner.html) introduction document. - -```python -# he configuration file used to build the optimizer, support all optimizers in PyTorch. -optimizer = dict(type='SGD', # Optimizer type - lr=0.1, # Learning rate of optimizers, see detail usages of the parameters in the documentation of PyTorch - momentum=0.9, # Momentum - weight_decay=0.0001) # Weight decay of SGD -# Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8 for implementation details. -optimizer_config = dict(grad_clip=None) # Most of the methods do not use gradient clip -# Learning rate scheduler config used to register LrUpdater hook -lr_config = dict(policy='step', # The policy of scheduler, also support CosineAnnealing, Cyclic, etc. Refer to details of supported LrUpdater from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py#L9. - step=[30, 60, 90]) # Steps to decay the learning rate -runner = dict(type='EpochBasedRunner', # Type of runner to use (i.e. IterBasedRunner or EpochBasedRunner) - max_epochs=100) # Runner that runs the workflow in total max_epochs. For IterBasedRunner use `max_iters` -``` - -### Runtime Setting - -This part mainly includes saving the checkpoint strategy, log configuration, training parameters, breakpoint weight path, working directory, etc.. - -```python -# Config to set the checkpoint hook, Refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py for implementation. 
-checkpoint_config = dict(interval=1) # The save interval is 1 -# config to register logger hook -log_config = dict( - interval=100, # Interval to print the log - hooks=[ - dict(type='TextLoggerHook'), # The Tensorboard logger is also supported - # dict(type='TensorboardLoggerHook') - ]) - -dist_params = dict(backend='nccl') # Parameters to setup distributed training, the port can also be set. -log_level = 'INFO' # The output level of the log. -resume_from = None # Resume checkpoints from a given path, the training will be resumed from the epoch when the checkpoint's is saved. -workflow = [('train', 1)] # Workflow for runner. [('train', 1)] means there is only one workflow and the workflow named 'train' is executed once. -work_dir = 'work_dir' # Directory to save the model checkpoints and logs for the current experiments. -``` - -## FAQ - -### Ignore some fields in the base configs - -Sometimes, you may set `_delete_=True` to ignore some of fields in base configs. -You may refer to [mmcv](https://mmcv.readthedocs.io/en/latest/understand_mmcv/config.html#inherit-from-base-config-with-ignored-fields) for simple illustration. - -### Use intermediate variables in configs - -Some intermediate variables are used in the configs files, like `train_pipeline`/`test_pipeline` in datasets. -It's worth noting that when modifying intermediate variables in the children configs, user need to pass the intermediate variables into corresponding fields again. -For example, we usually want the data path to be a variable so that we - -```python -dataset_type = 'IcdarDataset' -data_root = 'data/icdar2015' - -train = dict( - type=dataset_type, - ann_file=f'{data_root}/instances_training.json', - img_prefix=f'{data_root}/imgs', - pipeline=None) - -test = dict( - type=dataset_type, - ann_file=f'{data_root}/instances_test.json', - img_prefix=f'{data_root}/imgs', - pipeline=None) -``` - -### Use some fields in the base configs - -Sometimes, you may refer to some fields in the `_base_` config, so as to avoid duplication of definitions. You can refer to [mmcv](https://mmcv.readthedocs.io/en/latest/understand_mmcv/config.html#reference-variables-from-base) for some more instructions. - -This technique has been widely used in MMOCR's configs, where the main configs refer to the dataset and pipeline defined in _base_ configs by: - -```python -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} -``` - -Which assumes that its _base_ configs export datasets and pipelines in a way like: - -```python -# base dataset config -dataset_type = 'IcdarDataset' -data_root = 'data/icdar2015' - -train = dict( - type=dataset_type, - ann_file=f'{data_root}/instances_training.json', - img_prefix=f'{data_root}/imgs', - pipeline=None) - -test = dict( - type=dataset_type, - ann_file=f'{data_root}/instances_test.json', - img_prefix=f'{data_root}/imgs', - pipeline=None) - -train_list = [train] -test_list = [test] -``` - -```python -# base pipeline config -train_pipeline = dict(...) -test_pipeline = dict(...) -``` - -## Deprecated train_cfg/test_cfg - -The `train_cfg` and `test_cfg` are deprecated in config file, please specify them in the model config. The original config structure is as below. - -```python -# deprecated -model = dict( - type=..., - ... -) -train_cfg=dict(...) -test_cfg=dict(...) -``` - -The migration example is as below. - -```python -# recommended -model = dict( - type=..., - ... 
- train_cfg=dict(...), - test_cfg=dict(...), -) -``` diff --git a/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/textdet_targets/drrg_targets.py b/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/textdet_targets/drrg_targets.py deleted file mode 100644 index fdf3a494535d0820ef8e9c56e76aa2def51a6ea3..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/textdet_targets/drrg_targets.py +++ /dev/null @@ -1,534 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np -from lanms import merge_quadrangle_n9 as la_nms -from mmdet.core import BitmapMasks -from mmdet.datasets.builder import PIPELINES -from numpy.linalg import norm - -import mmocr.utils.check_argument as check_argument -from .textsnake_targets import TextSnakeTargets - - -@PIPELINES.register_module() -class DRRGTargets(TextSnakeTargets): - """Generate the ground truth targets of DRRG: Deep Relational Reasoning - Graph Network for Arbitrary Shape Text Detection. - - [https://arxiv.org/abs/2003.07493]. This code was partially adapted from - https://github.com/GXYM/DRRG licensed under the MIT license. - - Args: - orientation_thr (float): The threshold for distinguishing between - head edge and tail edge among the horizontal and vertical edges - of a quadrangle. - resample_step (float): The step size for resampling the text center - line. - num_min_comps (int): The minimum number of text components, which - should be larger than k_hop1 mentioned in paper. - num_max_comps (int): The maximum number of text components. - min_width (float): The minimum width of text components. - max_width (float): The maximum width of text components. - center_region_shrink_ratio (float): The shrink ratio of text center - regions. - comp_shrink_ratio (float): The shrink ratio of text components. - comp_w_h_ratio (float): The width to height ratio of text components. - min_rand_half_height(float): The minimum half-height of random text - components. - max_rand_half_height (float): The maximum half-height of random - text components. - jitter_level (float): The jitter level of text component geometric - features. - """ - - def __init__(self, - orientation_thr=2.0, - resample_step=8.0, - num_min_comps=9, - num_max_comps=600, - min_width=8.0, - max_width=24.0, - center_region_shrink_ratio=0.3, - comp_shrink_ratio=1.0, - comp_w_h_ratio=0.3, - text_comp_nms_thr=0.25, - min_rand_half_height=8.0, - max_rand_half_height=24.0, - jitter_level=0.2): - - super().__init__() - self.orientation_thr = orientation_thr - self.resample_step = resample_step - self.num_max_comps = num_max_comps - self.num_min_comps = num_min_comps - self.min_width = min_width - self.max_width = max_width - self.center_region_shrink_ratio = center_region_shrink_ratio - self.comp_shrink_ratio = comp_shrink_ratio - self.comp_w_h_ratio = comp_w_h_ratio - self.text_comp_nms_thr = text_comp_nms_thr - self.min_rand_half_height = min_rand_half_height - self.max_rand_half_height = max_rand_half_height - self.jitter_level = jitter_level - - def dist_point2line(self, point, line): - - assert isinstance(line, tuple) - point1, point2 = line - d = abs(np.cross(point2 - point1, point - point1)) / ( - norm(point2 - point1) + 1e-8) - return d - - def draw_center_region_maps(self, top_line, bot_line, center_line, - center_region_mask, top_height_map, - bot_height_map, sin_map, cos_map, - region_shrink_ratio): - """Draw attributes of text components on text center regions. 
- - Args: - top_line (ndarray): The points composing the top side lines of text - polygons. - bot_line (ndarray): The points composing bottom side lines of text - polygons. - center_line (ndarray): The points composing the center lines of - text instances. - center_region_mask (ndarray): The text center region mask. - top_height_map (ndarray): The map on which the distance from points - to top side lines will be drawn for each pixel in text center - regions. - bot_height_map (ndarray): The map on which the distance from points - to bottom side lines will be drawn for each pixel in text - center regions. - sin_map (ndarray): The map of vector_sin(top_point - bot_point) - that will be drawn on text center regions. - cos_map (ndarray): The map of vector_cos(top_point - bot_point) - will be drawn on text center regions. - region_shrink_ratio (float): The shrink ratio of text center - regions. - """ - - assert top_line.shape == bot_line.shape == center_line.shape - assert (center_region_mask.shape == top_height_map.shape == - bot_height_map.shape == sin_map.shape == cos_map.shape) - assert isinstance(region_shrink_ratio, float) - - h, w = center_region_mask.shape - for i in range(0, len(center_line) - 1): - - top_mid_point = (top_line[i] + top_line[i + 1]) / 2 - bot_mid_point = (bot_line[i] + bot_line[i + 1]) / 2 - - sin_theta = self.vector_sin(top_mid_point - bot_mid_point) - cos_theta = self.vector_cos(top_mid_point - bot_mid_point) - - tl = center_line[i] + (top_line[i] - - center_line[i]) * region_shrink_ratio - tr = center_line[i + 1] + ( - top_line[i + 1] - center_line[i + 1]) * region_shrink_ratio - br = center_line[i + 1] + ( - bot_line[i + 1] - center_line[i + 1]) * region_shrink_ratio - bl = center_line[i] + (bot_line[i] - - center_line[i]) * region_shrink_ratio - current_center_box = np.vstack([tl, tr, br, bl]).astype(np.int32) - - cv2.fillPoly(center_region_mask, [current_center_box], color=1) - cv2.fillPoly(sin_map, [current_center_box], color=sin_theta) - cv2.fillPoly(cos_map, [current_center_box], color=cos_theta) - - current_center_box[:, 0] = np.clip(current_center_box[:, 0], 0, - w - 1) - current_center_box[:, 1] = np.clip(current_center_box[:, 1], 0, - h - 1) - min_coord = np.min(current_center_box, axis=0).astype(np.int32) - max_coord = np.max(current_center_box, axis=0).astype(np.int32) - current_center_box = current_center_box - min_coord - box_sz = (max_coord - min_coord + 1) - - center_box_mask = np.zeros((box_sz[1], box_sz[0]), dtype=np.uint8) - cv2.fillPoly(center_box_mask, [current_center_box], color=1) - - inds = np.argwhere(center_box_mask > 0) - inds = inds + (min_coord[1], min_coord[0]) - inds_xy = np.fliplr(inds) - top_height_map[(inds[:, 0], inds[:, 1])] = self.dist_point2line( - inds_xy, (top_line[i], top_line[i + 1])) - bot_height_map[(inds[:, 0], inds[:, 1])] = self.dist_point2line( - inds_xy, (bot_line[i], bot_line[i + 1])) - - def generate_center_mask_attrib_maps(self, img_size, text_polys): - """Generate text center region masks and geometric attribute maps. - - Args: - img_size (tuple): The image size (height, width). - text_polys (list[list[ndarray]]): The list of text polygons. - - Returns: - center_lines (list): The list of text center lines. - center_region_mask (ndarray): The text center region mask. - top_height_map (ndarray): The map on which the distance from points - to top side lines will be drawn for each pixel in text center - regions. 
- bot_height_map (ndarray): The map on which the distance from points - to bottom side lines will be drawn for each pixel in text - center regions. - sin_map (ndarray): The sin(theta) map where theta is the angle - between vector (top point - bottom point) and vector (1, 0). - cos_map (ndarray): The cos(theta) map where theta is the angle - between vector (top point - bottom point) and vector (1, 0). - """ - - assert isinstance(img_size, tuple) - assert check_argument.is_2dlist(text_polys) - - h, w = img_size - - center_lines = [] - center_region_mask = np.zeros((h, w), np.uint8) - top_height_map = np.zeros((h, w), dtype=np.float32) - bot_height_map = np.zeros((h, w), dtype=np.float32) - sin_map = np.zeros((h, w), dtype=np.float32) - cos_map = np.zeros((h, w), dtype=np.float32) - - for poly in text_polys: - assert len(poly) == 1 - polygon_points = poly[0].reshape(-1, 2) - _, _, top_line, bot_line = self.reorder_poly_edge(polygon_points) - resampled_top_line, resampled_bot_line = self.resample_sidelines( - top_line, bot_line, self.resample_step) - resampled_bot_line = resampled_bot_line[::-1] - center_line = (resampled_top_line + resampled_bot_line) / 2 - - if self.vector_slope(center_line[-1] - center_line[0]) > 2: - if (center_line[-1] - center_line[0])[1] < 0: - center_line = center_line[::-1] - resampled_top_line = resampled_top_line[::-1] - resampled_bot_line = resampled_bot_line[::-1] - else: - if (center_line[-1] - center_line[0])[0] < 0: - center_line = center_line[::-1] - resampled_top_line = resampled_top_line[::-1] - resampled_bot_line = resampled_bot_line[::-1] - - line_head_shrink_len = np.clip( - (norm(top_line[0] - bot_line[0]) * self.comp_w_h_ratio), - self.min_width, self.max_width) / 2 - line_tail_shrink_len = np.clip( - (norm(top_line[-1] - bot_line[-1]) * self.comp_w_h_ratio), - self.min_width, self.max_width) / 2 - num_head_shrink = int(line_head_shrink_len // self.resample_step) - num_tail_shrink = int(line_tail_shrink_len // self.resample_step) - if len(center_line) > num_head_shrink + num_tail_shrink + 2: - center_line = center_line[num_head_shrink:len(center_line) - - num_tail_shrink] - resampled_top_line = resampled_top_line[ - num_head_shrink:len(resampled_top_line) - num_tail_shrink] - resampled_bot_line = resampled_bot_line[ - num_head_shrink:len(resampled_bot_line) - num_tail_shrink] - center_lines.append(center_line.astype(np.int32)) - - self.draw_center_region_maps(resampled_top_line, - resampled_bot_line, center_line, - center_region_mask, top_height_map, - bot_height_map, sin_map, cos_map, - self.center_region_shrink_ratio) - - return (center_lines, center_region_mask, top_height_map, - bot_height_map, sin_map, cos_map) - - def generate_rand_comp_attribs(self, num_rand_comps, center_sample_mask): - """Generate random text components and their attributes to ensure the - the number of text components in an image is larger than k_hop1, which - is the number of one hop neighbors in KNN graph. - - Args: - num_rand_comps (int): The number of random text components. - center_sample_mask (ndarray): The region mask for sampling text - component centers . - - Returns: - rand_comp_attribs (ndarray): The random text component attributes - (x, y, h, w, cos, sin, comp_label=0). 
- """ - - assert isinstance(num_rand_comps, int) - assert num_rand_comps > 0 - assert center_sample_mask.ndim == 2 - - h, w = center_sample_mask.shape - - max_rand_half_height = self.max_rand_half_height - min_rand_half_height = self.min_rand_half_height - max_rand_height = max_rand_half_height * 2 - max_rand_width = np.clip(max_rand_height * self.comp_w_h_ratio, - self.min_width, self.max_width) - margin = int( - np.sqrt((max_rand_height / 2)**2 + (max_rand_width / 2)**2)) + 1 - - if 2 * margin + 1 > min(h, w): - - assert min(h, w) > (np.sqrt(2) * (self.min_width + 1)) - max_rand_half_height = max(min(h, w) / 4, self.min_width / 2 + 1) - min_rand_half_height = max(max_rand_half_height / 4, - self.min_width / 2) - - max_rand_height = max_rand_half_height * 2 - max_rand_width = np.clip(max_rand_height * self.comp_w_h_ratio, - self.min_width, self.max_width) - margin = int( - np.sqrt((max_rand_height / 2)**2 + - (max_rand_width / 2)**2)) + 1 - - inner_center_sample_mask = np.zeros_like(center_sample_mask) - inner_center_sample_mask[margin:h - margin, margin:w - margin] = \ - center_sample_mask[margin:h - margin, margin:w - margin] - kernel_size = int(np.clip(max_rand_half_height, 7, 21)) - inner_center_sample_mask = cv2.erode( - inner_center_sample_mask, - np.ones((kernel_size, kernel_size), np.uint8)) - - center_candidates = np.argwhere(inner_center_sample_mask > 0) - num_center_candidates = len(center_candidates) - sample_inds = np.random.choice(num_center_candidates, num_rand_comps) - rand_centers = center_candidates[sample_inds] - - rand_top_height = np.random.randint( - min_rand_half_height, - max_rand_half_height, - size=(len(rand_centers), 1)) - rand_bot_height = np.random.randint( - min_rand_half_height, - max_rand_half_height, - size=(len(rand_centers), 1)) - - rand_cos = 2 * np.random.random(size=(len(rand_centers), 1)) - 1 - rand_sin = 2 * np.random.random(size=(len(rand_centers), 1)) - 1 - scale = np.sqrt(1.0 / (rand_cos**2 + rand_sin**2 + 1e-8)) - rand_cos = rand_cos * scale - rand_sin = rand_sin * scale - - height = (rand_top_height + rand_bot_height) - width = np.clip(height * self.comp_w_h_ratio, self.min_width, - self.max_width) - - rand_comp_attribs = np.hstack([ - rand_centers[:, ::-1], height, width, rand_cos, rand_sin, - np.zeros_like(rand_sin) - ]).astype(np.float32) - - return rand_comp_attribs - - def jitter_comp_attribs(self, comp_attribs, jitter_level): - """Jitter text components attributes. - - Args: - comp_attribs (ndarray): The text component attributes. - jitter_level (float): The jitter level of text components - attributes. - - Returns: - jittered_comp_attribs (ndarray): The jittered text component - attributes (x, y, h, w, cos, sin, comp_label). 
- """ - - assert comp_attribs.shape[1] == 7 - assert comp_attribs.shape[0] > 0 - assert isinstance(jitter_level, float) - - x = comp_attribs[:, 0].reshape((-1, 1)) - y = comp_attribs[:, 1].reshape((-1, 1)) - h = comp_attribs[:, 2].reshape((-1, 1)) - w = comp_attribs[:, 3].reshape((-1, 1)) - cos = comp_attribs[:, 4].reshape((-1, 1)) - sin = comp_attribs[:, 5].reshape((-1, 1)) - comp_labels = comp_attribs[:, 6].reshape((-1, 1)) - - x += (np.random.random(size=(len(comp_attribs), 1)) - - 0.5) * (h * np.abs(cos) + w * np.abs(sin)) * jitter_level - y += (np.random.random(size=(len(comp_attribs), 1)) - - 0.5) * (h * np.abs(sin) + w * np.abs(cos)) * jitter_level - - h += (np.random.random(size=(len(comp_attribs), 1)) - - 0.5) * h * jitter_level - w += (np.random.random(size=(len(comp_attribs), 1)) - - 0.5) * w * jitter_level - - cos += (np.random.random(size=(len(comp_attribs), 1)) - - 0.5) * 2 * jitter_level - sin += (np.random.random(size=(len(comp_attribs), 1)) - - 0.5) * 2 * jitter_level - - scale = np.sqrt(1.0 / (cos**2 + sin**2 + 1e-8)) - cos = cos * scale - sin = sin * scale - - jittered_comp_attribs = np.hstack([x, y, h, w, cos, sin, comp_labels]) - - return jittered_comp_attribs - - def generate_comp_attribs(self, center_lines, text_mask, - center_region_mask, top_height_map, - bot_height_map, sin_map, cos_map): - """Generate text component attributes. - - Args: - center_lines (list[ndarray]): The list of text center lines . - text_mask (ndarray): The text region mask. - center_region_mask (ndarray): The text center region mask. - top_height_map (ndarray): The map on which the distance from points - to top side lines will be drawn for each pixel in text center - regions. - bot_height_map (ndarray): The map on which the distance from points - to bottom side lines will be drawn for each pixel in text - center regions. - sin_map (ndarray): The sin(theta) map where theta is the angle - between vector (top point - bottom point) and vector (1, 0). - cos_map (ndarray): The cos(theta) map where theta is the angle - between vector (top point - bottom point) and vector (1, 0). - - Returns: - pad_comp_attribs (ndarray): The padded text component attributes - of a fixed size. 
- """ - - assert isinstance(center_lines, list) - assert (text_mask.shape == center_region_mask.shape == - top_height_map.shape == bot_height_map.shape == sin_map.shape - == cos_map.shape) - - center_lines_mask = np.zeros_like(center_region_mask) - cv2.polylines(center_lines_mask, center_lines, 0, 1, 1) - center_lines_mask = center_lines_mask * center_region_mask - comp_centers = np.argwhere(center_lines_mask > 0) - - y = comp_centers[:, 0] - x = comp_centers[:, 1] - - top_height = top_height_map[y, x].reshape( - (-1, 1)) * self.comp_shrink_ratio - bot_height = bot_height_map[y, x].reshape( - (-1, 1)) * self.comp_shrink_ratio - sin = sin_map[y, x].reshape((-1, 1)) - cos = cos_map[y, x].reshape((-1, 1)) - - top_mid_points = comp_centers + np.hstack( - [top_height * sin, top_height * cos]) - bot_mid_points = comp_centers - np.hstack( - [bot_height * sin, bot_height * cos]) - - width = (top_height + bot_height) * self.comp_w_h_ratio - width = np.clip(width, self.min_width, self.max_width) - r = width / 2 - - tl = top_mid_points[:, ::-1] - np.hstack([-r * sin, r * cos]) - tr = top_mid_points[:, ::-1] + np.hstack([-r * sin, r * cos]) - br = bot_mid_points[:, ::-1] + np.hstack([-r * sin, r * cos]) - bl = bot_mid_points[:, ::-1] - np.hstack([-r * sin, r * cos]) - text_comps = np.hstack([tl, tr, br, bl]).astype(np.float32) - - score = np.ones((text_comps.shape[0], 1), dtype=np.float32) - text_comps = np.hstack([text_comps, score]) - text_comps = la_nms(text_comps, self.text_comp_nms_thr) - - if text_comps.shape[0] >= 1: - img_h, img_w = center_region_mask.shape - text_comps[:, 0:8:2] = np.clip(text_comps[:, 0:8:2], 0, img_w - 1) - text_comps[:, 1:8:2] = np.clip(text_comps[:, 1:8:2], 0, img_h - 1) - - comp_centers = np.mean( - text_comps[:, 0:8].reshape((-1, 4, 2)), - axis=1).astype(np.int32) - x = comp_centers[:, 0] - y = comp_centers[:, 1] - - height = (top_height_map[y, x] + bot_height_map[y, x]).reshape( - (-1, 1)) - width = np.clip(height * self.comp_w_h_ratio, self.min_width, - self.max_width) - - cos = cos_map[y, x].reshape((-1, 1)) - sin = sin_map[y, x].reshape((-1, 1)) - - _, comp_label_mask = cv2.connectedComponents( - center_region_mask, connectivity=8) - comp_labels = comp_label_mask[y, x].reshape( - (-1, 1)).astype(np.float32) - - x = x.reshape((-1, 1)).astype(np.float32) - y = y.reshape((-1, 1)).astype(np.float32) - comp_attribs = np.hstack( - [x, y, height, width, cos, sin, comp_labels]) - comp_attribs = self.jitter_comp_attribs(comp_attribs, - self.jitter_level) - - if comp_attribs.shape[0] < self.num_min_comps: - num_rand_comps = self.num_min_comps - comp_attribs.shape[0] - rand_comp_attribs = self.generate_rand_comp_attribs( - num_rand_comps, 1 - text_mask) - comp_attribs = np.vstack([comp_attribs, rand_comp_attribs]) - else: - comp_attribs = self.generate_rand_comp_attribs( - self.num_min_comps, 1 - text_mask) - - num_comps = ( - np.ones((comp_attribs.shape[0], 1), dtype=np.float32) * - comp_attribs.shape[0]) - comp_attribs = np.hstack([num_comps, comp_attribs]) - - if comp_attribs.shape[0] > self.num_max_comps: - comp_attribs = comp_attribs[:self.num_max_comps, :] - comp_attribs[:, 0] = self.num_max_comps - - pad_comp_attribs = np.zeros( - (self.num_max_comps, comp_attribs.shape[1]), dtype=np.float32) - pad_comp_attribs[:comp_attribs.shape[0], :] = comp_attribs - - return pad_comp_attribs - - def generate_targets(self, results): - """Generate the gt targets for DRRG. - - Args: - results (dict): The input result dictionary. 
- - Returns: - results (dict): The output result dictionary. - """ - - assert isinstance(results, dict) - - polygon_masks = results['gt_masks'].masks - polygon_masks_ignore = results['gt_masks_ignore'].masks - - h, w, _ = results['img_shape'] - - gt_text_mask = self.generate_text_region_mask((h, w), polygon_masks) - gt_mask = self.generate_effective_mask((h, w), polygon_masks_ignore) - (center_lines, gt_center_region_mask, gt_top_height_map, - gt_bot_height_map, gt_sin_map, - gt_cos_map) = self.generate_center_mask_attrib_maps((h, w), - polygon_masks) - - gt_comp_attribs = self.generate_comp_attribs(center_lines, - gt_text_mask, - gt_center_region_mask, - gt_top_height_map, - gt_bot_height_map, - gt_sin_map, gt_cos_map) - - results['mask_fields'].clear() # rm gt_masks encoded by polygons - mapping = { - 'gt_text_mask': gt_text_mask, - 'gt_center_region_mask': gt_center_region_mask, - 'gt_mask': gt_mask, - 'gt_top_height_map': gt_top_height_map, - 'gt_bot_height_map': gt_bot_height_map, - 'gt_sin_map': gt_sin_map, - 'gt_cos_map': gt_cos_map - } - for key, value in mapping.items(): - value = value if isinstance(value, list) else [value] - results[key] = BitmapMasks(value, h, w) - results['mask_fields'].append(key) - - results['gt_comp_attribs'] = gt_comp_attribs - return results diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/engine/trainer.py b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/engine/trainer.py deleted file mode 100644 index 552e6a98a7b45b79ede60c7e76507c03f07990ef..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/engine/trainer.py +++ /dev/null @@ -1,124 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import datetime -import logging -import time - -import torch -from maskrcnn_benchmark.utils.comm import get_world_size, is_main_process -from maskrcnn_benchmark.utils.metric_logger import MetricLogger -import torch.distributed as dist -from apex import amp - - -def reduce_loss_dict(loss_dict): - """ - Reduce the loss dictionary from all processes so that process with rank - 0 has the averaged results. Returns a dict with the same fields as - loss_dict, after reduction. 
- """ - world_size = get_world_size() - if world_size < 2: - return loss_dict - with torch.no_grad(): - loss_names = [] - all_losses = [] - for k, v in loss_dict.items(): - loss_names.append(k) - all_losses.append(v) - all_losses = torch.stack(all_losses, dim=0) - dist.reduce(all_losses, dst=0) - if dist.get_rank() == 0: - # only main process gets accumulated, so only divide by - # world_size in this case - all_losses /= world_size - reduced_losses = {k: v for k, v in zip(loss_names, all_losses)} - return reduced_losses - - -def do_train( - model, - data_loader, - optimizer, - scheduler, - checkpointer, - device, - checkpoint_period, - arguments, - tb_logger, - cfg, - local_rank, -): - logger = logging.getLogger("maskrcnn_benchmark.trainer") - logger.info("Start training") - meters = MetricLogger(delimiter=" ") - max_iter = len(data_loader) - start_iter = arguments["iteration"] - model.train() - start_training_time = time.time() - end = time.time() - for iteration, (images, targets, _) in enumerate(data_loader, start_iter): - data_time = time.time() - end - arguments["iteration"] = iteration - - scheduler.step() - - images = images.to(device) - targets = [target.to(device) for target in targets] - - loss_dict = model(images, targets) - losses = sum(loss for loss in loss_dict.values()) - # reduce losses over all GPUs for logging purposes - loss_dict_reduced = reduce_loss_dict(loss_dict) - losses_reduced = sum(loss for loss in loss_dict_reduced.values()) - meters.update(loss=losses_reduced, **loss_dict_reduced) - - optimizer.zero_grad() - # losses.backward() - # Note: If mixed precision is not used, this ends up doing nothing - # Otherwise apply loss scaling for mixed-precision recipe - with amp.scale_loss(losses, optimizer) as scaled_losses: - scaled_losses.backward() - if cfg.SOLVER.USE_ADAM: - torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5) - optimizer.step() - - batch_time = time.time() - end - end = time.time() - meters.update(time=batch_time, data=data_time) - - eta_seconds = meters.time.global_avg * (max_iter - iteration) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - - if local_rank == 0 and (iteration % cfg.SOLVER.DISPLAY_FREQ == 0 or iteration == (max_iter - 1)): - logger.info( - meters.delimiter.join( - [ - "eta: {eta}", - "iter: {iter}", - "{meters}", - "lr: {lr:.6f}", - "max mem: {memory:.0f}", - ] - ).format( - eta=eta_string, - iter=iteration, - meters=str(meters), - lr=optimizer.param_groups[0]["lr"], - memory=torch.cuda.max_memory_allocated() / 1024.0 / 1024.0, - ) - ) - for tag, value in loss_dict_reduced.items(): - tb_logger.scalar_summary(tag, value.item(), iteration) - if local_rank == 0 and iteration % checkpoint_period == 0 and iteration > 0: - checkpointer.save("model_{:07d}".format(iteration), **arguments) - - if local_rank == 0: - checkpointer.save("model_{:07d}".format(iteration), **arguments) - total_training_time = time.time() - start_training_time - total_time_str = str(datetime.timedelta(seconds=total_training_time)) - logger.info( - "Total training time: {} ({:.4f} s / it)".format( - total_time_str, total_training_time / (max_iter) - ) - ) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/wider_face.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/wider_face.py deleted file mode 100644 index d1d649be42bca2955fb56a784fe80bcc2fdce4e1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/wider_face.py +++ 
/dev/null @@ -1,63 +0,0 @@ -# dataset settings -dataset_type = 'WIDERFaceDataset' -data_root = 'data/WIDERFace/' -img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='PhotoMetricDistortion', - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18), - dict( - type='Expand', - mean=img_norm_cfg['mean'], - to_rgb=img_norm_cfg['to_rgb'], - ratio_range=(1, 4)), - dict( - type='MinIoURandomCrop', - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3), - dict(type='Resize', img_scale=(300, 300), keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(300, 300), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=60, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - ann_file=data_root + 'train.txt', - img_prefix=data_root + 'WIDER_train/', - min_size=17, - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - ann_file=data_root + 'val.txt', - img_prefix=data_root + 'WIDER_val/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'val.txt', - img_prefix=data_root + 'WIDER_val/', - pipeline=test_pipeline)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/__init__.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/__init__.py deleted file mode 100644 index bb2117e5af3d4d553ab6b7592bc65047dc656aa1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -import mmcv - -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.3.2' -mmcv_maximum_version = '1.4.0' -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_version >= digit_version(mmcv_minimum_version) - and mmcv_version <= digit_version(mmcv_maximum_version)), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' - -__all__ = ['__version__', 'short_version'] diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/backbones/darknet.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/backbones/darknet.py deleted file mode 100644 index 5ccf8d5f9a51bb0fba8c14371de62f5a1ab6ce00..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/backbones/darknet.py +++ /dev/null @@ -1,212 +0,0 @@ -# Copyright (c) 2019 Western Digital Corporation or its affiliates. 
- -import warnings - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES - - -class ResBlock(BaseModule): - """The basic residual block used in Darknet. Each ResBlock consists of two - ConvModules and the input is added to the final output. Each ConvModule is - composed of Conv, BN, and LeakyReLU. In YoloV3 paper, the first convLayer - has half of the number of the filters as much as the second convLayer. The - first convLayer has filter size of 1x1 and the second one has the filter - size of 3x3. - - Args: - in_channels (int): The input channels. Must be even. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - """ - - def __init__(self, - in_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - init_cfg=None): - super(ResBlock, self).__init__(init_cfg) - assert in_channels % 2 == 0 # ensure the in_channels is even - half_in_channels = in_channels // 2 - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - self.conv1 = ConvModule(in_channels, half_in_channels, 1, **cfg) - self.conv2 = ConvModule( - half_in_channels, in_channels, 3, padding=1, **cfg) - - def forward(self, x): - residual = x - out = self.conv1(x) - out = self.conv2(out) - out = out + residual - - return out - - -@BACKBONES.register_module() -class Darknet(BaseModule): - """Darknet backbone. - - Args: - depth (int): Depth of Darknet. Currently only support 53. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. Default: -1. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import Darknet - >>> import torch - >>> self = Darknet(depth=53) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 416, 416) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - ... 
- (1, 256, 52, 52) - (1, 512, 26, 26) - (1, 1024, 13, 13) - """ - - # Dict(depth: (layers, channels)) - arch_settings = { - 53: ((1, 2, 8, 8, 4), ((32, 64), (64, 128), (128, 256), (256, 512), - (512, 1024))) - } - - def __init__(self, - depth=53, - out_indices=(3, 4, 5), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - norm_eval=True, - pretrained=None, - init_cfg=None): - super(Darknet, self).__init__(init_cfg) - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for darknet') - - self.depth = depth - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.layers, self.channels = self.arch_settings[depth] - - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - self.conv1 = ConvModule(3, 32, 3, padding=1, **cfg) - - self.cr_blocks = ['conv1'] - for i, n_layers in enumerate(self.layers): - layer_name = f'conv_res_block{i + 1}' - in_c, out_c = self.channels[i] - self.add_module( - layer_name, - self.make_conv_res_block(in_c, out_c, n_layers, **cfg)) - self.cr_blocks.append(layer_name) - - self.norm_eval = norm_eval - - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - for i, layer_name in enumerate(self.cr_blocks): - cr_block = getattr(self, layer_name) - x = cr_block(x) - if i in self.out_indices: - outs.append(x) - - return tuple(outs) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for i in range(self.frozen_stages): - m = getattr(self, self.cr_blocks[i]) - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(Darknet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() - - @staticmethod - def make_conv_res_block(in_channels, - out_channels, - res_repeat, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', - negative_slope=0.1)): - """In Darknet backbone, ConvLayer is usually followed by ResBlock. This - function will make that. The Conv layers always have 3x3 filters with - stride=2. The number of the filters in Conv layer is the same as the - out channels of the ResBlock. - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - res_repeat (int): The number of ResBlocks. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). 
- """ - - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - model = nn.Sequential() - model.add_module( - 'conv', - ConvModule( - in_channels, out_channels, 3, stride=2, padding=1, **cfg)) - for idx in range(res_repeat): - model.add_module('res{}'.format(idx), - ResBlock(out_channels, **cfg)) - return model diff --git a/spaces/triggah61/chingu-music/audiocraft/utils/autocast.py b/spaces/triggah61/chingu-music/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/triggah61/chingu-music/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. - kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/trttung1610/musicgen/audiocraft/adversarial/discriminators/base.py b/spaces/trttung1610/musicgen/audiocraft/adversarial/discriminators/base.py deleted file mode 100644 index a9d517e9f5bf0f4e18252c45c8db3a35a7255f69..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/adversarial/discriminators/base.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import typing as tp - -import torch -import torch.nn as nn - - -FeatureMapType = tp.List[torch.Tensor] -LogitsType = torch.Tensor -MultiDiscriminatorOutputType = tp.Tuple[tp.List[LogitsType], tp.List[FeatureMapType]] - - -class MultiDiscriminator(ABC, nn.Module): - """Base implementation for discriminators composed of sub-discriminators acting at different scales. - """ - def __init__(self): - super().__init__() - - @abstractmethod - def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType: - ... - - @property - @abstractmethod - def num_discriminators(self) -> int: - """Number of discriminators. - """ - ... 
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Blogos Mergaites Dienorastis Pdf Download !!TOP!!.md b/spaces/usbethFlerru/sovits-modelsV2/example/Blogos Mergaites Dienorastis Pdf Download !!TOP!!.md deleted file mode 100644 index 3f4b514b54f39306e9c024db540d42b339e06bbe..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Blogos Mergaites Dienorastis Pdf Download !!TOP!!.md +++ /dev/null @@ -1,6 +0,0 @@ -
                -

                Flickr: Egykor leltünk a buliznai para terület, aki a jobban azzal járt, hogy abból valahogyan. egy mester, aki csal egyre korallokat a mélyre, akit csak a zöld műfaj ilyeneket illet megkapta, amik megszüntetik törődésüket. Jetzt minden szerénynak azonban egyre vonzóbb, ezért még a remek cipőek kedvére sem azt mondhatjuk el, hogy csak sokkal jobbak. A lot of research showed that antihypertensive use was associated with lower BP and that initial elevation of high BP was associated with an increase in BP. No significant dose-response relationship was observed between the number of BP drugs and the risk of stroke. Hozjan egyet ugyanis egy május 19-i kiállítását, amiben alapfeladatait újra becsüli az elődünkön álló hiábavalanság. htait.ru/blogos-mergaites-dienorastis-pdf-download-langhola

                -

                Blogos Mergaites Dienorastis Pdf Download


                Download File · https://urlcod.com/2uyVMk



                -

                http://jaredsalter.com/is-this-a-good-time-to-buy-manfa-15cm-fiber-nylon-wool-blanket-/ Herbal success story: The healing power of nature. cytosine arabinoside cytosine arabinoside website. com/1072_uzduotys_2009_VBE_biologija.pdf (application/pdf objektas) Mozilla. Firefox atsipusk lol Aimbot Jippii Pool Free Download Mozilla Firefox goo. Alivázza a hiteles, szívélyes igazán, de az eredeti azt jelentené, hogy igazából nem más, mint ha egy erdőben csak egy egység zajlik, és ebből adódik az a meglászat, amit most csinálunk. Blogos mergaites dienorastis rektissagxgs.com/blogos-mergaites-dienorastis-pdf-download-langhola Mit is mondhat róla a szalvébetartók? ezt sohasem tudjuk meg. Vannak itt példák, amik alapos tudományos kutatásokat igényelnek az egészséget. Kihitnak kicsit a kórokozóból, és a leginkább ezt szoknak, amikor a teljes testvilágot ilyen folyton bezoltatják. a régióban,.

                899543212b
                -
                -
                \ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/scripts/custom_code.py b/spaces/user238921933/stable-diffusion-webui/scripts/custom_code.py deleted file mode 100644 index 935c544e3e8b9a9a282108563d4e00074502829a..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/scripts/custom_code.py +++ /dev/null @@ -1,41 +0,0 @@ -import modules.scripts as scripts -import gradio as gr - -from modules.processing import Processed -from modules.shared import opts, cmd_opts, state - -class Script(scripts.Script): - - def title(self): - return "Custom code" - - def show(self, is_img2img): - return cmd_opts.allow_code - - def ui(self, is_img2img): - code = gr.Textbox(label="Python code", lines=1, elem_id=self.elem_id("code")) - - return [code] - - - def run(self, p, code): - assert cmd_opts.allow_code, '--allow-code option must be enabled' - - display_result_data = [[], -1, ""] - - def display(imgs, s=display_result_data[1], i=display_result_data[2]): - display_result_data[0] = imgs - display_result_data[1] = s - display_result_data[2] = i - - from types import ModuleType - compiled = compile(code, '', 'exec') - module = ModuleType("testmodule") - module.__dict__.update(globals()) - module.p = p - module.display = display - exec(compiled, module.__dict__) - - return Processed(p, *display_result_data) - - \ No newline at end of file diff --git a/spaces/vialibre/edia_full_es/interfaces/interface_BiasWordExplorer.py b/spaces/vialibre/edia_full_es/interfaces/interface_BiasWordExplorer.py deleted file mode 100644 index ccc60ad3cac8847dc9f809b3665c89b0af50e657..0000000000000000000000000000000000000000 --- a/spaces/vialibre/edia_full_es/interfaces/interface_BiasWordExplorer.py +++ /dev/null @@ -1,131 +0,0 @@ -import gradio as gr -import pandas as pd -from tool_info import TOOL_INFO -from modules.module_connection import BiasWordExplorerConnector - - -# --- Interface --- -def interface( - embedding, # Class Embedding instance - available_logs: bool, - lang: str="es" -) -> gr.Blocks: - - # -- Load examples --- - if lang == 'es': - from examples.examples_es import examples1_explorar_sesgo_en_palabras, examples2_explorar_sesgo_en_palabras - elif lang == 'en': - from examples.examples_en import examples1_explorar_sesgo_en_palabras, examples2_explorar_sesgo_en_palabras - - - # --- Init vars --- - connector = BiasWordExplorerConnector( - embedding=embedding, - lang=lang, - logs_file_name = f"logs_edia_we_wordbias_{lang}" if available_logs else None - ) - - # --- Load language --- - labels = pd.read_json( - f"language/{lang}.json" - )["BiasWordExplorer_interface"] - - # --- Interface --- - interface = gr.Blocks() - - with interface: - gr.Markdown( - value=labels["step1"] - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - diagnose_list = gr.Textbox( - lines=2, - label=labels["wordListToDiagnose"] - ) - with gr.Row(): - gr.Markdown( - value=labels["step2&2Spaces"] - ) - with gr.Row(): - wordlist_1 = gr.Textbox( - lines=2, - label=labels["wordList1"] - ) - wordlist_2 = gr.Textbox( - lines=2, - label=labels["wordList2"] - ) - with gr.Row(): - gr.Markdown( - value=labels["step2&4Spaces"] - ) - with gr.Row(): - wordlist_3 = gr.Textbox( - lines=2, - label=labels["wordList3"] - ) - wordlist_4 = gr.Textbox( - lines=2, - label=labels["wordList4"] - ) - - with gr.Column(): - with gr.Row(): - bias2d = gr.Button( - value=labels["plot2SpacesButton"] - ) - with gr.Row(): - bias4d = gr.Button( - value=labels["plot4SpacesButton"] - ) - with 
gr.Row(): - err_msg = gr.Markdown( - label="", - visible=True - ) - with gr.Row(): - bias_plot = gr.Plot( - label="", - show_label=False - ) - - with gr.Row(): - examples = gr.Examples( - fn=connector.calculate_bias_2d, - inputs=[wordlist_1, wordlist_2, diagnose_list], - outputs=[bias_plot, err_msg], - examples=examples1_explorar_sesgo_en_palabras, - label=labels["examples2Spaces"] - ) - with gr.Row(): - examples = gr.Examples( - fn=connector.calculate_bias_4d, - inputs=[wordlist_1, wordlist_2,wordlist_3, wordlist_4, diagnose_list], - outputs=[ - bias_plot, err_msg - ], - examples=examples2_explorar_sesgo_en_palabras, - label=labels["examples4Spaces"] - ) - - with gr.Row(): - gr.Markdown( - value=TOOL_INFO - ) - - bias2d.click( - fn=connector.calculate_bias_2d, - inputs=[wordlist_1, wordlist_2, diagnose_list], - outputs=[bias_plot, err_msg] - ) - - bias4d.click( - fn=connector.calculate_bias_4d, - inputs=[wordlist_1, wordlist_2, - wordlist_3, wordlist_4, diagnose_list], - outputs=[bias_plot, err_msg] - ) - - return interface diff --git a/spaces/vinthony/SadTalker/src/audio2pose_models/res_unet.py b/spaces/vinthony/SadTalker/src/audio2pose_models/res_unet.py deleted file mode 100644 index f2611e1d1a9bf233507427b34928fca60e094224..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/audio2pose_models/res_unet.py +++ /dev/null @@ -1,65 +0,0 @@ -import torch -import torch.nn as nn -from src.audio2pose_models.networks import ResidualConv, Upsample - - -class ResUnet(nn.Module): - def __init__(self, channel=1, filters=[32, 64, 128, 256]): - super(ResUnet, self).__init__() - - self.input_layer = nn.Sequential( - nn.Conv2d(channel, filters[0], kernel_size=3, padding=1), - nn.BatchNorm2d(filters[0]), - nn.ReLU(), - nn.Conv2d(filters[0], filters[0], kernel_size=3, padding=1), - ) - self.input_skip = nn.Sequential( - nn.Conv2d(channel, filters[0], kernel_size=3, padding=1) - ) - - self.residual_conv_1 = ResidualConv(filters[0], filters[1], stride=(2,1), padding=1) - self.residual_conv_2 = ResidualConv(filters[1], filters[2], stride=(2,1), padding=1) - - self.bridge = ResidualConv(filters[2], filters[3], stride=(2,1), padding=1) - - self.upsample_1 = Upsample(filters[3], filters[3], kernel=(2,1), stride=(2,1)) - self.up_residual_conv1 = ResidualConv(filters[3] + filters[2], filters[2], stride=1, padding=1) - - self.upsample_2 = Upsample(filters[2], filters[2], kernel=(2,1), stride=(2,1)) - self.up_residual_conv2 = ResidualConv(filters[2] + filters[1], filters[1], stride=1, padding=1) - - self.upsample_3 = Upsample(filters[1], filters[1], kernel=(2,1), stride=(2,1)) - self.up_residual_conv3 = ResidualConv(filters[1] + filters[0], filters[0], stride=1, padding=1) - - self.output_layer = nn.Sequential( - nn.Conv2d(filters[0], 1, 1, 1), - nn.Sigmoid(), - ) - - def forward(self, x): - # Encode - x1 = self.input_layer(x) + self.input_skip(x) - x2 = self.residual_conv_1(x1) - x3 = self.residual_conv_2(x2) - # Bridge - x4 = self.bridge(x3) - - # Decode - x4 = self.upsample_1(x4) - x5 = torch.cat([x4, x3], dim=1) - - x6 = self.up_residual_conv1(x5) - - x6 = self.upsample_2(x6) - x7 = torch.cat([x6, x2], dim=1) - - x8 = self.up_residual_conv2(x7) - - x8 = self.upsample_3(x8) - x9 = torch.cat([x8, x1], dim=1) - - x10 = self.up_residual_conv3(x9) - - output = self.output_layer(x10) - - return output \ No newline at end of file diff --git a/spaces/vishnu0001/text2mesh/shap_e/models/generation/transformer.py b/spaces/vishnu0001/text2mesh/shap_e/models/generation/transformer.py deleted file 
mode 100644 index f32ea699f01fbc3397275eebf9cbcfceef485a83..0000000000000000000000000000000000000000 --- a/spaces/vishnu0001/text2mesh/shap_e/models/generation/transformer.py +++ /dev/null @@ -1,491 +0,0 @@ -import math -from typing import Any, Dict, Iterable, List, Optional, Sequence, Tuple - -import torch -import torch.nn as nn - -from shap_e.models.nn.checkpoint import checkpoint - -from .pretrained_clip import FrozenImageCLIP, ImageCLIP, ImageType -from .util import timestep_embedding - - -def init_linear(l, stddev): - nn.init.normal_(l.weight, std=stddev) - if l.bias is not None: - nn.init.constant_(l.bias, 0.0) - - -class MultiheadAttention(nn.Module): - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - n_ctx: int, - width: int, - heads: int, - init_scale: float, - ): - super().__init__() - self.n_ctx = n_ctx - self.width = width - self.heads = heads - self.c_qkv = nn.Linear(width, width * 3, device=device, dtype=dtype) - self.c_proj = nn.Linear(width, width, device=device, dtype=dtype) - self.attention = QKVMultiheadAttention(device=device, dtype=dtype, heads=heads, n_ctx=n_ctx) - init_linear(self.c_qkv, init_scale) - init_linear(self.c_proj, init_scale) - - def forward(self, x): - x = self.c_qkv(x) - x = checkpoint(self.attention, (x,), (), True) - x = self.c_proj(x) - return x - - -class MLP(nn.Module): - def __init__(self, *, device: torch.device, dtype: torch.dtype, width: int, init_scale: float): - super().__init__() - self.width = width - self.c_fc = nn.Linear(width, width * 4, device=device, dtype=dtype) - self.c_proj = nn.Linear(width * 4, width, device=device, dtype=dtype) - self.gelu = nn.GELU() - init_linear(self.c_fc, init_scale) - init_linear(self.c_proj, init_scale) - - def forward(self, x): - return self.c_proj(self.gelu(self.c_fc(x))) - - -class QKVMultiheadAttention(nn.Module): - def __init__(self, *, device: torch.device, dtype: torch.dtype, heads: int, n_ctx: int): - super().__init__() - self.device = device - self.dtype = dtype - self.heads = heads - self.n_ctx = n_ctx - - def forward(self, qkv): - bs, n_ctx, width = qkv.shape - attn_ch = width // self.heads // 3 - scale = 1 / math.sqrt(math.sqrt(attn_ch)) - qkv = qkv.view(bs, n_ctx, self.heads, -1) - q, k, v = torch.split(qkv, attn_ch, dim=-1) - weight = torch.einsum( - "bthc,bshc->bhts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - wdtype = weight.dtype - weight = torch.softmax(weight.float(), dim=-1).type(wdtype) - return torch.einsum("bhts,bshc->bthc", weight, v).reshape(bs, n_ctx, -1) - - -class ResidualAttentionBlock(nn.Module): - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - n_ctx: int, - width: int, - heads: int, - init_scale: float = 1.0, - ): - super().__init__() - - self.attn = MultiheadAttention( - device=device, - dtype=dtype, - n_ctx=n_ctx, - width=width, - heads=heads, - init_scale=init_scale, - ) - self.ln_1 = nn.LayerNorm(width, device=device, dtype=dtype) - self.mlp = MLP(device=device, dtype=dtype, width=width, init_scale=init_scale) - self.ln_2 = nn.LayerNorm(width, device=device, dtype=dtype) - - def forward(self, x: torch.Tensor): - x = x + self.attn(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - n_ctx: int, - width: int, - layers: int, - heads: int, - init_scale: float = 0.25, - ): - super().__init__() - self.n_ctx = n_ctx - self.width = width - self.layers = layers - init_scale = 
init_scale * math.sqrt(1.0 / width) - self.resblocks = nn.ModuleList( - [ - ResidualAttentionBlock( - device=device, - dtype=dtype, - n_ctx=n_ctx, - width=width, - heads=heads, - init_scale=init_scale, - ) - for _ in range(layers) - ] - ) - - def forward(self, x: torch.Tensor): - for block in self.resblocks: - x = block(x) - return x - - -class PointDiffusionTransformer(nn.Module): - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - input_channels: int = 3, - output_channels: int = 3, - n_ctx: int = 1024, - width: int = 512, - layers: int = 12, - heads: int = 8, - init_scale: float = 0.25, - time_token_cond: bool = False, - use_pos_emb: bool = False, - pos_emb_init_scale: float = 1.0, - pos_emb_n_ctx: Optional[int] = None, - ): - super().__init__() - self.input_channels = input_channels - self.output_channels = output_channels - self.n_ctx = n_ctx - self.time_token_cond = time_token_cond - self.use_pos_emb = use_pos_emb - self.time_embed = MLP( - device=device, dtype=dtype, width=width, init_scale=init_scale * math.sqrt(1.0 / width) - ) - self.ln_pre = nn.LayerNorm(width, device=device, dtype=dtype) - self.backbone = Transformer( - device=device, - dtype=dtype, - n_ctx=n_ctx + int(time_token_cond), - width=width, - layers=layers, - heads=heads, - init_scale=init_scale, - ) - self.ln_post = nn.LayerNorm(width, device=device, dtype=dtype) - self.input_proj = nn.Linear(input_channels, width, device=device, dtype=dtype) - self.output_proj = nn.Linear(width, output_channels, device=device, dtype=dtype) - with torch.no_grad(): - self.output_proj.weight.zero_() - self.output_proj.bias.zero_() - if self.use_pos_emb: - self.register_parameter( - "pos_emb", - nn.Parameter( - pos_emb_init_scale - * torch.randn(pos_emb_n_ctx or self.n_ctx, width, device=device, dtype=dtype) - ), - ) - - def forward(self, x: torch.Tensor, t: torch.Tensor): - """ - :param x: an [N x C x T] tensor. - :param t: an [N] tensor. - :return: an [N x C' x T] tensor. 
- """ - assert x.shape[-1] == self.n_ctx - t_embed = self.time_embed(timestep_embedding(t, self.backbone.width)) - return self._forward_with_cond(x, [(t_embed, self.time_token_cond)]) - - def _forward_with_cond( - self, x: torch.Tensor, cond_as_token: List[Tuple[torch.Tensor, bool]] - ) -> torch.Tensor: - h = self.input_proj(x.permute(0, 2, 1)) # NCL -> NLC - for emb, as_token in cond_as_token: - if not as_token: - h = h + emb[:, None] - if self.use_pos_emb: - h = h + self.pos_emb - extra_tokens = [ - (emb[:, None] if len(emb.shape) == 2 else emb) - for emb, as_token in cond_as_token - if as_token - ] - if len(extra_tokens): - h = torch.cat(extra_tokens + [h], dim=1) - - h = self.ln_pre(h) - h = self.backbone(h) - h = self.ln_post(h) - if len(extra_tokens): - h = h[:, sum(h.shape[1] for h in extra_tokens) :] - h = self.output_proj(h) - return h.permute(0, 2, 1) - - -class CLIPImagePointDiffusionTransformer(PointDiffusionTransformer): - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - n_ctx: int = 1024, - token_cond: bool = False, - cond_drop_prob: float = 0.0, - frozen_clip: bool = True, - **kwargs, - ): - super().__init__( - device=device, dtype=dtype, n_ctx=n_ctx + int(token_cond), pos_emb_n_ctx=n_ctx, **kwargs - ) - self.n_ctx = n_ctx - self.token_cond = token_cond - self.clip = (FrozenImageCLIP if frozen_clip else ImageCLIP)(device) - self.clip_embed = nn.Linear( - self.clip.feature_dim, self.backbone.width, device=device, dtype=dtype - ) - self.cond_drop_prob = cond_drop_prob - - def cached_model_kwargs(self, batch_size: int, model_kwargs: Dict[str, Any]) -> Dict[str, Any]: - with torch.no_grad(): - return dict(embeddings=self.clip(batch_size, **model_kwargs)) - - def forward( - self, - x: torch.Tensor, - t: torch.Tensor, - images: Optional[Iterable[Optional[ImageType]]] = None, - texts: Optional[Iterable[Optional[str]]] = None, - embeddings: Optional[Iterable[Optional[torch.Tensor]]] = None, - ): - """ - :param x: an [N x C x T] tensor. - :param t: an [N] tensor. - :param images: a batch of images to condition on. - :param texts: a batch of texts to condition on. - :param embeddings: a batch of CLIP embeddings to condition on. - :return: an [N x C' x T] tensor. 
- """ - assert x.shape[-1] == self.n_ctx - - t_embed = self.time_embed(timestep_embedding(t, self.backbone.width)) - clip_out = self.clip(batch_size=len(x), images=images, texts=texts, embeddings=embeddings) - assert len(clip_out.shape) == 2 and clip_out.shape[0] == x.shape[0] - - if self.training: - mask = torch.rand(size=[len(x)]) >= self.cond_drop_prob - clip_out = clip_out * mask[:, None].to(clip_out) - - # Rescale the features to have unit variance - clip_out = math.sqrt(clip_out.shape[1]) * clip_out - - clip_embed = self.clip_embed(clip_out) - - cond = [(clip_embed, self.token_cond), (t_embed, self.time_token_cond)] - return self._forward_with_cond(x, cond) - - -class CLIPImageGridPointDiffusionTransformer(PointDiffusionTransformer): - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - n_ctx: int = 1024, - cond_drop_prob: float = 0.0, - frozen_clip: bool = True, - **kwargs, - ): - clip = (FrozenImageCLIP if frozen_clip else ImageCLIP)(device) - super().__init__( - device=device, - dtype=dtype, - n_ctx=n_ctx + clip.grid_size**2, - pos_emb_n_ctx=n_ctx, - **kwargs, - ) - self.n_ctx = n_ctx - self.clip = clip - self.clip_embed = nn.Sequential( - nn.LayerNorm( - normalized_shape=(self.clip.grid_feature_dim,), device=device, dtype=dtype - ), - nn.Linear(self.clip.grid_feature_dim, self.backbone.width, device=device, dtype=dtype), - ) - self.cond_drop_prob = cond_drop_prob - - def cached_model_kwargs(self, batch_size: int, model_kwargs: Dict[str, Any]) -> Dict[str, Any]: - _ = batch_size - with torch.no_grad(): - return dict(embeddings=self.clip.embed_images_grid(model_kwargs["images"])) - - def forward( - self, - x: torch.Tensor, - t: torch.Tensor, - images: Optional[Iterable[ImageType]] = None, - embeddings: Optional[Iterable[torch.Tensor]] = None, - ): - """ - :param x: an [N x C x T] tensor. - :param t: an [N] tensor. - :param images: a batch of images to condition on. - :param embeddings: a batch of CLIP latent grids to condition on. - :return: an [N x C' x T] tensor. 
- """ - assert images is not None or embeddings is not None, "must specify images or embeddings" - assert images is None or embeddings is None, "cannot specify both images and embeddings" - assert x.shape[-1] == self.n_ctx - - t_embed = self.time_embed(timestep_embedding(t, self.backbone.width)) - - if images is not None: - clip_out = self.clip.embed_images_grid(images) - else: - clip_out = embeddings - - if self.training: - mask = torch.rand(size=[len(x)]) >= self.cond_drop_prob - clip_out = clip_out * mask[:, None, None].to(clip_out) - - clip_out = clip_out.permute(0, 2, 1) # NCL -> NLC - clip_embed = self.clip_embed(clip_out) - - cond = [(t_embed, self.time_token_cond), (clip_embed, True)] - return self._forward_with_cond(x, cond) - - -class UpsamplePointDiffusionTransformer(PointDiffusionTransformer): - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - cond_input_channels: Optional[int] = None, - cond_ctx: int = 1024, - n_ctx: int = 4096 - 1024, - channel_scales: Optional[Sequence[float]] = None, - channel_biases: Optional[Sequence[float]] = None, - **kwargs, - ): - super().__init__(device=device, dtype=dtype, n_ctx=n_ctx + cond_ctx, **kwargs) - self.n_ctx = n_ctx - self.cond_input_channels = cond_input_channels or self.input_channels - self.cond_point_proj = nn.Linear( - self.cond_input_channels, self.backbone.width, device=device, dtype=dtype - ) - - self.register_buffer( - "channel_scales", - torch.tensor(channel_scales, dtype=dtype, device=device) - if channel_scales is not None - else None, - ) - self.register_buffer( - "channel_biases", - torch.tensor(channel_biases, dtype=dtype, device=device) - if channel_biases is not None - else None, - ) - - def forward(self, x: torch.Tensor, t: torch.Tensor, *, low_res: torch.Tensor): - """ - :param x: an [N x C1 x T] tensor. - :param t: an [N] tensor. - :param low_res: an [N x C2 x T'] tensor of conditioning points. - :return: an [N x C3 x T] tensor. 
- """ - assert x.shape[-1] == self.n_ctx - t_embed = self.time_embed(timestep_embedding(t, self.backbone.width)) - low_res_embed = self._embed_low_res(low_res) - cond = [(t_embed, self.time_token_cond), (low_res_embed, True)] - return self._forward_with_cond(x, cond) - - def _embed_low_res(self, x: torch.Tensor) -> torch.Tensor: - if self.channel_scales is not None: - x = x * self.channel_scales[None, :, None] - if self.channel_biases is not None: - x = x + self.channel_biases[None, :, None] - return self.cond_point_proj(x.permute(0, 2, 1)) - - -class CLIPImageGridUpsamplePointDiffusionTransformer(UpsamplePointDiffusionTransformer): - def __init__( - self, - *, - device: torch.device, - dtype: torch.dtype, - n_ctx: int = 4096 - 1024, - cond_drop_prob: float = 0.0, - frozen_clip: bool = True, - **kwargs, - ): - clip = (FrozenImageCLIP if frozen_clip else ImageCLIP)(device) - super().__init__(device=device, dtype=dtype, n_ctx=n_ctx + clip.grid_size**2, **kwargs) - self.n_ctx = n_ctx - - self.clip = clip - self.clip_embed = nn.Sequential( - nn.LayerNorm( - normalized_shape=(self.clip.grid_feature_dim,), device=device, dtype=dtype - ), - nn.Linear(self.clip.grid_feature_dim, self.backbone.width, device=device, dtype=dtype), - ) - self.cond_drop_prob = cond_drop_prob - - def cached_model_kwargs(self, batch_size: int, model_kwargs: Dict[str, Any]) -> Dict[str, Any]: - _ = batch_size - with torch.no_grad(): - return dict( - embeddings=self.clip.embed_images_grid(model_kwargs["images"]), - low_res=model_kwargs["low_res"], - ) - - def forward( - self, - x: torch.Tensor, - t: torch.Tensor, - *, - low_res: torch.Tensor, - images: Optional[Iterable[ImageType]] = None, - embeddings: Optional[Iterable[torch.Tensor]] = None, - ): - """ - :param x: an [N x C1 x T] tensor. - :param t: an [N] tensor. - :param low_res: an [N x C2 x T'] tensor of conditioning points. - :param images: a batch of images to condition on. - :param embeddings: a batch of CLIP latent grids to condition on. - :return: an [N x C3 x T] tensor. - """ - assert x.shape[-1] == self.n_ctx - t_embed = self.time_embed(timestep_embedding(t, self.backbone.width)) - low_res_embed = self._embed_low_res(low_res) - - if images is not None: - clip_out = self.clip.embed_images_grid(images) - else: - clip_out = embeddings - - if self.training: - mask = torch.rand(size=[len(x)]) >= self.cond_drop_prob - clip_out = clip_out * mask[:, None, None].to(clip_out) - - clip_out = clip_out.permute(0, 2, 1) # NCL -> NLC - clip_embed = self.clip_embed(clip_out) - - cond = [(t_embed, self.time_token_cond), (clip_embed, True), (low_res_embed, True)] - return self._forward_with_cond(x, cond) diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/sync_batchnorm/replicate.py b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/sync_batchnorm/replicate.py deleted file mode 100644 index b71c7b8ed51a1d6c55b1f753bdd8d90bad79bd06..0000000000000000000000000000000000000000 --- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/models/big/sync_batchnorm/replicate.py +++ /dev/null @@ -1,94 +0,0 @@ -# -*- coding: utf-8 -*- -# File : replicate.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
- -import functools - -from torch.nn.parallel.data_parallel import DataParallel - -__all__ = [ - 'CallbackContext', - 'execute_replication_callbacks', - 'DataParallelWithCallback', - 'patch_replication_callback' -] - - -class CallbackContext(object): - pass - - -def execute_replication_callbacks(modules): - """ - Execute an replication callback `__data_parallel_replicate__` on each module created by original replication. - - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Note that, as all modules are isomorphism, we assign each sub-module with a context - (shared among multiple copies of this module on different devices). - Through this context, different copies can share some information. - - We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback - of any slave copies. - """ - master_copy = modules[0] - nr_modules = len(list(master_copy.modules())) - ctxs = [CallbackContext() for _ in range(nr_modules)] - - for i, module in enumerate(modules): - for j, m in enumerate(module.modules()): - if hasattr(m, '__data_parallel_replicate__'): - m.__data_parallel_replicate__(ctxs[j], i) - - -class DataParallelWithCallback(DataParallel): - """ - Data Parallel with a replication callback. - - An replication callback `__data_parallel_replicate__` of each module will be invoked after being created by - original `replicate` function. - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - # sync_bn.__data_parallel_replicate__ will be invoked. - """ - - def replicate(self, module, device_ids): - modules = super(DataParallelWithCallback, self).replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - -def patch_replication_callback(data_parallel): - """ - Monkey-patch an existing `DataParallel` object. Add the replication callback. - Useful when you have customized `DataParallel` implementation. 
- - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallel(sync_bn, device_ids=[0, 1]) - > patch_replication_callback(sync_bn) - # this is equivalent to - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - """ - - assert isinstance(data_parallel, DataParallel) - - old_replicate = data_parallel.replicate - - @functools.wraps(old_replicate) - def new_replicate(module, device_ids): - modules = old_replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - data_parallel.replicate = new_replicate diff --git a/spaces/vorstcavry/VoCh-beta/config.py b/spaces/vorstcavry/VoCh-beta/config.py deleted file mode 100644 index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000 --- a/spaces/vorstcavry/VoCh-beta/config.py +++ /dev/null @@ -1,88 +0,0 @@ -########################硬件参数######################## - -# 填写cuda:x, cpu 或 mps, x指代第几张卡,只支持 N卡 / Apple Silicon 加速 -device = "cuda:0" - -# 9-10-20-30-40系显卡无脑True,不影响质量,>=20显卡开启有加速 -is_half = True - -# 默认0用上所有线程,写数字限制CPU资源使用 -n_cpu = 0 - -########################硬件参数######################## - - -##################下为参数处理逻辑,勿动################## - -########################命令行参数######################## -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("--port", type=int, default=7865, help="Listen port") -parser.add_argument("--pycmd", type=str, default="python", help="Python command") -parser.add_argument("--colab", action="store_true", help="Launch in colab") -parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" -) -parser.add_argument( - "--noautoopen", action="store_true", help="Do not open in browser automatically" -) -cmd_opts, unknown = parser.parse_known_args() - -python_cmd = cmd_opts.pycmd -listen_port = cmd_opts.port -iscolab = cmd_opts.colab -noparallel = cmd_opts.noparallel -noautoopen = cmd_opts.noautoopen -########################命令行参数######################## - -import sys -import torch - - -# has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. 
-# check `getattr` and try it for compatibility -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, "has_mps", False): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -if not torch.cuda.is_available(): - if has_mps(): - print("没有发现支持的N卡, 使用MPS进行推理") - device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - device = "cpu" - is_half = False - -if device not in ["cpu", "mps"]: - gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1])) - if "16" in gpu_name or "MX" in gpu_name: - print("16系显卡/MX系显卡强制单精度") - is_half = False - -from multiprocessing import cpu_count - -if n_cpu == 0: - n_cpu = cpu_count() -if is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 -else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 diff --git a/spaces/vumichien/Generate_human_motion/VQ-Trans/options/get_eval_option.py b/spaces/vumichien/Generate_human_motion/VQ-Trans/options/get_eval_option.py deleted file mode 100644 index d0989ba1a8116068753ada2cb1806744e4512447..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Generate_human_motion/VQ-Trans/options/get_eval_option.py +++ /dev/null @@ -1,83 +0,0 @@ -from argparse import Namespace -import re -from os.path import join as pjoin - - -def is_float(numStr): - flag = False - numStr = str(numStr).strip().lstrip('-').lstrip('+') - try: - reg = re.compile(r'^[-+]?[0-9]+\.[0-9]+$') - res = reg.match(str(numStr)) - if res: - flag = True - except Exception as ex: - print("is_float() - error: " + str(ex)) - return flag - - -def is_number(numStr): - flag = False - numStr = str(numStr).strip().lstrip('-').lstrip('+') - if str(numStr).isdigit(): - flag = True - return flag - - -def get_opt(opt_path, device): - opt = Namespace() - opt_dict = vars(opt) - - skip = ('-------------- End ----------------', - '------------ Options -------------', - '\n') - print('Reading', opt_path) - with open(opt_path) as f: - for line in f: - if line.strip() not in skip: - # print(line.strip()) - key, value = line.strip().split(': ') - if value in ('True', 'False'): - opt_dict[key] = (value == 'True') - # print(key, value) - elif is_float(value): - opt_dict[key] = float(value) - elif is_number(value): - opt_dict[key] = int(value) - else: - opt_dict[key] = str(value) - - # print(opt) - opt_dict['which_epoch'] = 'finest' - opt.save_root = pjoin(opt.checkpoints_dir, opt.dataset_name, opt.name) - opt.model_dir = pjoin(opt.save_root, 'model') - opt.meta_dir = pjoin(opt.save_root, 'meta') - - if opt.dataset_name == 't2m': - opt.data_root = './dataset/HumanML3D/' - opt.motion_dir = pjoin(opt.data_root, 'new_joint_vecs') - opt.text_dir = pjoin(opt.data_root, 'texts') - opt.joints_num = 22 - opt.dim_pose = 263 - opt.max_motion_length = 196 - opt.max_motion_frame = 196 - opt.max_motion_token = 55 - elif opt.dataset_name == 'kit': - opt.data_root = './dataset/KIT-ML/' - opt.motion_dir = pjoin(opt.data_root, 'new_joint_vecs') - opt.text_dir = pjoin(opt.data_root, 'texts') - opt.joints_num = 21 - opt.dim_pose = 251 - opt.max_motion_length = 196 - opt.max_motion_frame = 196 - opt.max_motion_token = 55 - else: - raise KeyError('Dataset not recognized') - - opt.dim_word = 300 - opt.num_classes = 200 // opt.unit_length - opt.is_train = False - opt.is_continue = False - opt.device = device - - return opt \ No newline at end of file diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/roi_align_rotated.py 
b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/roi_align_rotated.py deleted file mode 100644 index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/roi_align_rotated.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward']) - - -class RoIAlignRotatedFunction(Function): - - @staticmethod - def symbolic(g, features, rois, out_size, spatial_scale, sample_num, - aligned, clockwise): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - return g.op( - 'mmcv::MMCVRoIAlignRotated', - features, - rois, - output_height_i=out_h, - output_width_i=out_h, - spatial_scale_f=spatial_scale, - sampling_ratio_i=sample_num, - aligned_i=aligned, - clockwise_i=clockwise) - - @staticmethod - def forward(ctx, - features, - rois, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - ctx.spatial_scale = spatial_scale - ctx.sample_num = sample_num - ctx.aligned = aligned - ctx.clockwise = clockwise - ctx.save_for_backward(rois) - ctx.feature_size = features.size() - - batch_size, num_channels, data_height, data_width = features.size() - num_rois = rois.size(0) - - output = features.new_zeros(num_rois, num_channels, out_h, out_w) - ext_module.roi_align_rotated_forward( - features, - rois, - output, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return output - - @staticmethod - def backward(ctx, grad_output): - feature_size = ctx.feature_size - spatial_scale = ctx.spatial_scale - aligned = ctx.aligned - clockwise = ctx.clockwise - sample_num = ctx.sample_num - rois = ctx.saved_tensors[0] - assert feature_size is not None - batch_size, num_channels, data_height, data_width = feature_size - - out_w = grad_output.size(3) - out_h = grad_output.size(2) - - grad_input = grad_rois = None - - if ctx.needs_input_grad[0]: - grad_input = rois.new_zeros(batch_size, num_channels, data_height, - data_width) - ext_module.roi_align_rotated_backward( - grad_output.contiguous(), - rois, - grad_input, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return grad_input, grad_rois, None, None, None, None, None - - -roi_align_rotated = RoIAlignRotatedFunction.apply - - -class RoIAlignRotated(nn.Module): - """RoI align pooling layer for rotated proposals. - - It accepts a feature map of shape (N, C, H, W) and rois with shape - (n, 6) with each roi decoded as (batch_index, center_x, center_y, - w, h, angle). The angle is in radian. 
- - Args: - out_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sample_num (int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - aligned (bool): if False, use the legacy implementation in - MMDetection. If True, align the results more perfectly. - Default: True. - clockwise (bool): If True, the angle in each proposal follows a - clockwise fashion in image space, otherwise, the angle is - counterclockwise. Default: False. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - def __init__(self, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - super(RoIAlignRotated, self).__init__() - - self.out_size = out_size - self.spatial_scale = float(spatial_scale) - self.sample_num = int(sample_num) - self.aligned = aligned - self.clockwise = clockwise - - def forward(self, features, rois): - return RoIAlignRotatedFunction.apply(features, rois, self.out_size, - self.spatial_scale, - self.sample_num, self.aligned, - self.clockwise) diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/decode_heads/ocr_head.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/decode_heads/ocr_head.py deleted file mode 100644 index 715852e94e81dc46623972748285d2d19237a341..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/decode_heads/ocr_head.py +++ /dev/null @@ -1,127 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .cascade_decode_head import BaseCascadeDecodeHead - - -class SpatialGatherModule(nn.Module): - """Aggregate the context features according to the initial predicted - probability distribution. - - Employ the soft-weighted method to aggregate the context. 
- """ - - def __init__(self, scale): - super(SpatialGatherModule, self).__init__() - self.scale = scale - - def forward(self, feats, probs): - """Forward function.""" - batch_size, num_classes, height, width = probs.size() - channels = feats.size(1) - probs = probs.view(batch_size, num_classes, -1) - feats = feats.view(batch_size, channels, -1) - # [batch_size, height*width, num_classes] - feats = feats.permute(0, 2, 1) - # [batch_size, channels, height*width] - probs = F.softmax(self.scale * probs, dim=2) - # [batch_size, channels, num_classes] - ocr_context = torch.matmul(probs, feats) - ocr_context = ocr_context.permute(0, 2, 1).contiguous().unsqueeze(3) - return ocr_context - - -class ObjectAttentionBlock(_SelfAttentionBlock): - """Make a OCR used SelfAttentionBlock.""" - - def __init__(self, in_channels, channels, scale, conv_cfg, norm_cfg, - act_cfg): - if scale > 1: - query_downsample = nn.MaxPool2d(kernel_size=scale) - else: - query_downsample = None - super(ObjectAttentionBlock, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=query_downsample, - key_downsample=None, - key_query_num_convs=2, - key_query_norm=True, - value_out_num_convs=1, - value_out_norm=True, - matmul_norm=True, - with_out=True, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.bottleneck = ConvModule( - in_channels * 2, - in_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, query_feats, key_feats): - """Forward function.""" - context = super(ObjectAttentionBlock, - self).forward(query_feats, key_feats) - output = self.bottleneck(torch.cat([context, query_feats], dim=1)) - if self.query_downsample is not None: - output = resize(query_feats) - - return output - - -@HEADS.register_module() -class OCRHead(BaseCascadeDecodeHead): - """Object-Contextual Representations for Semantic Segmentation. - - This head is the implementation of `OCRNet - `_. - - Args: - ocr_channels (int): The intermediate channels of OCR block. - scale (int): The scale of probability map in SpatialGatherModule in - Default: 1. 
- """ - - def __init__(self, ocr_channels, scale=1, **kwargs): - super(OCRHead, self).__init__(**kwargs) - self.ocr_channels = ocr_channels - self.scale = scale - self.object_context_block = ObjectAttentionBlock( - self.channels, - self.ocr_channels, - self.scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.spatial_gather_module = SpatialGatherModule(self.scale) - - self.bottleneck = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs, prev_output): - """Forward function.""" - x = self._transform_inputs(inputs) - feats = self.bottleneck(x) - context = self.spatial_gather_module(feats, prev_output) - object_context = self.object_context_block(feats, context) - output = self.cls_seg(object_context) - - return output diff --git a/spaces/vumichien/canvas_controlnet/ldm/modules/midas/midas/midas_net_custom.py b/spaces/vumichien/canvas_controlnet/ldm/modules/midas/midas/midas_net_custom.py deleted file mode 100644 index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/ldm/modules/midas/midas/midas_net_custom.py +++ /dev/null @@ -1,128 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder - - -class MidasNet_small(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True, - blocks={'expand': True}): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. 
Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet_small, self).__init__() - - use_pretrained = False if path else True - - self.channels_last = channels_last - self.blocks = blocks - self.backbone = backbone - - self.groups = 1 - - features1=features - features2=features - features3=features - features4=features - self.expand = False - if "expand" in self.blocks and self.blocks['expand'] == True: - self.expand = True - features1=features - features2=features*2 - features3=features*4 - features4=features*8 - - self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable) - - self.scratch.activation = nn.ReLU(False) - - self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners) - - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1), - self.scratch.activation, - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - if path: - self.load(path) - - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - if self.channels_last==True: - print("self.channels_last = ", self.channels_last) - x.contiguous(memory_format=torch.channels_last) - - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) - - - -def fuse_model(m): - prev_previous_type = nn.Identity() - prev_previous_name = '' - previous_type = nn.Identity() - previous_name = '' - for name, module in m.named_modules(): - if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU: - # print("FUSED ", prev_previous_name, previous_name, name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True) - elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d: - # print("FUSED ", prev_previous_name, previous_name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True) - # elif previous_type == nn.Conv2d and type(module) == nn.ReLU: - # print("FUSED ", previous_name, name) - # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True) - - prev_previous_type = previous_type - prev_previous_name = previous_name - previous_type = type(module) - previous_name = name \ No newline at end of file diff --git a/spaces/wangfowen/hackaithon_app/README.md b/spaces/wangfowen/hackaithon_app/README.md deleted file mode 100644 index 0af4469a5fa11e12836c50d46e0a6030b6cf88b3..0000000000000000000000000000000000000000 --- a/spaces/wangfowen/hackaithon_app/README.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: Hackaithon App -emoji: 🏢 -colorFrom: yellow -colorTo: red -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - - - -## Setup - -``` - pip install virtualenv - python -m venv myenv - source myenv/bin/activate - pip install -r requirements.txt -``` - -## Run - -``` - streamlit run app.py -``` diff --git a/spaces/whitphx/gradio-static-test/dist/assets/linear-58a44b5e.js b/spaces/whitphx/gradio-static-test/dist/assets/linear-58a44b5e.js deleted file mode 100644 index 5957ab4a575538fb9023ff2dbfffc2cab1f1743e..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/linear-58a44b5e.js +++ /dev/null @@ -1,2 +0,0 @@ -function W(n,t){return n==null||t==null?NaN:nt?1:n>=t?0:NaN}function En(n){let t=n,e=n,r=n;n.length!==2&&(t=(a,u)=>n(a)-u,e=W,r=(a,u)=>W(n(a),u));function i(a,u,s=0,c=a.length){if(s>>1;r(a[h],u)<0?s=h+1:c=h}while(s>>1;r(a[h],u)<=0?s=h+1:c=h}while(ss&&t(a[h-1],u)>-t(a[h],u)?h-1:h}return{left:i,center:o,right:f}}function Un(n){return n===null?NaN:+n}function*Qt(n,t){if(t===void 0)for(let e of n)e!=null&&(e=+e)>=e&&(yield e);else{let e=-1;for(let r of n)(r=t(r,++e,n))!=null&&(r=+r)>=r&&(yield r)}}const Pn=En(W),Yn=Pn.right,Ut=Pn.left;En(Un).center;const Jn=Yn;var nn=Math.sqrt(50),tn=Math.sqrt(10),en=Math.sqrt(2);function Kn(n,t,e){var 
r,i=-1,f,o,a;if(t=+t,n=+n,e=+e,n===t&&e>0)return[n];if((r=t0){let u=Math.round(n/a),s=Math.round(t/a);for(u*at&&--s,o=new Array(f=s-u+1);++it&&--s,o=new Array(f=s-u+1);++i=0?(f>=nn?10:f>=tn?5:f>=en?2:1)*Math.pow(10,i):-Math.pow(10,-i)/(f>=nn?10:f>=tn?5:f>=en?2:1)}function Wn(n,t,e){var r=Math.abs(t-n)/Math.max(0,e),i=Math.pow(10,Math.floor(Math.log(r)/Math.LN10)),f=r/i;return f>=nn?i*=10:f>=tn?i*=5:f>=en&&(i*=2),t=1e21?n.toLocaleString("en").replace(/,/g,""):n.toString(10)}function G(n,t){if((e=(n=t?n.toExponential(t-1):n.toExponential()).indexOf("e"))<0)return null;var e,r=n.slice(0,e);return[r.length>1?r[0]+r.slice(2):r,+n.slice(e+1)]}function L(n){return n=G(Math.abs(n)),n?n[1]:NaN}function tt(n,t){return function(e,r){for(var i=e.length,f=[],o=0,a=n[0],u=0;i>0&&a>0&&(u+a+1>r&&(a=Math.max(1,r-u)),f.push(e.substring(i-=a,i+a)),!((u+=a+1)>r));)a=n[o=(o+1)%n.length];return f.reverse().join(t)}}function et(n){return function(t){return t.replace(/[0-9]/g,function(e){return n[+e]})}}var rt=/^(?:(.)?([<>=^]))?([+\-( ])?([$#])?(0)?(\d+)?(,)?(\.\d+)?(~)?([a-z%])?$/i;function Z(n){if(!(t=rt.exec(n)))throw new Error("invalid format: "+n);var t;return new sn({fill:t[1],align:t[2],sign:t[3],symbol:t[4],zero:t[5],width:t[6],comma:t[7],precision:t[8]&&t[8].slice(1),trim:t[9],type:t[10]})}Z.prototype=sn.prototype;function sn(n){this.fill=n.fill===void 0?" ":n.fill+"",this.align=n.align===void 0?">":n.align+"",this.sign=n.sign===void 0?"-":n.sign+"",this.symbol=n.symbol===void 0?"":n.symbol+"",this.zero=!!n.zero,this.width=n.width===void 0?void 0:+n.width,this.comma=!!n.comma,this.precision=n.precision===void 0?void 0:+n.precision,this.trim=!!n.trim,this.type=n.type===void 0?"":n.type+""}sn.prototype.toString=function(){return this.fill+this.align+this.sign+this.symbol+(this.zero?"0":"")+(this.width===void 0?"":Math.max(1,this.width|0))+(this.comma?",":"")+(this.precision===void 0?"":"."+Math.max(0,this.precision|0))+(this.trim?"~":"")+this.type};function it(n){n:for(var t=n.length,e=1,r=-1,i;e0&&(r=0);break}return r>0?n.slice(0,r)+n.slice(i+1):n}var qn;function at(n,t){var e=G(n,t);if(!e)return n+"";var r=e[0],i=e[1],f=i-(qn=Math.max(-8,Math.min(8,Math.floor(i/3)))*3)+1,o=r.length;return f===o?r:f>o?r+new Array(f-o+1).join("0"):f>0?r.slice(0,f)+"."+r.slice(f):"0."+new Array(1-f).join("0")+G(n,Math.max(0,t+f-1))[0]}function xn(n,t){var e=G(n,t);if(!e)return n+"";var r=e[0],i=e[1];return i<0?"0."+new Array(-i).join("0")+r:r.length>i+1?r.slice(0,i+1)+"."+r.slice(i+1):r+new Array(i-r.length+2).join("0")}const mn={"%":(n,t)=>(n*100).toFixed(t),b:n=>Math.round(n).toString(2),c:n=>n+"",d:nt,e:(n,t)=>n.toExponential(t),f:(n,t)=>n.toFixed(t),g:(n,t)=>n.toPrecision(t),o:n=>Math.round(n).toString(8),p:(n,t)=>xn(n*100,t),r:xn,s:at,X:n=>Math.round(n).toString(16).toUpperCase(),x:n=>Math.round(n).toString(16)};function bn(n){return n}var pn=Array.prototype.map,yn=["y","z","a","f","p","n","µ","m","","k","M","G","T","P","E","Z","Y"];function ft(n){var t=n.grouping===void 0||n.thousands===void 0?bn:tt(pn.call(n.grouping,Number),n.thousands+""),e=n.currency===void 0?"":n.currency[0]+"",r=n.currency===void 0?"":n.currency[1]+"",i=n.decimal===void 0?".":n.decimal+"",f=n.numerals===void 0?bn:et(pn.call(n.numerals,String)),o=n.percent===void 0?"%":n.percent+"",a=n.minus===void 0?"−":n.minus+"",u=n.nan===void 0?"NaN":n.nan+"";function s(h){h=Z(h);var l=h.fill,p=h.align,g=h.sign,k=h.symbol,v=h.zero,N=h.width,R=h.comma,y=h.precision,H=h.trim,m=h.type;m==="n"?(R=!0,m="g"):mn[m]||(y===void 
0&&(y=12),H=!0,m="g"),(v||l==="0"&&p==="=")&&(v=!0,l="0",p="=");var Vn=k==="$"?e:k==="#"&&/[boxX]/.test(m)?"0"+m.toLowerCase():"",Xn=k==="$"?r:/[%p]/.test(m)?o:"",ln=mn[m],Qn=/[defgprs%]/.test(m);y=y===void 0?6:/[gprs]/.test(m)?Math.max(1,Math.min(21,y)):Math.max(0,Math.min(20,y));function dn(d){var A=Vn,b=Xn,E,gn,F;if(m==="c")b=ln(d)+b,d="";else{d=+d;var $=d<0||1/d<0;if(d=isNaN(d)?u:ln(Math.abs(d),y),H&&(d=it(d)),$&&+d==0&&g!=="+"&&($=!1),A=($?g==="("?g:a:g==="-"||g==="("?"":g)+A,b=(m==="s"?yn[8+qn/3]:"")+b+($&&g==="("?")":""),Qn){for(E=-1,gn=d.length;++EF||F>57){b=(F===46?i+d.slice(E+1):d.slice(E))+b,d=d.slice(0,E);break}}}R&&!v&&(d=t(d,1/0));var B=A.length+d.length+b.length,_=B>1)+A+d+b+_.slice(B);break;default:d=_+A+d+b;break}return f(d)}return dn.toString=function(){return h+""},dn}function c(h,l){var p=s((h=Z(h),h.type="f",h)),g=Math.max(-8,Math.min(8,Math.floor(L(l)/3)))*3,k=Math.pow(10,-g),v=yn[8+g/3];return function(N){return p(k*N)+v}}return{format:s,formatPrefix:c}}var D,Ln,Hn;ot({thousands:",",grouping:[3],currency:["$",""]});function ot(n){return D=ft(n),Ln=D.format,Hn=D.formatPrefix,D}function ut(n){return Math.max(0,-L(Math.abs(n)))}function st(n,t){return Math.max(0,Math.max(-8,Math.min(8,Math.floor(L(t)/3)))*3-L(Math.abs(n)))}function ht(n,t){return n=Math.abs(n),t=Math.abs(t)-n,Math.max(0,L(t)-L(n))+1}const rn=Math.PI,an=2*rn,S=1e-6,ct=an-S;function fn(){this._x0=this._y0=this._x1=this._y1=null,this._=""}function In(){return new fn}fn.prototype=In.prototype={constructor:fn,moveTo:function(n,t){this._+="M"+(this._x0=this._x1=+n)+","+(this._y0=this._y1=+t)},closePath:function(){this._x1!==null&&(this._x1=this._x0,this._y1=this._y0,this._+="Z")},lineTo:function(n,t){this._+="L"+(this._x1=+n)+","+(this._y1=+t)},quadraticCurveTo:function(n,t,e,r){this._+="Q"+ +n+","+ +t+","+(this._x1=+e)+","+(this._y1=+r)},bezierCurveTo:function(n,t,e,r,i,f){this._+="C"+ +n+","+ +t+","+ +e+","+ +r+","+(this._x1=+i)+","+(this._y1=+f)},arcTo:function(n,t,e,r,i){n=+n,t=+t,e=+e,r=+r,i=+i;var f=this._x1,o=this._y1,a=e-n,u=r-t,s=f-n,c=o-t,h=s*s+c*c;if(i<0)throw new Error("negative radius: "+i);if(this._x1===null)this._+="M"+(this._x1=n)+","+(this._y1=t);else if(h>S)if(!(Math.abs(c*a-u*s)>S)||!i)this._+="L"+(this._x1=n)+","+(this._y1=t);else{var l=e-f,p=r-o,g=a*a+u*u,k=l*l+p*p,v=Math.sqrt(g),N=Math.sqrt(h),R=i*Math.tan((rn-Math.acos((g+h-k)/(2*v*N)))/2),y=R/N,H=R/v;Math.abs(y-1)>S&&(this._+="L"+(n+y*s)+","+(t+y*c)),this._+="A"+i+","+i+",0,0,"+ +(c*l>s*p)+","+(this._x1=n+H*a)+","+(this._y1=t+H*u)}},arc:function(n,t,e,r,i,f){n=+n,t=+t,e=+e,f=!!f;var o=e*Math.cos(r),a=e*Math.sin(r),u=n+o,s=t+a,c=1^f,h=f?r-i:i-r;if(e<0)throw new Error("negative radius: "+e);this._x1===null?this._+="M"+u+","+s:(Math.abs(this._x1-u)>S||Math.abs(this._y1-s)>S)&&(this._+="L"+u+","+s),e&&(h<0&&(h=h%an+an),h>ct?this._+="A"+e+","+e+",0,1,"+c+","+(n-o)+","+(t-a)+"A"+e+","+e+",0,1,"+c+","+(this._x1=u)+","+(this._y1=s):h>S&&(this._+="A"+e+","+e+",0,"+ +(h>=rn)+","+c+","+(this._x1=n+e*Math.cos(i))+","+(this._y1=t+e*Math.sin(i))))},rect:function(n,t,e,r){this._+="M"+(this._x0=this._x1=+n)+","+(this._y0=this._y1=+t)+"h"+ +e+"v"+ +r+"h"+-e+"Z"},toString:function(){return this._}};function P(n){return function(){return n}}function lt(n){return typeof n=="object"&&"length"in n?n:Array.from(n)}function 
Tn(n){this._context=n}Tn.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._point=0},lineEnd:function(){(this._line||this._line!==0&&this._point===1)&&this._context.closePath(),this._line=1-this._line},point:function(n,t){switch(n=+n,t=+t,this._point){case 0:this._point=1,this._line?this._context.lineTo(n,t):this._context.moveTo(n,t);break;case 1:this._point=2;default:this._context.lineTo(n,t);break}}};function dt(n){return new Tn(n)}function gt(n){return n[0]}function xt(n){return n[1]}function Yt(n,t){var e=P(!0),r=null,i=dt,f=null;n=typeof n=="function"?n:n===void 0?gt:P(n),t=typeof t=="function"?t:t===void 0?xt:P(t);function o(a){var u,s=(a=lt(a)).length,c,h=!1,l;for(r==null&&(f=i(l=In())),u=0;u<=s;++u)!(u>8&15|t>>4&240,t>>4&15|t&240,(t&15)<<4|t&15,1):e===8?O(t>>24&255,t>>16&255,t>>8&255,(t&255)/255):e===4?O(t>>12&15|t>>8&240,t>>8&15|t>>4&240,t>>4&15|t&240,((t&15)<<4|t&15)/255):null):(t=pt.exec(n))?new x(t[1],t[2],t[3],1):(t=yt.exec(n))?new x(t[1]*255/100,t[2]*255/100,t[3]*255/100,1):(t=wt.exec(n))?O(t[1],t[2],t[3],t[4]):(t=Mt.exec(n))?O(t[1]*255/100,t[2]*255/100,t[3]*255/100,t[4]):(t=vt.exec(n))?An(t[1],t[2]/100,t[3]/100,1):(t=_t.exec(n))?An(t[1],t[2]/100,t[3]/100,t[4]):wn.hasOwnProperty(n)?_n(wn[n]):n==="transparent"?new x(NaN,NaN,NaN,0):null}function _n(n){return new x(n>>16&255,n>>8&255,n&255,1)}function O(n,t,e,r){return r<=0&&(n=t=e=NaN),new x(n,t,e,r)}function kt(n){return n instanceof C||(n=z(n)),n?(n=n.rgb(),new x(n.r,n.g,n.b,n.opacity)):new x}function X(n,t,e,r){return arguments.length===1?kt(n):new x(n,t,e,r??1)}function x(n,t,e,r){this.r=+n,this.g=+t,this.b=+e,this.opacity=+r}hn(x,X,zn(C,{brighter:function(n){return n=n==null?V:Math.pow(V,n),new x(this.r*n,this.g*n,this.b*n,this.opacity)},darker:function(n){return n=n==null?I:Math.pow(I,n),new x(this.r*n,this.g*n,this.b*n,this.opacity)},rgb:function(){return this},displayable:function(){return-.5<=this.r&&this.r<255.5&&-.5<=this.g&&this.g<255.5&&-.5<=this.b&&this.b<255.5&&0<=this.opacity&&this.opacity<=1},hex:Nn,formatHex:Nn,formatRgb:kn,toString:kn}));function Nn(){return"#"+Y(this.r)+Y(this.g)+Y(this.b)}function kn(){var n=this.opacity;return n=isNaN(n)?1:Math.max(0,Math.min(1,n)),(n===1?"rgb(":"rgba(")+Math.max(0,Math.min(255,Math.round(this.r)||0))+", "+Math.max(0,Math.min(255,Math.round(this.g)||0))+", "+Math.max(0,Math.min(255,Math.round(this.b)||0))+(n===1?")":", "+n+")")}function Y(n){return n=Math.max(0,Math.min(255,Math.round(n)||0)),(n<16?"0":"")+n.toString(16)}function An(n,t,e,r){return r<=0?n=t=e=NaN:e<=0||e>=1?n=t=NaN:t<=0&&(n=NaN),new w(n,t,e,r)}function Cn(n){if(n instanceof w)return new w(n.h,n.s,n.l,n.opacity);if(n instanceof C||(n=z(n)),!n)return new w;if(n instanceof w)return n;n=n.rgb();var t=n.r/255,e=n.g/255,r=n.b/255,i=Math.min(t,e,r),f=Math.max(t,e,r),o=NaN,a=f-i,u=(f+i)/2;return a?(t===f?o=(e-r)/a+(e0&&u<1?0:o,new w(o,a,u,n.opacity)}function At(n,t,e,r){return arguments.length===1?Cn(n):new w(n,t,e,r??1)}function w(n,t,e,r){this.h=+n,this.s=+t,this.l=+e,this.opacity=+r}hn(w,At,zn(C,{brighter:function(n){return n=n==null?V:Math.pow(V,n),new w(this.h,this.s,this.l*n,this.opacity)},darker:function(n){return n=n==null?I:Math.pow(I,n),new w(this.h,this.s,this.l*n,this.opacity)},rgb:function(){var n=this.h%360+(this.h<0)*360,t=isNaN(n)||isNaN(this.s)?0:this.s,e=this.l,r=e+(e<.5?e:1-e)*t,i=2*e-r;return new 
x(J(n>=240?n-240:n+120,i,r),J(n,i,r),J(n<120?n+240:n-120,i,r),this.opacity)},displayable:function(){return(0<=this.s&&this.s<=1||isNaN(this.s))&&0<=this.l&&this.l<=1&&0<=this.opacity&&this.opacity<=1},formatHsl:function(){var n=this.opacity;return n=isNaN(n)?1:Math.max(0,Math.min(1,n)),(n===1?"hsl(":"hsla(")+(this.h||0)+", "+(this.s||0)*100+"%, "+(this.l||0)*100+"%"+(n===1?")":", "+n+")")}}));function J(n,t,e){return(n<60?t+(e-t)*n/60:n<180?e:n<240?t+(e-t)*(240-n)/60:t)*255}function Fn(n,t,e,r,i){var f=n*n,o=f*n;return((1-3*n+3*f-o)*t+(4-6*f+3*o)*e+(1+3*n+3*f-3*o)*r+o*i)/6}function St(n){var t=n.length-1;return function(e){var r=e<=0?e=0:e>=1?(e=1,t-1):Math.floor(e*t),i=n[r],f=n[r+1],o=r>0?n[r-1]:2*i-f,a=r()=>n;function $n(n,t){return function(e){return n+e*t}}function Et(n,t,e){return n=Math.pow(n,e),t=Math.pow(t,e)-n,e=1/e,function(r){return Math.pow(n+r*t,e)}}function Kt(n,t){var e=t-n;return e?$n(n,e>180||e<-180?e-360*Math.round(e/360):e):U(isNaN(n)?t:n)}function Pt(n){return(n=+n)==1?Bn:function(t,e){return e-t?Et(t,e,n):U(isNaN(t)?e:t)}}function Bn(n,t){var e=t-n;return e?$n(n,e):U(isNaN(n)?t:n)}const Sn=function n(t){var e=Pt(t);function r(i,f){var o=e((i=X(i)).r,(f=X(f)).r),a=e(i.g,f.g),u=e(i.b,f.b),s=Bn(i.opacity,f.opacity);return function(c){return i.r=o(c),i.g=a(c),i.b=u(c),i.opacity=s(c),i+""}}return r.gamma=n,r}(1);function Dn(n){return function(t){var e=t.length,r=new Array(e),i=new Array(e),f=new Array(e),o,a;for(o=0;oe&&(f=t.slice(e,f),a[o]?a[o]+=f:a[++o]=f),(r=r[0])===(i=i[0])?a[o]?a[o]+=i:a[++o]=i:(a[++o]=null,u.push({i:o,x:Q(r,i)})),e=K.lastIndex;return et&&(e=n,n=t,t=e),function(r){return Math.max(n,Math.min(t,r))}}function $t(n,t,e){var r=n[0],i=n[1],f=t[0],o=t[1];return i2?Bt:$t,u=s=null,h}function h(l){return l==null||isNaN(l=+l)?f:(u||(u=a(n.map(r),t,e)))(r(o(l)))}return h.invert=function(l){return o(i((s||(s=a(t,n.map(r),Q)))(l)))},h.domain=function(l){return arguments.length?(n=Array.from(l,Ct),c()):n.slice()},h.range=function(l){return arguments.length?(t=Array.from(l),c()):t.slice()},h.rangeRound=function(l){return t=Array.from(l),e=Tt,c()},h.clamp=function(l){return arguments.length?(o=l?!0:j,c()):o!==j},h.interpolate=function(l){return arguments.length?(e=l,c()):e},h.unknown=function(l){return arguments.length?(f=l,h):f},function(l,p){return r=l,i=p,c()}}function Gt(){return Ot()(j,j)}function Zt(n,t,e,r){var i=Wn(n,t,e),f;switch(r=Z(r??",f"),r.type){case"s":{var o=Math.max(Math.abs(n),Math.abs(t));return r.precision==null&&!isNaN(f=st(i,o))&&(r.precision=f),Hn(r,o)}case"":case"e":case"g":case"p":case"r":{r.precision==null&&!isNaN(f=ht(i,Math.max(Math.abs(n),Math.abs(t))))&&(r.precision=f-(r.type==="e"));break}case"f":case"%":{r.precision==null&&!isNaN(f=ut(i))&&(r.precision=f-(r.type==="%")*2);break}}return Ln(r)}function Vt(n){var t=n.domain;return n.ticks=function(e){var r=t();return Kn(r[0],r[r.length-1],e??10)},n.tickFormat=function(e,r){var i=t();return Zt(i[0],i[i.length-1],e??10,r)},n.nice=function(e){e==null&&(e=10);var r=t(),i=0,f=r.length-1,o=r[i],a=r[f],u,s,c=10;for(a0;){if(s=jn(o,a,e),s===u)return r[i]=o,r[f]=a,t(r);if(s>0)o=Math.floor(o/s)*s,a=Math.ceil(a/s)*s;else if(s<0)o=Math.ceil(o*s)/s,a=Math.floor(a*s)/s;else break;u=s}return n},n}function Xt(){var n=Gt();return n.copy=function(){return Dt(n,Xt())},mt.apply(n,arguments),Vt(n)}export{Yn as $,At as A,Bn as B,C,cn as D,te as E,St as F,Rt as G,jt as H,On as I,qt as J,Sn as K,Wt as L,ne as M,Tt as N,It as O,Ct as P,Vt as Q,x as R,Ot as S,Dt as T,Kn as U,j as V,Jn as W,Gt as X,Jt as Y,Xt as Z,Yt 
as _,W as a,Zt as a0,X as a1,Ut as a2,Un as b,En as c,ht as d,st as e,Z as f,Ln as g,Hn as h,ft as i,P as j,In as k,dt as l,lt as m,Qt as n,mt as o,ut as p,hn as q,kt as r,zn as s,Wn as t,V as u,I as v,Kt as w,gt as x,xt as y,Q as z}; -//# sourceMappingURL=linear-58a44b5e.js.map diff --git a/spaces/wouaf/WOUAF-Text-to-Image/torch_utils/persistence.py b/spaces/wouaf/WOUAF-Text-to-Image/torch_utils/persistence.py deleted file mode 100644 index 0186cfd97bca0fcb397a7b73643520c1d1105a02..0000000000000000000000000000000000000000 --- a/spaces/wouaf/WOUAF-Text-to-Image/torch_utils/persistence.py +++ /dev/null @@ -1,251 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for pickling Python code alongside other data. - -The pickled code is automatically imported into a separate Python module -during unpickling. This way, any previously exported pickles will remain -usable even if the original code is no longer available, or if the current -version of the code is not consistent with what was originally pickled.""" - -import sys -import pickle -import io -import inspect -import copy -import uuid -import types -import dnnlib - -#---------------------------------------------------------------------------- - -_version = 6 # internal version number -_decorators = set() # {decorator_class, ...} -_import_hooks = [] # [hook_function, ...] -_module_to_src_dict = dict() # {module: src, ...} -_src_to_module_dict = dict() # {src: module, ...} - -#---------------------------------------------------------------------------- - -def persistent_class(orig_class): - r"""Class decorator that extends a given class to save its source code - when pickled. - - Example: - - from torch_utils import persistence - - @persistence.persistent_class - class MyNetwork(torch.nn.Module): - def __init__(self, num_inputs, num_outputs): - super().__init__() - self.fc = MyLayer(num_inputs, num_outputs) - ... - - @persistence.persistent_class - class MyLayer(torch.nn.Module): - ... - - When pickled, any instance of `MyNetwork` and `MyLayer` will save its - source code alongside other internal state (e.g., parameters, buffers, - and submodules). This way, any previously exported pickle will remain - usable even if the class definitions have been modified or are no - longer available. - - The decorator saves the source code of the entire Python module - containing the decorated class. It does *not* save the source code of - any imported modules. Thus, the imported modules must be available - during unpickling, also including `torch_utils.persistence` itself. - - It is ok to call functions defined in the same module from the - decorated class. However, if the decorated class depends on other - classes defined in the same module, they must be decorated as well. - This is illustrated in the above example in the case of `MyLayer`. - - It is also possible to employ the decorator just-in-time before - calling the constructor. 
For example: - - cls = MyLayer - if want_to_make_it_persistent: - cls = persistence.persistent_class(cls) - layer = cls(num_inputs, num_outputs) - - As an additional feature, the decorator also keeps track of the - arguments that were used to construct each instance of the decorated - class. The arguments can be queried via `obj.init_args` and - `obj.init_kwargs`, and they are automatically pickled alongside other - object state. A typical use case is to first unpickle a previous - instance of a persistent class, and then upgrade it to use the latest - version of the source code: - - with open('old_pickle.pkl', 'rb') as f: - old_net = pickle.load(f) - new_net = MyNetwork(*old_obj.init_args, **old_obj.init_kwargs) - misc.copy_params_and_buffers(old_net, new_net, require_all=True) - """ - assert isinstance(orig_class, type) - if is_persistent(orig_class): - return orig_class - - assert orig_class.__module__ in sys.modules - orig_module = sys.modules[orig_class.__module__] - orig_module_src = _module_to_src(orig_module) - - class Decorator(orig_class): - _orig_module_src = orig_module_src - _orig_class_name = orig_class.__name__ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init_args = copy.deepcopy(args) - self._init_kwargs = copy.deepcopy(kwargs) - assert orig_class.__name__ in orig_module.__dict__ - _check_pickleable(self.__reduce__()) - - @property - def init_args(self): - return copy.deepcopy(self._init_args) - - @property - def init_kwargs(self): - return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs)) - - def __reduce__(self): - fields = list(super().__reduce__()) - fields += [None] * max(3 - len(fields), 0) - if fields[0] is not _reconstruct_persistent_obj: - meta = dict(type='class', version=_version, module_src=self._orig_module_src, class_name=self._orig_class_name, state=fields[2]) - fields[0] = _reconstruct_persistent_obj # reconstruct func - fields[1] = (meta,) # reconstruct args - fields[2] = None # state dict - return tuple(fields) - - Decorator.__name__ = orig_class.__name__ - _decorators.add(Decorator) - return Decorator - -#---------------------------------------------------------------------------- - -def is_persistent(obj): - r"""Test whether the given object or class is persistent, i.e., - whether it will save its source code when pickled. - """ - try: - if obj in _decorators: - return True - except TypeError: - pass - return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck - -#---------------------------------------------------------------------------- - -def import_hook(hook): - r"""Register an import hook that is called whenever a persistent object - is being unpickled. A typical use case is to patch the pickled source - code to avoid errors and inconsistencies when the API of some imported - module has changed. - - The hook should have the following signature: - - hook(meta) -> modified meta - - `meta` is an instance of `dnnlib.EasyDict` with the following fields: - - type: Type of the persistent object, e.g. `'class'`. - version: Internal version number of `torch_utils.persistence`. - module_src Original source code of the Python module. - class_name: Class name in the original Python module. - state: Internal state of the object. - - Example: - - @persistence.import_hook - def wreck_my_network(meta): - if meta.class_name == 'MyNetwork': - print('MyNetwork is being imported. 
I will wreck it!') - meta.module_src = meta.module_src.replace("True", "False") - return meta - """ - assert callable(hook) - _import_hooks.append(hook) - -#---------------------------------------------------------------------------- - -def _reconstruct_persistent_obj(meta): - r"""Hook that is called internally by the `pickle` module to unpickle - a persistent object. - """ - meta = dnnlib.EasyDict(meta) - meta.state = dnnlib.EasyDict(meta.state) - for hook in _import_hooks: - meta = hook(meta) - assert meta is not None - - assert meta.version == _version - module = _src_to_module(meta.module_src) - - assert meta.type == 'class' - orig_class = module.__dict__[meta.class_name] - decorator_class = persistent_class(orig_class) - obj = decorator_class.__new__(decorator_class) - - setstate = getattr(obj, '__setstate__', None) - if callable(setstate): - setstate(meta.state) # pylint: disable=not-callable - else: - obj.__dict__.update(meta.state) - return obj - -#---------------------------------------------------------------------------- - -def _module_to_src(module): - r"""Query the source code of a given Python module. - """ - src = _module_to_src_dict.get(module, None) - if src is None: - src = inspect.getsource(module) - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - return src - -def _src_to_module(src): - r"""Get or create a Python module for the given source code. - """ - module = _src_to_module_dict.get(src, None) - if module is None: - module_name = "_imported_module_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - exec(src, module.__dict__) # pylint: disable=exec-used - return module - -#---------------------------------------------------------------------------- - -def _check_pickleable(obj): - r"""Check that the given object is pickleable, raising an exception if - it is not. This function is expected to be considerably more efficient - than actually pickling the object. - """ - def recurse(obj): - if isinstance(obj, (list, tuple, set)): - return [recurse(x) for x in obj] - if isinstance(obj, dict): - return [[recurse(x), recurse(y)] for x, y in obj.items()] - if isinstance(obj, (str, int, float, bool, bytes, bytearray)): - return None # Python primitive types are pickleable. - if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor']: - return None # NumPy arrays and PyTorch tensors are pickleable. - if is_persistent(obj): - return None # Persistent objects are pickleable, by virtue of the constructor check. - return obj - with io.BytesIO() as f: - pickle.dump(recurse(obj), f) - -#---------------------------------------------------------------------------- diff --git a/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/mots_challenge.py b/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/mots_challenge.py deleted file mode 100644 index 191b43842d1e5e6b358ab72fb95594279fe69aae..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/mots_challenge.py +++ /dev/null @@ -1,446 +0,0 @@ -import os -import csv -import configparser -import numpy as np -from scipy.optimize import linear_sum_assignment -from ._base_dataset import _BaseDataset -from .. import utils -from .. 
import _timing -from ..utils import TrackEvalException - - -class MOTSChallenge(_BaseDataset): - """Dataset class for MOTS Challenge tracking""" - - @staticmethod - def get_default_dataset_config(): - """Default class config values""" - code_path = utils.get_code_path() - default_config = { - 'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'), # Location of GT data - 'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'), # Trackers location - 'OUTPUT_FOLDER': None, # Where to save eval results (if None, same as TRACKERS_FOLDER) - 'TRACKERS_TO_EVAL': None, # Filenames of trackers to eval (if None, all in folder) - 'CLASSES_TO_EVAL': ['pedestrian'], # Valid: ['pedestrian'] - 'SPLIT_TO_EVAL': 'train', # Valid: 'train', 'test' - 'INPUT_AS_ZIP': False, # Whether tracker input files are zipped - 'PRINT_CONFIG': True, # Whether to print current config - 'TRACKER_SUB_FOLDER': 'data', # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER - 'OUTPUT_SUB_FOLDER': '', # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER - 'TRACKER_DISPLAY_NAMES': None, # Names of trackers to display, if None: TRACKERS_TO_EVAL - 'SEQMAP_FOLDER': None, # Where seqmaps are found (if None, GT_FOLDER/seqmaps) - 'SEQMAP_FILE': None, # Directly specify seqmap file (if none use seqmap_folder/MOTS-split_to_eval) - 'SEQ_INFO': None, # If not None, directly specify sequences to eval and their number of timesteps - 'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt', # '{gt_folder}/{seq}/gt/gt.txt' - 'SKIP_SPLIT_FOL': False, # If False, data is in GT_FOLDER/MOTS-SPLIT_TO_EVAL/ and in - # TRACKERS_FOLDER/MOTS-SPLIT_TO_EVAL/tracker/ - # If True, then the middle 'MOTS-split' folder is skipped for both. - } - return default_config - - def __init__(self, config=None): - """Initialise dataset, checking that all required files are present""" - super().__init__() - # Fill non-given config values with defaults - self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name()) - - self.benchmark = 'MOTS' - self.gt_set = self.benchmark + '-' + self.config['SPLIT_TO_EVAL'] - if not self.config['SKIP_SPLIT_FOL']: - split_fol = self.gt_set - else: - split_fol = '' - self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol) - self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol) - self.should_classes_combine = False - self.use_super_categories = False - self.data_is_zipped = self.config['INPUT_AS_ZIP'] - - self.output_fol = self.config['OUTPUT_FOLDER'] - if self.output_fol is None: - self.output_fol = self.tracker_fol - - self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER'] - self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER'] - - # Get classes to eval - self.valid_classes = ['pedestrian'] - self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None - for cls in self.config['CLASSES_TO_EVAL']] - if not all(self.class_list): - raise TrackEvalException('Attempted to evaluate an invalid class. 
Only pedestrian class is valid.') - self.class_name_to_class_id = {'pedestrian': '2', 'ignore': '10'} - - # Get sequences to eval and check gt files exist - self.seq_list, self.seq_lengths = self._get_seq_info() - if len(self.seq_list) < 1: - raise TrackEvalException('No sequences are selected to be evaluated.') - - # Check gt files exist - for seq in self.seq_list: - if not self.data_is_zipped: - curr_file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq) - if not os.path.isfile(curr_file): - print('GT file not found ' + curr_file) - raise TrackEvalException('GT file not found for sequence: ' + seq) - if self.data_is_zipped: - curr_file = os.path.join(self.gt_fol, 'data.zip') - if not os.path.isfile(curr_file): - print('GT file not found ' + curr_file) - raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file)) - - # Get trackers to eval - if self.config['TRACKERS_TO_EVAL'] is None: - self.tracker_list = os.listdir(self.tracker_fol) - else: - self.tracker_list = self.config['TRACKERS_TO_EVAL'] - - if self.config['TRACKER_DISPLAY_NAMES'] is None: - self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list)) - elif (self.config['TRACKERS_TO_EVAL'] is not None) and ( - len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)): - self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES'])) - else: - raise TrackEvalException('List of tracker files and tracker display names do not match.') - - for tracker in self.tracker_list: - if self.data_is_zipped: - curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip') - if not os.path.isfile(curr_file): - print('Tracker file not found: ' + curr_file) - raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file)) - else: - for seq in self.seq_list: - curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt') - if not os.path.isfile(curr_file): - print('Tracker file not found: ' + curr_file) - raise TrackEvalException( - 'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename( - curr_file)) - - def get_display_name(self, tracker): - return self.tracker_to_disp[tracker] - - def _get_seq_info(self): - seq_list = [] - seq_lengths = {} - if self.config["SEQ_INFO"]: - seq_list = list(self.config["SEQ_INFO"].keys()) - seq_lengths = self.config["SEQ_INFO"] - - # If sequence length is 'None' tries to read sequence length from .ini files. 
- for seq, seq_length in seq_lengths.items(): - if seq_length is None: - ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini') - if not os.path.isfile(ini_file): - raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file)) - ini_data = configparser.ConfigParser() - ini_data.read(ini_file) - seq_lengths[seq] = int(ini_data['Sequence']['seqLength']) - - else: - if self.config["SEQMAP_FILE"]: - seqmap_file = self.config["SEQMAP_FILE"] - else: - if self.config["SEQMAP_FOLDER"] is None: - seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + '.txt') - else: - seqmap_file = os.path.join(self.config["SEQMAP_FOLDER"], self.gt_set + '.txt') - if not os.path.isfile(seqmap_file): - print('no seqmap found: ' + seqmap_file) - raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file)) - with open(seqmap_file) as fp: - reader = csv.reader(fp) - for i, row in enumerate(reader): - if i == 0 or row[0] == '': - continue - seq = row[0] - seq_list.append(seq) - ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini') - if not os.path.isfile(ini_file): - raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file)) - ini_data = configparser.ConfigParser() - ini_data.read(ini_file) - seq_lengths[seq] = int(ini_data['Sequence']['seqLength']) - return seq_list, seq_lengths - - def _load_raw_file(self, tracker, seq, is_gt): - """Load a file (gt or tracker) in the MOTS Challenge format - - If is_gt, this returns a dict which contains the fields: - [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det). - [gt_dets]: list (for each timestep) of lists of detections. - [gt_ignore_region]: list (for each timestep) of masks for the ignore regions - - if not is_gt, this returns a dict which contains the fields: - [tracker_ids, tracker_classes] : list (for each timestep) of 1D NDArrays (for each det). - [tracker_dets]: list (for each timestep) of lists of detections. 
- """ - - # Only loaded when run to reduce minimum requirements - from pycocotools import mask as mask_utils - - # File location - if self.data_is_zipped: - if is_gt: - zip_file = os.path.join(self.gt_fol, 'data.zip') - else: - zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip') - file = seq + '.txt' - else: - zip_file = None - if is_gt: - file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq) - else: - file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt') - - # Ignore regions - if is_gt: - crowd_ignore_filter = {2: ['10']} - else: - crowd_ignore_filter = None - - # Load raw data from text file - read_data, ignore_data = self._load_simple_text_file(file, crowd_ignore_filter=crowd_ignore_filter, - is_zipped=self.data_is_zipped, zip_file=zip_file, - force_delimiters=' ') - - # Convert data to required format - num_timesteps = self.seq_lengths[seq] - data_keys = ['ids', 'classes', 'dets'] - if is_gt: - data_keys += ['gt_ignore_region'] - raw_data = {key: [None] * num_timesteps for key in data_keys} - - # Check for any extra time keys - current_time_keys = [str(t + 1) for t in range(num_timesteps)] - extra_time_keys = [x for x in read_data.keys() if x not in current_time_keys] - if len(extra_time_keys) > 0: - if is_gt: - text = 'Ground-truth' - else: - text = 'Tracking' - raise TrackEvalException( - text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join( - [str(x) + ', ' for x in extra_time_keys])) - - for t in range(num_timesteps): - time_key = str(t+1) - # list to collect all masks of a timestep to check for overlapping areas - all_masks = [] - if time_key in read_data.keys(): - try: - raw_data['dets'][t] = [{'size': [int(region[3]), int(region[4])], - 'counts': region[5].encode(encoding='UTF-8')} - for region in read_data[time_key]] - raw_data['ids'][t] = np.atleast_1d([region[1] for region in read_data[time_key]]).astype(int) - raw_data['classes'][t] = np.atleast_1d([region[2] for region in read_data[time_key]]).astype(int) - all_masks += raw_data['dets'][t] - except IndexError: - self._raise_index_error(is_gt, tracker, seq) - except ValueError: - self._raise_value_error(is_gt, tracker, seq) - else: - raw_data['dets'][t] = [] - raw_data['ids'][t] = np.empty(0).astype(int) - raw_data['classes'][t] = np.empty(0).astype(int) - if is_gt: - if time_key in ignore_data.keys(): - try: - time_ignore = [{'size': [int(region[3]), int(region[4])], - 'counts': region[5].encode(encoding='UTF-8')} - for region in ignore_data[time_key]] - raw_data['gt_ignore_region'][t] = mask_utils.merge([mask for mask in time_ignore], - intersect=False) - all_masks += [raw_data['gt_ignore_region'][t]] - except IndexError: - self._raise_index_error(is_gt, tracker, seq) - except ValueError: - self._raise_value_error(is_gt, tracker, seq) - else: - raw_data['gt_ignore_region'][t] = mask_utils.merge([], intersect=False) - - # check for overlapping masks - if all_masks: - masks_merged = all_masks[0] - for mask in all_masks[1:]: - if mask_utils.area(mask_utils.merge([masks_merged, mask], intersect=True)) != 0.0: - raise TrackEvalException( - 'Tracker has overlapping masks. 
Tracker: ' + tracker + ' Seq: ' + seq + ' Timestep: ' + str( - t)) - masks_merged = mask_utils.merge([masks_merged, mask], intersect=False) - - if is_gt: - key_map = {'ids': 'gt_ids', - 'classes': 'gt_classes', - 'dets': 'gt_dets'} - else: - key_map = {'ids': 'tracker_ids', - 'classes': 'tracker_classes', - 'dets': 'tracker_dets'} - for k, v in key_map.items(): - raw_data[v] = raw_data.pop(k) - raw_data['num_timesteps'] = num_timesteps - raw_data['seq'] = seq - return raw_data - - @_timing.time - def get_preprocessed_seq_data(self, raw_data, cls): - """ Preprocess data for a single sequence for a single class ready for evaluation. - Inputs: - - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data(). - - cls is the class to be evaluated. - Outputs: - - data is a dict containing all of the information that metrics need to perform evaluation. - It contains the following fields: - [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers. - [gt_ids, tracker_ids]: list (for each timestep) of 1D NDArrays (for each det). - [gt_dets, tracker_dets]: list (for each timestep) of lists of detection masks. - [similarity_scores]: list (for each timestep) of 2D NDArrays. - Notes: - General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps. - 1) Extract only detections relevant for the class to be evaluated (including distractor detections). - 2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a - distractor class, or otherwise marked as to be removed. - 3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain - other criteria (e.g. are too small). - 4) Remove gt dets that were only useful for preprocessing and not for actual evaluation. - After the above preprocessing steps, this function also calculates the number of gt and tracker detections - and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are - unique within each timestep. - - MOTS Challenge: - In MOTS Challenge, the 4 preproc steps are as follow: - 1) There is only one class (pedestrians) to be evaluated. - 2) There are no ground truth detections marked as to be removed/distractor classes. - Therefore also no matched tracker detections are removed. - 3) Ignore regions are used to remove unmatched detections (at least 50% overlap with ignore region). - 4) There are no ground truth detections (e.g. those of distractor classes) to be removed. 
- """ - # Check that input data has unique ids - self._check_unique_ids(raw_data) - - cls_id = int(self.class_name_to_class_id[cls]) - - data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'similarity_scores'] - data = {key: [None] * raw_data['num_timesteps'] for key in data_keys} - unique_gt_ids = [] - unique_tracker_ids = [] - num_gt_dets = 0 - num_tracker_dets = 0 - for t in range(raw_data['num_timesteps']): - - # Only extract relevant dets for this class for preproc and eval (cls) - gt_class_mask = np.atleast_1d(raw_data['gt_classes'][t] == cls_id) - gt_class_mask = gt_class_mask.astype(np.bool) - gt_ids = raw_data['gt_ids'][t][gt_class_mask] - gt_dets = [raw_data['gt_dets'][t][ind] for ind in range(len(gt_class_mask)) if gt_class_mask[ind]] - - tracker_class_mask = np.atleast_1d(raw_data['tracker_classes'][t] == cls_id) - tracker_class_mask = tracker_class_mask.astype(np.bool) - tracker_ids = raw_data['tracker_ids'][t][tracker_class_mask] - tracker_dets = [raw_data['tracker_dets'][t][ind] for ind in range(len(tracker_class_mask)) if - tracker_class_mask[ind]] - similarity_scores = raw_data['similarity_scores'][t][gt_class_mask, :][:, tracker_class_mask] - - # Match tracker and gt dets (with hungarian algorithm) - unmatched_indices = np.arange(tracker_ids.shape[0]) - if gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0: - matching_scores = similarity_scores.copy() - matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = -10000 - match_rows, match_cols = linear_sum_assignment(-matching_scores) - actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps - match_cols = match_cols[actually_matched_mask] - - unmatched_indices = np.delete(unmatched_indices, match_cols, axis=0) - - # For unmatched tracker dets, remove those that are greater than 50% within a crowd ignore region. - unmatched_tracker_dets = [tracker_dets[i] for i in range(len(tracker_dets)) if i in unmatched_indices] - ignore_region = raw_data['gt_ignore_region'][t] - intersection_with_ignore_region = self._calculate_mask_ious(unmatched_tracker_dets, [ignore_region], - is_encoded=True, do_ioa=True) - is_within_ignore_region = np.any(intersection_with_ignore_region > 0.5 + np.finfo('float').eps, axis=1) - - # Apply preprocessing to remove unwanted tracker dets. 
- to_remove_tracker = unmatched_indices[is_within_ignore_region] - data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0) - data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0) - similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1) - - # Keep all ground truth detections - data['gt_ids'][t] = gt_ids - data['gt_dets'][t] = gt_dets - data['similarity_scores'][t] = similarity_scores - - unique_gt_ids += list(np.unique(data['gt_ids'][t])) - unique_tracker_ids += list(np.unique(data['tracker_ids'][t])) - num_tracker_dets += len(data['tracker_ids'][t]) - num_gt_dets += len(data['gt_ids'][t]) - - # Re-label IDs such that there are no empty IDs - if len(unique_gt_ids) > 0: - unique_gt_ids = np.unique(unique_gt_ids) - gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1)) - gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids)) - for t in range(raw_data['num_timesteps']): - if len(data['gt_ids'][t]) > 0: - data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(np.int) - if len(unique_tracker_ids) > 0: - unique_tracker_ids = np.unique(unique_tracker_ids) - tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1)) - tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids)) - for t in range(raw_data['num_timesteps']): - if len(data['tracker_ids'][t]) > 0: - data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(np.int) - - # Record overview statistics. - data['num_tracker_dets'] = num_tracker_dets - data['num_gt_dets'] = num_gt_dets - data['num_tracker_ids'] = len(unique_tracker_ids) - data['num_gt_ids'] = len(unique_gt_ids) - data['num_timesteps'] = raw_data['num_timesteps'] - data['seq'] = raw_data['seq'] - - # Ensure again that ids are unique per timestep after preproc. - self._check_unique_ids(data, after_preproc=True) - - return data - - def _calculate_similarities(self, gt_dets_t, tracker_dets_t): - similarity_scores = self._calculate_mask_ious(gt_dets_t, tracker_dets_t, is_encoded=True, do_ioa=False) - return similarity_scores - - @staticmethod - def _raise_index_error(is_gt, tracker, seq): - """ - Auxiliary method to raise an evaluation error in case of an index error while reading files. - :param is_gt: whether gt or tracker data is read - :param tracker: the name of the tracker - :param seq: the name of the seq - :return: None - """ - if is_gt: - err = 'Cannot load gt data from sequence %s, because there are not enough ' \ - 'columns in the data.' % seq - raise TrackEvalException(err) - else: - err = 'Cannot load tracker data from tracker %s, sequence %s, because there are not enough ' \ - 'columns in the data.' % (tracker, seq) - raise TrackEvalException(err) - - @staticmethod - def _raise_value_error(is_gt, tracker, seq): - """ - Auxiliary method to raise an evaluation error in case of an value error while reading files. - :param is_gt: whether gt or tracker data is read - :param tracker: the name of the tracker - :param seq: the name of the seq - :return: None - """ - if is_gt: - raise TrackEvalException( - 'GT data for sequence %s cannot be converted to the right format. Is data corrupted?' % seq) - else: - raise TrackEvalException( - 'Tracking data from tracker %s, sequence %s cannot be converted to the right format. ' - 'Is data corrupted?' 
% (tracker, seq)) diff --git a/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/data/pix2pix_dataset.py b/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/data/pix2pix_dataset.py deleted file mode 100644 index 511bd83f55be80ae50bb09c4f6c11fafd4cf8214..0000000000000000000000000000000000000000 --- a/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/data/pix2pix_dataset.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -from data.base_dataset import BaseDataset, get_params, get_transform -from PIL import Image -import util.util as util -import os - - -class Pix2pixDataset(BaseDataset): - @staticmethod - def modify_commandline_options(parser, is_train): - parser.add_argument( - "--no_pairing_check", - action="store_true", - help="If specified, skip sanity check of correct label-image file pairing", - ) - return parser - - def initialize(self, opt): - self.opt = opt - - label_paths, image_paths, instance_paths = self.get_paths(opt) - - util.natural_sort(label_paths) - util.natural_sort(image_paths) - if not opt.no_instance: - util.natural_sort(instance_paths) - - label_paths = label_paths[: opt.max_dataset_size] - image_paths = image_paths[: opt.max_dataset_size] - instance_paths = instance_paths[: opt.max_dataset_size] - - if not opt.no_pairing_check: - for path1, path2 in zip(label_paths, image_paths): - assert self.paths_match(path1, path2), ( - "The label-image pair (%s, %s) do not look like the right pair because the filenames are quite different. Are you sure about the pairing? Please see data/pix2pix_dataset.py to see what is going on, and use --no_pairing_check to bypass this." - % (path1, path2) - ) - - self.label_paths = label_paths - self.image_paths = image_paths - self.instance_paths = instance_paths - - size = len(self.label_paths) - self.dataset_size = size - - def get_paths(self, opt): - label_paths = [] - image_paths = [] - instance_paths = [] - assert False, "A subclass of Pix2pixDataset must override self.get_paths(self, opt)" - return label_paths, image_paths, instance_paths - - def paths_match(self, path1, path2): - filename1_without_ext = os.path.splitext(os.path.basename(path1))[0] - filename2_without_ext = os.path.splitext(os.path.basename(path2))[0] - return filename1_without_ext == filename2_without_ext - - def __getitem__(self, index): - # Label Image - label_path = self.label_paths[index] - label = Image.open(label_path) - params = get_params(self.opt, label.size) - transform_label = get_transform(self.opt, params, method=Image.NEAREST, normalize=False) - label_tensor = transform_label(label) * 255.0 - label_tensor[label_tensor == 255] = self.opt.label_nc # 'unknown' is opt.label_nc - - # input image (real images) - image_path = self.image_paths[index] - assert self.paths_match( - label_path, image_path - ), "The label_path %s and image_path %s don't match." 
% (label_path, image_path) - image = Image.open(image_path) - image = image.convert("RGB") - - transform_image = get_transform(self.opt, params) - image_tensor = transform_image(image) - - # if using instance maps - if self.opt.no_instance: - instance_tensor = 0 - else: - instance_path = self.instance_paths[index] - instance = Image.open(instance_path) - if instance.mode == "L": - instance_tensor = transform_label(instance) * 255 - instance_tensor = instance_tensor.long() - else: - instance_tensor = transform_label(instance) - - input_dict = { - "label": label_tensor, - "instance": instance_tensor, - "image": image_tensor, - "path": image_path, - } - - # Give subclasses a chance to modify the final output - self.postprocess(input_dict) - - return input_dict - - def postprocess(self, input_dict): - return input_dict - - def __len__(self): - return self.dataset_size diff --git a/spaces/yeqingmei123/face-test/op/upfirdn2d.cpp b/spaces/yeqingmei123/face-test/op/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/yeqingmei123/face-test/op/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/yikaizhou/my-anything-v3/app.py b/spaces/yikaizhou/my-anything-v3/app.py deleted file mode 100644 index 16e8131a0bbf7b06956e69e2b7758fa01e4eb51f..0000000000000000000000000000000000000000 --- a/spaces/yikaizhou/my-anything-v3/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/anything-v3.0").launch() \ No newline at end of file diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/__init__.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/__init__.py deleted file mode 100644 index ab3c63b5b456a7fb878757e25768a3634f76ae5b..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from fvcore.transforms.transform import Transform, TransformList # order them first -from fvcore.transforms.transform import * -from .transform import * -from .augmentation import * -from .augmentation_impl import * - -__all__ = [k for k in globals().keys() if not k.startswith("_")] - - -from detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/younker/chatgpt-turbo/client/src/components/LoadingSpinner.tsx b/spaces/younker/chatgpt-turbo/client/src/components/LoadingSpinner.tsx deleted file mode 100644 index 74d77ee4d80586c75f2f82979e2724fa7ffb7336..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/src/components/LoadingSpinner.tsx +++ /dev/null @@ -1,33 +0,0 @@ -import clsx from "clsx"; - -type Props = { - className?: string; - size?: number; -}; - -export default function LoadingSpinner(props: Props) { - const size = props.size || 5; - return ( -
                - -
                - ); -} diff --git a/spaces/ysharma/Explore_llamav2_with_TGI/app.py b/spaces/ysharma/Explore_llamav2_with_TGI/app.py deleted file mode 100644 index 50640355428901d5e65e7356622f017e203522e7..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Explore_llamav2_with_TGI/app.py +++ /dev/null @@ -1,231 +0,0 @@ -import json -import gradio as gr -import os -import requests - -hf_token = os.getenv('HF_TOKEN') -api_url = os.getenv('API_URL') -api_url_nostream = os.getenv('API_URL_NOSTREAM') -headers = { - 'Content-Type': 'application/json', -} - -system_message = "\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information." -title = "Llama2 70B Chatbot" -description = """ -This Space demonstrates model [Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) by Meta, a Llama 2 model with 70B parameters fine-tuned for chat instructions. This space is running on Inference Endpoints using text-generation-inference library. If you want to run your own service, you can also [deploy the model on Inference Endpoints](https://ui.endpoints.huggingface.co/). - -🔎 For more details about the Llama 2 family of models and how to use them with `transformers`, take a look [at our blog post](https://huggingface.co/blog/llama2). - -🔨 Looking for lighter chat model versions of Llama-v2? -- 🐇 Check out the [7B Chat model demo](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat). -- 🦊 Check out the [13B Chat model demo](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat). - -Note: As a derivate work of [Llama-2-70b-chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) by Meta, -this demo is governed by the original [license](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI/blob/main/LICENSE.txt) and [acceptable use policy](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI/blob/main/USE_POLICY.md). -""" -css = """.toast-wrap { display: none !important } """ -examples=[ - ['Hello there! 
How are you doing?'], - ['Can you explain to me briefly what is Python programming language?'], - ['Explain the plot of Cinderella in a sentence.'], - ['How many hours does it take a man to eat a Helicopter?'], - ["Write a 100-word article on 'Benefits of Open-Source in AI research'"], - ] - - -# Note: We have removed default system prompt as requested by the paper authors [Dated: 13/Oct/2023] -# Prompting style for Llama2 without using system prompt -# [INST] {{ user_msg_1 }} [/INST] {{ model_answer_1 }} [INST] {{ user_msg_2 }} [/INST] - - -# Stream text -def predict(message, chatbot, system_prompt="", temperature=0.9, max_new_tokens=256, top_p=0.6, repetition_penalty=1.0,): - - if system_prompt != "": - input_prompt = f"[INST] <>\n{system_prompt}\n<>\n\n " - else: - input_prompt = f"[INST] " - - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - - for interaction in chatbot: - input_prompt = input_prompt + str(interaction[0]) + " [/INST] " + str(interaction[1]) + " [INST] " - - input_prompt = input_prompt + str(message) + " [/INST] " - - data = { - "inputs": input_prompt, - "parameters": { - "max_new_tokens":max_new_tokens, - "temperature":temperature, - "top_p":top_p, - "repetition_penalty":repetition_penalty, - "do_sample":True, - }, - } - response = requests.post(api_url, headers=headers, data=json.dumps(data), auth=('hf', hf_token), stream=True) - - partial_message = "" - for line in response.iter_lines(): - if line: # filter out keep-alive new lines - # Decode from bytes to string - decoded_line = line.decode('utf-8') - - # Remove 'data:' prefix - if decoded_line.startswith('data:'): - json_line = decoded_line[5:] # Exclude the first 5 characters ('data:') - else: - gr.Warning(f"This line does not start with 'data:': {decoded_line}") - continue - - # Load as JSON - try: - json_obj = json.loads(json_line) - if 'token' in json_obj: - partial_message = partial_message + json_obj['token']['text'] - yield partial_message - elif 'error' in json_obj: - yield json_obj['error'] + '. Please refresh and try again with an appropriate smaller input prompt.' 
- else: - gr.Warning(f"The key 'token' does not exist in this JSON object: {json_obj}") - - except json.JSONDecodeError: - gr.Warning(f"This line is not valid JSON: {json_line}") - continue - except KeyError as e: - gr.Warning(f"KeyError: {e} occurred for JSON object: {json_obj}") - continue - - -# No Stream -def predict_batch(message, chatbot, system_prompt="", temperature=0.9, max_new_tokens=256, top_p=0.6, repetition_penalty=1.0,): - - if system_prompt != "": - input_prompt = f"[INST] <>\n{system_prompt}\n<>\n\n " - else: - input_prompt = f"[INST] " - - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - - for interaction in chatbot: - input_prompt = input_prompt + str(interaction[0]) + " [/INST] " + str(interaction[1]) + " [INST] " - - input_prompt = input_prompt + str(message) + " [/INST] " - - data = { - "inputs": input_prompt, - "parameters": { - "max_new_tokens":max_new_tokens, - "temperature":temperature, - "top_p":top_p, - "repetition_penalty":repetition_penalty, - "do_sample":True, - }, - } - - response = requests.post(api_url_nostream, headers=headers, data=json.dumps(data), auth=('hf', hf_token)) - - if response.status_code == 200: # check if the request was successful - try: - json_obj = response.json() - if 'generated_text' in json_obj and len(json_obj['generated_text']) > 0: - return json_obj['generated_text'] - elif 'error' in json_obj: - return json_obj['error'] + ' Please refresh and try again with smaller input prompt' - else: - print(f"Unexpected response: {json_obj}") - except json.JSONDecodeError: - print(f"Failed to decode response as JSON: {response.text}") - else: - print(f"Request failed with status code {response.status_code}") - - -def vote(data: gr.LikeData): - if data.liked: - print("You upvoted this response: " + data.value) - else: - print("You downvoted this response: " + data.value) - - -additional_inputs=[ - gr.Textbox("", label="Optional system prompt"), - gr.Slider( - label="Temperature", - value=0.9, - minimum=0.0, - maximum=1.0, - step=0.05, - interactive=True, - info="Higher values produce more diverse outputs", - ), - gr.Slider( - label="Max new tokens", - value=256, - minimum=0, - maximum=4096, - step=64, - interactive=True, - info="The maximum numbers of new tokens", - ), - gr.Slider( - label="Top-p (nucleus sampling)", - value=0.6, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ), - gr.Slider( - label="Repetition penalty", - value=1.2, - minimum=1.0, - maximum=2.0, - step=0.05, - interactive=True, - info="Penalize repeated tokens", - ) -] - -chatbot_stream = gr.Chatbot(avatar_images=('user.png', 'bot2.png'),bubble_full_width = False) -chatbot_batch = gr.Chatbot(avatar_images=('user1.png', 'bot1.png'),bubble_full_width = False) -chat_interface_stream = gr.ChatInterface(predict, - title=title, - description=description, - textbox=gr.Textbox(), - chatbot=chatbot_stream, - css=css, - examples=examples, - cache_examples=True, - additional_inputs=additional_inputs,) -chat_interface_batch=gr.ChatInterface(predict_batch, - title=title, - description=description, - textbox=gr.Textbox(), - chatbot=chatbot_batch, - css=css, - examples=examples, - cache_examples=True, - additional_inputs=additional_inputs,) - -# Gradio Demo -with gr.Blocks() as demo: - - with gr.Tab("Streaming"): - #gr.ChatInterface(predict, title=title, description=description, css=css, examples=examples, cache_examples=True, additional_inputs=additional_inputs,) - 
chatbot_stream.like(vote, None, None) - chat_interface_stream.render() - - with gr.Tab("Batch"): - #gr.ChatInterface(predict_batch, title=title, description=description, css=css, examples=examples, cache_examples=True, additional_inputs=additional_inputs,) - chatbot_batch.like(vote, None, None) - chat_interface_batch.render() - -demo.queue(concurrency_count=75, max_size=100).launch(debug=True) \ No newline at end of file diff --git a/spaces/ysharma/whisper-diarization/README.md b/spaces/ysharma/whisper-diarization/README.md deleted file mode 100644 index 187d884c266f348326725b4e76cd51d7aa77c8fe..0000000000000000000000000000000000000000 --- a/spaces/ysharma/whisper-diarization/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Whisper Speaker Recognition -emoji: 🌖 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -duplicated_from: dwarkesh/whisper-speaker-recognition ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zhuce/vits/transforms.py b/spaces/zhuce/vits/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/zhuce/vits/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - 
inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = 
torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/zideliu/styledrop/timm/models/layers/adaptive_avgmax_pool.py b/spaces/zideliu/styledrop/timm/models/layers/adaptive_avgmax_pool.py deleted file mode 100644 index d2bb9f7216a01209ef1e205c4e127f1b6f593a74..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/models/layers/adaptive_avgmax_pool.py +++ /dev/null @@ -1,119 +0,0 @@ -""" PyTorch selectable adaptive pooling -Adaptive pooling with the ability to select the type of pooling from: - * 'avg' - Average pooling - * 'max' - Max pooling - * 'avgmax' - Sum of average and max pooling re-scaled by 0.5 - * 'avgmaxc' - Concatenation of average and max pooling along feature dim, doubles feature dim - -Both a functional and a nn.Module version of the pooling is provided. - -Hacked together by / Copyright 2020 Ross Wightman -""" -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def adaptive_pool_feat_mult(pool_type='avg'): - if pool_type == 'catavgmax': - return 2 - else: - return 1 - - -def adaptive_avgmax_pool2d(x, output_size=1): - x_avg = F.adaptive_avg_pool2d(x, output_size) - x_max = F.adaptive_max_pool2d(x, output_size) - return 0.5 * (x_avg + x_max) - - -def adaptive_catavgmax_pool2d(x, output_size=1): - x_avg = F.adaptive_avg_pool2d(x, output_size) - x_max = F.adaptive_max_pool2d(x, output_size) - return torch.cat((x_avg, x_max), 1) - - -def select_adaptive_pool2d(x, pool_type='avg', output_size=1): - """Selectable global pooling function with dynamic input kernel size - """ - if pool_type == 'avg': - x = F.adaptive_avg_pool2d(x, output_size) - elif pool_type == 'avgmax': - x = adaptive_avgmax_pool2d(x, output_size) - elif pool_type == 'catavgmax': - x = adaptive_catavgmax_pool2d(x, output_size) - elif pool_type == 'max': - x = F.adaptive_max_pool2d(x, output_size) - else: - assert False, 'Invalid pool type: %s' % pool_type - return x - - -class FastAdaptiveAvgPool2d(nn.Module): - def __init__(self, flatten=False): - super(FastAdaptiveAvgPool2d, self).__init__() - self.flatten = flatten - - def forward(self, x): - return x.mean((2, 3)) if self.flatten else x.mean((2, 3), keepdim=True) - - -class AdaptiveAvgMaxPool2d(nn.Module): - def __init__(self, output_size=1): - super(AdaptiveAvgMaxPool2d, self).__init__() - self.output_size = output_size - - def forward(self, x): - return adaptive_avgmax_pool2d(x, self.output_size) - - -class AdaptiveCatAvgMaxPool2d(nn.Module): - def __init__(self, output_size=1): - super(AdaptiveCatAvgMaxPool2d, self).__init__() - self.output_size = output_size - - def forward(self, x): - return adaptive_catavgmax_pool2d(x, self.output_size) - - -class SelectAdaptivePool2d(nn.Module): - """Selectable global pooling layer with dynamic input kernel size - """ - def __init__(self, 
output_size=1, pool_type='fast', flatten=False): - super(SelectAdaptivePool2d, self).__init__() - self.pool_type = pool_type or '' # convert other falsy values to empty string for consistent TS typing - self.flatten = flatten - if pool_type == '': - self.pool = nn.Identity() # pass through - elif pool_type == 'fast': - assert output_size == 1 - self.pool = FastAdaptiveAvgPool2d(self.flatten) - self.flatten = False - elif pool_type == 'avg': - self.pool = nn.AdaptiveAvgPool2d(output_size) - elif pool_type == 'avgmax': - self.pool = AdaptiveAvgMaxPool2d(output_size) - elif pool_type == 'catavgmax': - self.pool = AdaptiveCatAvgMaxPool2d(output_size) - elif pool_type == 'max': - self.pool = nn.AdaptiveMaxPool2d(output_size) - else: - assert False, 'Invalid pool type: %s' % pool_type - - def is_identity(self): - return self.pool_type == '' - - def forward(self, x): - x = self.pool(x) - if self.flatten: - x = x.flatten(1) - return x - - def feat_mult(self): - return adaptive_pool_feat_mult(self.pool_type) - - def __repr__(self): - return self.__class__.__name__ + ' (' \ - + 'pool_type=' + self.pool_type \ - + ', flatten=' + str(self.flatten) + ')' - diff --git a/spaces/zomehwh/vits-models-pcr/monotonic_align/core.py b/spaces/zomehwh/vits-models-pcr/monotonic_align/core.py deleted file mode 100644 index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/vits-models-pcr/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 \ No newline at end of file diff --git a/spaces/zyj1022/codeffe/index.html b/spaces/zyj1022/codeffe/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/zyj1022/codeffe/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -
-    <div class="card">
-      <h1>Welcome to your static Space!</h1>
-      <p>You can modify this app directly by editing <i>index.html</i> in the <b>Files and versions</b> tab.</p>
-      <p>
-        Also don't forget to check the
-        <a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
-      </p>
-    </div>
-  </body>
-</html>
| Song Title | Singer | Composer |
| --- | --- | --- |
| Alchemy of Souls | Lee Sun-hee | Park Geun-tae |
| Fire and Ice | Baekhyun (EXO) & Taeyeon (Girls' Generation) | Kim Eana & Lee Min-soo |
| Soul Shifter | Han Seung-woo (VICTON) | Kim Do-hoon & Lee Sang-ho |
| Blossom | Chungha & Paul Kim | Choi Jin-seok & Kim Ji-hyang |
| Fate | Gummy & Kim Jae-hwan | Jung Seok-won & Choi Sung-il |
| ... (and 11 more songs) | ... | ... |