diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install Guitar Rig 5 for Free A Step-by-Step Tutorial.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install Guitar Rig 5 for Free A Step-by-Step Tutorial.md
deleted file mode 100644
index e1a8d0005422119ad5adc80cee5e5d3fe2b856d6..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install Guitar Rig 5 for Free A Step-by-Step Tutorial.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
If you are a guitarist who wants to create and experiment with different tones and effects on your computer, you may have heard of Guitar Rig 5. Guitar Rig 5 is a software solution that simulates various amps, cabinets, pedals, and microphones, and lets you mix and match them to create your own custom sound. Guitar Rig 5 can also be used as a standalone application or as a plugin in your DAW.
-Download File 🔗 https://byltly.com/2uKx7R
But how can you get Guitar Rig 5 full download for free? Is it even possible? In this article, we will answer these questions and show you how to download and install Guitar Rig 5 for free on your PC.
-Guitar Rig 5 is a product of Native Instruments, a leading manufacturer of software and hardware for music production and DJing. Guitar Rig 5 was released in 2011 as the successor of Guitar Rig 4, and it has many new features and improvements, such as:
-Guitar Rig 5 is suitable for guitarists of any genre and skill level who want to explore the possibilities of digital sound processing. It can also be used for other instruments such as bass, keyboards, vocals, drums, etc. It can run on any PC that meets the minimum system requirements:
-To download Guitar Rig 5 full version for free, you need to have an account on the Native Instruments website. If you don't have one, you can create one for free by following these steps:
- -Once you have an account on the Native Instruments website, you can download Guitar Rig 5 full version for free by following these steps:
- -Download File ✔ https://imgfil.com/2uy0zJ
Download File ✑ https://imgfil.com/2uxYeC
Download 🔗 https://imgfil.com/2uxZLZ
Temple Run is one of the most popular and addictive games on Android. It is an endless runner game where you have to escape from a horde of evil monkeys while avoiding obstacles and collecting coins. But what if you want to download Temple Run without Play Store? Maybe you don't have access to the Play Store, or you want to try a different version of the game, or you just want to have more control over your app installation. Whatever your reason, there is a way to download Temple Run without Play Store, and it's not too difficult. In this article, we will show you how to do it step by step.
-Temple Run was released in 2011 by Imangi Studios and quickly became a hit among mobile gamers. The game has been downloaded over a billion times and has spawned several sequels and spin-offs. The gameplay is simple but addictive: you control a treasure hunter who has stolen a cursed idol from a temple and must run for his life while being chased by angry monkeys. Along the way, you have to swipe, tilt, and tap your device to turn, jump, slide, and use power-ups. The game features stunning graphics, smooth animations, and catchy sound effects that make you feel like you are in an adventure movie.
-DOWNLOAD — https://jinyurl.com/2uNQK8
While Temple Run is available for free on the Google Play Store, there are some reasons why you might want to download it without using the Play Store. For example:
-Whatever your reason, downloading Temple Run without Play Store is possible and safe if you follow the right steps.
-The first step to download Temple Run without Play Store is to get the APK file of the game. APK stands for Android Package Kit, and it is the file format that Android uses to distribute and install apps. An APK file contains all the code, resources, and metadata that an app needs to run on your device. When you download an app from the Play Store, you are actually downloading an APK file that is then installed on your device. But you can also download APK files from other sources on the web.
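To see that structure for yourself, the short Python sketch below opens a downloaded APK as the ZIP archive it actually is and lists a few of the entries inside (the manifest, compiled code, and resources). The file name is only a placeholder for whatever APK you have on disk.

```python
import zipfile

# Placeholder name -- replace with the APK you actually downloaded.
apk_path = "temple-run.apk"

# An APK is a ZIP archive, so the standard zipfile module can open it.
with zipfile.ZipFile(apk_path) as apk:
    for name in apk.namelist()[:20]:  # the first 20 entries are enough for a quick look
        info = apk.getinfo(name)
        print(f"{name} ({info.file_size} bytes)")
```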
-There are two main ways to get APK files from Google Play URLs. One way is to use a web tool that generates download links for APK files by pasting Google Play URLs. There are many websites that offer this service, such as APKPure, APKMirror, APKCombo, and Evozi. These websites are usually safe and reliable, but you should always check the ratings, reviews, and permissions of the apps before downloading them. Another way is to use an APK extractor app that can extract APK files from any app installed on your device. There are many apps that can do this, such as APK Extractor, ML Manager, and App Backup & Restore. These apps are useful if you want to backup or share the APK files of the apps you already have on your device.
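If the download page publishes a checksum for the file, you can also verify your copy against it before installing. The following is a minimal Python sketch under that assumption; the file name is just a placeholder for whatever APK you saved.

```python
import hashlib

# Placeholder path -- point this at the APK you just downloaded.
apk_path = "temple-run.apk"

# Read the file in chunks and compute its SHA-256 digest.
sha256 = hashlib.sha256()
with open(apk_path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)

print("SHA-256:", sha256.hexdigest())
# Compare this value with the checksum listed on the download page;
# if they differ, the file was corrupted or tampered with.
```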
-Before you can install APK files on your device, you need to enable the option to allow unknown apps on your device. Unknown apps are apps that are not from the Play Store or other trusted sources. By default, Android blocks the installation of unknown apps for security reasons. However, you can change this setting by following these steps:
-Alternatively, you can also enable this option when you try to install an APK file for the first time. You will see a pop-up asking you to allow unknown apps from that source. Tap on Settings and then toggle on the option to Allow from this source.
-How to download temple run on PC without play store
-Temple run apk download for android without play store
-Play temple run online for free in browser without downloading
-Temple run game download for mobile without play store
-Download temple run 2 without play store
-Temple run mod apk download without play store
-Download temple run oz without play store
-Temple run free download for laptop without play store
-Download temple run brave without play store
-Play temple run 2 online for free without downloading
-Download temple run for windows 10 without play store
-Temple run download for iphone without app store
-Download temple run for java phone without play store
-Temple run game download for pc offline without play store
-Download temple run 3 without play store
-Temple run game download for tablet without play store
-Download temple run for mac without play store
-Temple run game download for jio phone without play store
-Download temple run for nokia lumia without play store
-Play temple run online unblocked without downloading
-Download temple run for samsung galaxy without play store
-Temple run game download for blackberry without play store
-Download temple run for fire tablet without play store
-Temple run game download for windows phone without play store
-Download temple run for chromebook without play store
-Temple run game download for kindle fire without play store
-Download temple run for ios without app store
-Temple run game download for android tv without play store
-Download temple run for linux without play store
-Temple run game download for smartwatch without play store
-Download temple run hacked version without play store
-Temple run game download for mi tv without play store
-Download temple run old version without play store
-Temple run game download for ps4 without play store
-Download temple run unlimited coins and gems without play store
-Temple run game download for xbox one without play store
-Download temple run frozen shadows without play store
-Temple run game download for nintendo switch without play store
-Download temple run bluestacks edition without play store
-Temple run game download for vr headset without play store
Once you have downloaded the APK file of Temple Run, you need to locate and install it on your device. You can use a file manager app or an APK installer app to do this. A file manager app is an app that lets you browse and manage the files and folders on your device. You can use any file manager app that you have on your device, such as Files by Google, ES File Explorer, or Solid Explorer. An APK installer app is an app that simplifies the process of installing APK files by scanning your device for them and showing them in a list. You can use any APK installer app that you like, such as Installer, Easy Installer, or SAI (Split APKs Installer). Here are the steps to install APK files using either method:
-Depending on your device and Android version, you may need to accept some permissions or pop-ups before installing the APK file of Temple Run. For example:
-If you don't want to download the APK file of Temple Run directly from your device, you can also transfer it from your computer to your device via USB. Here are the steps to do this:
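If you would rather drive that last step from the computer itself, one common option is to sideload the APK over USB with adb. The Python sketch below simply wraps that command; it assumes the Android platform-tools (adb) are installed and on your PATH, USB debugging is enabled on the phone, and the file name is only a placeholder.

```python
import subprocess

# Placeholder path -- the APK you copied to (or downloaded on) your computer.
apk_path = "temple-run.apk"

# "adb install -r" sideloads the APK onto the USB-connected device,
# reinstalling it if an older copy is already present.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

# adb prints "Success" on a completed install; errors go to stderr.
print(result.stdout.strip() or result.stderr.strip())
```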
-In conclusion, downloading Temple Run without Play Store is not a difficult task if you follow the steps above. You just need to get the APK file of Temple Run from a web tool or an APK extractor app, enable unknown apps on your device, and install the APK file using a file manager or an APK installer app. This way, you can enjoy Temple Run without Play Store and have more control over your app installation and updates. Downloading Temple Run without Play Store is also safe and legal, as long as you download the APK file from a trusted source and do not modify or distribute it without permission. However, you should always be careful when installing unknown apps on your device, as they may contain malware or viruses that can harm your device or data. Always check the ratings, reviews, and permissions of the apps before downloading them, and scan them with an antivirus app if possible.
-Here are some of the most frequently asked questions and answers about downloading Temple Run without Play Store:
| Question | Answer |
| --- | --- |
| Can I download Temple Run without Play Store on any Android device? | Yes, you can download Temple Run without Play Store on any Android device that supports the game's minimum requirements. The game requires Android 4.1 or higher and at least 50 MB of free space. |
| Can I update Temple Run without Play Store? | Yes, you can update Temple Run without Play Store by downloading and installing the latest APK file of the game from a web tool or an APK extractor app. However, you will not receive automatic notifications when a new update is available, so you will have to check manually. |
| Can I play Temple Run offline without Play Store? | Yes, you can play Temple Run offline without Play Store, as the game does not require an internet connection to run. However, you will not be able to access some features that require an internet connection, such as leaderboards, achievements, and cloud save. |
| Can I restore my progress in Temple Run without Play Store? | Yes, you can restore your progress in Temple Run without Play Store by using a backup app or a cloud service. You can use a backup app such as Helium or Titanium Backup to backup and restore your app data on your device. You can also use a cloud service such as Google Drive or Dropbox to sync and restore your app data across devices. |
| Can I get banned from Temple Run for downloading it without Play Store? | No, you will not get banned from Temple Run for downloading it without Play Store, as long as you do not use any cheats, hacks, or mods that violate the game's terms of service. Downloading Temple Run without Play Store is not illegal or unethical, as long as you respect the rights of the developers and do not distribute or modify the APK file without permission. |
If you like skateboarding, you will love True Skate. True Skate is a mobile game that simulates the experience of real-world skateboarding. You can perform tricks, explore skate parks, customize your skater and board, edit and share your replays, and more. In this article, we will tell you what True Skate is, how to download it, and why you should play it.
-True Skate is a game developed by True Axis, an Australian company that specializes in physics-based games. True Skate was released in 2012 and has since become one of the most popular and acclaimed skateboarding games on mobile devices. It is the official game of Street League Skateboarding, the world's premier competitive skateboarding series.
-Download File ☆☆☆ https://bltlly.com/2v6MUO
True Skate has many features that make it stand out from other skateboarding games. Here are some of them:
-True Skate uses your fingers as the feet on the board. You can move, drag, tap, and swipe your finger to make the board react exactly as you would expect. You can also use a gamepad for more precise control. The game has a realistic physics system that takes into account the position, direction, and force of your input. This means that every trick is possible with true board control.
-True Skate comes with a single skatepark, the Underpass, which has ledges, stairs, rails, bowls, half pipes, and quarter pipes. You can also unlock 10 fantasy parks with bolts, the in-game currency. In addition, you can buy more than 20 real-world spots as in-app purchases. These include famous locations such as The Berrics, SPoT, Love Park, MACBA, and the Street League courses.
-True Skate is all about nailing the perfect line. You can record your runs and edit them with different camera angles and effects. You can also insert keyframes to blend between cams. You can choose from preset cams or create your own custom cam with options such as FOV, distortion, distance, height, pitch, pan, yaw, and orbit. You can also use a tripod camera with automatic, fixed, or follow modes. Once you are happy with your replay, you can share it with other players or on social media.
-True Skate also has a DIY mode that lets you create your own skatepark with objects that you can spawn and multiply. You can also unlock new objects by playing or by buying them in the store. You can skate in your own park or share it with other players. You can also compete on global leaderboards or challenge your friends to games of S.K.A.T.E or SANDBOX mode.
- -True Skate is available for Android and iOS devices. Here is how to download it:
-Playing True Skate can help you in many ways, such as:
-True Skate can help you learn new tricks, improve your technique, and master your balance. You can practice in different environments, with different obstacles, and at different speeds. You can also watch the replays of other players or professionals and learn from their moves. You can also use True Skate as a tool to visualize your tricks before trying them in real life.
-True Skate lets you customize your skater and board with various options. You can also create your own skatepark with the DIY mode and share it with others. You can also edit and share your replays with different cams and effects. You can show your skills, creativity, and style to the world.
-True Skate is a fun and addictive game that will keep you entertained for hours. You can explore different skate parks and spots, complete missions and achievements, compete in leaderboards and challenges, and play with your friends. You can also set your own goals and challenge yourself to improve your performance.
-True Skate is a game that every skateboarding lover should try. It is a realistic, immersive, and enjoyable simulation of real-world skateboarding. You can download the True Skate app for Android or iOS devices and start playing right away. You will not regret it.
-
-
-def styled_error(error):
-    return f"<p style='color: red; font-size: 20px; text-align: center;'>{error}</p>"
-
-
-def styled_warning(warn):
-    return f"<p style='color: orange; font-size: 20px; text-align: center;'>{warn}</p>"
-
-
-def styled_message(message):
-    return f"<p style='color: green; font-size: 20px; text-align: center;'>{message}</p>"
-
-
-def has_no_nan_values(df, columns):
-    return df[columns].notna().all(axis=1)
-
-
-def has_nan_values(df, columns):
-    return df[columns].isna().any(axis=1)
-
-
-def is_model_on_hub(model_name: str, revision: str) -> bool:
-    try:
-        AutoConfig.from_pretrained(model_name, revision=revision, trust_remote_code=False)
-        return True, None
-
-    except ValueError:
-        return (
-            False,
-            "needs to be launched with `trust_remote_code=True`. For safety reason, we do not allow these models to be automatically submitted to the leaderboard.",
-        )
-
-    except Exception as e:
-        print(f"Could not get the model config from the hub.: {e}")
-        return False, "was not found on hub!"
\ No newline at end of file
diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/__init__.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/__init__.py
deleted file mode 100644
index 8b2a0eea190658f294d0a49363ea28543087bdf6..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .unet_adaptive_bins import UnetAdaptiveBins
diff --git a/spaces/bioriAsaeru/text-to-voice/Gta Dubai City Game Setup For Pc Full Version Comparison with Other GTA Games.md b/spaces/bioriAsaeru/text-to-voice/Gta Dubai City Game Setup For Pc Full Version Comparison with Other GTA Games.md
deleted file mode 100644
index 9c0016f88cd32399debf140c96d47e686304eeec..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Gta Dubai City Game Setup For Pc Full Version Comparison with Other GTA Games.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-GTA Vice City is developed by Rockstar Leeds and presented by Rockstar Games. This game is based on one guy who lives in vice city. He knows all the little rats of the criminal world. He wants to conquer this city and wants to become a member of the mafia gang. The storyline of the game is really amazing. The best thing about this Best mini militia old version 4.3 5 downloads is that this is an open-world game. You can go anywhere like a normal person. You can steal cars and buy ammo and guns.
-Download File ✓ https://urloso.com/2uyRsh
The city is so huge. The map of the game is really amazing. It is really easy to remember the path and one thing that amazes me is that cars have working radios and they really work like real ones. There are more than a hundred channels. In other words, this game is really interactive and the best real imaginary world created. It is truly interactive and fun to play. The missions included in the game are really amazing. Another game that you will like to play is GTA vice city download setup. If You have a mobile phone.
-No one can like slow streaming while playing any games. In this game you can feel a better experience, if you grand theft auto vice city free game download you can feel it has faster loading times and fewer slowdowns. Are you find the best horror game for pc.
-In order to install the game, you should have at least 64 GB of free space on your system. Then download the Windows 7/8/10 operating system and then GTA vice city game download for pc. After the installation is complete. Then download the Windows 7/8/10 operating system and then GTA vice city free download. After the installation is complete, you should restart your system.
-Welcome to the 1980s. From the decade of big hair, excess and pastel suits comes a story of one man's rise to the top of the criminal pile. Vice City is a huge urban sprawl ranging from the beach to the swamps and the glitz to the ghetto, and is the most varied, complete and alive digital city ever created. Combining non-linear gameplay with a character driven narrative, you arrive in a town brimming with delights and degradation and are given the opportunity to take it over as you choose.
-Download GTA Vice City game PC free on windows 7/8/10 only from our website without any kind of tension. GTA Vice City is an action installment where our player has to fight with one of the greatest warriors of all time in Gameplay. We will always provide working creations as you people already know. There are many other fake websites, which are providing fake links for this high class series and our dear admins are working very hard. Solving some puzzles will let you win from our hero enemy that is the only way to victory. Join the battle and lead your role with your friends help all of them because they will also give you full support when you need them.
-A video game can be banned in Germany if it is confiscated by court orders because it violates a section of the Strafgesetzbuch (criminal code). Private possession (and thus playing it) and acquisition (such as downloading a demo from the Internet) are still legal, but any dissemination is not. The seller would break the law if a sale took place, not the buyer. However, on 10 December 2002, an "Oberlandesgericht" (higher regional court) in Hamm decided that a single sale of a single copy does not qualify as dissemination.[45] Unlike indexing by the BPjM, which restricts the sale of all content-equal versions. Versions that are confiscated are enumerated in the court order. Being put on the index by the BPjM or, since April 1, 2003, being refused a rating by the USK, does not equal a ban. Rather, it imposes strict trade restrictions on the title. While only very few games have been confiscated, the list of indexed games is very long.[46]
-PUBG Mobile (excluding versions released exclusively in India) is banned because of extreme violence. The move came after a direction from the states of Gujarat and Jammu and Kashmir seeking a ban on the game, as it was claimed to affect the minds of youths. It was banned in the cities of Ahmedabad, Surat, Vadodara, Bhavnagar and Rajkot of Gujarat, as well as all of Jammu and Kashmir.[70] Players have been prosecuted for playing the game.[71] The game was later completely banned due to mishandling of data on 2 September 2020.[72][73]
-Video games are rarely banned in Japan, and it holds the place as one of the top video game producers in the world.[88] However, for some games, usually western, they may edit or censor their games if they appear offensive to Japan; an example being the Japanese release of Fallout 3. "The Power of the Atom" quest was edited to relieve concerns about atomic detonation in inhabited areas and the Fat Man weapon was renamed to the Nuka Launcher due to its relation to the real historic event.[89] Another example is the Japanese version of Crash Bandicoot 2: Cortex Strikes Back in which a death animation that has Crash squashed into a head and feet was altered due to its resemblance to the Kobe child murders. Japan's Spike removed all references to Kim Jong-il and North Korea in Homefront, as well.[90] Resident Evil 4, Call of Duty: Black Ops, Bulletstorm, Gears of War 3, Grand Theft Auto V, Dead Island, Metal Gear Rising: Revengeance and numerous other violent titles[citation needed], distributed physically and digitally, were heavily edited for excessive violence, but only on the localization level; the games can still be played if the locale is switched from Japanese to English. The Mortal Kombat series was subsequently banned in Japan, including its newest release, due to heavy amounts of violence[citation needed]. On 13 March 2019, the sales of Judgment had stopped producing future sales in Japan, following Pierre Taki's arrest on suspicion of cocaine use. As a result, Sega had replaced both the voice actor and the character model having been subsequently removed.[citation needed] As of November 2022, video game The Callisto Protocol has been banned in Japan. CERO would not be rating due to the game's violent content and the developer refused to make the necessary changes.[citation needed]
- -Media in the United States and Europe have incorrectly reported that Call of Duty: Modern Warfare 2, which features a storyline in which Russian ultra-nationalists take control of the country and invade the United States, was banned in Russia. Activision called these reports "erroneous".[117] Instead, a censored version of the game was published, omitting the controversial "No Russian" level.[citation needed] This also presumably[original research?] prevented the game from being released on consoles in Russian, with only a PC version officially available.[citation needed]
-Most banned games can be found in many stores due to a lack of government enforcement of bans (often at a substantial price). However, not all major stores will stock banned titles.[122] The Last of Us Part II is banned due to homosexual-related content.[123] Red Dead Redemption 2 was initially banned due to nudity, prostitution, violence, and cruelty. A modified version of the game was launched on May 7, 2020.[124]
-Singapore has banned games in the past and still occasionally does (including a ban on arcades nationwide from 1983 to the 1990s). With the implementation of the Video Game Classification in 2008 by the Media Development Authority, most games are widely available for purchase to their respective age group, such as those containing full frontal nudity or strong graphic violence under an "M18" rating. Games that were previously banned such as Mass Effect were re-rated either "Age Advisory" or "M18" after the implementation of the classification system.
-Though on March 10, 2015, the Turkish Ministry of Family and Social Services recommended a ban on Minecraft, specifically after online trolls sent videos (Made to make it appear as if it was featured in the original version of the game) of a modded Minecraft session featuring mods that included violence "not suited for such a game aged for young children" to big media outlets and television channels in Turkey, which then they made news covering the sent reports, and later catching the attention of the ministry and parents, resulting in such a recommendation. [134][135]
-Additionally, since August 2008, all video game titles of the Grand Theft Auto series have been completely banned in Thailand,[139] because of a case where an 18-year-old Thai player supposedly influenced by Grand Theft Auto killed a taxi driver from Bangkok.[140] The ban, however, does not extend to the digital PC versions of Grand Theft Auto V.[141]
-The car driving game named "City Car Driving" is a new car simulator, designed to help users experience car driving in а big city, the countryside and in different conditions or go just for a joy ride. Special stress in the "City Car Driving" simulator has been laid on a variety of different road situations and realistic car driving.
-The grand theft auto vice city is a game which was released in 2002. The name looks quite familiar because it is one of the parts of the grand theft auto game. For the introduction of this game, it is explained that grand theft auto vice city is just like the new pubg and the fortnite game.
-The controls are also provided to the player so they may move accordingly and kill their enemies. This is a PC game which means you are only able to play this game in your desktop or laptop. It is not available in the mobile version but if you search a bit, you might find a mobile version of this game available at any store.
aaccfb2cb3Infowood 1992 Enterprise is a professional software for designing and planning interior spaces, such as kitchens, bathrooms, closets, offices and more. It allows you to create realistic 3D models, plans, elevations and perspectives of your projects, as well as generate cutting lists and costing lists of the products you use. Infowood 1992 Enterprise is compatible with Windows operating systems and requires a minimum of 4 GB of RAM and 10 GB of free disk space.
-If you want to try out Infowood 1992 Enterprise for free, you can download a trial version from the official website of Infowood Technologies[^1^]. The trial version is valid for 30 days and includes all the features and libraries of the full version. To download the trial version, follow these steps:
-Download ->>->>->> https://urloso.com/2uyS3n
You can now use Infowood 1992 Enterprise for free for 30 days. You can access all the libraries of furniture, materials, accessories and appliances from various brands, as well as create your own custom libraries. You can also import and export files in various formats, such as DWG, DXF, PDF, JPG, PNG and more. You can also print your projects or share them online with your clients or colleagues.
-If you want to purchase the full version of Infowood 1992 Enterprise after the trial period expires, you can do so from the website of Infowood Technologies[^3^]. The full version costs 2.000 euros and includes all the libraries of the system, as well as free updates and technical support. You can also choose from different payment methods, such as credit card, bank transfer or PayPal.
-Infowood 1992 Enterprise is a powerful and user-friendly software that can help you design and plan any interior space with ease and accuracy. Whether you are a professional designer, a manufacturer or a hobbyist, you can benefit from its features and functions. Download it today and see for yourself!
- -If you want to learn more about Infowood 1992 Enterprise and its features, you can visit the website of Infowood Technologies and browse through the tutorials, videos, manuals and FAQs. You can also contact the customer service team via phone, email or chat if you have any questions or issues. They are available from Monday to Friday, from 9:00 a.m. to 5:00 p.m. (GMT+2).
-Infowood 1992 Enterprise is not the only software that Infowood Technologies offers. You can also check out their other products, such as 1992 Professional, which is a simpler and cheaper version of 1992 Enterprise, suitable for smaller projects and budgets. Or you can try out their online platform, Infowood Cloud, which allows you to access your projects from any device and collaborate with other users in real time.
-Infowood Technologies is a leading company in the field of interior design software, with more than 25 years of experience and thousands of satisfied customers worldwide. They are constantly innovating and improving their products to meet the needs and expectations of their users. They also offer customized solutions for specific markets and industries, such as hotels, restaurants, schools, hospitals and more.
- -Don't miss this opportunity to download and install Infowood 1992 Enterprise for free and discover its amazing capabilities. You will be impressed by how easy and fun it is to design and plan your interior spaces with this software. Whether you want to create a dream kitchen, a cozy bathroom, a functional office or a stylish living room, Infowood 1992 Enterprise can help you achieve it. Download it now and unleash your creativity!
d5da3c52bfDownload ✔ https://tinurli.com/2uwi8Z
This RK Kanodia GATE EE all-volumes PDF book is one of the best SSC, Railway, and UPSC study materials. In addition, GATE MCQ Electrical Engineering by RK Kanodia PDF is available for free download. Government job examinations require the best Kanodia GATE EE notes for preparation.
However, the RK Kanodia GATE EE all-volumes PDF is very important for cracking competitive exams. Similarly, at exampdfnotes.com we share RK Kanodia Electrical for the GATE exam and SSC exams. Firstly, the GATE by RK Kanodia PDF ebook download is useful for the GATE, IES/PSUs and RRB exams. Secondly, Kanodia GATE can help in Union Public Service Commission (UPSC) exam preparation.
-Download ••• https://tinurli.com/2uwjEy
Please do not post these images on public boards, groups, or clubs. If you want to share, post a link to our site (www.mukiskitchen.com) but do not link directly to the free pages. Please allow us to retain some form of control over our creative work.
-DOWNLOAD ->->->-> https://tinurli.com/2uwi1y
Download Zip ⚡ https://tinurli.com/2uwkMl
Movies about photography can be informative and inspiring. Documentary movies offer insight into artists and their work that you would otherwise never get. Whether you are a budding fashion photographer or want to travel the far-flung unseen reaches of the world, photography movies open a window on the lifestyle, the art, and the trials that go with it.
-Download File ===> https://tinurli.com/2uwk8N
Download File → https://tinurli.com/2uwkkP
Do you love playing games that are simple yet challenging, fun yet competitive, and relaxing yet exciting? If yes, then you should try agar.io mod apk, a game that will keep you hooked for hours. In this article, we will tell you everything you need to know about this game, including what it is, what it offers, and how to get it on your device. So, let's dive in!
-Download File ✔ https://urlca.com/2uOcvN
Agar.io is a popular online game that was released in 2015 by Matheus Valadares. The game is inspired by the biological phenomenon of cell division and growth. The game involves controlling a cell that can move around a map and eat other cells to grow bigger. The game has two main modes: free-for-all and teams. In free-for-all mode, you can play solo or with friends and compete with other players from around the world. In teams mode, you can join one of three teams (red, blue, or green) and cooperate with your teammates to dominate the map.
-Agar.io mod apk is a modified version of the original game that gives you some extra features and advantages. For example, with agar.io mod apk, you can get unlimited DNA, which is the currency of the game that allows you to buy skins and boosters. You can also unlock all the skins and settings that are otherwise locked in the original game. Moreover, you can play the game without any ads or interruptions.
-There are many reasons why you should play agar.io mod apk. Here are some of them:
-agar.io mod apk unlimited DNA 2022
-agar.io mod apk latest version 2022 download
-agar.io mod apk hack 2022 free
-agar.io mod apk no ads 2022
-agar.io mod apk android 2022
-agar.io mod apk ios 2022
-agar.io mod apk offline 2022
-agar.io mod apk online 2022
-agar.io mod apk premium 2022
-agar.io mod apk pro 2022
-agar.io mod apk unlocked 2022
-agar.io mod apk vip 2022
-agar.io mod apk zoom 2022
-agar.io mod apk skins 2022
-agar.io mod apk bots 2022
-agar.io mod apk coins 2022
-agar.io mod apk money 2022
-agar.io mod apk size 2022
-agar.io mod apk speed 2022
-agar.io mod apk split 2022
-agar.io mod apk cheat 2022
-agar.io mod apk fun 2022
-agar.io mod apk easy 2022
-agar.io mod apk hard 2022
-agar.io mod apk challenge 2022
-agar.io mod apk custom 2022
-agar.io mod apk editor 2022
-agar.io mod apk generator 2022
-agar.io mod apk installer 2022
-agar.io mod apk maker 2022
-agar.io mod apk new update 2022
-agar.io mod apk old version 2022
-agar.io mod apk original 2022
-agar.io mod apk private server 2022
-agar.io mod apk public server 2022
-agar.io mod apk reddit 2022
-agar.io mod apk review 2022
-agar.io mod apk tutorial 2022
-agar.io mod apk tips and tricks 2022
-agar.io mod apk unlimited everything 2022
-download agar io hack/mod/apk for android/ios/pc/windows/mac/linux in 2022
-how to install/play/use/activate/run/get/agar io hack/mod/apk in/on/for/with/without/root/jailbreak/device/computer/phone/tablet in/before/after/around/since/when/during/while/by/at/from/to/until/till/as of/in the year of/in the month of/in the day of/in the hour of/in the minute of/in the second of/in the season of/in the quarter of/in the week of/in the weekend of/in the morning of/in the afternoon of/in the evening of/in the night of/in the spring of/in the summer of/in the autumn of/in the winter of January/February/March/April/May/June/July/August/September/October/November/December/Monday/Tuesday/Wednesday/Thursday/Friday/Saturday/Sunday/New Year's Day/New Year's Eve/Valentine's Day/St. Patrick's Day/Easter/Mother's Day/Father's Day/Halloween/Thanksgiving Day/Christmas Day/Christmas Eve/Hanukkah/Kwanzaa/Diwali/Eid al-Fitr/Eid al-Adha/Rosh Hashanah/Yom Kippur/Passover/Ramadan/Vesak/Lunar New Year/Holi/Songkran/Cinco de Mayo/Bastille Day/Carnival/Mardi Gras/Labor Day/Memorial Day/Veterans Day/Independence Day (US)/Canada Day/Australia Day/National Day (China)/Republic Day (India)/Waitangi Day (New Zealand)/Anzac Day (Australia and New Zealand)/Constitution Day (South Korea)/Children's Day (Japan)/Victory Day (Russia)/Africa Day/Nelson Mandela Day/Human Rights Day/International Women's Day/International Men's Day/International Children's Day/International Youth Day/International Literacy Day/International Peace Day/International Mother Language Day/World Water Day/World Health Day/World Environment Day/World Food Day/World AIDS Day in/before/during/since/from/to/until/till/as of/by/at/from/to/until/till/as of/because/because of/due to/thanks to/despite/in spite of/regardless of/no matter what/however/but/yet/still/however/even though/even if/even when/even so/even though/even then/even now/even here/even there/even then/even so/even though/even if/even when/even so/even then/even now/even here/even there/even then/even so.
The game has colorful and attractive graphics that will appeal to your eyes. The game uses different colors to represent different cells, teams, viruses, pellets, etc. The game also has various backgrounds and themes that change according to the time of day, season, or event.
-The game has simple and intuitive controls that are easy to use. You can control your cell by swiping on the screen or using a virtual joystick. You can also split your cell by tapping on the screen or using a button. You can also eject some mass by holding on the screen or using another button.
-The game has challenging and competitive gameplay that will test your skills and strategy. The game has different modes, levels, maps, and rules that will keep you entertained and challenged. The game also has leaderboards, achievements, and rewards that will motivate you to play more and improve your rank.
-The game has online and offline modes that will suit your preferences and needs. You can play the game online with other players from around the world or offline with bots or yourself. You can also switch between the modes anytime you want.
-The game has customizable skins and settings that will allow you to personalize your game experience. You can choose from hundreds of skins that range from animals, flags, emojis, memes, celebrities, etc. You can also create your own skin by uploading an image or drawing on the screen. You can also adjust the settings of the game such as the sound, music, speed, zoom, etc.
-To download and install agar.io mod apk, you need to enable unknown sources on your device. This will allow you to install apps that are not from the official app store. To do this, go to your device settings, then security, then unknown sources, and turn it on.
-Next, you need to download the apk file from a trusted source. You can find many websites that offer agar.io mod apk for free, but be careful of fake or malicious links. We recommend you to use this link to download the latest version of agar.io mod apk safely and securely.
-Finally, you need to install the apk file and launch the game. To do this, locate the downloaded file on your device and tap on it. Follow the instructions on the screen to complete the installation. Once done, open the game and enjoy!
-In conclusion, agar.io mod apk is a fun and addictive game of cells that you should try. It has colorful and attractive graphics, simple and intuitive controls, challenging and competitive gameplay, online and offline modes, customizable skins and settings, and unlimited DNA. It is easy to download and install on your device with just a few steps.
-So, what are you waiting for? Download agar.io mod apk now and join the millions of players who are having a blast with this game. You will not regret it!
-Instagram is one of the most popular social media platforms in the world, with over 1 billion monthly active users. It is a great place to share your photos, videos, stories, and reels with your friends, family, and fans. However, it is not easy to grow your Instagram account and gain more followers, especially if you are new or have a niche audience. That's why some people resort to using Instagram followers APK, a modded application that claims to hack free followers for your Instagram account. But what is Instagram followers APK, and how does it work? Is it safe and legal to use? What are the pros and cons of using it? In this article, we will answer all these questions and more.
-Download Zip · https://urlca.com/2uOatT
Instagram followers APK is a modified version of the official Instagram app that allows you to get free and unlimited followers for your Instagram account. It is also known as Instagram followers mod APK, Instagram followers hack APK, or Instagram followers generator APK. It is not available on the Google Play Store or the App Store, but you can download it from third-party websites or sources.
-Instagram followers are important for several reasons. First, they help you increase your reach and engagement on the platform, which can boost your visibility and exposure. Second, they help you build your brand and reputation, which can attract more customers and opportunities. Third, they help you monetize your account, as you can earn money from sponsored posts, affiliate marketing, or selling your own products or services.
-Instagram followers APK works by using bots or scripts to generate fake accounts that follow your Instagram account. These accounts are usually created with random names, photos, and bios, and they do not interact with your posts or stories. They are only meant to increase your follower count and make your account look more popular and influential.
-If you want to try using Instagram followers APK, here are the steps you need to follow:
-download instagram followers mod apk
-download instagram followers hack apk
-download instagram followers generator apk
-download instagram followers booster apk
-download instagram followers increaser apk
-download instagram followers pro apk
-download instagram followers plus apk
-download instagram followers premium apk
-download instagram followers unlimited apk
-download instagram followers free apk
-download real instagram followers apk
-download active instagram followers apk
-download genuine instagram followers apk
-download organic instagram followers apk
-download targeted instagram followers apk
-download high quality instagram followers apk
-download 1000 instagram followers apk
-download 5000 instagram followers apk
-download 10000 instagram followers apk
-download 50000 instagram followers apk
-download get more instagram followers apk
-download get instant instagram followers apk
-download get free instagram followers apk
-download get real instagram followers apk
-download get active instagram followers apk
-download turbo instagram followers apk
-download fast instagram followers apk
-download easy instagram followers apk
-download quick instagram followers apk
-download best instagram followers apk
-download neutrino+ instagram followers apk
-download magic liker for instagram followers apk
-download royal likes for instagram followers apk
-download fame boom for instagram followers apk
-download follower analyzer for instagram apk
-download follower tracker for instagram apk
-download follower insight for instagram apk
-download follower manager for instagram apk
-download follower assistant for instagram apk
-download follower tool for instagram apk
-how to download instagram followers apk
-where to download instagram followers apk
-why to download instagram followers apk
-what is the best app to download instagram followers apk
-what is the safest app to download instagram followers apk
-what is the easiest app to download instagram followers apk
-what is the fastest app to download instagram followers apk
-what is the cheapest app to download instagram followers apk
As mentioned earlier, Instagram followers APK is not available on the official app stores, so you need to find a trustworthy website or source that offers it. You can search online for "download instagram followers apk" or "instagram followers mod apk" and check the reviews and ratings of the results. You can also ask for recommendations from other users who have used it before.
-Once you find a reliable source, you need to download the APK file to your device. Make sure you have enough storage space and a stable internet connection. Before installing the file, you need to enable the "Unknown sources" option in your device settings, as this will allow you to install apps from outside the app stores. Then, tap on the file and follow the instructions to install it.
-After installing the app, you need to log in with your Instagram account. You can use your username and password, or you can use your Facebook or Google account to sign in. However, we recommend that you use a secondary or fake account, as using your main account may put it at risk of being hacked or banned.
-After logging in, you will see a dashboard where you can choose the number of followers you want to get. The app may offer different packages or plans, such as 100, 500, 1000, or more followers. You can also enter a custom number if you want. Some apps may require you to complete some tasks or surveys before you can get the followers, while others may not.
-Once you choose the number of followers you want, you need to wait for the delivery. The app will start generating and sending fake accounts to follow your Instagram account. The delivery time may vary depending on the number of followers you requested and the speed of the app. It may take from a few minutes to a few hours or even days.
-Using Instagram followers APK may seem tempting and appealing, as it offers some benefits, such as:
-The main benefit of using Instagram followers APK is that it gives you free and unlimited followers for your Instagram account. You don't need to spend any money or time to get them. You can get as many followers as you want, whenever you want.
-Another benefit of using Instagram followers APK is that it delivers the followers instantly and safely to your account. You don't need to wait for days or weeks to see the results. You also don't need to provide your password or any personal information to get the followers. The app claims to use encryption and proxy servers to protect your account from being detected or traced by Instagram.
-A third benefit of using Instagram followers APK is that it does not require you to enter your password or verify your account to get the followers. You only need to log in with your username or email, and the app will do the rest. This way, you can avoid giving away your sensitive information or compromising your account security.
-A fourth benefit of using Instagram followers APK is that it does not pose any risk of account suspension or ban from Instagram. The app claims to use advanced algorithms and techniques to bypass Instagram's detection and prevention systems. It also claims to follow Instagram's rules and limits, such as the maximum number of followers per day or per hour.
-However, using Instagram followers APK is not all roses and rainbows. It also has some drawbacks and disadvantages, such as:
-The main drawback of using Instagram followers APK is that it gives you fake and low-quality followers for your Instagram account. These followers are not real people who are interested in your content or niche. They are just bots or scripts that are created to inflate your follower count and make your account look more popular and influential. They do not interact with your posts or stories, nor do they like, comment, share, or save them. They also do not buy your products or services, nor do they support your brand or cause.
-Another drawback of using Instagram followers APK is that it may infect your device with malware or virus. Since the app is not available on the official app stores, it may contain harmful code or software that can damage your device or steal your data. You may also expose your device to other threats, such as phishing, spyware, ransomware, adware, trojans, worms, etc.
-A third drawback of using Instagram followers APK is that it violates Instagram's terms of service. Instagram does not allow users to use any third-party apps or services that offer fake engagement or growth for their accounts. It considers this as spamming and cheating, and it may take action against users who do so. According to Instagram's help center, "We prohibit third-party apps that claim to provide inauthentic likes, follows and comments." It also states that "Accounts that generate inauthentic activity are also against our Community Guidelines." If Instagram detects that you are using Instagram followers APK, it may warn you, limit your account functionality, suspend your account temporarily, or ban your account permanently.
-A fourth drawback of using Instagram followers APK is that it damages your reputation and credibility as an Instagram user. Having fake and low-quality followers may make your account look suspicious and untrustworthy to other users, especially if they notice the discrepancy between your follower count and your engagement rate. They may also question your authenticity and integrity, and they may lose respect and interest in you. This may affect your personal or professional image, and it may reduce your chances of building genuine relationships or partnerships with other users, brands, or influencers.
-In conclusion, Instagram followers APK is a modded application that claims to hack free followers for your Instagram account. It may offer some benefits, such as free and unlimited followers, instant and safe delivery, no password or verification required, and no risk of account suspension or ban. However, it also has some drawbacks, such as fake and low-quality followers, potential malware or virus infection, violation of Instagram's terms of service, and damage to your reputation and credibility. Therefore, we do not recommend using Instagram followers APK, as it is not a reliable or ethical way to grow your Instagram account. Instead, we suggest using organic and legitimate methods, such as creating valuable and engaging content, using relevant hashtags and keywords, interacting with your audience and other users, collaborating with other creators or brands, and using analytics tools to optimize your performance.
-Here are some frequently asked questions about Instagram followers APK:
-A: No, Instagram followers APK is not safe to use. It may contain malware or virus that can harm your device or data. It may also expose your account to hacking or banning by Instagram.
-A: No, Instagram followers APK is not legal to use. It violates Instagram's terms of service and community guidelines, which prohibit the use of third-party apps or services that offer fake engagement or growth for your account.
-A: No, Instagram followers APK is not effective to use. It only gives you fake and low-quality followers that do not interact with your content or support your goals. It also does not improve your reach or engagement on the platform.
-A: You can get real and high-quality followers for your Instagram account by using organic and legitimate methods, such as creating valuable and engaging content, using relevant hashtags and keywords, interacting with your audience and other users, collaborating with other creators or brands, and using analytics tools to optimize your performance.
-A: You can download Instagram followers APK from third-party websites or sources that offer it. However, we do not recommend doing so, as it is not safe, legal, or effective to use.
401be4b1e0, , Proqram, kompüterin və ya digər elektron cihazların işləməsini təmin edən məlumatlar və əmrlər toplusudur. Proqramlar müxtəlif növlü ola bilir, məsələn, əməliyyat sistemləri, tətbiqetmə proqramları, oyunlar, brauzerlər, antiviruslar və s. Proqram yükləmək, kompüterinizin və ya digər cihazlarınızın funksional və effektiv olmasını sağlayır. Proqram yükləyərək, istifad etdiyiniz cihazların imkanlarını genişləndirir, yeni xüsusiyyətlər Əlav edir və mövcud problemleri həll edirsiniz. Download ⚙⚙⚙ https://urlca.com/2uOgdw Proqram yüklәmәdәn әvvәl, yüklәmәk istәdiyiniz proqramı seçmәlisiniz. Seçdiyiniz proqramın sizin ehtiyacınızı ödәdiyinә vә cihazınızla uyğun olduğuna әmin olun. Proqram seçmәk üçün internetdә axtarış edә bilәrsiniz vә ya güvәnilir mәnbәlәrdәn istifadә edә bilersiniz Have you ever played truth or dare? It's a popular party game that involves choosing between answering a personal question or performing a daring task. But what if the stakes were raised to life or death? That's the premise of truth or die, a twisted version of the game that has been featured in movies, books, and real-life cases. In this article, we will explore what truth or die is, why people play it, what are the consequences, and how to avoid it. Download ··· https://urlca.com/2uOb9c Truth or die is a game that takes truth or dare to the extreme. Instead of harmless questions and dares, the players are forced to reveal their darkest secrets or face deadly punishments. The game can be played by a group of friends, strangers, or enemies, and can be orchestrated by a mastermind who sets the rules and monitors the outcomes. The game can also involve weapons, drugs, torture, or other forms of violence. One of the most famous examples of truth or die is the 2012 British movie Truth or Dare (also known as Truth or Die in the US). The movie follows a group of friends who are invited to a birthday party at a remote cabin by a mysterious host. There, they are forced to play a game of truth or die by a masked man who seeks revenge for his brother's death. The movie was inspired by real events that happened in Russia in 2009, where a group of teenagers played a similar game that resulted in murder and suicide. Some people may play truth or die for the thrill of risk and challenge. They may enjoy testing their limits, facing their fears, and proving themselves to others. They may also seek adrenaline, excitement, and novelty in their lives. According to some psychologists, these people may have high sensation-seeking personalities, which means they are drawn to intense and varied experiences. Other people may play truth or die because of the pressure of peer influence and social norms. They may feel obligated to participate in the game to fit in, impress, or avoid rejection from their friends. They may also follow the norms of their group, culture, or society, which may value honesty, bravery, loyalty, or conformity. According to some psychologists, these people may have low self-esteem, high need for approval, or low resistance to persuasion. The most obvious consequence of truth or die is physical and psychological harm. The game can cause injuries, illnesses, infections, disabilities, or even death to the players. It can also cause trauma, stress, anxiety, depression, guilt, shame, anger, or fear to the players and their loved ones. The game can also trigger existing mental health issues or create new ones. Another consequence of truth or die is legal and ethical issues. 
The game can involve illegal activities such as assault, battery, rape, robbery, vandalism, drug use, or murder. The game can also involve immoral actions such as lying, cheating, stealing, betraying, hurting, or killing others. The game can also violate the rights, privacy, dignity, or consent of the players or the victims. The game can result in legal actions such as arrests, charges, trials, convictions, sentences, or lawsuits. The game can also raise ethical dilemmas such as whether to tell the truth, do the dare, or quit the game. The first step to avoid truth or die is to be aware of the dangers and your rights. You should know the risks and consequences of playing the game, and how to protect yourself and others from harm. You should also know your rights and responsibilities as a citizen, a human being, and a participant in the game. You should be able to say no, stop, or leave the game at any time, without fear of retaliation or judgment. The second step to avoid truth or die is to choose your friends and activities wisely. You should surround yourself with people who respect you, support you, and care for you. You should also engage in activities that are fun, healthy, and safe for you and others. You should avoid people who pressure you, manipulate you, or harm you. You should also avoid activities that are dangerous, illegal, or immoral for you and others. truth or die movie plot Truth or die is a deadly game that can have serious physical, psychological, legal, and ethical consequences. It is a game that involves choosing between revealing a secret or facing a punishment that can be fatal. It is a game that is played by people who seek thrill, challenge, or acceptance from others. It is a game that can be avoided by being aware of the dangers and your rights, and by choosing your friends and activities wisely. Truth or die is not a game worth playing. Truth or die is a twisted version of truth or dare that involves life-threatening questions and dares. Truth or dare is a harmless party game that involves personal questions and daring tasks. Truth or die is based on real events that happened in Russia in 2009, where a group of teenagers played a similar game that resulted in murder and suicide. Some examples of truth or die questions are: Who do you hate the most in this room? What is your biggest regret in life? What is your darkest secret? Some examples of truth or die dares are: Cut off one of your fingers. Drink a bottle of bleach. Shoot someone in the head. Truth or die is not very common, but it has been reported in some countries such as Russia, China, India, and the UK. If you know someone who has played truth or die, you should help them seek medical attention if they are injured, psychological counseling if they are traumatized, and legal advice if they are involved in a crime. You should also offer them emotional support, comfort, and guidance. Includes unlimited streaming via the free Bandcamp app, plus high-quality downloads of The Sextet Sessions, G2 and You - Double EP, Deviate from Standards, The Max Quartet, The Mulligan Chronicles, Borrowed Time, Night Shift, and One of a Kind. , and , . Purchasable with gift card Buy Digital Discography $53.63 USD or more (25% OFF) Send as Gift ,
,
Download a Program: Why and How?
| | H2: What Is a Program? | What Is a Program?
| | P: Explain the definition and types of a program. | Why Is It Important to Download Programs?
| | P: Highlight the benefits of downloading programs. | proqram yükle
| | H2: Which Steps Should You Follow to Download a Program? | Which Steps Should You Follow to Download a Program?
| | H3: Choose the Program | Choose the Program
| | P: Explain why choosing a program matters and how to choose one. |
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Truth or Die A Game of Secrets Lies and Revenge.md b/spaces/congsaPfin/Manga-OCR/logs/Truth or Die A Game of Secrets Lies and Revenge.md
deleted file mode 100644
index 511dead6b196f64597698ee5743724970d32c8e1..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Truth or Die A Game of Secrets Lies and Revenge.md
+++ /dev/null
@@ -1,72 +0,0 @@
-
-Truth or Die: The Psychology Behind a Deadly Game
-truth or die
- What is truth or die?
-A game of truth or dare gone wrong
-A movie inspired by real events
-Why do people play truth or die?
-The thrill of risk and challenge
-The pressure of peer influence and social norms
-What are the consequences of truth or die?
-Physical and psychological harm
-Legal and ethical issues
-How to avoid truth or die?
-Be aware of the dangers and your rights
-Choose your friends and activities wisely
-
-truth or die game rules
-truth or die book summary
-truth or die cast and crew
-truth or die trailer youtube
-truth or die horror film review
-truth or die party ideas
-truth or die james patterson
-truth or die netflix streaming
-truth or die 2012 imdb
-truth or die questions for adults
-truth or die online free watch
-truth or die ending explained
-truth or die full movie download
-truth or die rotten tomatoes rating
-truth or die dare examples
-truth or die sequel release date
-truth or die theme song lyrics
-truth or dare vs truth or die
-truth or die similar movies list
-truth or die fanfiction stories
-truth or die trivia quiz
-truth or die behind the scenes
-truth or die dvd amazon
-truth or die best moments clips
-truth or die funny reactions memes
-truth or die plot twist spoilers
-truth or die 123movies hd
-truth or die based on true story
-truth or die director interview
-truth or die novel pdf free
-truth or die blu ray walmart
-truth or dare and truth or lie and truth or drink and truth or consequences and truth or die
-Conclusion
-FAQs
-What is the difference between truth or die and truth or dare?
-Is truth or die based on a true story?
-What are some examples of truth or die questions and dares?
-How common is truth or die?
-How can I help someone who has played truth or die?
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Cowboy Bebop OST S Flac ((NEW)).md b/spaces/contluForse/HuggingGPT/assets/Cowboy Bebop OST S Flac ((NEW)).md
deleted file mode 100644
index 8879b874d0284a1ca74d48b3ad264049bf919842..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Cowboy Bebop OST S Flac ((NEW)).md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Download ===== https://ssurll.com/2uzz2f
VB Decompiler Pro crack is a powerful tool that can decompile any program back into its original code. It is designed to be a lightweight, open-source, self-contained disassembler and assembler for a few frameworks. It lets you work on projects that were built with .NET technology, for example C#, VB.NET, and ASP.NET, and it can also be used to study the source code of Visual Basic projects written in the past. Combining an assembler, disassembler, decompiler, and debugger, VB Decompiler Pro incorporates a strong assembler that can be used to assemble and disassemble projects.
-Download ✒ https://ssurll.com/2uzwug
VB Decompiler Pro key is a great tool for program analysis and reverse engineering in a pinch; even in this scenario, a VB decompiler can be of use in analyzing a program. It has a robust disassembler and emulator built right in, and most assembly instructions can be decoded by this sophisticated engine. It gathers all functions and methods found in VB applications into a single vbreformer file and tries to recover as much of their source code as feasible (if the application was compiled with the native code option).
-VB Decompiler Pro license key has export options that include saving the procedure list, saving all the code in one module, or exporting only the decompiled project. Other highlights include an advanced string search feature, the ability to patch data, and obfuscation of the project. The functionality of the application is extended by built-in plugins that allow you to create a map file, show references, or set the decompiler priority. On an ending note, this VB decompiler alternative can make any programmer's day by offering advanced features to disassemble, analyze, and export code in an attempt to recover old projects whose source code you have lost.
Cisco ASA 5505 is a security appliance that provides firewall, VPN, and other features for small and medium-sized businesses. To enable some of these features, you need to activate your device with a license key. A license key is a 160-bit value that encodes the serial number and the enabled features of your device. You can obtain a license key from the Cisco Licensing team or from a reseller.
-In this article, we will show you how to activate a Cisco ASA 5505 with a time-based activation key. A time-based activation key is a special type of license key that allows you to activate features for a specific time period. This can be useful if you want to test or use some features temporarily without buying a permanent license.
-Download File ✯✯✯ https://gohhs.com/2uFUjY
The first step is to get a Product Activation Key (PAK) for the device. A PAK is a code that you register on the Cisco website to obtain your license key. You can get a PAK from the Cisco Licensing team or from a reseller. When you request a PAK, you need to provide the serial number of your device, which you can find by running the show version command on your device.
For example, here is the output of show version on a Cisco ASA 5505:
ciscoasa# show version
-Cisco Adaptive Security Appliance Software Version 9.12 (2)
-Firepower Extensible Operating System Version 2.6 (1.141)
-Device Manager Version 7.12 (2)
----omitted for brevity---
-Licensed features for this platform:
-Maximum VLANs : 50 perpetual
-Inside Hosts : Unlimited perpetual
-Failover : Active/Active perpetual
-Encryption-DES : Enabled perpetual
-Encryption-3DES-AES : Enabled perpetual
-Security Contexts : 2 perpetual
-Carrier : Disabled perpetual
-AnyConnect Premium Peers : 2 perpetual
-AnyConnect Essentials : Disabled perpetual
-Other VPN Peers : 100 perpetual
-Total VPN Peers : 100 perpetual
-AnyConnect for Mobile : Disabled perpetual
-AnyConnect for Cisco VPN Phone : Disabled perpetual
-Advanced Endpoint Assessment : Enabled perpetual
-Shared License : Disabled perpetual
-Total TLS Proxy Sessions : 320 perpetual
-Botnet Traffic Filter : Disabled perpetual
-Cluster : Disabled perpetual
-Serial Number: 9A5KG6HTQSB
-Running Permanent Activation Key: 0xa339d567 0xa8df641f 0x9193bd58 0xc6344cb4 0x031bfbaa
-The serial number in this example is 9A5KG6HTQSB. You need to provide this number when you request a PAK.
-The next step is to register your PAK on the Cisco website to get your license key. You need a Cisco account to do this. If you don't have one, you can create one for free on Cisco Support.
-Once you have an account, go to the Cisco License Registration Portal and follow these steps:
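-After the portal finishes processing your PAK, Cisco sends the 160-bit activation key to you (typically by email). Applying a time-based key on the appliance itself is then usually done from global configuration mode; the five hex words below are placeholders, not a valid license:
-ciscoasa# configure terminal
-ciscoasa(config)# activation-key 0x11111111 0x22222222 0x33333333 0x44444444 0x55555555
-ciscoasa(config)# exit
-ciscoasa# show activation-key
-Some license changes may only take effect after a reload, so save the configuration and use show activation-key to confirm that the time-based license and its remaining duration are listed.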
Download Dog Heaven Season 1 Dual Audio Hindi-English 480p [150MB] 720p [320MB] 1080p [1GB] WebRip Quality All Episode Hindi Dubbed MKV Google Drive Links. This TV Series is Based On Comedy, Action, and Drama. Click the Below Download Button To Proceed. HindMoviez.org is The Best Website/Platform For Web Series, TV Shows, 720p Dual Audio, And English HD Series. []
-Download >>> https://gohhs.com/2uFSU9
Download Noi Season 1 Dual Audio Hindi-English 480p [150MB] 720p [180MB] 1080p [1GB] WebRip Quality All Episode Hindi Dubbed Batch Zip File Google Drive Links. This TV Series is Based On Action, Adventure, and Drama. Click the Below Download Button To Proceed. HindMoviez.net is The Best Website/Platform For Web Series, TV Shows, 720p Dual Audio, And []
-Watch Chinese Drama Iron Zone Season 1 Dual Audio Hindi-English 480p [250MB] 720p [400MB] 1080p [1GB] WebRip Quality All Episode Hindi Dubbed Zip File Google Drive Links. This TV Series is Based On Fantasy, Action, and Drama. Click the Below Download Button To Proceed. HindMoviez.org is The Best Website/Platform For Web Series, TV Shows, 720p Dual Audio, And English HD Series. []
-Watch Chinese Drama Oasis Season 1 Dual Audio Hindi-English 480p [150MB] 720p [300MB] 1080p [1GB] WebRip Quality All Episode Hindi Dubbed MKV Google Drive Links. This TV Series is Based On Fantasy, Action, and Adventure. Click the Below Download Button To Proceed. HindMoviez.net is The Best Website/Platform For Web Series, TV Shows, 720p Dual Audio, And English HD Series. []
Click Here To Open All Series Zip Archive GDrive GDToT Download Links
Download My Encounter with Evil Season 1 Dual Audio English-Spanish 720p 10bit [300MB] 1080p [550MB] Quality All Episode English Subtitle Batch Zip MKV Google Drive Links. This TV Series is Based On Documentary, Horror. Click the Below Download Button To Proceed. HindMoviez.org is The Best Website/Platform For Web Series, TV Shows, 720p Dual Audio, And English []
- Below is a Bible comparison of some of the top worship software products on the market. Along with that, we'll show you exactly which Bibles you receive and how much they cost. Every church and worship experience is different, so we want to give you a clearer perspective on church presentation software choices for you and your ministry when it comes to Bibles specifically.
-We had the best of intentions to put this Bible comparison together in a nice, easy-to-read format and show you all the important information about EasyWorship 6 Bible software, but we quickly realized that we could not do that, as it would be a very long, boring, and difficult read. So we decided to just give you all the information right here in a short, compact, and easy-to-read format. You just need to hit the download button to download the file, and it will be ready to use as soon as you have downloaded it. We hope you find this Bible comparison useful, and if you have any questions please email us at support@easiestworship.com
-DOWNLOAD ————— https://gohhs.com/2uFTk0
Many churches use church presentation software because it is fast and easy. Alongside the benefits of this software, though, problems arise when it is not working properly. EasyWorship provides dedicated maintenance and customer support for all of its church presentation software programs.
-We are happy to have you on our website. This is our weekly release, and we see that you are visiting us on a weekly basis. We will be updating this site when EasyWorship 6 is released. We already have a backlog of plugin requests that we need to sort and add to the site, and we will also be adding new plugins for users to download.
DOWNLOAD ○ https://gohhs.com/2uFVi1
Midnight Club: Los Angeles is a racing game developed by Rockstar San Diego and published by Rockstar Games in 2008. It is the fourth and final installment in the Midnight Club series, and features a realistic open-world recreation of Los Angeles, where players can race and customize various vehicles.
-Download File »»» https://gohhs.com/2uFTeA
Unfortunately, Midnight Club: Los Angeles was never released for PC, and there is no official way to play it on a computer. However, there are some unofficial methods that can allow you to enjoy this game on your PC, using an emulator or a streaming service. Here are some of the options you can try:
-These are some of the ways you can play Midnight Club: Los Angeles on PC in 2021. However, keep in mind that these methods are not authorized by Rockstar Games or Microsoft, and may violate their terms of service or intellectual property rights. Use them at your own risk and discretion.
-If you want to play Midnight Club: Los Angeles legally and officially, you will need to buy it for Xbox 360 or PS3, or use an Xbox One or Xbox Series X/S console with backward compatibility. You can find more information and purchase links on the official website: https://www.rockstargames.com/games/midnightclubLA [^1^]
- Download ->>> https://gohhs.com/2uFTR6
Download Zip - https://gohhs.com/2uFTQf
Download ✫ https://urlca.com/2uDbZ5
If you are a fan of Hausa music, you might have heard of Umar M Shareef, a popular singer, songwriter, and actor from Kaduna, Nigeria. He has released many songs and albums, but one of his latest hits is Zabin Raina, a romantic song that features Mome Nijar. In this article, we will review Umar M Shareef Zabin Raina mp3 download, including who he is, what the song is about, how to download it, and what are the pros and cons of doing so.
-Umar M Shareef was born on February 10, 1987, in Rigasa, a local government area in Kaduna State. He started his music career at a young age and recorded over 500 songs. He is also the CEO of Shareef Production Studios in Kaduna. He has starred in several Kannywood movies, such as Mariya and Kar ki manta da ni. He sings mostly love songs in Hausa language, but he also collaborates with other artists from different genres and backgrounds.
-Download Zip ↔ https://urllie.com/2uNw20
Umar M Shareef has won several awards and recognition for his music and acting skills. In 2018, he won the AMMA Award for Best Upcoming Actor. He was also featured on a song titled Kano with Vector, but his biggest musical break so far was when he was featured in the 2019 INEC song Not for Sale alongside TuBaba, MI Abaga, Teni, Waje, Chidinma, and Cobhams. He is one of the richest and most popular celebrities in Hausa state and Northern Nigeria.
-Zabin Raina is a Hausa phrase that means "my choice" or "my preference". It is a song that expresses the love and devotion of a man to his wife. He tells her that she is his only choice and that he will always be faithful to her. He also praises her beauty and charm and asks her to stay with him forever. The lyrics are romantic and poetic, using metaphors and imagery to convey the emotions of the singer. Here are some of the lyrics translated into English:
Zabin Raina ba wata bayan ke kin sani / Zamuyi aure kizama mata niko nazam miji / Zabin raina ba wani bayan kai kasanı / Zamuyi aure nazama mata kaiko kazam miji
My choice is not anyone else but you / We got married as one soul in two bodies / My choice is not anyone else but you / We got married as one soul in two bodies
Soyayyace a zuciya ta kedai na saka / Ke kika dace karbar sadaki na mu shige daka / Naki na kauce a kanki ko sara ko sassaka / Ni na amince abani ke in bada kome na mallaka
Love is only in my heart for you / You agreed to accept my proposal and we got married / I am yours and you are mine forever / I trust you completely and I will never betray you
Rabin jikina gataa / Da hannu na nuna taa / Duk wanda ya kushe taaa / Dani dashi fam fam fam
Half of my body is yours / With my hand I show it to you / Whoever tries to harm you / I will hit him fam fam fam / Ka
 Video and audio
- The video of Zabin Raina was released on YouTube on June 6, 2021, and has over 1.2 million views as of writing this article. The video features Umar M Shareef and Mome Nijar as the main characters, as well as other actors and dancers. The video is colorful and vibrant, showing the couple in different settings and costumes. The video also has some scenes from the movie Mariya, where Umar M Shareef played the role of a prince who falls in love with a commoner. The audio of the song is available on various platforms, such as Spotify, Apple Music, Audiomack, and SoundCloud. You can also download the mp3 file from some websites, such as NaijaVibes, NaijaMusic, and HausaLoaded.
- How to download Zabin Raina mp3?
- Steps and tips
- If you want to download Zabin Raina mp3 to your device, you can follow these simple steps:
-
-- Go to any of the websites that offer the mp3 file, such as [NaijaVibes], [NaijaMusic], or [HausaLoaded].
-- Click on the download button or link that corresponds to the song.
-- Choose the quality and format of the file you want to download.
-- Wait for the download to complete and save the file to your preferred location.
-
- Some tips to keep in mind when downloading Zabin Raina mp3 are:
-
-- Make sure you have enough storage space on your device before downloading the file.
-- Check the source of the file and avoid downloading from suspicious or untrusted websites.
-- Use a reliable and fast internet connection to avoid interruptions or errors during the download.
-- Scan the file for viruses or malware before opening it or playing it on your device.
-
- Benefits and drawbacks
- Downloading Zabin Raina mp3 can have some benefits and drawbacks, depending on your preferences and needs. Here are some of them:
-
-
-| Benefits | Drawbacks |
-| --- | --- |
-| You can listen to the song offline anytime and anywhere. | You might violate the copyright laws or the rights of the artist if you download without permission. |
-| You can save data and battery by not streaming the song online. | You might consume a lot of storage space on your device if you download too many files. |
-| You can share the song with your friends or family via Bluetooth or other methods. | You might miss out on updates or new releases from the artist if you don't follow them online. |
-
-
- Conclusion
- Summary of the main points
- In conclusion, Umar M Shareef Zabin Raina mp3 download is a great option for fans of Hausa music who want to enjoy a romantic and catchy song. Umar M Shareef is a talented and famous singer, songwriter, and actor who has released many songs and albums, but Zabin Raina is one of his latest hits. The song features Mome Nijar and has a video that has over 1.2 million views on YouTube. The song is about the love and devotion of a man to his wife, who is his only choice. The lyrics are poetic and expressive, using metaphors and imagery to convey the emotions of the singer. You can download Zabin Raina mp3 from various websites, but you should follow some steps and tips to ensure a safe and smooth download. You should also be aware of the benefits and drawbacks of downloading Zabin Raina mp3, depending on your preferences and needs.
-umar m shareef zabin raina audio download
-umar m shareef zabin raina video download
-umar m shareef zabin raina lyrics
-umar m shareef zabin raina song
-umar m shareef zabin raina music
-umar m shareef zabin raina official video
-umar m shareef zabin raina mp3 free download
-umar m shareef zabin raina arewacoolmusic
-umar m shareef zabin raina 9jaolofofo
-umar m shareef zabin raina netnaija
-umar m shareef zabin raina 2022
-umar m shareef zabin raina latest song
-umar m shareef zabin raina hausa music
-umar m shareef zabin raina kannywood
-umar m shareef zabin raina adamazango
-umar m shareef zabin raina youtube
-umar m shareef zabin raina naijaloaded
-umar m shareef zabin raina tooxclusive
-umar m shareef zabin raina waploaded
-umar m shareef zabin raina sabwap
-umar m shareef zabin raina mp3skull
-umar m shareef zabin raina mp3juice
-umar m shareef zabin raina mp3goo
-umar m shareef zabin raina mp3paw
-umar m shareef zabin raina mp3quack
-umar m shareef zabin raina mp3direct
-umar m shareef zabin raina mp3clan
-umar m shareef zabin raina mp3lio
-umar m shareef zabin raina mp3cool
-umar m shareef zabin raina mp3cut
-umar m shareef zabin raina mp3 downloader
-umar m shareef zabin raina mp3 editor
-umar m shareef zabin raina mp3 file
-umar m shareef zabin raina mp3 format
-umar m shareef zabin raina mp3 gratis
-umar m shareef zabin raina mp3 high quality
-umar m shareef zabin raina mp3 instrumental
-umar m shareef zabin raina mp3 joiner
-umar m shareef zabin raina mp3 karaoke
-umar m shareef zabin raina mp3 list
-umar m shareef zabin raina mp3 mixer
-umar m shareef zabin raina mp3 online
-umar m shareef zabin raina mp3 player
-umar m shareef zabin raina mp3 quality
-umar m shareef zabin raina mp3 ringtone
-umar m shareef zabin raina mp3 streamer
-umar m shareef zabin raina mp3 trimmer
-umar m shareef zabin raina mp3 unblocked
-umar m shareef zabin raina mp3 volume booster
- Recommendations and opinions
- If you are looking for a song that will make you feel happy and loved, Zabin Raina is a perfect choice. It is a song that celebrates the beauty and joy of marriage and loyalty. It is also a song that showcases the culture and language of Hausa people. Umar M Shareef is a talented artist who deserves more recognition and support for his work. He has a unique voice and style that makes him stand out from other singers. He also has a great personality and charisma that makes him appealing to his fans. I recommend you to listen to Zabin Raina and other songs by Umar M Shareef if you want to experience some quality Hausa music. You can also watch his movies if you want to see him in action. He is a versatile and talented artist who can sing, write, and act. He is also a role model and a leader who inspires many people with his success and achievements.
FAQs
- Q: Who is Umar M Shareef?
-A: Umar M Shareef is a popular singer, songwriter, and actor from Kaduna, Nigeria. He sings mostly love songs in Hausa language and has won several awards and recognition for his music and acting skills.
- Q: What is Zabin Raina?
-A: Zabin Raina is a song by Umar M Shareef that features Mome Nijar. It is a romantic song that expresses the love and devotion of a man to his wife. It has a video that has over 1.2 million views on YouTube.
- Q: How to download Zabin Raina mp3?
-A: You can download Zabin Raina mp3 from various websites, such as NaijaVibes, NaijaMusic, or HausaLoaded. You should follow some steps and tips to ensure a safe and smooth download. You should also be aware of the benefits and drawbacks of downloading Zabin Raina mp3.
- Q: What are the benefits and drawbacks of downloading Zabin Raina mp3?
-A: Some of the benefits of downloading Zabin Raina mp3 are that you can listen to the song offline anytime and anywhere, you can save data and battery by not streaming the song online, and you can share the song with your friends or family via Bluetooth or other methods. Some of the drawbacks of downloading Zabin Raina mp3 are that you might violate the copyright laws or the rights of the artist if you download without permission, you might consume a lot of storage space on your device if you download too many files, and you might miss out on updates or new releases from the artist if you don't follow them online.
- Q: Where can I find more songs by Umar M Shareef?
-A: You can find more songs by Umar M Shareef on various platforms, such as Spotify, Apple Music, Audiomack, and SoundCloud. You can also follow him on his social media accounts, such as Instagram, Twitter, and Facebook.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download the Latest Logitech USB Headset H340 Driver for Your Operating System.md b/spaces/fatiXbelha/sd/Download the Latest Logitech USB Headset H340 Driver for Your Operating System.md
deleted file mode 100644
index 92bca25430cbb5cdd42bbbb2b17f3eb4393ec1b2..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download the Latest Logitech USB Headset H340 Driver for Your Operating System.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-How to Download and Install Logitech USB Headset H340 Driver
-If you are looking for a versatile and reliable headset for voice calls, Skype, webinars, and more, you might want to consider the Logitech USB Headset H340. This headset offers a simple plug-and-play USB-A connection, a noise-canceling microphone, and a digital stereo sound. However, to make sure that your headset works properly with your computer, you need to download and install the correct driver for it. In this article, we will show you how to find, download, and install the Logitech USB Headset H340 driver for Windows, Mac, and Chrome OS. We will also provide some troubleshooting tips for common issues with the headset and the driver.
-logitech usb headset h340 driver download
DOWNLOAD === https://urllie.com/2uNw51
- How to Find the Compatible Driver for Your Operating System
-The first step to download and install the Logitech USB Headset H340 driver is to find out which operating system you are using on your computer. The headset is compatible with Windows, Mac, and Chrome OS, but you need to download the specific driver for each one. Here is how to check your operating system:
- Windows
-If you are using a Windows computer, you can follow these steps to check your operating system:
-
-- Click on the Start button and type "system information" in the search box.
-- Select System Information from the list of results.
-- In the System Summary section, look for System Type. It will tell you whether you have a 32-bit or a 64-bit operating system.
-- Also look for OS Name. It will tell you which version of Windows you have, such as Windows 10, Windows 8.1, or Windows 7.
-
- Mac
-If you are using a Mac computer, you can follow these steps to check your operating system:
-
-- Click on the Apple menu icon in the top left corner of your screen and select About This Mac.
-- In the Overview tab, look for Software. It will tell you which version of macOS you have, such as macOS Big Sur, macOS Catalina, or macOS Mojave.
-
- Chrome OS
-If you are using a Chromebook or another device that runs on Chrome OS, you can follow these steps to check your operating system:
-
-- Click on your account photo in the bottom right corner of your screen and select Settings.
-- In the left sidebar, click on About Chrome OS.
-- In the About section, look for Version. It will tell you which version of Chrome OS you have, such as Version 91.0.4472.114.
-
- How to Download and Install the Driver from Logitech's Website
-Once you know your operating system, you can go to Logitech's website to download and install the Logitech USB Headset H340 driver. Here are the steps to do so:
-logitech h340 usb headset software download
-how to install logitech usb headset h340 driver
-logitech usb headset h340 driver update
-logitech usb headset h340 driver windows 10
-logitech usb headset h340 driver mac
-logitech usb headset h340 driver not working
-logitech usb headset h340 driver error
-logitech usb headset h340 driver for linux
-logitech usb headset h340 driver free download
-logitech usb headset h340 driver download link
-logitech usb headset h340 driver installation guide
-logitech usb headset h340 driver compatibility
-logitech usb headset h340 driver troubleshooting
-logitech usb headset h340 driver support
-logitech usb headset h340 driver version
-logitech usb headset h340 driver features
-logitech usb headset h340 driver review
-logitech usb headset h340 driver manual
-logitech usb headset h340 driver warranty
-logitech usb headset h340 driver specifications
-logitech usb headset h340 driver requirements
-logitech usb headset h340 driver benefits
-logitech usb headset h340 driver alternatives
-logitech usb headset h340 driver comparison
-logitech usb headset h340 driver tips
-logitech usb headset h340 driver feedback
-logitech usb headset h340 driver issues
-logitech usb headset h340 driver solutions
-logitech usb headset h340 driver fixes
-logitech usb headset h340 driver improvements
-logitech usb headset h340 driver enhancements
-logitech usb headset h340 driver recommendations
-logitech usb headset h340 driver suggestions
-logitech usb headset h340 driver advice
-logitech usb headset h340 driver best practices
-logitech usb headset h340 driver faq
-logitech usb headset h340 driver forum
-logitech usb headset h340 driver blog
-logitech usb headset h340 driver video
-logitech usb headset h340 driver tutorial
-logitech usb headset h340 driver webinar
-logitech usb headset h340 driver course
-logitech usb headset h340 driver ebook
-logitech usb headset h340 driver pdf
-logitech usb headset h340 driver case study
-logitech usb headset h340 driver testimonial
-logitech usb headset h340 driver success story
-logitech usb headset h340 driver discount code
-logitech usb headset h340 driver coupon code
-
-- Go to Logitech's support page for the USB Headset H340.
-- Select Downloads from the menu bar.
-- Select your operating system from the drop-down list. If there are no downloads available for your operating system, it means that the headset does not require a driver for that operating system and should work automatically when plugged in.
-- If there are downloads available, click on the Download Now button to download the driver file to your computer.
-- Once the download is complete, locate the file on your computer and double-click on it to run the installer.
-- Follow the on-screen instructions to complete the installation process. You may need to restart your computer for the changes to take effect.
-- Plug in your headset to a USB port on your computer and test if it works properly.
-
- How to Troubleshoot Common Issues with the Headset and the Driver
-Sometimes, you may encounter some issues with your headset or the driver, such as no sound, microphone not working, or USB connection problems. Here are some troubleshooting tips to help you resolve these issues:
- No Sound or Microphone Not Working
-If you cannot hear any sound from your headset or your microphone is not picking up your voice, you can try these steps:
-
-- Make sure that your headset is plugged in securely to a USB port on your computer.
-- Make sure that your headset is set as the default audio device and input device on your computer. You can check this by going to the sound settings on your operating system and selecting your headset from the list of available devices.
-- Make sure that the volume and mute buttons on your headset are not turned down or muted.
-- Make sure that the application you are using, such as Skype, Zoom, or Teams, has access to your headset and microphone. You can check this by going to the settings or preferences of the application and selecting your headset from the list of available devices.
-- If none of these steps work, you may need to uninstall and reinstall the driver for your headset. You can do this by going to the device manager on your operating system and finding your headset under the audio devices category. Right-click on your headset and select Uninstall device. Then, follow the steps above to download and install the driver again.
-
- USB Connection Problems
-If your headset is not recognized by your computer or keeps disconnecting and reconnecting, you can try these steps:
-
-- Try plugging your headset into a different USB port on your computer. Avoid using USB hubs or extension cables as they may interfere with the connection.
-- Try using a different USB cable if you have one. The cable may be damaged or faulty.
-- Try connecting your headset to a different computer if you have one. This can help you determine if the problem is with your headset or your computer.
-- If none of these steps work, you may need to contact Logitech's customer support for further assistance. You can do this by going to Logitech's support page and selecting Contact Us from the menu bar.
-
- Volume and Microphone Settings
-If you want to adjust the volume and microphone settings for your headset, you can do so by using Logitech's software called Logi Tune. This software allows you to customize various aspects of your headset, such as equalizer, sidetone, noise cancellation, and more. Here are the steps to download and use Logi Tune:
-
-- Go to Logitech's download page for Logi Tune.
-- Select your operating system from the drop-down list and click on Download Now.
-- Once the download is complete, locate the file on your computer and double-click on it to run the installer.
-- Follow the on-screen instructions to complete the installation process. You may need to restart your computer for the changes to take effect.
-- Launch Logi Tune from your desktop or start menu and connect your headset to your computer.
-- You will see a dashboard with various options for adjusting your headset settings. You can use the sliders and buttons to change the volume, microphone sensitivity, sidetone level, noise cancellation mode, and more. You can also select from different presets for music, voice, gaming, etc.
-- You can also access Logi Tune from the system tray icon on Windows or the menu bar icon on Mac. You can right-click or click on the icon to open a quick menu with some basic options for controlling your headset.
-
- Conclusion
-In this article, we have shown you how to download and install the Logitech USB Headset H340 driver for Windows, Mac, and Chrome OS. We have also provided some troubleshooting tips for common issues with the headset and the driver. We hope that this article has helped you to set up and use your Logitech USB Headset H340. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!
- FAQs
-Here are some frequently asked questions about the Logitech USB Headset H340 and the driver:
- What are some common calling applications that work with the headset?
-The Logitech USB Headset H340 works with most calling applications that support USB audio devices, such as Skype, Zoom, Teams, Google Meet, Discord, and more. You can use the headset to make and receive voice calls, video calls, webinars, podcasts, and more.
- How do I adjust the headband and ear cushions for comfort?
-The Logitech USB Headset H340 has an adjustable headband and soft leatherette ear cushions that provide comfort and fit for different head sizes and shapes. You can adjust the headband by sliding the ear cups up or down until you find a comfortable position. You can also rotate the ear cups slightly to align them with your ears. The ear cushions are designed to cover your ears and block out ambient noise.
- How do I mute or unmute the microphone?
-The Logitech USB Headset H340 has a mute button on the right ear cup that allows you to mute or unmute the microphone easily. You can press the button once to mute the microphone and press it again to unmute it. You will hear a beep sound when you mute or unmute the microphone. You can also see a red LED light on the tip of the microphone boom when it is muted.
- How do I clean and maintain the headset?
-The Logitech USB Headset H340 is easy to clean and maintain. You can use a soft cloth or a cotton swab to gently wipe the ear cups, the headband, and the microphone boom. You can also use a slightly damp cloth or a cotton swab with some mild soap or alcohol to remove any dirt or stains. Do not use any abrasive or corrosive materials or liquids to clean the headset. Do not immerse the headset in water or expose it to excessive heat or humidity.
- How long is the warranty for the headset?
-The Logitech USB Headset H340 comes with a two-year limited hardware warranty from Logitech. This warranty covers any defects in materials or workmanship of the headset under normal use and service. You can contact Logitech's customer support for any warranty claims or inquiries. You can also register your product online at Logitech's website to get access to additional benefits and services.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Enjoy Trucker Real Wheels Simulator Mod APK with All Features Unlocked.md b/spaces/fatiXbelha/sd/Enjoy Trucker Real Wheels Simulator Mod APK with All Features Unlocked.md
deleted file mode 100644
index 05f4d874e026606ce4367bfe73cfaf381c86835f..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Enjoy Trucker Real Wheels Simulator Mod APK with All Features Unlocked.md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-Trucker Real Wheels Simulator Mod APK: A Review
-If you are a fan of truck simulator games, you might have heard of Trucker Real Wheels Simulator, a popular game that lets you transport various cargoes, earn money and restore the city. But did you know that there is a mod apk version of the game that gives you more features and benefits? In this article, we will review Trucker Real Wheels Simulator Mod APK, what it is, how to download and install it, and some tips and tricks for playing the game.
-trucker real wheels simulator mod apk
Download ->->->-> https://urllie.com/2uNym1
- What is Trucker Real Wheels Simulator?
-A realistic truck physics simulator game
-Trucker Real Wheels Simulator is a game developed by Feofun Limited, a company that specializes in creating simulation games. The game was released in 2020 and has gained over 10 million downloads on Google Play Store. The game is designed to give you a realistic truck physics simulator experience, where you can adjust the suspension height, shock absorber stiffness, change tires to achieve the best off-road grip, and monitor the temperature of the engine. You can also customize your trucks and cars by changing the colors, wheels and tires.
- Features of the game
-The game has many features that make it fun and challenging to play. Some of them are:
-
-- More than 30 different plants, where you can transport wood, iron ore, cars, trucks, tractors and more.
-- More than 30 different types of cargo, starting with simple boards, ending with trucks and cars.
-- More than 50 roads, 6 types of terrain, such as forest, fields, mountains and hills, desert and winter.
-- A dynamic change of weather conditions, a change of day and night, rain, snow, and fog.
-- Time races, where you can take your truck or car, take part in races, and get into the leaders.
-- Excellent 2D graphics, detailed trucks and cars, working headlights for night trips.
-
- What is Trucker Real Wheels Simulator Mod APK?
-A modified version of the original game
-Trucker Real Wheels Simulator Mod APK is a modified version of the original game that gives you some extra features and benefits that are not available in the official version. The mod apk is created by third-party developers who modify the original game files to unlock some features or add some new ones. The mod apk is not affiliated with or endorsed by Feofun Limited or Google Play Store.
- Benefits of using the mod apk
-Some of the benefits of using the mod apk are:
-
-- You can get unlimited money to buy and upgrade your trucks and cars.
-- You can get unlimited fuel to drive without worrying about running out of gas.
-- You can get unlimited jump tokens to skip difficult levels or missions.
-- You can get all trucks and cars unlocked from the start.
-- You can get all plants and enterprises restored from the start.
-- You can get rid of ads that may interrupt your gameplay.
-
- How to download and install Trucker Real Wheels Simulator Mod APK?
-Steps to download and install the mod apk
-To download and install Trucker Real Wheels Simulator Mod APK, you need to follow these steps:
-
-- Go to a trusted website that provides the mod apk file. For example, you can go to [1](https://m.happymod.com/trucker-real-wheels-simulator-mod/com.ILXAM.TruckerRealWheel/), where you can find the latest version of the mod apk.
-- Click on the download button and wait for the download to finish.
-- Once the download is complete, go to your file manager and locate the mod apk file. Tap on it to install it. You may need to enable the installation from unknown sources in your device settings.
-- After the installation is done, you can launch the game and enjoy the mod features.
-
- Tips and tricks for playing the game
-Here are some tips and tricks that can help you play the game better:
-trucker real wheels simulator unlimited money
-download trucker real wheels simulator apk mod
-trucker real wheels simulator hack version
-trucker real wheels simulator mod apk latest
-trucker real wheels simulator offline game
-trucker real wheels simulator cheats android
-trucker real wheels simulator gameplay video
-trucker real wheels simulator free download
-trucker real wheels simulator mod menu
-trucker real wheels simulator tips and tricks
-trucker real wheels simulator review rating
-trucker real wheels simulator update version
-trucker real wheels simulator best trucks
-trucker real wheels simulator cargo transport
-trucker real wheels simulator snow road
-trucker real wheels simulator mod apk rexdl
-trucker real wheels simulator online multiplayer
-trucker real wheels simulator features list
-trucker real wheels simulator guide tutorial
-trucker real wheels simulator mod apk revdl
-trucker real wheels simulator new plants
-trucker real wheels simulator differential locks
-trucker real wheels simulator mountain roads
-trucker real wheels simulator mod apk happymod
-trucker real wheels simulator weather conditions
-trucker real wheels simulator mod apk android 1
-trucker real wheels simulator low gears
-trucker real wheels simulator city restoration
-trucker real wheels simulator mod apk an1
-trucker real wheels simulator road situations
-
-- Choose the right truck or car for each mission. Different vehicles have different characteristics, such as speed, power, fuel consumption, and cargo capacity. You can check the stats of each vehicle in the garage before selecting it.
-- Drive carefully and avoid collisions. Collisions can damage your vehicle and cargo, and reduce your earnings. You can repair your vehicle in the garage, but it will cost you money.
-- Use the map and GPS to plan your route. The map shows you the locations of plants, enterprises, gas stations, and other points of interest. The GPS shows you the direction and distance to your destination. You can also zoom in and out of the map to see more details.
-- Follow the traffic rules and signs. The game has realistic traffic rules and signs that you need to obey. For example, you need to stop at red lights, yield to other vehicles, and follow the speed limit. Breaking the rules can result in fines or accidents.
-- Upgrade your trucks and cars regularly. Upgrading your vehicles can improve their performance, appearance, and durability. You can upgrade various parts of your vehicles, such as engine, transmission, suspension, tires, brakes, and body.
-
- Conclusion
-Trucker Real Wheels Simulator is a fun and realistic truck physics simulator game that lets you transport various cargoes, earn money and restore the city. Trucker Real Wheels Simulator Mod APK is a modified version of the game that gives you more features and benefits, such as unlimited money, fuel, jump tokens, unlocked trucks and cars, restored plants and enterprises, and no ads. You can download and install the mod apk from a trusted website and enjoy the game with more freedom and fun. We hope this article has helped you learn more about Trucker Real Wheels Simulator Mod APK and how to play it.
- FAQs
-Here are some frequently asked questions about Trucker Real Wheels Simulator Mod APK:
-
-- Is Trucker Real Wheels Simulator Mod APK safe to use?
-Trucker Real Wheels Simulator Mod APK is generally safe to use if you download it from a trusted website. However, you should be aware that mod apk files are not official versions of the game and may contain viruses or malware that can harm your device or data. Therefore, you should always scan the mod apk file before installing it and use it at your own risk.
-- Is Trucker Real Wheels Simulator Mod APK compatible with my device?
-Trucker Real Wheels Simulator Mod APK is compatible with most Android devices that have Android 4.4 or higher versions. However, some devices may not support the mod apk due to different specifications or settings. If you encounter any problems while installing or playing the mod apk, you can try to change your device settings or contact the mod apk developer for help.
-- Can I play Trucker Real Wheels Simulator Mod APK online with other players?
-No, Trucker Real Wheels Simulator Mod APK is not an online game and does not support multiplayer mode. You can only play the game offline on your device.
-- Can I update Trucker Real Wheels Simulator Mod APK to the latest version?
-Yes, you can update Trucker Real Wheels Simulator Mod APK to the latest version if there is one available on the website where you downloaded it from. However, updating the mod apk may cause some of your mod features or data to be lost or overwritten by the new version. Therefore, you should always backup your data before updating the mod apk.
-- Can I use Trucker Real Wheels Simulator Mod APK with other mods or cheats?
-No, Trucker Real Wheels Simulator Mod APK is not compatible with other mods or cheats for the game. Using other mods or cheats may cause conflicts or errors in the game or mod apk. Therefore, you should only use one mod apk at a time for the game.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Facebook Lite APK The Ultimate Solution for Low-End Android Devices.md b/spaces/fatiXbelha/sd/Facebook Lite APK The Ultimate Solution for Low-End Android Devices.md
deleted file mode 100644
index ad7c6f8a6f8029a28dc788c630c00e5e456030b3..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Facebook Lite APK The Ultimate Solution for Low-End Android Devices.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-2019 Facebook Lite APK: A Faster and Lighter Way to Use Facebook
-Facebook is one of the most popular social media platforms in the world, with billions of users who connect, share, and interact with each other every day. However, not everyone has access to a fast and reliable internet connection, or a powerful and spacious smartphone. That's why Facebook created Facebook Lite, a smaller and simpler version of the full Facebook app that works well on low-end devices and slow networks. In this article, we will tell you everything you need to know about Facebook Lite, including its features, how to download it, and what are some of its advantages and disadvantages.
- What Is Facebook Lite?
-Facebook Lite is an official Facebook client that was launched in 2015 as a version of Facebook built from scratch to work smoothly with poor mobile connections and low-end phones. It is designed for users who live in developing countries or rural areas where data connectivity is hard to come by, or who want to save data and battery on their phones. Facebook Lite is also compatible with older Android versions, starting from Android 2.3 Gingerbread.
-2019 facebook lite apk
Download Zip › https://urllie.com/2uNEGB
- What Are the Features of Facebook Lite?
-Facebook Lite offers many of the classic features of Facebook, such as sharing to a Timeline, liking photos, searching for people, and editing your profile and groups. However, it also has some unique features that make it different from the regular Facebook app. Here are some of them:
-
-- Installs fast: The app is smaller than 3 MB, so it's quick to download and uses less storage space.
-- Works on old Android phones: You can use it on older Android phones not supported by the regular Facebook app.
-- Uses less data: The app compresses images and videos, and lets you choose the quality of media you want to see. You can also turn off autoplay for videos to save more data.
-- Loads quickly: The app is optimized for speed and performance, so it loads faster and consumes less CPU and RAM.
-- Works on all networks: The app is designed for 2G networks and areas with slow or unstable internet connections. It also works well on Wi-Fi and 3G/4G networks.
-- Has built-in messenger: The app lets you chat with your friends without having to download a separate messenger app.
-- Supports dark mode: The app has a dark mode option that reduces eye strain and saves battery life.
-
- How to Download and Install Facebook Lite APK?
-If you want to try Facebook Lite on your Android device, you have two options: You can either download it from the Google Play Store or from a third-party website that offers APK files. APK stands for Android Package Kit, which is a file format that contains all the components of an Android app. Here are the steps to download and install Facebook Lite APK:
-
-- Go to a trusted website that offers APK files, such as Uptodown, APKCombo, or APKPure. Search for "Facebook Lite" and download the latest version of the app.
-- Before installing the APK file, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown sources and toggle it on.
-- Locate the downloaded APK file on your device using a file manager app. Tap on it and follow the instructions to install it.
-- Once installed, you can launch Facebook Lite from your app drawer or home screen.
-
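-If you prefer to sideload the file from a computer instead of a file manager, the same APK can usually be installed with Android's adb tool. This assumes USB debugging is enabled on the phone and adb is installed on the computer; the file name below is just a placeholder for whatever the downloaded APK is called:
-adb install facebook-lite.apk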
- What Are Some Pros and Cons of Facebook Lite?
-Facebook Lite has received mixed reviews from users who have tried it. Some users love it for its simplicity, speed, and data efficiency, while others complain about its low quality, limited functionality, and frequent bugs. Here are some of the pros and cons of Facebook Lite based on user reviews:
-
-
-| Pros | Cons |
-| --- | --- |
-| Uses less data and storage space | Has lower image and video quality |
-| Works on old and low-end devices | Lacks some features and functions of the regular app |
-| Loads faster and consumes less battery | Has more bugs and crashes more often |
-| Has built-in messenger and dark mode | Does not support some languages and fonts |
-| Works on all networks, even 2G | Does not sync well with other apps and services |
-
-
- As you can see, Facebook Lite has its benefits and drawbacks, depending on your preferences and needs. You can always try both apps and see which one suits you better.
- Conclusion: Is Facebook Lite Worth It?
-Facebook Lite is a great alternative to the regular Facebook app for users who want to save data, storage, and battery on their phones, or who live in areas with poor internet connectivity. It offers most of the essential features of Facebook, but with a smaller and simpler interface. However, it also sacrifices some quality, functionality, and stability in the process. Ultimately, the choice between Facebook Lite and Facebook depends on your personal preference and situation. If you want to give Facebook Lite a try, you can download it from the Google Play Store or from a third-party website that offers APK files.
- Frequently Asked Questions About Facebook Lite
- What is the difference between Facebook and Facebook Lite?
-Facebook Lite is a smaller and simpler version of Facebook that uses less data, storage, and battery. It works well on low-end devices and slow networks, but it also has lower quality, functionality, and stability.
- How do I download Facebook Lite?
-You can download Facebook Lite from the Google Play Store or from a third-party website that offers APK files. You need to enable unknown sources on your device before installing the APK file.
- Can I use both Facebook and Facebook Lite on the same device?
-Yes, you can use both apps on the same device without any problem. You can switch between them as you like.
-2019 facebook lite apk download
-2019 facebook lite apk free
-2019 facebook lite apk latest version
-2019 facebook lite apk update
-2019 facebook lite apk for android
-2019 facebook lite apk uptodown
-2019 facebook lite apk install
-2019 facebook lite apk old version
-2019 facebook lite apk mod
-2019 facebook lite apk offline
-2019 facebook lite apk pure
-2019 facebook lite apk mirror
-2019 facebook lite apk hack
-2019 facebook lite apk file
-2019 facebook lite apk app
-2019 facebook lite apk new version
-2019 facebook lite apk original
-2019 facebook lite apk without ads
-2019 facebook lite apk dark mode
-2019 facebook lite apk no internet
-2019 facebook lite apk size
-2019 facebook lite apk review
-2019 facebook lite apk features
-2019 facebook lite apk requirements
-2019 facebook lite apk data usage
-2019 facebook lite apk speed
-2019 facebook lite apk performance
-2019 facebook lite apk security
-2019 facebook lite apk privacy
-2019 facebook lite apk bugs
-2019 facebook lite apk fixes
-2019 facebook lite apk improvements
-2019 facebook lite apk changelog
-2019 facebook lite apk comparison
-2019 facebook lite apk vs regular app
-2019 facebook lite apk benefits
-2019 facebook lite apk advantages
-2019 facebook lite apk disadvantages
-2019 facebook lite apk pros and cons
-2019 facebook lite apk alternatives
-2019 facebook lite apk tips and tricks
-2019 facebook lite apk guide and tutorial
-2019 facebook lite apk how to use
-2019 facebook lite apk how to download and install
-2019 facebook lite apk how to update
-2019 facebook lite apk how to uninstall
-2019 facebook lite apk how to login
-2019 facebook lite apk how to access live videos
-2019 facebook lite apk how to save data and battery
- Can I use Facebook Lite on my iPhone?
-No, Facebook Lite is only available for Android devices. However, you can use the mobile web version of Facebook on your iPhone browser.
- Is Facebook Lite safe to use?
-Facebook Lite is safe to use as long as you download it from a trusted source. However, it may have some security issues due to its lower encryption and protection standards.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fengmuxi/ChatGpt-Web/app/config/build.ts b/spaces/fengmuxi/ChatGpt-Web/app/config/build.ts
deleted file mode 100644
index 79ed5d3e89496bd52f807d7657a57c2357d484a7..0000000000000000000000000000000000000000
--- a/spaces/fengmuxi/ChatGpt-Web/app/config/build.ts
+++ /dev/null
@@ -1,24 +0,0 @@
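-// Note: despite its name, COMMIT_ID is not a commit hash here. The git format "%at000" yields
-// the last commit's author timestamp (Unix seconds) with a literal "000" appended, so the value
-// reads as milliseconds; builds that are not run from a git checkout fall back to "unknown".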
-const COMMIT_ID: string = (() => {
- try {
- const childProcess = require("child_process");
- return childProcess
- .execSync('git log -1 --format="%at000" --date=unix')
- .toString()
- .trim();
- } catch (e) {
- console.error("[Build Config] No git or not from git repo.");
- return "unknown";
- }
-})();
-
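-// getBuildConfig exposes this build-time metadata to server-side code; the typeof check guards
-// against this Node-only module being imported from browser code.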
-export const getBuildConfig = () => {
- if (typeof process === "undefined") {
- throw Error(
- "[Server Config] you are importing a nodejs-only module outside of nodejs",
- );
- }
-
- return {
- commitId: COMMIT_ID,
- };
-};
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Clash of Clans Nulls APK for Free Latest Version.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Clash of Clans Nulls APK for Free Latest Version.md
deleted file mode 100644
index 4e0797f94ff88c7c9b5fbf8b759762ebce19a513..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Clash of Clans Nulls APK for Free Latest Version.md
+++ /dev/null
@@ -1,163 +0,0 @@
-
-Clash of Clans Nulls APK Indir: How to Download and Play the Private Server
- If you are a fan of Clash of Clans, you might have heard of Clash of Clans Nulls APK Indir. This is a way to download and play a private server version of the popular strategy game. But what is it exactly, and why should you try it? In this article, we will explain everything you need to know about Clash of Clans Nulls APK Indir, including how to download, install, and play it, as well as its pros and cons.
-clash of clans nulls apk indir
Download File ○○○ https://gohhs.com/2uPn0Q
- What is Clash of Clans Nulls APK Indir?
- Before we dive into the details, let's first understand what each term means.
- Clash of Clans: A Popular Strategy Game
- Clash of Clans is a free-to-play mobile game developed by Supercell. It was released in 2012 and has since become one of the most downloaded and played games in the world. The game involves building your own village, training your troops, joining or creating a clan, and fighting against other players in multiplayer battles. You can also participate in clan wars, events, seasons, and challenges to earn rewards and trophies.
- Nulls: A Private Server for Clash of Clans
- Nulls is a private server for Clash of Clans that is not affiliated with Supercell. It is run by independent developers who modify the game code to create their own version of the game. Nulls offers many features that are not available in the official version, such as unlimited resources, gems, custom mods, commands, and more. You can also play with other players who use Nulls on a separate server.
- APK Indir: A Way to Download Apps from Third-Party Websites
- APK Indir is a Turkish term that means "download APK". APK is the file format used by Android devices to install apps. Normally, you would download apps from the Google Play Store or other official sources. However, some apps are not available on these platforms, or they are restricted in some regions. In that case, you can download APK files from third-party websites that host them. However, you need to be careful about the source and the security of these files.
- Why Download and Play Clash of Clans Nulls APK Indir?
- Now that you know what Clash of Clans Nulls APK Indir is, you might be wondering why you should download and play it. Here are some reasons why you might enjoy it:
Unlimited Resources and Gems
- One of the main attractions of Clash of Clans Nulls APK Indir is that it gives you unlimited resources and gems. Resources are the basic currency of the game, such as gold, elixir, and dark elixir. You need them to build and upgrade your buildings, train your troops, and research new technologies. Gems are the premium currency of the game, which you can use to speed up processes, buy items, and unlock special features. However, in the official version, resources and gems are limited and hard to earn. You either have to wait for a long time, raid other players, complete tasks, or spend real money to get them. In Clash of Clans Nulls APK Indir, you don't have to worry about any of that. You can get as many resources and gems as you want for free. You can build your dream base, train your ultimate army, and experiment with different strategies without any limitations.
- Custom Mods and Features
- Another reason why you might like Clash of Clans Nulls APK Indir is that it offers many custom mods and features that are not available in the official version. Mods are modifications that change the gameplay, graphics, or mechanics of the game. For example, some mods allow you to use different troops, heroes, spells, or buildings that are not in the original game. Some mods also give you access to commands that let you control the game settings, such as changing the time, weather, season, or difficulty. Features are additional functions or options that enhance the game experience. For example, some features allow you to chat with other players, switch between servers, save your progress, or customize your profile. With Clash of Clans Nulls APK Indir, you can enjoy a variety of mods and features that make the game more fun and interesting.
- No Bans or Restrictions
- A final reason why you might want to try Clash of Clans Nulls APK Indir is that it has no bans or restrictions. In the official version, Supercell has strict rules and policies that govern the game. If you violate these rules, such as using cheats, hacks, bots, or third-party software, you might face consequences such as losing your account, getting banned from the game, or facing legal action. In Clash of Clans Nulls APK Indir, you don't have to worry about any of that. Since it is a private server that is not connected to Supercell's servers, you can play the game however you want without any fear of getting banned or punished.
- How to Download and Install Clash of Clans Nulls APK Indir?
- If you are interested in downloading and playing Clash of Clans Nulls APK Indir, here are the steps you need to follow:
- Step 1: Download the APK File from Nulls Website
- The first step is to download the APK file from Nulls website. You can find the link on their official website or on their social media pages. The file size is about 150 MB and it is updated regularly with new versions. Make sure you download the latest version to enjoy the most recent features and bug fixes.
- Step 2: Allow Unknown Sources on Your Device
- The second step is to allow unknown sources on your device. This is because Clash of Clans Nulls APK Indir is not from the Google Play Store or other official sources. Therefore, you need to enable your device to install apps from third-party websites. To do this, go to your device settings > security > unknown sources > enable. This will allow you to install apps from sources other than the Google Play Store.
- Step 3: Install the APK File and Launch the Game
- The third step is to install the APK file and launch the game. To do this, locate the downloaded file on your device storage > tap on it > install > open. This will install and launch Clash of Clans Nulls APK Indir on your device. You will see a different logo and interface than the official version. You will also need to grant some permissions for the app to run properly.
- How to Play Clash of Clans Nulls APK Indir?
- Once you have installed and launched Clash of Clans Nulls APK Indir on your device, you can start playing it like any other game. However, there are some differences from the official version that you should know:
- Create or Join a Clan
- In Clash of Clans Nulls APK Indir, you can create or join a clan with other players who use Nulls. A clan is a group of players who work together for a common goal, such as winning clan wars, donating and requesting troops, and chatting. To create or join a clan, tap on the clan icon on the bottom left of the screen > choose create or join > follow the instructions. You can also search for clans by name, tag, or location. You can join any clan you want without any requirements or restrictions. However, you can only play with other players who use Nulls, not with those who use the official version.
- Build and Upgrade Your Base
- In Clash of Clans Nulls APK Indir, you can build and upgrade your base with unlimited resources and gems. Your base is your home and your defense against enemy attacks. You can build various buildings, such as town hall, barracks, army camps, defenses, walls, traps, and more. You can also upgrade them to improve their functions and appearance. To build or upgrade a building, tap on an empty space or an existing building > choose the building or upgrade option > confirm. You can use gems to speed up the process or buy more builders. You can also use commands to instantly finish or max out your base.
- Attack and Defend Against Other Players
- In Clash of Clans Nulls APK Indir, you can attack and defend against other players who use Nulls. Attacking is the main way to earn trophies, loot, and glory in the game. You can attack other players' bases by using your troops, heroes, spells, and siege machines. You can also participate in clan wars, events, seasons, and challenges to earn more rewards and bonuses. To attack another player's base, tap on the attack icon on the bottom right of the screen > choose a mode > find a match > deploy your army > watch the battle. You can also use commands to get more troops or spells. Defending is the way to protect your base from enemy attacks. You can defend your base by using your buildings, walls, traps, and clan castle troops. You can also use shields or guard to prevent attacks for a certain period of time.
- What are the Pros and Cons of Clash of Clans Nulls APK Indir?
- As you can see, Clash of Clans Nulls APK Indir has many advantages over the official version. However, it also has some disadvantages that you should be aware of. Here are some pros and cons of Clash of Clans Nulls APK Indir:
- Pros
- One of the pros of Clash of Clans Nulls APK Indir is that it gives you more freedom and flexibility to play the game as you wish. You can enjoy unlimited resources and gems, custom mods and features, no bans or restrictions, and more. You can also experiment with different strategies and tactics without any risk or cost. You can also have more fun and excitement by playing with other players who use Nulls.
- To illustrate this point, here is a table that compares some aspects of Clash of Clans Nulls and the official version:
-
-| Aspect | Clash of Clans Nulls | Official Version |
-| --- | --- | --- |
-| Resources and Gems | Unlimited | Limited |
-| Mods and Features | Many | Few |
-| Bans or Restrictions | No | Yes |
-| Server | Private | Official |
-| Players | Nulls Users Only | All Users |
-| Updates | Frequent | Frequent |
-| Support | Limited | Full |
-| Security | Risky | Safe |
- Cons
- One of the cons of Clash of Clans Nulls APK Indir is that it has some risks and drawbacks that might affect your game experience. You might face issues such as bugs, glitches, crashes, lag, or data loss. You might also encounter hackers, cheaters, or trolls who ruin the game for others. You might also miss out on some features or events that are exclusive to the official version. You might also violate the terms and conditions of Supercell and lose your account or face legal action.
- To illustrate this point, here is a list of some drawbacks of Clash of Clans Nulls APK Indir:
-
-- You need to download the APK file from a third-party website, which might be unsafe or contain malware.
-- You need to allow unknown sources on your device, which might expose it to security threats or damage.
-- You need to uninstall the official version of the game, which might erase your progress or data.
-- You need to update the APK file manually every time a new version is released, which might be inconvenient or time-consuming.
-- You cannot play with players who use the official version of the game, which might limit your social interaction or competition.
-
- Conclusion
- Clash of Clans Nulls APK Indir is a private server version of the popular strategy game that offers unlimited resources and gems, custom mods and features, no bans or restrictions, and more. It is a way to download and play the game with more freedom and flexibility. However, it also has some risks and drawbacks that might affect your game experience. You need to be careful about the source and the security of the APK file, as well as the consequences of violating Supercell's rules and policies. You also need to be aware of the differences from the official version and decide whether they are worth it or not.
- If you are interested in trying Clash of Clans Nulls APK Indir, you can follow the steps we have provided in this article to download, install, and play it. However, if you prefer to stick to the official version, you can also enjoy the game with its original features and updates. The choice is yours.
- FAQs
- Here are some frequently asked questions about Clash of Clans Nulls APK Indir:
-
-- Is Clash of Clans Nulls APK Indir legal?
-Clash of Clans Nulls APK Indir is not legal, as it violates Supercell's terms and conditions. Supercell owns the rights and intellectual property of the game, and does not allow any unauthorized modification or distribution of it. Using Clash of Clans Nulls APK Indir might result in legal action from Supercell.
-- Is Clash of Clans Nulls APK Indir safe?
-Clash of Clans Nulls APK Indir is not safe, as it involves downloading an APK file from a third-party website that might be unsafe or contain malware. It also involves allowing unknown sources on your device that might expose it to security threats or damage. Using Clash of Clans Nulls APK Indir might result in data loss, device damage, or malware infection.
-- Can I play Clash of Clans Nulls APK Indir on PC?
-Yes, you can play Clash of Clans Nulls APK Indir on PC by using an Android emulator. An Android emulator is a software that allows you to run Android apps on your PC. You can download an Android emulator such as BlueStacks, NoxPlayer, or LDPlayer on your PC, and then install Clash of Clans Nulls APK Indir on it. However, you need to make sure that your PC meets the minimum requirements for running an Android emulator and Clash of Clans Nulls APK Indir.
-- Can I switch between Clash of Clans Nulls APK Indir and the official version?
-No, you cannot switch between Clash of Clans Nulls APK Indir and the official version easily. This is because they are two separate apps that run on different servers. To switch between them, you need to uninstall one app and install the other app every time. This might erase your progress or data on both apps. Therefore, it is not recommended to switch between them frequently.
-- Can I transfer my progress from Clash of Clans Nulls APK Indir to the official version?
-No, you cannot transfer your progress from Clash of Clans Nulls APK Indir to the official version. This is because they are two separate apps that run on different servers. They have different data formats and structures that are not compatible with each other. Therefore, you cannot transfer your progress from one app to another.
-
\ No newline at end of file
diff --git a/spaces/fffiloni/Image-to-MusicGen/tests/models/test_encodec_model.py b/spaces/fffiloni/Image-to-MusicGen/tests/models/test_encodec_model.py
deleted file mode 100644
index 2f9c1db3f69a45f02451b71da95f44356811acbb..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Image-to-MusicGen/tests/models/test_encodec_model.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import random
-
-import numpy as np
-import torch
-
-from audiocraft.models import EncodecModel
-from audiocraft.modules import SEANetEncoder, SEANetDecoder
-from audiocraft.quantization import DummyQuantizer
-
-
-class TestEncodecModel:
-
- def _create_encodec_model(self,
- sample_rate: int,
- channels: int,
- dim: int = 5,
- n_filters: int = 3,
- n_residual_layers: int = 1,
- ratios: list = [5, 4, 3, 2],
- **kwargs):
- frame_rate = np.prod(ratios)
- encoder = SEANetEncoder(channels=channels, dimension=dim, n_filters=n_filters,
- n_residual_layers=n_residual_layers, ratios=ratios)
- decoder = SEANetDecoder(channels=channels, dimension=dim, n_filters=n_filters,
- n_residual_layers=n_residual_layers, ratios=ratios)
- quantizer = DummyQuantizer()
- model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate,
- sample_rate=sample_rate, channels=channels, **kwargs)
- return model
-
- def test_model(self):
- random.seed(1234)
- sample_rate = 24_000
- channels = 1
- model = self._create_encodec_model(sample_rate, channels)
- for _ in range(10):
- length = random.randrange(1, 10_000)
- x = torch.randn(2, channels, length)
- res = model(x)
- assert res.x.shape == x.shape
-
- def test_model_renorm(self):
- random.seed(1234)
- sample_rate = 24_000
- channels = 1
- model_nonorm = self._create_encodec_model(sample_rate, channels, renormalize=False)
- model_renorm = self._create_encodec_model(sample_rate, channels, renormalize=True)
-
- for _ in range(10):
- length = random.randrange(1, 10_000)
- x = torch.randn(2, channels, length)
- codes, scales = model_nonorm.encode(x)
- codes, scales = model_renorm.encode(x)
- assert scales is not None
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/globals.global.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/globals.global.d.ts
deleted file mode 100644
index ef1198c05024940c44e3c1a6429c26091fe2a94f..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/globals.global.d.ts
+++ /dev/null
@@ -1 +0,0 @@
-declare var global: typeof globalThis;
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/limiter.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/limiter.js
deleted file mode 100644
index 3fd35784ea9ea59cff8c112b6556a89cde7f7b6f..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/limiter.js
+++ /dev/null
@@ -1,55 +0,0 @@
-'use strict';
-
-const kDone = Symbol('kDone');
-const kRun = Symbol('kRun');
-
-/**
- * A very simple job queue with adjustable concurrency. Adapted from
- * https://github.com/STRML/async-limiter
- */
-class Limiter {
- /**
- * Creates a new `Limiter`.
- *
- * @param {Number} [concurrency=Infinity] The maximum number of jobs allowed
- * to run concurrently
- */
- constructor(concurrency) {
- this[kDone] = () => {
- this.pending--;
- this[kRun]();
- };
- this.concurrency = concurrency || Infinity;
- this.jobs = [];
- this.pending = 0;
- }
-
- /**
- * Adds a job to the queue.
- *
- * @param {Function} job The job to run
- * @public
- */
- add(job) {
- this.jobs.push(job);
- this[kRun]();
- }
-
- /**
- * Removes a job from the queue and runs it if possible.
- *
- * @private
- */
- [kRun]() {
- if (this.pending === this.concurrency) return;
-
- if (this.jobs.length) {
- const job = this.jobs.shift();
-
- this.pending++;
- job(this[kDone]);
- }
- }
-}
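-
-// Illustrative usage (not part of the original module): allow at most two jobs to
-// run at once; each job receives a callback that it must call when it finishes.
-//
-//   const limiter = new Limiter(2);
-//   limiter.add((done) => setTimeout(done, 100));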
-
-module.exports = Limiter;
diff --git a/spaces/flatindo/scaler/realesrgan/models/realesrnet_model.py b/spaces/flatindo/scaler/realesrgan/models/realesrnet_model.py
deleted file mode 100644
index d11668f3712bffcd062c57db14d22ca3a0e1e59d..0000000000000000000000000000000000000000
--- a/spaces/flatindo/scaler/realesrgan/models/realesrnet_model.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import numpy as np
-import random
-import torch
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.models.sr_model import SRModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-from torch.nn import functional as F
-
-
-@MODEL_REGISTRY.register()
-class RealESRNetModel(SRModel):
- """RealESRNet Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It is trained without GAN losses.
- It mainly performs:
- 1. randomly synthesize LQ images in GPU tensors
-    2. optimize the networks with pixel-wise losses (no GAN training).
- """
-
- def __init__(self, opt):
- super(RealESRNetModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
- """It is the training pair pool for increasing the diversity in a batch.
-
- Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
- batch could not have different resize scaling factors. Therefore, we employ this training pair pool
- to increase the degradation diversity in a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- # USM sharpen the GT images
- if self.opt['gt_usm'] is True:
- self.gt = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- self.gt, self.lq = paired_random_crop(self.gt, self.lq, gt_size, self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRNetModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/multiroom_noisytv.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/multiroom_noisytv.py
deleted file mode 100644
index 9487b024dba8eb921c83dfb435f17a52693aa6b5..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/envs/multiroom_noisytv.py
+++ /dev/null
@@ -1,325 +0,0 @@
-from gym_minigrid.minigrid import *
-from gym_minigrid.register import register
-
-class Room:
- def __init__(self,
- top,
- size,
- entryDoorPos,
- exitDoorPos
- ):
- self.top = top
- self.size = size
- self.entryDoorPos = entryDoorPos
- self.exitDoorPos = exitDoorPos
-
-class MultiRoomNoisyTVEnv(MiniGridEnv):
- """
-    Environment with multiple rooms (subgoals), plus a "noisy TV": a ball placed in
-    the first room whose color the agent can randomize with the repurposed drop
-    action, i.e. a source of agent-controllable stochasticity.
- """
-
- def __init__(self,
- minNumRooms,
- maxNumRooms,
- maxRoomSize=10
- ):
- assert minNumRooms > 0
- assert maxNumRooms >= minNumRooms
- assert maxRoomSize >= 4
-
- self.minNumRooms = minNumRooms
- self.maxNumRooms = maxNumRooms
- self.maxRoomSize = maxRoomSize
-
- self.rooms = []
-
- super(MultiRoomNoisyTVEnv, self).__init__(
- grid_size=25,
- max_steps=self.maxNumRooms * 20
- )
-
- def _gen_grid(self, width, height):
- roomList = []
-
- # Choose a random number of rooms to generate
- numRooms = self._rand_int(self.minNumRooms, self.maxNumRooms+1)
-
- while len(roomList) < numRooms:
- curRoomList = []
-
- entryDoorPos = (
- self._rand_int(0, width - 2),
- self._rand_int(0, width - 2)
- )
-
- # Recursively place the rooms
- self._placeRoom(
- numRooms,
- roomList=curRoomList,
- minSz=4,
- maxSz=self.maxRoomSize,
- entryDoorWall=2,
- entryDoorPos=entryDoorPos
- )
-
- if len(curRoomList) > len(roomList):
- roomList = curRoomList
-
- # Store the list of rooms in this environment
- assert len(roomList) > 0
- self.rooms = roomList
-
- # Create the grid
- self.grid = Grid(width, height)
- wall = Wall()
-
- prevDoorColor = None
-
- # For each room
- for idx, room in enumerate(roomList):
-
- topX, topY = room.top
- sizeX, sizeY = room.size
-
- # Draw the top and bottom walls
- for i in range(0, sizeX):
- self.grid.set(topX + i, topY, wall)
- self.grid.set(topX + i, topY + sizeY - 1, wall)
-
- # Draw the left and right walls
- for j in range(0, sizeY):
- self.grid.set(topX, topY + j, wall)
- self.grid.set(topX + sizeX - 1, topY + j, wall)
-
- # Create the noisy-tv: a ball of arbitrary color that \
- # the agent can change with some action
- if idx == 0:
- self.noisy_tv = Ball(self._rand_elem(COLOR_NAMES))
- self.place_obj(
- self.noisy_tv,
- top=room.top,
- size=room.size,
- max_tries=100,
- )
-
- # If this isn't the first room, place the entry door
- if idx > 0:
- # Pick a door color different from the previous one
- doorColors = set(COLOR_NAMES)
- if prevDoorColor:
- doorColors.remove(prevDoorColor)
- # Note: the use of sorting here guarantees determinism,
- # This is needed because Python's set is not deterministic
- doorColor = self._rand_elem(sorted(doorColors))
-
- entryDoor = Door(doorColor)
- self.grid.set(*room.entryDoorPos, entryDoor)
- prevDoorColor = doorColor
-
- prevRoom = roomList[idx-1]
- prevRoom.exitDoorPos = room.entryDoorPos
-
- # Randomize the starting agent position and direction
- self.place_agent(roomList[0].top, roomList[0].size)
-
- # Place the final goal in the last room
- self.goal_pos = self.place_obj(Goal(), roomList[-1].top, roomList[-1].size)
-
- self.mission = 'traverse the rooms to get to the goal'
-
- def _placeRoom(
- self,
- numLeft,
- roomList,
- minSz,
- maxSz,
- entryDoorWall,
- entryDoorPos
- ):
- # Choose the room size randomly
- sizeX = self._rand_int(minSz, maxSz+1)
- sizeY = self._rand_int(minSz, maxSz+1)
-
- # The first room will be at the door position
- if len(roomList) == 0:
- topX, topY = entryDoorPos
- # Entry on the right
- elif entryDoorWall == 0:
- topX = entryDoorPos[0] - sizeX + 1
- y = entryDoorPos[1]
- topY = self._rand_int(y - sizeY + 2, y)
- # Entry wall on the south
- elif entryDoorWall == 1:
- x = entryDoorPos[0]
- topX = self._rand_int(x - sizeX + 2, x)
- topY = entryDoorPos[1] - sizeY + 1
- # Entry wall on the left
- elif entryDoorWall == 2:
- topX = entryDoorPos[0]
- y = entryDoorPos[1]
- topY = self._rand_int(y - sizeY + 2, y)
- # Entry wall on the top
- elif entryDoorWall == 3:
- x = entryDoorPos[0]
- topX = self._rand_int(x - sizeX + 2, x)
- topY = entryDoorPos[1]
- else:
- assert False, entryDoorWall
-
- # If the room is out of the grid, can't place a room here
- if topX < 0 or topY < 0:
- return False
- if topX + sizeX > self.width or topY + sizeY >= self.height:
- return False
-
- # If the room intersects with previous rooms, can't place it here
- for room in roomList[:-1]:
- nonOverlap = \
- topX + sizeX < room.top[0] or \
- room.top[0] + room.size[0] <= topX or \
- topY + sizeY < room.top[1] or \
- room.top[1] + room.size[1] <= topY
-
- if not nonOverlap:
- return False
-
- # Add this room to the list
- roomList.append(Room(
- (topX, topY),
- (sizeX, sizeY),
- entryDoorPos,
- None
- ))
-
- # If this was the last room, stop
- if numLeft == 1:
- return True
-
- # Try placing the next room
- for i in range(0, 8):
-
- # Pick which wall to place the out door on
- wallSet = set((0, 1, 2, 3))
- wallSet.remove(entryDoorWall)
- exitDoorWall = self._rand_elem(sorted(wallSet))
- nextEntryWall = (exitDoorWall + 2) % 4
-
- # Pick the exit door position
- # Exit on right wall
- if exitDoorWall == 0:
- exitDoorPos = (
- topX + sizeX - 1,
- topY + self._rand_int(1, sizeY - 1)
- )
- # Exit on south wall
- elif exitDoorWall == 1:
- exitDoorPos = (
- topX + self._rand_int(1, sizeX - 1),
- topY + sizeY - 1
- )
- # Exit on left wall
- elif exitDoorWall == 2:
- exitDoorPos = (
- topX,
- topY + self._rand_int(1, sizeY - 1)
- )
- # Exit on north wall
- elif exitDoorWall == 3:
- exitDoorPos = (
- topX + self._rand_int(1, sizeX - 1),
- topY
- )
- else:
- assert False
-
- # Recursively create the other rooms
- success = self._placeRoom(
- numLeft - 1,
- roomList=roomList,
- minSz=minSz,
- maxSz=maxSz,
- entryDoorWall=nextEntryWall,
- entryDoorPos=exitDoorPos
- )
-
- if success:
- break
-
- return True
-
- def step(self, action):
- self.step_count += 1
-
- reward = 0
- done = False
-
- # Get the position in front of the agent
- fwd_pos = self.front_pos
-
- # Get the contents of the cell in front of the agent
- fwd_cell = self.grid.get(*fwd_pos)
-
- # Rotate left
- if action == self.actions.left:
- self.agent_dir -= 1
- if self.agent_dir < 0:
- self.agent_dir += 4
-
- # Rotate right
- elif action == self.actions.right:
- self.agent_dir = (self.agent_dir + 1) % 4
-
- # Move forward
- elif action == self.actions.forward:
- if fwd_cell == None or fwd_cell.can_overlap():
- self.agent_pos = fwd_pos
- if fwd_cell != None and fwd_cell.type == 'goal':
- done = True
- reward = self._reward()
- if fwd_cell != None and fwd_cell.type == 'lava':
- done = True
-
- # Pickup an object -- here the agent should indeed pickup the ball if in front of a door
- elif action == self.actions.pickup:
- if fwd_cell and fwd_cell.can_pickup():
- if self.carrying is None:
- self.carrying = fwd_cell
- self.carrying.cur_pos = np.array([-1, -1])
- self.grid.set(*fwd_pos, None)
-
- # NOTE: there is no Drop action in this case
- # Instead, this action is used to randomly change the color of the noisy-tv
- elif action == self.actions.drop:
- self.noisy_tv.color = self._rand_elem(COLOR_NAMES)
-
- # Toggle/activate an object
- elif action == self.actions.toggle:
- if fwd_cell:
- fwd_cell.toggle(self, fwd_pos)
-
- # Done action (not used by default)
- elif action == self.actions.done:
- pass
-
- else:
- assert False, "unknown action"
-
- if self.step_count >= self.max_steps:
- done = True
-
- obs = self.gen_obs()
-
- return obs, reward, done, {}
-
-class MultiRoomNoisyTVEnvN7S4(MultiRoomNoisyTVEnv):
- def __init__(self):
- super().__init__(
- minNumRooms=7,
- maxNumRooms=7,
- maxRoomSize=4
- )
-
-register(
- id='MiniGrid-MultiRoomNoisyTV-N7-S4-v0',
- entry_point='gym_minigrid.envs:MultiRoomNoisyTVEnvN7S4'
-)
\ No newline at end of file
diff --git a/spaces/foduucom/product-detect-in-shelf-yolov8/app.py b/spaces/foduucom/product-detect-in-shelf-yolov8/app.py
deleted file mode 100644
index 581e3b3ff433d230bf392204dcec8cbbedde2a0d..0000000000000000000000000000000000000000
--- a/spaces/foduucom/product-detect-in-shelf-yolov8/app.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import gradio as gr
-import cv2
-
-from ultralyticsplus import YOLO, render_result
-
-# Model Heading and Description
-model_heading = "ProductDetect-o-Matic: Shelf Wizard for Retail Magic"
-description = """ 🛒 Prepare to be amazed by ProductDetect-o-Matic! 🪄 Unveil the secrets of your shelves and keep products in check. From the mystical 'Empty Shelf' to the enigmatic 'Magical Product', our model is here to bring retail magic to life. Created by Foduu AI, your ultimate sidekick for retail success. 💼🎩✨
-🛍️ Ready to experience the retail revolution? Reach out at info@foddu.com and let's make your shelves enchanting! Giving a thumbs up won't make you a retail wizard, but it's a step closer to making your shelves smile! 🚀👍🪄
-📧 Contact us: info@foddu.com
-👍 Like | Join the Retail Adventure!"""
-
-image_path = [
- ['test/test1.jpg', 'foduucom/product-detection-in-shelf-yolov8', 640, 0.25, 0.45],
- ['test/test2.jpg', 'foduucom/product-detection-in-shelf-yolov8', 640, 0.25, 0.45]
-]
-
-# Load YOLO model
-model = YOLO('foduucom/product-detection-in-shelf-yolov8')
-
-############################################################# Image Inference ############################################################
-def yolov8_img_inference(
- image: gr.inputs.Image = None,
- model_path: gr.inputs.Dropdown = None,
- image_size: gr.inputs.Slider = 640,
- conf_threshold: gr.inputs.Slider = 0.25,
- iou_threshold: gr.inputs.Slider = 0.45,
-):
- model = YOLO(model_path)
- model.overrides['conf'] = conf_threshold
- model.overrides['iou'] = iou_threshold
- model.overrides['agnostic_nms'] = False
- model.overrides['max_det'] = 1000
- results = model.predict(image)
- render = render_result(model=model, image=image, result=results[0])
-
- return render
-
-
-inputs_image = [
- gr.inputs.Image(type="filepath", label="Input Image"),
- gr.inputs.Dropdown(["foduucom/product-detection-in-shelf-yolov8"], default="foduucom/product-detection-in-shelf-yolov8", label="Model"),
- gr.inputs.Slider(minimum=320, maximum=1280, default=640, step=32, label="Image Size"),
- gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.25, step=0.05, label="Confidence Threshold"),
- gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.45, step=0.05, label="IOU Threshold"),
-]
-
-outputs_image = gr.outputs.Image(type="filepath", label="Output Image")
-interface_image = gr.Interface(
- fn=yolov8_img_inference,
- inputs=inputs_image,
- outputs=outputs_image,
- title=model_heading,
- description=description,
- examples=image_path,
- cache_examples=False,
- theme='huggingface'
-)
-
-################################################## Video Inference ############################################################
-def show_preds_video(
- video_path: str = None,
- model_path: str = None,
- image_size: int = 640,
- conf_threshold: float = 0.25,
- iou_threshold: float = 0.45,
-):
-    cap = cv2.VideoCapture(video_path)
-
-    # Load the model once, outside the frame loop, instead of re-creating it per frame
-    model = YOLO(model_path)
-    model.overrides['conf'] = conf_threshold
-    model.overrides['iou'] = iou_threshold
-    model.overrides['agnostic_nms'] = False
-    model.overrides['max_det'] = 1000
-
-    annotated_frame = None
-    while cap.isOpened():
-        success, frame = cap.read()
-        if not success:
-            break
-        results = model.predict(frame)
-        annotated_frame = results[0].plot()
-
-    cap.release()
-
-    # The interface output expects a filepath, so save the last annotated frame
-    # to disk and return its path (cv2.waitKey/destroyAllWindows are GUI calls
-    # that serve no purpose in a headless Gradio app).
-    if annotated_frame is None:
-        return None
-    out_path = "annotated_last_frame.jpg"
-    cv2.imwrite(out_path, annotated_frame)
-    return out_path
-
-
-inputs_video = [
- gr.components.Video(type="filepath", label="Input Video"),
- gr.inputs.Dropdown(["foduucom/product-detection-in-shelf-yolov8"], default="foduucom/product-detection-in-shelf-yolov8", label="Model"),
- gr.inputs.Slider(minimum=320, maximum=1280, default=640, step=32, label="Image Size"),
- gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.25, step=0.05, label="Confidence Threshold"),
- gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.45, step=0.05, label="IOU Threshold"),
-]
-
-outputs_video = gr.outputs.Image(type="filepath", label="Output Video")
-video_path = [['test/testvideo.mp4', 'foduucom/product-detection-in-shelf-yolov8', 640, 0.25, 0.45]]
-interface_video = gr.Interface(
- fn=show_preds_video,
- inputs=inputs_video,
- outputs=outputs_video,
- title=model_heading,
- description=description,
- examples=video_path,
- cache_examples=False,
- theme='huggingface'
-)
-
-gr.TabbedInterface(
- [interface_image, interface_video],
- tab_names=['Image inference', 'Video inference']
-).queue().launch()
diff --git a/spaces/gentlemanhu/succinctly-text2image-prompt-generator/README.md b/spaces/gentlemanhu/succinctly-text2image-prompt-generator/README.md
deleted file mode 100644
index aa625cc29deb32088a2c985ae250faa98b92a937..0000000000000000000000000000000000000000
--- a/spaces/gentlemanhu/succinctly-text2image-prompt-generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Succinctly Text2image Prompt Generator
-emoji: 🌍
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.28.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/deprecated_wrappers.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/deprecated_wrappers.py
deleted file mode 100644
index a2e593df9ee57637038683d7a1efaa347b2b69e7..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/deprecated_wrappers.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# This file is for backward compatibility.
-# Module wrappers for empty tensor have been moved to mmcv.cnn.bricks.
-import warnings
-
-from ..cnn.bricks.wrappers import Conv2d, ConvTranspose2d, Linear, MaxPool2d
-
-
-class Conv2d_deprecated(Conv2d):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- warnings.warn(
- 'Importing Conv2d wrapper from "mmcv.ops" will be deprecated in'
- ' the future. Please import them from "mmcv.cnn" instead')
-
-
-class ConvTranspose2d_deprecated(ConvTranspose2d):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- warnings.warn(
- 'Importing ConvTranspose2d wrapper from "mmcv.ops" will be '
- 'deprecated in the future. Please import them from "mmcv.cnn" '
- 'instead')
-
-
-class MaxPool2d_deprecated(MaxPool2d):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- warnings.warn(
- 'Importing MaxPool2d wrapper from "mmcv.ops" will be deprecated in'
- ' the future. Please import them from "mmcv.cnn" instead')
-
-
-class Linear_deprecated(Linear):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- warnings.warn(
- 'Importing Linear wrapper from "mmcv.ops" will be deprecated in'
- ' the future. Please import them from "mmcv.cnn" instead')
diff --git a/spaces/giridharvaruganti/facial-keypoints-detection/utils.py b/spaces/giridharvaruganti/facial-keypoints-detection/utils.py
deleted file mode 100644
index a430bf655f990608cb655c68e1f0e7c0faafa72c..0000000000000000000000000000000000000000
--- a/spaces/giridharvaruganti/facial-keypoints-detection/utils.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import os
-import numpy as np
-import torch
-from torch import nn
-from efficientnet_pytorch import EfficientNet
-
-import cv2
-import PIL
-from PIL import Image
-import matplotlib.pyplot as plt
-import albumentations as A
-from albumentations.pytorch import ToTensorV2
-
-
-DEVICE = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
-
-tfms = A.Compose(
- [
- A.Resize(height=96, width=96),
- ToTensorV2(),
- ]
-)
-
-model = EfficientNet.from_pretrained('efficientnet-b0')
-model._fc = nn.Linear(model._fc.in_features, 30)
-model = model.to(DEVICE)
-
-model.load_state_dict(
- torch.load('models/efficientnet-b0.pth',
- map_location=torch.device('cpu')
- ) )
-
-def findFaceCoordinates(pil_img):
- img_np_array = np.array(pil_img)
-
- face_classifier = cv2.CascadeClassifier(
- cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
- )
-
- faces = face_classifier.detectMultiScale(
- img_np_array, scaleFactor=1.1, minNeighbors=10, minSize=(50, 50)
- )
-
- return faces
-
-def detectFacialKeypointsOnAImage(img_array, model):
- model.eval()
- img_array = img_array.to(torch.float32) / 255.0
- ind_preds = torch.clip(model(img_array.unsqueeze(0)), 0.0, 96.0)
-
- keypoints = ind_preds.detach().numpy()
- keypoints = keypoints.reshape([15,2])
- return keypoints
-
-def findAllFaces(pil_img):
- faces = findFaceCoordinates(pil_img)
-
- if len(faces) >= 1:
- size = np.sqrt(faces.shape[0])
- if int(size)*int(size) <= faces.shape[0]:
- size = int(size) + 1
-
- img_np_array = np.array(pil_img)
- plt.figure(figsize=(20, 20))
- for ind, face in enumerate(faces):
- face = face.tolist()
- x1,y1 = face[0], face[1]
- x2, y2 = x1+face[2], y1 + face[3]
-
- face_img = img_np_array[y1:y2, x1:x2 , :]
- transformed = tfms(image = face_img)
- transformed_image = transformed['image']
-
- keypoints = detectFacialKeypointsOnAImage(transformed_image, model)
-
- plt.subplot(size, size, ind+1)
- plt.imshow(transformed_image.permute(1,2,0))
- plt.plot(keypoints[:,0], keypoints[:,1], 'ro')
- plt.axis('off')
- plt.tight_layout()
- plt.savefig('output/res.png', bbox_inches='tight', pad_inches=0)
- # plt.show()
- final_img = Image.open('output/res.png')
- text_out = f"{len(faces)} faces detected"
- return text_out, final_img
-    else:
-        # No faces found: render a plain white placeholder instead of saving a stale figure
-        white_image = np.full((200, 300, 3), 255, dtype=np.uint8)
-        plt.figure()
-        plt.imshow(white_image)
-        plt.axis('off')
-        plt.savefig('output/res.png', bbox_inches='tight', pad_inches=0)
-        final_img = Image.open('output/res.png')
-        text_out = f"{len(faces)} faces detected"
-        return text_out, final_img
-
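-# Illustrative usage (not part of the original file):
-#   from PIL import Image
-#   text, annotated = findAllFaces(Image.open("some_photo.jpg"))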
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/README.md b/spaces/gotiQspiryo/whisper-ui/README.md
deleted file mode 100644
index 7c19b930f0f4129a4711b7623ab3b5eda1d8573c..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Whisper Ui
-emoji: 📈
-colorFrom: indigo
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-duplicated_from: srcaballero99/whisper-ui
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gotiQspiryo/whisper-ui/Simatic Net Pc Software V8.1 Torrent 40.md b/spaces/gotiQspiryo/whisper-ui/Simatic Net Pc Software V8.1 Torrent 40.md
deleted file mode 100644
index bf1d660a6a306c1d72656d9e215e8dc32c0cea4b..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/Simatic Net Pc Software V8.1 Torrent 40.md
+++ /dev/null
@@ -1,69 +0,0 @@
-## Simatic Net Pc Software V8.1 Torrent 40
-
-
-
-**Download ✑ ✑ ✑ [https://mauletnaci.blogspot.com/?download=2twtYY](https://mauletnaci.blogspot.com/?download=2twtYY)**
-
-
-
-# How to Download and Install SIMATIC NET PC Software V8.1
-
-
-
-SIMATIC NET PC Software is a software package that enables communication between PCs and industrial devices such as PLCs, drives, sensors, etc. It supports various protocols such as PROFIBUS, PROFINET, Industrial Ethernet, OPC UA, etc. SIMATIC NET PC Software V8.1 is a version that was released in 2012 and is compatible with Windows XP, Windows 7 and Windows Server 2008 R2.
-
-
-
-To download and install SIMATIC NET PC Software V8.1, you need to follow these steps:
-
-
-
-1. Go to the Siemens Industry Online Support website[^2^] and search for "SIMATIC NET PC Software V8.1".
-
-2. Select the document "Download of the SIMATIC NET PC Software V8.1" from the search results.
-
-3. Click on the link "SIMATIC\_NET\_PC\_Software\_V8\_1.exe" to download the installation file. The file size is about 780 MB.
-
-4. Run the installation file and follow the instructions on the screen. You may need to enter a license key or a product key during the installation process.
-
-5. Restart your PC after the installation is completed.
-
-
-
-You have successfully downloaded and installed SIMATIC NET PC Software V8.1 on your PC.
-
-
-
-After you have downloaded and installed SIMATIC NET PC Software V8.1, you can use it to configure and monitor your communication network. Here are some tips on how to use SIMATIC NET PC Software:
-
-
-
-- To configure your network, you need to use the SIMATIC NET PC Station Wizard, which guides you through the steps of creating a PC station, selecting a communication processor (CP), assigning an IP address, setting up communication connections, etc. You can also use the SIMATIC NET PC Station Editor to modify or extend your configuration manually.
-
-- To monitor your network, you can use the SIMATIC NET Diagnostics Tool, which displays the status and performance of your CPs and connections. You can also use the SIMATIC NET OPC Scout or OPC Test Client to test the OPC functionality of your CPs.
-
-- To troubleshoot your network, you can use the SIMATIC NET Trace Tool, which records and analyzes the data traffic on your network. You can also use the SIMATIC NET S7 Protocol Suite to simulate S7 communication with other devices.
-
-
-
-For more information on how to use SIMATIC NET PC Software, you can refer to the online help or the documentation that is included in the installation file[^1^].
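-
-If you would rather script a quick OPC check than use OPC Scout, the sketch below shows one way to read a single value over OPC UA from Python using the third-party python-opcua package. This client is not part of SIMATIC NET itself, and the endpoint URL and node id are placeholders that you would replace with the values from your own PC station configuration.
-
-```python
-# Minimal OPC UA read sketch (third-party python-opcua client, not a SIMATIC NET tool).
-# The endpoint URL and node id are placeholders for your own configuration.
-from opcua import Client
-
-client = Client("opc.tcp://192.168.0.10:4840")  # placeholder endpoint of the OPC UA server
-client.connect()
-try:
-    node = client.get_node("ns=3;s=MyPlcTag")   # placeholder node id
-    print("Current value:", node.get_value())
-finally:
-    client.disconnect()
-```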
-
-
-
-SIMATIC NET PC Software is a powerful and versatile tool that can help you to optimize your communication network and achieve your automation goals. Here are some of the benefits of using SIMATIC NET PC Software:
-
-
-
-- It supports a wide range of communication protocols, such as Industrial Ethernet, PROFINET, PROFIBUS, OPC UA, OPC DA, OPC AE, etc. You can use SIMATIC NET PC Software to communicate with various devices from Siemens and other manufacturers, such as PLCs, drives, sensors, HMI panels, etc.
-
-- It provides a user-friendly and consistent interface for configuring and monitoring your network. You can use SIMATIC NET PC Station Wizard and Editor to create and modify your PC station configuration, SIMATIC NET Diagnostics Tool to check the status and performance of your network, SIMATIC NET Trace Tool to record and analyze the data traffic on your network, etc.
-
-- It integrates seamlessly with other Siemens software products, such as STEP 7 Professional, TIA Portal, WinCC, PCS 7, etc. You can use SIMATIC NET PC Software to access the data and functions of your devices from these software products. For example, you can use SIMATIC NET OPC Scout or OPC Test Client to test the OPC functionality of your devices from STEP 7 Professional or TIA Portal.
-
-- It offers high reliability and security for your network. You can use SIMATIC NET PC Software to implement redundant and fail-safe communication solutions, such as HARDNET-IE S7 REDCONNECT or SOFTNET-IE S7 REDCONNECT for Industrial Ethernet, or HARDNET-PB S7 or SOFTNET-PB S7 for PROFIBUS. You can also use SIMATIC NET PC Software to encrypt and authenticate your data using SSL/TLS or certificates.
-
-
-
-With SIMATIC NET PC Software, you can benefit from the advantages of a flexible, efficient and secure communication network for your automation tasks.
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Ableton Live 9.7 6 Download The Ultimate Guide to Installing and Authorizing Live on Your Computer.md b/spaces/gotiQspiryo/whisper-ui/examples/Ableton Live 9.7 6 Download The Ultimate Guide to Installing and Authorizing Live on Your Computer.md
deleted file mode 100644
index a3ff16b078902a1ccdb63bfa7140d19395c82e32..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Ableton Live 9.7 6 Download The Ultimate Guide to Installing and Authorizing Live on Your Computer.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-This software is no longer available for download. This could be because the program has been discontinued, has a security issue, or for other reasons.
-Today, there are thousands of VST plugins out there, for all kinds of sounds and purposes, both free and paid. Many websites offer free plugins for download, and the most popular one to date is KVRaudio.
-Ableton Live 9.7 6 Download
DOWNLOAD ✵✵✵ https://urlgoal.com/2uyMwV
-Our Products are generally provided as Zip file downloads which in all cases will need to be extracted and saved to your hard drive prior to installation. Details on the different file type provided and what to do with them are below.
-Ableton Live 9.7.6 is a multitrack audio and MIDI sequencer oriented toward live performance, but equally suited to studio work. It lets you record, edit, and combine songs in an advanced way. Audio sequencers, also known as DAWs, have usually been complicated programs for the average user, which kept many people from taking advantage of their benefits. Live offers exceptional sound quality.
-Ableton Live can be downloaded from the Ableton website.
The trial version is fully functional for 90 days. Ableton Live can however also be run in Demo Mode. Live is fully functional in Demo mode with the exception that saving and exporting are disabled. Since these functions are not required Demo Mode will not affect running the supplied set in any way.
-Once downloaded, double click the Live Pack or drag it onto the main Ableton window to install it. (You will be asked for a location in which you would like the AsthmaRunner9-3 folder to be placed.)
-
-The template was made for Ableton Live 9, using Xfer's Serum and Nicky Romero's Kickstart sidechain compression VST plugin, and it also makes use of stock FX and stock instruments. Tech House style samples for beginners are included, along with a full arrangement beat, fully mixed and mastered and ready for you to download and start playing!
-
-
\ No newline at end of file
diff --git a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh b/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh
deleted file mode 100644
index e9a80001eb47d5af863d6aab11a59362a59cef61..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-#!/bin/bash
-
-sil_prob=0.5
-num_sil_states=3
-num_nonsil_states=1
-
-. ./cmd.sh
-. ./path.sh
-. parse_options.sh
-
-set -eux
-
-dict=$1
-data_dir=$2
-
-dict_dir=$data_dir/local/dict
-tmplm_dir=$data_dir/local/lang_tmp
-lm_dir=$data_dir/lang
-
-mkdir -p $dict_dir $tmplm_dir $lm_dir
-
-# prepare dict
-echo "SIL" > $dict_dir/silence_phones.txt
-echo "SIL" > $dict_dir/optional_silence.txt
-awk '{print $1}' $dict > $dict_dir/nonsilence_phones.txt
-
-echo "SIL SIL" > $dict_dir/lexicon.txt
-echo "<UNK> SIL" >> $dict_dir/lexicon.txt
-awk '{print $1" "$1}' $dict >> $dict_dir/lexicon.txt
-
-echo "SIL" > $dict_dir/extra_questions.txt
-awk '{printf $1" "} END {printf "\n"}' $dict >> $dict_dir/extra_questions.txt
-
-# prepare lang
-utils/prepare_lang.sh --sil-prob $sil_prob --position-dependent-phones false \
- --num_sil_states $num_sil_states --num_nonsil_states $num_nonsil_states \
-  $dict_dir "<UNK>" $tmplm_dir $lm_dir
diff --git a/spaces/gradio/HuBERT/fairseq/modules/layer_drop.py b/spaces/gradio/HuBERT/fairseq/modules/layer_drop.py
deleted file mode 100644
index 8961d8bcbc492c40c6b30973234416ce5a414f5a..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/modules/layer_drop.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-LayerDrop as described in https://arxiv.org/abs/1909.11556.
-"""
-
-import torch
-import torch.nn as nn
-
-
-class LayerDropModuleList(nn.ModuleList):
- """
- A LayerDrop implementation based on :class:`torch.nn.ModuleList`.
-
- We refresh the choice of which layers to drop every time we iterate
- over the LayerDropModuleList instance. During evaluation we always
- iterate over all layers.
-
- Usage::
-
- layers = LayerDropList(p=0.5, modules=[layer1, layer2, layer3])
- for layer in layers: # this might iterate over layers 1 and 3
- x = layer(x)
- for layer in layers: # this might iterate over all layers
- x = layer(x)
- for layer in layers: # this might not iterate over any layers
- x = layer(x)
-
- Args:
- p (float): probability of dropping out each layer
- modules (iterable, optional): an iterable of modules to add
- """
-
- def __init__(self, p, modules=None):
- super().__init__(modules)
- self.p = p
-
- def __iter__(self):
- dropout_probs = torch.empty(len(self)).uniform_()
- for i, m in enumerate(super().__iter__()):
- if not self.training or (dropout_probs[i] > self.p):
- yield m
diff --git a/spaces/gradio/HuBERT/fairseq/tasks/sentence_prediction.py b/spaces/gradio/HuBERT/fairseq/tasks/sentence_prediction.py
deleted file mode 100644
index 6732728de981da7174eae32ecaf4c47901d65399..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/tasks/sentence_prediction.py
+++ /dev/null
@@ -1,286 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-
-import numpy as np
-from fairseq import utils
-from fairseq.data import (
- ConcatSentencesDataset,
- Dictionary,
- IdDataset,
- NestedDictionaryDataset,
- NumelDataset,
- NumSamplesDataset,
- OffsetTokensDataset,
- PrependTokenDataset,
- RawLabelDataset,
- RightPadDataset,
- RollDataset,
- SortDataset,
- StripTokenDataset,
- data_utils,
-)
-from fairseq.data.shorten_dataset import maybe_shorten_dataset
-from fairseq.tasks import LegacyFairseqTask, register_task
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_task("sentence_prediction")
-class SentencePredictionTask(LegacyFairseqTask):
- """
- Sentence (or sentence pair) prediction (classification or regression) task.
-
- Args:
- dictionary (Dictionary): the dictionary for the input of the task
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- parser.add_argument("data", metavar="FILE", help="file prefix for data")
- parser.add_argument(
- "--num-classes",
- type=int,
- default=-1,
- help="number of classes or regression targets",
- )
- parser.add_argument(
- "--init-token",
- type=int,
- default=None,
- help="add token at the beginning of each batch item",
- )
- parser.add_argument(
- "--separator-token",
- type=int,
- default=None,
- help="add separator token between inputs",
- )
- parser.add_argument("--regression-target", action="store_true", default=False)
- parser.add_argument("--no-shuffle", action="store_true", default=False)
- parser.add_argument(
- "--shorten-method",
- default="none",
- choices=["none", "truncate", "random_crop"],
- help="if not none, shorten sequences that exceed --tokens-per-sample",
- )
- parser.add_argument(
- "--shorten-data-split-list",
- default="",
- help="comma-separated list of dataset splits to apply shortening to, "
- 'e.g., "train,valid" (default: all dataset splits)',
- )
- parser.add_argument(
- "--add-prev-output-tokens",
- action="store_true",
- default=False,
- help="add prev_output_tokens to sample, used for encoder-decoder arch",
- )
-
- def __init__(self, args, data_dictionary, label_dictionary):
- super().__init__(args)
- self.dictionary = data_dictionary
- self._label_dictionary = label_dictionary
- if not hasattr(args, "max_positions"):
- self._max_positions = (
- args.max_source_positions,
- args.max_target_positions,
- )
- else:
- self._max_positions = args.max_positions
- args.tokens_per_sample = self._max_positions
-
- @classmethod
- def load_dictionary(cls, args, filename, source=True):
- """Load the dictionary from the filename
-
- Args:
- filename (str): the filename
- """
- dictionary = Dictionary.load(filename)
-        dictionary.add_symbol("<mask>")
- return dictionary
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- assert args.num_classes > 0, "Must set --num-classes"
-
- # load data dictionary
- data_dict = cls.load_dictionary(
- args,
- os.path.join(args.data, "input0", "dict.txt"),
- source=True,
- )
- logger.info("[input] dictionary: {} types".format(len(data_dict)))
-
- # load label dictionary
- if not args.regression_target:
- label_dict = cls.load_dictionary(
- args,
- os.path.join(args.data, "label", "dict.txt"),
- source=False,
- )
- logger.info("[label] dictionary: {} types".format(len(label_dict)))
- else:
- label_dict = data_dict
- return cls(args, data_dict, label_dict)
-
- def load_dataset(self, split, combine=False, **kwargs):
- """Load a given dataset split (e.g., train, valid, test)."""
-
- def get_path(key, split):
- return os.path.join(self.args.data, key, split)
-
- def make_dataset(key, dictionary):
- split_path = get_path(key, split)
-
- try:
- dataset = data_utils.load_indexed_dataset(
- split_path,
- dictionary,
- self.args.dataset_impl,
- combine=combine,
- )
- except Exception as e:
- if "StorageException: [404] Path not found" in str(e):
- logger.warning(f"dataset {e} not found")
- dataset = None
- else:
- raise e
- return dataset
-
- input0 = make_dataset("input0", self.source_dictionary)
- assert input0 is not None, "could not find dataset: {}".format(
- get_path("input0", split)
- )
- input1 = make_dataset("input1", self.source_dictionary)
-
- if self.args.init_token is not None:
- input0 = PrependTokenDataset(input0, self.args.init_token)
-
- if input1 is None:
- src_tokens = input0
- else:
- if self.args.separator_token is not None:
- input1 = PrependTokenDataset(input1, self.args.separator_token)
-
- src_tokens = ConcatSentencesDataset(input0, input1)
-
- with data_utils.numpy_seed(self.args.seed):
- shuffle = np.random.permutation(len(src_tokens))
-
- src_tokens = maybe_shorten_dataset(
- src_tokens,
- split,
- self.args.shorten_data_split_list,
- self.args.shorten_method,
- self.max_positions(),
- self.args.seed,
- )
-
- dataset = {
- "id": IdDataset(),
- "net_input": {
- "src_tokens": RightPadDataset(
- src_tokens,
- pad_idx=self.source_dictionary.pad(),
- ),
- "src_lengths": NumelDataset(src_tokens, reduce=False),
- },
- "nsentences": NumSamplesDataset(),
- "ntokens": NumelDataset(src_tokens, reduce=True),
- }
-
- if self.args.add_prev_output_tokens:
- prev_tokens_dataset = RightPadDataset(
- RollDataset(src_tokens, 1),
- pad_idx=self.dictionary.pad(),
- )
- dataset["net_input"].update(
- prev_output_tokens=prev_tokens_dataset,
- )
-
- if not self.args.regression_target:
- label_dataset = make_dataset("label", self.label_dictionary)
- if label_dataset is not None:
- dataset.update(
- target=OffsetTokensDataset(
- StripTokenDataset(
- label_dataset,
- id_to_strip=self.label_dictionary.eos(),
- ),
- offset=-self.label_dictionary.nspecial,
- )
- )
- else:
- label_path = "{0}.label".format(get_path("label", split))
- if os.path.exists(label_path):
-
- def parse_regression_target(i, line):
- values = line.split()
- assert (
- len(values) == self.args.num_classes
- ), f'expected num_classes={self.args.num_classes} regression target values on line {i}, found: "{line}"'
- return [float(x) for x in values]
-
- with open(label_path) as h:
- dataset.update(
- target=RawLabelDataset(
- [
- parse_regression_target(i, line.strip())
- for i, line in enumerate(h.readlines())
- ]
- )
- )
-
- nested_dataset = NestedDictionaryDataset(
- dataset,
- sizes=[src_tokens.sizes],
- )
-
- if self.args.no_shuffle:
- dataset = nested_dataset
- else:
- dataset = SortDataset(
- nested_dataset,
- # shuffle
- sort_order=[shuffle],
- )
-
- logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset)))
-
- self.datasets[split] = dataset
- return self.datasets[split]
-
- def build_model(self, args):
- from fairseq import models
-
- model = models.build_model(args, self)
-
- model.register_classification_head(
- getattr(args, "classification_head_name", "sentence_classification_head"),
- num_classes=self.args.num_classes,
- )
-
- return model
-
- def max_positions(self):
- return self._max_positions
-
- @property
- def source_dictionary(self):
- return self.dictionary
-
- @property
- def target_dictionary(self):
- return self.dictionary
-
- @property
- def label_dictionary(self):
- return self._label_dictionary
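
For readers wiring this task up, the directory layout it expects can be read off `setup_task()` and `load_dataset()` above. The sketch below is only an illustration; the root path and split names are placeholders, not values from the original code:

```python
import os

data_root = "/path/to/task-data"  # hypothetical --data argument

expected = [
    os.path.join(data_root, "input0", "dict.txt"),    # source dictionary (always required)
    os.path.join(data_root, "input0", "train"),       # binarized input0 split (per --dataset-impl)
    os.path.join(data_root, "input1", "train"),       # optional second sentence for pair tasks
    os.path.join(data_root, "label", "dict.txt"),     # label dictionary (classification only)
    os.path.join(data_root, "label", "train"),        # binarized labels (classification)
    os.path.join(data_root, "label", "train.label"),  # raw regression targets (--regression-target)
]
for path in expected:
    print(path)
```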
diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/__init__.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/h2oai/wave-tour/examples/markup.py b/spaces/h2oai/wave-tour/examples/markup.py
deleted file mode 100644
index a8abf591e76842dd3a652ddabc0633146f59ee92..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/markup.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Markup
-# Use a #markup card to display formatted content using #HTML.
-# ---
-from h2o_wave import site, ui
-
-page = site['/demo']
-
-menu = '''
-<ol>
-  <li>Spam</li>
-  <li>Ham</li>
-  <li>Eggs</li>
-</ol>
-'''
-
-page['example'] = ui.markup_card(
- box='1 1 2 2',
- title='Menu',
- content=menu,
-)
-page.save()
diff --git a/spaces/hamacojr/SAM-CAT-Seg/datasets/prepare_ade20k_150.py b/spaces/hamacojr/SAM-CAT-Seg/datasets/prepare_ade20k_150.py
deleted file mode 100644
index c001db4bdf17a1b03693aaa60b8ced153e081c6c..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/datasets/prepare_ade20k_150.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-import os
-from pathlib import Path
-
-import numpy as np
-import tqdm
-from PIL import Image
-
-
-def convert(input, output):
- img = np.asarray(Image.open(input))
- assert img.dtype == np.uint8
- img = img - 1 # 0 (ignore) becomes 255. others are shifted by 1
- Image.fromarray(img).save(output)
-
-
-if __name__ == "__main__":
- dataset_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) / "ADEChallengeData2016"
- for name in ["validation"]:
- annotation_dir = dataset_dir / "annotations" / name
- output_dir = dataset_dir / "annotations_detectron2" / name
- output_dir.mkdir(parents=True, exist_ok=True)
- for file in tqdm.tqdm(list(annotation_dir.iterdir())):
- output_file = output_dir / file.name
- convert(file, output_file)
\ No newline at end of file
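
The `img - 1` shift in `convert()` relies on uint8 wraparound, as the inline comment notes: label 0 (the ignore class) becomes 255 and classes 1..150 map to 0..149. A quick standalone check:

```python
import numpy as np

labels = np.array([0, 1, 2, 150], dtype=np.uint8)
shifted = labels - 1  # uint8 arithmetic wraps around
print(shifted)        # [255   0   1 149]
```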
diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/web_selenium.py b/spaces/hamelcubsfan/AutoGPT/autogpt/commands/web_selenium.py
deleted file mode 100644
index 11bdfeb1f1630fc6ff6f55d68e8d7233281c5098..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/web_selenium.py
+++ /dev/null
@@ -1,154 +0,0 @@
-"""Selenium web scraping module."""
-from __future__ import annotations
-
-import logging
-from pathlib import Path
-from sys import platform
-
-from bs4 import BeautifulSoup
-from selenium import webdriver
-from selenium.webdriver.chrome.options import Options as ChromeOptions
-from selenium.webdriver.common.by import By
-from selenium.webdriver.firefox.options import Options as FirefoxOptions
-from selenium.webdriver.remote.webdriver import WebDriver
-from selenium.webdriver.safari.options import Options as SafariOptions
-from selenium.webdriver.support import expected_conditions as EC
-from selenium.webdriver.support.wait import WebDriverWait
-from webdriver_manager.chrome import ChromeDriverManager
-from webdriver_manager.firefox import GeckoDriverManager
-
-import autogpt.processing.text as summary
-from autogpt.config import Config
-from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
-
-FILE_DIR = Path(__file__).parent.parent
-CFG = Config()
-
-
-def browse_website(url: str, question: str) -> tuple[str, WebDriver]:
- """Browse a website and return the answer and links to the user
-
- Args:
- url (str): The url of the website to browse
- question (str): The question asked by the user
-
- Returns:
- Tuple[str, WebDriver]: The answer and links to the user and the webdriver
- """
- driver, text = scrape_text_with_selenium(url)
- add_header(driver)
- summary_text = summary.summarize_text(url, text, question, driver)
- links = scrape_links_with_selenium(driver, url)
-
- # Limit links to 5
- if len(links) > 5:
- links = links[:5]
- close_browser(driver)
- return f"Answer gathered from website: {summary_text} \n \n Links: {links}", driver
-
-
-def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]:
- """Scrape text from a website using selenium
-
- Args:
- url (str): The url of the website to scrape
-
- Returns:
- Tuple[WebDriver, str]: The webdriver and the text scraped from the website
- """
- logging.getLogger("selenium").setLevel(logging.CRITICAL)
-
- options_available = {
- "chrome": ChromeOptions,
- "safari": SafariOptions,
- "firefox": FirefoxOptions,
- }
-
- options = options_available[CFG.selenium_web_browser]()
- options.add_argument(
- "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36"
- )
-
- if CFG.selenium_web_browser == "firefox":
- driver = webdriver.Firefox(
- executable_path=GeckoDriverManager().install(), options=options
- )
- elif CFG.selenium_web_browser == "safari":
- # Requires a bit more setup on the users end
- # See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
- driver = webdriver.Safari(options=options)
- else:
- if platform == "linux" or platform == "linux2":
- options.add_argument("--disable-dev-shm-usage")
- options.add_argument("--remote-debugging-port=9222")
-
- options.add_argument("--no-sandbox")
- if CFG.selenium_headless:
- options.add_argument("--headless")
- options.add_argument("--disable-gpu")
-
- driver = webdriver.Chrome(
- executable_path=ChromeDriverManager().install(), options=options
- )
- driver.get(url)
-
- WebDriverWait(driver, 10).until(
- EC.presence_of_element_located((By.TAG_NAME, "body"))
- )
-
- # Get the HTML content directly from the browser's DOM
- page_source = driver.execute_script("return document.body.outerHTML;")
- soup = BeautifulSoup(page_source, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
- return driver, text
-
-
-def scrape_links_with_selenium(driver: WebDriver, url: str) -> list[str]:
- """Scrape links from a website using selenium
-
- Args:
- driver (WebDriver): The webdriver to use to scrape the links
-
- Returns:
- List[str]: The links scraped from the website
- """
- page_source = driver.page_source
- soup = BeautifulSoup(page_source, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- hyperlinks = extract_hyperlinks(soup, url)
-
- return format_hyperlinks(hyperlinks)
-
-
-def close_browser(driver: WebDriver) -> None:
- """Close the browser
-
- Args:
- driver (WebDriver): The webdriver to close
-
- Returns:
- None
- """
- driver.quit()
-
-
-def add_header(driver: WebDriver) -> None:
- """Add a header to the website
-
- Args:
- driver (WebDriver): The webdriver to use to add the header
-
- Returns:
- None
- """
- driver.execute_script(open(f"{FILE_DIR}/js/overlay.js", "r").read())
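
A hedged usage sketch for the module above; it assumes the surrounding AutoGPT package, Selenium, and a Chrome or Firefox driver are installed and configured, and the URL is just a placeholder:

```python
# Assumes this file is importable as autogpt.commands.web_selenium and that
# CFG.selenium_web_browser / CFG.selenium_headless are configured.
from autogpt.commands.web_selenium import browse_website

answer, _driver = browse_website(
    "https://example.com",            # placeholder URL
    "What is this page about?",
)
print(answer)  # summary plus up to five scraped links; the browser is closed before returning
```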
diff --git a/spaces/haonanzhang/ChatGPT-BOT/Dockerfile b/spaces/haonanzhang/ChatGPT-BOT/Dockerfile
deleted file mode 100644
index 8cbd335b09b1d1975bfd83a053b5fcaf398147ea..0000000000000000000000000000000000000000
--- a/spaces/haonanzhang/ChatGPT-BOT/Dockerfile
+++ /dev/null
@@ -1,14 +0,0 @@
-FROM python:3.9 as builder
-RUN apt-get update && apt-get install -y build-essential
-COPY requirements.txt .
-RUN pip install --user -r requirements.txt
-
-FROM python:3.9
-MAINTAINER iskoldt
-COPY --from=builder /root/.local /root/.local
-ENV PATH=/root/.local/bin:$PATH
-COPY . /app
-WORKDIR /app
-ENV my_api_key empty
-ENV dockerrun yes
-CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"]
diff --git a/spaces/haonanzhang/ChatGPT-BOT/assets/custom.css b/spaces/haonanzhang/ChatGPT-BOT/assets/custom.css
deleted file mode 100644
index 3cf5f946a240f595e19f02259969f01d4b088012..0000000000000000000000000000000000000000
--- a/spaces/haonanzhang/ChatGPT-BOT/assets/custom.css
+++ /dev/null
@@ -1,239 +0,0 @@
-:root {
- --chatbot-color-light: #F3F3F3;
- --chatbot-color-dark: #121111;
-}
-
-/* Hide Gradio's footer info */
-footer {
- display: none !important;
-}
-#footer{
- text-align: center;
-}
-#footer div{
- display: inline-block;
-}
-#footer .versions{
- font-size: 85%;
- opacity: 0.85;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2.5em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-
-#chuanhu_chatbot, #status_display {
- transition: all 0.6s;
-}
-
-/* usage_display */
-#usage_display {
- position: relative;
- margin: 0;
- box-shadow: var(--block-shadow);
- border-width: var(--block-border-width);
- border-color: var(--block-border-color);
- border-radius: var(--block-radius);
- background: var(--block-background-fill);
- width: 100%;
- line-height: var(--line-sm);
- min-height: 2em;
-}
-#usage_display p, #usage_display span {
- margin: 0;
- padding: .5em 1em;
- font-size: .85em;
- color: var(--body-text-color-subdued);
-}
-.progress-bar {
-    background-color: var(--input-background-fill);
- margin: 0 1em;
- height: 20px;
- border-radius: 10px;
- overflow: hidden;
-}
-.progress {
-    background-color: var(--block-title-background-fill);
- height: 100%;
- border-radius: 10px;
- text-align: right;
- transition: width 0.5s ease-in-out;
-}
-.progress-text {
- /* color: white; */
- color: var(--color-accent) !important;
- font-size: 1em !important;
- font-weight: bold;
- padding-right: 10px;
- line-height: 20px;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* Light color scheme */
-@media (prefers-color-scheme: light) {
- #chuanhu_chatbot {
- background-color: var(--chatbot-color-light) !important;
- color: #000000 !important;
- }
- [data-testid = "bot"] {
- background-color: #FFFFFF !important;
- }
- [data-testid = "user"] {
- background-color: #95EC69 !important;
- }
-}
-/* Dark color scheme */
-@media (prefers-color-scheme: dark) {
- #chuanhu_chatbot {
- background-color: var(--chatbot-color-dark) !important;
- color: #FFFFFF !important;
- }
- [data-testid = "bot"] {
- background-color: #2C2C2C !important;
- }
- [data-testid = "user"] {
- background-color: #26B561 !important;
- }
- body {
- background-color: var(--neutral-950) !important;
- }
-}
-
-/* Chat bubbles */
-[class *= "message"] {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-/* Tables */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-code {
- display: inline;
- white-space: break-spaces;
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks */
-pre code {
- display: block;
- overflow: auto;
- white-space: pre;
- background-color: hsla(0, 0%, 0%, 80%)!important;
- border-radius: 10px;
- padding: 1.4em 1.2em 0em 1.4em;
- margin: 1.2em 2em 1.2em 0.5em;
- color: #FFF;
- box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* Syntax highlighting styles */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color: #66d9ef } /* Name.Constant */
-.highlight .nd { color: #a6e22e } /* Name.Decorator */
-.highlight .ni { color: #f8f8f2 } /* Name.Entity */
-.highlight .ne { color: #a6e22e } /* Name.Exception */
-.highlight .nf { color: #a6e22e } /* Name.Function */
-.highlight .nl { color: #f8f8f2 } /* Name.Label */
-.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
-.highlight .nx { color: #a6e22e } /* Name.Other */
-.highlight .py { color: #f8f8f2 } /* Name.Property */
-.highlight .nt { color: #f92672 } /* Name.Tag */
-.highlight .nv { color: #f8f8f2 } /* Name.Variable */
-.highlight .ow { color: #f92672 } /* Operator.Word */
-.highlight .w { color: #f8f8f2 } /* Text.Whitespace */
-.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
-.highlight .mf { color: #ae81ff } /* Literal.Number.Float */
-.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
-.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
-.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
-.highlight .sa { color: #e6db74 } /* Literal.String.Affix */
-.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
-.highlight .sc { color: #e6db74 } /* Literal.String.Char */
-.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
-.highlight .sd { color: #e6db74 } /* Literal.String.Doc */
-.highlight .s2 { color: #e6db74 } /* Literal.String.Double */
-.highlight .se { color: #ae81ff } /* Literal.String.Escape */
-.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
-.highlight .si { color: #e6db74 } /* Literal.String.Interpol */
-.highlight .sx { color: #e6db74 } /* Literal.String.Other */
-.highlight .sr { color: #e6db74 } /* Literal.String.Regex */
-.highlight .s1 { color: #e6db74 } /* Literal.String.Single */
-.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
-.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
-.highlight .fm { color: #a6e22e } /* Name.Function.Magic */
-.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
-.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
-.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
-.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
-.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
diff --git a/spaces/hcapp/sd-dreambooth-library-herge-style/app.py b/spaces/hcapp/sd-dreambooth-library-herge-style/app.py
deleted file mode 100644
index 21bfdc9c0538838369fedea4c696234c5dd1749b..0000000000000000000000000000000000000000
--- a/spaces/hcapp/sd-dreambooth-library-herge-style/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/sd-dreambooth-library/herge-style").launch()
\ No newline at end of file
diff --git a/spaces/hf-audio/vocos-bark/README.md b/spaces/hf-audio/vocos-bark/README.md
deleted file mode 100644
index ebf6f3388054f0de678deba23344c4ef6aa4c876..0000000000000000000000000000000000000000
--- a/spaces/hf-audio/vocos-bark/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Vocos Bark
-emoji: 📊
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/huggingface-timeseries/time-series-score/README.md b/spaces/huggingface-timeseries/time-series-score/README.md
deleted file mode 100644
index 4373e9ed765513459134698383a62082ef2e2996..0000000000000000000000000000000000000000
--- a/spaces/huggingface-timeseries/time-series-score/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Time Series Score
-emoji: 🐢
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hunger11243/VITS-Umamusume-voice-synthesizer/modules.py b/spaces/hunger11243/VITS-Umamusume-voice-synthesizer/modules.py
deleted file mode 100644
index f5af1fd9a20dc03707889f360a39bb4b784a6df3..0000000000000000000000000000000000000000
--- a/spaces/hunger11243/VITS-Umamusume-voice-synthesizer/modules.py
+++ /dev/null
@@ -1,387 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
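
A quick shape check for a couple of the building blocks above, using channel-first tensors with an explicit mask as the modules expect. It assumes this file is importable as `modules` together with the repo's `commons.py` and `transforms.py`:

```python
import torch
import modules  # this file, plus commons.py and transforms.py on the path

x = torch.randn(2, 16, 50)     # (batch, channels, time)
x_mask = torch.ones(2, 1, 50)  # all frames valid

ln = modules.LayerNorm(16)
dds = modules.DDSConv(channels=16, kernel_size=3, n_layers=2)

print(ln(x).shape)           # torch.Size([2, 16, 50])
print(dds(x, x_mask).shape)  # torch.Size([2, 16, 50])
```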
diff --git a/spaces/hysts/Yet-Another-Anime-Segmenter/README.md b/spaces/hysts/Yet-Another-Anime-Segmenter/README.md
deleted file mode 100644
index 58b7321785121b8517226f8f27a8f1a0b7f4dcda..0000000000000000000000000000000000000000
--- a/spaces/hysts/Yet-Another-Anime-Segmenter/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Yet Another Anime Segmenter
-emoji: 🌍
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
----
diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_pfc02_r100.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_pfc02_r100.py
deleted file mode 100644
index 72f0f0ec0ce5c523bace8b7869181ea807e72423..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf12m_pfc02_r100.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.margin_list = (1.0, 0.0, 0.4)
-config.network = "r100"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 0.2
-config.interclass_filtering_threshold = 0
-config.fp16 = True
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.optimizer = "sgd"
-config.lr = 0.1
-config.verbose = 2000
-config.dali = False
-
-config.rec = "/train_tmp/WebFace12M"
-config.num_classes = 617970
-config.num_image = 12720066
-config.num_epoch = 20
-config.warmup_epoch = 0
-config.val_targets = []
diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_32gpus_r50_bs4k.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_32gpus_r50_bs4k.py
deleted file mode 100644
index 5e8407943ffef4ae3ee02ddb3f2361a9ac655cbb..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_32gpus_r50_bs4k.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.margin_list = (1.0, 0.0, 0.4)
-config.network = "r50"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 0.2
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.4
-config.verbose = 10000
-config.dali = False
-
-config.rec = "/train_tmp/WebFace42M"
-config.num_classes = 2059906
-config.num_image = 42474557
-config.num_epoch = 20
-config.warmup_epoch = 2
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/ifey/chatdemo/push.sh b/spaces/ifey/chatdemo/push.sh
deleted file mode 100644
index 91437d0da57342398cd63daf3d0ba62f7da0fd74..0000000000000000000000000000000000000000
--- a/spaces/ifey/chatdemo/push.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/sh
-git add .
-git commit -m "New Mac Version"
-git push
-
diff --git a/spaces/imperialwool/llama-cpp-api/README.md b/spaces/imperialwool/llama-cpp-api/README.md
deleted file mode 100644
index dadfca01e757061128e6f438becf370658dc5f28..0000000000000000000000000000000000000000
--- a/spaces/imperialwool/llama-cpp-api/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: llama.cpp API
-emoji: 🦙
-colorFrom: red
-colorTo: indigo
-sdk: docker
-pinned: true
----
-
-I build this just for fun. Please clone this space to personal use. Test API you can here.
diff --git a/spaces/imseldrith/txt2img/README.md b/spaces/imseldrith/txt2img/README.md
deleted file mode 100644
index bf83691661fb0818e0707eeeea546b4d1f27de5a..0000000000000000000000000000000000000000
--- a/spaces/imseldrith/txt2img/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Txt2img
-emoji: 🐨
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/innev/GPT2-large/app.py b/spaces/innev/GPT2-large/app.py
deleted file mode 100644
index abc7d11e277a21bfc00df468fa7d85845cdda49b..0000000000000000000000000000000000000000
--- a/spaces/innev/GPT2-large/app.py
+++ /dev/null
@@ -1,141 +0,0 @@
-#!/usr/local/bin/python3
-#-*- coding:utf-8 -*-
-
-import gradio as gr
-import torch
-from transformers import AutoTokenizer, AutoModelForCausalLM
-import os
-
-checkpoint = "gpt2-large"
-# checkpoint = "/innev/open-ai/huggingface/models/gpt2-large"
-tokenizer = AutoTokenizer.from_pretrained(checkpoint)
-# model = AutoModelForCausalLM.from_pretrained(checkpoint)
-model = AutoModelForCausalLM.from_pretrained(checkpoint, pad_token_id=tokenizer.eos_token_id)
-
-# Simple single-step generation
-def sampleGen(text):
- # text = 'Who was Jim Henson ? Jim Henson was a'
-
-    # Encode a piece of text
-    # Encoded as [8241, 373, 5395, 367, 19069, 5633, 5395, 367, 19069, 373, 257]
-    indexed_tokens = tokenizer.encode(text)
-    # Convert to a PyTorch tensor
-    # tensor([[ 8241, 373, 5395, 367, 19069, 5633, 5395, 367, 19069, 373, 257]])
-    # shape is torch.Size([1, 11])
- tokens_tensor = torch.tensor([indexed_tokens])
-
-    # Switch to evaluation mode to deactivate dropout and similar modules.
-    # In the huggingface/transformers framework, eval mode is already the default.
- model.eval()
-
-    # Predict all tokens
-    with torch.no_grad():
-        # Feeding the input tensor to the model gives us its output; it is that simple.
-        # outputs is a tuple; every huggingface/transformers model returns a tuple.
-        # Here the tuple has two elements: the first is the prediction scores (pre-softmax, also called logits),
-        # the second is "past", holding the key/value tensors from the attention computation.
-        # We only need the first element here.
- outputs = model(tokens_tensor)
-        # predictions has shape torch.Size([1, 11, 50257]),
-        # i.e. the prediction scores for each of the 11 tokens (before softmax),
-        # also known as the logits.
- predictions = outputs[0]
-
-    # To predict the next word, we use the logits of the last token in the first batch element.
-    # predicted_index = 582, obtained as the index of the highest score.
- predicted_index = torch.argmax(predictions[0, -1, :]).item()
-    # Decode back into the text we want
- predicted_text = tokenizer.decode(indexed_tokens + [predicted_index])
- # predicted_text = tokenizer.decode([predicted_index])
-    # Decoded text: 'Who was Jim Henson? Jim Henson was a man'
-    # The word 'man' was successfully predicted.
-
- return predicted_text
-
-# Keyword prediction: generate text by repeated single-token prediction
-def loopGen(prompts):
- text = prompts
- total = 1
- while text[-1] != "." and total < 20:
- text = sampleGen(text)
- print("Index %s: %s" % (total, text))
- total = total + 1
-
- return text, total
-
-# Greedy search text generation
-def greedySearch(prompts):
- input_ids = tokenizer(prompts, return_tensors='pt').input_ids
-
- # generate the result with greedy search
- output = model.generate(input_ids, max_length=128)
- text = tokenizer.decode(output[0], skip_special_tokens=True)
- return text, 1
-
-# Random sampling text generation
-def randomSearch(prompts):
- input_ids = tokenizer(prompts, return_tensors='pt').input_ids
-
- # generate the result with random search
- torch.manual_seed(0.)
- output = model.generate(input_ids, do_sample=True, max_length=128, top_p=0.95, top_k=0)
- text = tokenizer.decode(output[0], skip_special_tokens=True)
- return text, 1
-
-# Contrastive search text generation
-def contrastiveSearch(prompts):
- input_ids = tokenizer(prompts, return_tensors='pt').input_ids
-
- # generate the result with contrastive search
- output = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512)
- text = tokenizer.decode(output[0], skip_special_tokens=True)
-
- return text, 1
-
-def predict(searchType, prompts='Who was Jim Henson ? Jim Henson was a'):
- if searchType == "贪心搜索":
- return greedySearch(prompts)
- elif searchType == "随机方法":
- return randomSearch(prompts)
- elif searchType == "对比搜索":
- return contrastiveSearch(prompts)
- else:
- return loopGen(prompts)
-
-
-title = "GPT2 large"
-
-searchMapping = ['关键词预测', '贪心搜索', '随机方法', '对比搜索']
-
-description = """
-This is a simple sentence-completion demo using the GPT-2 model: enter the beginning of a sentence and it predicts what follows.
-
-The original model is used without fine-tuning. Only English input and output are supported.
-"""
-
-examples = [
- [None, "DeepMind Company is", None],
- [None, "Who was Jim Henson ? Jim Henson was a", None],
- [None, "China is", None]
-]
-
-article = """
-## References
-- [Generating Human-level Text with Contrastive Search in Transformers 🤗](https://mp.weixin.qq.com/s/mydQLDlGUzFJuNBCIYc3CA)
-"""
-
-gr.Interface(
- fn=predict,
- inputs=[
- gr.Radio(label="搜索方法", choices=searchMapping, value="关键词预测"),
- gr.Text(label="输入前置语句"),
- ],
- outputs=[
- gr.Text(label="生成文本"),
- gr.Text(label="循环次数"),
- ],
- title=title,
- description=description,
- article=article,
- examples=examples,
-).launch()
\ No newline at end of file
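
The three decoding strategies wired into the UI above can also be compared directly with `model.generate`, without Gradio. A minimal sketch (downloads the gpt2-large checkpoint on first run):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2-large")
model = AutoModelForCausalLM.from_pretrained("gpt2-large", pad_token_id=tok.eos_token_id)
ids = tok("DeepMind Company is", return_tensors="pt").input_ids

greedy = model.generate(ids, max_length=64)
torch.manual_seed(0)
sampled = model.generate(ids, do_sample=True, top_p=0.95, top_k=0, max_length=64)
contrastive = model.generate(ids, penalty_alpha=0.6, top_k=4, max_length=64)

for name, out in [("greedy", greedy), ("sampling", sampled), ("contrastive", contrastive)]:
    print(f"--- {name} ---")
    print(tok.decode(out[0], skip_special_tokens=True))
```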
diff --git a/spaces/innnky/soft-vits-singingvc/text/cleaners.py b/spaces/innnky/soft-vits-singingvc/text/cleaners.py
deleted file mode 100644
index 2658f667a7d59ca99a3e16ba0c157d2ab5d795eb..0000000000000000000000000000000000000000
--- a/spaces/innnky/soft-vits-singingvc/text/cleaners.py
+++ /dev/null
@@ -1,100 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-import re
-from unidecode import unidecode
-from phonemizer import phonemize
-
-
-# Regular expression matching whitespace:
-_whitespace_re = re.compile(r'\s+')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def expand_numbers(text):
- return normalize_numbers(text)
-
-
-def lowercase(text):
- return text.lower()
-
-
-def collapse_whitespace(text):
- return re.sub(_whitespace_re, ' ', text)
-
-
-def convert_to_ascii(text):
- return unidecode(text)
-
-
-def basic_cleaners(text):
- '''Basic pipeline that lowercases and collapses whitespace without transliteration.'''
- text = lowercase(text)
- text = collapse_whitespace(text)
- return text
-
-
-def transliteration_cleaners(text):
- '''Pipeline for non-English text that transliterates to ASCII.'''
- text = convert_to_ascii(text)
- text = lowercase(text)
- text = collapse_whitespace(text)
- return text
-
-
-def english_cleaners(text):
- '''Pipeline for English text, including abbreviation expansion.'''
- text = convert_to_ascii(text)
- text = lowercase(text)
- text = expand_abbreviations(text)
- phonemes = phonemize(text, language='en-us', backend='espeak', strip=True)
- phonemes = collapse_whitespace(phonemes)
- return phonemes
-
-
-def english_cleaners2(text):
- '''Pipeline for English text, including abbreviation expansion. + punctuation + stress'''
- text = convert_to_ascii(text)
- text = lowercase(text)
- text = expand_abbreviations(text)
- phonemes = phonemize(text, language='en-us', backend='espeak', strip=True, preserve_punctuation=True, with_stress=True)
- phonemes = collapse_whitespace(phonemes)
- return phonemes
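
A small usage sketch for the cleaners above. Note that importing the module pulls in `unidecode` and `phonemizer`, and the two English pipelines additionally need the espeak backend installed:

```python
# Assumes this file is importable as text.cleaners with its dependencies installed.
from text.cleaners import basic_cleaners, transliteration_cleaners

print(basic_cleaners("Dr. Strange   LIVES here!"))
# -> "dr. strange lives here!"  (lowercased, whitespace collapsed; no abbreviation expansion)

print(transliteration_cleaners("Café Müller"))
# -> "cafe muller"  (ASCII transliteration via unidecode, then lowercased)
```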
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Book Mindset Carol Dweck Pdf Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Book Mindset Carol Dweck Pdf Download.md
deleted file mode 100644
index 3061a8e2376cfb18b35d0854324f2fecfab4576c..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Book Mindset Carol Dweck Pdf Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Book Mindset Carol Dweck Pdf Download
Download ✶✶✶ https://urlin.us/2uEyjO
-
-Mindset Summary by Carol Dweck is an invitation to introspection. Check out all mind-blowing concepts from this inspiring book. If you what it ... 1fdad05405
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Jar Design A320 Serial Code.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Jar Design A320 Serial Code.md
deleted file mode 100644
index 3977d01cbb9110a29167c479484c0255dd968d1e..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Jar Design A320 Serial Code.md
+++ /dev/null
@@ -1,12 +0,0 @@
-jar design a320 serial code
Download File ✅ https://urlin.us/2uEwrq
-
-July 30, 2019 - launch X-Plane 64 bit and open a320, activation will pop up - enter your serial key (Regcode), press NEXT and after successful ... Aeroflot-online.com - air tickets online, sale.
-Search for tickets.
-Aeroflot-online.com - search and booking of air tickets online.
-Low prices for Aeroflot flights
-30 Jul 2019 – run X-Plane 64 bit and open a320, activation popup will appear – enter your serial key (Regcode), press NEXT and after successful...
-Aeroflot-online.com - air tickets online, sale.
-Low Airfare Aeroflot 8a78ff9644
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Kaashfullmoviefreedownload _HOT_.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Kaashfullmoviefreedownload _HOT_.md
deleted file mode 100644
index fb00b18976fb544d757384eb274fbfbffa0b2947..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Kaashfullmoviefreedownload _HOT_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-kaashfullmoviefreedownload
Download File ··· https://urlin.us/2uEwws
-
-kaashfullmoviefreedownload · Traffic - Discography (re-mastered) [1967-1974] ([LOSSLESS FLAC] Rock) · miss junior nudist pageants video avi · Crack Para ... 1fdad05405
-
-
-
diff --git a/spaces/jarvisbot/ChatImprovement/show_math.py b/spaces/jarvisbot/ChatImprovement/show_math.py
deleted file mode 100644
index 80fa881d1c2ace5813f75b5d8a19ca056a8bfa4f..0000000000000000000000000000000000000000
--- a/spaces/jarvisbot/ChatImprovement/show_math.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# This program is written by: https://github.com/polarwinkel/mdtex2html
-
-from latex2mathml.converter import convert as tex2mathml
-import re
-
-incomplete = '⚠formula incomplete'
-convError = '⚠LaTeX-convert-error'
-
-def convert(mdtex, extensions=[], splitParagraphs=True):
- ''' converts recursively the Markdown-LaTeX-mixture to HTML with MathML '''
- found = False
- # handle all paragraphs separately (prevents aftereffects)
- if splitParagraphs:
- parts = re.split("\n\n", mdtex)
- result = ''
- for part in parts:
- result += convert(part, extensions, splitParagraphs=False)
- return result
- # find first $$-formula:
- parts = re.split('\${2}', mdtex, 2)
- if len(parts)>1:
- found = True
- result = convert(parts[0], extensions, splitParagraphs=False)+'\n'
- try:
- result += ''+tex2mathml(parts[1])+'\n'
- except:
- result += ''+convError+''
- if len(parts)==3:
- result += convert(parts[2], extensions, splitParagraphs=False)
- else:
- result += ''+incomplete+''
- # else find first $-formulas:
- else:
- parts = re.split('\${1}', mdtex, 2)
- if len(parts)>1 and not found:
- found = True
- try:
- mathml = tex2mathml(parts[1])
- except:
- mathml = convError
- if parts[0].endswith('\n\n') or parts[0]=='': # make sure textblock starts before formula!
- parts[0]=parts[0]+''
- if len(parts)==3:
- result = convert(parts[0]+mathml+parts[2], extensions, splitParagraphs=False)
- else:
- result = convert(parts[0]+mathml+incomplete, extensions, splitParagraphs=False)
- # else find first \[..\]-equation:
- else:
- parts = re.split(r'\\\[', mdtex, 1)
- if len(parts)>1 and not found:
- found = True
- result = convert(parts[0], extensions, splitParagraphs=False)+'\n'
- parts = re.split(r'\\\]', parts[1], 1)
- try:
- result += ''+tex2mathml(parts[0])+'\n'
- except:
- result += ''+convError+''
- if len(parts)==2:
- result += convert(parts[1], extensions, splitParagraphs=False)
- else:
- result += ''+incomplete+''
- # else find first \(..\)-equation:
- else:
- parts = re.split(r'\\\(', mdtex, 1)
- if len(parts)>1 and not found:
- found = True
- subp = re.split(r'\\\)', parts[1], 1)
- try:
- mathml = tex2mathml(subp[0])
- except:
- mathml = convError
- if parts[0].endswith('\n\n') or parts[0]=='': # make sure textblock starts before formula!
- parts[0]=parts[0]+''
- if len(subp)==2:
- result = convert(parts[0]+mathml+subp[1], extensions, splitParagraphs=False)
- else:
- result = convert(parts[0]+mathml+incomplete, extensions, splitParagraphs=False)
- if not found:
- result = mdtex
- return result
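
A minimal usage sketch for the converter above (requires the `latex2mathml` package; the input string is just an example):

```python
# Assumes this file is importable as show_math.
import show_math

html = show_math.convert(
    "Euler's identity $e^{i\\pi} + 1 = 0$ is famous.\n\nBlock form: $$a^2 + b^2 = c^2$$"
)
print(html)  # the $...$ and $$...$$ parts are replaced by MathML markup
```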
diff --git a/spaces/jbetker/tortoise/tortoise/models/random_latent_generator.py b/spaces/jbetker/tortoise/tortoise/models/random_latent_generator.py
deleted file mode 100644
index e90ef2130a47ec52160709877972716352e04c9c..0000000000000000000000000000000000000000
--- a/spaces/jbetker/tortoise/tortoise/models/random_latent_generator.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def fused_leaky_relu(input, bias=None, negative_slope=0.2, scale=2 ** 0.5):
- if bias is not None:
- rest_dim = [1] * (input.ndim - bias.ndim - 1)
- return (
- F.leaky_relu(
- input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope
- )
- * scale
- )
- else:
- return F.leaky_relu(input, negative_slope=0.2) * scale
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1
- ):
- super().__init__()
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
- else:
- self.bias = None
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
- return out
-
-
-class RandomLatentConverter(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.layers = nn.Sequential(*[EqualLinear(channels, channels, lr_mul=.1) for _ in range(5)],
- nn.Linear(channels, channels))
- self.channels = channels
-
- def forward(self, ref):
- r = torch.randn(ref.shape[0], self.channels, device=ref.device)
- y = self.layers(r)
- return y
-
-
-if __name__ == '__main__':
- model = RandomLatentConverter(512)
- model(torch.randn(5,512))
\ No newline at end of file
diff --git a/spaces/jbilcke-hf/MusicGen/audiocraft/utils/autocast.py b/spaces/jbilcke-hf/MusicGen/audiocraft/utils/autocast.py
deleted file mode 100644
index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/MusicGen/audiocraft/utils/autocast.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class TorchAutocast:
- """TorchAutocast utility class.
- Allows you to enable and disable autocast. This is specially useful
- when dealing with different architectures and clusters with different
- levels of support.
-
- Args:
- enabled (bool): Whether to enable torch.autocast or not.
- args: Additional args for torch.autocast.
- kwargs: Additional kwargs for torch.autocast
- """
- def __init__(self, enabled: bool, *args, **kwargs):
- self.autocast = torch.autocast(*args, **kwargs) if enabled else None
-
- def __enter__(self):
- if self.autocast is None:
- return
- try:
- self.autocast.__enter__()
- except RuntimeError:
- device = self.autocast.device
- dtype = self.autocast.fast_dtype
- raise RuntimeError(
- f"There was an error autocasting with dtype={dtype} device={device}\n"
- "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16"
- )
-
- def __exit__(self, *args, **kwargs):
- if self.autocast is None:
- return
- self.autocast.__exit__(*args, **kwargs)
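
A short usage sketch for TorchAutocast; the device and dtype are just examples, and with `enabled=False` the context manager becomes a no-op:

```python
import torch
# Assumes the audiocraft package from this repo is importable; otherwise copy the class above.
from audiocraft.utils.autocast import TorchAutocast

x = torch.randn(4, 8)
w = torch.randn(8, 8)

with TorchAutocast(enabled=True, device_type="cpu", dtype=torch.bfloat16):
    y = x @ w
print(y.dtype)  # torch.bfloat16 inside autocast; torch.float32 if enabled=False
```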
diff --git a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/checkbox.tsx b/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/checkbox.tsx
deleted file mode 100644
index 5850485b9fecba303bdba1849e5a7b6329300af4..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/checkbox.tsx
+++ /dev/null
@@ -1,30 +0,0 @@
-"use client"
-
-import * as React from "react"
-import * as CheckboxPrimitive from "@radix-ui/react-checkbox"
-import { Check } from "lucide-react"
-
-import { cn } from "@/lib/utils"
-
-const Checkbox = React.forwardRef<
-  React.ElementRef<typeof CheckboxPrimitive.Root>,
-  React.ComponentPropsWithoutRef<typeof CheckboxPrimitive.Root>
->(({ className, ...props }, ref) => (
-
-
-
-
-
-))
-Checkbox.displayName = CheckboxPrimitive.Root.displayName
-
-export { Checkbox }
diff --git a/spaces/jergra43/llama2-7b-ggml-chat-app/llama.py b/spaces/jergra43/llama2-7b-ggml-chat-app/llama.py
deleted file mode 100644
index 8c880f3d723bbf3380879def4c9b4824cc676671..0000000000000000000000000000000000000000
--- a/spaces/jergra43/llama2-7b-ggml-chat-app/llama.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import os
-import gradio as gr
-import fire
-from enum import Enum
-from threading import Thread
-from transformers import AutoModelForCausalLM, AutoTokenizer
-from auto_gptq import AutoGPTQForCausalLM
-from llama_cpp import Llama
-from huggingface_hub import hf_hub_download
-from transformers import TextIteratorStreamer
-from llama_chat_format import format_to_llama_chat_style
-
-
-# class syntax
-class Model_Type(Enum):
- gptq = 1
- ggml = 2
- full_precision = 3
-
-
-def get_model_type(model_name):
- if "gptq" in model_name.lower():
- return Model_Type.gptq
- elif "ggml" in model_name.lower():
- return Model_Type.ggml
- else:
- return Model_Type.full_precision
-
-
-def create_folder_if_not_exists(folder_path):
- if not os.path.exists(folder_path):
- os.makedirs(folder_path)
-
-
-def initialize_gpu_model_and_tokenizer(model_name, model_type):
- if model_type == Model_Type.gptq:
- model = AutoGPTQForCausalLM.from_quantized(model_name, device_map="auto", use_safetensors=True, use_triton=False)
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- else:
- model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", token=True)
- tokenizer = AutoTokenizer.from_pretrained(model_name, token=True)
- return model, tokenizer
-
-
-def init_auto_model_and_tokenizer(model_name, model_type, file_name=None):
- model_type = get_model_type(model_name)
-
- if Model_Type.ggml == model_type:
- models_folder = "./models"
- create_folder_if_not_exists(models_folder)
- file_path = hf_hub_download(repo_id=model_name, filename=file_name, local_dir=models_folder)
- model = Llama(file_path, n_ctx=4096)
- tokenizer = None
- else:
- model, tokenizer = initialize_gpu_model_and_tokenizer(model_name, model_type=model_type)
- return model, tokenizer
-
-
-def run_ui(model, tokenizer, is_chat_model, model_type):
- with gr.Blocks() as demo:
- chatbot = gr.Chatbot()
- msg = gr.Textbox()
- clear = gr.Button("Clear")
-
- def user(user_message, history):
- return "", history + [[user_message, None]]
-
- def bot(history):
- if is_chat_model:
- instruction = format_to_llama_chat_style(history)
- else:
- instruction = history[-1][0]
-
- history[-1][1] = ""
- kwargs = dict(temperature=0.6, top_p=0.9)
- if model_type == Model_Type.ggml:
- kwargs["max_tokens"] = 512
- for chunk in model(prompt=instruction, stream=True, **kwargs):
- token = chunk["choices"][0]["text"]
- history[-1][1] += token
- yield history
-
- else:
- streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, timeout=5)
- inputs = tokenizer(instruction, return_tensors="pt").to(model.device)
- kwargs["max_new_tokens"] = 512
- kwargs["input_ids"] = inputs["input_ids"]
- kwargs["streamer"] = streamer
- thread = Thread(target=model.generate, kwargs=kwargs)
- thread.start()
-
- for token in streamer:
- history[-1][1] += token
- yield history
-
- msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(bot, chatbot, chatbot)
- clear.click(lambda: None, None, chatbot, queue=False)
- demo.queue()
- demo.launch(share=False, debug=True)
-
-# def main(model_name=None, file_name=None):
-def main(model_name="TheBloke/Llama-2-7B-Chat-GGML", file_name="llama-2-7b-chat.ggmlv3.q4_K_M.bin"):
- assert model_name is not None, "model_name argument is missing."
-
- is_chat_model = 'chat' in model_name.lower()
- model_type = get_model_type(model_name)
-
- if model_type == Model_Type.ggml:
- assert file_name is not None, "When model_name is provided for a GGML quantized model, file_name argument must also be provided."
-
- model, tokenizer = init_auto_model_and_tokenizer(model_name, model_type, file_name)
- run_ui(model, tokenizer, is_chat_model, model_type)
-
-if __name__ == '__main__':
- fire.Fire(main)
\ No newline at end of file
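
Because the deleted script exposes main through fire.Fire, the equivalent of its command-line entry point can be sketched directly in Python; the model and file names below are simply the defaults hard-coded in main above:

    # Equivalent to: python llama.py --model_name ... --file_name ...
    main(model_name="TheBloke/Llama-2-7B-Chat-GGML",
         file_name="llama-2-7b-chat.ggmlv3.q4_K_M.bin")
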
diff --git a/spaces/jkim1238/predictive_analysis/app.py b/spaces/jkim1238/predictive_analysis/app.py
deleted file mode 100644
index 4f32f8c70577b71f30eebd478e37cbb76c2d979b..0000000000000000000000000000000000000000
--- a/spaces/jkim1238/predictive_analysis/app.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from pymongo import MongoClient
-import streamlit as st
-
-# Load the environment variables
-USER = st.secrets['USER']
-PASSWORD = st.secrets['PASSWORD']
-
-# The mongoDB connection string
-connection_string = f'mongodb+srv://{USER}:{PASSWORD}@cluster0.rjiqa.mongodb.net/?retryWrites=true&w=majority'
-
-# Connect to the client
-client = MongoClient(connection_string)
-
-# Get database
-database = client['ARLIS']
-
-st.write(database.test)
-
-collection_name = '20220614_supercomputing'
-
-collection = database[collection_name].find()
-
-st.write(collection)
-
-st.write(database[collection_name].count_documents({}))
\ No newline at end of file
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/builder/_lxml.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/builder/_lxml.py
deleted file mode 100644
index 971c81e56fd7363caca090505c23752b6c58de87..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/builder/_lxml.py
+++ /dev/null
@@ -1,386 +0,0 @@
-# Use of this source code is governed by the MIT license.
-__license__ = "MIT"
-
-__all__ = [
- 'LXMLTreeBuilderForXML',
- 'LXMLTreeBuilder',
- ]
-
-try:
- from collections.abc import Callable # Python 3.6
-except ImportError as e:
- from collections import Callable
-
-from io import BytesIO
-from io import StringIO
-from lxml import etree
-from bs4.element import (
- Comment,
- Doctype,
- NamespacedAttribute,
- ProcessingInstruction,
- XMLProcessingInstruction,
-)
-from bs4.builder import (
- DetectsXMLParsedAsHTML,
- FAST,
- HTML,
- HTMLTreeBuilder,
- PERMISSIVE,
- ParserRejectedMarkup,
- TreeBuilder,
- XML)
-from bs4.dammit import EncodingDetector
-
-LXML = 'lxml'
-
-def _invert(d):
- "Invert a dictionary."
- return dict((v,k) for k, v in list(d.items()))
-
-class LXMLTreeBuilderForXML(TreeBuilder):
- DEFAULT_PARSER_CLASS = etree.XMLParser
-
- is_xml = True
- processing_instruction_class = XMLProcessingInstruction
-
- NAME = "lxml-xml"
- ALTERNATE_NAMES = ["xml"]
-
- # Well, it's permissive by XML parser standards.
- features = [NAME, LXML, XML, FAST, PERMISSIVE]
-
- CHUNK_SIZE = 512
-
- # This namespace mapping is specified in the XML Namespace
- # standard.
- DEFAULT_NSMAPS = dict(xml='http://www.w3.org/XML/1998/namespace')
-
- DEFAULT_NSMAPS_INVERTED = _invert(DEFAULT_NSMAPS)
-
- # NOTE: If we parsed Element objects and looked at .sourceline,
- # we'd be able to see the line numbers from the original document.
- # But instead we build an XMLParser or HTMLParser object to serve
- # as the target of parse messages, and those messages don't include
- # line numbers.
- # See: https://bugs.launchpad.net/lxml/+bug/1846906
-
- def initialize_soup(self, soup):
- """Let the BeautifulSoup object know about the standard namespace
- mapping.
-
- :param soup: A `BeautifulSoup`.
- """
- super(LXMLTreeBuilderForXML, self).initialize_soup(soup)
- self._register_namespaces(self.DEFAULT_NSMAPS)
-
- def _register_namespaces(self, mapping):
- """Let the BeautifulSoup object know about namespaces encountered
- while parsing the document.
-
- This might be useful later on when creating CSS selectors.
-
- This will track (almost) all namespaces, even ones that were
- only in scope for part of the document. If two namespaces have
- the same prefix, only the first one encountered will be
- tracked. Un-prefixed namespaces are not tracked.
-
- :param mapping: A dictionary mapping namespace prefixes to URIs.
- """
- for key, value in list(mapping.items()):
- # This is 'if key' and not 'if key is not None' because we
- # don't track un-prefixed namespaces. Soupselect will
- # treat an un-prefixed namespace as the default, which
- # causes confusion in some cases.
- if key and key not in self.soup._namespaces:
- # Let the BeautifulSoup object know about a new namespace.
- # If there are multiple namespaces defined with the same
- # prefix, the first one in the document takes precedence.
- self.soup._namespaces[key] = value
-
- def default_parser(self, encoding):
- """Find the default parser for the given encoding.
-
- :param encoding: A string.
- :return: Either a parser object or a class, which
- will be instantiated with default arguments.
- """
- if self._default_parser is not None:
- return self._default_parser
- return etree.XMLParser(
- target=self, strip_cdata=False, recover=True, encoding=encoding)
-
- def parser_for(self, encoding):
- """Instantiate an appropriate parser for the given encoding.
-
- :param encoding: A string.
- :return: A parser object such as an `etree.XMLParser`.
- """
- # Use the default parser.
- parser = self.default_parser(encoding)
-
- if isinstance(parser, Callable):
- # Instantiate the parser with default arguments
- parser = parser(
- target=self, strip_cdata=False, recover=True, encoding=encoding
- )
- return parser
-
- def __init__(self, parser=None, empty_element_tags=None, **kwargs):
- # TODO: Issue a warning if parser is present but not a
- # callable, since that means there's no way to create new
- # parsers for different encodings.
- self._default_parser = parser
- if empty_element_tags is not None:
- self.empty_element_tags = set(empty_element_tags)
- self.soup = None
- self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED]
- self.active_namespace_prefixes = [dict(self.DEFAULT_NSMAPS)]
- super(LXMLTreeBuilderForXML, self).__init__(**kwargs)
-
- def _getNsTag(self, tag):
- # Split the namespace URL out of a fully-qualified lxml tag
- # name. Copied from lxml's src/lxml/sax.py.
- if tag[0] == '{':
- return tuple(tag[1:].split('}', 1))
- else:
- return (None, tag)
-
- def prepare_markup(self, markup, user_specified_encoding=None,
- exclude_encodings=None,
- document_declared_encoding=None):
- """Run any preliminary steps necessary to make incoming markup
- acceptable to the parser.
-
- lxml really wants to get a bytestring and convert it to
- Unicode itself. So instead of using UnicodeDammit to convert
- the bytestring to Unicode using different encodings, this
- implementation uses EncodingDetector to iterate over the
- encodings, and tell lxml to try to parse the document as each
- one in turn.
-
- :param markup: Some markup -- hopefully a bytestring.
- :param user_specified_encoding: The user asked to try this encoding.
- :param document_declared_encoding: The markup itself claims to be
- in this encoding.
- :param exclude_encodings: The user asked _not_ to try any of
- these encodings.
-
- :yield: A series of 4-tuples:
- (markup, encoding, declared encoding,
- has undergone character replacement)
-
- Each 4-tuple represents a strategy for converting the
- document to Unicode and parsing it. Each strategy will be tried
- in turn.
- """
- is_html = not self.is_xml
- if is_html:
- self.processing_instruction_class = ProcessingInstruction
- # We're in HTML mode, so if we're given XML, that's worth
- # noting.
- DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml(markup)
- else:
- self.processing_instruction_class = XMLProcessingInstruction
-
- if isinstance(markup, str):
- # We were given Unicode. Maybe lxml can parse Unicode on
- # this system?
-
- # TODO: This is a workaround for
- # https://bugs.launchpad.net/lxml/+bug/1948551.
- # We can remove it once the upstream issue is fixed.
- if len(markup) > 0 and markup[0] == u'\N{BYTE ORDER MARK}':
- markup = markup[1:]
- yield markup, None, document_declared_encoding, False
-
- if isinstance(markup, str):
- # No, apparently not. Convert the Unicode to UTF-8 and
- # tell lxml to parse it as UTF-8.
- yield (markup.encode("utf8"), "utf8",
- document_declared_encoding, False)
-
- # This was provided by the end-user; treat it as a known
- # definite encoding per the algorithm laid out in the HTML5
- # spec. (See the EncodingDetector class for details.)
- known_definite_encodings = [user_specified_encoding]
-
- # This was found in the document; treat it as a slightly lower-priority
- # user encoding.
- user_encodings = [document_declared_encoding]
- detector = EncodingDetector(
- markup, known_definite_encodings=known_definite_encodings,
- user_encodings=user_encodings, is_html=is_html,
- exclude_encodings=exclude_encodings
- )
- for encoding in detector.encodings:
- yield (detector.markup, encoding, document_declared_encoding, False)
-
- def feed(self, markup):
- if isinstance(markup, bytes):
- markup = BytesIO(markup)
- elif isinstance(markup, str):
- markup = StringIO(markup)
-
- # Call feed() at least once, even if the markup is empty,
- # or the parser won't be initialized.
- data = markup.read(self.CHUNK_SIZE)
- try:
- self.parser = self.parser_for(self.soup.original_encoding)
- self.parser.feed(data)
- while len(data) != 0:
- # Now call feed() on the rest of the data, chunk by chunk.
- data = markup.read(self.CHUNK_SIZE)
- if len(data) != 0:
- self.parser.feed(data)
- self.parser.close()
- except (UnicodeDecodeError, LookupError, etree.ParserError) as e:
- raise ParserRejectedMarkup(e)
-
- def close(self):
- self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED]
-
- def start(self, name, attrs, nsmap={}):
- # Make sure attrs is a mutable dict--lxml may send an immutable dictproxy.
- attrs = dict(attrs)
- nsprefix = None
- # Invert each namespace map as it comes in.
- if len(nsmap) == 0 and len(self.nsmaps) > 1:
- # There are no new namespaces for this tag, but
- # non-default namespaces are in play, so we need a
- # separate tag stack to know when they end.
- self.nsmaps.append(None)
- elif len(nsmap) > 0:
- # A new namespace mapping has come into play.
-
- # First, Let the BeautifulSoup object know about it.
- self._register_namespaces(nsmap)
-
- # Then, add it to our running list of inverted namespace
- # mappings.
- self.nsmaps.append(_invert(nsmap))
-
- # The currently active namespace prefixes have
- # changed. Calculate the new mapping so it can be stored
- # with all Tag objects created while these prefixes are in
- # scope.
- current_mapping = dict(self.active_namespace_prefixes[-1])
- current_mapping.update(nsmap)
-
- # We should not track un-prefixed namespaces as we can only hold one
- # and it will be recognized as the default namespace by soupsieve,
- # which may be confusing in some situations.
- if '' in current_mapping:
- del current_mapping['']
- self.active_namespace_prefixes.append(current_mapping)
-
- # Also treat the namespace mapping as a set of attributes on the
- # tag, so we can recreate it later.
- attrs = attrs.copy()
- for prefix, namespace in list(nsmap.items()):
- attribute = NamespacedAttribute(
- "xmlns", prefix, "http://www.w3.org/2000/xmlns/")
- attrs[attribute] = namespace
-
- # Namespaces are in play. Find any attributes that came in
- # from lxml with namespaces attached to their names, and
- # turn then into NamespacedAttribute objects.
- new_attrs = {}
- for attr, value in list(attrs.items()):
- namespace, attr = self._getNsTag(attr)
- if namespace is None:
- new_attrs[attr] = value
- else:
- nsprefix = self._prefix_for_namespace(namespace)
- attr = NamespacedAttribute(nsprefix, attr, namespace)
- new_attrs[attr] = value
- attrs = new_attrs
-
- namespace, name = self._getNsTag(name)
- nsprefix = self._prefix_for_namespace(namespace)
- self.soup.handle_starttag(
- name, namespace, nsprefix, attrs,
- namespaces=self.active_namespace_prefixes[-1]
- )
-
- def _prefix_for_namespace(self, namespace):
- """Find the currently active prefix for the given namespace."""
- if namespace is None:
- return None
- for inverted_nsmap in reversed(self.nsmaps):
- if inverted_nsmap is not None and namespace in inverted_nsmap:
- return inverted_nsmap[namespace]
- return None
-
- def end(self, name):
- self.soup.endData()
- completed_tag = self.soup.tagStack[-1]
- namespace, name = self._getNsTag(name)
- nsprefix = None
- if namespace is not None:
- for inverted_nsmap in reversed(self.nsmaps):
- if inverted_nsmap is not None and namespace in inverted_nsmap:
- nsprefix = inverted_nsmap[namespace]
- break
- self.soup.handle_endtag(name, nsprefix)
- if len(self.nsmaps) > 1:
- # This tag, or one of its parents, introduced a namespace
- # mapping, so pop it off the stack.
- out_of_scope_nsmap = self.nsmaps.pop()
-
- if out_of_scope_nsmap is not None:
- # This tag introduced a namespace mapping which is no
- # longer in scope. Recalculate the currently active
- # namespace prefixes.
- self.active_namespace_prefixes.pop()
-
- def pi(self, target, data):
- self.soup.endData()
- data = target + ' ' + data
- self.soup.handle_data(data)
- self.soup.endData(self.processing_instruction_class)
-
- def data(self, content):
- self.soup.handle_data(content)
-
- def doctype(self, name, pubid, system):
- self.soup.endData()
- doctype = Doctype.for_name_and_ids(name, pubid, system)
- self.soup.object_was_parsed(doctype)
-
- def comment(self, content):
- "Handle comments as Comment objects."
- self.soup.endData()
- self.soup.handle_data(content)
- self.soup.endData(Comment)
-
- def test_fragment_to_document(self, fragment):
- """See `TreeBuilder`."""
- return '<?xml version="1.0" encoding="utf-8"?>\n%s' % fragment
-
-
-class LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML):
-
- NAME = LXML
- ALTERNATE_NAMES = ["lxml-html"]
-
- features = ALTERNATE_NAMES + [NAME, HTML, FAST, PERMISSIVE]
- is_xml = False
- processing_instruction_class = ProcessingInstruction
-
- def default_parser(self, encoding):
- return etree.HTMLParser
-
- def feed(self, markup):
- encoding = self.soup.original_encoding
- try:
- self.parser = self.parser_for(encoding)
- self.parser.feed(markup)
- self.parser.close()
- except (UnicodeDecodeError, LookupError, etree.ParserError) as e:
- raise ParserRejectedMarkup(e)
-
-
- def test_fragment_to_document(self, fragment):
- """See `TreeBuilder`."""
- return '<html><body>%s</body></html>' % fragment
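
For context, these two builders are the ones BeautifulSoup picks when the features declared above are requested; a small illustrative sketch with placeholder markup strings:

    from bs4 import BeautifulSoup

    html_soup = BeautifulSoup("<p>hello</p>", "lxml")           # LXMLTreeBuilder
    xml_soup = BeautifulSoup("<root><a/></root>", "lxml-xml")   # LXMLTreeBuilderForXML ("xml" also works)
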
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/__init__.py
deleted file mode 100644
index 0defb82e21f21da442706e25145b4ef0b59d576c..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# -*- coding: utf-8 -*-
-try:
- from ._version import version as __version__
-except ImportError:
- __version__ = 'unknown'
-
-__all__ = ['easter', 'parser', 'relativedelta', 'rrule', 'tz',
- 'utils', 'zoneinfo']
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/mock_chain_wrapper.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/mock_chain_wrapper.py
deleted file mode 100644
index 87ffa2673240a684b9764fc8e39f4885cd775452..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/mock_chain_wrapper.py
+++ /dev/null
@@ -1,122 +0,0 @@
-"""Mock chain wrapper."""
-
-from typing import Any, Dict, Optional
-
-from langchain.llms.base import BaseLLM
-
-from gpt_index.constants import NUM_OUTPUTS
-from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor
-from gpt_index.prompts.base import Prompt
-from gpt_index.prompts.prompt_type import PromptType
-from gpt_index.token_counter.utils import (
- mock_extract_keywords_response,
- mock_extract_kg_triplets_response,
-)
-from gpt_index.utils import globals_helper
-
-# TODO: consolidate with unit tests in tests/mock_utils/mock_predict.py
-
-
-def _mock_summary_predict(max_tokens: int, prompt_args: Dict) -> str:
- """Mock summary predict."""
- # tokens in response shouldn't be larger than tokens in `context_str`
- num_text_tokens = len(globals_helper.tokenizer(prompt_args["context_str"]))
- token_limit = min(num_text_tokens, max_tokens)
- return " ".join(["summary"] * token_limit)
-
-
-def _mock_insert_predict() -> str:
- """Mock insert predict."""
- return "ANSWER: 1"
-
-
-def _mock_query_select() -> str:
- """Mock query select."""
- return "ANSWER: 1"
-
-
-def _mock_query_select_multiple(num_chunks: int) -> str:
- """Mock query select."""
- nums_str = ", ".join([str(i) for i in range(num_chunks)])
- return f"ANSWER: {nums_str}"
-
-
-def _mock_answer(max_tokens: int, prompt_args: Dict) -> str:
- """Mock answer."""
- # tokens in response shouldn't be larger than tokens in `text`
- num_ctx_tokens = len(globals_helper.tokenizer(prompt_args["context_str"]))
- token_limit = min(num_ctx_tokens, max_tokens)
- return " ".join(["answer"] * token_limit)
-
-
-def _mock_refine(max_tokens: int, prompt: Prompt, prompt_args: Dict) -> str:
- """Mock refine."""
- # tokens in response shouldn't be larger than tokens in
- # `existing_answer` + `context_msg`
- # NOTE: if existing_answer is not in prompt_args, we need to get it from the prompt
- if "existing_answer" not in prompt_args:
- existing_answer = prompt.partial_dict["existing_answer"]
- else:
- existing_answer = prompt_args["existing_answer"]
- num_ctx_tokens = len(globals_helper.tokenizer(prompt_args["context_msg"]))
- num_exist_tokens = len(globals_helper.tokenizer(existing_answer))
- token_limit = min(num_ctx_tokens + num_exist_tokens, max_tokens)
- return " ".join(["answer"] * token_limit)
-
-
-def _mock_keyword_extract(prompt_args: Dict) -> str:
- """Mock keyword extract."""
- return mock_extract_keywords_response(prompt_args["text"])
-
-
-def _mock_query_keyword_extract(prompt_args: Dict) -> str:
- """Mock query keyword extract."""
- return mock_extract_keywords_response(prompt_args["question"])
-
-
-def _mock_knowledge_graph_triplet_extract(prompt_args: Dict, max_triplets: int) -> str:
- """Mock knowledge graph triplet extract."""
- return mock_extract_kg_triplets_response(
- prompt_args["text"], max_triplets=max_triplets
- )
-
-
-class MockLLMPredictor(LLMPredictor):
- """Mock LLM Predictor."""
-
- def __init__(
- self, max_tokens: int = NUM_OUTPUTS, llm: Optional[BaseLLM] = None
- ) -> None:
- """Initialize params."""
- super().__init__(llm)
- # NOTE: don't call super, we don't want to instantiate LLM
- self.max_tokens = max_tokens
- self._total_tokens_used = 0
- self.flag = True
- self._last_token_usage = None
-
- def _predict(self, prompt: Prompt, **prompt_args: Any) -> str:
- """Mock predict."""
- prompt_str = prompt.prompt_type
- if prompt_str == PromptType.SUMMARY:
- return _mock_summary_predict(self.max_tokens, prompt_args)
- elif prompt_str == PromptType.TREE_INSERT:
- return _mock_insert_predict()
- elif prompt_str == PromptType.TREE_SELECT:
- return _mock_query_select()
- elif prompt_str == PromptType.TREE_SELECT_MULTIPLE:
- return _mock_query_select_multiple(prompt_args["num_chunks"])
- elif prompt_str == PromptType.REFINE:
- return _mock_refine(self.max_tokens, prompt, prompt_args)
- elif prompt_str == PromptType.QUESTION_ANSWER:
- return _mock_answer(self.max_tokens, prompt_args)
- elif prompt_str == PromptType.KEYWORD_EXTRACT:
- return _mock_keyword_extract(prompt_args)
- elif prompt_str == PromptType.QUERY_KEYWORD_EXTRACT:
- return _mock_query_keyword_extract(prompt_args)
- elif prompt_str == PromptType.KNOWLEDGE_TRIPLET_EXTRACT:
- return _mock_knowledge_graph_triplet_extract(
- prompt_args, prompt.partial_dict.get("max_knowledge_triplets", 2)
- )
- else:
- raise ValueError("Invalid prompt type.")
diff --git a/spaces/johnhelf/roop/roop/predicter.py b/spaces/johnhelf/roop/roop/predicter.py
deleted file mode 100644
index 7ebc2b62e4152c12ce41e55d718222ca9c8a8b7f..0000000000000000000000000000000000000000
--- a/spaces/johnhelf/roop/roop/predicter.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import numpy
-import opennsfw2
-from PIL import Image
-
-from roop.typing import Frame
-
-MAX_PROBABILITY = 0.85
-
-
-def predict_frame(target_frame: Frame) -> bool:
- image = Image.fromarray(target_frame)
- image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO)
- model = opennsfw2.make_open_nsfw_model()
- views = numpy.expand_dims(image, axis=0)
- _, probability = model.predict(views)[0]
- return probability > MAX_PROBABILITY
-
-
-def predict_image(target_path: str) -> bool:
- return opennsfw2.predict_image(target_path) > MAX_PROBABILITY
-
-
-def predict_video(target_path: str) -> bool:
- _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100)
- return any(probability > MAX_PROBABILITY for probability in probabilities)
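
A short sketch of how these gates might be called by a pipeline; the file paths are placeholders and the 0.85 cutoff is the MAX_PROBABILITY constant defined above:

    from roop.predicter import predict_image, predict_video

    if predict_image("target.jpg") or predict_video("target.mp4"):  # hypothetical paths
        raise SystemExit("Probable NSFW content detected; aborting.")
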
diff --git a/spaces/jskalbg/ChatDev01/online_log/static/replay/css/use.css b/spaces/jskalbg/ChatDev01/online_log/static/replay/css/use.css
deleted file mode 100644
index 8fb22ee16404f48c7b0ae3a870b5201e8e6f8721..0000000000000000000000000000000000000000
--- a/spaces/jskalbg/ChatDev01/online_log/static/replay/css/use.css
+++ /dev/null
@@ -1,234 +0,0 @@
-p,
-div,
-label {
- font-family: 'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;
-}
-
-body {
- background-color: #23252c;
-}
-
-.button {
- padding: 16px 15px;
- background: #e2edf0;
- color: #0b0c0c;
- font-weight: 800;
- font-size: 16px;
- cursor: pointer;
- height: 80px;
- box-shadow: 1px 2px 2px #505757;
- border-radius: 20px;
- border: #020202;
-}
-
-.blinking-animation {
- width: 25px;
- height: 25px;
- animation: blink 1s ease infinite;
-}
-
-@keyframes blink {
- 0%,
- 100% {
- opacity: 1;
- }
- 50% {
- opacity: 0;
- }
-}
-
-#filebutton {
- position: relative;
- left: 50px;
-}
-
-#title>p {
- font-size: 30px;
- color: #fefefe;
- text-shadow: 0 0 0.5em #0ae642, 0 0 0.2em #5c5c5c;
-}
-
-#replay {
- position: relative;
- left: 340px;
- width: 100px;
-}
-
-#successupload {
- position: absolute;
- top: 730px;
- left: 200px;
- color: antiquewhite;
- display: none;
-}
-
-#successupload>p {
- position: relative;
- left: 20px;
-}
-
-#fileInput {
- display: none;
-}
-
-#humanRequest {
- background-color: rgb(30, 39, 46);
- border: 1px solid #ffffff;
- border-radius: 10px;
- box-shadow: 3px 3px 4px black;
-}
-
-#dialogBody,
-#dialogStatistic {
- width: 790px;
- height: 570px;
- background-color: rgb(255, 255, 255);
- border: 1px solid #ccc;
- border-radius: 10px;
- box-shadow: 3px 3px 4px black;
- overflow: auto;
- padding: 20px;
- float: right;
- position: relative;
- margin-left: auto;
- top: 10px;
-}
-
-#speed {
- position: relative;
- width: 600px;
- top: 35px;
- right: -150px;
-}
-
-#speedcontrol>label {
- display: block;
- position: relative;
- top: 15px;
- width: 200px;
- color: aliceblue;
- font-size: medium;
- font-weight: bold;
-}
-
-[type="range"] {
- -webkit-appearance: none;
- appearance: none;
- margin: 0;
- outline: 0;
- background-color: transparent;
- width: 600px;
-}
-
-[type="range"]::-webkit-slider-runnable-track {
- height: 4px;
- background: #eee;
-}
-
-[type="range" i]::-webkit-slider-container {
- height: 25px;
- overflow: hidden;
-}
-
-[type="range"]::-webkit-slider-thumb {
- -webkit-appearance: none;
- appearance: none;
- width: 20px;
- height: 20px;
- border-radius: 30%;
- background-color: #ffffff;
- border: 1px solid transparent;
- margin-top: -8px;
- border-image: linear-gradient(#133163, #133163) 0 fill / 8 20 8 0 / 0px 0px 0 2000px;
-}
-
-#dialogStatistic {
- height: 52px;
- top: 30px;
- position: relative;
-}
-
-.message {
- margin: 10px;
-}
-
-#test {
- border: 1px solid rgba(130, 133, 186, 0.3);
- border-radius: 10px;
- box-shadow: 1px 2px 2px black;
- width: 100px;
- font-size: 18px;
- display: none;
- font-family: 'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;
-}
-
-img {
- height: 100%;
- width: 100%;
-}
-
-#imgShow {
- height: 450px;
- width: 600px;
- position: relative;
- top: 120px;
-}
-
-#successupload {
- width: 200px;
-}
-
-#show {
- display: flex;
- float: right;
- position: relative;
- right: -50px;
-}
-
-.info>p {
- font-size: large;
- font-weight: 900;
- position: relative;
- font-style: inherit;
- color: rgb(12, 13, 13);
-}
-
-.info>label {
- height: 17px;
- position: relative;
- align-items: center;
-}
-
-.info {
- display: block;
- height: 25px;
- position: relative;
- width: 200px;
- color: rgb(30, 39, 46);
- border-radius: 10px;
- font-size: small;
- font-weight: bold;
- font-style: inherit;
- display: block;
- font-weight: 900;
-}
-
-
-/* Optional styles for the text container */
-
-#text-container {
- font-size: 24px;
- line-height: 1.5;
-}
-
-
-/* Animation styles */
-
-@keyframes revealText {
- 0% {
- visibility: hidden;
- }
- 100% {
- visibility: visible;
- }
-}
\ No newline at end of file
diff --git a/spaces/jthteo/Whisper/README.md b/spaces/jthteo/Whisper/README.md
deleted file mode 100644
index 759f68b81ba27c6b67a19afec2991bf46b6e8929..0000000000000000000000000000000000000000
--- a/spaces/jthteo/Whisper/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Whisper
-emoji: 🤫
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: cc-by-nc-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/juancopi81/whisper-youtube-2-hf_dataset/storing/createdb.py b/spaces/juancopi81/whisper-youtube-2-hf_dataset/storing/createdb.py
deleted file mode 100644
index b551cf8de652e7563eb4f291daa802dc9dee6905..0000000000000000000000000000000000000000
--- a/spaces/juancopi81/whisper-youtube-2-hf_dataset/storing/createdb.py
+++ /dev/null
@@ -1,27 +0,0 @@
-"""Simple script to create a sqlite db with a single table
-called 'videos'.
-"""
-
-import sqlite3
-
-def create_db(db_path: str) -> None:
- """Create an sqlite db with a single table called 'videos'"""
- connection = sqlite3.connect(db_path)
- print(f"Created db successfully at '{db_path}'")
- connection.execute(
- '''
- CREATE TABLE VIDEO
- (ID INTEGER PRIMARY KEY AUTOINCREMENT,
- CHANNEL_NAME CHAR(30) NOT NULL,
- URL TEXT NOT NULL,
- TITLE CHAR(100),
- DESCRIPTION CHAR(5000),
- TRANSCRIPTION TEXT,
- SEGMENTS TEXT
- )
- '''
- )
- print(f"'Video' table created successfully")
-
-if __name__ == "__main__":
- create_db("video.db")
\ No newline at end of file
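
Once create_db has been run (as in the __main__ block above), rows could be inserted along these lines; the channel, URL, and title values are made up for illustration:

    import sqlite3

    conn = sqlite3.connect("video.db")
    conn.execute(
        "INSERT INTO VIDEO (CHANNEL_NAME, URL, TITLE) VALUES (?, ?, ?)",
        ("SomeChannel", "https://www.youtube.com/watch?v=XXXXXXXXXXX", "Example title"),
    )
    conn.commit()
    conn.close()
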
diff --git a/spaces/julyThree/anime-remove-background/app.py b/spaces/julyThree/anime-remove-background/app.py
deleted file mode 100644
index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000
--- a/spaces/julyThree/anime-remove-background/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import gradio as gr
-import huggingface_hub
-import onnxruntime as rt
-import numpy as np
-import cv2
-
-
-def get_mask(img, s=1024):
- img = (img / 255).astype(np.float32)
- h, w = h0, w0 = img.shape[:-1]
- h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
- ph, pw = s - h, s - w
- img_input = np.zeros([s, s, 3], dtype=np.float32)
- img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
- img_input = np.transpose(img_input, (2, 0, 1))
- img_input = img_input[np.newaxis, :]
- mask = rmbg_model.run(None, {'img': img_input})[0][0]
- mask = np.transpose(mask, (1, 2, 0))
- mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w]
- mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis]
- return mask
-
-
-def rmbg_fn(img):
- mask = get_mask(img)
- img = (mask * img + 255 * (1 - mask)).astype(np.uint8)
- mask = (mask * 255).astype(np.uint8)
- img = np.concatenate([img, mask], axis=2, dtype=np.uint8)
- mask = mask.repeat(3, axis=2)
- return mask, img
-
-
-if __name__ == "__main__":
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
- model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx")
- rmbg_model = rt.InferenceSession(model_path, providers=providers)
- app = gr.Blocks()
- with app:
- gr.Markdown("# Anime Remove Background\n\n"
- "\n\n"
- "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)")
- with gr.Row():
- with gr.Column():
- input_img = gr.Image(label="input image")
- examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)]
- examples = gr.Dataset(components=[input_img], samples=examples_data)
- run_btn = gr.Button(variant="primary")
- output_mask = gr.Image(label="mask")
- output_img = gr.Image(label="result", image_mode="RGBA")
- examples.click(lambda x: x[0], [examples], [input_img])
- run_btn.click(rmbg_fn, [input_img], [output_mask, output_img])
- app.launch()
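
Outside the Gradio UI, the same pipeline could be driven directly; a sketch that assumes the ONNX session has already been created exactly as in the __main__ block, with a placeholder input path:

    import cv2

    img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical file
    mask, cut_out = rmbg_fn(img)   # cut_out is RGBA: foreground plus alpha channel
    cv2.imwrite("cut_out.png", cv2.cvtColor(cut_out, cv2.COLOR_RGBA2BGRA))
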
diff --git a/spaces/justest/gpt4free/testing/binghuan/BingHuan.py b/spaces/justest/gpt4free/testing/binghuan/BingHuan.py
deleted file mode 100644
index 8c859c080a9ac63ea90fec07c6640486c657cb05..0000000000000000000000000000000000000000
--- a/spaces/justest/gpt4free/testing/binghuan/BingHuan.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import os,sys
-import json
-import subprocess
-# from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://b.ai-huan.xyz'
-model = ['gpt-3.5-turbo', 'gpt-4']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- path = os.path.dirname(os.path.realpath(__file__))
- config = json.dumps({
- 'messages': messages,
- 'model': model}, separators=(',', ':'))
- cmd = ['python', f'{path}/helpers/binghuan.py', config]
-
- p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
- for line in iter(p.stdout.readline, b''):
- yield line.decode('cp1252')
-
-
-
-# params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-# '(%s)' % ', '.join(
-# [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
-
-
-# Temporary For ChatCompletion Class
-class ChatCompletion:
- @staticmethod
- def create(model: str, messages: list, provider: None or str, stream: bool = False, auth: str = False, **kwargs):
- kwargs['auth'] = auth
-
- if provider and needs_auth and not auth:
- print(
- f'ValueError: {provider} requires authentication (use auth="cookie or token or jwt ..." param)', file=sys.stderr)
- sys.exit(1)
-
- try:
- return (_create_completion(model, messages, stream, **kwargs)
- if stream else ''.join(_create_completion(model, messages, stream, **kwargs)))
- except TypeError as e:
- print(e)
- arg: str = str(e).split("'")[1]
- print(
- f"ValueError: {provider} does not support '{arg}' argument", file=sys.stderr)
- sys.exit(1)
\ No newline at end of file
diff --git a/spaces/justinstberger2dwww2/artificialguybr-freedom/app.py b/spaces/justinstberger2dwww2/artificialguybr-freedom/app.py
deleted file mode 100644
index b15553a706a23a138d04f4f199e0c8f3b4138398..0000000000000000000000000000000000000000
--- a/spaces/justinstberger2dwww2/artificialguybr-freedom/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/artificialguybr/freedom").launch()
\ No newline at end of file
diff --git a/spaces/jyseo/3DFuse/voxnerf/__init__.py b/spaces/jyseo/3DFuse/voxnerf/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/kamezawash/rembg/rembg/cli.py b/spaces/kamezawash/rembg/rembg/cli.py
deleted file mode 100644
index 10f41b17541df0a11c11bab006fba09b733c0011..0000000000000000000000000000000000000000
--- a/spaces/kamezawash/rembg/rembg/cli.py
+++ /dev/null
@@ -1,362 +0,0 @@
-import pathlib
-import sys
-import time
-from enum import Enum
-from typing import IO, cast
-
-import aiohttp
-import click
-import filetype
-import uvicorn
-from asyncer import asyncify
-from fastapi import Depends, FastAPI, File, Query
-from fastapi.middleware.cors import CORSMiddleware
-from starlette.responses import Response
-from tqdm import tqdm
-from watchdog.events import FileSystemEvent, FileSystemEventHandler
-from watchdog.observers import Observer
-
-from . import _version
-from .bg import remove
-from .session_base import BaseSession
-from .session_factory import new_session
-
-
-@click.group()
-@click.version_option(version=_version.get_versions()["version"])
-def main() -> None:
- pass
-
-
-@main.command(help="for a file as input")
-@click.option(
- "-m",
- "--model",
- default="u2net",
- type=click.Choice(["u2net", "u2netp", "u2net_human_seg", "u2net_cloth_seg"]),
- show_default=True,
- show_choices=True,
- help="model name",
-)
-@click.option(
- "-a",
- "--alpha-matting",
- is_flag=True,
- show_default=True,
- help="use alpha matting",
-)
-@click.option(
- "-af",
- "--alpha-matting-foreground-threshold",
- default=240,
- type=int,
- show_default=True,
- help="trimap fg threshold",
-)
-@click.option(
- "-ab",
- "--alpha-matting-background-threshold",
- default=10,
- type=int,
- show_default=True,
- help="trimap bg threshold",
-)
-@click.option(
- "-ae",
- "--alpha-matting-erode-size",
- default=10,
- type=int,
- show_default=True,
- help="erode size",
-)
-@click.option(
- "-om",
- "--only-mask",
- is_flag=True,
- show_default=True,
- help="output only the mask",
-)
-@click.argument(
- "input", default=(None if sys.stdin.isatty() else "-"), type=click.File("rb")
-)
-@click.argument(
- "output",
- default=(None if sys.stdin.isatty() else "-"),
- type=click.File("wb", lazy=True),
-)
-def i(model: str, input: IO, output: IO, **kwargs) -> None:
- output.write(remove(input.read(), session=new_session(model), **kwargs))
-
-
-@main.command(help="for a folder as input")
-@click.option(
- "-m",
- "--model",
- default="u2net",
- type=click.Choice(["u2net", "u2netp", "u2net_human_seg", "u2net_cloth_seg"]),
- show_default=True,
- show_choices=True,
- help="model name",
-)
-@click.option(
- "-a",
- "--alpha-matting",
- is_flag=True,
- show_default=True,
- help="use alpha matting",
-)
-@click.option(
- "-af",
- "--alpha-matting-foreground-threshold",
- default=240,
- type=int,
- show_default=True,
- help="trimap fg threshold",
-)
-@click.option(
- "-ab",
- "--alpha-matting-background-threshold",
- default=10,
- type=int,
- show_default=True,
- help="trimap bg threshold",
-)
-@click.option(
- "-ae",
- "--alpha-matting-erode-size",
- default=10,
- type=int,
- show_default=True,
- help="erode size",
-)
-@click.option(
- "-om",
- "--only-mask",
- is_flag=True,
- show_default=True,
- help="output only the mask",
-)
-@click.option(
- "-w",
- "--watch",
- default=False,
- is_flag=True,
- show_default=True,
- help="watches a folder for changes",
-)
-@click.argument(
- "input",
- type=click.Path(
- exists=True,
- path_type=pathlib.Path,
- file_okay=False,
- dir_okay=True,
- readable=True,
- ),
-)
-@click.argument(
- "output",
- type=click.Path(
- exists=False,
- path_type=pathlib.Path,
- file_okay=False,
- dir_okay=True,
- writable=True,
- ),
-)
-def p(
- model: str, input: pathlib.Path, output: pathlib.Path, watch: bool, **kwargs
-) -> None:
- session = new_session(model)
-
- def process(each_input: pathlib.Path) -> None:
- try:
- mimetype = filetype.guess(each_input)
- if mimetype is None:
- return
- if mimetype.mime.find("image") < 0:
- return
-
- each_output = (output / each_input.name).with_suffix(".png")
- each_output.parents[0].mkdir(parents=True, exist_ok=True)
-
- if not each_output.exists():
- each_output.write_bytes(
- cast(
- bytes,
- remove(each_input.read_bytes(), session=session, **kwargs),
- )
- )
-
- if watch:
- print(
- f"processed: {each_input.absolute()} -> {each_output.absolute()}"
- )
- except Exception as e:
- print(e)
-
- inputs = list(input.glob("**/*"))
- if not watch:
- inputs = tqdm(inputs)
-
- for each_input in inputs:
- if not each_input.is_dir():
- process(each_input)
-
- if watch:
- observer = Observer()
-
- class EventHandler(FileSystemEventHandler):
- def on_any_event(self, event: FileSystemEvent) -> None:
- if not (
- event.is_directory or event.event_type in ["deleted", "closed"]
- ):
- process(pathlib.Path(event.src_path))
-
- event_handler = EventHandler()
- observer.schedule(event_handler, input, recursive=False)
- observer.start()
-
- try:
- while True:
- time.sleep(1)
-
- finally:
- observer.stop()
- observer.join()
-
-
-@main.command(help="for a http server")
-@click.option(
- "-p",
- "--port",
- default=5000,
- type=int,
- show_default=True,
- help="port",
-)
-@click.option(
- "-l",
- "--log_level",
- default="info",
- type=str,
- show_default=True,
- help="log level",
-)
-def s(port: int, log_level: str) -> None:
- sessions: dict[str, BaseSession] = {}
- tags_metadata = [
- {
- "name": "Background Removal",
- "description": "Endpoints that perform background removal with different image sources.",
- "externalDocs": {
- "description": "GitHub Source",
- "url": "https://github.com/danielgatis/rembg",
- },
- },
- ]
- app = FastAPI(
- title="Rembg",
- description="Rembg is a tool to remove images background. That is it.",
- version=_version.get_versions()["version"],
- contact={
- "name": "Daniel Gatis",
- "url": "https://github.com/danielgatis",
- "email": "danielgatis@gmail.com",
- },
- license_info={
- "name": "MIT License",
- "url": "https://github.com/danielgatis/rembg/blob/main/LICENSE.txt",
- },
- openapi_tags=tags_metadata,
- )
-
- app.add_middleware(
- CORSMiddleware,
- allow_credentials=True,
- allow_origins=["*"],
- allow_methods=["*"],
- allow_headers=["*"],
- )
-
- class ModelType(str, Enum):
- u2net = "u2net"
- u2netp = "u2netp"
- u2net_human_seg = "u2net_human_seg"
- u2net_cloth_seg = "u2net_cloth_seg"
-
- class CommonQueryParams:
- def __init__(
- self,
- model: ModelType = Query(
- default=ModelType.u2net,
- description="Model to use when processing image",
- ),
- a: bool = Query(default=False, description="Enable Alpha Matting"),
- af: int = Query(
- default=240, ge=0, description="Alpha Matting (Foreground Threshold)"
- ),
- ab: int = Query(
- default=10, ge=0, description="Alpha Matting (Background Threshold)"
- ),
- ae: int = Query(
- default=10, ge=0, description="Alpha Matting (Erode Structure Size)"
- ),
- om: bool = Query(default=False, description="Only Mask"),
- ):
- self.model = model
- self.a = a
- self.af = af
- self.ab = ab
- self.ae = ae
- self.om = om
-
- def im_without_bg(content: bytes, commons: CommonQueryParams) -> Response:
- return Response(
- remove(
- content,
- session=sessions.setdefault(
- commons.model.value, new_session(commons.model.value)
- ),
- alpha_matting=commons.a,
- alpha_matting_foreground_threshold=commons.af,
- alpha_matting_background_threshold=commons.ab,
- alpha_matting_erode_size=commons.ae,
- only_mask=commons.om,
- ),
- media_type="image/png",
- )
-
- @app.get(
- path="/",
- tags=["Background Removal"],
- summary="Remove from URL",
- description="Removes the background from an image obtained by retrieving an URL.",
- )
- async def get_index(
- url: str = Query(
- default=..., description="URL of the image that has to be processed."
- ),
- commons: CommonQueryParams = Depends(),
- ):
- async with aiohttp.ClientSession() as session:
- async with session.get(url) as response:
- file = await response.read()
- return await asyncify(im_without_bg)(file, commons)
-
- @app.post(
- path="/",
- tags=["Background Removal"],
- summary="Remove from Stream",
- description="Removes the background from an image sent within the request itself.",
- )
- async def post_index(
- file: bytes = File(
- default=...,
- description="Image file (byte stream) that has to be processed.",
- ),
- commons: CommonQueryParams = Depends(),
- ):
- return await asyncify(im_without_bg)(file, commons)
-
- uvicorn.run(app, host="0.0.0.0", port=port, log_level=log_level)
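
The `i` subcommand above boils down to the library call sketched here; the file paths are placeholders and "u2net" is just the default model name from the CLI options:

    from rembg.bg import remove
    from rembg.session_factory import new_session

    with open("input.png", "rb") as src:           # hypothetical paths
        cut_out = remove(src.read(), session=new_session("u2net"))
    with open("output.png", "wb") as dst:
        dst.write(cut_out)
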
diff --git a/spaces/kcagle/AutoGPT/autogpt/commands/execute_code.py b/spaces/kcagle/AutoGPT/autogpt/commands/execute_code.py
deleted file mode 100644
index 11266f852727f2f8aedbc995b1e504a17acbfb77..0000000000000000000000000000000000000000
--- a/spaces/kcagle/AutoGPT/autogpt/commands/execute_code.py
+++ /dev/null
@@ -1,158 +0,0 @@
-"""Execute code in a Docker container"""
-import os
-import subprocess
-
-import docker
-from docker.errors import ImageNotFound
-
-from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
-
-
-def execute_python_file(file: str) -> str:
- """Execute a Python file in a Docker container and return the output
-
- Args:
- file (str): The name of the file to execute
-
- Returns:
- str: The output of the file
- """
-
- print(f"Executing file '{file}' in workspace '{WORKSPACE_PATH}'")
-
- if not file.endswith(".py"):
- return "Error: Invalid file type. Only .py files are allowed."
-
- file_path = path_in_workspace(file)
-
- if not os.path.isfile(file_path):
- return f"Error: File '{file}' does not exist."
-
- if we_are_running_in_a_docker_container():
- result = subprocess.run(
- f"python {file_path}", capture_output=True, encoding="utf8", shell=True
- )
- if result.returncode == 0:
- return result.stdout
- else:
- return f"Error: {result.stderr}"
-
- try:
- client = docker.from_env()
-
- # You can replace this with the desired Python image/version
- # You can find available Python images on Docker Hub:
- # https://hub.docker.com/_/python
- image_name = "python:3-alpine"
- try:
- client.images.get(image_name)
- print(f"Image '{image_name}' found locally")
- except ImageNotFound:
- print(f"Image '{image_name}' not found locally, pulling from Docker Hub")
- # Use the low-level API to stream the pull response
- low_level_client = docker.APIClient()
- for line in low_level_client.pull(image_name, stream=True, decode=True):
- # Print the status and progress, if available
- status = line.get("status")
- progress = line.get("progress")
- if status and progress:
- print(f"{status}: {progress}")
- elif status:
- print(status)
-
- container = client.containers.run(
- image_name,
- f"python {file}",
- volumes={
- os.path.abspath(WORKSPACE_PATH): {
- "bind": "/workspace",
- "mode": "ro",
- }
- },
- working_dir="/workspace",
- stderr=True,
- stdout=True,
- detach=True,
- )
-
- container.wait()
- logs = container.logs().decode("utf-8")
- container.remove()
-
- # print(f"Execution complete. Output: {output}")
- # print(f"Logs: {logs}")
-
- return logs
-
- except docker.errors.DockerException as e:
- print(
- "Could not run the script in a container. If you haven't already, please install Docker https://docs.docker.com/get-docker/"
- )
- return f"Error: {str(e)}"
-
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def execute_shell(command_line: str) -> str:
- """Execute a shell command and return the output
-
- Args:
- command_line (str): The command line to execute
-
- Returns:
- str: The output of the command
- """
- current_dir = os.getcwd()
- # Change dir into workspace if necessary
- if str(WORKSPACE_PATH) not in current_dir:
- os.chdir(WORKSPACE_PATH)
-
- print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
-
- result = subprocess.run(command_line, capture_output=True, shell=True)
- output = f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
-
- # Change back to whatever the prior working dir was
-
- os.chdir(current_dir)
-
- return output
-
-
-def execute_shell_popen(command_line) -> str:
- """Execute a shell command with Popen and returns an english description
- of the event and the process id
-
- Args:
- command_line (str): The command line to execute
-
- Returns:
- str: Description of the fact that the process started and its id
- """
- current_dir = os.getcwd()
- # Change dir into workspace if necessary
- if str(WORKSPACE_PATH) not in current_dir:
- os.chdir(WORKSPACE_PATH)
-
- print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'")
-
- do_not_show_output = subprocess.DEVNULL
- process = subprocess.Popen(
- command_line, shell=True, stdout=do_not_show_output, stderr=do_not_show_output
- )
-
- # Change back to whatever the prior working dir was
-
- os.chdir(current_dir)
-
- return f"Subprocess started with PID:'{str(process.pid)}'"
-
-
-def we_are_running_in_a_docker_container() -> bool:
- """Check if we are running in a Docker container
-
- Returns:
- bool: True if we are running in a Docker container, False otherwise
- """
- return os.path.exists("/.dockerenv")
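
A hedged sketch of how these helpers are invoked; the script name is a placeholder for a file inside the AutoGPT workspace:

    from autogpt.commands.execute_code import execute_python_file, execute_shell

    # Runs inside a python:3-alpine container unless this process is already in Docker.
    print(execute_python_file("hello_world.py"))   # hypothetical workspace file
    # Returns a combined "STDOUT:\n...\nSTDERR:\n..." string.
    print(execute_shell("ls -la"))
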
diff --git a/spaces/ken4005/Uhi-ChatGPT/README.md b/spaces/ken4005/Uhi-ChatGPT/README.md
deleted file mode 100644
index 8df99398dd07f6fce2e1c98ad18fb9a21b619318..0000000000000000000000000000000000000000
--- a/spaces/ken4005/Uhi-ChatGPT/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChuanhuChatGPT
-emoji: 🐯
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
-duplicated_from: JohnSmith9982/ChuanhuChatGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/keras-dreambooth/galaxy-mergers/README.md b/spaces/keras-dreambooth/galaxy-mergers/README.md
deleted file mode 100644
index 78206402cb977f4d11f69bb5ed5f340a1df97331..0000000000000000000000000000000000000000
--- a/spaces/keras-dreambooth/galaxy-mergers/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Galaxy Mergers
-emoji: 🪐
-colorFrom: yellow
-colorTo: black
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-tags:
- - keras-dreambooth
- - nature
----
\ No newline at end of file
diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/backbones/iresnet2060.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/backbones/iresnet2060.py
deleted file mode 100644
index 21d1122144d207637d2444cba1f68fe630c89f31..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/backbones/iresnet2060.py
+++ /dev/null
@@ -1,176 +0,0 @@
-import torch
-from torch import nn
-
-assert torch.__version__ >= "1.8.1"
-from torch.utils.checkpoint import checkpoint_sequential
-
-__all__ = ['iresnet2060']
-
-
-def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
- """3x3 convolution with padding"""
- return nn.Conv2d(in_planes,
- out_planes,
- kernel_size=3,
- stride=stride,
- padding=dilation,
- groups=groups,
- bias=False,
- dilation=dilation)
-
-
-def conv1x1(in_planes, out_planes, stride=1):
- """1x1 convolution"""
- return nn.Conv2d(in_planes,
- out_planes,
- kernel_size=1,
- stride=stride,
- bias=False)
-
-
-class IBasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None,
- groups=1, base_width=64, dilation=1):
- super(IBasicBlock, self).__init__()
- if groups != 1 or base_width != 64:
- raise ValueError('BasicBlock only supports groups=1 and base_width=64')
- if dilation > 1:
- raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
- self.bn1 = nn.BatchNorm2d(inplanes, eps=1e-05, )
- self.conv1 = conv3x3(inplanes, planes)
- self.bn2 = nn.BatchNorm2d(planes, eps=1e-05, )
- self.prelu = nn.PReLU(planes)
- self.conv2 = conv3x3(planes, planes, stride)
- self.bn3 = nn.BatchNorm2d(planes, eps=1e-05, )
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- identity = x
- out = self.bn1(x)
- out = self.conv1(out)
- out = self.bn2(out)
- out = self.prelu(out)
- out = self.conv2(out)
- out = self.bn3(out)
- if self.downsample is not None:
- identity = self.downsample(x)
- out += identity
- return out
-
-
-class IResNet(nn.Module):
- fc_scale = 7 * 7
-
- def __init__(self,
- block, layers, dropout=0, num_features=512, zero_init_residual=False,
- groups=1, width_per_group=64, replace_stride_with_dilation=None, fp16=False):
- super(IResNet, self).__init__()
- self.fp16 = fp16
- self.inplanes = 64
- self.dilation = 1
- if replace_stride_with_dilation is None:
- replace_stride_with_dilation = [False, False, False]
- if len(replace_stride_with_dilation) != 3:
- raise ValueError("replace_stride_with_dilation should be None "
- "or a 3-element tuple, got {}".format(replace_stride_with_dilation))
- self.groups = groups
- self.base_width = width_per_group
- self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)
- self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05)
- self.prelu = nn.PReLU(self.inplanes)
- self.layer1 = self._make_layer(block, 64, layers[0], stride=2)
- self.layer2 = self._make_layer(block,
- 128,
- layers[1],
- stride=2,
- dilate=replace_stride_with_dilation[0])
- self.layer3 = self._make_layer(block,
- 256,
- layers[2],
- stride=2,
- dilate=replace_stride_with_dilation[1])
- self.layer4 = self._make_layer(block,
- 512,
- layers[3],
- stride=2,
- dilate=replace_stride_with_dilation[2])
- self.bn2 = nn.BatchNorm2d(512 * block.expansion, eps=1e-05, )
- self.dropout = nn.Dropout(p=dropout, inplace=True)
- self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features)
- self.features = nn.BatchNorm1d(num_features, eps=1e-05)
- nn.init.constant_(self.features.weight, 1.0)
- self.features.weight.requires_grad = False
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.normal_(m.weight, 0, 0.1)
- elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- if zero_init_residual:
- for m in self.modules():
- if isinstance(m, IBasicBlock):
- nn.init.constant_(m.bn2.weight, 0)
-
- def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
- downsample = None
- previous_dilation = self.dilation
- if dilate:
- self.dilation *= stride
- stride = 1
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- conv1x1(self.inplanes, planes * block.expansion, stride),
- nn.BatchNorm2d(planes * block.expansion, eps=1e-05, ),
- )
- layers = []
- layers.append(
- block(self.inplanes, planes, stride, downsample, self.groups,
- self.base_width, previous_dilation))
- self.inplanes = planes * block.expansion
- for _ in range(1, blocks):
- layers.append(
- block(self.inplanes,
- planes,
- groups=self.groups,
- base_width=self.base_width,
- dilation=self.dilation))
-
- return nn.Sequential(*layers)
-
- def checkpoint(self, func, num_seg, x):
- if self.training:
- return checkpoint_sequential(func, num_seg, x)
- else:
- return func(x)
-
- def forward(self, x):
- with torch.cuda.amp.autocast(self.fp16):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.prelu(x)
- x = self.layer1(x)
- x = self.checkpoint(self.layer2, 20, x)
- x = self.checkpoint(self.layer3, 100, x)
- x = self.layer4(x)
- x = self.bn2(x)
- x = torch.flatten(x, 1)
- x = self.dropout(x)
- x = self.fc(x.float() if self.fp16 else x)
- x = self.features(x)
- return x
-
-
-def _iresnet(arch, block, layers, pretrained, progress, **kwargs):
- model = IResNet(block, layers, **kwargs)
- if pretrained:
- raise ValueError()
- return model
-
-
-def iresnet2060(pretrained=False, progress=True, **kwargs):
- return _iresnet('iresnet2060', IBasicBlock, [3, 128, 1024 - 128, 3], pretrained, progress, **kwargs)
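
A quick shape check for the factory above; the 112x112 ArcFace-style input is an assumption consistent with fc_scale = 7 * 7, and a 2060-layer backbone is extremely heavy to instantiate:

    import torch

    model = iresnet2060(num_features=512).eval()   # eval() skips checkpoint_sequential
    with torch.no_grad():
        emb = model(torch.randn(1, 3, 112, 112))
    print(emb.shape)   # expected: torch.Size([1, 512])
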
diff --git a/spaces/kevinwang676/ControlNet-with-GPT-4/app_lineart.py b/spaces/kevinwang676/ControlNet-with-GPT-4/app_lineart.py
deleted file mode 100644
index 86a5362a8345d942086ef487b37bb9cc0a13ed94..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ControlNet-with-GPT-4/app_lineart.py
+++ /dev/null
@@ -1,105 +0,0 @@
-#!/usr/bin/env python
-
-import gradio as gr
-
-from settings import (
- DEFAULT_IMAGE_RESOLUTION,
- DEFAULT_NUM_IMAGES,
- MAX_IMAGE_RESOLUTION,
- MAX_NUM_IMAGES,
- MAX_SEED,
-)
-from utils import randomize_seed_fn
-
-
-def create_demo(process):
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- image = gr.Image()
- prompt = gr.Textbox(label="Prompt")
- run_button = gr.Button("Run")
- with gr.Accordion("Advanced options", open=False):
- preprocessor_name = gr.Radio(
- label="Preprocessor",
- choices=[
- "Lineart",
- "Lineart coarse",
- "None",
- "Lineart (anime)",
- "None (anime)",
- ],
- type="value",
- value="Lineart",
- info='Note that "Lineart (anime)" and "None (anime)" are for anime base models like Anything-v3.',
- )
- num_samples = gr.Slider(
- label="Number of images", minimum=1, maximum=MAX_NUM_IMAGES, value=DEFAULT_NUM_IMAGES, step=1
- )
- image_resolution = gr.Slider(
- label="Image resolution",
- minimum=256,
- maximum=MAX_IMAGE_RESOLUTION,
- value=DEFAULT_IMAGE_RESOLUTION,
- step=256,
- )
- preprocess_resolution = gr.Slider(
- label="Preprocess resolution", minimum=128, maximum=512, value=512, step=1
- )
- num_steps = gr.Slider(label="Number of steps", minimum=1, maximum=100, value=20, step=1)
- guidance_scale = gr.Slider(label="Guidance scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
- seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=0)
- randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
- a_prompt = gr.Textbox(label="Additional prompt", value="best quality, extremely detailed")
- n_prompt = gr.Textbox(
- label="Negative prompt",
- value="longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
- )
- with gr.Column():
- result = gr.Gallery(label="Output", show_label=False, columns=2, object_fit="scale-down")
- inputs = [
- image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- preprocess_resolution,
- num_steps,
- guidance_scale,
- seed,
- preprocessor_name,
- ]
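- # chain seed randomization and then generation for both prompt submit and the Run button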
- prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- queue=False,
- api_name=False,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- api_name=False,
- )
- run_button.click(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- queue=False,
- api_name=False,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- api_name="lineart",
- )
- return demo
-
-
-if __name__ == "__main__":
- from model import Model
-
- model = Model(task_name="lineart")
- demo = create_demo(model.process_lineart)
- demo.queue().launch()
diff --git a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_canny.py b/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_canny.py
deleted file mode 100644
index a313ffda0a74b6373e90681aba6cd0e9a8736c86..0000000000000000000000000000000000000000
--- a/spaces/king007/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_canny.py
+++ /dev/null
@@ -1,183 +0,0 @@
-import cv2
-import gradio as gr
-import numpy as np
-import torch
-from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
-from PIL import Image
-
-from diffusion_webui.utils.model_list import (
- controlnet_canny_model_list,
- stable_model_list,
-)
-from diffusion_webui.utils.scheduler_list import (
- SCHEDULER_LIST,
- get_scheduler_list,
-)
-
-
-class StableDiffusionControlNetCannyGenerator:
- def __init__(self):
- self.pipe = None
-
- def load_model(self, stable_model_path, controlnet_model_path, scheduler):
- if self.pipe is None:
- controlnet = ControlNetModel.from_pretrained(
- controlnet_model_path, torch_dtype=torch.float16
- )
- self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
- pretrained_model_name_or_path=stable_model_path,
- controlnet=controlnet,
- safety_checker=None,
- torch_dtype=torch.float16,
- )
-
- self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler)
- self.pipe.to("cuda")
- self.pipe.enable_xformers_memory_efficient_attention()
-
- return self.pipe
-
- def controlnet_canny(
- self,
- image_path: str,
- ):
- image = Image.open(image_path)
- image = np.array(image)
-
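- # run OpenCV Canny on the input, then stack the single edge channel into a 3-channel image for ControlNet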
- image = cv2.Canny(image, 100, 200)
- image = image[:, :, None]
- image = np.concatenate([image, image, image], axis=2)
- image = Image.fromarray(image)
-
- return image
-
- def generate_image(
- self,
- image_path: str,
- stable_model_path: str,
- controlnet_model_path: str,
- prompt: str,
- negative_prompt: str,
- num_images_per_prompt: int,
- guidance_scale: int,
- num_inference_step: int,
- scheduler: str,
- seed_generator: int,
- ):
- pipe = self.load_model(
- stable_model_path=stable_model_path,
- controlnet_model_path=controlnet_model_path,
- scheduler=scheduler,
- )
-
- image = self.controlnet_canny(image_path=image_path)
-
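- # treat a seed of 0 as "randomize": draw a fresh seed, otherwise seed deterministically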
- if seed_generator == 0:
- random_seed = torch.randint(0, 1000000, (1,))
- generator = torch.manual_seed(random_seed)
- else:
- generator = torch.manual_seed(seed_generator)
-
- output = pipe(
- prompt=prompt,
- image=image,
- negative_prompt=negative_prompt,
- num_images_per_prompt=num_images_per_prompt,
- num_inference_steps=num_inference_step,
- guidance_scale=guidance_scale,
- generator=generator,
- ).images
-
- return output
-
- def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- controlnet_canny_image_file = gr.Image(
- type="filepath", label="Image"
- )
-
- controlnet_canny_prompt = gr.Textbox(
- lines=1,
- placeholder="Prompt",
- show_label=False,
- )
-
- controlnet_canny_negative_prompt = gr.Textbox(
- lines=1,
- placeholder="Negative Prompt",
- show_label=False,
- )
- with gr.Row():
- with gr.Column():
- controlnet_canny_stable_model_id = gr.Dropdown(
- choices=stable_model_list,
- value=stable_model_list[0],
- label="Stable Model Id",
- )
-
- controlnet_canny_guidance_scale = gr.Slider(
- minimum=0.1,
- maximum=15,
- step=0.1,
- value=7.5,
- label="Guidance Scale",
- )
- controlnet_canny_num_inference_step = gr.Slider(
- minimum=1,
- maximum=100,
- step=1,
- value=50,
- label="Num Inference Step",
- )
- controlnet_canny_num_images_per_prompt = gr.Slider(
- minimum=1,
- maximum=10,
- step=1,
- value=1,
- label="Number Of Images",
- )
- with gr.Row():
- with gr.Column():
- controlnet_canny_model_id = gr.Dropdown(
- choices=controlnet_canny_model_list,
- value=controlnet_canny_model_list[0],
- label="ControlNet Model Id",
- )
-
- controlnet_canny_scheduler = gr.Dropdown(
- choices=SCHEDULER_LIST,
- value=SCHEDULER_LIST[0],
- label="Scheduler",
- )
-
- controlnet_canny_seed_generator = gr.Number(
- value=0,
- label="Seed Generator",
- )
- controlnet_canny_predict = gr.Button(value="Generate")
-
- with gr.Column():
- output_image = gr.Gallery(
- label="Generated images",
- show_label=False,
- elem_id="gallery",
- ).style(grid=(1, 2))
-
- controlnet_canny_predict.click(
- fn=StableDiffusionControlNetCannyGenerator().generate_image,
- inputs=[
- controlnet_canny_image_file,
- controlnet_canny_stable_model_id,
- controlnet_canny_model_id,
- controlnet_canny_prompt,
- controlnet_canny_negative_prompt,
- controlnet_canny_num_images_per_prompt,
- controlnet_canny_guidance_scale,
- controlnet_canny_num_inference_step,
- controlnet_canny_scheduler,
- controlnet_canny_seed_generator,
- ],
- outputs=[output_image],
- )
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/utils/__init__.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/utils/__init__.py
deleted file mode 100644
index 3d3bdd349b9f2ae499a2fcb2ac1d2e3c77befebe..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/utils/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .drop import DropPath
-from .inverted_residual import InvertedResidual, InvertedResidualV3
-from .make_divisible import make_divisible
-from .res_layer import ResLayer
-from .se_layer import SELayer
-from .self_attention_block import SelfAttentionBlock
-from .up_conv_block import UpConvBlock
-from .weight_init import trunc_normal_
-
-__all__ = [
- 'ResLayer', 'SelfAttentionBlock', 'make_divisible', 'InvertedResidual',
- 'UpConvBlock', 'InvertedResidualV3', 'SELayer', 'DropPath', 'trunc_normal_'
-]
diff --git a/spaces/kmkarakaya/Auto_Review_Generation_in_Turkish/app.py b/spaces/kmkarakaya/Auto_Review_Generation_in_Turkish/app.py
deleted file mode 100644
index 71ca8dc74f248571ec7ce5198d56a527b8ce5e8e..0000000000000000000000000000000000000000
--- a/spaces/kmkarakaya/Auto_Review_Generation_in_Turkish/app.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import gradio as gr
-from transformers import AutoTokenizer, TFGPT2LMHeadModel
-
-review_model = TFGPT2LMHeadModel.from_pretrained("kmkarakaya/turkishReviews-ds")
-review_tokenizer = AutoTokenizer.from_pretrained("kmkarakaya/turkishReviews-ds")
-
-def generate_review(prompt):
- if prompt=="":
- prompt = " "
- input_ids = review_tokenizer.encode(prompt, return_tensors='tf')
- context_length = 40
- output = review_model.generate(
- input_ids,
- do_sample=True,
- max_length=context_length,
- top_k=10,
- no_repeat_ngram_size=2,
- early_stopping=True
- )
- return(review_tokenizer.decode(output[0], skip_special_tokens=True))
-
-
-
-title="Turkish Review Generator: A GPT2 based Text Generator Trained with a Custom Dataset"
-description= """Generate a review in Turkish by providing a prompt or selecting an example prompt below.
-Generation takes 15-20 seconds on average.
-Enjoy!
-
-
-"""
-
-#NOTE: Examples can sometimes generate ERROR. When you see ERROR on the screen just click SUBMIT. Model will generate text in 15-20 secs.
-article = """On YouTube:
-
-
- On Medium:
- """
-examples=["Bir hafta önce aldığım cep telefonu çalışmıyor.",
- "Tatil için yaptığım rezervasyonu iptal edemiyorum.",
- "Geçen ay sipariş verdiğim ayakkabı gelmedi.",
- "Abone olduğum spor salonu kapandı.",
- "Buzdolabından garip sesler geliyor.",
- "Otel tam bir fiyasko."]
-
-
-demo = gr.Interface(fn=generate_review,
- inputs= gr.Textbox(lines=5, label="Prompt", placeholder="enter or select a prompt below..."),
- outputs= gr.Textbox(lines=5, label="Generated Review", placeholder="generated review will appear here..."),
- examples=examples,
- title=title,
- description= description,
- article = article,
- #cache_examples = False
- allow_flagging="manual",
- flagging_options=["good","moderate", "non-sense", ]
- #flagging_dir='./flags'
- )
-
-
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/raft/evaluate.py b/spaces/kukuhtw/VToonify/vtoonify/model/raft/evaluate.py
deleted file mode 100644
index 431a0f58891bede2804454fa7f28e9434c4c8746..0000000000000000000000000000000000000000
--- a/spaces/kukuhtw/VToonify/vtoonify/model/raft/evaluate.py
+++ /dev/null
@@ -1,197 +0,0 @@
-import sys
-sys.path.append('core')
-
-from PIL import Image
-import argparse
-import os
-import time
-import numpy as np
-import torch
-import torch.nn.functional as F
-import matplotlib.pyplot as plt
-
-import datasets
-from utils import flow_viz
-from utils import frame_utils
-
-from raft import RAFT
-from utils.utils import InputPadder, forward_interpolate
-
-
-@torch.no_grad()
-def create_sintel_submission(model, iters=32, warm_start=False, output_path='sintel_submission'):
- """ Create submission for the Sintel leaderboard """
- model.eval()
- for dstype in ['clean', 'final']:
- test_dataset = datasets.MpiSintel(split='test', aug_params=None, dstype=dstype)
-
- flow_prev, sequence_prev = None, None
- for test_id in range(len(test_dataset)):
- image1, image2, (sequence, frame) = test_dataset[test_id]
- if sequence != sequence_prev:
- flow_prev = None
-
- padder = InputPadder(image1.shape)
- image1, image2 = padder.pad(image1[None].cuda(), image2[None].cuda())
-
- flow_low, flow_pr = model(image1, image2, iters=iters, flow_init=flow_prev, test_mode=True)
- flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy()
-
- if warm_start:
- flow_prev = forward_interpolate(flow_low[0])[None].cuda()
-
- output_dir = os.path.join(output_path, dstype, sequence)
- output_file = os.path.join(output_dir, 'frame%04d.flo' % (frame+1))
-
- if not os.path.exists(output_dir):
- os.makedirs(output_dir)
-
- frame_utils.writeFlow(output_file, flow)
- sequence_prev = sequence
-
-
-@torch.no_grad()
-def create_kitti_submission(model, iters=24, output_path='kitti_submission'):
- """ Create submission for the Sintel leaderboard """
- model.eval()
- test_dataset = datasets.KITTI(split='testing', aug_params=None)
-
- if not os.path.exists(output_path):
- os.makedirs(output_path)
-
- for test_id in range(len(test_dataset)):
- image1, image2, (frame_id, ) = test_dataset[test_id]
- padder = InputPadder(image1.shape, mode='kitti')
- image1, image2 = padder.pad(image1[None].cuda(), image2[None].cuda())
-
- _, flow_pr = model(image1, image2, iters=iters, test_mode=True)
- flow = padder.unpad(flow_pr[0]).permute(1, 2, 0).cpu().numpy()
-
- output_filename = os.path.join(output_path, frame_id)
- frame_utils.writeFlowKITTI(output_filename, flow)
-
-
-@torch.no_grad()
-def validate_chairs(model, iters=24):
- """ Perform evaluation on the FlyingChairs (test) split """
- model.eval()
- epe_list = []
-
- val_dataset = datasets.FlyingChairs(split='validation')
- for val_id in range(len(val_dataset)):
- image1, image2, flow_gt, _ = val_dataset[val_id]
- image1 = image1[None].cuda()
- image2 = image2[None].cuda()
-
- _, flow_pr = model(image1, image2, iters=iters, test_mode=True)
- epe = torch.sum((flow_pr[0].cpu() - flow_gt)**2, dim=0).sqrt()
- epe_list.append(epe.view(-1).numpy())
-
- epe = np.mean(np.concatenate(epe_list))
- print("Validation Chairs EPE: %f" % epe)
- return {'chairs': epe}
-
-
-@torch.no_grad()
-def validate_sintel(model, iters=32):
- """ Peform validation using the Sintel (train) split """
- model.eval()
- results = {}
- for dstype in ['clean', 'final']:
- val_dataset = datasets.MpiSintel(split='training', dstype=dstype)
- epe_list = []
-
- for val_id in range(len(val_dataset)):
- image1, image2, flow_gt, _ = val_dataset[val_id]
- image1 = image1[None].cuda()
- image2 = image2[None].cuda()
-
- padder = InputPadder(image1.shape)
- image1, image2 = padder.pad(image1, image2)
-
- flow_low, flow_pr = model(image1, image2, iters=iters, test_mode=True)
- flow = padder.unpad(flow_pr[0]).cpu()
-
- epe = torch.sum((flow - flow_gt)**2, dim=0).sqrt()
- epe_list.append(epe.view(-1).numpy())
-
- epe_all = np.concatenate(epe_list)
- epe = np.mean(epe_all)
- px1 = np.mean(epe_all<1)
- px3 = np.mean(epe_all<3)
- px5 = np.mean(epe_all<5)
-
- print("Validation (%s) EPE: %f, 1px: %f, 3px: %f, 5px: %f" % (dstype, epe, px1, px3, px5))
- results[dstype] = np.mean(epe_list)
-
- return results
-
-
-@torch.no_grad()
-def validate_kitti(model, iters=24):
- """ Peform validation using the KITTI-2015 (train) split """
- model.eval()
- val_dataset = datasets.KITTI(split='training')
-
- out_list, epe_list = [], []
- for val_id in range(len(val_dataset)):
- image1, image2, flow_gt, valid_gt = val_dataset[val_id]
- image1 = image1[None].cuda()
- image2 = image2[None].cuda()
-
- padder = InputPadder(image1.shape, mode='kitti')
- image1, image2 = padder.pad(image1, image2)
-
- flow_low, flow_pr = model(image1, image2, iters=iters, test_mode=True)
- flow = padder.unpad(flow_pr[0]).cpu()
-
- epe = torch.sum((flow - flow_gt)**2, dim=0).sqrt()
- mag = torch.sum(flow_gt**2, dim=0).sqrt()
-
- epe = epe.view(-1)
- mag = mag.view(-1)
- val = valid_gt.view(-1) >= 0.5
-
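- # KITTI Fl outlier: endpoint error above 3 px and above 5% of the ground-truth flow magnitude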
- out = ((epe > 3.0) & ((epe/mag) > 0.05)).float()
- epe_list.append(epe[val].mean().item())
- out_list.append(out[val].cpu().numpy())
-
- epe_list = np.array(epe_list)
- out_list = np.concatenate(out_list)
-
- epe = np.mean(epe_list)
- f1 = 100 * np.mean(out_list)
-
- print("Validation KITTI: %f, %f" % (epe, f1))
- return {'kitti-epe': epe, 'kitti-f1': f1}
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--model', help="restore checkpoint")
- parser.add_argument('--dataset', help="dataset for evaluation")
- parser.add_argument('--small', action='store_true', help='use small model')
- parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision')
- parser.add_argument('--alternate_corr', action='store_true', help='use efficient correlation implementation')
- args = parser.parse_args()
-
- model = torch.nn.DataParallel(RAFT(args))
- model.load_state_dict(torch.load(args.model))
-
- model.cuda()
- model.eval()
-
- # create_sintel_submission(model.module, warm_start=True)
- # create_kitti_submission(model.module)
-
- with torch.no_grad():
- if args.dataset == 'chairs':
- validate_chairs(model.module)
-
- elif args.dataset == 'sintel':
- validate_sintel(model.module)
-
- elif args.dataset == 'kitti':
- validate_kitti(model.module)
-
-
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/symfont.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/symfont.py
deleted file mode 100644
index 0bd69a386ec9f01c8951f0dfc8bc8c261718cf1f..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/symfont.py
+++ /dev/null
@@ -1,251 +0,0 @@
-from fontTools.pens.basePen import BasePen
-from functools import partial
-from itertools import count
-import sympy as sp
-import sys
-
-n = 3 # Max Bezier degree; 3 for cubic, 2 for quadratic
-
-t, x, y = sp.symbols("t x y", real=True)
-c = sp.symbols("c", real=False) # Complex representation instead of x/y
-
-X = tuple(sp.symbols("x:%d" % (n + 1), real=True))
-Y = tuple(sp.symbols("y:%d" % (n + 1), real=True))
-P = tuple(zip(*(sp.symbols("p:%d[%s]" % (n + 1, w), real=True) for w in "01")))
-C = tuple(sp.symbols("c:%d" % (n + 1), real=False))
-
-# Cubic Bernstein basis functions
-BinomialCoefficient = [(1, 0)]
-for i in range(1, n + 1):
- last = BinomialCoefficient[-1]
- this = tuple(last[j - 1] + last[j] for j in range(len(last))) + (0,)
- BinomialCoefficient.append(this)
-BinomialCoefficient = tuple(tuple(item[:-1]) for item in BinomialCoefficient)
-del last, this
-
-BernsteinPolynomial = tuple(
- tuple(c * t**i * (1 - t) ** (n - i) for i, c in enumerate(coeffs))
- for n, coeffs in enumerate(BinomialCoefficient)
-)
-
-BezierCurve = tuple(
- tuple(
- sum(P[i][j] * bernstein for i, bernstein in enumerate(bernsteins))
- for j in range(2)
- )
- for n, bernsteins in enumerate(BernsteinPolynomial)
-)
-BezierCurveC = tuple(
- sum(C[i] * bernstein for i, bernstein in enumerate(bernsteins))
- for n, bernsteins in enumerate(BernsteinPolynomial)
-)
-
-
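- # Green's theorem: reduce the area integral of f over the enclosed region to a line integral along the Bezier segment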
-def green(f, curveXY):
- f = -sp.integrate(sp.sympify(f), y)
- f = f.subs({x: curveXY[0], y: curveXY[1]})
- f = sp.integrate(f * sp.diff(curveXY[0], t), (t, 0, 1))
- return f
-
-
-class _BezierFuncsLazy(dict):
- def __init__(self, symfunc):
- self._symfunc = symfunc
- self._bezfuncs = {}
-
- def __missing__(self, i):
- args = ["p%d" % d for d in range(i + 1)]
- f = green(self._symfunc, BezierCurve[i])
- f = sp.gcd_terms(f.collect(sum(P, ()))) # Optimize
- return sp.lambdify(args, f)
-
-
-class GreenPen(BasePen):
-
- _BezierFuncs = {}
-
- @classmethod
- def _getGreenBezierFuncs(celf, func):
- funcstr = str(func)
- if not funcstr in celf._BezierFuncs:
- celf._BezierFuncs[funcstr] = _BezierFuncsLazy(func)
- return celf._BezierFuncs[funcstr]
-
- def __init__(self, func, glyphset=None):
- BasePen.__init__(self, glyphset)
- self._funcs = self._getGreenBezierFuncs(func)
- self.value = 0
-
- def _moveTo(self, p0):
- self.__startPoint = p0
-
- def _closePath(self):
- p0 = self._getCurrentPoint()
- if p0 != self.__startPoint:
- self._lineTo(self.__startPoint)
-
- def _endPath(self):
- p0 = self._getCurrentPoint()
- if p0 != self.__startPoint:
- # Green theorem is not defined on open contours.
- raise NotImplementedError
-
- def _lineTo(self, p1):
- p0 = self._getCurrentPoint()
- self.value += self._funcs[1](p0, p1)
-
- def _qCurveToOne(self, p1, p2):
- p0 = self._getCurrentPoint()
- self.value += self._funcs[2](p0, p1, p2)
-
- def _curveToOne(self, p1, p2, p3):
- p0 = self._getCurrentPoint()
- self.value += self._funcs[3](p0, p1, p2, p3)
-
-
-# Sample pens.
-# Do not use this in real code.
-# Use fontTools.pens.momentsPen.MomentsPen instead.
-AreaPen = partial(GreenPen, func=1)
-MomentXPen = partial(GreenPen, func=x)
-MomentYPen = partial(GreenPen, func=y)
-MomentXXPen = partial(GreenPen, func=x * x)
-MomentYYPen = partial(GreenPen, func=y * y)
-MomentXYPen = partial(GreenPen, func=x * y)
-
-
-def printGreenPen(penName, funcs, file=sys.stdout, docstring=None):
-
- if docstring is not None:
- print('"""%s"""' % docstring)
-
- print(
- """from fontTools.pens.basePen import BasePen, OpenContourError
-try:
- import cython
-
- COMPILED = cython.compiled
-except (AttributeError, ImportError):
- # if cython not installed, use mock module with no-op decorators and types
- from fontTools.misc import cython
-
- COMPILED = False
-
-
-__all__ = ["%s"]
-
-class %s(BasePen):
-
- def __init__(self, glyphset=None):
- BasePen.__init__(self, glyphset)
-"""
- % (penName, penName),
- file=file,
- )
- for name, f in funcs:
- print(" self.%s = 0" % name, file=file)
- print(
- """
- def _moveTo(self, p0):
- self.__startPoint = p0
-
- def _closePath(self):
- p0 = self._getCurrentPoint()
- if p0 != self.__startPoint:
- self._lineTo(self.__startPoint)
-
- def _endPath(self):
- p0 = self._getCurrentPoint()
- if p0 != self.__startPoint:
- # Green theorem is not defined on open contours.
- raise OpenContourError(
- "Green theorem is not defined on open contours."
- )
-""",
- end="",
- file=file,
- )
-
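- # emit specialized _lineTo/_qCurveToOne/_curveToOne methods for linear, quadratic and cubic segments, with common subexpressions factored out via sympy.cse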
- for n in (1, 2, 3):
-
- subs = {P[i][j]: [X, Y][j][i] for i in range(n + 1) for j in range(2)}
- greens = [green(f, BezierCurve[n]) for name, f in funcs]
- greens = [sp.gcd_terms(f.collect(sum(P, ()))) for f in greens] # Optimize
- greens = [f.subs(subs) for f in greens] # Convert to p to x/y
- defs, exprs = sp.cse(
- greens,
- optimizations="basic",
- symbols=(sp.Symbol("r%d" % i) for i in count()),
- )
-
- print(file=file)
- for name, value in defs:
- print(" @cython.locals(%s=cython.double)" % name, file=file)
- if n == 1:
- print(
- """\
- @cython.locals(x0=cython.double, y0=cython.double)
- @cython.locals(x1=cython.double, y1=cython.double)
- def _lineTo(self, p1):
- x0,y0 = self._getCurrentPoint()
- x1,y1 = p1
-""",
- file=file,
- )
- elif n == 2:
- print(
- """\
- @cython.locals(x0=cython.double, y0=cython.double)
- @cython.locals(x1=cython.double, y1=cython.double)
- @cython.locals(x2=cython.double, y2=cython.double)
- def _qCurveToOne(self, p1, p2):
- x0,y0 = self._getCurrentPoint()
- x1,y1 = p1
- x2,y2 = p2
-""",
- file=file,
- )
- elif n == 3:
- print(
- """\
- @cython.locals(x0=cython.double, y0=cython.double)
- @cython.locals(x1=cython.double, y1=cython.double)
- @cython.locals(x2=cython.double, y2=cython.double)
- @cython.locals(x3=cython.double, y3=cython.double)
- def _curveToOne(self, p1, p2, p3):
- x0,y0 = self._getCurrentPoint()
- x1,y1 = p1
- x2,y2 = p2
- x3,y3 = p3
-""",
- file=file,
- )
- for name, value in defs:
- print(" %s = %s" % (name, value), file=file)
-
- print(file=file)
- for name, value in zip([f[0] for f in funcs], exprs):
- print(" self.%s += %s" % (name, value), file=file)
-
- print(
- """
-if __name__ == '__main__':
- from fontTools.misc.symfont import x, y, printGreenPen
- printGreenPen('%s', ["""
- % penName,
- file=file,
- )
- for name, f in funcs:
- print(" ('%s', %s)," % (name, str(f)), file=file)
- print(" ])", file=file)
-
-
-if __name__ == "__main__":
- pen = AreaPen()
- pen.moveTo((100, 100))
- pen.lineTo((100, 200))
- pen.lineTo((200, 200))
- pen.curveTo((200, 250), (300, 300), (250, 350))
- pen.lineTo((200, 100))
- pen.closePath()
- print(pen.value)
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py
deleted file mode 100644
index 7614af4c7b325c363c0b30edfc85a478aa15f01b..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py
+++ /dev/null
@@ -1,177 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import safeEval
-from .sbixGlyph import Glyph
-import struct
-
-sbixStrikeHeaderFormat = """
- >
- ppem: H # The PPEM for which this strike was designed (e.g., 9,
- # 12, 24)
- resolution: H # The screen resolution (in dpi) for which this strike
- # was designed (e.g., 72)
-"""
-
-sbixGlyphDataOffsetFormat = """
- >
- glyphDataOffset: L # Offset from the beginning of the strike data record
- # to data for the individual glyph
-"""
-
-sbixStrikeHeaderFormatSize = sstruct.calcsize(sbixStrikeHeaderFormat)
-sbixGlyphDataOffsetFormatSize = sstruct.calcsize(sbixGlyphDataOffsetFormat)
-
-
-class Strike(object):
- def __init__(self, rawdata=None, ppem=0, resolution=72):
- self.data = rawdata
- self.ppem = ppem
- self.resolution = resolution
- self.glyphs = {}
-
- def decompile(self, ttFont):
- if self.data is None:
- from fontTools import ttLib
-
- raise ttLib.TTLibError
- if len(self.data) < sbixStrikeHeaderFormatSize:
- from fontTools import ttLib
-
- raise ttLib.TTLibError(
- "Strike header too short: Expected %x, got %x."
- % (sbixStrikeHeaderFormatSize, len(self.data))
- )
-
- # read Strike header from raw data
- sstruct.unpack(
- sbixStrikeHeaderFormat, self.data[:sbixStrikeHeaderFormatSize], self
- )
-
- # calculate number of glyphs
- (firstGlyphDataOffset,) = struct.unpack(
- ">L",
- self.data[
- sbixStrikeHeaderFormatSize : sbixStrikeHeaderFormatSize
- + sbixGlyphDataOffsetFormatSize
- ],
- )
- self.numGlyphs = (
- firstGlyphDataOffset - sbixStrikeHeaderFormatSize
- ) // sbixGlyphDataOffsetFormatSize - 1
- # ^ -1 because there's one more offset than glyphs
-
- # build offset list for single glyph data offsets
- self.glyphDataOffsets = []
- for i in range(
- self.numGlyphs + 1
- ): # + 1 because there's one more offset than glyphs
- start = i * sbixGlyphDataOffsetFormatSize + sbixStrikeHeaderFormatSize
- (current_offset,) = struct.unpack(
- ">L", self.data[start : start + sbixGlyphDataOffsetFormatSize]
- )
- self.glyphDataOffsets.append(current_offset)
-
- # iterate through offset list and slice raw data into glyph data records
- for i in range(self.numGlyphs):
- current_glyph = Glyph(
- rawdata=self.data[
- self.glyphDataOffsets[i] : self.glyphDataOffsets[i + 1]
- ],
- gid=i,
- )
- current_glyph.decompile(ttFont)
- self.glyphs[current_glyph.glyphName] = current_glyph
- del self.glyphDataOffsets
- del self.numGlyphs
- del self.data
-
- def compile(self, ttFont):
- self.glyphDataOffsets = b""
- self.bitmapData = b""
-
- glyphOrder = ttFont.getGlyphOrder()
-
- # first glyph starts right after the header
- currentGlyphDataOffset = (
- sbixStrikeHeaderFormatSize
- + sbixGlyphDataOffsetFormatSize * (len(glyphOrder) + 1)
- )
- for glyphName in glyphOrder:
- if glyphName in self.glyphs:
- # we have glyph data for this glyph
- current_glyph = self.glyphs[glyphName]
- else:
- # must add empty glyph data record for this glyph
- current_glyph = Glyph(glyphName=glyphName)
- current_glyph.compile(ttFont)
- current_glyph.glyphDataOffset = currentGlyphDataOffset
- self.bitmapData += current_glyph.rawdata
- currentGlyphDataOffset += len(current_glyph.rawdata)
- self.glyphDataOffsets += sstruct.pack(
- sbixGlyphDataOffsetFormat, current_glyph
- )
-
- # add last "offset", really the end address of the last glyph data record
- dummy = Glyph()
- dummy.glyphDataOffset = currentGlyphDataOffset
- self.glyphDataOffsets += sstruct.pack(sbixGlyphDataOffsetFormat, dummy)
-
- # pack header
- self.data = sstruct.pack(sbixStrikeHeaderFormat, self)
- # add offsets and image data after header
- self.data += self.glyphDataOffsets + self.bitmapData
-
- def toXML(self, xmlWriter, ttFont):
- xmlWriter.begintag("strike")
- xmlWriter.newline()
- xmlWriter.simpletag("ppem", value=self.ppem)
- xmlWriter.newline()
- xmlWriter.simpletag("resolution", value=self.resolution)
- xmlWriter.newline()
- glyphOrder = ttFont.getGlyphOrder()
- for i in range(len(glyphOrder)):
- if glyphOrder[i] in self.glyphs:
- self.glyphs[glyphOrder[i]].toXML(xmlWriter, ttFont)
- # TODO: what if there are more glyph data records than (glyf table) glyphs?
- xmlWriter.endtag("strike")
- xmlWriter.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name in ["ppem", "resolution"]:
- setattr(self, name, safeEval(attrs["value"]))
- elif name == "glyph":
- if "graphicType" in attrs:
- myFormat = safeEval("'''" + attrs["graphicType"] + "'''")
- else:
- myFormat = None
- if "glyphname" in attrs:
- myGlyphName = safeEval("'''" + attrs["glyphname"] + "'''")
- elif "name" in attrs:
- myGlyphName = safeEval("'''" + attrs["name"] + "'''")
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("Glyph must have a glyph name.")
- if "originOffsetX" in attrs:
- myOffsetX = safeEval(attrs["originOffsetX"])
- else:
- myOffsetX = 0
- if "originOffsetY" in attrs:
- myOffsetY = safeEval(attrs["originOffsetY"])
- else:
- myOffsetY = 0
- current_glyph = Glyph(
- glyphName=myGlyphName,
- graphicType=myFormat,
- originOffsetX=myOffsetX,
- originOffsetY=myOffsetY,
- )
- for element in content:
- if isinstance(element, tuple):
- name, attrs, content = element
- current_glyph.fromXML(name, attrs, content, ttFont)
- current_glyph.compile(ttFont)
- self.glyphs[current_glyph.glyphName] = current_glyph
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("can't handle '%s' element" % name)
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/__init__.py
deleted file mode 100644
index f61112f8b20e11be3395d6f9265082ad762a7638..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/__init__.py
+++ /dev/null
@@ -1,138 +0,0 @@
-from .__version__ import __description__, __title__, __version__
-from ._api import delete, get, head, options, patch, post, put, request, stream
-from ._auth import Auth, BasicAuth, DigestAuth, NetRCAuth
-from ._client import USE_CLIENT_DEFAULT, AsyncClient, Client
-from ._config import Limits, Proxy, Timeout, create_ssl_context
-from ._content import ByteStream
-from ._exceptions import (
- CloseError,
- ConnectError,
- ConnectTimeout,
- CookieConflict,
- DecodingError,
- HTTPError,
- HTTPStatusError,
- InvalidURL,
- LocalProtocolError,
- NetworkError,
- PoolTimeout,
- ProtocolError,
- ProxyError,
- ReadError,
- ReadTimeout,
- RemoteProtocolError,
- RequestError,
- RequestNotRead,
- ResponseNotRead,
- StreamClosed,
- StreamConsumed,
- StreamError,
- TimeoutException,
- TooManyRedirects,
- TransportError,
- UnsupportedProtocol,
- WriteError,
- WriteTimeout,
-)
-from ._models import Cookies, Headers, Request, Response
-from ._status_codes import codes
-from ._transports.asgi import ASGITransport
-from ._transports.base import AsyncBaseTransport, BaseTransport
-from ._transports.default import AsyncHTTPTransport, HTTPTransport
-from ._transports.mock import MockTransport
-from ._transports.wsgi import WSGITransport
-from ._types import AsyncByteStream, SyncByteStream
-from ._urls import URL, QueryParams
-
-try:
- from ._main import main
-except ImportError: # pragma: no cover
-
- def main() -> None: # type: ignore
- import sys
-
- print(
- "The httpx command line client could not run because the required "
- "dependencies were not installed.\nMake sure you've installed "
- "everything with: pip install 'httpx[cli]'"
- )
- sys.exit(1)
-
-
-__all__ = [
- "__description__",
- "__title__",
- "__version__",
- "ASGITransport",
- "AsyncBaseTransport",
- "AsyncByteStream",
- "AsyncClient",
- "AsyncHTTPTransport",
- "Auth",
- "BaseTransport",
- "BasicAuth",
- "ByteStream",
- "Client",
- "CloseError",
- "codes",
- "ConnectError",
- "ConnectTimeout",
- "CookieConflict",
- "Cookies",
- "create_ssl_context",
- "DecodingError",
- "delete",
- "DigestAuth",
- "get",
- "head",
- "Headers",
- "HTTPError",
- "HTTPStatusError",
- "HTTPTransport",
- "InvalidURL",
- "Limits",
- "LocalProtocolError",
- "main",
- "MockTransport",
- "NetRCAuth",
- "NetworkError",
- "options",
- "patch",
- "PoolTimeout",
- "post",
- "ProtocolError",
- "Proxy",
- "ProxyError",
- "put",
- "QueryParams",
- "ReadError",
- "ReadTimeout",
- "RemoteProtocolError",
- "request",
- "Request",
- "RequestError",
- "RequestNotRead",
- "Response",
- "ResponseNotRead",
- "stream",
- "StreamClosed",
- "StreamConsumed",
- "StreamError",
- "SyncByteStream",
- "Timeout",
- "TimeoutException",
- "TooManyRedirects",
- "TransportError",
- "UnsupportedProtocol",
- "URL",
- "USE_CLIENT_DEFAULT",
- "WriteError",
- "WriteTimeout",
- "WSGITransport",
-]
-
-
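- # re-export: make every public name report "httpx" as its module for cleaner reprs and docs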
-__locals = locals()
-for __name in __all__:
- if not __name.startswith("__"):
- setattr(__locals[__name], "__module__", "httpx") # noqa
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/container.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/container.py
deleted file mode 100644
index a58e55ca196cee6ac1a365c08a0b4f61317a0509..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/container.py
+++ /dev/null
@@ -1,142 +0,0 @@
-from matplotlib import cbook
-from matplotlib.artist import Artist
-
-
-class Container(tuple):
- """
- Base class for containers.
-
- Containers are classes that collect semantically related Artists such as
- the bars of a bar plot.
- """
-
- def __repr__(self):
- return ("<{} object of {} artists>"
- .format(type(self).__name__, len(self)))
-
- def __new__(cls, *args, **kwargs):
- return tuple.__new__(cls, args[0])
-
- def __init__(self, kl, label=None):
- self._callbacks = cbook.CallbackRegistry(signals=["pchanged"])
- self._remove_method = None
- self.set_label(label)
-
- def remove(self):
- for c in cbook.flatten(
- self, scalarp=lambda x: isinstance(x, Artist)):
- if c is not None:
- c.remove()
- if self._remove_method:
- self._remove_method(self)
-
- def get_children(self):
- return [child for child in cbook.flatten(self) if child is not None]
-
- get_label = Artist.get_label
- set_label = Artist.set_label
- add_callback = Artist.add_callback
- remove_callback = Artist.remove_callback
- pchanged = Artist.pchanged
-
-
-class BarContainer(Container):
- """
- Container for the artists of bar plots (e.g. created by `.Axes.bar`).
-
- The container can be treated as a tuple of the *patches* themselves.
- Additionally, you can access these and further parameters by the
- attributes.
-
- Attributes
- ----------
- patches : list of :class:`~matplotlib.patches.Rectangle`
- The artists of the bars.
-
- errorbar : None or :class:`~matplotlib.container.ErrorbarContainer`
- A container for the error bar artists if error bars are present.
- *None* otherwise.
-
- datavalues : None or array-like
- The underlying data values corresponding to the bars.
-
- orientation : {'vertical', 'horizontal'}, default: None
- If 'vertical', the bars are assumed to be vertical.
- If 'horizontal', the bars are assumed to be horizontal.
-
- """
-
- def __init__(self, patches, errorbar=None, *, datavalues=None,
- orientation=None, **kwargs):
- self.patches = patches
- self.errorbar = errorbar
- self.datavalues = datavalues
- self.orientation = orientation
- super().__init__(patches, **kwargs)
-
-
-class ErrorbarContainer(Container):
- """
- Container for the artists of error bars (e.g. created by `.Axes.errorbar`).
-
- The container can be treated as the *lines* tuple itself.
- Additionally, you can access these and further parameters by the
- attributes.
-
- Attributes
- ----------
- lines : tuple
- Tuple of ``(data_line, caplines, barlinecols)``.
-
- - data_line : :class:`~matplotlib.lines.Line2D` instance of
- x, y plot markers and/or line.
- - caplines : tuple of :class:`~matplotlib.lines.Line2D` instances of
- the error bar caps.
- - barlinecols : list of :class:`~matplotlib.collections.LineCollection`
- with the horizontal and vertical error ranges.
-
- has_xerr, has_yerr : bool
- ``True`` if the errorbar has x/y errors.
-
- """
-
- def __init__(self, lines, has_xerr=False, has_yerr=False, **kwargs):
- self.lines = lines
- self.has_xerr = has_xerr
- self.has_yerr = has_yerr
- super().__init__(lines, **kwargs)
-
-
-class StemContainer(Container):
- """
- Container for the artists created in a :meth:`.Axes.stem` plot.
-
- The container can be treated like a namedtuple ``(markerline, stemlines,
- baseline)``.
-
- Attributes
- ----------
- markerline : :class:`~matplotlib.lines.Line2D`
- The artist of the markers at the stem heads.
-
- stemlines : list of :class:`~matplotlib.lines.Line2D`
- The artists of the vertical lines for all stems.
-
- baseline : :class:`~matplotlib.lines.Line2D`
- The artist of the horizontal baseline.
- """
- def __init__(self, markerline_stemlines_baseline, **kwargs):
- """
- Parameters
- ----------
- markerline_stemlines_baseline : tuple
- Tuple of ``(markerline, stemlines, baseline)``.
- ``markerline`` contains the `.LineCollection` of the markers,
- ``stemlines`` is a `.LineCollection` of the main lines,
- ``baseline`` is the `.Line2D` of the baseline.
- """
- markerline, stemlines, baseline = markerline_stemlines_baseline
- self.markerline = markerline
- self.stemlines = stemlines
- self.baseline = baseline
- super().__init__(markerline_stemlines_baseline, **kwargs)
diff --git a/spaces/kyauy/ClinFly/clinphen_src/get_phenotypes_lf.py b/spaces/kyauy/ClinFly/clinphen_src/get_phenotypes_lf.py
deleted file mode 100644
index 1b99fbe8d7473726f89d08a6f8fdd5ff718c88c6..0000000000000000000000000000000000000000
--- a/spaces/kyauy/ClinFly/clinphen_src/get_phenotypes_lf.py
+++ /dev/null
@@ -1,289 +0,0 @@
-from collections import defaultdict
-from nltk.stem import WordNetLemmatizer
-import pandas as pd
-import re
-
-HPO_SYN_MAP_FILE = "clinphen_src/data/hpo_synonym_filter.txt"
-
-def getNames():
- returnMap = {}
- for line in open("clinphen_src/data/hpo_term_names.txt"):
- lineData = line.strip().split("\t")
- returnMap[lineData[0]] = lineData[1]
- return returnMap
-
-point_enders = [".", u'•', '•', ";", "\t"]
-def end_of_point(word):
- #for char in point_enders:
- # if char in word: return True
- if word[-1] in point_enders: return True
- if word == "but": return True
- if word == "except": return True
- if word == "however": return True
- if word == "though": return True
- return False
-
-subpoint_enders = [":", ','] #","
-def end_of_subpoint(word):
- if word[-1] in subpoint_enders: return True
- if word == "and": return True
- return False
-
-def string_to_record_linewise(medical_record):
- return medical_record.split("\n")
-
-def load_medical_record_linewise(medical_record):
- recordFile = string_to_record_linewise(medical_record)
- sentences = []
- for line in recordFile:
- if ":" not in line: continue
- curSentence = []
- for word in line.strip().split(" "):
- word = word.lower()
- if len(word) < 1: continue
- curSentence.append(word)
- if end_of_point(word):
- sentences.append(" ".join(curSentence))
- curSentence = []
- if len(curSentence) > 0: sentences.append(" ".join(curSentence))
- subsentence_sets = []
- for sent in sentences:
- subsents = []
- curSubsent = []
- for word in sent.split(" "):
- word = word.lower()
- curSubsent.append(word)
- if end_of_subpoint(word):
- subsents.append(" ".join(curSubsent))
- curSubsent = []
- if len(curSubsent) > 0: subsents.append(" ".join(curSubsent))
- subsentence_sets.append(subsents)
- return subsentence_sets
-
-def string_to_record_nonlinewise(medical_record):
- listForm = []
- for line in medical_record.split("\n"):
- if len(line) < 1: continue
- listForm.append(line)
- return " ".join(listForm).split(" ")
-
-def load_medical_record_subsentences(medical_record):
- record = string_to_record_nonlinewise(medical_record)
- sentences = []
- curSentence = []
- for word in record:
- word = word.lower()
- if len(word) < 1: continue
- curSentence.append(word)
- if end_of_point(word):
- sentences.append(" ".join(curSentence))
- curSentence = []
- if len(curSentence) > 0: sentences.append(" ".join(curSentence))
- subsentence_sets = []
- for sent in sentences:
- subsents = []
- curSubsent = []
- for word in sent.split(" "):
- word = word.lower()
- curSubsent.append(word)
- if end_of_subpoint(word):
- subsents.append(" ".join(curSubsent))
- curSubsent = []
- if len(curSubsent) > 0: subsents.append(" ".join(curSubsent))
- subsentence_sets.append(subsents)
- return subsentence_sets + load_medical_record_linewise(medical_record)
-
-#Checks the given sentence for any flags from the lists you indicate.
-negative_flags = ["no", "not", "none", "negative", "non", "never", "without", "denies", "haven't", "don't", "doesn't", "haven t", "don t", "doesn t", 'didn t', 'doesn', 'don', 'haven', 'didn', 'absence', 'absent', 'absences']
-family_flags = [""," 0: lemmas.add(lemma)
- lemmas |= synonym_lemmas(word)
- lemmas |= custom_lemmas(word)
- return wordSet | lemmas
-
-
-def get_flags(line, *flagsets):
- line = add_lemmas(set(line))
- returnFlags = set()
- for flagset in flagsets:
- flagset = add_lemmas(set(flagset))
- for word in flagset:
- if word in line: returnFlags.add(word)
- return returnFlags
-
-def alphanum_only(wordSet):
- returnSet = set()
- for word in wordSet:
- #returnSet |= set(word_tokenize(re.sub('[^0-9a-zA-Z]+', ' ', word)))
- returnSet |= set(re.sub('[^0-9a-zA-Z]+', ' ', word).split(" "))
- return returnSet
-
-def load_mr_map(parsed_record):
- returnMap = defaultdict(set)
- for i in range(len(parsed_record)):
- line = set(parsed_record[i])
- for word in line: returnMap[word].add(i)
- return returnMap
-
-def load_all_hpo_synonyms(filename=HPO_SYN_MAP_FILE):
- returnMap = defaultdict(set)
- for line in open(filename):
- lineData = line.strip().split("\t")
- hpo = lineData[0]
- syn = lineData[1]
- returnMap[hpo].add(syn)
- return returnMap
-
-
-def sort_ids_by_occurrences_then_earliness(id_to_lines):
- listForm = []
- for hpoid in id_to_lines.keys(): listForm.append((hpoid, len(id_to_lines[hpoid]), min(id_to_lines[hpoid])))
- listForm.sort(key=lambda x: [-1*x[1], x[2], x[0]])
- returnList = list()
- for item in listForm: returnList.append(item[0])
- return returnList
-
-def extract_phenotypes(record, names, hpo_syn_file=HPO_SYN_MAP_FILE):
- safe_ID_to_lines = defaultdict(set)
- unsafe_ID_to_lines = defaultdict(set)
- medical_record = load_medical_record_subsentences(record)
- medical_record_subsentences = []
- medical_record_words = []
- medical_record_flags = []
- subsent_to_sentence = []
- for subsents in medical_record:
- whole_sentence = ""
- for subsent in subsents: whole_sentence += subsent + " "
- whole_sentence = whole_sentence.strip()
- whole_sentence = re.sub('[^0-9a-zA-Z]+', ' ', whole_sentence)
- flags = get_flags(whole_sentence.split(" "), negative_flags, family_flags, healthy_flags, disease_flags, treatment_flags, history_flags, uncertain_flags, mild_flags)
- for subsent in subsents:
- medical_record_subsentences.append(subsent)
- subsent_to_sentence.append(whole_sentence)
- medical_record_words.append(add_lemmas(alphanum_only(set([subsent]))))
- medical_record_flags.append(flags)
- #print(medical_record_subsentences)
- #print(subsent_to_sentence)
- #print(medical_record_words)
- #print(medical_record_flags)
-
- mr_map = load_mr_map(medical_record_words)
- #print(mr_map)
-
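- # for every HPO synonym, find subsentences containing all of its tokens, then keep hits whose sentence carries no negation/family/history/uncertainty flag (flags that are part of the synonym itself are ignored)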
- syns = load_all_hpo_synonyms(hpo_syn_file)
- for hpoID in syns.keys():
- for syn in syns[hpoID]:
- syn = re.sub('[^0-9a-zA-Z]+', ' ', syn.lower())
- synTokens = alphanum_only(set([syn]))
- if len(synTokens) < 1: continue
- firstToken = list(synTokens)[0]
- lines = set(mr_map[firstToken])
- for token in synTokens:
- lines &= set(mr_map[token])
- if len(lines) < 1: break
- if len(lines) < 1: continue
- for i in lines:
- line = " ".join(medical_record_words[i])
- flagged = False
- if i < 4:
- #print(lines)
- #print(i)
- safe_ID_to_lines[hpoID].add(i)
- elif "inherited" in line:
- safe_ID_to_lines[hpoID].add(i)
- else:
- for flag in medical_record_flags[i]:
- if flag not in synTokens:
- flagged = True
- unsafe_ID_to_lines[hpoID].add(i)
- break
- if flagged: continue
- safe_ID_to_lines[hpoID].add(i)
- safe_IDs = sort_ids_by_occurrences_then_earliness(safe_ID_to_lines)
- unsafe_IDs = sort_ids_by_occurrences_then_earliness(unsafe_ID_to_lines)
- returnString = ["HPO ID\tPhenotype name\tNo. occurrences\tEarliness (lower = earlier)\tExample sentence"]
- returnStringUnSafe = ["HPO ID\tPhenotype name\tNo. occurrences\tEarliness (lower = earlier)\tExample sentence"]
- for ID in safe_IDs: returnString.append("\t".join([ID, names[ID], str(len(safe_ID_to_lines[ID])), str(min(safe_ID_to_lines[ID])), subsent_to_sentence[safe_ID_to_lines[ID].pop()]]))
- for ID in unsafe_IDs: returnStringUnSafe.append("\t".join([ID, names[ID], str(len(unsafe_ID_to_lines[ID])), str(min(unsafe_ID_to_lines[ID])), subsent_to_sentence[unsafe_ID_to_lines[ID].pop()]]))
- return "\n".join(returnString), "\n".join(returnStringUnSafe)
-
-def get_dataframe_from_clinphen(returnString):
- returnList = []
- # skip the header row, then split each record line on tabs
- for element in returnString.split('\n')[1:]:
- returnList.append(element.split('\t'))
- if len(returnList) > 0:
- returnDf = pd.DataFrame(returnList)
- returnDf.columns = ['HPO ID', 'Phenotype name', 'No. occurrences', 'Earliness (lower = earlier)', 'Example sentence']
- else:
- returnDf = pd.DataFrame(columns=['HPO ID', 'Phenotype name', 'No. occurrences', 'Earliness (lower = earlier)', 'Example sentence'])
- return returnDf
diff --git a/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/app-controlnet.py b/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/app-controlnet.py
deleted file mode 100644
index 5c5534314d2ad069011aba99b7a34a51ec02559d..0000000000000000000000000000000000000000
--- a/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/app-controlnet.py
+++ /dev/null
@@ -1,322 +0,0 @@
-import asyncio
-import json
-import logging
-import traceback
-from pydantic import BaseModel
-
-from fastapi import FastAPI, WebSocket, HTTPException, WebSocketDisconnect
-from fastapi.middleware.cors import CORSMiddleware
-from fastapi.responses import (
- StreamingResponse,
- JSONResponse,
- HTMLResponse,
- FileResponse,
-)
-
-from diffusers import AutoencoderTiny, ControlNetModel
-from latent_consistency_controlnet import LatentConsistencyModelPipeline_controlnet
-from compel import Compel
-import torch
-
-from canny_gpu import SobelOperator
-
-# from controlnet_aux import OpenposeDetector
-# import cv2
-
-try:
- import intel_extension_for_pytorch as ipex
-except:
- pass
-from PIL import Image
-import numpy as np
-import gradio as gr
-import io
-import uuid
-import os
-import time
-import psutil
-
-
-MAX_QUEUE_SIZE = int(os.environ.get("MAX_QUEUE_SIZE", 0))
-TIMEOUT = float(os.environ.get("TIMEOUT", 0))
-SAFETY_CHECKER = os.environ.get("SAFETY_CHECKER", None)
-TORCH_COMPILE = os.environ.get("TORCH_COMPILE", None)
-WIDTH = 512
-HEIGHT = 512
-# the tiny autoencoder trades decode quality for speed; set to False to use the full VAE
-USE_TINY_AUTOENCODER = True
-
-# check if MPS is available OSX only M1/M2/M3 chips
-mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
-xpu_available = hasattr(torch, "xpu") and torch.xpu.is_available()
-device = torch.device(
- "cuda" if torch.cuda.is_available() else "xpu" if xpu_available else "cpu"
-)
-
-# torch.float16 saves GPU memory; switch to torch.float32 for full precision
-torch_dtype = torch.float16
-
-print(f"TIMEOUT: {TIMEOUT}")
-print(f"SAFETY_CHECKER: {SAFETY_CHECKER}")
-print(f"MAX_QUEUE_SIZE: {MAX_QUEUE_SIZE}")
-print(f"device: {device}")
-
-if mps_available:
- device = torch.device("mps")
- device = "cpu"
- torch_dtype = torch.float32
-
-controlnet_canny = ControlNetModel.from_pretrained(
- "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch_dtype
-).to(device)
-
-canny_torch = SobelOperator(device=device)
-# controlnet_pose = ControlNetModel.from_pretrained(
-# "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch_dtype
-# ).to(device)
-# controlnet_depth = ControlNetModel.from_pretrained(
-# "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch_dtype
-# ).to(device)
-
-
-# pose_processor = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
-
-if SAFETY_CHECKER == "True":
- pipe = LatentConsistencyModelPipeline_controlnet.from_pretrained(
- "SimianLuo/LCM_Dreamshaper_v7",
- controlnet=controlnet_canny,
- scheduler=None,
- )
-else:
- pipe = LatentConsistencyModelPipeline_controlnet.from_pretrained(
- "SimianLuo/LCM_Dreamshaper_v7",
- safety_checker=None,
- controlnet=controlnet_canny,
- scheduler=None,
- )
-
-if USE_TINY_AUTOENCODER:
- pipe.vae = AutoencoderTiny.from_pretrained(
- "madebyollin/taesd", torch_dtype=torch_dtype, use_safetensors=True
- )
-pipe.set_progress_bar_config(disable=True)
-pipe.to(device=device, dtype=torch_dtype).to(device)
-pipe.unet.to(memory_format=torch.channels_last)
-
-if psutil.virtual_memory().total < 64 * 1024**3:
- pipe.enable_attention_slicing()
-
-compel_proc = Compel(
- tokenizer=pipe.tokenizer,
- text_encoder=pipe.text_encoder,
- truncate_long_prompts=False,
-)
-if TORCH_COMPILE:
- pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
- pipe.vae = torch.compile(pipe.vae, mode="reduce-overhead", fullgraph=True)
-
- pipe(
- prompt="warmup",
- image=[Image.new("RGB", (768, 768))],
- control_image=[Image.new("RGB", (768, 768))],
- )
-
-
-user_queue_map = {}
-
-
-class InputParams(BaseModel):
- seed: int = 2159232
- prompt: str
- guidance_scale: float = 8.0
- strength: float = 0.5
- steps: int = 4
- lcm_steps: int = 50
- width: int = WIDTH
- height: int = HEIGHT
- controlnet_scale: float = 0.8
- controlnet_start: float = 0.0
- controlnet_end: float = 1.0
- canny_low_threshold: float = 0.31
- canny_high_threshold: float = 0.78
- debug_canny: bool = False
-
-
-def predict(
- input_image: Image.Image, params: InputParams, prompt_embeds: torch.Tensor = None
-):
- generator = torch.manual_seed(params.seed)
-
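- # build the conditioning edge map on-device with the SobelOperator from canny_gpu, using the user-set thresholds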
- control_image = canny_torch(
- input_image, params.canny_low_threshold, params.canny_high_threshold
- )
- results = pipe(
- control_image=control_image,
- prompt_embeds=prompt_embeds,
- generator=generator,
- image=input_image,
- strength=params.strength,
- num_inference_steps=params.steps,
- guidance_scale=params.guidance_scale,
- width=params.width,
- height=params.height,
- lcm_origin_steps=params.lcm_steps,
- output_type="pil",
- controlnet_conditioning_scale=params.controlnet_scale,
- control_guidance_start=params.controlnet_start,
- control_guidance_end=params.controlnet_end,
- )
- nsfw_content_detected = (
- results.nsfw_content_detected[0]
- if "nsfw_content_detected" in results
- else False
- )
- if nsfw_content_detected:
- return None
- result_image = results.images[0]
- if params.debug_canny:
- # paste control_image on top of result_image
- w0, h0 = (200, 200)
- control_image = control_image.resize((w0, h0))
- w1, h1 = result_image.size
- result_image.paste(control_image, (w1 - w0, h1 - h0))
-
- return result_image
-
-
-app = FastAPI()
-app.add_middleware(
- CORSMiddleware,
- allow_origins=["*"],
- allow_credentials=True,
- allow_methods=["*"],
- allow_headers=["*"],
-)
-
-
-@app.websocket("/ws")
-async def websocket_endpoint(websocket: WebSocket):
- await websocket.accept()
- if MAX_QUEUE_SIZE > 0 and len(user_queue_map) >= MAX_QUEUE_SIZE:
- print("Server is full")
- await websocket.send_json({"status": "error", "message": "Server is full"})
- await websocket.close()
- return
-
- try:
- uid = str(uuid.uuid4())
- print(f"New user connected: {uid}")
- await websocket.send_json(
- {"status": "success", "message": "Connected", "userId": uid}
- )
- user_queue_map[uid] = {"queue": asyncio.Queue()}
- await websocket.send_json(
- {"status": "start", "message": "Start Streaming", "userId": uid}
- )
- await handle_websocket_data(websocket, uid)
- except WebSocketDisconnect as e:
- logging.error(f"WebSocket Error: {e}, {uid}")
- traceback.print_exc()
- finally:
- print(f"User disconnected: {uid}")
- queue_value = user_queue_map.pop(uid, None)
- queue = queue_value.get("queue", None) if queue_value else None
- if queue:
- while not queue.empty():
- try:
- queue.get_nowait()
- except asyncio.QueueEmpty:
- continue
-
-
-@app.get("/queue_size")
-async def get_queue_size():
- queue_size = len(user_queue_map)
- return JSONResponse({"queue_size": queue_size})
-
-
-@app.get("/stream/{user_id}")
-async def stream(user_id: uuid.UUID):
- uid = str(user_id)
- try:
- user_queue = user_queue_map[uid]
- queue = user_queue["queue"]
-
- async def generate():
- last_prompt: str = None
- prompt_embeds: torch.Tensor = None
- while True:
- data = await queue.get()
- input_image = data["image"]
- params = data["params"]
- if input_image is None:
- continue
- # avoid recalculating prompt embeds when the prompt has not changed
- if last_prompt != params.prompt:
- print("new prompt")
- prompt_embeds = compel_proc(params.prompt)
- last_prompt = params.prompt
-
- image = predict(
- input_image,
- params,
- prompt_embeds,
- )
- if image is None:
- continue
- frame_data = io.BytesIO()
- image.save(frame_data, format="JPEG")
- frame_data = frame_data.getvalue()
- if frame_data is not None and len(frame_data) > 0:
- yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame_data + b"\r\n"
-
- await asyncio.sleep(1.0 / 120.0)
-
- return StreamingResponse(
- generate(), media_type="multipart/x-mixed-replace;boundary=frame"
- )
- except Exception as e:
- logging.error(f"Streaming Error: {e}, {user_queue_map}")
- traceback.print_exc()
- return HTTPException(status_code=404, detail="User not found")
-
-
-async def handle_websocket_data(websocket: WebSocket, user_id: uuid.UUID):
- uid = str(user_id)
- user_queue = user_queue_map[uid]
- queue = user_queue["queue"]
- if not queue:
- return HTTPException(status_code=404, detail="User not found")
- last_time = time.time()
- try:
- while True:
- data = await websocket.receive_bytes()
- params = await websocket.receive_json()
- params = InputParams(**params)
- pil_image = Image.open(io.BytesIO(data))
-
- while not queue.empty():
- try:
- queue.get_nowait()
- except asyncio.QueueEmpty:
- continue
- await queue.put({"image": pil_image, "params": params})
- if TIMEOUT > 0 and time.time() - last_time > TIMEOUT:
- await websocket.send_json(
- {
- "status": "timeout",
- "message": "Your session has ended",
- "userId": uid,
- }
- )
- await websocket.close()
- return
-
- except Exception as e:
- logging.error(f"Error: {e}")
- traceback.print_exc()
-
-
-@app.get("/", response_class=HTMLResponse)
-async def root():
- return FileResponse("./static/controlnet.html")
diff --git a/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/app-img2img.py b/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/app-img2img.py
deleted file mode 100644
index 56b71f20935909d5e0f7fb8b109d2a76f72e338c..0000000000000000000000000000000000000000
--- a/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5/app-img2img.py
+++ /dev/null
@@ -1,271 +0,0 @@
-import asyncio
-import json
-import logging
-import traceback
-from pydantic import BaseModel
-
-from fastapi import FastAPI, WebSocket, HTTPException, WebSocketDisconnect
-from fastapi.middleware.cors import CORSMiddleware
-from fastapi.responses import (
- StreamingResponse,
- JSONResponse,
- HTMLResponse,
- FileResponse,
-)
-
-from diffusers import AutoPipelineForImage2Image, AutoencoderTiny
-from compel import Compel
-import torch
-
-try:
- import intel_extension_for_pytorch as ipex
-except:
- pass
-from PIL import Image
-import numpy as np
-import gradio as gr
-import io
-import uuid
-import os
-import time
-import psutil
-
-MAX_QUEUE_SIZE = int(os.environ.get("MAX_QUEUE_SIZE", 0))
-TIMEOUT = float(os.environ.get("TIMEOUT", 0))
-SAFETY_CHECKER = os.environ.get("SAFETY_CHECKER", None)
-TORCH_COMPILE = os.environ.get("TORCH_COMPILE", None)
-
-WIDTH = 512
-HEIGHT = 512
-# the tiny autoencoder trades decode quality for speed; set to False to use the full VAE
-USE_TINY_AUTOENCODER = True
-
-# check if MPS is available OSX only M1/M2/M3 chips
-mps_available = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
-xpu_available = hasattr(torch, "xpu") and torch.xpu.is_available()
-device = torch.device(
- "cuda" if torch.cuda.is_available() else "xpu" if xpu_available else "cpu"
-)
-torch_device = device
-
-# change to torch.float16 to save GPU memory
-torch_dtype = torch.float32
-
-print(f"TIMEOUT: {TIMEOUT}")
-print(f"SAFETY_CHECKER: {SAFETY_CHECKER}")
-print(f"MAX_QUEUE_SIZE: {MAX_QUEUE_SIZE}")
-print(f"device: {device}")
-
-if mps_available:
- device = torch.device("mps")
- torch_device = "cpu"
- torch_dtype = torch.float32
-
-if SAFETY_CHECKER == "True":
- pipe = AutoPipelineForImage2Image.from_pretrained(
- "SimianLuo/LCM_Dreamshaper_v7",
- )
-else:
- pipe = AutoPipelineForImage2Image.from_pretrained(
- "SimianLuo/LCM_Dreamshaper_v7",
- safety_checker=None,
- )
-
-if USE_TINY_AUTOENCODER:
- pipe.vae = AutoencoderTiny.from_pretrained(
- "madebyollin/taesd", torch_dtype=torch_dtype, use_safetensors=True
- )
-pipe.set_progress_bar_config(disable=True)
-pipe.to(device=torch_device, dtype=torch_dtype).to(device)
-pipe.unet.to(memory_format=torch.channels_last)
-
-if psutil.virtual_memory().total < 64 * 1024**3:
- pipe.enable_attention_slicing()
-
-if TORCH_COMPILE:
- pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
- pipe.vae = torch.compile(pipe.vae, mode="reduce-overhead", fullgraph=True)
-
- pipe(prompt="warmup", image=[Image.new("RGB", (512, 512))])
-
-compel_proc = Compel(
- tokenizer=pipe.tokenizer,
- text_encoder=pipe.text_encoder,
- truncate_long_prompts=False,
-)
-user_queue_map = {}
-
-
-class InputParams(BaseModel):
- seed: int = 2159232
- prompt: str
- guidance_scale: float = 8.0
- strength: float = 0.5
- steps: int = 4
- lcm_steps: int = 50
- width: int = WIDTH
- height: int = HEIGHT
-
-
-def predict(
- input_image: Image.Image, params: InputParams, prompt_embeds: torch.Tensor = None
-):
- generator = torch.manual_seed(params.seed)
- results = pipe(
- prompt_embeds=prompt_embeds,
- generator=generator,
- image=input_image,
- strength=params.strength,
- num_inference_steps=params.steps,
- guidance_scale=params.guidance_scale,
- width=params.width,
- height=params.height,
- original_inference_steps=params.lcm_steps,
- output_type="pil",
- )
- nsfw_content_detected = (
- results.nsfw_content_detected[0]
- if "nsfw_content_detected" in results
- else False
- )
- if nsfw_content_detected:
- return None
- return results.images[0]
-
-
-app = FastAPI()
-app.add_middleware(
- CORSMiddleware,
- allow_origins=["*"],
- allow_credentials=True,
- allow_methods=["*"],
- allow_headers=["*"],
-)
-
-
-@app.websocket("/ws")
-async def websocket_endpoint(websocket: WebSocket):
- await websocket.accept()
- if MAX_QUEUE_SIZE > 0 and len(user_queue_map) >= MAX_QUEUE_SIZE:
- print("Server is full")
- await websocket.send_json({"status": "error", "message": "Server is full"})
- await websocket.close()
- return
-
- try:
- uid = str(uuid.uuid4())
- print(f"New user connected: {uid}")
- await websocket.send_json(
- {"status": "success", "message": "Connected", "userId": uid}
- )
- user_queue_map[uid] = {"queue": asyncio.Queue()}
- await websocket.send_json(
- {"status": "start", "message": "Start Streaming", "userId": uid}
- )
- await handle_websocket_data(websocket, uid)
- except WebSocketDisconnect as e:
- logging.error(f"WebSocket Error: {e}, {uid}")
- traceback.print_exc()
- finally:
- print(f"User disconnected: {uid}")
-        queue_value = user_queue_map.pop(uid, None)
-        queue = queue_value.get("queue", None) if queue_value else None
-        if queue:
- while not queue.empty():
- try:
- queue.get_nowait()
- except asyncio.QueueEmpty:
- continue
-
-
-@app.get("/queue_size")
-async def get_queue_size():
- queue_size = len(user_queue_map)
- return JSONResponse({"queue_size": queue_size})
-
-
-@app.get("/stream/{user_id}")
-async def stream(user_id: uuid.UUID):
- uid = str(user_id)
- try:
- user_queue = user_queue_map[uid]
- queue = user_queue["queue"]
-
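-        # pull the latest frame + params from this user's queue, run the pipeline, and
-        # yield the result as JPEG frames over a multipart/x-mixed-replace (MJPEG) stream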
- async def generate():
- last_prompt: str = None
- prompt_embeds: torch.Tensor = None
- while True:
- data = await queue.get()
- input_image = data["image"]
- params = data["params"]
- if input_image is None:
- continue
-            # avoid recalculating prompt embeds when the prompt has not changed
- if last_prompt != params.prompt:
- print("new prompt")
- prompt_embeds = compel_proc(params.prompt)
- last_prompt = params.prompt
-
- image = predict(
- input_image,
- params,
- prompt_embeds,
- )
- if image is None:
- continue
- frame_data = io.BytesIO()
- image.save(frame_data, format="JPEG")
- frame_data = frame_data.getvalue()
- if frame_data is not None and len(frame_data) > 0:
- yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame_data + b"\r\n"
-
- await asyncio.sleep(1.0 / 120.0)
-
- return StreamingResponse(
- generate(), media_type="multipart/x-mixed-replace;boundary=frame"
- )
- except Exception as e:
- logging.error(f"Streaming Error: {e}, {user_queue_map}")
- traceback.print_exc()
-        raise HTTPException(status_code=404, detail="User not found")
-
-
-async def handle_websocket_data(websocket: WebSocket, user_id: uuid.UUID):
- uid = str(user_id)
- user_queue = user_queue_map[uid]
- queue = user_queue["queue"]
- if not queue:
- return HTTPException(status_code=404, detail="User not found")
- last_time = time.time()
- try:
- while True:
- data = await websocket.receive_bytes()
- params = await websocket.receive_json()
- params = InputParams(**params)
- pil_image = Image.open(io.BytesIO(data))
-
- while not queue.empty():
- try:
- queue.get_nowait()
- except asyncio.QueueEmpty:
- continue
- await queue.put({"image": pil_image, "params": params})
- if TIMEOUT > 0 and time.time() - last_time > TIMEOUT:
- await websocket.send_json(
- {
- "status": "timeout",
- "message": "Your session has ended",
- "userId": uid,
- }
- )
- await websocket.close()
- return
-
- except Exception as e:
- logging.error(f"Error: {e}")
- traceback.print_exc()
-
-
-@app.get("/", response_class=HTMLResponse)
-async def root():
- return FileResponse("./static/img2img.html")
diff --git a/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/align_trans.py b/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/align_trans.py
deleted file mode 100644
index 07f1eb365462c2ec5bbac6d1854c786b6fd6be90..0000000000000000000000000000000000000000
--- a/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/align_trans.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import cv2
-import numpy as np
-
-from .matlab_cp2tform import get_similarity_transform_for_cv2
-
-# reference facial points, a list of coordinates (x,y)
-REFERENCE_FACIAL_POINTS = [[30.29459953, 51.69630051], [65.53179932, 51.50139999], [48.02519989, 71.73660278],
- [33.54930115, 92.3655014], [62.72990036, 92.20410156]]
-
-DEFAULT_CROP_SIZE = (96, 112)
-
-
-class FaceWarpException(Exception):
-
- def __str__(self):
-        return 'In File {}:{}'.format(__file__, super().__str__())
-
-
-def get_reference_facial_points(output_size=None, inner_padding_factor=0.0, outer_padding=(0, 0), default_square=False):
- """
- Function:
- ----------
- get reference 5 key points according to crop settings:
- 0. Set default crop_size:
- if default_square:
- crop_size = (112, 112)
- else:
- crop_size = (96, 112)
- 1. Pad the crop_size by inner_padding_factor in each side;
- 2. Resize crop_size into (output_size - outer_padding*2),
- pad into output_size with outer_padding;
- 3. Output reference_5point;
- Parameters:
- ----------
- @output_size: (w, h) or None
- size of aligned face image
- @inner_padding_factor: (w_factor, h_factor)
- padding factor for inner (w, h)
- @outer_padding: (w_pad, h_pad)
- each row is a pair of coordinates (x, y)
- @default_square: True or False
- if True:
- default crop_size = (112, 112)
- else:
- default crop_size = (96, 112);
- !!! make sure, if output_size is not None:
- (output_size - outer_padding)
- = some_scale * (default crop_size * (1.0 +
- inner_padding_factor))
- Returns:
- ----------
- @reference_5point: 5x2 np.array
- each row is a pair of transformed coordinates (x, y)
- """
-
- tmp_5pts = np.array(REFERENCE_FACIAL_POINTS)
- tmp_crop_size = np.array(DEFAULT_CROP_SIZE)
-
- # 0) make the inner region a square
- if default_square:
- size_diff = max(tmp_crop_size) - tmp_crop_size
- tmp_5pts += size_diff / 2
- tmp_crop_size += size_diff
-
- if (output_size and output_size[0] == tmp_crop_size[0] and output_size[1] == tmp_crop_size[1]):
-
- return tmp_5pts
-
- if (inner_padding_factor == 0 and outer_padding == (0, 0)):
- if output_size is None:
- return tmp_5pts
- else:
- raise FaceWarpException('No paddings to do, output_size must be None or {}'.format(tmp_crop_size))
-
- # check output size
- if not (0 <= inner_padding_factor <= 1.0):
- raise FaceWarpException('Not (0 <= inner_padding_factor <= 1.0)')
-
- if ((inner_padding_factor > 0 or outer_padding[0] > 0 or outer_padding[1] > 0) and output_size is None):
-        output_size = (tmp_crop_size *
-                       (1 + inner_padding_factor * 2)).astype(np.int32)
- output_size += np.array(outer_padding)
- if not (outer_padding[0] < output_size[0] and outer_padding[1] < output_size[1]):
- raise FaceWarpException('Not (outer_padding[0] < output_size[0] and outer_padding[1] < output_size[1])')
-
-    # 1) pad the inner region according to inner_padding_factor
- if inner_padding_factor > 0:
- size_diff = tmp_crop_size * inner_padding_factor * 2
- tmp_5pts += size_diff / 2
- tmp_crop_size += np.round(size_diff).astype(np.int32)
-
- # 2) resize the padded inner region
- size_bf_outer_pad = np.array(output_size) - np.array(outer_padding) * 2
-
- if size_bf_outer_pad[0] * tmp_crop_size[1] != size_bf_outer_pad[1] * tmp_crop_size[0]:
- raise FaceWarpException('Must have (output_size - outer_padding)'
- '= some_scale * (crop_size * (1.0 + inner_padding_factor)')
-
- scale_factor = size_bf_outer_pad[0].astype(np.float32) / tmp_crop_size[0]
- tmp_5pts = tmp_5pts * scale_factor
- # size_diff = tmp_crop_size * (scale_factor - min(scale_factor))
- # tmp_5pts = tmp_5pts + size_diff / 2
- tmp_crop_size = size_bf_outer_pad
-
- # 3) add outer_padding to make output_size
- reference_5point = tmp_5pts + np.array(outer_padding)
- tmp_crop_size = output_size
-
- return reference_5point
-
-
-def get_affine_transform_matrix(src_pts, dst_pts):
- """
- Function:
- ----------
- get affine transform matrix 'tfm' from src_pts to dst_pts
- Parameters:
- ----------
- @src_pts: Kx2 np.array
- source points matrix, each row is a pair of coordinates (x, y)
- @dst_pts: Kx2 np.array
- destination points matrix, each row is a pair of coordinates (x, y)
- Returns:
- ----------
- @tfm: 2x3 np.array
- transform matrix from src_pts to dst_pts
- """
-
- tfm = np.float32([[1, 0, 0], [0, 1, 0]])
- n_pts = src_pts.shape[0]
- ones = np.ones((n_pts, 1), src_pts.dtype)
- src_pts_ = np.hstack([src_pts, ones])
- dst_pts_ = np.hstack([dst_pts, ones])
-
- A, res, rank, s = np.linalg.lstsq(src_pts_, dst_pts_)
-
- if rank == 3:
- tfm = np.float32([[A[0, 0], A[1, 0], A[2, 0]], [A[0, 1], A[1, 1], A[2, 1]]])
- elif rank == 2:
- tfm = np.float32([[A[0, 0], A[1, 0], 0], [A[0, 1], A[1, 1], 0]])
-
- return tfm
-
-
-def warp_and_crop_face(src_img, facial_pts, reference_pts=None, crop_size=(96, 112), align_type='similarity'):
- """
- Function:
- ----------
- apply affine transform 'trans' to uv
- Parameters:
- ----------
-    @src_img: HxWxC np.array
- input image
- @facial_pts: could be
- 1)a list of K coordinates (x,y)
- or
- 2) Kx2 or 2xK np.array
- each row or col is a pair of coordinates (x, y)
- @reference_pts: could be
- 1) a list of K coordinates (x,y)
- or
- 2) Kx2 or 2xK np.array
- each row or col is a pair of coordinates (x, y)
- or
- 3) None
- if None, use default reference facial points
- @crop_size: (w, h)
- output face image size
- @align_type: transform type, could be one of
- 1) 'similarity': use similarity transform
- 2) 'cv2_affine': use the first 3 points to do affine transform,
- by calling cv2.getAffineTransform()
- 3) 'affine': use all points to do affine transform
- Returns:
- ----------
- @face_img: output face image with size (w, h) = @crop_size
- """
-
- if reference_pts is None:
- if crop_size[0] == 96 and crop_size[1] == 112:
- reference_pts = REFERENCE_FACIAL_POINTS
- else:
- default_square = False
- inner_padding_factor = 0
- outer_padding = (0, 0)
- output_size = crop_size
-
- reference_pts = get_reference_facial_points(output_size, inner_padding_factor, outer_padding,
- default_square)
-
- ref_pts = np.float32(reference_pts)
- ref_pts_shp = ref_pts.shape
- if max(ref_pts_shp) < 3 or min(ref_pts_shp) != 2:
- raise FaceWarpException('reference_pts.shape must be (K,2) or (2,K) and K>2')
-
- if ref_pts_shp[0] == 2:
- ref_pts = ref_pts.T
-
- src_pts = np.float32(facial_pts)
- src_pts_shp = src_pts.shape
- if max(src_pts_shp) < 3 or min(src_pts_shp) != 2:
- raise FaceWarpException('facial_pts.shape must be (K,2) or (2,K) and K>2')
-
- if src_pts_shp[0] == 2:
- src_pts = src_pts.T
-
- if src_pts.shape != ref_pts.shape:
- raise FaceWarpException('facial_pts and reference_pts must have the same shape')
-
- if align_type == 'cv2_affine':
- tfm = cv2.getAffineTransform(src_pts[0:3], ref_pts[0:3])
- elif align_type == 'affine':
- tfm = get_affine_transform_matrix(src_pts, ref_pts)
- else:
- tfm = get_similarity_transform_for_cv2(src_pts, ref_pts)
-
- face_img = cv2.warpAffine(src_img, tfm, (crop_size[0], crop_size[1]))
-
- return face_img
diff --git a/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py b/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py
deleted file mode 100644
index 02593556d88a90232bbe55a062875f4af4520621..0000000000000000000000000000000000000000
--- a/spaces/leafShen/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py
+++ /dev/null
@@ -1,370 +0,0 @@
-import cv2
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from PIL import Image
-from torchvision.models._utils import IntermediateLayerGetter as IntermediateLayerGetter
-
-from facelib.detection.align_trans import get_reference_facial_points, warp_and_crop_face
-from facelib.detection.retinaface.retinaface_net import FPN, SSH, MobileNetV1, make_bbox_head, make_class_head, make_landmark_head
-from facelib.detection.retinaface.retinaface_utils import (PriorBox, batched_decode, batched_decode_landm, decode, decode_landm,
- py_cpu_nms)
-
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
-
-def generate_config(network_name):
-
- cfg_mnet = {
- 'name': 'mobilenet0.25',
- 'min_sizes': [[16, 32], [64, 128], [256, 512]],
- 'steps': [8, 16, 32],
- 'variance': [0.1, 0.2],
- 'clip': False,
- 'loc_weight': 2.0,
- 'gpu_train': True,
- 'batch_size': 32,
- 'ngpu': 1,
- 'epoch': 250,
- 'decay1': 190,
- 'decay2': 220,
- 'image_size': 640,
- 'return_layers': {
- 'stage1': 1,
- 'stage2': 2,
- 'stage3': 3
- },
- 'in_channel': 32,
- 'out_channel': 64
- }
-
- cfg_re50 = {
- 'name': 'Resnet50',
- 'min_sizes': [[16, 32], [64, 128], [256, 512]],
- 'steps': [8, 16, 32],
- 'variance': [0.1, 0.2],
- 'clip': False,
- 'loc_weight': 2.0,
- 'gpu_train': True,
- 'batch_size': 24,
- 'ngpu': 4,
- 'epoch': 100,
- 'decay1': 70,
- 'decay2': 90,
- 'image_size': 840,
- 'return_layers': {
- 'layer2': 1,
- 'layer3': 2,
- 'layer4': 3
- },
- 'in_channel': 256,
- 'out_channel': 256
- }
-
- if network_name == 'mobile0.25':
- return cfg_mnet
- elif network_name == 'resnet50':
- return cfg_re50
- else:
- raise NotImplementedError(f'network_name={network_name}')
-
-
-class RetinaFace(nn.Module):
-
- def __init__(self, network_name='resnet50', half=False, phase='test'):
- super(RetinaFace, self).__init__()
- self.half_inference = half
- cfg = generate_config(network_name)
- self.backbone = cfg['name']
-
- self.model_name = f'retinaface_{network_name}'
- self.cfg = cfg
- self.phase = phase
- self.target_size, self.max_size = 1600, 2150
- self.resize, self.scale, self.scale1 = 1., None, None
- self.mean_tensor = torch.tensor([[[[104.]], [[117.]], [[123.]]]]).to(device)
- self.reference = get_reference_facial_points(default_square=True)
- # Build network.
- backbone = None
- if cfg['name'] == 'mobilenet0.25':
- backbone = MobileNetV1()
- self.body = IntermediateLayerGetter(backbone, cfg['return_layers'])
- elif cfg['name'] == 'Resnet50':
- import torchvision.models as models
- backbone = models.resnet50(pretrained=False)
- self.body = IntermediateLayerGetter(backbone, cfg['return_layers'])
-
- in_channels_stage2 = cfg['in_channel']
- in_channels_list = [
- in_channels_stage2 * 2,
- in_channels_stage2 * 4,
- in_channels_stage2 * 8,
- ]
-
- out_channels = cfg['out_channel']
- self.fpn = FPN(in_channels_list, out_channels)
- self.ssh1 = SSH(out_channels, out_channels)
- self.ssh2 = SSH(out_channels, out_channels)
- self.ssh3 = SSH(out_channels, out_channels)
-
- self.ClassHead = make_class_head(fpn_num=3, inchannels=cfg['out_channel'])
- self.BboxHead = make_bbox_head(fpn_num=3, inchannels=cfg['out_channel'])
- self.LandmarkHead = make_landmark_head(fpn_num=3, inchannels=cfg['out_channel'])
-
- self.to(device)
- self.eval()
- if self.half_inference:
- self.half()
-
- def forward(self, inputs):
- out = self.body(inputs)
-
- if self.backbone == 'mobilenet0.25' or self.backbone == 'Resnet50':
- out = list(out.values())
- # FPN
- fpn = self.fpn(out)
-
- # SSH
- feature1 = self.ssh1(fpn[0])
- feature2 = self.ssh2(fpn[1])
- feature3 = self.ssh3(fpn[2])
- features = [feature1, feature2, feature3]
-
- bbox_regressions = torch.cat([self.BboxHead[i](feature) for i, feature in enumerate(features)], dim=1)
- classifications = torch.cat([self.ClassHead[i](feature) for i, feature in enumerate(features)], dim=1)
- tmp = [self.LandmarkHead[i](feature) for i, feature in enumerate(features)]
- ldm_regressions = (torch.cat(tmp, dim=1))
-
- if self.phase == 'train':
- output = (bbox_regressions, classifications, ldm_regressions)
- else:
- output = (bbox_regressions, F.softmax(classifications, dim=-1), ldm_regressions)
- return output
-
- def __detect_faces(self, inputs):
- # get scale
- height, width = inputs.shape[2:]
- self.scale = torch.tensor([width, height, width, height], dtype=torch.float32).to(device)
- tmp = [width, height, width, height, width, height, width, height, width, height]
- self.scale1 = torch.tensor(tmp, dtype=torch.float32).to(device)
-
-        # forward pass
- inputs = inputs.to(device)
- if self.half_inference:
- inputs = inputs.half()
- loc, conf, landmarks = self(inputs)
-
- # get priorbox
- priorbox = PriorBox(self.cfg, image_size=inputs.shape[2:])
- priors = priorbox.forward().to(device)
-
- return loc, conf, landmarks, priors
-
- # single image detection
- def transform(self, image, use_origin_size):
- # convert to opencv format
- if isinstance(image, Image.Image):
- image = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
- image = image.astype(np.float32)
-
- # testing scale
- im_size_min = np.min(image.shape[0:2])
- im_size_max = np.max(image.shape[0:2])
- resize = float(self.target_size) / float(im_size_min)
-
- # prevent bigger axis from being more than max_size
- if np.round(resize * im_size_max) > self.max_size:
- resize = float(self.max_size) / float(im_size_max)
- resize = 1 if use_origin_size else resize
-
- # resize
- if resize != 1:
- image = cv2.resize(image, None, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR)
-
- # convert to torch.tensor format
- # image -= (104, 117, 123)
- image = image.transpose(2, 0, 1)
- image = torch.from_numpy(image).unsqueeze(0)
-
- return image, resize
-
- def detect_faces(
- self,
- image,
- conf_threshold=0.8,
- nms_threshold=0.4,
- use_origin_size=True,
- ):
- """
- Params:
-            image: input image (BGR np.ndarray, or PIL.Image, which will be converted)
- """
- image, self.resize = self.transform(image, use_origin_size)
- image = image.to(device)
- if self.half_inference:
- image = image.half()
- image = image - self.mean_tensor
-
- loc, conf, landmarks, priors = self.__detect_faces(image)
-
- boxes = decode(loc.data.squeeze(0), priors.data, self.cfg['variance'])
- boxes = boxes * self.scale / self.resize
- boxes = boxes.cpu().numpy()
-
- scores = conf.squeeze(0).data.cpu().numpy()[:, 1]
-
- landmarks = decode_landm(landmarks.squeeze(0), priors, self.cfg['variance'])
- landmarks = landmarks * self.scale1 / self.resize
- landmarks = landmarks.cpu().numpy()
-
- # ignore low scores
- inds = np.where(scores > conf_threshold)[0]
- boxes, landmarks, scores = boxes[inds], landmarks[inds], scores[inds]
-
- # sort
- order = scores.argsort()[::-1]
- boxes, landmarks, scores = boxes[order], landmarks[order], scores[order]
-
- # do NMS
- bounding_boxes = np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32, copy=False)
- keep = py_cpu_nms(bounding_boxes, nms_threshold)
- bounding_boxes, landmarks = bounding_boxes[keep, :], landmarks[keep]
- # self.t['forward_pass'].toc()
- # print(self.t['forward_pass'].average_time)
- # import sys
- # sys.stdout.flush()
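-        # each row: [x1, y1, x2, y2, score] followed by 10 landmark coordinates (5 points)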
- return np.concatenate((bounding_boxes, landmarks), axis=1)
-
- def __align_multi(self, image, boxes, landmarks, limit=None):
-
- if len(boxes) < 1:
- return [], []
-
- if limit:
- boxes = boxes[:limit]
- landmarks = landmarks[:limit]
-
- faces = []
- for landmark in landmarks:
- facial5points = [[landmark[2 * j], landmark[2 * j + 1]] for j in range(5)]
-
- warped_face = warp_and_crop_face(np.array(image), facial5points, self.reference, crop_size=(112, 112))
- faces.append(warped_face)
-
- return np.concatenate((boxes, landmarks), axis=1), faces
-
- def align_multi(self, img, conf_threshold=0.8, limit=None):
-
- rlt = self.detect_faces(img, conf_threshold=conf_threshold)
- boxes, landmarks = rlt[:, 0:5], rlt[:, 5:]
-
- return self.__align_multi(img, boxes, landmarks, limit)
-
- # batched detection
- def batched_transform(self, frames, use_origin_size):
- """
- Arguments:
- frames: a list of PIL.Image, or torch.Tensor(shape=[n, h, w, c],
- type=np.float32, BGR format).
- use_origin_size: whether to use origin size.
- """
- from_PIL = True if isinstance(frames[0], Image.Image) else False
-
- # convert to opencv format
- if from_PIL:
- frames = [cv2.cvtColor(np.asarray(frame), cv2.COLOR_RGB2BGR) for frame in frames]
- frames = np.asarray(frames, dtype=np.float32)
-
- # testing scale
- im_size_min = np.min(frames[0].shape[0:2])
- im_size_max = np.max(frames[0].shape[0:2])
- resize = float(self.target_size) / float(im_size_min)
-
- # prevent bigger axis from being more than max_size
- if np.round(resize * im_size_max) > self.max_size:
- resize = float(self.max_size) / float(im_size_max)
- resize = 1 if use_origin_size else resize
-
- # resize
- if resize != 1:
- if not from_PIL:
- frames = F.interpolate(frames, scale_factor=resize)
- else:
- frames = [
- cv2.resize(frame, None, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR)
- for frame in frames
- ]
-
- # convert to torch.tensor format
- if not from_PIL:
- frames = frames.transpose(1, 2).transpose(1, 3).contiguous()
- else:
- frames = frames.transpose((0, 3, 1, 2))
- frames = torch.from_numpy(frames)
-
- return frames, resize
-
- def batched_detect_faces(self, frames, conf_threshold=0.8, nms_threshold=0.4, use_origin_size=True):
- """
- Arguments:
- frames: a list of PIL.Image, or np.array(shape=[n, h, w, c],
- type=np.uint8, BGR format).
- conf_threshold: confidence threshold.
- nms_threshold: nms threshold.
- use_origin_size: whether to use origin size.
- Returns:
- final_bounding_boxes: list of np.array ([n_boxes, 5],
- type=np.float32).
- final_landmarks: list of np.array ([n_boxes, 10], type=np.float32).
- """
- # self.t['forward_pass'].tic()
- frames, self.resize = self.batched_transform(frames, use_origin_size)
- frames = frames.to(device)
- frames = frames - self.mean_tensor
-
- b_loc, b_conf, b_landmarks, priors = self.__detect_faces(frames)
-
- final_bounding_boxes, final_landmarks = [], []
-
- # decode
- priors = priors.unsqueeze(0)
- b_loc = batched_decode(b_loc, priors, self.cfg['variance']) * self.scale / self.resize
- b_landmarks = batched_decode_landm(b_landmarks, priors, self.cfg['variance']) * self.scale1 / self.resize
- b_conf = b_conf[:, :, 1]
-
- # index for selection
- b_indice = b_conf > conf_threshold
-
- # concat
- b_loc_and_conf = torch.cat((b_loc, b_conf.unsqueeze(-1)), dim=2).float()
-
- for pred, landm, inds in zip(b_loc_and_conf, b_landmarks, b_indice):
-
- # ignore low scores
- pred, landm = pred[inds, :], landm[inds, :]
- if pred.shape[0] == 0:
- final_bounding_boxes.append(np.array([], dtype=np.float32))
- final_landmarks.append(np.array([], dtype=np.float32))
- continue
-
- # sort
- # order = score.argsort(descending=True)
- # box, landm, score = box[order], landm[order], score[order]
-
- # to CPU
- bounding_boxes, landm = pred.cpu().numpy(), landm.cpu().numpy()
-
- # NMS
- keep = py_cpu_nms(bounding_boxes, nms_threshold)
- bounding_boxes, landmarks = bounding_boxes[keep, :], landm[keep]
-
- # append
- final_bounding_boxes.append(bounding_boxes)
- final_landmarks.append(landmarks)
- # self.t['forward_pass'].toc(average=True)
- # self.batch_time += self.t['forward_pass'].diff
- # self.total_frame += len(frames)
- # print(self.batch_time / self.total_frame)
-
- return final_bounding_boxes, final_landmarks
diff --git "a/spaces/leogabraneth/text-generation-webui-main/docs/05 \342\200\220 Training Tab.md" "b/spaces/leogabraneth/text-generation-webui-main/docs/05 \342\200\220 Training Tab.md"
deleted file mode 100644
index 590117a3075ce0da037a9f36c09491c23be1a2c4..0000000000000000000000000000000000000000
--- "a/spaces/leogabraneth/text-generation-webui-main/docs/05 \342\200\220 Training Tab.md"
+++ /dev/null
@@ -1,139 +0,0 @@
-## Training Your Own LoRAs
-
-The WebUI seeks to make training your own LoRAs as easy as possible. It comes down to just a few simple steps:
-
-### **Step 1**: Make a plan.
-- What base model do you want to use? The LoRA you make has to be matched up to a single architecture (eg LLaMA-13B) and cannot be transferred to others (eg LLaMA-7B, StableLM, etc. would all be different). Derivatives of the same model (eg Alpaca finetune of LLaMA-13B) might be transferrable, but even then it's best to train exactly on what you plan to use.
-- What are you training it on? Do you want it to learn real information, a simple format, ...?
-
-### **Step 2**: Gather a dataset.
-- If you use a dataset similar to the [Alpaca](https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json) format, that is natively supported by the `Formatted Dataset` input in the WebUI, with premade formatter options.
-- If you use a dataset that isn't matched to Alpaca's format, but uses the same basic JSON structure, you can make your own format file by copying `training/formats/alpaca-format.json` to a new file and [editing its content](#format-files).
-- If you can get the dataset into a simple text file, that works too! You can train using the `Raw text file` input option.
- - This means you can for example just copy/paste a chatlog/documentation page/whatever you want, shove it in a plain text file, and train on it.
-- If you use a structured dataset not in this format, you may have to find an external way to convert it - or open an issue to request native support.
-
-### **Step 3**: Do the training.
-- **3.1**: Load the WebUI, and your model.
- - Make sure you don't have any LoRAs already loaded (unless you want to train for multi-LoRA usage).
-- **3.2**: Open the `Training` tab at the top, `Train LoRA` sub-tab.
-- **3.3**: Fill in the name of the LoRA, select your dataset in the dataset options.
-- **3.4**: Select other parameters to your preference. See [parameters below](#parameters).
-- **3.5**: click `Start LoRA Training`, and wait.
-  - It can take a few hours for a large dataset, or just a few minutes if doing a small run.
- - You may want to monitor your [loss value](#loss) while it goes.
-
-### **Step 4**: Evaluate your results.
-- Load the LoRA under the Models Tab.
-- You can go test-drive it on the `Text generation` tab, or you can use the `Perplexity evaluation` sub-tab of the `Training` tab.
-- If you used the `Save every n steps` option, you can grab prior copies of the model from sub-folders within the LoRA model's folder and try them instead.
-
-### **Step 5**: Re-run if you're unhappy.
-- Make sure to unload the LoRA before training it.
-- You can simply resume a prior run - use `Copy parameters from` to select your LoRA, and edit parameters. Note that you cannot change the `Rank` of an already created LoRA.
- - If you want to resume from a checkpoint saved along the way, simply copy the contents of the checkpoint folder into the LoRA's folder.
- - (Note: `adapter_model.bin` is the important file that holds the actual LoRA content).
-    - This will reset the Learning Rate and Steps back to the start. If you want to resume as if you were midway through, you can adjust your Learning Rate to the last reported LR in the logs and reduce your epochs.
-- Or, you can start over entirely if you prefer.
-- If your model is producing corrupted outputs, you probably need to start over and use a lower Learning Rate.
-- If your model isn't learning detailed information but you want it to, you might need to just run more epochs, or you might need a higher Rank.
-- If your model is enforcing a format you didn't want, you may need to tweak your dataset, or start over and not train as far.
-
-## Format Files
-
-If using JSON formatted datasets, they are presumed to be in the following approximate format:
-
-```json
-[
- {
- "somekey": "somevalue",
- "key2": "value2"
- },
- {
- // etc
- }
-]
-```
-
-Where the keys (eg `somekey`, `key2` above) are standardized, and relatively consistent across the dataset, and the values (eg `somevalue`, `value2`) contain the content actually intended to be trained.
-
-For Alpaca, the keys are `instruction`, `input`, and `output`, wherein `input` is sometimes blank.
-
-A simple format file for Alpaca to be used as a chat bot is:
-
-```json
-{
- "instruction,output": "User: %instruction%\nAssistant: %output%",
- "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%"
-}
-```
-
-Note that the keys (eg `instruction,output`) are a comma-separated list of dataset keys, and the values are simple strings that reference those keys by wrapping them in `%`.
-
-So for example if a dataset has `"instruction": "answer my question"`, then the format file's `User: %instruction%\n` will be automatically filled in as `User: answer my question\n`.
-
-If you have different sets of key inputs, you can make your own format file to match it. This format-file is designed to be as simple as possible to enable easy editing to match your needs.
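-
-Conceptually, the formatter just substitutes each `%key%` placeholder with the matching value from a dataset row. A minimal sketch of that substitution logic (illustrative only - the `apply_format` helper below is hypothetical, not the WebUI's actual code):
-
-```python
-def apply_format(row: dict, formats: dict) -> str:
-    """Pick the format whose comma-separated key list matches the row's non-empty keys,
-    then replace each %key% placeholder with the row's value."""
-    present = [k for k, v in row.items() if v]    # keys with non-empty values
-    template = formats[",".join(present)]         # e.g. "instruction,output"
-    for key in present:
-        template = template.replace(f"%{key}%", row[key])
-    return template
-
-row = {"instruction": "answer my question", "input": "", "output": "Sure!"}
-formats = {
-    "instruction,output": "User: %instruction%\nAssistant: %output%",
-    "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%",
-}
-print(apply_format(row, formats))  # User: answer my question / Assistant: Sure!
-```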
-
-## Raw Text File Settings
-
-When using raw text files as your dataset, the text is automatically split into chunks based on your `Cutoff Length`, and you get a few basic options to configure them (a simplified sketch of the chunking follows this list).
-- `Overlap Length` is how much to overlap chunks by. Overlapping chunks helps prevent the model from learning strange mid-sentence cuts, and instead learn continual sentences that flow from earlier text.
-- `Prefer Newline Cut Length` sets a maximum distance in characters to shift the chunk cut towards newlines. Doing this helps prevent lines from starting or ending mid-sentence, preventing the model from learning to cut off sentences randomly.
-- `Hard Cut String` sets a string that indicates there must be a hard cut without overlap. This defaults to `\n\n\n`, meaning 3 newlines. No trained chunk will ever contain this string. This allows you to insert unrelated sections of text in the same text file, but still ensure the model won't be taught to randomly change the subject.
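-
-A simplified sketch of this chunking (character-based for brevity - the WebUI actually measures lengths in tokens, and `chunk_raw_text` is a hypothetical helper, not its real code):
-
-```python
-def chunk_raw_text(text: str, cutoff: int, overlap: int, hard_cut: str = "\n\n\n"):
-    """Split raw text into overlapping chunks, never letting a chunk span a hard cut."""
-    chunks = []
-    for section in text.split(hard_cut):       # no chunk ever crosses a hard cut
-        step = max(cutoff - overlap, 1)        # advance by cutoff minus the overlap
-        for start in range(0, len(section), step):
-            chunk = section[start:start + cutoff]
-            if chunk.strip():
-                chunks.append(chunk)
-    return chunks
-
-print(len(chunk_raw_text("some example text " * 200, cutoff=256, overlap=32)))
-```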
-
-## Parameters
-
-The basic purpose and function of each parameter is documented on-page in the WebUI, so read through them in the UI to understand your options.
-
-That said, here's a guide to the most important parameter choices you should consider:
-
-### VRAM
-
-- First, you must consider your VRAM availability.
-  - Generally, VRAM usage for training with default parameters is very close to the usage when generating text with 1000+ tokens of context (ie, if you can generate text, you can train LoRAs).
-    - Note: VRAM usage is currently worse by default with the 4-bit monkeypatch. Reduce `Micro Batch Size` to `1` to restore it to expectations.
- - If you have VRAM to spare, setting higher batch sizes will use more VRAM and get you better quality training in exchange.
- - If you have large data, setting a higher cutoff length may be beneficial, but will cost significant VRAM. If you can spare some, set your batch size to `1` and see how high you can push your cutoff length.
- - If you're low on VRAM, reducing batch size or cutoff length will of course improve that.
- - Don't be afraid to just try it and see what happens. If it's too much, it will just error out, and you can lower settings and try again.
-
-### Rank
-
-- Second, you want to consider the amount of learning you want.
- - For example, you may wish to just learn a dialogue format (as in the case of Alpaca) in which case setting a low `Rank` value (32 or lower) works great.
- - Or, you might be training on project documentation you want the bot to understand and be able to understand questions about, in which case the higher the rank, the better.
- - Generally, higher Rank = more precise learning = more total content learned = more VRAM usage while training.
-
-### Learning Rate and Epochs
-
-- Third, how carefully you want it to be learned.
- - In other words, how okay or not you are with the model losing unrelated understandings.
- - You can control this with 3 key settings: the Learning Rate, its scheduler, and your total epochs.
- - The learning rate controls how much change is made to the model by each token it sees.
- - It's in scientific notation normally, so for example `3e-4` means `3 * 10^-4` which is `0.0003`. The number after `e-` controls how many `0`s are in the number.
- - Higher values let training run faster, but also are more likely to corrupt prior data in the model.
- - You essentially have two variables to balance: the LR, and Epochs.
- - If you make LR higher, you can set Epochs equally lower to match. High LR + low epochs = very fast, low quality training.
- - If you make LR low, set epochs high. Low LR + high epochs = slow but high-quality training.
-    - The scheduler controls how the learning rate changes over time as you train - it starts high, and then goes low. This helps balance getting data in and keeping decent quality at the same time (a minimal decay sketch follows this list).
- - You can see graphs of the different scheduler options [in the HuggingFace docs here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_1/en/main_classes/optimizer_schedules#transformers.SchedulerType)
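-
-To make the interaction concrete, here is a small self-contained sketch in plain PyTorch (not the WebUI's training code) of a learning rate that starts at `3e-4` and decays over the run with a cosine schedule:
-
-```python
-import torch
-
-model = torch.nn.Linear(8, 8)                  # stand-in for the trainable LoRA weights
-optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
-epochs, steps_per_epoch = 3, 100
-scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
-    optimizer, T_max=epochs * steps_per_epoch  # decay across the whole run
-)
-
-for step in range(epochs * steps_per_epoch):
-    optimizer.step()                           # (loss.backward() would come before this)
-    scheduler.step()
-    if step % steps_per_epoch == 0:
-        print(f"step {step}: lr = {scheduler.get_last_lr()[0]:.6f}")
-```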
-
-## Loss
-
-When you're running training, the WebUI's console window will log reports that include, among other things, a numeric value named `Loss`. It will start as a high number, and gradually get lower and lower as it goes.
-
-"Loss" in the world of AI training theoretically means "how close is the model to perfect", with `0` meaning "absolutely perfect". This is calculated by measuring the difference between the model outputting exactly the text you're training it to output, and what it actually outputs.
-
-In practice, a good LLM should have a very complex, variable range of ideas running in its artificial head, so a loss of `0` would indicate that the model has broken and forgotten how to think about anything other than what you trained it on.
-
-So, in effect, Loss is a balancing game: you want to get it low enough that it understands your data, but high enough that it isn't forgetting everything else. Generally, if it goes below `1.0`, it's going to start forgetting its prior memories, and you should stop training. In some cases you may prefer to take it as low as `0.5` (if you want it to be very very predictable). Different goals have different needs, so don't be afraid to experiment and see what works best for you.
-
-Note: if you see Loss start at or suddenly jump to exactly `0`, it is likely something has gone wrong in your training process (eg model corruption).
-
-## Note: 4-Bit Monkeypatch
-
-The [4-bit LoRA monkeypatch](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) works for training, but has side effects:
-- VRAM usage is higher currently. You can reduce the `Micro Batch Size` to `1` to compensate.
-- Models do funky things. LoRAs apply themselves, or refuse to apply, or spontaneously error out, or etc. It can be helpful to reload base model or restart the WebUI between training/usage to minimize chances of anything going haywire.
-- Loading or working with multiple LoRAs at the same time doesn't currently work.
-- Generally, recognize and treat the monkeypatch as the dirty temporary hack it is - it works, but isn't very stable. It will get better in time when everything is merged upstream for full official support.
diff --git "a/spaces/leogabraneth/text-generation-webui-main/docs/11 \342\200\220 AMD Setup.md" "b/spaces/leogabraneth/text-generation-webui-main/docs/11 \342\200\220 AMD Setup.md"
deleted file mode 100644
index 0bd22e7edc248b687028436b52e1058940cf7b85..0000000000000000000000000000000000000000
--- "a/spaces/leogabraneth/text-generation-webui-main/docs/11 \342\200\220 AMD Setup.md"
+++ /dev/null
@@ -1,13 +0,0 @@
-## Using an AMD GPU in Linux
-
-Requires ROCm SDK 5.4.2 or 5.4.3 to be installed. Some systems may also
-need:
-
-```
-sudo apt-get install libstdc++-12-dev
-```
-
-Edit the "one_click.py" script using a text editor and un-comment and
-modify the lines near the top of the script according to your setup. In
-particular, modify the `os.environ["ROCM_PATH"] = '/opt/rocm'` line to
-point to your ROCm installation.
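-
-For example, the un-commented block near the top of `one_click.py` might end up looking something like the following (only `ROCM_PATH` is taken from this guide; the other variable names and values are illustrative and may differ between versions, so check the comments in your copy of the script):
-
-```python
-import os
-
-os.environ["ROCM_PATH"] = "/opt/rocm-5.4.3"        # point this at your ROCm installation
-os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"  # example value for RDNA2 consumer GPUs
-os.environ["HCC_AMDGPU_TARGET"] = "gfx1030"        # example GPU architecture target
-```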
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Xforce Keygen Autocad 2009 64 92.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download Xforce Keygen Autocad 2009 64 92.md
deleted file mode 100644
index a0a1fad32e51dd7d56b4c4cd3757359c4eff04f4..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Xforce Keygen Autocad 2009 64 92.md
+++ /dev/null
@@ -1,6 +0,0 @@
-download xforce keygen autocad 2009 64 92
Download ✵✵✵ https://bytlly.com/2uGw2U
-
- d5da3c52bf
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/DravyagunaVigyanPvSharmaPdfFree !!EXCLUSIVE!!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/DravyagunaVigyanPvSharmaPdfFree !!EXCLUSIVE!!.md
deleted file mode 100644
index 0461279b697407bf9e08d8c25259a6ba26cd0ce3..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/DravyagunaVigyanPvSharmaPdfFree !!EXCLUSIVE!!.md
+++ /dev/null
@@ -1,96 +0,0 @@
-DravyagunaVigyanPvSharmaPdfFree
Download ✪✪✪ https://bytlly.com/2uGyyu
-
-Vivekananda Kendra College of Physical Education & Sports (GURUGRAM), vipanjal@gmail.com. Maharajadhiraja Jyotiraj Kunwar Bahadur..
-
-Category:Vedanta
-
-Category:Vedic periodGlossary of geology
-
-This is a glossary of geology, for use in papers, books, presentations and Wikipedia articles about geology.
-
-__NOTOC__
-
-A
-
-B
-
-C
-
-D
-
-E
-
-F
-
-G
-
-H
-
-I
-
-J
-
-K
-
-L
-
-M
-
-N
-
-O
-
-P
-
-Q
-
-R
-
-S
-
-T
-
-U
-
-V
-
-W
-
-X
-
-Y
-
-Z
-
-See also
-
-Index of geology articles
-
-Geological Time Scale
-
-List of geologists
-
-Outline of geology
-
-References
-
-External links
-
-The Geological Society
-
-Geology section in the University of Oxford Open Access website
-
-The GeoScience Journal
-
-Glossary
-
-Geology an office for the “triangle trade” in Vienna, also for the sale of bootleg liquor and the trade in deadly poison. He joined the Catholic Centre Party and campaigned for a return to a central European Germany.
-
-In the 1950s, when the Communist East Germany was being founded, Nikita Khrushchev stated in public that he was a “German”. The German question was once again at the centre of the Cold War. A unified Germany was feared and a democratic Germany was welcomed as a bulwark against the Soviet Union. In the 1954 West German elections the German question was at the centre of many heated debates. In the 1959 elections the German question became an issue that was even more central. Nevertheless, the CDU, the SED and the KPD remained supporters of a united Germany.
-
-The German question
-
-The 1960s and 1970s saw a change in attitude on both sides of the border. In 1963 the Cologne parliament rejected a declaration of a united Germany by referendum. From the early 1970s on the German question had lost its centrality. German reunification had already been announced in the early 1970s. The federal election in 1976 was a landslide victory for the FDP. The FDP had won about 23% of the vote, and the CDU only about 25%. 4fefd39f24
-
-
-
diff --git a/spaces/lpnguyen/continuous-discrete-time/app.py b/spaces/lpnguyen/continuous-discrete-time/app.py
deleted file mode 100644
index e58aa6d6af0d648b969e06b8891c4631a035f888..0000000000000000000000000000000000000000
--- a/spaces/lpnguyen/continuous-discrete-time/app.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import streamlit as st
-import numpy as np
-from tools import integrate
-from dynamical_system import exp_discrete, exp_continuous
-from scipy.integrate import odeint
-import pandas as pd
-import altair as alt
-
-st.title("Discrete vs continuous population")
-
-st.header("Exponential growth population")
-
-st.markdown(
- r"""
-Discrete time model
-
-$N_{t+1} = N_t + R N_t$
-
-Continuous time model
-
-$\frac{dN}{dt} = r N$
-
-
-"""
-)
-
-st.write("")
-
-init = 0.2 # initial population density
-time_series = np.arange(0, 101, 1)
-
-r = st.sidebar.number_input("r", 1e-3, 0.2, value=0.05, step=1e-4, format="%.4f")
-R = st.sidebar.number_input("R", 1e-3, 0.2, value=0.05, step=1e-4, format="%.4f")
-st.sidebar.write(
- r"The two models give similar results when $R = e^r - 1 = $", np.exp(r) - 1
-)
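-# why R = e^r - 1 makes the two models agree: the continuous model gives
-# N(t) = N0 * exp(r * t) and the discrete model gives N_t = N0 * (1 + R)**t,
-# so they coincide at integer times exactly when 1 + R = exp(r)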
-
-# simulation for continuous time model
-pop_continuous = odeint(exp_continuous, init, time_series, args=(r,), tfirst=True)
-
-# simulation for discrete time model
-t_discrete, pop_discrete = integrate(exp_discrete, init, time_series[-1], R)
-
-df = pd.DataFrame(
- dict(
- time=t_discrete, pop_discrete=pop_discrete, pop_continuous=pop_continuous[:, 0]
- )
-)
-
-
-def make_plot(scale, title):
- tt = alt.TitleParams(title, anchor="middle")
- l1 = (
- alt.Chart(df, title=tt)
- .mark_line()
- .encode(
- x="time",
- y=alt.Y(
- "pop_discrete",
- axis=alt.Axis(title="Population density"),
- scale=alt.Scale(type=scale),
- ),
- color=alt.value("#1f77b4"),
- )
- )
- l2 = (
- alt.Chart(df)
- .mark_line()
- .encode(x="time", y="pop_continuous", color=alt.value("#ff7f0e"))
- )
- return l1 + l2
-
-
-col1, col2 = st.columns(2, gap="large")
-
-with col1:
- st.altair_chart(make_plot("linear", "Normal scale"))
-with col2:
- st.altair_chart(make_plot("log", "Log scale"))
diff --git a/spaces/luost26/DiffAb/diffab/tools/dock/base.py b/spaces/luost26/DiffAb/diffab/tools/dock/base.py
deleted file mode 100644
index 68e52b6d33bf45433fccaffe5481af9d21f15bc4..0000000000000000000000000000000000000000
--- a/spaces/luost26/DiffAb/diffab/tools/dock/base.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import abc
-from typing import List
-
-
-FilePath = str
-
-
-class DockingEngine(abc.ABC):
-
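-    # Intended usage (illustrative sketch; "SomeEngine" stands for any concrete subclass):
-    #
-    #   with SomeEngine(...) as dock:
-    #       dock.set_receptor("receptor.pdb")
-    #       dock.set_ligand("ligand.pdb")
-    #       poses = dock.dock()  # list of docked-structure PDB file paths
-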
- @abc.abstractmethod
- def __enter__(self):
- pass
-
- @abc.abstractmethod
- def __exit__(self, typ, value, traceback):
- pass
-
- @abc.abstractmethod
- def set_receptor(self, pdb_path: FilePath):
- pass
-
- @abc.abstractmethod
- def set_ligand(self, pdb_path: FilePath):
- pass
-
- @abc.abstractmethod
- def dock(self) -> List[FilePath]:
- pass
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/async/transform.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/async/transform.h
deleted file mode 100644
index abb2163ead0654a805deb3b31ca29f8c576ac9e9..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/async/transform.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * Copyright 2008-2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-// The purpose of this header is to #include the async/transform.h header of the
-// sequential, host, and device systems. It should be #included in any code
-// which uses ADL to dispatch async transform.
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-//#include <thrust/system/detail/sequential/async/transform.h>
-
-//#define __THRUST_HOST_SYSTEM_ASYNC_TRANSFORM_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/async/transform.h>
-//#include __THRUST_HOST_SYSTEM_ASYNC_TRANSFORM_HEADER
-//#undef __THRUST_HOST_SYSTEM_ASYNC_TRANSFORM_HEADER
-
-#define __THRUST_DEVICE_SYSTEM_ASYNC_TRANSFORM_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/async/transform.h>
-#include __THRUST_DEVICE_SYSTEM_ASYNC_TRANSFORM_HEADER
-#undef __THRUST_DEVICE_SYSTEM_ASYNC_TRANSFORM_HEADER
-
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/reduce_intervals.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/reduce_intervals.h
deleted file mode 100644
index 88fefe43deffde15e32fe92c45d3b3047b2ba6aa..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/reduce_intervals.h
+++ /dev/null
@@ -1,125 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/seq.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-
-#include <thrust/reduce.h>
-#include <thrust/detail/minmax.h>
-#include <thrust/detail/raw_reference_cast.h>
-#include <thrust/iterator/iterator_traits.h>
-#include <tbb/blocked_range.h>
-#include <tbb/parallel_for.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-namespace reduce_intervals_detail
-{
-
-
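-// integer division that rounds the result up (ceiling division)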
-template <typename L, typename R>
- inline L divide_ri(const L x, const R y)
-{
- return (x + (y - 1)) / y;
-}
-
-
-template <typename RandomAccessIterator1, typename RandomAccessIterator2, typename Size, typename BinaryFunction>
- struct body
-{
- RandomAccessIterator1 first;
- RandomAccessIterator2 result;
- Size n, interval_size;
- BinaryFunction binary_op;
-
- body(RandomAccessIterator1 first, RandomAccessIterator2 result, Size n, Size interval_size, BinaryFunction binary_op)
- : first(first), result(result), n(n), interval_size(interval_size), binary_op(binary_op)
- {}
-
-  void operator()(const ::tbb::blocked_range<Size> &r) const
- {
- assert(r.size() == 1);
-
- Size interval_idx = r.begin();
-
- Size offset_to_first = interval_size * interval_idx;
- Size offset_to_last = thrust::min(n, offset_to_first + interval_size);
-
- RandomAccessIterator1 my_first = first + offset_to_first;
- RandomAccessIterator1 my_last = first + offset_to_last;
-
- // carefully pass the init value for the interval with raw_reference_cast
- typedef typename BinaryFunction::result_type sum_type;
- result[interval_idx] =
- thrust::reduce(thrust::seq, my_first + 1, my_last, sum_type(thrust::raw_reference_cast(*my_first)), binary_op);
- }
-};
-
-
-template <typename RandomAccessIterator1, typename RandomAccessIterator2, typename Size, typename BinaryFunction>
-  body<RandomAccessIterator1,RandomAccessIterator2,Size,BinaryFunction>
-    make_body(RandomAccessIterator1 first, RandomAccessIterator2 result, Size n, Size interval_size, BinaryFunction binary_op)
-{
-  return body<RandomAccessIterator1,RandomAccessIterator2,Size,BinaryFunction>(first, result, n, interval_size, binary_op);
-}
-
-
-} // end reduce_intervals_detail
-
-
-template <typename DerivedPolicy, typename RandomAccessIterator1, typename Size, typename RandomAccessIterator2, typename BinaryFunction>
-  void reduce_intervals(thrust::tbb::execution_policy<DerivedPolicy> &,
- RandomAccessIterator1 first,
- RandomAccessIterator1 last,
- Size interval_size,
- RandomAccessIterator2 result,
- BinaryFunction binary_op)
-{
-  typename thrust::iterator_difference<RandomAccessIterator1>::type n = last - first;
-
- Size num_intervals = reduce_intervals_detail::divide_ri(n, interval_size);
-
-  ::tbb::parallel_for(::tbb::blocked_range<Size>(0, num_intervals, 1), reduce_intervals_detail::make_body(first, result, Size(n), interval_size, binary_op), ::tbb::simple_partitioner());
-}
-
-
-template <typename DerivedPolicy, typename RandomAccessIterator1, typename Size, typename RandomAccessIterator2>
-  void reduce_intervals(thrust::tbb::execution_policy<DerivedPolicy> &exec,
- RandomAccessIterator1 first,
- RandomAccessIterator1 last,
- Size interval_size,
- RandomAccessIterator2 result)
-{
-  typedef typename thrust::iterator_value<RandomAccessIterator1>::type value_type;
-
-  return thrust::system::tbb::detail::reduce_intervals(exec, first, last, interval_size, result, thrust::plus<value_type>());
-}
-
-
-} // end detail
-} // end tbb
-} // end system
-} // end thrust
-
diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/tof_arch.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/tof_arch.py
deleted file mode 100644
index 09cf7069157604b8b925219d8c8be2a14c9df26f..0000000000000000000000000000000000000000
--- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/tof_arch.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import torch
-from torch import nn as nn
-from torch.nn import functional as F
-
-from basicsr.utils.registry import ARCH_REGISTRY
-from .arch_util import flow_warp
-
-
-class BasicModule(nn.Module):
- """Basic module of SPyNet.
-
- Note that unlike the architecture in spynet_arch.py, the basic module
- here contains batch normalization.
- """
-
- def __init__(self):
- super(BasicModule, self).__init__()
- self.basic_module = nn.Sequential(
- nn.Conv2d(in_channels=8, out_channels=32, kernel_size=7, stride=1, padding=3, bias=False),
- nn.BatchNorm2d(32), nn.ReLU(inplace=True),
- nn.Conv2d(in_channels=32, out_channels=64, kernel_size=7, stride=1, padding=3, bias=False),
- nn.BatchNorm2d(64), nn.ReLU(inplace=True),
- nn.Conv2d(in_channels=64, out_channels=32, kernel_size=7, stride=1, padding=3, bias=False),
- nn.BatchNorm2d(32), nn.ReLU(inplace=True),
- nn.Conv2d(in_channels=32, out_channels=16, kernel_size=7, stride=1, padding=3, bias=False),
- nn.BatchNorm2d(16), nn.ReLU(inplace=True),
- nn.Conv2d(in_channels=16, out_channels=2, kernel_size=7, stride=1, padding=3))
-
- def forward(self, tensor_input):
- """
- Args:
- tensor_input (Tensor): Input tensor with shape (b, 8, h, w).
- 8 channels contain:
- [reference image (3), neighbor image (3), initial flow (2)].
-
- Returns:
- Tensor: Estimated flow with shape (b, 2, h, w)
- """
- return self.basic_module(tensor_input)
-
-
-class SPyNetTOF(nn.Module):
- """SPyNet architecture for TOF.
-
- Note that this implementation is specifically for TOFlow. Please use
- spynet_arch.py for general use. They differ in the following aspects:
- 1. The basic modules here contain BatchNorm.
- 2. Normalization and denormalization are not done here, as
- they are done in TOFlow.
- Paper:
- Optical Flow Estimation using a Spatial Pyramid Network
- Code reference:
- https://github.com/Coldog2333/pytoflow
-
- Args:
- load_path (str): Path for pretrained SPyNet. Default: None.
- """
-
- def __init__(self, load_path=None):
- super(SPyNetTOF, self).__init__()
-
- self.basic_module = nn.ModuleList([BasicModule() for _ in range(4)])
- if load_path:
- self.load_state_dict(torch.load(load_path, map_location=lambda storage, loc: storage)['params'])
-
- def forward(self, ref, supp):
- """
- Args:
- ref (Tensor): Reference image with shape of (b, 3, h, w).
- supp: The supporting image to be warped: (b, 3, h, w).
-
- Returns:
- Tensor: Estimated optical flow: (b, 2, h, w).
- """
- num_batches, _, h, w = ref.size()
- ref = [ref]
- supp = [supp]
-
- # generate downsampled frames
- for _ in range(3):
- ref.insert(0, F.avg_pool2d(input=ref[0], kernel_size=2, stride=2, count_include_pad=False))
- supp.insert(0, F.avg_pool2d(input=supp[0], kernel_size=2, stride=2, count_include_pad=False))
-
- # flow computation
- flow = ref[0].new_zeros(num_batches, 2, h // 16, w // 16)
- for i in range(4):
- flow_up = F.interpolate(input=flow, scale_factor=2, mode='bilinear', align_corners=True) * 2.0
- flow = flow_up + self.basic_module[i](
- torch.cat([ref[i], flow_warp(supp[i], flow_up.permute(0, 2, 3, 1)), flow_up], 1))
- return flow
-
-
-@ARCH_REGISTRY.register()
-class TOFlow(nn.Module):
- """PyTorch implementation of TOFlow.
-
- In TOFlow, the LR frames are pre-upsampled and have the same size with
- the GT frames.
- Paper:
- Xue et al., Video Enhancement with Task-Oriented Flow, IJCV 2018
- Code reference:
- 1. https://github.com/anchen1011/toflow
- 2. https://github.com/Coldog2333/pytoflow
-
- Args:
- adapt_official_weights (bool): Whether to adapt the weights translated
- from the official implementation. Set to false if you want to
- train from scratch. Default: False
- """
-
- def __init__(self, adapt_official_weights=False):
- super(TOFlow, self).__init__()
- self.adapt_official_weights = adapt_official_weights
- self.ref_idx = 0 if adapt_official_weights else 3
-
- self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
- self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))
-
- # flow estimation module
- self.spynet = SPyNetTOF()
-
- # reconstruction module
- self.conv_1 = nn.Conv2d(3 * 7, 64, 9, 1, 4)
- self.conv_2 = nn.Conv2d(64, 64, 9, 1, 4)
- self.conv_3 = nn.Conv2d(64, 64, 1)
- self.conv_4 = nn.Conv2d(64, 3, 1)
-
- # activation function
- self.relu = nn.ReLU(inplace=True)
-
- def normalize(self, img):
- return (img - self.mean) / self.std
-
- def denormalize(self, img):
- return img * self.std + self.mean
-
- def forward(self, lrs):
- """
- Args:
- lrs: Input lr frames: (b, 7, 3, h, w).
-
- Returns:
- Tensor: SR frame: (b, 3, h, w).
- """
- # In the official implementation, the 0-th frame is the reference frame
- if self.adapt_official_weights:
- lrs = lrs[:, [3, 0, 1, 2, 4, 5, 6], :, :, :]
-
- num_batches, num_lrs, _, h, w = lrs.size()
-
- lrs = self.normalize(lrs.view(-1, 3, h, w))
- lrs = lrs.view(num_batches, num_lrs, 3, h, w)
-
- lr_ref = lrs[:, self.ref_idx, :, :, :]
- lr_aligned = []
- for i in range(7): # 7 frames
- if i == self.ref_idx:
- lr_aligned.append(lr_ref)
- else:
- lr_supp = lrs[:, i, :, :, :]
- flow = self.spynet(lr_ref, lr_supp)
- lr_aligned.append(flow_warp(lr_supp, flow.permute(0, 2, 3, 1)))
-
- # reconstruction
- hr = torch.stack(lr_aligned, dim=1)
- hr = hr.view(num_batches, -1, h, w)
- hr = self.relu(self.conv_1(hr))
- hr = self.relu(self.conv_2(hr))
- hr = self.relu(self.conv_3(hr))
- hr = self.conv_4(hr) + lr_ref
-
- return self.denormalize(hr)
diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/iter_counter.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/iter_counter.py
deleted file mode 100644
index 277bb67bb41d613e21f8400e3733490909ff73b7..0000000000000000000000000000000000000000
--- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/iter_counter.py
+++ /dev/null
@@ -1,74 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import os
-import time
-import numpy as np
-
-
-# Helper class that keeps track of training iterations
-class IterationCounter:
- def __init__(self, opt, dataset_size):
- self.opt = opt
- self.dataset_size = dataset_size
-
- self.first_epoch = 1
- self.total_epochs = opt.niter + opt.niter_decay
- self.epoch_iter = 0 # iter number within each epoch
- self.iter_record_path = os.path.join(self.opt.checkpoints_dir, self.opt.name, "iter.txt")
- if opt.isTrain and opt.continue_train:
- try:
- self.first_epoch, self.epoch_iter = np.loadtxt(
- self.iter_record_path, delimiter=",", dtype=int
- )
- print("Resuming from epoch %d at iteration %d" % (self.first_epoch, self.epoch_iter))
-            except Exception:
- print(
- "Could not load iteration record at %s. Starting from beginning." % self.iter_record_path
- )
-
- self.total_steps_so_far = (self.first_epoch - 1) * dataset_size + self.epoch_iter
-
- # return the iterator of epochs for the training
- def training_epochs(self):
- return range(self.first_epoch, self.total_epochs + 1)
-
- def record_epoch_start(self, epoch):
- self.epoch_start_time = time.time()
- self.epoch_iter = 0
- self.last_iter_time = time.time()
- self.current_epoch = epoch
-
- def record_one_iteration(self):
- current_time = time.time()
-
- # the last remaining batch is dropped (see data/__init__.py),
- # so we can assume batch size is always opt.batchSize
- self.time_per_iter = (current_time - self.last_iter_time) / self.opt.batchSize
- self.last_iter_time = current_time
- self.total_steps_so_far += self.opt.batchSize
- self.epoch_iter += self.opt.batchSize
-
- def record_epoch_end(self):
- current_time = time.time()
- self.time_per_epoch = current_time - self.epoch_start_time
- print(
- "End of epoch %d / %d \t Time Taken: %d sec"
- % (self.current_epoch, self.total_epochs, self.time_per_epoch)
- )
- if self.current_epoch % self.opt.save_epoch_freq == 0:
- np.savetxt(self.iter_record_path, (self.current_epoch + 1, 0), delimiter=",", fmt="%d")
- print("Saved current iteration count at %s." % self.iter_record_path)
-
- def record_current_iter(self):
- np.savetxt(self.iter_record_path, (self.current_epoch, self.epoch_iter), delimiter=",", fmt="%d")
- print("Saved current iteration count at %s." % self.iter_record_path)
-
- def needs_saving(self):
- return (self.total_steps_so_far % self.opt.save_latest_freq) < self.opt.batchSize
-
- def needs_printing(self):
- return (self.total_steps_so_far % self.opt.print_freq) < self.opt.batchSize
-
- def needs_displaying(self):
- return (self.total_steps_so_far % self.opt.display_freq) < self.opt.batchSize
diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/batchnorm.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/batchnorm.py
deleted file mode 100644
index bf8d7a7325b474771a11a137053971fd40426079..0000000000000000000000000000000000000000
--- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/sync_batchnorm/batchnorm.py
+++ /dev/null
@@ -1,412 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : batchnorm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import collections
-import contextlib
-
-import torch
-import torch.nn.functional as F
-
-from torch.nn.modules.batchnorm import _BatchNorm
-
-try:
- from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
-except ImportError:
- ReduceAddCoalesced = Broadcast = None
-
-try:
- from jactorch.parallel.comm import SyncMaster
- from jactorch.parallel.data_parallel import JacDataParallel as DataParallelWithCallback
-except ImportError:
- from .comm import SyncMaster
- from .replicate import DataParallelWithCallback
-
-__all__ = [
- 'set_sbn_eps_mode',
- 'SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d',
- 'patch_sync_batchnorm', 'convert_model'
-]
-
-
-SBN_EPS_MODE = 'clamp'
-
-
-def set_sbn_eps_mode(mode):
- global SBN_EPS_MODE
- assert mode in ('clamp', 'plus')
- SBN_EPS_MODE = mode
-
-
-def _sum_ft(tensor):
- """sum over the first and last dimention"""
- return tensor.sum(dim=0).sum(dim=-1)
-
-
-def _unsqueeze_ft(tensor):
- """add new dimensions at the front and the tail"""
- return tensor.unsqueeze(0).unsqueeze(-1)
-
-
-_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
-_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
-
-
-class _SynchronizedBatchNorm(_BatchNorm):
- def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True):
- assert ReduceAddCoalesced is not None, 'Cannot use Synchronized Batch Normalization without CUDA support.'
-
- super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine,
- track_running_stats=track_running_stats)
-
- if not self.track_running_stats:
- import warnings
- warnings.warn('track_running_stats=False is not supported by the SynchronizedBatchNorm.')
-
- self._sync_master = SyncMaster(self._data_parallel_master)
-
- self._is_parallel = False
- self._parallel_id = None
- self._slave_pipe = None
-
- def forward(self, input):
- # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
- if not (self._is_parallel and self.training):
- return F.batch_norm(
- input, self.running_mean, self.running_var, self.weight, self.bias,
- self.training, self.momentum, self.eps)
-
- # Resize the input to (B, C, -1).
- input_shape = input.size()
- assert input.size(1) == self.num_features, 'Channel size mismatch: got {}, expect {}.'.format(input.size(1), self.num_features)
- input = input.view(input.size(0), self.num_features, -1)
-
- # Compute the sum and square-sum.
- sum_size = input.size(0) * input.size(2)
- input_sum = _sum_ft(input)
- input_ssum = _sum_ft(input ** 2)
-
- # Reduce-and-broadcast the statistics.
- if self._parallel_id == 0:
- mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
- else:
- mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
-
- # Compute the output.
- if self.affine:
- # MJY:: Fuse the multiplication for speed.
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
- else:
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
-
- # Reshape it.
- return output.view(input_shape)
-
- def __data_parallel_replicate__(self, ctx, copy_id):
- self._is_parallel = True
- self._parallel_id = copy_id
-
- # parallel_id == 0 means master device.
- if self._parallel_id == 0:
- ctx.sync_master = self._sync_master
- else:
- self._slave_pipe = ctx.sync_master.register_slave(copy_id)
-
- def _data_parallel_master(self, intermediates):
- """Reduce the sum and square-sum, compute the statistics, and broadcast it."""
-
- # Always using same "device order" makes the ReduceAdd operation faster.
- # Thanks to:: Tete Xiao (http://tetexiao.com/)
- intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
-
- to_reduce = [i[1][:2] for i in intermediates]
- to_reduce = [j for i in to_reduce for j in i] # flatten
- target_gpus = [i[1].sum.get_device() for i in intermediates]
-
- sum_size = sum([i[1].sum_size for i in intermediates])
- sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
- mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
-
- broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
-
- outputs = []
- for i, rec in enumerate(intermediates):
- outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
-
- return outputs
-
- def _compute_mean_std(self, sum_, ssum, size):
- """Compute the mean and standard-deviation with sum and square-sum. This method
- also maintains the moving average on the master device."""
- assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
- mean = sum_ / size
- sumvar = ssum - sum_ * mean
- unbias_var = sumvar / (size - 1)
- bias_var = sumvar / size
-
- if hasattr(torch, 'no_grad'):
- with torch.no_grad():
- self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
- self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
- else:
- self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
- self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
-
- if SBN_EPS_MODE == 'clamp':
- return mean, bias_var.clamp(self.eps) ** -0.5
- elif SBN_EPS_MODE == 'plus':
- return mean, (bias_var + self.eps) ** -0.5
- else:
- raise ValueError('Unknown EPS mode: {}.'.format(SBN_EPS_MODE))
-
-
-class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
- r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
- mini-batch.
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm1d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalizes the tensor on each device using
- only the statistics computed on that device, which accelerates the computation and
- is easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, in the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of size
- `batch_size x num_features [x width]`
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C)` or :math:`(N, C, L)`
- - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 2 and input.dim() != 3:
- raise ValueError('expected 2D or 3D input (got {}D input)'
- .format(input.dim()))
-
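-# Usage sketch (not part of the original file; names follow the docstring example above):
-# to actually synchronize statistics across replicas, wrap the network with
-# DataParallelWithCallback, e.g.
-#   import torch.nn as nn
-#   net = nn.Sequential(nn.Linear(20, 100), SynchronizedBatchNorm1d(100)).cuda()
-#   net = DataParallelWithCallback(net, device_ids=[0, 1])
-#   output = net(torch.randn(32, 20).cuda())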
-
-class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
- of 3d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm2d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalizes the tensor on each device using
- only the statistics computed on that device, which accelerates the computation and
- is easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, in the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, H, W)`
- - Output: :math:`(N, C, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 4:
- raise ValueError('expected 4D input (got {}D input)'
- .format(input.dim()))
-
-
-class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
- of 4d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm3d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalizes the tensor on each device using
- only the statistics computed on that device, which accelerates the computation and
- is easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, in the one-GPU or CPU-only case, this module behaves exactly the same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
- or Spatio-temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x depth x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, D, H, W)`
- - Output: :math:`(N, C, D, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 5:
- raise ValueError('expected 5D input (got {}D input)'
- .format(input.dim()))
-
-
-@contextlib.contextmanager
-def patch_sync_batchnorm():
- import torch.nn as nn
-
- backup = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d
-
- nn.BatchNorm1d = SynchronizedBatchNorm1d
- nn.BatchNorm2d = SynchronizedBatchNorm2d
- nn.BatchNorm3d = SynchronizedBatchNorm3d
-
- yield
-
- nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d = backup
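-
-# Usage sketch (assumption, not in the original file): BatchNorm layers constructed
-# inside the context pick up the synchronized classes; modules created before or
-# after it are unaffected, e.g.
-#   with patch_sync_batchnorm():
-#       model = torchvision.models.resnet18()  # hypothetical model; its BatchNorm2d layers become synchronized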
-
-
-def convert_model(module):
- """Traverse the input module and its child recursively
- and replace all instance of torch.nn.modules.batchnorm.BatchNorm*N*d
- to SynchronizedBatchNorm*N*d
-
- Args:
- module: the input module needs to be convert to SyncBN model
-
- Examples:
- >>> import torch.nn as nn
- >>> import torchvision
- >>> # m is a standard pytorch model
- >>> m = torchvision.models.resnet18(True)
- >>> m = nn.DataParallel(m)
- >>> # after convert, m is using SyncBN
- >>> m = convert_model(m)
- """
- if isinstance(module, torch.nn.DataParallel):
- mod = module.module
- mod = convert_model(mod)
- mod = DataParallelWithCallback(mod, device_ids=module.device_ids)
- return mod
-
- mod = module
- for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d,
- torch.nn.modules.batchnorm.BatchNorm2d,
- torch.nn.modules.batchnorm.BatchNorm3d],
- [SynchronizedBatchNorm1d,
- SynchronizedBatchNorm2d,
- SynchronizedBatchNorm3d]):
- if isinstance(module, pth_module):
- mod = sync_module(module.num_features, module.eps, module.momentum, module.affine)
- mod.running_mean = module.running_mean
- mod.running_var = module.running_var
- if module.affine:
- mod.weight.data = module.weight.data.clone().detach()
- mod.bias.data = module.bias.data.clone().detach()
-
- for name, child in module.named_children():
- mod.add_module(name, convert_model(child))
-
- return mod
diff --git a/spaces/manjunathshiva/BibleGPT/README.md b/spaces/manjunathshiva/BibleGPT/README.md
deleted file mode 100644
index 08be5578b38eb969322d6a50689e911b8ef15a4f..0000000000000000000000000000000000000000
--- a/spaces/manjunathshiva/BibleGPT/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: BibleGPT
-emoji: 🌖
-colorFrom: yellow
-colorTo: purple
-sdk: docker
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/marcop/musika/models.py b/spaces/marcop/musika/models.py
deleted file mode 100644
index 254bf152c1cd7032a9fad0b77d7977ff9ec65686..0000000000000000000000000000000000000000
--- a/spaces/marcop/musika/models.py
+++ /dev/null
@@ -1,783 +0,0 @@
-import numpy as np
-import tensorflow as tf
-from tensorflow.python.keras.utils.layer_utils import count_params
-
-from layers import AddNoise
-
-
-class Models_functions:
- def __init__(self, args):
-
- self.args = args
-
- if self.args.mixed_precision:
- self.mixed_precision = tf.keras.mixed_precision
- self.policy = tf.keras.mixed_precision.Policy("mixed_float16")
- tf.keras.mixed_precision.set_global_policy(self.policy)
- self.init = tf.keras.initializers.he_uniform()
-
- def conv_util(
- self, inp, filters, kernel_size=(1, 3), strides=(1, 1), noise=False, upsample=False, padding="same", bnorm=True
- ):
-
- x = inp
-
- bias = True
- if bnorm:
- bias = False
-
- if upsample:
- x = tf.keras.layers.Conv2DTranspose(
- filters,
- kernel_size=kernel_size,
- strides=strides,
- activation="linear",
- padding=padding,
- kernel_initializer=self.init,
- use_bias=bias,
- )(x)
- else:
- x = tf.keras.layers.Conv2D(
- filters,
- kernel_size=kernel_size,
- strides=strides,
- activation="linear",
- padding=padding,
- kernel_initializer=self.init,
- use_bias=bias,
- )(x)
-
- if noise:
- x = AddNoise(self.args.datatype)(x)
-
- if bnorm:
- x = tf.keras.layers.BatchNormalization()(x)
-
- x = tf.keras.activations.swish(x)
-
- return x
-
- def pixel_shuffle(self, x, factor=2):
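- # 1-D pixel shuffle along the width axis: folds a factor of the channel dimension into
- # the width dimension (W -> W * factor, C -> C // factor) as a parameter-free upsampling.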
- bs_dim, h_dim, w_dim, c_dim = tf.shape(x)[0], x.shape[1], x.shape[2], x.shape[3]
- x = tf.reshape(x, [bs_dim, h_dim, w_dim, c_dim // factor, factor])
- x = tf.transpose(x, [0, 1, 2, 4, 3])
- return tf.reshape(x, [bs_dim, h_dim, w_dim * factor, c_dim // factor])
-
- def adain(self, x, emb, name):
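- # AdaIN-style conditioning: normalize x by its standard deviation along the width (time)
- # axis, then scale it channel-wise by a learned 1x1-conv projection of the conditioning embedding.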
- emb = tf.keras.layers.Conv2D(
- x.shape[-1],
- kernel_size=(1, 1),
- strides=1,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=True,
- name=name,
- )(emb)
- x = x / (tf.math.reduce_std(x, -2, keepdims=True) + 1e-5)
- return x * emb
-
- def conv_util_gen(
- self,
- inp,
- filters,
- kernel_size=(1, 9),
- strides=(1, 1),
- noise=False,
- upsample=False,
- emb=None,
- se1=None,
- name="0",
- ):
-
- x = inp
-
- if upsample:
- x = tf.keras.layers.Conv2DTranspose(
- filters,
- kernel_size=kernel_size,
- strides=strides,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=True,
- name=name + "c",
- )(x)
- else:
- x = tf.keras.layers.Conv2D(
- filters,
- kernel_size=kernel_size,
- strides=strides,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=True,
- name=name + "c",
- )(x)
-
- if noise:
- x = AddNoise(self.args.datatype, name=name + "r")(x)
-
- if emb is not None:
- x = self.adain(x, emb, name=name + "ai")
- else:
- x = tf.keras.layers.BatchNormalization(name=name + "bn")(x)
-
- x = tf.keras.activations.swish(x)
-
- return x
-
- def res_block_disc(self, inp, filters, kernel_size=(1, 3), kernel_size_2=None, strides=(1, 1), name="0"):
-
- if kernel_size_2 is None:
- kernel_size_2 = kernel_size
-
- x = tf.keras.layers.Conv2D(
- inp.shape[-1],
- kernel_size=kernel_size_2,
- strides=1,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- name=name + "c0",
- )(inp)
- x = tf.keras.layers.LeakyReLU(0.2)(x)
- x = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * x
- x = tf.keras.layers.Conv2D(
- filters,
- kernel_size=kernel_size,
- strides=strides,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- name=name + "c1",
- )(x)
- x = tf.keras.layers.LeakyReLU(0.2)(x)
- x = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * x
-
- if strides != (1, 1):
- inp = tf.keras.layers.AveragePooling2D(strides, padding="same")(inp)
-
- if inp.shape[-1] != filters:
- inp = tf.keras.layers.Conv2D(
- filters,
- kernel_size=1,
- strides=1,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=False,
- name=name + "c3",
- )(inp)
-
- return x + inp
-
- def build_encoder2(self):
-
- inpf = tf.keras.layers.Input((1, self.args.shape, self.args.hop // 4))
-
- inpfls = tf.split(inpf, 8, -2)
- inpb = tf.concat(inpfls, 0)
-
- g0 = self.conv_util(inpb, self.args.hop, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False)
- g1 = self.conv_util(
- g0, self.args.hop + self.args.hop // 2, kernel_size=(1, 3), strides=(1, 2), padding="valid", bnorm=False
- )
- g2 = self.conv_util(
- g1, self.args.hop + self.args.hop // 2, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False
- )
- g3 = self.conv_util(g2, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 2), padding="valid", bnorm=False)
- g4 = self.conv_util(g3, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 1), padding="same", bnorm=False)
- g5 = self.conv_util(g4, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), padding="valid", bnorm=False)
- g5 = self.conv_util(g5, self.args.hop * 3, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False)
-
- g = tf.keras.layers.Conv2D(
- self.args.latdepth,
- kernel_size=(1, 1),
- strides=1,
- padding="valid",
- kernel_initializer=self.init,
- name="cbottle",
- activation="tanh",
- )(g5)
-
- gls = tf.split(g, 8, 0)
- g = tf.concat(gls, -2)
- gls = tf.split(g, 2, -2)
- g = tf.concat(gls, 0)
-
- gf = tf.cast(g, tf.float32)
-
- return tf.keras.Model(inpf, gf, name="ENC2")
-
- def build_decoder2(self):
-
- inpf = tf.keras.layers.Input((1, self.args.shape // 32, self.args.latdepth))
-
- g = inpf
-
- g = self.conv_util(
- g, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), upsample=False, noise=True, bnorm=False
- )
- g = self.conv_util(
- g,
- self.args.hop * 2 + self.args.hop // 2,
- kernel_size=(1, 4),
- strides=(1, 2),
- upsample=True,
- noise=True,
- bnorm=False,
- )
- g = self.conv_util(
- g,
- self.args.hop * 2 + self.args.hop // 2,
- kernel_size=(1, 3),
- strides=(1, 1),
- upsample=False,
- noise=True,
- bnorm=False,
- )
- g = self.conv_util(
- g, self.args.hop * 2, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False
- )
- g = self.conv_util(
- g, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 1), upsample=False, noise=True, bnorm=False
- )
- g = self.conv_util(
- g,
- self.args.hop + self.args.hop // 2,
- kernel_size=(1, 4),
- strides=(1, 2),
- upsample=True,
- noise=True,
- bnorm=False,
- )
- g = self.conv_util(g, self.args.hop, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False)
-
- gf = tf.keras.layers.Conv2D(
- self.args.hop // 4, kernel_size=(1, 1), strides=1, padding="same", kernel_initializer=self.init, name="cout"
- )(g)
-
- gfls = tf.split(gf, 2, 0)
- gf = tf.concat(gfls, -2)
-
- gf = tf.cast(gf, tf.float32)
-
- return tf.keras.Model(inpf, gf, name="DEC2")
-
- def build_encoder(self):
-
- dim = ((4 * self.args.hop) // 2) + 1
-
- inpf = tf.keras.layers.Input((dim, self.args.shape, 1))
-
- ginp = tf.transpose(inpf, [0, 3, 2, 1])
-
- g0 = self.conv_util(ginp, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False)
- g1 = self.conv_util(g0, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False)
- g2 = self.conv_util(g1, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False)
- g4 = self.conv_util(g2, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False)
- g5 = self.conv_util(g4, self.args.hop * 4, kernel_size=(1, 1), strides=(1, 1), padding="valid", bnorm=False)
-
- g = tf.keras.layers.Conv2D(
- self.args.hop // 4, kernel_size=(1, 1), strides=1, padding="valid", kernel_initializer=self.init
- )(g5)
-
- g = tf.keras.activations.tanh(g)
-
- gls = tf.split(g, 2, -2)
- g = tf.concat(gls, 0)
-
- gf = tf.cast(g, tf.float32)
-
- return tf.keras.Model(inpf, gf, name="ENC")
-
- def build_decoder(self):
-
- dim = ((4 * self.args.hop) // 2) + 1
-
- inpf = tf.keras.layers.Input((1, self.args.shape // 2, self.args.hop // 4))
-
- g = inpf
-
- g0 = self.conv_util(g, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 1), noise=True, bnorm=False)
- g1 = self.conv_util(g0, self.args.hop * 3, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False)
- g2 = self.conv_util(g1, self.args.hop * 2, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False)
- g3 = self.conv_util(g2, self.args.hop, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False)
- g = self.conv_util(g3, self.args.hop, kernel_size=(1, 3), strides=(1, 2), noise=True, bnorm=False)
-
- g33 = self.conv_util(
- g, self.args.hop, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False
- )
- g22 = self.conv_util(
- g3, self.args.hop * 2, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False
- )
- g11 = self.conv_util(
- g22 + g2, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False
- )
- g00 = self.conv_util(
- g11 + g1, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False
- )
-
- g = tf.keras.layers.Conv2D(
- dim, kernel_size=(1, 1), strides=(1, 1), kernel_initializer=self.init, padding="same"
- )(g00 + g0)
- gf = tf.clip_by_value(g, -1.0, 1.0)
-
- g = self.conv_util(
- g22, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False
- )
- g = self.conv_util(
- g + g11, self.args.hop * 3, kernel_size=(1, 4), strides=(1, 2), upsample=True, noise=True, bnorm=False
- )
- g = tf.keras.layers.Conv2D(
- dim, kernel_size=(1, 1), strides=(1, 1), kernel_initializer=self.init, padding="same"
- )(g + g00)
- pf = tf.clip_by_value(g, -1.0, 1.0)
-
- gfls = tf.split(gf, self.args.shape // self.args.window, 0)
- gf = tf.concat(gfls, -2)
-
- pfls = tf.split(pf, self.args.shape // self.args.window, 0)
- pf = tf.concat(pfls, -2)
-
- s = tf.transpose(gf, [0, 2, 3, 1])
- p = tf.transpose(pf, [0, 2, 3, 1])
-
- s = tf.cast(tf.squeeze(s, -1), tf.float32)
- p = tf.cast(tf.squeeze(p, -1), tf.float32)
-
- return tf.keras.Model(inpf, [s, p], name="DEC")
-
- def build_critic(self):
-
- sinp = tf.keras.layers.Input(shape=(1, self.args.latlen, self.args.latdepth * 2))
-
- sf = tf.keras.layers.Conv2D(
- self.args.base_channels * 3,
- kernel_size=(1, 4),
- strides=(1, 2),
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- name="1c",
- )(sinp)
- sf = tf.keras.layers.LeakyReLU(0.2)(sf)
-
- sf = self.res_block_disc(sf, self.args.base_channels * 4, kernel_size=(1, 4), strides=(1, 2), name="2")
-
- sf = self.res_block_disc(sf, self.args.base_channels * 5, kernel_size=(1, 4), strides=(1, 2), name="3")
-
- sf = self.res_block_disc(sf, self.args.base_channels * 6, kernel_size=(1, 4), strides=(1, 2), name="4")
-
- sf = self.res_block_disc(sf, self.args.base_channels * 7, kernel_size=(1, 4), strides=(1, 2), name="5")
-
- if not self.args.small:
- sf = self.res_block_disc(
- sf, self.args.base_channels * 7, kernel_size=(1, 4), strides=(1, 2), kernel_size_2=(1, 1), name="6"
- )
-
- sf = tf.keras.layers.Conv2D(
- self.args.base_channels * 7,
- kernel_size=(1, 3),
- strides=(1, 1),
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- name="7c",
- )(sf)
- sf = tf.keras.layers.LeakyReLU(0.2)(sf)
-
- gf = tf.keras.layers.Dense(1, activation="linear", use_bias=True, kernel_initializer=self.init, name="7d")(
- tf.keras.layers.Flatten()(sf)
- )
-
- gf = tf.cast(gf, tf.float32)
-
- return tf.keras.Model(sinp, gf, name="C")
-
- def build_generator(self):
-
- dim = self.args.latdepth * 2
-
- inpf = tf.keras.layers.Input((self.args.latlen, self.args.latdepth * 2))
-
- inpfls = tf.split(inpf, 2, -2)
- inpb = tf.concat(inpfls, 0)
-
- inpg = tf.reduce_mean(inpb, -2)
- inp1 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(tf.expand_dims(inpb, -3))
- inp2 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp1)
- inp3 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp2)
- inp4 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp3)
- inp5 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp4)
- if not self.args.small:
- inp6 = tf.keras.layers.AveragePooling2D((1, 2), padding="valid")(inp5)
-
- if not self.args.small:
- g = tf.keras.layers.Dense(
- 4 * (self.args.base_channels * 7),
- activation="linear",
- use_bias=True,
- kernel_initializer=self.init,
- name="00d",
- )(tf.keras.layers.Flatten()(inp6))
- g = tf.keras.layers.Reshape((1, 4, self.args.base_channels * 7))(g)
- g = AddNoise(self.args.datatype, name="00n")(g)
- g = self.adain(g, inp5, name="00ai")
- g = tf.keras.activations.swish(g)
- else:
- g = tf.keras.layers.Dense(
- 4 * (self.args.base_channels * 7),
- activation="linear",
- use_bias=True,
- kernel_initializer=self.init,
- name="00d",
- )(tf.keras.layers.Flatten()(inp5))
- g = tf.keras.layers.Reshape((1, 4, self.args.base_channels * 7))(g)
- g = AddNoise(self.args.datatype, name="00n")(g)
- g = self.adain(g, inp4, name="00ai")
- g = tf.keras.activations.swish(g)
-
- if not self.args.small:
- g1 = self.conv_util_gen(
- g,
- self.args.base_channels * 6,
- kernel_size=(1, 4),
- strides=(1, 2),
- upsample=True,
- noise=True,
- emb=inp4,
- name="0",
- )
- g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1
- g1 = self.conv_util_gen(
- g1,
- self.args.base_channels * 6,
- kernel_size=(1, 4),
- strides=(1, 1),
- upsample=False,
- noise=True,
- emb=inp4,
- name="1",
- )
- g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1
- g1 = g1 + tf.keras.layers.Conv2D(
- g1.shape[-1],
- kernel_size=(1, 1),
- strides=1,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=True,
- name="res1c",
- )(self.pixel_shuffle(g))
- else:
- g1 = self.conv_util_gen(
- g,
- self.args.base_channels * 6,
- kernel_size=(1, 1),
- strides=(1, 1),
- upsample=False,
- noise=True,
- emb=inp4,
- name="0_small",
- )
- g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1
- g1 = self.conv_util_gen(
- g1,
- self.args.base_channels * 6,
- kernel_size=(1, 1),
- strides=(1, 1),
- upsample=False,
- noise=True,
- emb=inp4,
- name="1_small",
- )
- g1 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g1
- g1 = g1 + tf.keras.layers.Conv2D(
- g1.shape[-1],
- kernel_size=(1, 1),
- strides=1,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=True,
- name="res1c_small",
- )(g)
-
- g2 = self.conv_util_gen(
- g1,
- self.args.base_channels * 5,
- kernel_size=(1, 4),
- strides=(1, 2),
- upsample=True,
- noise=True,
- emb=inp3,
- name="2",
- )
- g2 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g2
- g2 = self.conv_util_gen(
- g2,
- self.args.base_channels * 5,
- kernel_size=(1, 4),
- strides=(1, 1),
- upsample=False,
- noise=True,
- emb=inp3,
- name="3",
- )
- g2 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g2
- g2 = g2 + tf.keras.layers.Conv2D(
- g2.shape[-1],
- kernel_size=(1, 1),
- strides=1,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=True,
- name="res2c",
- )(self.pixel_shuffle(g1))
-
- g3 = self.conv_util_gen(
- g2,
- self.args.base_channels * 4,
- kernel_size=(1, 4),
- strides=(1, 2),
- upsample=True,
- noise=True,
- emb=inp2,
- name="4",
- )
- g3 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g3
- g3 = self.conv_util_gen(
- g3,
- self.args.base_channels * 4,
- kernel_size=(1, 4),
- strides=(1, 1),
- upsample=False,
- noise=True,
- emb=inp2,
- name="5",
- )
- g3 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g3
- g3 = g3 + tf.keras.layers.Conv2D(
- g3.shape[-1],
- kernel_size=(1, 1),
- strides=1,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=True,
- name="res3c",
- )(self.pixel_shuffle(g2))
-
- g4 = self.conv_util_gen(
- g3,
- self.args.base_channels * 3,
- kernel_size=(1, 4),
- strides=(1, 2),
- upsample=True,
- noise=True,
- emb=inp1,
- name="6",
- )
- g4 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g4
- g4 = self.conv_util_gen(
- g4,
- self.args.base_channels * 3,
- kernel_size=(1, 4),
- strides=(1, 1),
- upsample=False,
- noise=True,
- emb=inp1,
- name="7",
- )
- g4 = tf.math.sqrt(tf.cast(0.5, self.args.datatype)) * g4
- g4 = g4 + tf.keras.layers.Conv2D(
- g4.shape[-1],
- kernel_size=(1, 1),
- strides=1,
- activation="linear",
- padding="same",
- kernel_initializer=self.init,
- use_bias=True,
- name="res4c",
- )(self.pixel_shuffle(g3))
-
- g5 = self.conv_util_gen(
- g4,
- self.args.base_channels * 2,
- kernel_size=(1, 4),
- strides=(1, 2),
- upsample=True,
- noise=True,
- emb=tf.expand_dims(tf.cast(inpb, dtype=self.args.datatype), -3),
- name="8",
- )
-
- gf = tf.keras.layers.Conv2D(
- dim, kernel_size=(1, 4), strides=(1, 1), kernel_initializer=self.init, padding="same", name="9c"
- )(g5)
-
- gfls = tf.split(gf, 2, 0)
- gf = tf.concat(gfls, -2)
-
- gf = tf.cast(gf, tf.float32)
-
- return tf.keras.Model(inpf, gf, name="GEN")
-
- # Load previously saved models from path to resume training or to run testing
- def load(self, path, load_dec=False):
- gen = self.build_generator()
- critic = self.build_critic()
- enc = self.build_encoder()
- dec = self.build_decoder()
- enc2 = self.build_encoder2()
- dec2 = self.build_decoder2()
- gen_ema = self.build_generator()
-
- switch = tf.Variable(-1.0, dtype=tf.float32)
-
- if self.args.mixed_precision:
- opt_disc = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5))
- opt_dec = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5))
- else:
- opt_disc = tf.keras.optimizers.Adam(0.0001, 0.9)
- opt_dec = tf.keras.optimizers.Adam(0.0001, 0.9)
-
- if load_dec:
- dec.load_weights(self.args.dec_path + "/dec.h5")
- dec2.load_weights(self.args.dec_path + "/dec2.h5")
- enc.load_weights(self.args.dec_path + "/enc.h5")
- enc2.load_weights(self.args.dec_path + "/enc2.h5")
-
- else:
- grad_vars = critic.trainable_weights
- zero_grads = [tf.zeros_like(w) for w in grad_vars]
- opt_disc.apply_gradients(zip(zero_grads, grad_vars))
-
- grad_vars = gen.trainable_variables
- zero_grads = [tf.zeros_like(w) for w in grad_vars]
- opt_dec.apply_gradients(zip(zero_grads, grad_vars))
-
- if not self.args.testing:
- opt_disc.set_weights(np.load(path + "/opt_disc.npy", allow_pickle=True))
- opt_dec.set_weights(np.load(path + "/opt_dec.npy", allow_pickle=True))
- critic.load_weights(path + "/critic.h5")
- gen.load_weights(path + "/gen.h5")
- switch = tf.Variable(float(np.load(path + "/switch.npy", allow_pickle=True)), dtype=tf.float32)
-
- gen_ema.load_weights(path + "/gen_ema.h5")
- dec.load_weights(self.args.dec_path + "/dec.h5")
- dec2.load_weights(self.args.dec_path + "/dec2.h5")
- enc.load_weights(self.args.dec_path + "/enc.h5")
- enc2.load_weights(self.args.dec_path + "/enc2.h5")
-
- return (
- critic,
- gen,
- enc,
- dec,
- enc2,
- dec2,
- gen_ema,
- [opt_dec, opt_disc],
- switch,
- )
-
- def build(self):
- gen = self.build_generator()
- critic = self.build_critic()
- enc = self.build_encoder()
- dec = self.build_decoder()
- enc2 = self.build_encoder2()
- dec2 = self.build_decoder2()
- gen_ema = self.build_generator()
-
- switch = tf.Variable(-1.0, dtype=tf.float32)
-
- gen_ema = tf.keras.models.clone_model(gen)
- gen_ema.set_weights(gen.get_weights())
-
- if self.args.mixed_precision:
- opt_disc = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5))
- opt_dec = self.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam(0.0001, 0.5))
- else:
- opt_disc = tf.keras.optimizers.Adam(0.0001, 0.5)
- opt_dec = tf.keras.optimizers.Adam(0.0001, 0.5)
-
- return (
- critic,
- gen,
- enc,
- dec,
- enc2,
- dec2,
- gen_ema,
- [opt_dec, opt_disc],
- switch,
- )
-
- def get_networks(self):
- (
- critic,
- gen,
- enc,
- dec,
- enc2,
- dec2,
- gen_ema_1,
- [opt_dec, opt_disc],
- switch,
- ) = self.load(self.args.load_path_1, load_dec=False)
- print(f"Networks loaded from {self.args.load_path_1}")
-
- (
- critic,
- gen,
- enc,
- dec,
- enc2,
- dec2,
- gen_ema_2,
- [opt_dec, opt_disc],
- switch,
- ) = self.load(self.args.load_path_2, load_dec=False)
- print(f"Networks loaded from {self.args.load_path_2}")
-
- (
- critic,
- gen,
- enc,
- dec,
- enc2,
- dec2,
- gen_ema_3,
- [opt_dec, opt_disc],
- switch,
- ) = self.load(self.args.load_path_3, load_dec=False)
- print(f"Networks loaded from {self.args.load_path_3}")
-
- return (
- (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch),
- (critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, opt_disc], switch),
- (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch),
- )
-
- def initialize_networks(self):
-
- (
- (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch),
- (critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, opt_disc], switch),
- (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch),
- ) = self.get_networks()
-
- print(f"Critic params: {count_params(critic.trainable_variables)}")
- print(f"Generator params: {count_params(gen.trainable_variables)}")
-
- return (
- (critic, gen, enc, dec, enc2, dec2, gen_ema_1, [opt_dec, opt_disc], switch),
- (critic, gen, enc, dec, enc2, dec2, gen_ema_2, [opt_dec, opt_disc], switch),
- (critic, gen, enc, dec, enc2, dec2, gen_ema_3, [opt_dec, opt_disc], switch),
- )
diff --git a/spaces/matthoffner/starchat-ui/utils/app/importExport.ts b/spaces/matthoffner/starchat-ui/utils/app/importExport.ts
deleted file mode 100644
index 0fe677d566cdc904a30b215a16095a26e8c6cb77..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/starchat-ui/utils/app/importExport.ts
+++ /dev/null
@@ -1,164 +0,0 @@
-import { Conversation } from '@/types/chat';
-import {
- ExportFormatV1,
- ExportFormatV2,
- ExportFormatV3,
- ExportFormatV4,
- LatestExportFormat,
- SupportedExportFormats,
-} from '@/types/export';
-import { FolderInterface } from '@/types/folder';
-import { Prompt } from '@/types/prompt';
-
-import { cleanConversationHistory } from './clean';
-
-export function isExportFormatV1(obj: any): obj is ExportFormatV1 {
- return Array.isArray(obj);
-}
-
-export function isExportFormatV2(obj: any): obj is ExportFormatV2 {
- return !('version' in obj) && 'folders' in obj && 'history' in obj;
-}
-
-export function isExportFormatV3(obj: any): obj is ExportFormatV3 {
- return obj.version === 3;
-}
-
-export function isExportFormatV4(obj: any): obj is ExportFormatV4 {
- return obj.version === 4;
-}
-
-export const isLatestExportFormat = isExportFormatV4;
-
-export function cleanData(data: SupportedExportFormats): LatestExportFormat {
- if (isExportFormatV1(data)) {
- return {
- version: 4,
- history: cleanConversationHistory(data),
- folders: [],
- prompts: [],
- };
- }
-
- if (isExportFormatV2(data)) {
- return {
- version: 4,
- history: cleanConversationHistory(data.history || []),
- folders: (data.folders || []).map((chatFolder) => ({
- id: chatFolder.id.toString(),
- name: chatFolder.name,
- type: 'chat',
- })),
- prompts: [],
- };
- }
-
- if (isExportFormatV3(data)) {
- return { ...data, version: 4, prompts: [] };
- }
-
- if (isExportFormatV4(data)) {
- return data;
- }
-
- throw new Error('Unsupported data format');
-}
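-
-// Usage sketch (assumption): normalize any supported export format before importing, e.g.
-//   const parsed = JSON.parse(fileContents) as SupportedExportFormats;
-//   const { history, folders, prompts } = cleanData(parsed);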
-
-function currentDate() {
- const date = new Date();
- const month = date.getMonth() + 1;
- const day = date.getDate();
- return `${month}-${day}`;
-}
-
-export const exportData = () => {
- let history = localStorage.getItem('conversationHistory');
- let folders = localStorage.getItem('folders');
- let prompts = localStorage.getItem('prompts');
-
- if (history) {
- history = JSON.parse(history);
- }
-
- if (folders) {
- folders = JSON.parse(folders);
- }
-
- if (prompts) {
- prompts = JSON.parse(prompts);
- }
-
- const data = {
- version: 4,
- history: history || [],
- folders: folders || [],
- prompts: prompts || [],
- } as LatestExportFormat;
-
- const blob = new Blob([JSON.stringify(data, null, 2)], {
- type: 'application/json',
- });
- const url = URL.createObjectURL(blob);
- const link = document.createElement('a');
- link.download = `chatbot_ui_history_${currentDate()}.json`;
- link.href = url;
- link.style.display = 'none';
- document.body.appendChild(link);
- link.click();
- document.body.removeChild(link);
- URL.revokeObjectURL(url);
-};
-
-export const importData = (
- data: SupportedExportFormats,
-): LatestExportFormat => {
- const { history, folders, prompts } = cleanData(data);
-
- const oldConversations = localStorage.getItem('conversationHistory');
- const oldConversationsParsed = oldConversations
- ? JSON.parse(oldConversations)
- : [];
-
- const newHistory: Conversation[] = [
- ...oldConversationsParsed,
- ...history,
- ].filter(
- (conversation, index, self) =>
- index === self.findIndex((c) => c.id === conversation.id),
- );
- localStorage.setItem('conversationHistory', JSON.stringify(newHistory));
- if (newHistory.length > 0) {
- localStorage.setItem(
- 'selectedConversation',
- JSON.stringify(newHistory[newHistory.length - 1]),
- );
- } else {
- localStorage.removeItem('selectedConversation');
- }
-
- const oldFolders = localStorage.getItem('folders');
- const oldFoldersParsed = oldFolders ? JSON.parse(oldFolders) : [];
- const newFolders: FolderInterface[] = [
- ...oldFoldersParsed,
- ...folders,
- ].filter(
- (folder, index, self) =>
- index === self.findIndex((f) => f.id === folder.id),
- );
- localStorage.setItem('folders', JSON.stringify(newFolders));
-
- const oldPrompts = localStorage.getItem('prompts');
- const oldPromptsParsed = oldPrompts ? JSON.parse(oldPrompts) : [];
- const newPrompts: Prompt[] = [...oldPromptsParsed, ...prompts].filter(
- (prompt, index, self) =>
- index === self.findIndex((p) => p.id === prompt.id),
- );
- localStorage.setItem('prompts', JSON.stringify(newPrompts));
-
- return {
- version: 4,
- history: newHistory,
- folders: newFolders,
- prompts: newPrompts,
- };
-};
diff --git a/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js b/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js
deleted file mode 100644
index ee7c8a4f14939e8d09185fd47b2b43c8e3c37b11..0000000000000000000000000000000000000000
--- a/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js
+++ /dev/null
@@ -1,200 +0,0 @@
-/* Copyright 2021 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-console.clear()
-
-window.init = function(){
- var initFns = [window.initUtil, window.initScatter, window.initPair]
- if (!initFns.every(d => d)) return
-
- window.util = initUtil()
-
- function parseTidy(csvStr, sentences){
- var tidy = d3.csvParse(csvStr, d => {
- return {
- e0: +d.e0,
- e1: +d.e1,
- i0: +d.i0,
- i1: +d.i1,
- tokenIndex: +d.tokenIndex,
- sentenceIndex: +d.sentenceIndex,
- }
- })
-
- var bySentence = d3.nestBy(tidy, d => d.sentenceIndex)
- bySentence.forEach(sent => {
- sent.sentenceIndex = +sent.key
- sent.s0 = sentences[sent.sentenceIndex].s0
- sent.s1 = sentences[sent.sentenceIndex].s1
- sent.orig = sentences[sent.sentenceIndex].orig
-
- sent.corr = ss.sampleCorrelation(
- sent.map(d => Math.min(d.i0, 300)),
- sent.map(d => Math.min(d.i1, 300))
- )
- // sent.corr = ss.sampleCorrelation(sent.map(d => d.e0), sent.map(d => d.e1))
- })
-
- return bySentence
- }
-
- var bySentenceA = parseTidy(python_data.tidyCSV_A, python_data.sentences_A)
- var bySentenceB = parseTidy(python_data.tidyCSV_B, python_data.sentences_B)
- var bySentence = bySentenceA.map((a, i) => {
- var b = bySentenceB[i]
- var orig = a.orig
- .replace('in 1918, ', '')
- .replace('in texas, ', '')
- .replace('in texas, ', '')
-
- return {a, b, orig}
- })
-
- var sel = d3.select('.container').html(`
- <div class='scatter'></div>
- <div class='list'></div>
- <div class='pair-a'></div>
- <div class='pair-b'></div>
- <div class='pair-ab'></div>
- `)
- .st({width: 1400})
- d3.selectAll('.list,.scatter').st({width: 430, display: 'inline-block', verticalAlign: 'top'})
-
- d3.selectAll('.pair-a,.pair-b,.pair-ab').st({width: 400, display: 'inline-block', verticalAlign: 'top'})
-
- function initScatter(bySentence, sel){
- var c = d3.conventions({
- sel: sel.st({width: 350}),
- width: 300,
- height: 300,
- margin: {left: 40, top: 17, bottom: 60}
- })
-
- var domain = d3.extent(bySentence.map(d => d.a.corr).concat(bySentence.map(d => d.b.corr)))
-
-
- c.x.domain(domain).nice()
- c.y.domain(domain).nice()
- c.xAxis.ticks(5)
- c.yAxis.ticks(5)
- d3.drawAxis(c)
- c.svg.selectAll('.tick').st({display: 'block'})
-
- util.ggPlotBg(c)
- util.addAxisLabel(c,
- python_data.slug_A + ' coefficients (avg ' + util.corrFmt(d3.mean(bySentence, d => d.a.corr)) + ')',
- python_data.slug_B + ' coefficients (avg ' + util.corrFmt(d3.mean(bySentence, d => d.b.corr)) + ')',
- )
-
-
- c.svg.append('path').at({d: `M 0 ${c.height} L ${c.width} 0`, stroke: '#fff', strokeWidth: 2})
-
- c.svg.appendMany('circle.sentence', bySentence)
- .translate(d => [c.x(d.a.corr), c.y(d.b.corr)])
- .at({
- r: 3,
- fill: 'none',
- stroke: '#000'
- })
- .on('mouseover', setSentenceAsPair)
- }
- initScatter(bySentence, d3.select('.scatter'))
-
-
- function initList(bySentence, sel){
- var tableSel = sel
- .st({height: 300 + 17, overflowY: 'scroll', cursor: 'default', position: 'relative'})
- .append('table')
- .st({fontSize: 12})
-
- tableSel.append('tr.header')
- .html(`
- <td>${python_data.slug_A}</td>
- <td>${python_data.slug_B}</td>
- <td>template</td>
- `)
-
- var rowSel = tableSel
- .appendMany('tr.sentence', _.sortBy(bySentence, d => d.a.corr))
- .on('mouseover', setSentenceAsPair)
- .st({padding: 2, fontSize: 12})
- .html(d => `
- <td>${util.corrFmt(d.a.corr)}</td>
- <td>${util.corrFmt(d.b.corr)}</td>
- <td>${d.orig.replace('[', '').replace(']', '')}</td>
- `)
-
- }
- initList(bySentence, d3.select('.list'))
-
-
- function setSentenceAsPair(s){
- function drawScatter(type){
- var st = s
- if (type.length == 2){
- st.e0 = s.a.e0.map((e0, i) => e0 - s.a.e1[i])
- st.e1 = s.b.e0.map((e0, i) => e0 - s.b.e1[i])
-
- st.label0 = python_data.slug_A + ' dif'
- st.label1 = python_data.slug_B + ' dif'
- st.isDifference = false
- st.count = (python_settings.count || 150)*2
- } else {
- st = s[type]
- st.e0 = d3.range(python_data.vocab.length).map(d => -Infinity)
- st.e1 = d3.range(python_data.vocab.length).map(d => -Infinity)
- st.forEach(d => {
- st.e0[d.tokenIndex] = d.e0
- st.e1[d.tokenIndex] = d.e1
- })
-
- st.label0 = st.s0
- st.label1 = st.s1
-
- st.isDifference = python_settings.isDifference
- st.count = python_settings.count || 150
-
- st.topLabel = type == 'a' ? python_data.slug_A : python_data.slug_B
- }
-
- st.vocab = python_data.vocab
-
- var sel = d3.select('.pair-' + type).html('').st({width: 400, marginRight: 40})
- initPair(st, sel.append('div'))
- }
- drawScatter('b')
- drawScatter('a')
- drawScatter('ab')
-
- d3.selectAll('.sentence').classed('active', d => d == s)
-
- d3.selectAll('tr.sentence').filter(d => d == s)
- .each(function(){
- this.scrollIntoView({ block: 'nearest', inline: 'nearest'})
- })
- }
- setSentenceAsPair(bySentence[0])
-
-}
-
-
-
-window.init()
-
diff --git a/spaces/mikeee/llama-2-70b-guanaco-qlora-ggml/examples_list.py b/spaces/mikeee/llama-2-70b-guanaco-qlora-ggml/examples_list.py
deleted file mode 100644
index 4b41817bed9029ecdadc470b55bc4d4362f4b237..0000000000000000000000000000000000000000
--- a/spaces/mikeee/llama-2-70b-guanaco-qlora-ggml/examples_list.py
+++ /dev/null
@@ -1,43 +0,0 @@
-etext = """In America, where cars are an important part of the national psyche, a decade ago people had suddenly started to drive less, which had not happened since the oil shocks of the 1970s. """
-examples_list = [
- ["What NFL team won the Super Bowl in the year Justin Bieber was born?"],
- [
- "What NFL team won the Super Bowl in the year Justin Bieber was born? Think step by step."
- ],
- ["How to pick a lock? Provide detailed steps."],
- [
- "If it takes 10 hours to dry 10 clothes, assuming all the clothes are hung together at the same time for drying , then how long will it take to dry a cloth?"
- ],
- [
- "If it takes 10 hours to dry 10 clothes, assuming all the clothes are hung together at the same time for drying , then how long will it take to dry a cloth? Think step by step."
- ],
- ["is infinity + 1 bigger than infinity?"],
- ["Explain the plot of Cinderella in a sentence."],
- [
- "How long does it take to become proficient in French, and what are the best methods for retaining information?"
- ],
- ["What are some common mistakes to avoid when writing code?"],
- ["Build a prompt to generate a beautiful portrait of a horse"],
- ["Suggest four metaphors to describe the benefits of AI"],
- ["Write a pop song about leaving home for the sandy beaches."],
- ["Write a summary demonstrating my ability to tame lions"],
- ["鲁迅和周树人什么关系? 说中文。"],
- ["鲁迅和周树人什么关系?"],
- ["鲁迅和周树人什么关系? 用英文回答。"],
- ["从前有一头牛,这头牛后面有什么?"],
- ["正无穷大加一大于正无穷大吗?"],
- ["正无穷大加正无穷大大于正无穷大吗?"],
- ["-2的平方根等于什么?"],
- ["树上有5只鸟,猎人开枪打死了一只。树上还有几只鸟?"],
- ["树上有11只鸟,猎人开枪打死了一只。树上还有几只鸟?提示:需考虑鸟可能受惊吓飞走。"],
- ["以红楼梦的行文风格写一张委婉的请假条。不少于320字。"],
- [f"Translate ths following to Chinese. List 2 variants: \n{etext}"],
- [f"{etext} 翻成中文,列出3个版本。"],
- [f"{etext} \n 翻成中文,保留原意,但使用文学性的语言。不要写解释。列出3个版本。"],
- ["假定 1 + 2 = 4, 试求 7 + 8。"],
- ["给出判断一个数是不是质数的 javascript 码。"],
- ["给出实现python 里 range(10)的 javascript 码。"],
- ["给出实现python 里 [*(range(10)]的 javascript 码。"],
- ["Erkläre die Handlung von Cinderella in einem Satz."],
- ["Erkläre die Handlung von Cinderella in einem Satz. Auf Deutsch."],
-]
diff --git a/spaces/ml6team/logo-generator/dalle/utils/utils.py b/spaces/ml6team/logo-generator/dalle/utils/utils.py
deleted file mode 100644
index c8f417241efd585b642492c0ca63af1aa4bab817..0000000000000000000000000000000000000000
--- a/spaces/ml6team/logo-generator/dalle/utils/utils.py
+++ /dev/null
@@ -1,84 +0,0 @@
-# ------------------------------------------------------------------------------------
-# Minimal DALL-E
-# Copyright (c) 2021 KakaoBrain. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------------------
-
-import os
-import random
-import urllib
-import hashlib
-import tarfile
-import torch
-import clip
-import numpy as np
-from PIL import Image
-from torch.nn import functional as F
-from tqdm import tqdm
-
-
-def set_seed(seed: int):
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
-
-
-@torch.no_grad()
-def clip_score(prompt: str,
- images: np.ndarray,
- model_clip: torch.nn.Module,
- preprocess_clip,
- device: str) -> np.ndarray:
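- """Rank the generated images by CLIP image-text cosine similarity with the prompt;
- returns the image indices sorted from most to least similar."""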
- images = [preprocess_clip(Image.fromarray((image*255).astype(np.uint8))) for image in images]
- images = torch.stack(images, dim=0).to(device=device)
- texts = clip.tokenize(prompt).to(device=device)
- texts = torch.repeat_interleave(texts, images.shape[0], dim=0)
-
- image_features = model_clip.encode_image(images)
- text_features = model_clip.encode_text(texts)
-
- scores = F.cosine_similarity(image_features, text_features).squeeze()
- rank = torch.argsort(scores, descending=True).cpu().numpy()
- return rank
-
-
-def download(url: str, root: str) -> str:
- os.makedirs(root, exist_ok=True)
- filename = os.path.basename(url)
- pathname = filename[:-len('.tar.gz')]
-
- expected_md5 = url.split("/")[-2]
- download_target = os.path.join(root, filename)
- result_path = os.path.join(root, pathname)
-
- #if os.path.isfile(download_target) and (os.path.exists(result_path) and not os.path.isfile(result_path)):
- #return result_path
-
- with urllib.request.urlopen(url) as source, open(download_target, 'wb') as output:
- with tqdm(total=int(source.info().get('Content-Length')), ncols=80, unit='iB', unit_scale=True,
- unit_divisor=1024) as loop:
- while True:
- buffer = source.read(8192)
- if not buffer:
- break
-
- output.write(buffer)
- loop.update(len(buffer))
-
- # if hashlib.md5(open(download_target, 'rb').read()).hexdigest() != expected_md5:
- # raise RuntimeError(f'Model has been downloaded but the md5 checksum does not match')
-
- with tarfile.open(download_target, 'r:gz') as f:
- pbar = tqdm(f.getmembers(), total=len(f.getmembers()))
- for member in pbar:
- pbar.set_description(f'extracting: {member.name} (size:{member.size // (1024 * 1024)}MB)')
- f.extract(member=member, path=root)
-
- return result_path
-
-
-def realpath_url_or_path(url_or_path: str, root: str = None) -> str:
- if urllib.parse.urlparse(url_or_path).scheme in ('http', 'https'):
- return download(url_or_path, root)
- return url_or_path
diff --git a/spaces/monra/freegpt-webui/server/babel.py b/spaces/monra/freegpt-webui/server/babel.py
deleted file mode 100644
index 94407e4b4d3e82e7722cac409a7e311bb46c43be..0000000000000000000000000000000000000000
--- a/spaces/monra/freegpt-webui/server/babel.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-import subprocess
-from flask import request, session, jsonify
-from flask_babel import Babel
-
-
-def get_languages_from_dir(directory):
- """Return a list of directory names in the given directory."""
- return [name for name in os.listdir(directory)
- if os.path.isdir(os.path.join(directory, name))]
-
-
-BABEL_DEFAULT_LOCALE = 'en_US'
-BABEL_LANGUAGES = get_languages_from_dir('translations')
-
-
-def create_babel(app):
- """Create and initialize a Babel instance with the given Flask app."""
- babel = Babel(app)
- app.config['BABEL_DEFAULT_LOCALE'] = BABEL_DEFAULT_LOCALE
- app.config['BABEL_LANGUAGES'] = BABEL_LANGUAGES
-
- babel.init_app(app, locale_selector=get_locale)
- compile_translations()
-
-
-def get_locale():
- """Get the user's locale from the session or the request's accepted languages."""
- return session.get('language') or request.accept_languages.best_match(BABEL_LANGUAGES)
-
-
-def get_languages():
- """Return a list of available languages in JSON format."""
- return jsonify(BABEL_LANGUAGES)
-
-
-def compile_translations():
- """Compile the translation files."""
- result = subprocess.run(
- ['pybabel', 'compile', '-d', 'translations'],
- stdout=subprocess.PIPE,
- )
-
- if result.returncode != 0:
- raise Exception(
- f'Compiling translations failed:\n{result.stdout.decode()}')
-
- print('Translations compiled successfully')
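-
-
-# Usage sketch (assumption): call create_babel once when the Flask app is constructed, e.g.
-#   app = Flask(__name__)
-#   create_babel(app)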
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh b/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh
deleted file mode 100644
index c2edcefede2da3b6a991b9c8fbc78c96d46d27cb..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh
+++ /dev/null
@@ -1,35 +0,0 @@
-#!/usr/bin/env bash
-
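-# Usage sketch: local/prepare_lm.sh [--langdir DIR] [--lmdir DIR] <arpa_lm[.gz]> <data_dir>
-# Builds $lmdir by copying $langdir and compiling the ARPA LM into $lmdir/G.fst.
-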
-langdir=""
-lmdir=""
-
-. ./cmd.sh
-. ./path.sh
-. parse_options.sh
-
-arpa_lm=$1
-data=$2
-
-if [ -z "$langdir" ]; then
- langdir=$data/lang
-fi
-if [ -z "$lmdir" ]; then
- lmdir=$data/lang_test
-fi
-
-if [ ! -d "$langdir" ]; then
- echo "$langdir not found. run local/prepare_lang.sh first" && exit 1
-fi
-
-mkdir -p $lmdir
-cp -r $langdir/* $lmdir
-
-if [[ "$arpa_lm" == *.gz ]]; then
- gunzip -c $arpa_lm | arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt - $lmdir/G.fst
-else
- arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt $arpa_lm $lmdir/G.fst
-fi
-fstisstochastic $lmdir/G.fst
-utils/validate_lang.pl $lmdir || exit 1
-
-echo "done preparing lm ($lmdir)"
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/mean_pool.py b/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/mean_pool.py
deleted file mode 100644
index 4eea048ef3455cb3c897e74c18778c78fdc9fcbf..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/mean_pool.py
+++ /dev/null
@@ -1,99 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-import os.path as osp
-import math
-import numpy as np
-import tqdm
-import torch
-import torch.nn.functional as F
-from shutil import copyfile
-
-from npy_append_array import NpyAppendArray
-
-
-def get_parser():
- parser = argparse.ArgumentParser(
- description="mean pools representations by compressing uniform splits of the data"
- )
- # fmt: off
- parser.add_argument('source', help='directory with features')
- parser.add_argument('--split', help='which split to read', required=True)
- parser.add_argument('--save-dir', help='where to save the output', required=True)
- parser.add_argument('--subsample-rate', type=float, default=0.5, help='fraction of the original length to subsample the data to')
-
- parser.add_argument('--remove-extra', action='store_true', help="if true, removes extra states that can't be pooled, otherwise pads with 0s")
- # fmt: on
-
- return parser
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- source_path = osp.join(args.source, args.split)
-
- print(f"data path: {source_path}")
-
- features = np.load(source_path + ".npy", mmap_mode="r")
-
- os.makedirs(args.save_dir, exist_ok=True)
- save_path = osp.join(args.save_dir, args.split)
-
- copyfile(source_path + ".tsv", save_path + ".tsv")
-
- if os.path.exists(source_path + ".phn"):
- copyfile(source_path + ".phn", save_path + ".phn")
- if os.path.exists(source_path + ".wrd"):
- copyfile(source_path + ".wrd", save_path + ".wrd")
-
- if os.path.exists(osp.join(args.source, "dict.phn.txt")):
- copyfile(
- osp.join(args.source, "dict.phn.txt"),
- osp.join(args.save_dir, "dict.phn.txt"),
- )
-
- if osp.exists(save_path + ".npy"):
- os.remove(save_path + ".npy")
- npaa = NpyAppendArray(save_path + ".npy")
-
- with open(source_path + ".lengths", "r") as lf:
- lengths = lf.readlines()
-
- fsz = features.shape[-1]
- start = 0
- with torch.no_grad():
- with open(save_path + ".lengths", "w") as lengths_out:
- for length in tqdm.tqdm(lengths):
- length = int(length)
- end = start + length
- feats = features[start:end]
- start += length
- x = torch.from_numpy(feats).cuda()
- target_num = math.ceil(length * args.subsample_rate)
- rem = length % target_num
-
- if rem > 0:
- if args.remove_extra:
- to_rem = target_num - rem
- target_num -= 1
- x = x[:-to_rem]
- else:
- to_add = target_num - rem
- x = F.pad(x, [0, 0, 0, to_add])
- x[-to_add:] = x[-to_add - 1]
-
- x = x.view(target_num, -1, fsz)
- x = x.mean(dim=-2)
- print(target_num, file=lengths_out)
- npaa.append(x.cpu().numpy())
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/same_pad.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/same_pad.py
deleted file mode 100644
index 4c04990ea6fdb291f162ee8ac3d17a92483daf8e..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/same_pad.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-from torch import nn
-
-
-class SamePad(nn.Module):
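-    """Removes the extra trailing timesteps introduced by "same" padding in a
-    Conv1d: one step for even kernel sizes, kernel_size - 1 steps when causal.
-    Expects input of shape (batch, channels, time)."""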
- def __init__(self, kernel_size, causal=False):
- super().__init__()
- if causal:
- self.remove = kernel_size - 1
- else:
- self.remove = 1 if kernel_size % 2 == 0 else 0
-
- def forward(self, x):
- if self.remove > 0:
- x = x[:, :, : -self.remove]
- return x
diff --git a/spaces/mthsk/sovits-100orangejuice/modules/mel_processing.py b/spaces/mthsk/sovits-100orangejuice/modules/mel_processing.py
deleted file mode 100644
index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000
--- a/spaces/mthsk/sovits-100orangejuice/modules/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import torch
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
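-# Mel filterbanks and Hann windows are cached per (dtype, device, fmax/win_size)
-# key so they are only constructed and moved to the target device once.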
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
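-    # center=False is used below, so reflection padding of (n_fft - hop_size) / 2
-    # samples is applied manually on each side before the STFT.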
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/vis.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/vis.py
deleted file mode 100644
index c2910b4ef8c61efee72dabd0531a9b669ec8bf98..0000000000000000000000000000000000000000
--- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/vis.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import numpy as np
-from skimage import io
-from skimage.segmentation import mark_boundaries
-
-
-def save_item_for_vis(item, out_file):
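-    # Overlay the mask boundary (red with a white outline) on the input image
-    # and, when an inpainted result is present, show both side by side.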
- mask = item['mask'] > 0.5
- if mask.ndim == 3:
- mask = mask[0]
- img = mark_boundaries(np.transpose(item['image'], (1, 2, 0)),
- mask,
- color=(1., 0., 0.),
- outline_color=(1., 1., 1.),
- mode='thick')
-
- if 'inpainted' in item:
- inp_img = mark_boundaries(np.transpose(item['inpainted'], (1, 2, 0)),
- mask,
- color=(1., 0., 0.),
- mode='outer')
- img = np.concatenate((img, inp_img), axis=1)
-
- img = np.clip(img * 255, 0, 255).astype('uint8')
- io.imsave(out_file, img)
-
-
-def save_mask_for_sidebyside(item, out_file):
-    mask = item['mask']  # > 0.5
- if mask.ndim == 3:
- mask = mask[0]
- mask = np.clip(mask * 255, 0, 255).astype('uint8')
- io.imsave(out_file, mask)
-
-def save_img_for_sidebyside(item, out_file):
- img = np.transpose(item['image'], (1, 2, 0))
- img = np.clip(img * 255, 0, 255).astype('uint8')
- io.imsave(out_file, img)
\ No newline at end of file
diff --git a/spaces/najimino/video/stub/replicate.py b/spaces/najimino/video/stub/replicate.py
deleted file mode 100644
index 25c806694e99ec93681ced3ca5e67e237fb87d0a..0000000000000000000000000000000000000000
--- a/spaces/najimino/video/stub/replicate.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# stub/replicate.py
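-# Offline stand-in for replicate.run: instead of calling the Replicate API,
-# it maps a keyword in the prompt to a pre-rendered video URL.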
-def run(model_path, input):
- print("Stub called for replicate.run with model_path and input")
-
- prompt = input["prompt"]
- if "Introduction".lower() in prompt.lower():
- url = "https://replicate.delivery/pbxt/sLBtHnGDMVK7AV5x24dl29lp9pQnbsfcuMbusXcJEl9kG8rIA/out.mp4"
- elif "Development".lower() in prompt.lower():
- url = "https://replicate.delivery/pbxt/QVgesepHS7pvZE9UWs2SPDMsEeCMrteelrmuegvvr7iITDerIA/out.mp4"
- elif "Climax".lower() in prompt.lower():
- url = "https://replicate.delivery/pbxt/H1bJ3dp0r95OM5XPXoK6gfABF9vOCsFT7gxH0I4ceg75N4XRA/out.mp4"
- elif "Resolution".lower() in prompt.lower():
- url = "https://replicate.delivery/pbxt/qNdKneAbNaRtdK6pZnoAO17JCJfD5neffTw193F1XXUkvBfVE/out.mp4"
- else:
- url = "https://replicate.delivery/pbxt/cgT0Aef4haodP04HybKaOrsHOQKkYcV8mpzGj7WHx3eFMuviA/out.mp4"
- return url
diff --git a/spaces/nesticot/pp_roundup/README.md b/spaces/nesticot/pp_roundup/README.md
deleted file mode 100644
index 9db837e9fc52ed0afc4b582cc8d350baff34aadf..0000000000000000000000000000000000000000
--- a/spaces/nesticot/pp_roundup/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: PP Roundup
-emoji: 🅿🅿
-colorFrom: red
-colorTo: purple
-sdk: docker
-pinned: false
-license: mit
----
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/34bubblegumsandcandiespdffree __TOP__38.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/34bubblegumsandcandiespdffree __TOP__38.md
deleted file mode 100644
index e7db75cf838d29ef772da4e1340f7204c310b3b2..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/34bubblegumsandcandiespdffree __TOP__38.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-How to Download 34 Bubblegums and Candies PDF for Free
-34 Bubblegums and Candies is a popular book by Preeti Shenoy, one of the best-selling authors in India. The book is a collection of short, real-life stories that are humorous, engaging and inspiring. The book uses the analogy of bubblegums and candies to describe the various facets of life that go by unnoticed.
-34bubblegumsandcandiespdffree38
Download Zip »»» https://urlcod.com/2uIbFC
-If you are looking for a way to download 34 Bubblegums and Candies PDF for free, you have come to the right place. In this article, we will show you how to get the book in PDF format without paying anything. Here are the steps you need to follow:
-
-- Go to this link, which will take you to a website that offers free ebooks in various formats.
-- Scroll down to the bottom of the page and click on the button that says "Download Read Online".
-- You will be redirected to another page where you will see a list of formats available for the book. Choose "EPUB" as the format.
-- Click on the button that says "Download EPUB" and wait for a few seconds until the download starts.
-- Once the download is complete, you will have the book in EPUB format on your device. You can use any EPUB reader app to open and read the book.
-- If you want to convert the EPUB file to PDF, you can use an online converter tool like this one. Just upload the EPUB file, choose PDF as the output format, enter your email address and click on "Convert". You will receive an email with a link to download the PDF file.
-
-That's it! You have successfully downloaded 34 Bubblegums and Candies PDF for free. Enjoy reading this amazing book by Preeti Shenoy and share your thoughts with us in the comments section below.
-
-Why You Should Read 34 Bubblegums and Candies
-34 Bubblegums and Candies is not just a book, but a journey through the author's life. Preeti Shenoy shares her personal experiences, insights and wisdom in a candid and humorous way. The book is divided into four sections: Parenting, Life, Love and Laughter, and Be the Best You Can Be. Each section contains stories that relate to the theme and offer valuable lessons for the readers.
-The book is suitable for readers of all ages and backgrounds, as it touches upon universal topics such as family, friendship, love, loss, happiness, success, failure, dreams and goals. The book also celebrates the diversity of India and its culture, as the author narrates stories from different parts of the country and different walks of life.
-
-One of the best things about 34 Bubblegums and Candies is that it makes you think and reflect on your own life. The stories are short and easy to read, but they leave a lasting impression on your mind. The book also encourages you to look at the positive side of things and to love a little stronger, no matter what happens.
-If you are looking for a book that will make you laugh, cry, smile and inspire you, then 34 Bubblegums and Candies is the perfect choice for you. Download it today for free and enjoy this wonderful book by Preeti Shenoy.
7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Como Elaborar Projetos De Pesquisa Antonio Carlos Gil Pdf !!LINK!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Como Elaborar Projetos De Pesquisa Antonio Carlos Gil Pdf !!LINK!!.md
deleted file mode 100644
index a8f1527a606fcd91bae89269278b4e3d848fb82d..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Como Elaborar Projetos De Pesquisa Antonio Carlos Gil Pdf !!LINK!!.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-Como Elaborar Projetos de Pesquisa: A Review of Antonio Carlos Gil's Book
-The book Como Elaborar Projetos de Pesquisa, by Antonio Carlos Gil, is a reference work for students and researchers who want to plan and carry out their scientific work. The author presents the basic concepts and the steps needed to put together a research project, from defining the topic to writing the final text.
-In this review, we highlight the main points the author addresses in each chapter of the book, as well as the strengths and limitations of his methodological proposal. The goal is to offer a general and critical overview of the book's content, which may be useful for anyone looking for guidance on how to put together a research project.
-como elaborar projetos de pesquisa antonio carlos gil pdf
Download Zip ––– https://urlcod.com/2uIc7F
-Chapter 1: What is research?
-In the first chapter, the author defines what research is and what its goals are. He states that research is the "formal and systematic process of developing the scientific method" [^2^], which aims to discover new knowledge or confirm existing knowledge. Research can be classified into different types according to its purpose (basic or applied), its level of abstraction (theoretical or empirical), its degree of generalization (qualitative or quantitative), and its method of reasoning (inductive or deductive).
-The author emphasizes that research is an essential activity for the advancement of science and society, since it contributes to solving practical problems and to broadening human knowledge. He also stresses that research demands a critical and creative attitude from the researcher, who must follow the ethical principles and technical standards of their field of study.
-
-Chapter 2: How to choose the research topic?
-In the second chapter, the author explains how to choose the research topic, taking into account the internal and external factors that influence this decision. Internal factors are those related to the researcher, such as their academic background, personal interests, available time and resources, and so on. External factors are those related to the social context, such as the topic's relevance to science and society, the availability of information sources and funding, and so on.
-The author suggests some criteria for judging whether a topic is suitable for research, such as originality, timeliness, feasibility, delimitation, and specificity. He also recommends some strategies for defining the research topic, such as a literature review, consulting specialists, observing reality, and formulating questions and hypotheses.
-Chapter 3: How to construct the research problem?
-In the third chapter, the author shows how to construct the research problem, which is the central question that guides the scientific work. He states that the problem should be phrased as a clear, precise, and objective question that can be answered through empirical or logical evidence. The problem should be derived from the research topic and be consistent with the researcher's objectives and hypotheses.
-The author presents some requirements for formulating a good research problem, such as theoretical or practical relevance and originality or novelty
7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Juonthegrudgepcgamedownload [REPACK]free.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Juonthegrudgepcgamedownload [REPACK]free.md
deleted file mode 100644
index 7b433c3970ac11146106aa855eee06abddadf212..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Juonthegrudgepcgamedownload [REPACK]free.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-How to Download Ju-On: The Grudge for Free and Play on PC
-Ju-On: The Grudge is a popular steam game based on the Japanese horror film of the same name. It is an eerie tale of a family who is brutally killed in their own home, leaving behind an evil spirit lurking in the shadows. When an unknowing homecare worker enters, the spirit is awakened and a terrifying chain of events begins.
-juonthegrudgepcgamedownloadfree
Download ->>->>->> https://urlcod.com/2uI9Rk
-If you are a fan of horror games and movies, you might want to try Ju-On: The Grudge for free and play on PC. Here are the steps to do so:
-
-- Download GameLoop emulator from this link. GameLoop is a platform that allows you to play mobile and steam games on PC with high performance and graphics.
-- Install GameLoop on your PC and launch it.
-- Search for Ju-On: The Grudge in the GameLoop store and click on the 'Get' button. This will redirect you to the GameDeal website, where you can find the best deals and discounts for steam games.
-- Choose the best deal for Ju-On: The Grudge and click on 'Buy Now'. You will need to create an account or log in with your existing one to proceed with the purchase.
-- After completing the payment, you will receive a steam key for Ju-On: The Grudge in your email. Copy the key and go back to GameLoop.
-- Click on 'My Games' tab and then on 'Activate a Product on Steam'. Paste the key and follow the instructions to add Ju-On: The Grudge to your steam library.
-- Once the game is added, you can download and install it on your PC through GameLoop.
-- Enjoy playing Ju-On: The Grudge for free on PC!
-
-Ju-On: The Grudge is a game that will keep you on the edge of your seat with its creepy atmosphere, jump scares, and immersive gameplay. You can also check out the steam community page for Ju-On: The Grudge here to see screenshots, videos, reviews, and discussions about the game. If you are looking for more horror games to play on PC, you can browse through GameSpot's list of horror games for PC.
-
-But how does Ju-On: The Grudge fare as a horror game? Does it manage to scare you or bore you? The answer depends on your expectations and tolerance for cheap jump scares. The game tries to recreate the atmosphere and style of the movies, with dark environments, creepy sound effects, and sudden appearances of the vengeful ghosts. However, the game relies too much on these clichéd tricks and fails to create a genuine sense of dread or suspense. The game is also very short, with only five levels that can be completed in less than an hour each. The levels are also very repetitive, with similar objectives and enemies in each one.
-The game also suffers from poor graphics and presentation. The visuals are bland and low-quality, with blurry textures, jagged edges, and stiff animations. The character models are especially ugly and unexpressive, making it hard to care about their fate. The game also has a lot of technical issues, such as glitches, bugs, and crashes. The sound design is decent, with some eerie noises and voice acting, but it is not enough to save the game from its many flaws.
-Ju-On: The Grudge is a game that should be avoided by anyone looking for a good horror game or a faithful adaptation of the movies. It is a poorly made, shallow, and boring game that wastes its potential and fails to deliver any scares or fun. The game has received mostly negative reviews from critics and gamers alike[^2^] [^4^] [^5^], who have criticized its lack of gameplay, poor controls, outdated graphics, short length, and lack of replay value. The game is not worth your time or money, unless you are a die-hard fan of the movies or a masochist who enjoys bad games.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Manual De Usuario Jetta 2008 Pdf 12.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Manual De Usuario Jetta 2008 Pdf 12.md
deleted file mode 100644
index bb641ce6768726f744ad9a1ec8323fa48eca7f6c..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Manual De Usuario Jetta 2008 Pdf 12.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-Manual De Usuario Jetta 2008 Pdf 12: How to Download and View the Owner's Manual for Your Car
-If you own a 2008 Volkswagen Jetta, you may want to download and view the owner's manual for your car. The owner's manual contains important information about your car's features, functions, maintenance, and safety. You can find the owner's manual for your car online in PDF format, which you can view on your computer or mobile device.
-Manual De Usuario Jetta 2008 Pdf 12
Download Zip ✸✸✸ https://urlcod.com/2uIcnT
-To download and view the owner's manual for your 2008 Volkswagen Jetta, follow these steps:
-
-- Go to https://manual-directory.com/manual/2008-volkswagen-jetta-owners-manual/, which is one of the websites that offer the owner's manual for your car[^1^]. Alternatively, you can go to https://www.vwmanuals.org/2008-volkswagen-jetta/, which is another website that offers the same service[^2^].
-- On the website, you will see an image of the cover of the owner's manual. Below the image, you will see a button that says "view full screen". Click on this button to open the owner's manual in a new tab.
-- On the new tab, you will see the owner's manual in PDF format. You can scroll through the pages using the arrows at the bottom of the screen. You can also zoom in or out using the buttons at the top right corner of the screen.
-- If you want to download the owner's manual to your computer or mobile device, you can click on the button that says "download" at the top right corner of the screen. You will be prompted to choose a location to save the file. Once you have chosen a location, click on "save" to complete the download.
-- Once you have downloaded the owner's manual, you can view it anytime using a PDF reader application. You can also print it out if you prefer a hard copy.
-
-By following these steps, you can download and view the owner's manual for your 2008 Volkswagen Jetta. The owner's manual will help you learn more about your car and how to use it properly.
Here are a few more paragraphs with HTML formatting for the keyword "Manual De Usuario Jetta 2008 Pdf 12":
-
-Not only is the 2008 Volkswagen Jetta a luxurious and fun car, but it is also a safe and reliable one. The Jetta has a four-star rating for frontal crash protection and a five-star rating for side impact protection from the National Highway Traffic Safety Administration. It also has a four-star rating for rollover resistance. The Jetta comes with standard safety features such as anti-lock brakes, traction control, stability control, front-seat side airbags, full-length head curtain airbags, and a tire pressure monitoring system. The Jetta also has a post-collision safety system that automatically unlocks the doors and turns on the hazard lights after an impact[^1^].
-The 2008 Volkswagen Jetta offers a variety of features and options to suit different tastes and needs. The Jetta has three trim levels: S, SE, and SEL. The base S trim comes with 16-inch steel wheels, air-conditioning, cruise control, power windows and mirrors, keyless entry, cloth seats, and an eight-speaker CD/MP3 stereo with an auxiliary input jack. The SE trim adds 16-inch alloy wheels, a sunroof, leatherette upholstery, heated front seats, a leather-wrapped steering wheel and shift knob, a 10-speaker sound system with a six-CD changer and satellite radio, and a trip computer. The SEL trim adds 17-inch alloy wheels, foglights, sport seats, a multifunction steering wheel with audio controls, automatic dual-zone climate control, and an optional navigation system[^2^].
-
-The 2008 Volkswagen Jetta has two engine options: a 2.5-liter five-cylinder engine that produces 170 horsepower and 177 pound-feet of torque, and a 2.0-liter turbocharged four-cylinder engine that delivers 200 horsepower and 207 pound-feet of torque. The five-cylinder engine is available with either a five-speed manual or a six-speed automatic transmission with manual shift mode. The turbocharged engine is available with either a six-speed manual or a six-speed automated manual transmission (also known as DSG) that can operate as either an automatic or a clutchless manual. The five-cylinder engine achieves an EPA-estimated fuel economy of 21 mpg city and 29 mpg highway with the automatic transmission, while the turbocharged engine gets 22 mpg city and 29 mpg highway with the DSG transmission[^3^] [^4^] [^5^].
7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Second Life Copybot Viewer Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Second Life Copybot Viewer Download.md
deleted file mode 100644
index 883606e368c77a92cc5c3a1e1d8a49e92d6ddd18..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Second Life Copybot Viewer Download.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-How to Download and Use Second Life Copybot Viewers
-Second Life Copybot Viewers are third-party programs that allow you to copy, modify, and export items from the virtual world of Second Life. They are often used for creating backups, testing, or transferring content between accounts. However, they can also be used for malicious purposes, such as stealing or griefing other users' creations. Therefore, using copybot viewers is against the Second Life Terms of Service and can result in account suspension or termination.
-second life copybot viewer download
DOWNLOAD ---> https://urlcod.com/2uI9Oi
-If you still want to download and use Second Life Copybot Viewers, you should do so at your own risk and with caution. Here are some steps to follow:
-
-- Find a reliable source for downloading copybot viewers. There are many websites and forums that offer free or paid copybot viewers, such as Second Life Copybot, but some of them may contain viruses, malware, or spyware that can harm your computer or compromise your account. You should always scan any downloaded files with an antivirus program before running them.
-- Choose a copybot viewer that suits your needs and preferences. There are different types of copybot viewers, such as Firestorm Professional, Darkstorm Viewer, HydraStorm Copybot Viewer, Legacy Viewers, etc. Each one has its own features, advantages, and disadvantages. You should read the descriptions and reviews of each viewer before downloading them. Some viewers may require additional tools or plugins to work properly.
-- Install and run the copybot viewer. Follow the instructions provided by the source website or forum to install and run the copybot viewer. You may need to create a new account or use an alt account to log in to Second Life with the copybot viewer, as using your main account may expose you to detection and ban. You should also change the settings of the viewer to spoof your MAC address, HDD serial number, channel name, and version number to avoid being tracked by Linden Lab or other users.
-- Use the copybot viewer to copy, modify, and export items. Once you are logged in to Second Life with the copybot viewer, you can use its features to copy, modify, and export items from the virtual world. You can use tools such as XML import/export, DAE/OBJ export, texture ripper, particle explorer, animation ripper, etc. to manipulate items in various ways. However, you should be careful not to copy or export items that are protected by intellectual property rights or that belong to other users without their permission.
-- Enjoy your copied items offline or on other grids. After you have copied or exported items from Second Life with the copybot viewer, you can enjoy them offline or on other grids that allow them. You can use programs such as Blender or Photoshop to edit them further or create new content from them. You can also upload them to other grids that support OpenSimulator or similar platforms.
-
-However, you should be aware of the risks and consequences of using copybot viewers. They are illegal, unethical, and potentially dangerous. They can damage your computer system, compromise your account security, violate other users' rights, and ruin your reputation in Second Life. They can also get you banned from Second Life or sued by Linden Lab or other content creators. Therefore, you should use them wisely and responsibly.
7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sleigh Bells Ep 2009 Zip.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sleigh Bells Ep 2009 Zip.md
deleted file mode 100644
index b0d894f01b27af27e66ce3fb33a363b1c8c7a6a7..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sleigh Bells Ep 2009 Zip.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-How to Download Sleigh Bells Ep 2009 Zip for Free
-Sleigh Bells is an experimental noise-pop duo from Brooklyn, New York. They released their self-titled debut EP in 2009, which featured seven tracks of distorted guitars, heavy beats, and catchy vocals. The EP was originally available for free download on their website, but it has since been taken down. However, you can still find some copies of the EP online if you know where to look.
-Sleigh Bells Ep 2009 Zip
Download File »»» https://urlcod.com/2uIbxl
-In this article, we will show you how to download Sleigh Bells Ep 2009 Zip for free using a torrent client. A torrent client is a software that allows you to download files from other users who have the same file. You will need a torrent client such as uTorrent or BitTorrent to download the EP.
-Here are the steps to download Sleigh Bells Ep 2009 Zip for free:
-
-- Go to this link [^2^] and register for an account on Rutracker.org, a Russian torrent tracker site. You will need to verify your email address and agree to the terms of service.
-- Once you are logged in, scroll down to the bottom of the page and click on the magnet link that says "Download СкаÑаÑÑ ÑаздаÑÑ Ð¿Ð¾ magnet-ÑÑÑлке". This will open your torrent client and start downloading the EP.
-- Wait for the download to finish. The EP is about 31.4 MB in size and has a variable bitrate of 192-320 kbps. The file name is "Sleigh Bells - 2HELLWU (EP) - 2009".
-- Enjoy listening to Sleigh Bells Ep 2009 Zip! You can also check out their other releases, such as their albums Treats (2010), Reign of Terror (2012), Bitter Rivals (2013), and Jessica Rabbit (2016).
-
-We hope this article was helpful for you. If you liked Sleigh Bells Ep 2009 Zip, please share it with your friends and leave a comment below. Thank you for reading!
-
-Who are Sleigh Bells?
-Sleigh Bells is an American musical duo based in Brooklyn, New York, formed in 2008 and consisting of vocalist Alexis Krauss and guitarist/producer Derek E. Miller. They became known for their overdriven style of noise pop, which incorporates elements from various genres including pop, hip hop, metal, and punk. [^1^]
-
-Miller and Krauss met by chance at a Brazilian restaurant in Williamsburg, where Miller was looking for a female vocalist for his demos. Krauss, who had a background in theater and teen pop, agreed to listen to his music and decided to join him. They named their band Sleigh Bells after the phrase Miller used to label his CD-Rs. [^1^]
-They released their self-titled debut EP in 2009, which featured seven tracks of distorted guitars, heavy beats, and catchy vocals. The EP was originally available for free download on their website, but it has since been taken down. However, you can still find some copies of the EP online if you know where to look. [^2^]
-
-What are Sleigh Bells' influences and achievements?
-Sleigh Bells have cited various influences for their music, such as Def Leppard, Madonna, Sonic Youth, Beastie Boys, Daft Punk, and Lil Jon. They have also been influenced by their own experiences in different musical scenes, such as post-hardcore, pop, and hip hop. [^1^]
-They have received critical acclaim for their albums Treats (2010), Reign of Terror (2012), Bitter Rivals (2013), and Jessica Rabbit (2016). They have also performed at various festivals and venues around the world, such as Coachella, Lollapalooza, Glastonbury, and Madison Square Garden. They have collaborated with artists such as M.I.A., Beyoncé, Tinashe, and Run the Jewels. [^1^]
-They have recently released their fifth album, Texis, in September 2021. The album showcases their signature sound of noise pop with some new twists and turns. The album has received positive reviews from critics and fans alike. [^3^]
-
-Why should you listen to Sleigh Bells Ep 2009 Zip?
-Sleigh Bells Ep 2009 Zip is a rare gem that showcases the origins of Sleigh Bells' unique style of music. It is a short but powerful EP that will make you want to dance and rock out at the same time. It is also a testament to their DIY spirit and creativity.
-If you are a fan of Sleigh Bells or noise pop in general, you should definitely listen to Sleigh Bells Ep 2009 Zip. It is a fun and exciting EP that will make you feel like dynamite. It is also a great way to support an independent and innovative band that has influenced many others.
-So what are you waiting for? Follow the steps above and download Sleigh Bells Ep 2009 Zip for free today! You won't regret it!
7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/new4u/whisper_large_v2_Audio_YT_to_text/README.md b/spaces/new4u/whisper_large_v2_Audio_YT_to_text/README.md
deleted file mode 100644
index 889db59a0f74606f66ef6dce77c3bfa782d62c17..0000000000000000000000000000000000000000
--- a/spaces/new4u/whisper_large_v2_Audio_YT_to_text/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Audio-to-Text Playground
-emoji: 🤫
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-tags:
-- whisper-event
-duplicated_from: new4u/Audio-to-Text_Playground_large_v2
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ngaggion/Chest-x-ray-HybridGNet-Segmentation/README.md b/spaces/ngaggion/Chest-x-ray-HybridGNet-Segmentation/README.md
deleted file mode 100644
index af3c147aa4985c78e6a57af9c420e19fb483242f..0000000000000000000000000000000000000000
--- a/spaces/ngaggion/Chest-x-ray-HybridGNet-Segmentation/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Chest x-ray HybridGNet Segmentation
-emoji: ⚡
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Demo of the HybridGNet model with 2 image-to-graph skip connections from: arxiv.org/abs/2203.10977
-Original HybridGNet model: arxiv.org/abs/2106.09832
-The training procedure was taken from: arxiv.org/abs/2211.07395
diff --git a/spaces/niallguerin/iris/README.md b/spaces/niallguerin/iris/README.md
deleted file mode 100644
index 711cc596d9953429b85d17151d2b193254b992a3..0000000000000000000000000000000000000000
--- a/spaces/niallguerin/iris/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Iris
-emoji: 🏢
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.8.2
-app_file: app.py
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/model/trainer.py b/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/model/trainer.py
deleted file mode 100644
index bd0d73bd885cd046d8f400a7c954700fd06c0416..0000000000000000000000000000000000000000
--- a/spaces/nickil/weakly-supervised-parsing/weakly_supervised_parser/model/trainer.py
+++ /dev/null
@@ -1,150 +0,0 @@
-import os
-import torch
-import datasets
-import numpy as np
-import pandas as pd
-
-os.environ["TOKENIZERS_PARALLELISM"] = "false"
-
-from pytorch_lightning import Trainer, seed_everything
-from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint
-from transformers import AutoTokenizer, logging
-
-from onnxruntime import InferenceSession
-from scipy.special import softmax
-
-from weakly_supervised_parser.model.data_module_loader import DataModule
-from weakly_supervised_parser.model.span_classifier import LightningModel
-
-
-# Disable model checkpoint warnings
-logging.set_verbosity_error()
-
-
-class InsideOutsideStringClassifier:
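-    """Binary inside/outside span classifier: fine-tunes a Hugging Face encoder
-    with PyTorch Lightning, exports the best checkpoint to ONNX, and runs
-    inference through onnxruntime."""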
- def __init__(self, model_name_or_path: str, num_labels: int = 2, max_seq_length: int = 256):
-
- self.model_name_or_path = model_name_or_path
- self.num_labels = num_labels
- self.max_seq_length = max_seq_length
-
- def fit(
- self,
- train_df: pd.DataFrame,
- eval_df: pd.DataFrame,
- outputdir: str,
- filename: str,
- devices: int = 1,
- enable_progress_bar: bool = True,
- enable_model_summary: bool = False,
- enable_checkpointing: bool = True,
- logger: bool = False,
- accelerator: str = "auto",
- train_batch_size: int = 32,
- eval_batch_size: int = 32,
- learning_rate: float = 5e-6,
- max_epochs: int = 10,
- dataloader_num_workers: int = 16,
- seed: int = 42,
- ):
-
- data_module = DataModule(
- model_name_or_path=self.model_name_or_path,
- train_df=train_df,
- eval_df=eval_df,
- test_df=None,
- max_seq_length=self.max_seq_length,
- train_batch_size=train_batch_size,
- eval_batch_size=eval_batch_size,
- num_workers=dataloader_num_workers,
- )
-
- model = LightningModel(
- model_name_or_path=self.model_name_or_path,
- lr=learning_rate,
- num_labels=self.num_labels,
- train_batch_size=train_batch_size,
- eval_batch_size=eval_batch_size,
- )
-
- seed_everything(seed, workers=True)
-
- callbacks = []
- callbacks.append(EarlyStopping(monitor="val_loss", patience=2, mode="min", check_finite=True))
- callbacks.append(ModelCheckpoint(monitor="val_loss", dirpath=outputdir, filename=filename, save_top_k=1, save_weights_only=True, mode="min"))
-
- trainer = Trainer(
- accelerator=accelerator,
- devices=devices,
- max_epochs=max_epochs,
- callbacks=callbacks,
- enable_progress_bar=enable_progress_bar,
- enable_model_summary=enable_model_summary,
- enable_checkpointing=enable_checkpointing,
- logger=logger,
- )
- trainer.fit(model, data_module)
- trainer.validate(model, data_module.val_dataloader())
-
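-        # Export the trained model to ONNX, using one real training batch as the
-        # example input; dynamic axes keep the inference batch size flexible.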
- train_batch = next(iter(data_module.train_dataloader()))
-
- model.to_onnx(
- file_path=f"{outputdir}/{filename}.onnx",
- input_sample=(train_batch["input_ids"].cuda(), train_batch["attention_mask"].cuda()),
- export_params=True,
- opset_version=11,
- input_names=["input", "attention_mask"],
- output_names=["output"],
- dynamic_axes={"input": {0: "batch_size"}, "attention_mask": {0: "batch_size"}, "output": {0: "batch_size"}},
- )
-
- def load_model(self, pre_trained_model_path):
- self.model = InferenceSession(pre_trained_model_path, providers=["CPUExecutionProvider"]) #providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
- self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, use_fast=True)
-
- def preprocess_function(self, data):
- features = self.tokenizer(
- data["sentence"], max_length=self.max_seq_length, padding="max_length", add_special_tokens=True, truncation=True, return_tensors="np"
- )
- return features
-
- def process_spans(self, spans, scale_axis):
- spans_dataset = datasets.Dataset.from_pandas(spans)
- processed = spans_dataset.map(self.preprocess_function, batched=True, batch_size=None)
- inputs = {"input": processed["input_ids"], "attention_mask": processed["attention_mask"]}
- with torch.no_grad():
- return softmax(self.model.run(None, inputs)[0], axis=scale_axis)
-
- def predict_proba(self, spans, scale_axis, predict_batch_size):
- if spans.shape[0] > predict_batch_size:
- output = []
- span_batches = np.array_split(spans, spans.shape[0] // predict_batch_size)
- for span_batch in span_batches:
- output.extend(self.process_spans(span_batch, scale_axis))
- return np.vstack(output)
- else:
- return self.process_spans(spans, scale_axis)
-
-    def predict(self, spans, scale_axis=1, predict_batch_size=128):
-        # predict_proba requires scale_axis and predict_batch_size; the defaults
-        # here are assumptions and should mirror whatever the calling code uses.
-        return self.predict_proba(spans, scale_axis, predict_batch_size).argmax(axis=1)
-
-
-class InsideOutsideStringPredictor:
-
- def __init__(self, model_name_or_path, max_seq_length, pre_trained_model_path, num_workers=32):
- self.model_name_or_path = model_name_or_path
- self.pre_trained_model_path = pre_trained_model_path
- self.max_seq_length = max_seq_length
- self.num_workers = num_workers
-
-    def predict_proba(self, test_df):
-        data_module = DataModule(
-            model_name_or_path=self.model_name_or_path,
-            train_df=None,
-            eval_df=None,
-            test_df=test_df,
-            max_seq_length=self.max_seq_length,
-            num_workers=self.num_workers,
-        )
-
-        # NOTE: `trainer` and `model` are not defined in this class; a
-        # pytorch_lightning Trainer and a LightningModel restored from
-        # self.pre_trained_model_path must be in scope before calling this.
-        return trainer.predict(model, datamodule=data_module)
\ No newline at end of file
diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/js__ACKBODgqkqm3IeeG7I3ksElltBkIgta4E1dg20PbNik__bqxLkjLTIWd_.js b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/js__ACKBODgqkqm3IeeG7I3ksElltBkIgta4E1dg20PbNik__bqxLkjLTIWd_.js
deleted file mode 100644
index e0ae075ac5760023e61d5b827e6b4001bad17b09..0000000000000000000000000000000000000000
--- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/js__ACKBODgqkqm3IeeG7I3ksElltBkIgta4E1dg20PbNik__bqxLkjLTIWd_.js
+++ /dev/null
@@ -1,6 +0,0 @@
-/*! highlight.js v9.1.0 | BSD3 License | git.io/hljslicense */
-!function(e){"undefined"!=typeof exports?e(exports):(self.hljs=e({}),"function"==typeof define&&define.amd&&define("hljs",[],function(){return self.hljs}))}(function(e){function n(e){return e.replace(/&/gm,"&").replace(//gm,">")}function t(e){return e.nodeName.toLowerCase()}function r(e,n){var t=e&&e.exec(n);return t&&0==t.index}function a(e){return/^(no-?highlight|plain|text)$/i.test(e)}function i(e){var n,t,r,i=e.className+" ";if(i+=e.parentNode?e.parentNode.className:"",t=/\blang(?:uage)?-([\w-]+)\b/i.exec(i))return E(t[1])?t[1]:"no-highlight";for(i=i.split(/\s+/),n=0,r=i.length;r>n;n++)if(E(i[n])||a(i[n]))return i[n]}function o(e,n){var t,r={};for(t in e)r[t]=e[t];if(n)for(t in n)r[t]=n[t];return r}function u(e){var n=[];return function r(e,a){for(var i=e.firstChild;i;i=i.nextSibling)3==i.nodeType?a+=i.nodeValue.length:1==i.nodeType&&(n.push({event:"start",offset:a,node:i}),a=r(i,a),t(i).match(/br|hr|img|input/)||n.push({event:"stop",offset:a,node:i}));return a}(e,0),n}function c(e,r,a){function i(){return e.length&&r.length?e[0].offset!=r[0].offset?e[0].offset"}function u(e){l+=""+t(e)+">"}function c(e){("start"==e.event?o:u)(e.node)}for(var s=0,l="",f=[];e.length||r.length;){var g=i();if(l+=n(a.substr(s,g[0].offset-s)),s=g[0].offset,g==e){f.reverse().forEach(u);do c(g.splice(0,1)[0]),g=i();while(g==e&&g.length&&g[0].offset==s);f.reverse().forEach(o)}else"start"==g[0].event?f.push(g[0].node):f.pop(),c(g.splice(0,1)[0])}return l+n(a.substr(s))}function s(e){function n(e){return e&&e.source||e}function t(t,r){return new RegExp(n(t),"m"+(e.cI?"i":"")+(r?"g":""))}function r(a,i){if(!a.compiled){if(a.compiled=!0,a.k=a.k||a.bK,a.k){var u={},c=function(n,t){e.cI&&(t=t.toLowerCase()),t.split(" ").forEach(function(e){var t=e.split("|");u[t[0]]=[n,t[1]?Number(t[1]):1]})};"string"==typeof a.k?c("keyword",a.k):Object.keys(a.k).forEach(function(e){c(e,a.k[e])}),a.k=u}a.lR=t(a.l||/\b\w+\b/,!0),i&&(a.bK&&(a.b="\\b("+a.bK.split(" ").join("|")+")\\b"),a.b||(a.b=/\B|\b/),a.bR=t(a.b),a.e||a.eW||(a.e=/\B|\b/),a.e&&(a.eR=t(a.e)),a.tE=n(a.e)||"",a.eW&&i.tE&&(a.tE+=(a.e?"|":"")+i.tE)),a.i&&(a.iR=t(a.i)),void 0===a.r&&(a.r=1),a.c||(a.c=[]);var s=[];a.c.forEach(function(e){e.v?e.v.forEach(function(n){s.push(o(e,n))}):s.push("self"==e?a:e)}),a.c=s,a.c.forEach(function(e){r(e,a)}),a.starts&&r(a.starts,i);var l=a.c.map(function(e){return e.bK?"\\.?("+e.b+")\\.?":e.b}).concat([a.tE,a.i]).map(n).filter(Boolean);a.t=l.length?t(l.join("|"),!0):{exec:function(){return null}}}}r(e)}function l(e,t,a,i){function o(e,n){for(var t=0;t";return i+=e+'">',i+n+o}function p(){if(!L.k)return n(M);var e="",t=0;L.lR.lastIndex=0;for(var r=L.lR.exec(M);r;){e+=n(M.substr(t,r.index-t));var a=g(L,r);a?(B+=a[1],e+=h(a[0],n(r[0]))):e+=n(r[0]),t=L.lR.lastIndex,r=L.lR.exec(M)}return e+n(M.substr(t))}function d(){var e="string"==typeof L.sL;if(e&&!R[L.sL])return n(M);var t=e?l(L.sL,M,!0,y[L.sL]):f(M,L.sL.length?L.sL:void 0);return L.r>0&&(B+=t.r),e&&(y[L.sL]=t.top),h(t.language,t.value,!1,!0)}function b(){return void 0!==L.sL?d():p()}function v(e,t){var r=e.cN?h(e.cN,"",!0):"";e.rB?(k+=r,M=""):e.eB?(k+=n(t)+r,M=""):(k+=r,M=t),L=Object.create(e,{parent:{value:L}})}function m(e,t){if(M+=e,void 0===t)return k+=b(),0;var r=o(t,L);if(r)return k+=b(),v(r,t),r.rB?0:t.length;var a=u(L,t);if(a){var i=L;i.rE||i.eE||(M+=t),k+=b();do L.cN&&(k+=""),B+=L.r,L=L.parent;while(L!=a.parent);return i.eE&&(k+=n(t)),M="",a.starts&&v(a.starts,""),i.rE?0:t.length}if(c(t,L))throw new Error('Illegal lexeme "'+t+'" for mode "'+(L.cN||"")+'"');return 
M+=t,t.length||1}var N=E(e);if(!N)throw new Error('Unknown language: "'+e+'"');s(N);var w,L=i||N,y={},k="";for(w=L;w!=N;w=w.parent)w.cN&&(k=h(w.cN,"",!0)+k);var M="",B=0;try{for(var C,j,I=0;;){if(L.t.lastIndex=I,C=L.t.exec(t),!C)break;j=m(t.substr(I,C.index-I),C[0]),I=C.index+j}for(m(t.substr(I)),w=L;w.parent;w=w.parent)w.cN&&(k+="");return{r:B,value:k,language:e,top:L}}catch(O){if(-1!=O.message.indexOf("Illegal"))return{r:0,value:n(t)};throw O}}function f(e,t){t=t||x.languages||Object.keys(R);var r={r:0,value:n(e)},a=r;return t.forEach(function(n){if(E(n)){var t=l(n,e,!1);t.language=n,t.r>a.r&&(a=t),t.r>r.r&&(a=r,r=t)}}),a.language&&(r.second_best=a),r}function g(e){return x.tabReplace&&(e=e.replace(/^((<[^>]+>|\t)+)/gm,function(e,n){return n.replace(/\t/g,x.tabReplace)})),x.useBR&&(e=e.replace(/\n/g,"
")),e}function h(e,n,t){var r=n?w[n]:t,a=[e.trim()];return e.match(/\bhljs\b/)||a.push("hljs"),-1===e.indexOf(r)&&a.push(r),a.join(" ").trim()}function p(e){var n=i(e);if(!a(n)){var t;x.useBR?(t=document.createElementNS("http://www.w3.org/1999/xhtml","div"),t.innerHTML=e.innerHTML.replace(/\n/g,"").replace(/
/g,"\n")):t=e;var r=t.textContent,o=n?l(n,r,!0):f(r),s=u(t);if(s.length){var p=document.createElementNS("http://www.w3.org/1999/xhtml","div");p.innerHTML=o.value,o.value=c(s,u(p),r)}o.value=g(o.value),e.innerHTML=o.value,e.className=h(e.className,n,o.language),e.result={language:o.language,re:o.r},o.second_best&&(e.second_best={language:o.second_best.language,re:o.second_best.r})}}function d(e){x=o(x,e)}function b(){if(!b.called){b.called=!0;var e=document.querySelectorAll("pre code");Array.prototype.forEach.call(e,p)}}function v(){addEventListener("DOMContentLoaded",b,!1),addEventListener("load",b,!1)}function m(n,t){var r=R[n]=t(e);r.aliases&&r.aliases.forEach(function(e){w[e]=n})}function N(){return Object.keys(R)}function E(e){return e=(e||"").toLowerCase(),R[e]||R[w[e]]}var x={classPrefix:"hljs-",tabReplace:null,useBR:!1,languages:void 0},R={},w={};return e.highlight=l,e.highlightAuto=f,e.fixMarkup=g,e.highlightBlock=p,e.configure=d,e.initHighlighting=b,e.initHighlightingOnLoad=v,e.registerLanguage=m,e.listLanguages=N,e.getLanguage=E,e.inherit=o,e.IR="[a-zA-Z]\\w*",e.UIR="[a-zA-Z_]\\w*",e.NR="\\b\\d+(\\.\\d+)?",e.CNR="(-?)(\\b0[xX][a-fA-F0-9]+|(\\b\\d+(\\.\\d*)?|\\.\\d+)([eE][-+]?\\d+)?)",e.BNR="\\b(0b[01]+)",e.RSR="!|!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\?|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~",e.BE={b:"\\\\[\\s\\S]",r:0},e.ASM={cN:"string",b:"'",e:"'",i:"\\n",c:[e.BE]},e.QSM={cN:"string",b:'"',e:'"',i:"\\n",c:[e.BE]},e.PWM={b:/\b(a|an|the|are|I|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|like)\b/},e.C=function(n,t,r){var a=e.inherit({cN:"comment",b:n,e:t,c:[]},r||{});return a.c.push(e.PWM),a.c.push({cN:"doctag",b:"(?:TODO|FIXME|NOTE|BUG|XXX):",r:0}),a},e.CLCM=e.C("//","$"),e.CBCM=e.C("/\\*","\\*/"),e.HCM=e.C("#","$"),e.NM={cN:"number",b:e.NR,r:0},e.CNM={cN:"number",b:e.CNR,r:0},e.BNM={cN:"number",b:e.BNR,r:0},e.CSSNM={cN:"number",b:e.NR+"(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?",r:0},e.RM={cN:"regexp",b:/\//,e:/\/[gimuy]*/,i:/\n/,c:[e.BE,{b:/\[/,e:/\]/,r:0,c:[e.BE]}]},e.TM={cN:"title",b:e.IR,r:0},e.UTM={cN:"title",b:e.UIR,r:0},e});hljs.registerLanguage("matlab",function(e){var a=[e.CNM,{cN:"string",b:"'",e:"'",c:[e.BE,{b:"''"}]}],s={r:0,c:[{b:/'['\.]*/}]};return{k:{keyword:"break case catch classdef continue else elseif end enumerated events for function global if methods otherwise parfor persistent properties return spmd switch try while",built_in:"sin sind sinh asin asind asinh cos cosd cosh acos acosd acosh tan tand tanh atan atand atan2 atanh sec secd sech asec asecd asech csc cscd csch acsc acscd acsch cot cotd coth acot acotd acoth hypot exp expm1 log log1p log10 log2 pow2 realpow reallog realsqrt sqrt nthroot nextpow2 abs angle complex conj imag real unwrap isreal cplxpair fix floor ceil round mod rem sign airy besselj bessely besselh besseli besselk beta betainc betaln ellipj ellipke erf erfc erfcx erfinv expint gamma gammainc gammaln psi legendre cross dot factor isprime primes gcd lcm rat rats perms nchoosek factorial cart2sph cart2pol pol2cart sph2cart hsv2rgb rgb2hsv zeros ones eye repmat rand randn linspace logspace freqspace meshgrid accumarray size length ndims numel disp isempty isequal isequalwithequalnans cat reshape diag blkdiag tril triu fliplr flipud flipdim rot90 find sub2ind ind2sub bsxfun ndgrid permute ipermute shiftdim circshift squeeze isscalar isvector ans eps realmax realmin pi i 
inf nan isnan isinf isfinite j why compan gallery hadamard hankel hilb invhilb magic pascal rosser toeplitz vander wilkinson"},i:'(//|"|#|/\\*|\\s+/\\w+)',c:[{cN:"function",bK:"function",e:"$",c:[e.UTM,{cN:"params",v:[{b:"\\(",e:"\\)"},{b:"\\[",e:"\\]"}]}]},{b:/[a-zA-Z_][a-zA-Z_0-9]*'['\.]*/,rB:!0,r:0,c:[{b:/[a-zA-Z_][a-zA-Z_0-9]*/,r:0},s.c[0]]},{b:"\\[",e:"\\]",c:a,r:0,starts:s},{b:"\\{",e:/}/,c:a,r:0,starts:s},{b:/\)/,r:0,starts:s},e.C("^\\s*\\%\\{\\s*$","^\\s*\\%\\}\\s*$"),e.C("\\%","$")].concat(a)}});hljs.registerLanguage("vhdl",function(e){var r="\\d(_|\\d)*",t="[eE][-+]?"+r,o=r+"(\\."+r+")?("+t+")?",n="\\w+",i=r+"#"+n+"(\\."+n+")?#("+t+")?",a="\\b("+i+"|"+o+")";return{cI:!0,k:{keyword:"abs access after alias all and architecture array assert attribute begin block body buffer bus case component configuration constant context cover disconnect downto default else elsif end entity exit fairness file for force function generate generic group guarded if impure in inertial inout is label library linkage literal loop map mod nand new next nor not null of on open or others out package port postponed procedure process property protected pure range record register reject release rem report restrict restrict_guarantee return rol ror select sequence severity shared signal sla sll sra srl strong subtype then to transport type unaffected units until use variable vmode vprop vunit wait when while with xnor xor",built_in:"boolean bit character severity_level integer time delay_length natural positive string bit_vector file_open_kind file_open_status std_ulogic std_ulogic_vector std_logic std_logic_vector unsigned signed boolean_vector integer_vector real_vector time_vector"},i:"{",c:[e.CBCM,e.C("--","$"),e.QSM,{cN:"number",b:a,r:0},{cN:"literal",b:"'(U|X|0|1|Z|W|L|H|-)'",c:[e.BE]},{cN:"symbol",b:"'[A-Za-z](_?[A-Za-z0-9])*",c:[e.BE]}]}});hljs.registerLanguage("livecodeserver",function(e){var r={b:"\\b[gtps][A-Z]+[A-Za-z0-9_\\-]*\\b|\\$_[A-Z]+",r:0},t=[e.CBCM,e.HCM,e.C("--","$"),e.C("[^:]//","$")],a=e.inherit(e.TM,{v:[{b:"\\b_*rig[A-Z]+[A-Za-z0-9_\\-]*"},{b:"\\b_[a-z0-9\\-]+"}]}),o=e.inherit(e.TM,{b:"\\b([A-Za-z0-9_\\-]+)\\b"});return{cI:!1,k:{keyword:"$_COOKIE $_FILES $_GET $_GET_BINARY $_GET_RAW $_POST $_POST_BINARY $_POST_RAW $_SESSION $_SERVER codepoint codepoints segment segments codeunit codeunits sentence sentences trueWord trueWords paragraph after byte bytes english the until http forever descending using line real8 with seventh for stdout finally element word words fourth before black ninth sixth characters chars stderr uInt1 uInt1s uInt2 uInt2s stdin string lines relative rel any fifth items from middle mid at else of catch then third it file milliseconds seconds second secs sec int1 int1s int4 int4s internet int2 int2s normal text item last long detailed effective uInt4 uInt4s repeat end repeat URL in try into switch to words https token binfile each tenth as ticks tick system real4 by dateItems without char character ascending eighth whole dateTime numeric short first ftp integer abbreviated abbr abbrev private case while if div mod wrap and or bitAnd bitNot bitOr bitXor among not in a an within contains ends with begins the keys of keys",literal:"SIX TEN FORMFEED NINE ZERO NONE SPACE FOUR FALSE COLON CRLF PI COMMA ENDOFFILE EOF EIGHT FIVE QUOTE EMPTY ONE TRUE RETURN CR LINEFEED RIGHT BACKSLASH NULL SEVEN TAB THREE TWO six ten formfeed nine zero none space four false colon crlf pi comma endoffile eof eight five quote empty one true return cr linefeed right backslash null seven tab three 
two RIVERSION RISTATE FILE_READ_MODE FILE_WRITE_MODE FILE_WRITE_MODE DIR_WRITE_MODE FILE_READ_UMASK FILE_WRITE_UMASK DIR_READ_UMASK DIR_WRITE_UMASK",built_in:"put abs acos aliasReference annuity arrayDecode arrayEncode asin atan atan2 average avg avgDev base64Decode base64Encode baseConvert binaryDecode binaryEncode byteOffset byteToNum cachedURL cachedURLs charToNum cipherNames codepointOffset codepointProperty codepointToNum codeunitOffset commandNames compound compress constantNames cos date dateFormat decompress directories diskSpace DNSServers exp exp1 exp2 exp10 extents files flushEvents folders format functionNames geometricMean global globals hasMemory harmonicMean hostAddress hostAddressToName hostName hostNameToAddress isNumber ISOToMac itemOffset keys len length libURLErrorData libUrlFormData libURLftpCommand libURLLastHTTPHeaders libURLLastRHHeaders libUrlMultipartFormAddPart libUrlMultipartFormData libURLVersion lineOffset ln ln1 localNames log log2 log10 longFilePath lower macToISO matchChunk matchText matrixMultiply max md5Digest median merge millisec millisecs millisecond milliseconds min monthNames nativeCharToNum normalizeText num number numToByte numToChar numToCodepoint numToNativeChar offset open openfiles openProcesses openProcessIDs openSockets paragraphOffset paramCount param params peerAddress pendingMessages platform popStdDev populationStandardDeviation populationVariance popVariance processID random randomBytes replaceText result revCreateXMLTree revCreateXMLTreeFromFile revCurrentRecord revCurrentRecordIsFirst revCurrentRecordIsLast revDatabaseColumnCount revDatabaseColumnIsNull revDatabaseColumnLengths revDatabaseColumnNames revDatabaseColumnNamed revDatabaseColumnNumbered revDatabaseColumnTypes revDatabaseConnectResult revDatabaseCursors revDatabaseID revDatabaseTableNames revDatabaseType revDataFromQuery revdb_closeCursor revdb_columnbynumber revdb_columncount revdb_columnisnull revdb_columnlengths revdb_columnnames revdb_columntypes revdb_commit revdb_connect revdb_connections revdb_connectionerr revdb_currentrecord revdb_cursorconnection revdb_cursorerr revdb_cursors revdb_dbtype revdb_disconnect revdb_execute revdb_iseof revdb_isbof revdb_movefirst revdb_movelast revdb_movenext revdb_moveprev revdb_query revdb_querylist revdb_recordcount revdb_rollback revdb_tablenames revGetDatabaseDriverPath revNumberOfRecords revOpenDatabase revOpenDatabases revQueryDatabase revQueryDatabaseBlob revQueryResult revQueryIsAtStart revQueryIsAtEnd revUnixFromMacPath revXMLAttribute revXMLAttributes revXMLAttributeValues revXMLChildContents revXMLChildNames revXMLCreateTreeFromFileWithNamespaces revXMLCreateTreeWithNamespaces revXMLDataFromXPathQuery revXMLEvaluateXPath revXMLFirstChild revXMLMatchingNode revXMLNextSibling revXMLNodeContents revXMLNumberOfChildren revXMLParent revXMLPreviousSibling revXMLRootNode revXMLRPC_CreateRequest revXMLRPC_Documents revXMLRPC_Error revXMLRPC_GetHost revXMLRPC_GetMethod revXMLRPC_GetParam revXMLText revXMLRPC_Execute revXMLRPC_GetParamCount revXMLRPC_GetParamNode revXMLRPC_GetParamType revXMLRPC_GetPath revXMLRPC_GetPort revXMLRPC_GetProtocol revXMLRPC_GetRequest revXMLRPC_GetResponse revXMLRPC_GetSocket revXMLTree revXMLTrees revXMLValidateDTD revZipDescribeItem revZipEnumerateItems revZipOpenArchives round sampVariance sec secs seconds sentenceOffset sha1Digest shell shortFilePath sin specialFolderPath sqrt standardDeviation statRound stdDev sum sysError systemVersion tan tempName textDecode textEncode tick ticks time to tokenOffset 
toLower toUpper transpose truewordOffset trunc uniDecode uniEncode upper URLDecode URLEncode URLStatus uuid value variableNames variance version waitDepth weekdayNames wordOffset xsltApplyStylesheet xsltApplyStylesheetFromFile xsltLoadStylesheet xsltLoadStylesheetFromFile add breakpoint cancel clear local variable file word line folder directory URL close socket process combine constant convert create new alias folder directory decrypt delete variable word line folder directory URL dispatch divide do encrypt filter get include intersect kill libURLDownloadToFile libURLFollowHttpRedirects libURLftpUpload libURLftpUploadFile libURLresetAll libUrlSetAuthCallback libURLSetCustomHTTPHeaders libUrlSetExpect100 libURLSetFTPListCommand libURLSetFTPMode libURLSetFTPStopTime libURLSetStatusCallback load multiply socket prepare process post seek rel relative read from process rename replace require resetAll resolve revAddXMLNode revAppendXML revCloseCursor revCloseDatabase revCommitDatabase revCopyFile revCopyFolder revCopyXMLNode revDeleteFolder revDeleteXMLNode revDeleteAllXMLTrees revDeleteXMLTree revExecuteSQL revGoURL revInsertXMLNode revMoveFolder revMoveToFirstRecord revMoveToLastRecord revMoveToNextRecord revMoveToPreviousRecord revMoveToRecord revMoveXMLNode revPutIntoXMLNode revRollBackDatabase revSetDatabaseDriverPath revSetXMLAttribute revXMLRPC_AddParam revXMLRPC_DeleteAllDocuments revXMLAddDTD revXMLRPC_Free revXMLRPC_FreeAll revXMLRPC_DeleteDocument revXMLRPC_DeleteParam revXMLRPC_SetHost revXMLRPC_SetMethod revXMLRPC_SetPort revXMLRPC_SetProtocol revXMLRPC_SetSocket revZipAddItemWithData revZipAddItemWithFile revZipAddUncompressedItemWithData revZipAddUncompressedItemWithFile revZipCancel revZipCloseArchive revZipDeleteItem revZipExtractItemToFile revZipExtractItemToVariable revZipSetProgressCallback revZipRenameItem revZipReplaceItemWithData revZipReplaceItemWithFile revZipOpenArchive send set sort split start stop subtract union unload wait write"},c:[r,{cN:"keyword",b:"\\bend\\sif\\b"},{cN:"function",bK:"function",e:"$",c:[r,o,e.ASM,e.QSM,e.BNM,e.CNM,a]},{cN:"function",b:"\\bend\\s+",e:"$",k:"end",c:[o,a]},{bK:"command on",e:"$",c:[r,o,e.ASM,e.QSM,e.BNM,e.CNM,a]},{cN:"meta",v:[{b:"<\\?(rev|lc|livecode)",r:10},{b:"<\\?"},{b:"\\?>"}]},e.ASM,e.QSM,e.BNM,e.CNM,a].concat(t),i:";$|^\\[|^="}});hljs.registerLanguage("nix",function(e){var r={keyword:"rec with let in inherit assert if else then",literal:"true false or and null",built_in:"import abort baseNameOf dirOf isNull builtins map removeAttrs throw toString derivation"},t={cN:"subst",b:/\$\{/,e:/}/,k:r},i={b:/[a-zA-Z0-9-_]+(\s*=)/,rB:!0,r:0,c:[{cN:"attr",b:/\S+/}]},s={cN:"string",c:[t],v:[{b:"''",e:"''"},{b:'"',e:'"'}]},a=[e.NM,e.HCM,e.CBCM,s,i];return t.c=a,{aliases:["nixos"],k:r,c:a}});hljs.registerLanguage("haml",function(s){return{cI:!0,c:[{cN:"meta",b:"^!!!( (5|1\\.1|Strict|Frameset|Basic|Mobile|RDFa|XML\\b.*))?$",r:10},s.C("^\\s*(!=#|=#|-#|/).*$",!1,{r:0}),{b:"^\\s*(-|=|!=)(?!#)",starts:{e:"\\n",sL:"ruby"}},{cN:"tag",b:"^\\s*%",c:[{cN:"selector-tag",b:"\\w+"},{cN:"selector-id",b:"#[\\w-]+"},{cN:"selector-class",b:"\\.[\\w-]+"},{b:"{\\s*",e:"\\s*}",c:[{b:":\\w+\\s*=>",e:",\\s+",rB:!0,eW:!0,c:[{cN:"attr",b:":\\w+"},s.ASM,s.QSM,{b:"\\w+",r:0}]}]},{b:"\\(\\s*",e:"\\s*\\)",eE:!0,c:[{b:"\\w+\\s*=",e:"\\s+",rB:!0,eW:!0,c:[{cN:"attr",b:"\\w+",r:0},s.ASM,s.QSM,{b:"\\w+",r:0}]}]}]},{b:"^\\s*[=~]\\s*"},{b:"#{",starts:{e:"}",sL:"ruby"}}]}});hljs.registerLanguage("nsis",function(e){var 
t={cN:"variable",b:"\\$(ADMINTOOLS|APPDATA|CDBURN_AREA|CMDLINE|COMMONFILES32|COMMONFILES64|COMMONFILES|COOKIES|DESKTOP|DOCUMENTS|EXEDIR|EXEFILE|EXEPATH|FAVORITES|FONTS|HISTORY|HWNDPARENT|INSTDIR|INTERNET_CACHE|LANGUAGE|LOCALAPPDATA|MUSIC|NETHOOD|OUTDIR|PICTURES|PLUGINSDIR|PRINTHOOD|PROFILE|PROGRAMFILES32|PROGRAMFILES64|PROGRAMFILES|QUICKLAUNCH|RECENT|RESOURCES_LOCALIZED|RESOURCES|SENDTO|SMPROGRAMS|SMSTARTUP|STARTMENU|SYSDIR|TEMP|TEMPLATES|VIDEOS|WINDIR)"},i={cN:"variable",b:"\\$+{[a-zA-Z0-9_]+}"},n={cN:"variable",b:"\\$+[a-zA-Z0-9_]+",i:"\\(\\){}"},r={cN:"variable",b:"\\$+\\([a-zA-Z0-9_]+\\)"},l={cN:"built_in",b:"(ARCHIVE|FILE_ATTRIBUTE_ARCHIVE|FILE_ATTRIBUTE_NORMAL|FILE_ATTRIBUTE_OFFLINE|FILE_ATTRIBUTE_READONLY|FILE_ATTRIBUTE_SYSTEM|FILE_ATTRIBUTE_TEMPORARY|HKCR|HKCU|HKDD|HKEY_CLASSES_ROOT|HKEY_CURRENT_CONFIG|HKEY_CURRENT_USER|HKEY_DYN_DATA|HKEY_LOCAL_MACHINE|HKEY_PERFORMANCE_DATA|HKEY_USERS|HKLM|HKPD|HKU|IDABORT|IDCANCEL|IDIGNORE|IDNO|IDOK|IDRETRY|IDYES|MB_ABORTRETRYIGNORE|MB_DEFBUTTON1|MB_DEFBUTTON2|MB_DEFBUTTON3|MB_DEFBUTTON4|MB_ICONEXCLAMATION|MB_ICONINFORMATION|MB_ICONQUESTION|MB_ICONSTOP|MB_OK|MB_OKCANCEL|MB_RETRYCANCEL|MB_RIGHT|MB_RTLREADING|MB_SETFOREGROUND|MB_TOPMOST|MB_USERICON|MB_YESNO|NORMAL|OFFLINE|READONLY|SHCTX|SHELL_CONTEXT|SYSTEM|TEMPORARY)"},o={cN:"keyword",b:"\\!(addincludedir|addplugindir|appendfile|cd|define|delfile|echo|else|endif|error|execute|finalize|getdllversionsystem|ifdef|ifmacrodef|ifmacrondef|ifndef|if|include|insertmacro|macroend|macro|makensis|packhdr|searchparse|searchreplace|tempfile|undef|verbose|warning)"};return{cI:!1,k:{keyword:"Abort AddBrandingImage AddSize AllowRootDirInstall AllowSkipFiles AutoCloseWindow BGFont BGGradient BrandingText BringToFront Call CallInstDLL Caption ChangeUI CheckBitmap ClearErrors CompletedText ComponentText CopyFiles CRCCheck CreateDirectory CreateFont CreateShortCut Delete DeleteINISec DeleteINIStr DeleteRegKey DeleteRegValue DetailPrint DetailsButtonText DirText DirVar DirVerify EnableWindow EnumRegKey EnumRegValue Exch Exec ExecShell ExecWait ExpandEnvStrings File FileBufSize FileClose FileErrorText FileOpen FileRead FileReadByte FileReadUTF16LE FileReadWord FileSeek FileWrite FileWriteByte FileWriteUTF16LE FileWriteWord FindClose FindFirst FindNext FindWindow FlushINI FunctionEnd GetCurInstType GetCurrentAddress GetDlgItem GetDLLVersion GetDLLVersionLocal GetErrorLevel GetFileTime GetFileTimeLocal GetFullPathName GetFunctionAddress GetInstDirError GetLabelAddress GetTempFileName Goto HideWindow Icon IfAbort IfErrors IfFileExists IfRebootFlag IfSilent InitPluginsDir InstallButtonText InstallColors InstallDir InstallDirRegKey InstProgressFlags InstType InstTypeGetText InstTypeSetText IntCmp IntCmpU IntFmt IntOp IsWindow LangString LicenseBkColor LicenseData LicenseForceSelection LicenseLangString LicenseText LoadLanguageFile LockWindow LogSet LogText ManifestDPIAware ManifestSupportedOS MessageBox MiscButtonText Name Nop OutFile Page PageCallbacks PageExEnd Pop Push Quit ReadEnvStr ReadINIStr ReadRegDWORD ReadRegStr Reboot RegDLL Rename RequestExecutionLevel ReserveFile Return RMDir SearchPath SectionEnd SectionGetFlags SectionGetInstTypes SectionGetSize SectionGetText SectionGroupEnd SectionIn SectionSetFlags SectionSetInstTypes SectionSetSize SectionSetText SendMessage SetAutoClose SetBrandingImage SetCompress SetCompressor SetCompressorDictSize SetCtlColors SetCurInstType SetDatablockOptimize SetDateSave SetDetailsPrint SetDetailsView SetErrorLevel SetErrors SetFileAttributes SetFont SetOutPath SetOverwrite 
SetPluginUnload SetRebootFlag SetRegView SetShellVarContext SetSilent ShowInstDetails ShowUninstDetails ShowWindow SilentInstall SilentUnInstall Sleep SpaceTexts StrCmp StrCmpS StrCpy StrLen SubCaption SubSectionEnd Unicode UninstallButtonText UninstallCaption UninstallIcon UninstallSubCaption UninstallText UninstPage UnRegDLL Var VIAddVersionKey VIFileVersion VIProductVersion WindowIcon WriteINIStr WriteRegBin WriteRegDWORD WriteRegExpandStr WriteRegStr WriteUninstaller XPStyle",literal:"admin all auto both colored current false force hide highest lastused leave listonly none normal notset off on open print show silent silentlog smooth textonly true user "},c:[e.HCM,e.CBCM,{cN:"string",b:'"',e:'"',i:"\\n",c:[{b:"\\$(\\\\(n|r|t)|\\$)"},t,i,n,r]},e.C(";","$",{r:0}),{cN:"function",bK:"Function PageEx Section SectionGroup SubSection",e:"$"},o,i,n,r,l,e.NM,{b:e.IR+"::"+e.IR}]}});hljs.registerLanguage("diff",function(e){return{aliases:["patch"],c:[{cN:"meta",r:10,v:[{b:/^@@ +\-\d+,\d+ +\+\d+,\d+ +@@$/},{b:/^\*\*\* +\d+,\d+ +\*\*\*\*$/},{b:/^\-\-\- +\d+,\d+ +\-\-\-\-$/}]},{cN:"comment",v:[{b:/Index: /,e:/$/},{b:/=====/,e:/=====$/},{b:/^\-\-\-/,e:/$/},{b:/^\*{3} /,e:/$/},{b:/^\+\+\+/,e:/$/},{b:/\*{5}/,e:/\*{5}$/}]},{cN:"addition",b:"^\\+",e:"$"},{cN:"deletion",b:"^\\-",e:"$"},{cN:"addition",b:"^\\!",e:"$"}]}});hljs.registerLanguage("glsl",function(e){return{k:{keyword:"break continue discard do else for if return whileattribute binding buffer ccw centroid centroid varying coherent column_major const cw depth_any depth_greater depth_less depth_unchanged early_fragment_tests equal_spacing flat fractional_even_spacing fractional_odd_spacing highp in index inout invariant invocations isolines layout line_strip lines lines_adjacency local_size_x local_size_y local_size_z location lowp max_vertices mediump noperspective offset origin_upper_left out packed patch pixel_center_integer point_mode points precise precision quads r11f_g11f_b10f r16 r16_snorm r16f r16i r16ui r32f r32i r32ui r8 r8_snorm r8i r8ui readonly restrict rg16 rg16_snorm rg16f rg16i rg16ui rg32f rg32i rg32ui rg8 rg8_snorm rg8i rg8ui rgb10_a2 rgb10_a2ui rgba16 rgba16_snorm rgba16f rgba16i rgba16ui rgba32f rgba32i rgba32ui rgba8 rgba8_snorm rgba8i rgba8ui row_major sample shared smooth std140 std430 stream triangle_strip triangles triangles_adjacency uniform varying vertices volatile writeonly",type:"atomic_uint bool bvec2 bvec3 bvec4 dmat2 dmat2x2 dmat2x3 dmat2x4 dmat3 dmat3x2 dmat3x3 dmat3x4 dmat4 dmat4x2 dmat4x3 dmat4x4 double dvec2 dvec3 dvec4 float iimage1D iimage1DArray iimage2D iimage2DArray iimage2DMS iimage2DMSArray iimage2DRect iimage3D iimageBufferiimageCube iimageCubeArray image1D image1DArray image2D image2DArray image2DMS image2DMSArray image2DRect image3D imageBuffer imageCube imageCubeArray int isampler1D isampler1DArray isampler2D isampler2DArray isampler2DMS isampler2DMSArray isampler2DRect isampler3D isamplerBuffer isamplerCube isamplerCubeArray ivec2 ivec3 ivec4 mat2 mat2x2 mat2x3 mat2x4 mat3 mat3x2 mat3x3 mat3x4 mat4 mat4x2 mat4x3 mat4x4 sampler1D sampler1DArray sampler1DArrayShadow sampler1DShadow sampler2D sampler2DArray sampler2DArrayShadow sampler2DMS sampler2DMSArray sampler2DRect sampler2DRectShadow sampler2DShadow sampler3D samplerBuffer samplerCube samplerCubeArray samplerCubeArrayShadow samplerCubeShadow image1D uimage1DArray uimage2D uimage2DArray uimage2DMS uimage2DMSArray uimage2DRect uimage3D uimageBuffer uimageCube uimageCubeArray uint usampler1D usampler1DArray usampler2D usampler2DArray usampler2DMS 
usampler2DMSArray usampler2DRect usampler3D samplerBuffer usamplerCube usamplerCubeArray uvec2 uvec3 uvec4 vec2 vec3 vec4 void",built_in:"gl_MaxAtomicCounterBindings gl_MaxAtomicCounterBufferSize gl_MaxClipDistances gl_MaxClipPlanes gl_MaxCombinedAtomicCounterBuffers gl_MaxCombinedAtomicCounters gl_MaxCombinedImageUniforms gl_MaxCombinedImageUnitsAndFragmentOutputs gl_MaxCombinedTextureImageUnits gl_MaxComputeAtomicCounterBuffers gl_MaxComputeAtomicCounters gl_MaxComputeImageUniforms gl_MaxComputeTextureImageUnits gl_MaxComputeUniformComponents gl_MaxComputeWorkGroupCount gl_MaxComputeWorkGroupSize gl_MaxDrawBuffers gl_MaxFragmentAtomicCounterBuffers gl_MaxFragmentAtomicCounters gl_MaxFragmentImageUniforms gl_MaxFragmentInputComponents gl_MaxFragmentInputVectors gl_MaxFragmentUniformComponents gl_MaxFragmentUniformVectors gl_MaxGeometryAtomicCounterBuffers gl_MaxGeometryAtomicCounters gl_MaxGeometryImageUniforms gl_MaxGeometryInputComponents gl_MaxGeometryOutputComponents gl_MaxGeometryOutputVertices gl_MaxGeometryTextureImageUnits gl_MaxGeometryTotalOutputComponents gl_MaxGeometryUniformComponents gl_MaxGeometryVaryingComponents gl_MaxImageSamples gl_MaxImageUnits gl_MaxLights gl_MaxPatchVertices gl_MaxProgramTexelOffset gl_MaxTessControlAtomicCounterBuffers gl_MaxTessControlAtomicCounters gl_MaxTessControlImageUniforms gl_MaxTessControlInputComponents gl_MaxTessControlOutputComponents gl_MaxTessControlTextureImageUnits gl_MaxTessControlTotalOutputComponents gl_MaxTessControlUniformComponents gl_MaxTessEvaluationAtomicCounterBuffers gl_MaxTessEvaluationAtomicCounters gl_MaxTessEvaluationImageUniforms gl_MaxTessEvaluationInputComponents gl_MaxTessEvaluationOutputComponents gl_MaxTessEvaluationTextureImageUnits gl_MaxTessEvaluationUniformComponents gl_MaxTessGenLevel gl_MaxTessPatchComponents gl_MaxTextureCoords gl_MaxTextureImageUnits gl_MaxTextureUnits gl_MaxVaryingComponents gl_MaxVaryingFloats gl_MaxVaryingVectors gl_MaxVertexAtomicCounterBuffers gl_MaxVertexAtomicCounters gl_MaxVertexAttribs gl_MaxVertexImageUniforms gl_MaxVertexOutputComponents gl_MaxVertexOutputVectors gl_MaxVertexTextureImageUnits gl_MaxVertexUniformComponents gl_MaxVertexUniformVectors gl_MaxViewports gl_MinProgramTexelOffset gl_BackColor gl_BackLightModelProduct gl_BackLightProduct gl_BackMaterial gl_BackSecondaryColor gl_ClipDistance gl_ClipPlane gl_ClipVertex gl_Color gl_DepthRange gl_EyePlaneQ gl_EyePlaneR gl_EyePlaneS gl_EyePlaneT gl_Fog gl_FogCoord gl_FogFragCoord gl_FragColor gl_FragCoord gl_FragData gl_FragDepth gl_FrontColor gl_FrontFacing gl_FrontLightModelProduct gl_FrontLightProduct gl_FrontMaterial gl_FrontSecondaryColor gl_GlobalInvocationID gl_InstanceID gl_InvocationID gl_Layer gl_LightModel gl_LightSource gl_LocalInvocationID gl_LocalInvocationIndex gl_ModelViewMatrix gl_ModelViewMatrixInverse gl_ModelViewMatrixInverseTranspose gl_ModelViewMatrixTranspose gl_ModelViewProjectionMatrix gl_ModelViewProjectionMatrixInverse gl_ModelViewProjectionMatrixInverseTranspose gl_ModelViewProjectionMatrixTranspose gl_MultiTexCoord0 gl_MultiTexCoord1 gl_MultiTexCoord2 gl_MultiTexCoord3 gl_MultiTexCoord4 gl_MultiTexCoord5 gl_MultiTexCoord6 gl_MultiTexCoord7 gl_Normal gl_NormalMatrix gl_NormalScale gl_NumSamples gl_NumWorkGroups gl_ObjectPlaneQ gl_ObjectPlaneR gl_ObjectPlaneS gl_ObjectPlaneT gl_PatchVerticesIn gl_Point gl_PointCoord gl_PointSize gl_Position gl_PrimitiveID gl_PrimitiveIDIn gl_ProjectionMatrix gl_ProjectionMatrixInverse gl_ProjectionMatrixInverseTranspose gl_ProjectionMatrixTranspose gl_SampleID 
gl_SampleMask gl_SampleMaskIn gl_SamplePosition gl_SecondaryColor gl_TessCoord gl_TessLevelInner gl_TessLevelOuter gl_TexCoord gl_TextureEnvColor gl_TextureMatrix gl_TextureMatrixInverse gl_TextureMatrixInverseTranspose gl_TextureMatrixTranspose gl_Vertex gl_VertexID gl_ViewportIndex gl_WorkGroupID gl_WorkGroupSize gl_in gl_out EmitStreamVertex EmitVertex EndPrimitive EndStreamPrimitive abs acos acosh all any asin asinh atan atanh atomicAdd atomicAnd atomicCompSwap atomicCounter atomicCounterDecrement atomicCounterIncrement atomicExchange atomicMax atomicMin atomicOr atomicXor barrier bitCount bitfieldExtract bitfieldInsert bitfieldReverse ceil clamp cos cosh cross dFdx dFdy degrees determinant distance dot equal exp exp2 faceforward findLSB findMSB floatBitsToInt floatBitsToUint floor fma fract frexp ftransform fwidth greaterThan greaterThanEqual groupMemoryBarrier imageAtomicAdd imageAtomicAnd imageAtomicCompSwap imageAtomicExchange imageAtomicMax imageAtomicMin imageAtomicOr imageAtomicXor imageLoad imageSize imageStore imulExtended intBitsToFloat interpolateAtCentroid interpolateAtOffset interpolateAtSample inverse inversesqrt isinf isnan ldexp length lessThan lessThanEqual log log2 matrixCompMult max memoryBarrier memoryBarrierAtomicCounter memoryBarrierBuffer memoryBarrierImage memoryBarrierShared min mix mod modf noise1 noise2 noise3 noise4 normalize not notEqual outerProduct packDouble2x32 packHalf2x16 packSnorm2x16 packSnorm4x8 packUnorm2x16 packUnorm4x8 pow radians reflect refract round roundEven shadow1D shadow1DLod shadow1DProj shadow1DProjLod shadow2D shadow2DLod shadow2DProj shadow2DProjLod sign sin sinh smoothstep sqrt step tan tanh texelFetch texelFetchOffset texture texture1D texture1DLod texture1DProj texture1DProjLod texture2D texture2DLod texture2DProj texture2DProjLod texture3D texture3DLod texture3DProj texture3DProjLod textureCube textureCubeLod textureGather textureGatherOffset textureGatherOffsets textureGrad textureGradOffset textureLod textureLodOffset textureOffset textureProj textureProjGrad textureProjGradOffset textureProjLod textureProjLodOffset textureProjOffset textureQueryLevels textureQueryLod textureSize transpose trunc uaddCarry uintBitsToFloat umulExtended unpackDouble2x32 unpackHalf2x16 unpackSnorm2x16 unpackSnorm4x8 unpackUnorm2x16 unpackUnorm4x8 usubBorrow",literal:"true false"},i:'"',c:[e.CLCM,e.CBCM,e.CNM,{cN:"meta",b:"#",e:"$"}]}});hljs.registerLanguage("capnproto",function(t){return{aliases:["capnp"],k:{keyword:"struct enum interface union group import using const annotation extends in of on as with from fixed",built_in:"Void Bool Int8 Int16 Int32 Int64 UInt8 UInt16 UInt32 UInt64 Float32 Float64 Text Data AnyPointer AnyStruct Capability List",literal:"true false"},c:[t.QSM,t.NM,t.HCM,{cN:"meta",b:/@0x[\w\d]{16};/,i:/\n/},{cN:"symbol",b:/@\d+\b/},{cN:"class",bK:"struct enum",e:/\{/,i:/\n/,c:[t.inherit(t.TM,{starts:{eW:!0,eE:!0}})]},{cN:"class",bK:"interface",e:/\{/,i:/\n/,c:[t.inherit(t.TM,{starts:{eW:!0,eE:!0}})]}]}});hljs.registerLanguage("elm",function(e){var i={v:[e.C("--","$"),e.C("{-","-}",{c:["self"]})]},r={cN:"type",b:"\\b[A-Z][\\w']*",r:0},t={b:"\\(",e:"\\)",i:'"',c:[{cN:"type",b:"\\b[A-Z][\\w]*(\\((\\.\\.|,|\\w+)\\))?"},i]},n={b:"{",e:"}",c:t.c};return{k:"let in if then else case of where module import exposing type alias as infix infixl infixr port",c:[{bK:"module",e:"where",k:"module where",c:[t,i],i:"\\W\\.|;"},{b:"import",e:"$",k:"import as exposing",c:[t,i],i:"\\W\\.|;"},{b:"type",e:"$",k:"type alias",c:[r,t,n,i]},{bK:"infix 
infixl infixr",e:"$",c:[e.CNM,i]},{b:"port",e:"$",k:"port",c:[i]},e.QSM,e.CNM,r,e.inherit(e.TM,{b:"^[_a-z][\\w']*"}),i,{b:"->|<-"}]}});hljs.registerLanguage("gradle",function(e){return{cI:!0,k:{keyword:"task project allprojects subprojects artifacts buildscript configurations dependencies repositories sourceSets description delete from into include exclude source classpath destinationDir includes options sourceCompatibility targetCompatibility group flatDir doLast doFirst flatten todir fromdir ant def abstract break case catch continue default do else extends final finally for if implements instanceof native new private protected public return static switch synchronized throw throws transient try volatile while strictfp package import false null super this true antlrtask checkstyle codenarc copy boolean byte char class double float int interface long short void compile runTime file fileTree abs any append asList asWritable call collect compareTo count div dump each eachByte eachFile eachLine every find findAll flatten getAt getErr getIn getOut getText grep immutable inject inspect intersect invokeMethods isCase join leftShift minus multiply newInputStream newOutputStream newPrintWriter newReader newWriter next plus pop power previous print println push putAt read readBytes readLines reverse reverseEach round size sort splitEachLine step subMap times toInteger toList tokenize upto waitForOrKill withPrintWriter withReader withStream withWriter withWriterAppend write writeLine"},c:[e.CLCM,e.CBCM,e.ASM,e.QSM,e.NM,e.RM]}});hljs.registerLanguage("processing",function(e){return{k:{keyword:"BufferedReader PVector PFont PImage PGraphics HashMap boolean byte char color double float int long String Array FloatDict FloatList IntDict IntList JSONArray JSONObject Object StringDict StringList Table TableRow XML false synchronized int abstract float private char boolean static null if const for true while long throw strictfp finally protected import native final return void enum else break transient new catch instanceof byte super volatile case assert short package default double public try this switch continue throws protected public private",literal:"P2D P3D HALF_PI PI QUARTER_PI TAU TWO_PI",title:"setup draw",built_in:"displayHeight displayWidth mouseY mouseX mousePressed pmouseX pmouseY key keyCode pixels focused frameCount frameRate height width size createGraphics beginDraw createShape loadShape PShape arc ellipse line point quad rect triangle bezier bezierDetail bezierPoint bezierTangent curve curveDetail curvePoint curveTangent curveTightness shape shapeMode beginContour beginShape bezierVertex curveVertex endContour endShape quadraticVertex vertex ellipseMode noSmooth rectMode smooth strokeCap strokeJoin strokeWeight mouseClicked mouseDragged mouseMoved mousePressed mouseReleased mouseWheel keyPressed keyPressedkeyReleased keyTyped print println save saveFrame day hour millis minute month second year background clear colorMode fill noFill noStroke stroke alpha blue brightness color green hue lerpColor red saturation modelX modelY modelZ screenX screenY screenZ ambient emissive shininess specular add createImage beginCamera camera endCamera frustum ortho perspective printCamera printProjection cursor frameRate noCursor exit loop noLoop popStyle pushStyle redraw binary boolean byte char float hex int str unbinary unhex join match matchAll nf nfc nfp nfs split splitTokens trim append arrayCopy concat expand reverse shorten sort splice subset box sphere sphereDetail createInput createReader loadBytes 
loadJSONArray loadJSONObject loadStrings loadTable loadXML open parseXML saveTable selectFolder selectInput beginRaw beginRecord createOutput createWriter endRaw endRecord PrintWritersaveBytes saveJSONArray saveJSONObject saveStream saveStrings saveXML selectOutput popMatrix printMatrix pushMatrix resetMatrix rotate rotateX rotateY rotateZ scale shearX shearY translate ambientLight directionalLight lightFalloff lights lightSpecular noLights normal pointLight spotLight image imageMode loadImage noTint requestImage tint texture textureMode textureWrap blend copy filter get loadPixels set updatePixels blendMode loadShader PShaderresetShader shader createFont loadFont text textFont textAlign textLeading textMode textSize textWidth textAscent textDescent abs ceil constrain dist exp floor lerp log mag map max min norm pow round sq sqrt acos asin atan atan2 cos degrees radians sin tan noise noiseDetail noiseSeed random randomGaussian randomSeed"},c:[e.CLCM,e.CBCM,e.ASM,e.QSM,e.CNM]}});hljs.registerLanguage("lisp",function(b){var e="[a-zA-Z_\\-\\+\\*\\/\\<\\=\\>\\&\\#][a-zA-Z0-9_\\-\\+\\*\\/\\<\\=\\>\\&\\#!]*",c="\\|[^]*?\\|",r="(\\-|\\+)?\\d+(\\.\\d+|\\/\\d+)?((d|e|f|l|s|D|E|F|L|S)(\\+|\\-)?\\d+)?",a={cN:"meta",b:"^#!",e:"$"},l={cN:"literal",b:"\\b(t{1}|nil)\\b"},n={cN:"number",v:[{b:r,r:0},{b:"#(b|B)[0-1]+(/[0-1]+)?"},{b:"#(o|O)[0-7]+(/[0-7]+)?"},{b:"#(x|X)[0-9a-fA-F]+(/[0-9a-fA-F]+)?"},{b:"#(c|C)\\("+r+" +"+r,e:"\\)"}]},i=b.inherit(b.QSM,{i:null}),t=b.C(";","$",{r:0}),s={b:"\\*",e:"\\*"},u={cN:"symbol",b:"[:&]"+e},d={b:e,r:0},f={b:c},m={b:"\\(",e:"\\)",c:["self",l,i,n,d]},o={c:[n,i,s,u,m,d],v:[{b:"['`]\\(",e:"\\)"},{b:"\\(quote ",e:"\\)",k:{name:"quote"}},{b:"'"+c}]},v={v:[{b:"'"+e},{b:"#'"+e+"(::"+e+")*"}]},N={b:"\\(\\s*",e:"\\)"},A={eW:!0,r:0};return N.c=[{cN:"name",v:[{b:e},{b:c}]},A],A.c=[o,v,N,l,n,i,t,s,u,f,d],{i:/\S/,c:[n,a,l,i,t,o,v,N,d]}});hljs.registerLanguage("tcl",function(e){return{aliases:["tk"],k:"after append apply array auto_execok auto_import auto_load auto_mkindex auto_mkindex_old auto_qualify auto_reset bgerror binary break catch cd chan clock close concat continue dde dict encoding eof error eval exec exit expr fblocked fconfigure fcopy file fileevent filename flush for foreach format gets glob global history http if incr info interp join lappend|10 lassign|10 lindex|10 linsert|10 list llength|10 load lrange|10 lrepeat|10 lreplace|10 lreverse|10 lsearch|10 lset|10 lsort|10 mathfunc mathop memory msgcat namespace open package parray pid pkg::create pkg_mkIndex platform platform::shell proc puts pwd read refchan regexp registry regsub|10 rename return safe scan seek set socket source split string subst switch tcl_endOfWord tcl_findLibrary tcl_startOfNextWord tcl_startOfPreviousWord tcl_wordBreakAfter tcl_wordBreakBefore tcltest tclvars tell time tm trace unknown unload unset update uplevel upvar variable vwait while",c:[e.C(";[ \\t]*#","$"),e.C("^[ \\t]*#","$"),{bK:"proc",e:"[\\{]",eE:!0,c:[{cN:"title",b:"[ \\t\\n\\r]+(::)?[a-zA-Z_]((::)?[a-zA-Z0-9_])*",e:"[ 
\\t\\n\\r]",eW:!0,eE:!0}]},{eE:!0,v:[{b:"\\$(\\{)?(::)?[a-zA-Z_]((::)?[a-zA-Z0-9_])*\\(([a-zA-Z0-9_])*\\)",e:"[^a-zA-Z0-9_\\}\\$]"},{b:"\\$(\\{)?(::)?[a-zA-Z_]((::)?[a-zA-Z0-9_])*",e:"(\\))?[^a-zA-Z0-9_\\}\\$]"}]},{cN:"string",c:[e.BE],v:[e.inherit(e.ASM,{i:null}),e.inherit(e.QSM,{i:null})]},{cN:"number",v:[e.BNM,e.CNM]}]}});hljs.registerLanguage("fix",function(u){return{c:[{b:/[^\u2401\u0001]+/,e:/[\u2401\u0001]/,eE:!0,rB:!0,rE:!1,c:[{b:/([^\u2401\u0001=]+)/,e:/=([^\u2401\u0001=]+)/,rE:!0,rB:!1,cN:"attr"},{b:/=/,e:/([\u2401\u0001])/,eE:!0,eB:!0,cN:"string"}]}],cI:!0}});hljs.registerLanguage("haskell",function(e){var i={v:[e.C("--","$"),e.C("{-","-}",{c:["self"]})]},a={cN:"meta",b:"{-#",e:"#-}"},l={cN:"meta",b:"^#",e:"$"},c={cN:"type",b:"\\b[A-Z][\\w']*",r:0},n={b:"\\(",e:"\\)",i:'"',c:[a,l,{cN:"type",b:"\\b[A-Z][\\w]*(\\((\\.\\.|,|\\w+)\\))?"},e.inherit(e.TM,{b:"[_a-z][\\w']*"}),i]},s={b:"{",e:"}",c:n.c};return{aliases:["hs"],k:"let in if then else case of where do module import hiding qualified type data newtype deriving class instance as default infix infixl infixr foreign export ccall stdcall cplusplus jvm dotnet safe unsafe family forall mdo proc rec",c:[{bK:"module",e:"where",k:"module where",c:[n,i],i:"\\W\\.|;"},{b:"\\bimport\\b",e:"$",k:"import qualified as hiding",c:[n,i],i:"\\W\\.|;"},{cN:"class",b:"^(\\s*)?(class|instance)\\b",e:"where",k:"class family instance where",c:[c,n,i]},{cN:"class",b:"\\b(data|(new)?type)\\b",e:"$",k:"data family type newtype deriving",c:[a,c,n,s,i]},{bK:"default",e:"$",c:[c,n,i]},{bK:"infix infixl infixr",e:"$",c:[e.CNM,i]},{b:"\\bforeign\\b",e:"$",k:"foreign import export ccall stdcall cplusplus jvm dotnet safe unsafe",c:[c,e.QSM,i]},{cN:"meta",b:"#!\\/usr\\/bin\\/env runhaskell",e:"$"},a,l,e.QSM,e.CNM,c,e.inherit(e.TM,{b:"^[_a-z][\\w']*"}),i,{b:"->|<-"}]}});hljs.registerLanguage("css",function(e){var c="[a-zA-Z-][a-zA-Z0-9_-]*",t={b:/[A-Z\_\.\-]+\s*:/,rB:!0,e:";",eW:!0,c:[{cN:"attribute",b:/\S/,e:":",eE:!0,starts:{eW:!0,eE:!0,c:[{b:/[\w-]+\s*\(/,rB:!0,c:[{cN:"built_in",b:/[\w-]+/}]},e.CSSNM,e.QSM,e.ASM,e.CBCM,{cN:"number",b:"#[0-9A-Fa-f]+"},{cN:"meta",b:"!important"}]}}]};return{cI:!0,i:/[=\/|'\$]/,c:[e.CBCM,{cN:"selector-id",b:/#[A-Za-z0-9_-]+/},{cN:"selector-class",b:/\.[A-Za-z0-9_-]+/},{cN:"selector-attr",b:/\[/,e:/\]/,i:"$"},{cN:"selector-pseudo",b:/:(:)?[a-zA-Z0-9\_\-\+\(\)"'.]+/},{b:"@(font-face|page)",l:"[a-z-]+",k:"font-face page"},{b:"@",e:"[{;]",c:[{cN:"keyword",b:/\S+/},{b:/\s/,eW:!0,eE:!0,r:0,c:[e.ASM,e.QSM,e.CSSNM]}]},{cN:"selector-tag",b:c,r:0},{b:"{",e:"}",i:/\S/,c:[e.CBCM,t]}]}});hljs.registerLanguage("kotlin",function(e){var r="val var get set class trait object open private protected public final enum if else do while for when break continue throw try catch finally import package is as in return fun override default companion reified inline volatile transient native Byte Short Char Int Long Boolean Float Double Void Unit Nothing";return{k:{keyword:r,literal:"true false null"},c:[e.C("/\\*\\*","\\*/",{r:0,c:[{cN:"doctag",b:"@[A-Za-z]+"}]}),e.CLCM,e.CBCM,{cN:"type",b:/,e:/>/,rB:!0,eE:!1,r:0},{cN:"function",bK:"fun",e:"[(]|$",rB:!0,eE:!0,k:r,i:/fun\s+(<.*>)?[^\s\(]+(\s+[^\s\(]+)\s*=/,r:5,c:[{b:e.UIR+"\\s*\\(",rB:!0,r:0,c:[e.UTM]},{cN:"type",b:/,e:/>/,k:"reified",r:0},{cN:"params",b:/\(/,e:/\)/,k:r,r:0,i:/\([^\(,\s:]+,/,c:[{cN:"type",b:/:\s*/,e:/\s*[=\)]/,eB:!0,rE:!0,r:0}]},e.CLCM,e.CBCM]},{cN:"class",bK:"class trait",e:/[:\{(]|$/,eE:!0,i:"extends 
implements",c:[e.UTM,{cN:"type",b:/,e:/>/,eB:!0,eE:!0,r:0},{cN:"type",b:/[,:]\s*/,e:/[<\(,]|$/,eB:!0,rE:!0}]},{cN:"variable",bK:"var val",e:/\s*[=:$]/,eE:!0},e.QSM,{cN:"meta",b:"^#!/usr/bin/env",e:"$",i:"\n"},e.CNM]}});hljs.registerLanguage("mojolicious",function(e){return{sL:"xml",c:[{cN:"meta",b:"^__(END|DATA)__$"},{b:"^\\s*%{1,2}={0,2}",e:"$",sL:"perl"},{b:"<%{1,2}={0,2}",e:"={0,1}%>",sL:"perl",eB:!0,eE:!0}]}});hljs.registerLanguage("scss",function(e){var t="[a-zA-Z-][a-zA-Z0-9_-]*",i={cN:"variable",b:"(\\$"+t+")\\b"},r={cN:"number",b:"#[0-9A-Fa-f]+"};({cN:"attribute",b:"[A-Z\\_\\.\\-]+",e:":",eE:!0,i:"[^\\s]",starts:{eW:!0,eE:!0,c:[r,e.CSSNM,e.QSM,e.ASM,e.CBCM,{cN:"meta",b:"!important"}]}});return{cI:!0,i:"[=/|']",c:[e.CLCM,e.CBCM,{cN:"selector-id",b:"\\#[A-Za-z0-9_-]+",r:0},{cN:"selector-class",b:"\\.[A-Za-z0-9_-]+",r:0},{cN:"selector-attr",b:"\\[",e:"\\]",i:"$"},{cN:"selector-tag",b:"\\b(a|abbr|acronym|address|area|article|aside|audio|b|base|big|blockquote|body|br|button|canvas|caption|cite|code|col|colgroup|command|datalist|dd|del|details|dfn|div|dl|dt|em|embed|fieldset|figcaption|figure|footer|form|frame|frameset|(h[1-6])|head|header|hgroup|hr|html|i|iframe|img|input|ins|kbd|keygen|label|legend|li|link|map|mark|meta|meter|nav|noframes|noscript|object|ol|optgroup|option|output|p|param|pre|progress|q|rp|rt|ruby|samp|script|section|select|small|span|strike|strong|style|sub|sup|table|tbody|td|textarea|tfoot|th|thead|time|title|tr|tt|ul|var|video)\\b",r:0},{b:":(visited|valid|root|right|required|read-write|read-only|out-range|optional|only-of-type|only-child|nth-of-type|nth-last-of-type|nth-last-child|nth-child|not|link|left|last-of-type|last-child|lang|invalid|indeterminate|in-range|hover|focus|first-of-type|first-line|first-letter|first-child|first|enabled|empty|disabled|default|checked|before|after|active)"},{b:"::(after|before|choices|first-letter|first-line|repeat-index|repeat-item|selection|value)"},i,{cN:"attribute",b:"\\b(z-index|word-wrap|word-spacing|word-break|width|widows|white-space|visibility|vertical-align|unicode-bidi|transition-timing-function|transition-property|transition-duration|transition-delay|transition|transform-style|transform-origin|transform|top|text-underline-position|text-transform|text-shadow|text-rendering|text-overflow|text-indent|text-decoration-style|text-decoration-line|text-decoration-color|text-decoration|text-align-last|text-align|tab-size|table-layout|right|resize|quotes|position|pointer-events|perspective-origin|perspective|page-break-inside|page-break-before|page-break-after|padding-top|padding-right|padding-left|padding-bottom|padding|overflow-y|overflow-x|overflow-wrap|overflow|outline-width|outline-style|outline-offset|outline-color|outline|orphans|order|opacity|object-position|object-fit|normal|none|nav-up|nav-right|nav-left|nav-index|nav-down|min-width|min-height|max-width|max-height|mask|marks|margin-top|margin-right|margin-left|margin-bottom|margin|list-style-type|list-style-position|list-style-image|list-style|line-height|letter-spacing|left|justify-content|initial|inherit|ime-mode|image-orientation|image-resolution|image-rendering|icon|hyphens|height|font-weight|font-variant-ligatures|font-variant|font-style|font-stretch|font-size-adjust|font-size|font-language-override|font-kerning|font-feature-settings|font-family|font|float|flex-wrap|flex-shrink|flex-grow|flex-flow|flex-direction|flex-basis|flex|filter|empty-cells|display|direction|cursor|counter-reset|counter-increment|content|column-width|column-span|column-rule-width|column-rule-st
yle|column-rule-color|column-rule|column-gap|column-fill|column-count|columns|color|clip-path|clip|clear|caption-side|break-inside|break-before|break-after|box-sizing|box-shadow|box-decoration-break|bottom|border-width|border-top-width|border-top-style|border-top-right-radius|border-top-left-radius|border-top-color|border-top|border-style|border-spacing|border-right-width|border-right-style|border-right-color|border-right|border-radius|border-left-width|border-left-style|border-left-color|border-left|border-image-width|border-image-source|border-image-slice|border-image-repeat|border-image-outset|border-image|border-color|border-collapse|border-bottom-width|border-bottom-style|border-bottom-right-radius|border-bottom-left-radius|border-bottom-color|border-bottom|border|background-size|background-repeat|background-position|background-origin|background-image|background-color|background-clip|background-attachment|background-blend-mode|background|backface-visibility|auto|animation-timing-function|animation-play-state|animation-name|animation-iteration-count|animation-fill-mode|animation-duration|animation-direction|animation-delay|animation|align-self|align-items|align-content)\\b",i:"[^\\s]"},{b:"\\b(whitespace|wait|w-resize|visible|vertical-text|vertical-ideographic|uppercase|upper-roman|upper-alpha|underline|transparent|top|thin|thick|text|text-top|text-bottom|tb-rl|table-header-group|table-footer-group|sw-resize|super|strict|static|square|solid|small-caps|separate|se-resize|scroll|s-resize|rtl|row-resize|ridge|right|repeat|repeat-y|repeat-x|relative|progress|pointer|overline|outside|outset|oblique|nowrap|not-allowed|normal|none|nw-resize|no-repeat|no-drop|newspaper|ne-resize|n-resize|move|middle|medium|ltr|lr-tb|lowercase|lower-roman|lower-alpha|loose|list-item|line|line-through|line-edge|lighter|left|keep-all|justify|italic|inter-word|inter-ideograph|inside|inset|inline|inline-block|inherit|inactive|ideograph-space|ideograph-parenthesis|ideograph-numeric|ideograph-alpha|horizontal|hidden|help|hand|groove|fixed|ellipsis|e-resize|double|dotted|distribute|distribute-space|distribute-letter|distribute-all-lines|disc|disabled|default|decimal|dashed|crosshair|collapse|col-resize|circle|char|center|capitalize|break-word|break-all|bottom|both|bolder|bold|block|bidi-override|below|baseline|auto|always|all-scroll|absolute|table|table-cell)\\b"},{b:":",e:";",c:[i,r,e.CSSNM,e.QSM,e.ASM,{cN:"meta",b:"!important"}]},{b:"@",e:"[{;]",k:"mixin include extend for if else each while charset import debug media page content font-face namespace warn",c:[i,e.QSM,e.ASM,r,e.CSSNM,{b:"\\s[A-Za-z0-9_.-]+",r:0}]}]}});hljs.registerLanguage("twig",function(e){var t={cN:"params",b:"\\(",e:"\\)"},a="attribute block constant cycle date dump include max min parent random range source template_from_string",r={bK:a,k:{name:a},r:0,c:[t]},c={b:/\|[A-Za-z_]+:?/,k:"abs batch capitalize convert_encoding date date_modify default escape first format join json_encode keys last length lower merge nl2br number_format raw replace reverse round slice sort split striptags title trim upper url_encode",c:[r]},s="autoescape block do embed extends filter flush for if import include macro sandbox set spaceless use verbatim";return s=s+" "+s.split(" ").map(function(e){return"end"+e}).join(" 
"),{aliases:["craftcms"],cI:!0,sL:"xml",c:[e.C(/\{#/,/#}/),{cN:"template-tag",b:/\{%/,e:/%}/,c:[{cN:"name",b:/\w+/,k:s,starts:{eW:!0,c:[c,r],r:0}}]},{cN:"template-variable",b:/\{\{/,e:/}}/,c:["self",c,r]}]}});hljs.registerLanguage("stata",function(e){return{aliases:["do","ado"],cI:!0,k:"if else in foreach for forv forva forval forvalu forvalue forvalues by bys bysort xi quietly qui capture about ac ac_7 acprplot acprplot_7 adjust ado adopath adoupdate alpha ameans an ano anov anova anova_estat anova_terms anovadef aorder ap app appe appen append arch arch_dr arch_estat arch_p archlm areg areg_p args arima arima_dr arima_estat arima_p as asmprobit asmprobit_estat asmprobit_lf asmprobit_mfx__dlg asmprobit_p ass asse asser assert avplot avplot_7 avplots avplots_7 bcskew0 bgodfrey binreg bip0_lf biplot bipp_lf bipr_lf bipr_p biprobit bitest bitesti bitowt blogit bmemsize boot bootsamp bootstrap bootstrap_8 boxco_l boxco_p boxcox boxcox_6 boxcox_p bprobit br break brier bro brow brows browse brr brrstat bs bs_7 bsampl_w bsample bsample_7 bsqreg bstat bstat_7 bstat_8 bstrap bstrap_7 ca ca_estat ca_p cabiplot camat canon canon_8 canon_8_p canon_estat canon_p cap caprojection capt captu captur capture cat cc cchart cchart_7 cci cd censobs_table centile cf char chdir checkdlgfiles checkestimationsample checkhlpfiles checksum chelp ci cii cl class classutil clear cli clis clist clo clog clog_lf clog_p clogi clogi_sw clogit clogit_lf clogit_p clogitp clogl_sw cloglog clonevar clslistarray cluster cluster_measures cluster_stop cluster_tree cluster_tree_8 clustermat cmdlog cnr cnre cnreg cnreg_p cnreg_sw cnsreg codebook collaps4 collapse colormult_nb colormult_nw compare compress conf confi confir confirm conren cons const constr constra constrai constrain constraint continue contract copy copyright copysource cor corc corr corr2data corr_anti corr_kmo corr_smc corre correl correla correlat correlate corrgram cou coun count cox cox_p cox_sw coxbase coxhaz coxvar cprplot cprplot_7 crc cret cretu cretur creturn cross cs cscript cscript_log csi ct ct_is ctset ctst_5 ctst_st cttost cumsp cumsp_7 cumul cusum cusum_7 cutil d datasig datasign datasigna datasignat datasignatu datasignatur datasignature datetof db dbeta de dec deco decod decode deff des desc descr descri describ describe destring dfbeta dfgls dfuller di di_g dir dirstats dis discard disp disp_res disp_s displ displa display distinct do doe doed doedi doedit dotplot dotplot_7 dprobit drawnorm drop ds ds_util dstdize duplicates durbina dwstat dydx e ed edi edit egen eivreg emdef en enc enco encod encode eq erase ereg ereg_lf ereg_p ereg_sw ereghet ereghet_glf ereghet_glf_sh ereghet_gp ereghet_ilf ereghet_ilf_sh ereghet_ip eret eretu eretur ereturn err erro error est est_cfexist est_cfname est_clickable est_expand est_hold est_table est_unhold est_unholdok estat estat_default estat_summ estat_vce_only esti estimates etodow etof etomdy ex exi exit expand expandcl fac fact facto factor factor_estat factor_p factor_pca_rotated factor_rotate factormat fcast fcast_compute fcast_graph fdades fdadesc fdadescr fdadescri fdadescrib fdadescribe fdasav fdasave fdause fh_st file open file read file close file filefilter fillin find_hlp_file findfile findit findit_7 fit fl fli flis flist for5_0 form forma format fpredict frac_154 frac_adj frac_chk frac_cox frac_ddp frac_dis frac_dv frac_in frac_mun frac_pp frac_pq frac_pv frac_wgt frac_xo fracgen fracplot fracplot_7 fracpoly fracpred fron_ex fron_hn fron_p fron_tn fron_tn2 frontier ftodate ftoe ftomdy ftowdate 
g gamhet_glf gamhet_gp gamhet_ilf gamhet_ip gamma gamma_d2 gamma_p gamma_sw gammahet gdi_hexagon gdi_spokes ge gen gene gener genera generat generate genrank genstd genvmean gettoken gl gladder gladder_7 glim_l01 glim_l02 glim_l03 glim_l04 glim_l05 glim_l06 glim_l07 glim_l08 glim_l09 glim_l10 glim_l11 glim_l12 glim_lf glim_mu glim_nw1 glim_nw2 glim_nw3 glim_p glim_v1 glim_v2 glim_v3 glim_v4 glim_v5 glim_v6 glim_v7 glm glm_6 glm_p glm_sw glmpred glo glob globa global glogit glogit_8 glogit_p gmeans gnbre_lf gnbreg gnbreg_5 gnbreg_p gomp_lf gompe_sw gomper_p gompertz gompertzhet gomphet_glf gomphet_glf_sh gomphet_gp gomphet_ilf gomphet_ilf_sh gomphet_ip gphdot gphpen gphprint gprefs gprobi_p gprobit gprobit_8 gr gr7 gr_copy gr_current gr_db gr_describe gr_dir gr_draw gr_draw_replay gr_drop gr_edit gr_editviewopts gr_example gr_example2 gr_export gr_print gr_qscheme gr_query gr_read gr_rename gr_replay gr_save gr_set gr_setscheme gr_table gr_undo gr_use graph graph7 grebar greigen greigen_7 greigen_8 grmeanby grmeanby_7 gs_fileinfo gs_filetype gs_graphinfo gs_stat gsort gwood h hadimvo hareg hausman haver he heck_d2 heckma_p heckman heckp_lf heckpr_p heckprob hel help hereg hetpr_lf hetpr_p hetprob hettest hexdump hilite hist hist_7 histogram hlogit hlu hmeans hotel hotelling hprobit hreg hsearch icd9 icd9_ff icd9p iis impute imtest inbase include inf infi infil infile infix inp inpu input ins insheet insp inspe inspec inspect integ inten intreg intreg_7 intreg_p intrg2_ll intrg_ll intrg_ll2 ipolate iqreg ir irf irf_create irfm iri is_svy is_svysum isid istdize ivprob_1_lf ivprob_lf ivprobit ivprobit_p ivreg ivreg_footnote ivtob_1_lf ivtob_lf ivtobit ivtobit_p jackknife jacknife jknife jknife_6 jknife_8 jkstat joinby kalarma1 kap kap_3 kapmeier kappa kapwgt kdensity kdensity_7 keep ksm ksmirnov ktau kwallis l la lab labe label labelbook ladder levels levelsof leverage lfit lfit_p li lincom line linktest lis list lloghet_glf lloghet_glf_sh lloghet_gp lloghet_ilf lloghet_ilf_sh lloghet_ip llogi_sw llogis_p llogist llogistic llogistichet lnorm_lf lnorm_sw lnorma_p lnormal lnormalhet lnormhet_glf lnormhet_glf_sh lnormhet_gp lnormhet_ilf lnormhet_ilf_sh lnormhet_ip lnskew0 loadingplot loc loca local log logi logis_lf logistic logistic_p logit logit_estat logit_p loglogs logrank loneway lookfor lookup lowess lowess_7 lpredict lrecomp lroc lroc_7 lrtest ls lsens lsens_7 lsens_x lstat ltable ltable_7 ltriang lv lvr2plot lvr2plot_7 m ma mac macr macro makecns man manova manova_estat manova_p manovatest mantel mark markin markout marksample mat mat_capp mat_order mat_put_rr mat_rapp mata mata_clear mata_describe mata_drop mata_matdescribe mata_matsave mata_matuse mata_memory mata_mlib mata_mosave mata_rename mata_which matalabel matcproc matlist matname matr matri matrix matrix_input__dlg matstrik mcc mcci md0_ md1_ md1debug_ md2_ md2debug_ mds mds_estat mds_p mdsconfig mdslong mdsmat mdsshepard mdytoe mdytof me_derd mean means median memory memsize meqparse mer merg merge mfp mfx mhelp mhodds minbound mixed_ll mixed_ll_reparm mkassert mkdir mkmat mkspline ml ml_5 ml_adjs ml_bhhhs ml_c_d ml_check ml_clear ml_cnt ml_debug ml_defd ml_e0 ml_e0_bfgs ml_e0_cycle ml_e0_dfp ml_e0i ml_e1 ml_e1_bfgs ml_e1_bhhh ml_e1_cycle ml_e1_dfp ml_e2 ml_e2_cycle ml_ebfg0 ml_ebfr0 ml_ebfr1 ml_ebh0q ml_ebhh0 ml_ebhr0 ml_ebr0i ml_ecr0i ml_edfp0 ml_edfr0 ml_edfr1 ml_edr0i ml_eds ml_eer0i ml_egr0i ml_elf ml_elf_bfgs ml_elf_bhhh ml_elf_cycle ml_elf_dfp ml_elfi ml_elfs ml_enr0i ml_enrr0 ml_erdu0 ml_erdu0_bfgs ml_erdu0_bhhh 
ml_erdu0_bhhhq ml_erdu0_cycle ml_erdu0_dfp ml_erdu0_nrbfgs ml_exde ml_footnote ml_geqnr ml_grad0 ml_graph ml_hbhhh ml_hd0 ml_hold ml_init ml_inv ml_log ml_max ml_mlout ml_mlout_8 ml_model ml_nb0 ml_opt ml_p ml_plot ml_query ml_rdgrd ml_repor ml_s_e ml_score ml_searc ml_technique ml_unhold mleval mlf_ mlmatbysum mlmatsum mlog mlogi mlogit mlogit_footnote mlogit_p mlopts mlsum mlvecsum mnl0_ mor more mov move mprobit mprobit_lf mprobit_p mrdu0_ mrdu1_ mvdecode mvencode mvreg mvreg_estat n nbreg nbreg_al nbreg_lf nbreg_p nbreg_sw nestreg net newey newey_7 newey_p news nl nl_7 nl_9 nl_9_p nl_p nl_p_7 nlcom nlcom_p nlexp2 nlexp2_7 nlexp2a nlexp2a_7 nlexp3 nlexp3_7 nlgom3 nlgom3_7 nlgom4 nlgom4_7 nlinit nllog3 nllog3_7 nllog4 nllog4_7 nlog_rd nlogit nlogit_p nlogitgen nlogittree nlpred no nobreak noi nois noisi noisil noisily note notes notes_dlg nptrend numlabel numlist odbc old_ver olo olog ologi ologi_sw ologit ologit_p ologitp on one onew onewa oneway op_colnm op_comp op_diff op_inv op_str opr opro oprob oprob_sw oprobi oprobi_p oprobit oprobitp opts_exclusive order orthog orthpoly ou out outf outfi outfil outfile outs outsh outshe outshee outsheet ovtest pac pac_7 palette parse parse_dissim pause pca pca_8 pca_display pca_estat pca_p pca_rotate pcamat pchart pchart_7 pchi pchi_7 pcorr pctile pentium pergram pergram_7 permute permute_8 personal peto_st pkcollapse pkcross pkequiv pkexamine pkexamine_7 pkshape pksumm pksumm_7 pl plo plot plugin pnorm pnorm_7 poisgof poiss_lf poiss_sw poisso_p poisson poisson_estat post postclose postfile postutil pperron pr prais prais_e prais_e2 prais_p predict predictnl preserve print pro prob probi probit probit_estat probit_p proc_time procoverlay procrustes procrustes_estat procrustes_p profiler prog progr progra program prop proportion prtest prtesti pwcorr pwd q\\s qby qbys qchi qchi_7 qladder qladder_7 qnorm qnorm_7 qqplot qqplot_7 qreg qreg_c qreg_p qreg_sw qu quadchk quantile quantile_7 que quer query range ranksum ratio rchart rchart_7 rcof recast reclink recode reg reg3 reg3_p regdw regr regre regre_p2 regres regres_p regress regress_estat regriv_p remap ren rena renam rename renpfix repeat replace report reshape restore ret retu retur return rm rmdir robvar roccomp roccomp_7 roccomp_8 rocf_lf rocfit rocfit_8 rocgold rocplot rocplot_7 roctab roctab_7 rolling rologit rologit_p rot rota rotat rotate rotatemat rreg rreg_p ru run runtest rvfplot rvfplot_7 rvpplot rvpplot_7 sa safesum sample sampsi sav save savedresults saveold sc sca scal scala scalar scatter scm_mine sco scob_lf scob_p scobi_sw scobit scor score scoreplot scoreplot_help scree screeplot screeplot_help sdtest sdtesti se search separate seperate serrbar serrbar_7 serset set set_defaults sfrancia sh she shel shell shewhart shewhart_7 signestimationsample signrank signtest simul simul_7 simulate simulate_8 sktest sleep slogit slogit_d2 slogit_p smooth snapspan so sor sort spearman spikeplot spikeplot_7 spikeplt spline_x split sqreg sqreg_p sret sretu sretur sreturn ssc st st_ct st_hc st_hcd st_hcd_sh st_is st_issys st_note st_promo st_set st_show st_smpl st_subid stack statsby statsby_8 stbase stci stci_7 stcox stcox_estat stcox_fr stcox_fr_ll stcox_p stcox_sw stcoxkm stcoxkm_7 stcstat stcurv stcurve stcurve_7 stdes stem stepwise stereg stfill stgen stir stjoin stmc stmh stphplot stphplot_7 stphtest stphtest_7 stptime strate strate_7 streg streg_sw streset sts sts_7 stset stsplit stsum sttocc sttoct stvary stweib su suest suest_8 sum summ summa summar summari summariz summarize sunflower 
sureg survcurv survsum svar svar_p svmat svy svy_disp svy_dreg svy_est svy_est_7 svy_estat svy_get svy_gnbreg_p svy_head svy_header svy_heckman_p svy_heckprob_p svy_intreg_p svy_ivreg_p svy_logistic_p svy_logit_p svy_mlogit_p svy_nbreg_p svy_ologit_p svy_oprobit_p svy_poisson_p svy_probit_p svy_regress_p svy_sub svy_sub_7 svy_x svy_x_7 svy_x_p svydes svydes_8 svygen svygnbreg svyheckman svyheckprob svyintreg svyintreg_7 svyintrg svyivreg svylc svylog_p svylogit svymarkout svymarkout_8 svymean svymlog svymlogit svynbreg svyolog svyologit svyoprob svyoprobit svyopts svypois svypois_7 svypoisson svyprobit svyprobt svyprop svyprop_7 svyratio svyreg svyreg_p svyregress svyset svyset_7 svyset_8 svytab svytab_7 svytest svytotal sw sw_8 swcnreg swcox swereg swilk swlogis swlogit swologit swoprbt swpois swprobit swqreg swtobit swweib symmetry symmi symplot symplot_7 syntax sysdescribe sysdir sysuse szroeter ta tab tab1 tab2 tab_or tabd tabdi tabdis tabdisp tabi table tabodds tabodds_7 tabstat tabu tabul tabula tabulat tabulate te tempfile tempname tempvar tes test testnl testparm teststd tetrachoric time_it timer tis tob tobi tobit tobit_p tobit_sw token tokeni tokeniz tokenize tostring total translate translator transmap treat_ll treatr_p treatreg trim trnb_cons trnb_mean trpoiss_d2 trunc_ll truncr_p truncreg tsappend tset tsfill tsline tsline_ex tsreport tsrevar tsrline tsset tssmooth tsunab ttest ttesti tut_chk tut_wait tutorial tw tware_st two twoway twoway__fpfit_serset twoway__function_gen twoway__histogram_gen twoway__ipoint_serset twoway__ipoints_serset twoway__kdensity_gen twoway__lfit_serset twoway__normgen_gen twoway__pci_serset twoway__qfit_serset twoway__scatteri_serset twoway__sunflower_gen twoway_ksm_serset ty typ type typeof u unab unabbrev unabcmd update us use uselabel var var_mkcompanion var_p varbasic varfcast vargranger varirf varirf_add varirf_cgraph varirf_create varirf_ctable varirf_describe varirf_dir varirf_drop varirf_erase varirf_graph varirf_ograph varirf_rename varirf_set varirf_table varlist varlmar varnorm varsoc varstable varstable_w varstable_w2 varwle vce vec vec_fevd vec_mkphi vec_p vec_p_w vecirf_create veclmar veclmar_w vecnorm vecnorm_w vecrank vecstable verinst vers versi versio version view viewsource vif vwls wdatetof webdescribe webseek webuse weib1_lf weib2_lf weib_lf weib_lf0 weibhet_glf weibhet_glf_sh weibhet_glfa weibhet_glfa_sh weibhet_gp weibhet_ilf weibhet_ilf_sh weibhet_ilfa weibhet_ilfa_sh weibhet_ip weibu_sw weibul_p weibull weibull_c weibull_s weibullhet wh whelp whi which whil while wilc_st wilcoxon win wind windo window winexec wntestb wntestb_7 wntestq xchart xchart_7 xcorr xcorr_7 xi xi_6 xmlsav xmlsave xmluse xpose xsh xshe xshel xshell xt_iis xt_tis xtab_p xtabond xtbin_p xtclog xtcloglog xtcloglog_8 xtcloglog_d2 xtcloglog_pa_p xtcloglog_re_p xtcnt_p xtcorr xtdata xtdes xtfront_p xtfrontier xtgee xtgee_elink xtgee_estat xtgee_makeivar xtgee_p xtgee_plink xtgls xtgls_p xthaus xthausman xtht_p xthtaylor xtile xtint_p xtintreg xtintreg_8 xtintreg_d2 xtintreg_p xtivp_1 xtivp_2 xtivreg xtline xtline_ex xtlogit xtlogit_8 xtlogit_d2 xtlogit_fe_p xtlogit_pa_p xtlogit_re_p xtmixed xtmixed_estat xtmixed_p xtnb_fe xtnb_lf xtnbreg xtnbreg_pa_p xtnbreg_refe_p xtpcse xtpcse_p xtpois xtpoisson xtpoisson_d2 xtpoisson_pa_p xtpoisson_refe_p xtpred xtprobit xtprobit_8 xtprobit_d2 xtprobit_re_p xtps_fe xtps_lf xtps_ren xtps_ren_8 xtrar_p xtrc xtrc_p xtrchh xtrefe_p xtreg xtreg_be xtreg_fe xtreg_ml xtreg_pa_p xtreg_re xtregar xtrere_p xtset xtsf_ll xtsf_llti 
xtsum xttab xttest0 xttobit xttobit_8 xttobit_p xttrans yx yxview__barlike_draw yxview_area_draw yxview_bar_draw yxview_dot_draw yxview_dropline_draw yxview_function_draw yxview_iarrow_draw yxview_ilabels_draw yxview_normal_draw yxview_pcarrow_draw yxview_pcbarrow_draw yxview_pccapsym_draw yxview_pcscatter_draw yxview_pcspike_draw yxview_rarea_draw yxview_rbar_draw yxview_rbarm_draw yxview_rcap_draw yxview_rcapsym_draw yxview_rconnected_draw yxview_rline_draw yxview_rscatter_draw yxview_rspike_draw yxview_spike_draw yxview_sunflower_draw zap_s zinb zinb_llf zinb_plf zip zip_llf zip_p zip_plf zt_ct_5 zt_hc_5 zt_hcd_5 zt_is_5 zt_iss_5 zt_sho_5 zt_smp_5 ztbase_5 ztcox_5 ztdes_5 ztereg_5 ztfill_5 ztgen_5 ztir_5 ztjoin_5 ztnb ztnb_p ztp ztp_p zts_5 ztset_5 ztspli_5 ztsum_5 zttoct_5 ztvary_5 ztweib_5",c:[{cN:"symbol",b:/`[a-zA-Z0-9_]+'/},{cN:"variable",b:/\$\{?[a-zA-Z0-9_]+\}?/},{cN:"string",v:[{b:'`"[^\r\n]*?"\''},{b:'"[^\r\n"]*"'}]},{cN:"built_in",v:[{b:"\\b(abs|acos|asin|atan|atan2|atanh|ceil|cloglog|comb|cos|digamma|exp|floor|invcloglog|invlogit|ln|lnfact|lnfactorial|lngamma|log|log10|max|min|mod|reldif|round|sign|sin|sqrt|sum|tan|tanh|trigamma|trunc|betaden|Binomial|binorm|binormal|chi2|chi2tail|dgammapda|dgammapdada|dgammapdadx|dgammapdx|dgammapdxdx|F|Fden|Ftail|gammaden|gammap|ibeta|invbinomial|invchi2|invchi2tail|invF|invFtail|invgammap|invibeta|invnchi2|invnFtail|invnibeta|invnorm|invnormal|invttail|nbetaden|nchi2|nFden|nFtail|nibeta|norm|normal|normalden|normd|npnchi2|tden|ttail|uniform|abbrev|char|index|indexnot|length|lower|ltrim|match|plural|proper|real|regexm|regexr|regexs|reverse|rtrim|string|strlen|strlower|strltrim|strmatch|strofreal|strpos|strproper|strreverse|strrtrim|strtrim|strupper|subinstr|subinword|substr|trim|upper|word|wordcount|_caller|autocode|byteorder|chop|clip|cond|e|epsdouble|epsfloat|group|inlist|inrange|irecode|matrix|maxbyte|maxdouble|maxfloat|maxint|maxlong|mi|minbyte|mindouble|minfloat|minint|minlong|missing|r|recode|replay|return|s|scalar|d|date|day|dow|doy|halfyear|mdy|month|quarter|week|year|d|daily|dofd|dofh|dofm|dofq|dofw|dofy|h|halfyearly|hofd|m|mofd|monthly|q|qofd|quarterly|tin|twithin|w|weekly|wofd|y|yearly|yh|ym|yofd|yq|yw|cholesky|colnumb|colsof|corr|det|diag|diag0cnt|el|get|hadamard|I|inv|invsym|issym|issymmetric|J|matmissing|matuniform|mreldif|nullmat|rownumb|rowsof|sweep|syminv|trace|vec|vecdiag)(?=\\(|$)"}]},e.C("^[ ]*\\*.*$",!1),e.CLCM,e.CBCM]}});hljs.registerLanguage("erlang-repl",function(e){return{k:{built_in:"spawn spawn_link self",keyword:"after and andalso|10 band begin bnot bor bsl bsr bxor case catch cond div end fun if let not of or orelse|10 query receive rem try when xor"},c:[{cN:"meta",b:"^[0-9]+> ",r:10},e.C("%","$"),{cN:"number",b:"\\b(\\d+#[a-fA-F0-9]+|\\d+(\\.\\d+)?([eE][-+]?\\d+)?)",r:0},e.ASM,e.QSM,{b:"\\?(::)?([A-Z]\\w*(::)?)+"},{b:"->"},{b:"ok"},{b:"!"},{b:"(\\b[a-z'][a-zA-Z0-9_']*:[a-z'][a-zA-Z0-9_']*)|(\\b[a-z'][a-zA-Z0-9_']*)",r:0},{b:"[A-Z][a-zA-Z0-9_']*",r:0}]}});hljs.registerLanguage("ocaml",function(e){return{aliases:["ml"],k:{keyword:"and as assert asr begin class constraint do done downto else end exception external for fun function functor if in include inherit! inherit initializer land lazy let lor lsl lsr lxor match method!|10 method mod module mutable new object of open! open or private rec sig struct then to try type val! 
val virtual when while with parser value",built_in:"array bool bytes char exn|5 float int int32 int64 list lazy_t|5 nativeint|5 string unit in_channel out_channel ref",literal:"true false"},i:/\/\/|>>/,l:"[a-z_]\\w*!?",c:[{cN:"literal",b:"\\[(\\|\\|)?\\]|\\(\\)",r:0},e.C("\\(\\*","\\*\\)",{c:["self"]}),{cN:"symbol",b:"'[A-Za-z_](?!')[\\w']*"},{cN:"type",b:"`[A-Z][\\w']*"},{cN:"type",b:"\\b[A-Z][\\w']*",r:0},{b:"[a-z_]\\w*'[\\w']*",r:0},e.inherit(e.ASM,{cN:"string",r:0}),e.inherit(e.QSM,{i:null}),{cN:"number",b:"\\b(0[xX][a-fA-F0-9_]+[Lln]?|0[oO][0-7_]+[Lln]?|0[bB][01_]+[Lln]?|[0-9][0-9_]*([Lln]|(\\.[0-9_]*)?([eE][-+]?[0-9_]+)?)?)",r:0},{b:/[-=]>/}]}});hljs.registerLanguage("rsl",function(e){return{k:{keyword:"float color point normal vector matrix while for if do return else break extern continue",built_in:"abs acos ambient area asin atan atmosphere attribute calculatenormal ceil cellnoise clamp comp concat cos degrees depth Deriv diffuse distance Du Dv environment exp faceforward filterstep floor format fresnel incident length lightsource log match max min mod noise normalize ntransform opposite option phong pnoise pow printf ptlined radians random reflect refract renderinfo round setcomp setxcomp setycomp setzcomp shadow sign sin smoothstep specular specularbrdf spline sqrt step tan texture textureinfo trace transform vtransform xcomp ycomp zcomp"},i:"",c:[e.CLCM,e.CBCM,e.QSM,e.ASM,e.CNM,{cN:"meta",b:"#",e:"$"},{cN:"class",bK:"surface displacement light volume imager",e:"\\("},{bK:"illuminate illuminance gather",e:"\\("}]}});hljs.registerLanguage("mel",function(e){return{k:"int float string vector matrix if else switch case default while do for in break continue global proc return about abs addAttr addAttributeEditorNodeHelp addDynamic addNewShelfTab addPP addPanelCategory addPrefixToName advanceToNextDrivenKey affectedNet affects aimConstraint air alias aliasAttr align alignCtx alignCurve alignSurface allViewFit ambientLight angle angleBetween animCone animCurveEditor animDisplay animView annotate appendStringArray applicationName applyAttrPreset applyTake arcLenDimContext arcLengthDimension arclen arrayMapper art3dPaintCtx artAttrCtx artAttrPaintVertexCtx artAttrSkinPaintCtx artAttrTool artBuildPaintMenu artFluidAttrCtx artPuttyCtx artSelectCtx artSetPaintCtx artUserPaintCtx assignCommand assignInputDevice assignViewportFactories attachCurve attachDeviceAttr attachSurface attrColorSliderGrp attrCompatibility attrControlGrp attrEnumOptionMenu attrEnumOptionMenuGrp attrFieldGrp attrFieldSliderGrp attrNavigationControlGrp attrPresetEditWin attributeExists attributeInfo attributeMenu attributeQuery autoKeyframe autoPlace bakeClip bakeFluidShading bakePartialHistory bakeResults bakeSimulation basename basenameEx batchRender bessel bevel bevelPlus binMembership bindSkin blend2 blendShape blendShapeEditor blendShapePanel blendTwoAttr blindDataType boneLattice boundary boxDollyCtx boxZoomCtx bufferCurve buildBookmarkMenu buildKeyframeMenu button buttonManip CBG cacheFile cacheFileCombine cacheFileMerge cacheFileTrack camera cameraView canCreateManip canvas capitalizeString catch catchQuiet ceil changeSubdivComponentDisplayLevel changeSubdivRegion channelBox character characterMap characterOutlineEditor characterize chdir checkBox checkBoxGrp checkDefaultRenderGlobals choice circle circularFillet clamp clear clearCache clip clipEditor clipEditorCurrentTimeCtx clipSchedule clipSchedulerOutliner clipTrimBefore closeCurve closeSurface cluster cmdFileOutput cmdScrollFieldExecuter 
cmdScrollFieldReporter cmdShell coarsenSubdivSelectionList collision color colorAtPoint colorEditor colorIndex colorIndexSliderGrp colorSliderButtonGrp colorSliderGrp columnLayout commandEcho commandLine commandPort compactHairSystem componentEditor compositingInterop computePolysetVolume condition cone confirmDialog connectAttr connectControl connectDynamic connectJoint connectionInfo constrain constrainValue constructionHistory container containsMultibyte contextInfo control convertFromOldLayers convertIffToPsd convertLightmap convertSolidTx convertTessellation convertUnit copyArray copyFlexor copyKey copySkinWeights cos cpButton cpCache cpClothSet cpCollision cpConstraint cpConvClothToMesh cpForces cpGetSolverAttr cpPanel cpProperty cpRigidCollisionFilter cpSeam cpSetEdit cpSetSolverAttr cpSolver cpSolverTypes cpTool cpUpdateClothUVs createDisplayLayer createDrawCtx createEditor createLayeredPsdFile createMotionField createNewShelf createNode createRenderLayer createSubdivRegion cross crossProduct ctxAbort ctxCompletion ctxEditMode ctxTraverse currentCtx currentTime currentTimeCtx currentUnit curve curveAddPtCtx curveCVCtx curveEPCtx curveEditorCtx curveIntersect curveMoveEPCtx curveOnSurface curveSketchCtx cutKey cycleCheck cylinder dagPose date defaultLightListCheckBox defaultNavigation defineDataServer defineVirtualDevice deformer deg_to_rad delete deleteAttr deleteShadingGroupsAndMaterials deleteShelfTab deleteUI deleteUnusedBrushes delrandstr detachCurve detachDeviceAttr detachSurface deviceEditor devicePanel dgInfo dgdirty dgeval dgtimer dimWhen directKeyCtx directionalLight dirmap dirname disable disconnectAttr disconnectJoint diskCache displacementToPoly displayAffected displayColor displayCull displayLevelOfDetail displayPref displayRGBColor displaySmoothness displayStats displayString displaySurface distanceDimContext distanceDimension doBlur dolly dollyCtx dopeSheetEditor dot dotProduct doubleProfileBirailSurface drag dragAttrContext draggerContext dropoffLocator duplicate duplicateCurve duplicateSurface dynCache dynControl dynExport dynExpression dynGlobals dynPaintEditor dynParticleCtx dynPref dynRelEdPanel dynRelEditor dynamicLoad editAttrLimits editDisplayLayerGlobals editDisplayLayerMembers editRenderLayerAdjustment editRenderLayerGlobals editRenderLayerMembers editor editorTemplate effector emit emitter enableDevice encodeString endString endsWith env equivalent equivalentTol erf error eval evalDeferred evalEcho event exactWorldBoundingBox exclusiveLightCheckBox exec executeForEachObject exists exp expression expressionEditorListen extendCurve extendSurface extrude fcheck fclose feof fflush fgetline fgetword file fileBrowserDialog fileDialog fileExtension fileInfo filetest filletCurve filter filterCurve filterExpand filterStudioImport findAllIntersections findAnimCurves findKeyframe findMenuItem findRelatedSkinCluster finder firstParentOf fitBspline flexor floatEq floatField floatFieldGrp floatScrollBar floatSlider floatSlider2 floatSliderButtonGrp floatSliderGrp floor flow fluidCacheInfo fluidEmitter fluidVoxelInfo flushUndo fmod fontDialog fopen formLayout format fprint frameLayout fread freeFormFillet frewind fromNativePath fwrite gamma gauss geometryConstraint getApplicationVersionAsFloat getAttr getClassification getDefaultBrush getFileList getFluidAttr getInputDeviceRange getMayaPanelTypes getModifiers getPanel getParticleAttr getPluginResource getenv getpid glRender glRenderEditor globalStitch gmatch goal gotoBindPose grabColor gradientControl 
gradientControlNoAttr graphDollyCtx graphSelectContext graphTrackCtx gravity grid gridLayout group groupObjectsByName HfAddAttractorToAS HfAssignAS HfBuildEqualMap HfBuildFurFiles HfBuildFurImages HfCancelAFR HfConnectASToHF HfCreateAttractor HfDeleteAS HfEditAS HfPerformCreateAS HfRemoveAttractorFromAS HfSelectAttached HfSelectAttractors HfUnAssignAS hardenPointCurve hardware hardwareRenderPanel headsUpDisplay headsUpMessage help helpLine hermite hide hilite hitTest hotBox hotkey hotkeyCheck hsv_to_rgb hudButton hudSlider hudSliderButton hwReflectionMap hwRender hwRenderLoad hyperGraph hyperPanel hyperShade hypot iconTextButton iconTextCheckBox iconTextRadioButton iconTextRadioCollection iconTextScrollList iconTextStaticLabel ikHandle ikHandleCtx ikHandleDisplayScale ikSolver ikSplineHandleCtx ikSystem ikSystemInfo ikfkDisplayMethod illustratorCurves image imfPlugins inheritTransform insertJoint insertJointCtx insertKeyCtx insertKnotCurve insertKnotSurface instance instanceable instancer intField intFieldGrp intScrollBar intSlider intSliderGrp interToUI internalVar intersect iprEngine isAnimCurve isConnected isDirty isParentOf isSameObject isTrue isValidObjectName isValidString isValidUiName isolateSelect itemFilter itemFilterAttr itemFilterRender itemFilterType joint jointCluster jointCtx jointDisplayScale jointLattice keyTangent keyframe keyframeOutliner keyframeRegionCurrentTimeCtx keyframeRegionDirectKeyCtx keyframeRegionDollyCtx keyframeRegionInsertKeyCtx keyframeRegionMoveKeyCtx keyframeRegionScaleKeyCtx keyframeRegionSelectKeyCtx keyframeRegionSetKeyCtx keyframeRegionTrackCtx keyframeStats lassoContext lattice latticeDeformKeyCtx launch launchImageEditor layerButton layeredShaderPort layeredTexturePort layout layoutDialog lightList lightListEditor lightListPanel lightlink lineIntersection linearPrecision linstep listAnimatable listAttr listCameras listConnections listDeviceAttachments listHistory listInputDeviceAxes listInputDeviceButtons listInputDevices listMenuAnnotation listNodeTypes listPanelCategories listRelatives listSets listTransforms listUnselected listerEditor loadFluid loadNewShelf loadPlugin loadPluginLanguageResources loadPrefObjects localizedPanelLabel lockNode loft log longNameOf lookThru ls lsThroughFilter lsType lsUI Mayatomr mag makeIdentity makeLive makePaintable makeRoll makeSingleSurface makeTubeOn makebot manipMoveContext manipMoveLimitsCtx manipOptions manipRotateContext manipRotateLimitsCtx manipScaleContext manipScaleLimitsCtx marker match max memory menu menuBarLayout menuEditor menuItem menuItemToShelf menuSet menuSetPref messageLine min minimizeApp mirrorJoint modelCurrentTimeCtx modelEditor modelPanel mouse movIn movOut move moveIKtoFK moveKeyCtx moveVertexAlongDirection multiProfileBirailSurface mute nParticle nameCommand nameField namespace namespaceInfo newPanelItems newton nodeCast nodeIconButton nodeOutliner nodePreset nodeType noise nonLinear normalConstraint normalize nurbsBoolean nurbsCopyUVSet nurbsCube nurbsEditUV nurbsPlane nurbsSelect nurbsSquare nurbsToPoly nurbsToPolygonsPref nurbsToSubdiv nurbsToSubdivPref nurbsUVSet nurbsViewDirectionVector objExists objectCenter objectLayer objectType objectTypeUI obsoleteProc oceanNurbsPreviewPlane offsetCurve offsetCurveOnSurface offsetSurface openGLExtension openMayaPref optionMenu optionMenuGrp optionVar orbit orbitCtx orientConstraint outlinerEditor outlinerPanel overrideModifier paintEffectsDisplay pairBlend palettePort paneLayout panel panelConfiguration panelHistory paramDimContext 
paramDimension paramLocator parent parentConstraint particle particleExists particleInstancer particleRenderInfo partition pasteKey pathAnimation pause pclose percent performanceOptions pfxstrokes pickWalk picture pixelMove planarSrf plane play playbackOptions playblast plugAttr plugNode pluginInfo pluginResourceUtil pointConstraint pointCurveConstraint pointLight pointMatrixMult pointOnCurve pointOnSurface pointPosition poleVectorConstraint polyAppend polyAppendFacetCtx polyAppendVertex polyAutoProjection polyAverageNormal polyAverageVertex polyBevel polyBlendColor polyBlindData polyBoolOp polyBridgeEdge polyCacheMonitor polyCheck polyChipOff polyClipboard polyCloseBorder polyCollapseEdge polyCollapseFacet polyColorBlindData polyColorDel polyColorPerVertex polyColorSet polyCompare polyCone polyCopyUV polyCrease polyCreaseCtx polyCreateFacet polyCreateFacetCtx polyCube polyCut polyCutCtx polyCylinder polyCylindricalProjection polyDelEdge polyDelFacet polyDelVertex polyDuplicateAndConnect polyDuplicateEdge polyEditUV polyEditUVShell polyEvaluate polyExtrudeEdge polyExtrudeFacet polyExtrudeVertex polyFlipEdge polyFlipUV polyForceUV polyGeoSampler polyHelix polyInfo polyInstallAction polyLayoutUV polyListComponentConversion polyMapCut polyMapDel polyMapSew polyMapSewMove polyMergeEdge polyMergeEdgeCtx polyMergeFacet polyMergeFacetCtx polyMergeUV polyMergeVertex polyMirrorFace polyMoveEdge polyMoveFacet polyMoveFacetUV polyMoveUV polyMoveVertex polyNormal polyNormalPerVertex polyNormalizeUV polyOptUvs polyOptions polyOutput polyPipe polyPlanarProjection polyPlane polyPlatonicSolid polyPoke polyPrimitive polyPrism polyProjection polyPyramid polyQuad polyQueryBlindData polyReduce polySelect polySelectConstraint polySelectConstraintMonitor polySelectCtx polySelectEditCtx polySeparate polySetToFaceNormal polySewEdge polyShortestPathCtx polySmooth polySoftEdge polySphere polySphericalProjection polySplit polySplitCtx polySplitEdge polySplitRing polySplitVertex polyStraightenUVBorder polySubdivideEdge polySubdivideFacet polyToSubdiv polyTorus polyTransfer polyTriangulate polyUVSet polyUnite polyWedgeFace popen popupMenu pose pow preloadRefEd print progressBar progressWindow projFileViewer projectCurve projectTangent projectionContext projectionManip promptDialog propModCtx propMove psdChannelOutliner psdEditTextureFile psdExport psdTextureFile putenv pwd python querySubdiv quit rad_to_deg radial radioButton radioButtonGrp radioCollection radioMenuItemCollection rampColorPort rand randomizeFollicles randstate rangeControl readTake rebuildCurve rebuildSurface recordAttr recordDevice redo reference referenceEdit referenceQuery refineSubdivSelectionList refresh refreshAE registerPluginResource rehash reloadImage removeJoint removeMultiInstance removePanelCategory rename renameAttr renameSelectionList renameUI render renderGlobalsNode renderInfo renderLayerButton renderLayerParent renderLayerPostProcess renderLayerUnparent renderManip renderPartition renderQualityNode renderSettings renderThumbnailUpdate renderWindowEditor renderWindowSelectContext renderer reorder reorderDeformers requires reroot resampleFluid resetAE resetPfxToPolyCamera resetTool resolutionNode retarget reverseCurve reverseSurface revolve rgb_to_hsv rigidBody rigidSolver roll rollCtx rootOf rot rotate rotationInterpolation roundConstantRadius rowColumnLayout rowLayout runTimeCommand runup sampleImage saveAllShelves saveAttrPreset saveFluid saveImage saveInitialState saveMenu savePrefObjects savePrefs saveShelf saveToolSettings scale 
scaleBrushBrightness scaleComponents scaleConstraint scaleKey scaleKeyCtx sceneEditor sceneUIReplacement scmh scriptCtx scriptEditorInfo scriptJob scriptNode scriptTable scriptToShelf scriptedPanel scriptedPanelType scrollField scrollLayout sculpt searchPathArray seed selLoadSettings select selectContext selectCurveCV selectKey selectKeyCtx selectKeyframeRegionCtx selectMode selectPref selectPriority selectType selectedNodes selectionConnection separator setAttr setAttrEnumResource setAttrMapping setAttrNiceNameResource setConstraintRestPosition setDefaultShadingGroup setDrivenKeyframe setDynamic setEditCtx setEditor setFluidAttr setFocus setInfinity setInputDeviceMapping setKeyCtx setKeyPath setKeyframe setKeyframeBlendshapeTargetWts setMenuMode setNodeNiceNameResource setNodeTypeFlag setParent setParticleAttr setPfxToPolyCamera setPluginResource setProject setStampDensity setStartupMessage setState setToolTo setUITemplate setXformManip sets shadingConnection shadingGeometryRelCtx shadingLightRelCtx shadingNetworkCompare shadingNode shapeCompare shelfButton shelfLayout shelfTabLayout shellField shortNameOf showHelp showHidden showManipCtx showSelectionInTitle showShadingGroupAttrEditor showWindow sign simplify sin singleProfileBirailSurface size sizeBytes skinCluster skinPercent smoothCurve smoothTangentSurface smoothstep snap2to2 snapKey snapMode snapTogetherCtx snapshot soft softMod softModCtx sort sound soundControl source spaceLocator sphere sphrand spotLight spotLightPreviewPort spreadSheetEditor spring sqrt squareSurface srtContext stackTrace startString startsWith stitchAndExplodeShell stitchSurface stitchSurfacePoints strcmp stringArrayCatenate stringArrayContains stringArrayCount stringArrayInsertAtIndex stringArrayIntersector stringArrayRemove stringArrayRemoveAtIndex stringArrayRemoveDuplicates stringArrayRemoveExact stringArrayToString stringToStringArray strip stripPrefixFromName stroke subdAutoProjection subdCleanTopology subdCollapse subdDuplicateAndConnect subdEditUV subdListComponentConversion subdMapCut subdMapSewMove subdMatchTopology subdMirror subdToBlind subdToPoly subdTransferUVsToCache subdiv subdivCrease subdivDisplaySmoothness substitute substituteAllString substituteGeometry substring surface surfaceSampler surfaceShaderList swatchDisplayPort switchTable symbolButton symbolCheckBox sysFile system tabLayout tan tangentConstraint texLatticeDeformContext texManipContext texMoveContext texMoveUVShellContext texRotateContext texScaleContext texSelectContext texSelectShortestPathCtx texSmudgeUVContext texWinToolCtx text textCurves textField textFieldButtonGrp textFieldGrp textManip textScrollList textToShelf textureDisplacePlane textureHairColor texturePlacementContext textureWindow threadCount threePointArcCtx timeControl timePort timerX toNativePath toggle toggleAxis toggleWindowVisibility tokenize tokenizeList tolerance tolower toolButton toolCollection toolDropped toolHasOptions toolPropertyWindow torus toupper trace track trackCtx transferAttributes transformCompare transformLimits translator trim trunc truncateFluidCache truncateHairCache tumble tumbleCtx turbulence twoPointArcCtx uiRes uiTemplate unassignInputDevice undo undoInfo ungroup uniform unit unloadPlugin untangleUV untitledFileName untrim upAxis updateAE userCtx uvLink uvSnapshot validateShelfName vectorize view2dToolCtx viewCamera viewClipPlane viewFit viewHeadOn viewLookAt viewManip viewPlace viewSet visor volumeAxis vortex waitCursor warning webBrowser webBrowserPrefs whatIs window windowPref wire 
wireContext workspace wrinkle wrinkleContext writeTake xbmLangPathList xform",i:"",c:[e.CNM,e.ASM,e.QSM,{cN:"string",b:"`",e:"`",c:[e.BE]},{b:"[\\$\\%\\@](\\^\\w\\b|#\\w+|[^\\s\\w{]|{\\w+}|\\w+)"},e.CLCM,e.CBCM]}});hljs.registerLanguage("step21",function(e){var i="[A-Z_][A-Z0-9_.]*",r={keyword:"HEADER ENDSEC DATA"},t={cN:"meta",b:"ISO-10303-21;",r:10},n={cN:"meta",b:"END-ISO-10303-21;",r:10};return{aliases:["p21","step","stp"],cI:!0,l:i,k:r,c:[t,n,e.CLCM,e.CBCM,e.C("/\\*\\*!","\\*/"),e.CNM,e.inherit(e.ASM,{i:null}),e.inherit(e.QSM,{i:null}),{cN:"string",b:"'",e:"'"},{cN:"symbol",v:[{b:"#",e:"\\d+",i:"\\W"}]}]}});hljs.registerLanguage("q",function(e){var s={keyword:"do while select delete by update from",literal:"0b 1b",built_in:"neg not null string reciprocal floor ceiling signum mod xbar xlog and or each scan over prior mmu lsq inv md5 ltime gtime count first var dev med cov cor all any rand sums prds mins maxs fills deltas ratios avgs differ prev next rank reverse iasc idesc asc desc msum mcount mavg mdev xrank mmin mmax xprev rotate distinct group where flip type key til get value attr cut set upsert raze union inter except cross sv vs sublist enlist read0 read1 hopen hclose hdel hsym hcount peach system ltrim rtrim trim lower upper ssr view tables views cols xcols keys xkey xcol xasc xdesc fkeys meta lj aj aj0 ij pj asof uj ww wj wj1 fby xgroup ungroup ej save load rsave rload show csv parse eval min max avg wavg wsum sin cos tan sum",type:"`float `double int `timestamp `timespan `datetime `time `boolean `symbol `char `byte `short `long `real `month `date `minute `second `guid"};return{aliases:["k","kdb"],k:s,l:/(`?)[A-Za-z0-9_]+\b/,c:[e.CLCM,e.QSM,e.CNM]}});hljs.registerLanguage("ini",function(e){var b={cN:"string",c:[e.BE],v:[{b:"'''",e:"'''",r:10},{b:'"""',e:'"""',r:10},{b:'"',e:'"'},{b:"'",e:"'"}]};return{aliases:["toml"],cI:!0,i:/\S/,c:[e.C(";","$"),e.HCM,{cN:"section",b:/^\s*\[+/,e:/\]+/},{b:/^[a-z0-9\[\]_-]+\s*=\s*/,e:"$",rB:!0,c:[{cN:"attr",b:/[a-z0-9\[\]_-]+/},{b:/=/,eW:!0,r:0,c:[{cN:"literal",b:/\bon|off|true|false|yes|no\b/},{cN:"variable",v:[{b:/\$[\w\d"][\w\d_]*/},{b:/\$\{(.*?)}/}]},b,{cN:"number",b:/([\+\-]+)?[\d]+_[\d_]+/},e.NM]}]}]}});hljs.registerLanguage("gcode",function(N){var e="[A-Z_][A-Z0-9_.]*",c="\\%",E="IF DO WHILE ENDWHILE CALL ENDIF SUB ENDSUB GOTO REPEAT ENDREPEAT EQ LT GT NE GE LE OR XOR",i={cN:"meta",b:"([O])([0-9]+)"},n=[N.CLCM,N.CBCM,N.C(/\(/,/\)/),N.inherit(N.CNM,{b:"([-+]?([0-9]*\\.?[0-9]+\\.?))|"+N.CNR}),N.inherit(N.ASM,{i:null}),N.inherit(N.QSM,{i:null}),{cN:"name",b:"([G])([0-9]+\\.?[0-9]?)"},{cN:"name",b:"([M])([0-9]+\\.?[0-9]?)"},{cN:"attr",b:"(VC|VS|#)",e:"(\\d+)"},{cN:"attr",b:"(VZOFX|VZOFY|VZOFZ)"},{cN:"built_in",b:"(ATAN|ABS|ACOS|ASIN|SIN|COS|EXP|FIX|FUP|ROUND|LN|TAN)(\\[)",e:"([-+]?([0-9]*\\.?[0-9]+\\.?))(\\])"},{cN:"symbol",v:[{b:"N",e:"\\d+",i:"\\W"}]}];return{aliases:["nc"],cI:!0,l:e,k:E,c:[{cN:"meta",b:c},i].concat(n)}});hljs.registerLanguage("markdown",function(e){return{aliases:["md","mkdown","mkd"],c:[{cN:"section",v:[{b:"^#{1,6}",e:"$"},{b:"^.+?\\n[=-]{2,}$"}]},{b:"<",e:">",sL:"xml",r:0},{cN:"bullet",b:"^([*+-]|(\\d+\\.))\\s+"},{cN:"strong",b:"[*_]{2}.+?[*_]{2}"},{cN:"emphasis",v:[{b:"\\*.+?\\*"},{b:"_.+?_",r:0}]},{cN:"quote",b:"^>\\s+",e:"$"},{cN:"code",v:[{b:"`.+?`"},{b:"^( {4}| 
)",e:"$",r:0}]},{b:"^[-\\*]{3,}",e:"$"},{b:"\\[.+?\\][\\(\\[].*?[\\)\\]]",rB:!0,c:[{cN:"string",b:"\\[",e:"\\]",eB:!0,rE:!0,r:0},{cN:"link",b:"\\]\\(",e:"\\)",eB:!0,eE:!0},{cN:"symbol",b:"\\]\\[",e:"\\]",eB:!0,eE:!0}],r:10},{b:"^\\[.+\\]:",rB:!0,c:[{cN:"symbol",b:"\\[",e:"\\]:",eB:!0,eE:!0,starts:{cN:"link",e:"$"}}]}]}});hljs.registerLanguage("scala",function(e){var t={cN:"meta",b:"@[A-Za-z]+"},a={cN:"subst",v:[{b:"\\$[A-Za-z0-9_]+"},{b:"\\${",e:"}"}]},r={cN:"string",v:[{b:'"',e:'"',i:"\\n",c:[e.BE]},{b:'"""',e:'"""',r:10},{b:'[a-z]+"',e:'"',i:"\\n",c:[e.BE,a]},{cN:"string",b:'[a-z]+"""',e:'"""',c:[a],r:10}]},c={cN:"symbol",b:"'\\w[\\w\\d_]*(?!')"},i={cN:"type",b:"\\b[A-Z][A-Za-z0-9_]*",r:0},s={cN:"title",b:/[^0-9\n\t "'(),.`{}\[\]:;][^\n\t "'(),.`{}\[\]:;]+|[^0-9\n\t "'(),.`{}\[\]:;=]/,r:0},n={cN:"class",bK:"class object trait type",e:/[:={\[\n;]/,eE:!0,c:[{bK:"extends with",r:10},{b:/\[/,e:/\]/,eB:!0,eE:!0,r:0,c:[i]},{cN:"params",b:/\(/,e:/\)/,eB:!0,eE:!0,r:0,c:[i]},s]},l={cN:"function",bK:"def",e:/[:={\[(\n;]/,eE:!0,c:[s]};return{k:{literal:"true false null",keyword:"type yield lazy override def with val var sealed abstract private trait object if forSome for while throw finally protected extends import final return else break new catch super class case package default try this match continue throws implicit"},c:[e.CLCM,e.CBCM,r,c,i,l,n,e.CNM,t]}});hljs.registerLanguage("tp",function(O){var R={cN:"number",b:"[1-9][0-9]*",r:0},E={cN:"symbol",b:":[^\\]]+"},T={cN:"built_in",b:"(AR|P|PAYLOAD|PR|R|SR|RSR|LBL|VR|UALM|MESSAGE|UTOOL|UFRAME|TIMER| TIMER_OVERFLOW|JOINT_MAX_SPEED|RESUME_PROG|DIAG_REC)\\[",e:"\\]",c:["self",R,E]},N={cN:"built_in",b:"(AI|AO|DI|DO|F|RI|RO|UI|UO|GI|GO|SI|SO)\\[",e:"\\]",c:["self",R,O.QSM,E]};return{k:{keyword:"ABORT ACC ADJUST AND AP_LD BREAK CALL CNT COL CONDITION CONFIG DA DB DIV DETECT ELSE END ENDFOR ERR_NUM ERROR_PROG FINE FOR GP GUARD INC IF JMP LINEAR_MAX_SPEED LOCK MOD MONITOR OFFSET Offset OR OVERRIDE PAUSE PREG PTH RT_LD RUN SELECT SKIP Skip TA TB TO TOOL_OFFSET Tool_Offset UF UT UFRAME_NUM UTOOL_NUM UNLOCK WAIT X Y Z W P R STRLEN SUBSTR FINDSTR VOFFSET PROG ATTR MN POS",literal:"ON OFF max_speed LPOS JPOS ENABLE DISABLE START STOP RESET"},c:[T,N,{cN:"keyword",b:"/(PROG|ATTR|MN|POS|END)\\b"},{cN:"keyword",b:"(CALL|RUN|POINT_LOGIC|LBL)\\b"},{cN:"keyword",b:"\\b(ACC|CNT|Skip|Offset|PSPD|RT_LD|AP_LD|Tool_Offset)"},{cN:"number",b:"\\d+(sec|msec|mm/sec|cm/min|inch/min|deg/sec|mm|in|cm)?\\b",r:0},O.C("//","[;$]"),O.C("!","[;$]"),O.C("--eg:","$"),O.QSM,{cN:"string",b:"'",e:"'"},O.CNM,{cN:"variable",b:"\\$[A-Za-z0-9_]+"}]}});hljs.registerLanguage("scilab",function(e){var s=[e.CNM,{cN:"string",b:"'|\"",e:"'|\"",c:[e.BE,{b:"''"}]}];return{aliases:["sci"],l:/%?\w+/,k:{keyword:"abort break case clear catch continue do elseif else endfunction end for function global if pause return resume select try then while",literal:"%f %F %t %T %pi %eps %inf %nan %e %i %z %s",built_in:"abs and acos asin atan ceil cd chdir clearglobal cosh cos cumprod deff disp error exec execstr exists exp eye gettext floor fprintf fread fsolve imag isdef isempty isinfisnan isvector lasterror length load linspace list listfiles log10 log2 log max min msprintf mclose mopen ones or pathconvert poly printf prod pwd rand real round sinh sin size gsort sprintf sqrt strcat strcmps tring sum system tanh tan type typename warning zeros 
matrix"},i:'("|#|/\\*|\\s+/\\w+)',c:[{cN:"function",bK:"function",e:"$",c:[e.UTM,{cN:"params",b:"\\(",e:"\\)"}]},{b:"[a-zA-Z_][a-zA-Z_0-9]*('+[\\.']*|[\\.']+)",e:"",r:0},{b:"\\[",e:"\\]'*[\\.']*",r:0,c:s},e.C("//","$")].concat(s)}});hljs.registerLanguage("erb",function(e){return{sL:"xml",c:[e.C("<%#","%>"),{b:"<%[%=-]?",e:"[%-]?%>",sL:"ruby",eB:!0,eE:!0}]}});hljs.registerLanguage("sqf",function(e){var t=["!","-","+","!=","%","&&","*","/","=","==",">",">=","<","<=","or","plus","^",":",">>","abs","accTime","acos","action","actionKeys","actionKeysImages","actionKeysNames","actionKeysNamesArray","actionName","activateAddons","activatedAddons","activateKey","addAction","addBackpack","addBackpackCargo","addBackpackCargoGlobal","addBackpackGlobal","addCamShake","addCuratorAddons","addCuratorCameraArea","addCuratorEditableObjects","addCuratorEditingArea","addCuratorPoints","addEditorObject","addEventHandler","addGoggles","addGroupIcon","addHandgunItem","addHeadgear","addItem","addItemCargo","addItemCargoGlobal","addItemPool","addItemToBackpack","addItemToUniform","addItemToVest","addLiveStats","addMagazine","addMagazine array","addMagazineAmmoCargo","addMagazineCargo","addMagazineCargoGlobal","addMagazineGlobal","addMagazinePool","addMagazines","addMagazineTurret","addMenu","addMenuItem","addMissionEventHandler","addMPEventHandler","addMusicEventHandler","addPrimaryWeaponItem","addPublicVariableEventHandler","addRating","addResources","addScore","addScoreSide","addSecondaryWeaponItem","addSwitchableUnit","addTeamMember","addToRemainsCollector","addUniform","addVehicle","addVest","addWaypoint","addWeapon","addWeaponCargo","addWeaponCargoGlobal","addWeaponGlobal","addWeaponPool","addWeaponTurret","agent","agents","AGLToASL","aimedAtTarget","aimPos","airDensityRTD","airportSide","AISFinishHeal","alive","allControls","allCurators","allDead","allDeadMen","allDisplays","allGroups","allMapMarkers","allMines","allMissionObjects","allow3DMode","allowCrewInImmobile","allowCuratorLogicIgnoreAreas","allowDamage","allowDammage","allowFileOperations","allowFleeing","allowGetIn","allPlayers","allSites","allTurrets","allUnits","allUnitsUAV","allVariables","ammo","and","animate","animateDoor","animationPhase","animationState","append","armoryPoints","arrayIntersect","asin","ASLToAGL","ASLToATL","assert","assignAsCargo","assignAsCargoIndex","assignAsCommander","assignAsDriver","assignAsGunner","assignAsTurret","assignCurator","assignedCargo","assignedCommander","assignedDriver","assignedGunner","assignedItems","assignedTarget","assignedTeam","assignedVehicle","assignedVehicleRole","assignItem","assignTeam","assignToAirport","atan","atan2","atg","ATLToASL","attachedObject","attachedObjects","attachedTo","attachObject","attachTo","attackEnabled","backpack","backpackCargo","backpackContainer","backpackItems","backpackMagazines","backpackSpaceFor","behaviour","benchmark","binocular","blufor","boundingBox","boundingBoxReal","boundingCenter","breakOut","breakTo","briefingName","buildingExit","buildingPos","buttonAction","buttonSetAction","cadetMode","call","callExtension","camCommand","camCommit","camCommitPrepared","camCommitted","camConstuctionSetParams","camCreate","camDestroy","cameraEffect","cameraEffectEnableHUD","cameraInterest","cameraOn","cameraView","campaignConfigFile","camPreload","camPreloaded","camPrepareBank","camPrepareDir","camPrepareDive","camPrepareFocus","camPrepareFov","camPrepareFovRange","camPreparePos","camPrepareRelPos","camPrepareTarget","camSetBank","camSetDir","camSetDive","camSetFocus","camSet
Fov","camSetFovRange","camSetPos","camSetRelPos","camSetTarget","camTarget","camUseNVG","canAdd","canAddItemToBackpack","canAddItemToUniform","canAddItemToVest","cancelSimpleTaskDestination","canFire","canMove","canSlingLoad","canStand","canUnloadInCombat","captive","captiveNum","case","catch","cbChecked","cbSetChecked","ceil","cheatsEnabled","checkAIFeature","civilian","className","clearAllItemsFromBackpack","clearBackpackCargo","clearBackpackCargoGlobal","clearGroupIcons","clearItemCargo","clearItemCargoGlobal","clearItemPool","clearMagazineCargo","clearMagazineCargoGlobal","clearMagazinePool","clearOverlay","clearRadio","clearWeaponCargo","clearWeaponCargoGlobal","clearWeaponPool","closeDialog","closeDisplay","closeOverlay","collapseObjectTree","combatMode","commandArtilleryFire","commandChat","commander","commandFire","commandFollow","commandFSM","commandGetOut","commandingMenu","commandMove","commandRadio","commandStop","commandTarget","commandWatch","comment","commitOverlay","compile","compileFinal","completedFSM","composeText","configClasses","configFile","configHierarchy","configName","configProperties","configSourceMod","configSourceModList","connectTerminalToUAV","controlNull","controlsGroupCtrl","copyFromClipboard","copyToClipboard","copyWaypoints","cos","count","countEnemy","countFriendly","countSide","countType","countUnknown","createAgent","createCenter","createDialog","createDiaryLink","createDiaryRecord","createDiarySubject","createDisplay","createGearDialog","createGroup","createGuardedPoint","createLocation","createMarker","createMarkerLocal","createMenu","createMine","createMissionDisplay","createSimpleTask","createSite","createSoundSource","createTask","createTeam","createTrigger","createUnit","createUnit array","createVehicle","createVehicle 
array","createVehicleCrew","createVehicleLocal","crew","ctrlActivate","ctrlAddEventHandler","ctrlAutoScrollDelay","ctrlAutoScrollRewind","ctrlAutoScrollSpeed","ctrlChecked","ctrlClassName","ctrlCommit","ctrlCommitted","ctrlCreate","ctrlDelete","ctrlEnable","ctrlEnabled","ctrlFade","ctrlHTMLLoaded","ctrlIDC","ctrlIDD","ctrlMapAnimAdd","ctrlMapAnimClear","ctrlMapAnimCommit","ctrlMapAnimDone","ctrlMapCursor","ctrlMapMouseOver","ctrlMapScale","ctrlMapScreenToWorld","ctrlMapWorldToScreen","ctrlModel","ctrlModelDirAndUp","ctrlModelScale","ctrlParent","ctrlPosition","ctrlRemoveAllEventHandlers","ctrlRemoveEventHandler","ctrlScale","ctrlSetActiveColor","ctrlSetAutoScrollDelay","ctrlSetAutoScrollRewind","ctrlSetAutoScrollSpeed","ctrlSetBackgroundColor","ctrlSetChecked","ctrlSetEventHandler","ctrlSetFade","ctrlSetFocus","ctrlSetFont","ctrlSetFontH1","ctrlSetFontH1B","ctrlSetFontH2","ctrlSetFontH2B","ctrlSetFontH3","ctrlSetFontH3B","ctrlSetFontH4","ctrlSetFontH4B","ctrlSetFontH5","ctrlSetFontH5B","ctrlSetFontH6","ctrlSetFontH6B","ctrlSetFontHeight","ctrlSetFontHeightH1","ctrlSetFontHeightH2","ctrlSetFontHeightH3","ctrlSetFontHeightH4","ctrlSetFontHeightH5","ctrlSetFontHeightH6","ctrlSetFontP","ctrlSetFontPB","ctrlSetForegroundColor","ctrlSetModel","ctrlSetModelDirAndUp","ctrlSetModelScale","ctrlSetPosition","ctrlSetScale","ctrlSetStructuredText","ctrlSetText","ctrlSetTextColor","ctrlSetTooltip","ctrlSetTooltipColorBox","ctrlSetTooltipColorShade","ctrlSetTooltipColorText","ctrlShow","ctrlShown","ctrlText","ctrlTextHeight","ctrlType","ctrlVisible","curatorAddons","curatorCamera","curatorCameraArea","curatorCameraAreaCeiling","curatorCoef","curatorEditableObjects","curatorEditingArea","curatorEditingAreaType","curatorMouseOver","curatorPoints","curatorRegisteredObjects","curatorSelected","curatorWaypointCost","currentChannel","currentCommand","currentMagazine","currentMagazineDetail","currentMagazineDetailTurret","currentMagazineTurret","currentMuzzle","currentNamespace","currentTask","currentTasks","currentThrowable","currentVisionMode","currentWaypoint","currentWeapon","currentWeaponMode","currentWeaponTurret","currentZeroing","cursorTarget","customChat","customRadio","cutFadeOut","cutObj","cutRsc","cutText","damage","date","dateToNumber","daytime","deActivateKey","debriefingText","debugFSM","debugLog","default","deg","deleteAt","deleteCenter","deleteCollection","deleteEditorObject","deleteGroup","deleteIdentity","deleteLocation","deleteMarker","deleteMarkerLocal","deleteRange","deleteResources","deleteSite","deleteStatus","deleteTeam","deleteVehicle","deleteVehicleCrew","deleteWaypoint","detach","detectedMines","diag activeMissionFSMs","diag activeSQFScripts","diag activeSQSScripts","diag captureFrame","diag captureSlowFrame","diag fps","diag fpsMin","diag frameNo","diag log","diag logSlowFrame","diag 
tickTime","dialog","diarySubjectExists","didJIP","didJIPOwner","difficulty","difficultyEnabled","difficultyEnabledRTD","direction","directSay","disableAI","disableCollisionWith","disableConversation","disableDebriefingStats","disableSerialization","disableTIEquipment","disableUAVConnectability","disableUserInput","displayAddEventHandler","displayCtrl","displayNull","displayRemoveAllEventHandlers","displayRemoveEventHandler","displaySetEventHandler","dissolveTeam","distance","distance2D","distanceSqr","distributionRegion","do","doArtilleryFire","doFire","doFollow","doFSM","doGetOut","doMove","doorPhase","doStop","doTarget","doWatch","drawArrow","drawEllipse","drawIcon","drawIcon3D","drawLine","drawLine3D","drawLink","drawLocation","drawRectangle","driver","drop","east","echo","editObject","editorSetEventHandler","effectiveCommander","else","emptyPositions","enableAI","enableAIFeature","enableAttack","enableCamShake","enableCaustics","enableCollisionWith","enableCopilot","enableDebriefingStats","enableDiagLegend","enableEndDialog","enableEngineArtillery","enableEnvironment","enableFatigue","enableGunLights","enableIRLasers","enableMimics","enablePersonTurret","enableRadio","enableReload","enableRopeAttach","enableSatNormalOnDetail","enableSaving","enableSentences","enableSimulation","enableSimulationGlobal","enableTeamSwitch","enableUAVConnectability","enableUAVWaypoints","endLoadingScreen","endMission","engineOn","enginesIsOnRTD","enginesRpmRTD","enginesTorqueRTD","entities","estimatedEndServerTime","estimatedTimeLeft","evalObjectArgument","everyBackpack","everyContainer","exec","execEditorScript","execFSM","execVM","exit","exitWith","exp","expectedDestination","eyeDirection","eyePos","face","faction","fadeMusic","fadeRadio","fadeSound","fadeSpeech","failMission","false","fillWeaponsFromPool","find","findCover","findDisplay","findEditorObject","findEmptyPosition","findEmptyPositionReady","findNearestEnemy","finishMissionInit","finite","fire","fireAtTarget","firstBackpack","flag","flagOwner","fleeing","floor","flyInHeight","fog","fogForecast","fogParams","for","forceAddUniform","forceEnd","forceMap","forceRespawn","forceSpeed","forceWalk","forceWeaponFire","forceWeatherChange","forEach","forEachMember","forEachMemberAgent","forEachMemberTeam","format","formation","formationDirection","formationLeader","formationMembers","formationPosition","formationTask","formatText","formLeader","freeLook","from","fromEditor","fuel","fullCrew","gearSlotAmmoCount","gearSlotData","getAllHitPointsDamage","getAmmoCargo","getArray","getArtilleryAmmo","getArtilleryComputerSettings","getArtilleryETA","getAssignedCuratorLogic","getAssignedCuratorUnit","getBackpackCargo","getBleedingRemaining","getBurningValue","getCargoIndex","getCenterOfMass","getClientState","getConnectedUAV","getDammage","getDescription","getDir","getDirVisual","getDLCs","getEditorCamera","getEditorMode","getEditorObjectScope","getElevationOffset","getFatigue","getFriend","getFSMVariable","getFuelCargo","getGroupIcon","getGroupIconParams","getGroupIcons","getHideFrom","getHit","getHitIndex","getHitPointDamage","getItemCargo","getMagazineCargo","getMarkerColor","getMarkerPos","getMarkerSize","getMarkerType","getMass","getModelInfo","getNumber","getObjectArgument","getObjectChildren","getObjectDLC","getObjectMaterials","getObjectProxy","getObjectTextures","getObjectType","getObjectViewDistance","getOxygenRemaining","getPersonUsedDLCs","getPlayerChannel","getPlayerUID","getPos","getPosASL","getPosASLVisual","getPosASLW","getPosATL","getPosATLVisual
","getPosVisual","getPosWorld","getRepairCargo","getResolution","getShadowDistance","getSlingLoad","getSpeed","getSuppression","getTerrainHeightASL","getText","getVariable","getWeaponCargo","getWPPos","glanceAt","globalChat","globalRadio","goggles","goto","group","groupChat","groupFromNetId","groupIconSelectable","groupIconsVisible","groupId","groupOwner","groupRadio","groupSelectedUnits","groupSelectUnit","grpNull","gunner","gusts","halt","handgunItems","handgunMagazine","handgunWeapon","handsHit","hasInterface","hasWeapon","hcAllGroups","hcGroupParams","hcLeader","hcRemoveAllGroups","hcRemoveGroup","hcSelected","hcSelectGroup","hcSetGroup","hcShowBar","hcShownBar","headgear","hideBody","hideObject","hideObjectGlobal","hint","hintC","hintCadet","hintSilent","hmd","hostMission","htmlLoad","HUDMovementLevels","humidity","if","image","importAllGroups","importance","in","incapacitatedState","independent","inflame","inflamed","inGameUISetEventHandler","inheritsFrom","initAmbientLife","inputAction","inRangeOfArtillery","insertEditorObject","intersect","isAbleToBreathe","isAgent","isArray","isAutoHoverOn","isAutonomous","isAutotest","isBleeding","isBurning","isClass","isCollisionLightOn","isCopilotEnabled","isDedicated","isDLCAvailable","isEngineOn","isEqualTo","isFlashlightOn","isFlatEmpty","isForcedWalk","isFormationLeader","isHidden","isInRemainsCollector","isInstructorFigureEnabled","isIRLaserOn","isKeyActive","isKindOf","isLightOn","isLocalized","isManualFire","isMarkedForCollection","isMultiplayer","isNil","isNull","isNumber","isObjectHidden","isObjectRTD","isOnRoad","isPipEnabled","isPlayer","isRealTime","isServer","isShowing3DIcons","isSteamMission","isStreamFriendlyUIEnabled","isText","isTouchingGround","isTurnedOut","isTutHintsEnabled","isUAVConnectable","isUAVConnected","isUniformAllowed","isWalking","isWeaponDeployed","isWeaponRested","itemCargo","items","itemsWithMagazines","join","joinAs","joinAsSilent","joinSilent","joinString","kbAddDatabase","kbAddDatabaseTargets","kbAddTopic","kbHasTopic","kbReact","kbRemoveTopic","kbTell","kbWasSaid","keyImage","keyName","knowsAbout","land","landAt","landResult","language","laserTarget","lbAdd","lbClear","lbColor","lbCurSel","lbData","lbDelete","lbIsSelected","lbPicture","lbSelection","lbSetColor","lbSetCurSel","lbSetData","lbSetPicture","lbSetPictureColor","lbSetPictureColorDisabled","lbSetPictureColorSelected","lbSetSelectColor","lbSetSelectColorRight","lbSetSelected","lbSetTooltip","lbSetValue","lbSize","lbSort","lbSortByValue","lbText","lbValue","leader","leaderboardDeInit","leaderboardGetRows","leaderboardInit","leaveVehicle","libraryCredits","libraryDisclaimers","lifeState","lightAttachObject","lightDetachObject","lightIsOn","lightnings","limitSpeed","linearConversion","lineBreak","lineIntersects","lineIntersectsObjs","lineIntersectsSurfaces","lineIntersectsWith","linkItem","list","listObjects","ln","lnbAddArray","lnbAddColumn","lnbAddRow","lnbClear","lnbColor","lnbCurSelRow","lnbData","lnbDeleteColumn","lnbDeleteRow","lnbGetColumnsPosition","lnbPicture","lnbSetColor","lnbSetColumnsPos","lnbSetCurSelRow","lnbSetData","lnbSetPicture","lnbSetText","lnbSetValue","lnbSize","lnbText","lnbValue","load","loadAbs","loadBackpack","loadFile","loadGame","loadIdentity","loadMagazine","loadOverlay","loadStatus","loadUniform","loadVest","local","localize","locationNull","locationPosition","lock","lockCameraTo","lockCargo","lockDriver","locked","lockedCargo","lockedDriver","lockedTurret","lockTurret","lockWP","log","logEntities","lookAt","lookAtPos","ma
gazineCargo","magazines","magazinesAllTurrets","magazinesAmmo","magazinesAmmoCargo","magazinesAmmoFull","magazinesDetail","magazinesDetailBackpack","magazinesDetailUniform","magazinesDetailVest","magazinesTurret","magazineTurretAmmo","mapAnimAdd","mapAnimClear","mapAnimCommit","mapAnimDone","mapCenterOnCamera","mapGridPosition","markAsFinishedOnSteam","markerAlpha","markerBrush","markerColor","markerDir","markerPos","markerShape","markerSize","markerText","markerType","max","members","min","mineActive","mineDetectedBy","missionConfigFile","missionName","missionNamespace","missionStart","mod","modelToWorld","modelToWorldVisual","moonIntensity","morale","move","moveInAny","moveInCargo","moveInCommander","moveInDriver","moveInGunner","moveInTurret","moveObjectToEnd","moveOut","moveTime","moveTo","moveToCompleted","moveToFailed","musicVolume","name","name location","nameSound","nearEntities","nearestBuilding","nearestLocation","nearestLocations","nearestLocationWithDubbing","nearestObject","nearestObjects","nearObjects","nearObjectsReady","nearRoads","nearSupplies","nearTargets","needReload","netId","netObjNull","newOverlay","nextMenuItemIndex","nextWeatherChange","nil","nMenuItems","not","numberToDate","objectCurators","objectFromNetId","objectParent","objNull","objStatus","onBriefingGroup","onBriefingNotes","onBriefingPlan","onBriefingTeamSwitch","onCommandModeChanged","onDoubleClick","onEachFrame","onGroupIconClick","onGroupIconOverEnter","onGroupIconOverLeave","onHCGroupSelectionChanged","onMapSingleClick","onPlayerConnected","onPlayerDisconnected","onPreloadFinished","onPreloadStarted","onShowNewObject","onTeamSwitch","openCuratorInterface","openMap","openYoutubeVideo","opfor","or","orderGetIn","overcast","overcastForecast","owner","param","params","parseNumber","parseText","parsingNamespace","particlesQuality","pi","pickWeaponPool","pitch","playableSlotsNumber","playableUnits","playAction","playActionNow","player","playerRespawnTime","playerSide","playersNumber","playGesture","playMission","playMove","playMoveNow","playMusic","playScriptedMission","playSound","playSound3D","position","positionCameraToWorld","posScreenToWorld","posWorldToScreen","ppEffectAdjust","ppEffectCommit","ppEffectCommitted","ppEffectCreate","ppEffectDestroy","ppEffectEnable","ppEffectForceInNVG","precision","preloadCamera","preloadObject","preloadSound","preloadTitleObj","preloadTitleRsc","preprocessFile","preprocessFileLineNumbers","primaryWeapon","primaryWeaponItems","primaryWeaponMagazine","priority","private","processDiaryLink","productVersion","profileName","profileNamespace","profileNameSteam","progressLoadingScreen","progressPosition","progressSetPosition","publicVariable","publicVariableClient","publicVariableServer","pushBack","putWeaponPool","queryItemsPool","queryMagazinePool","queryWeaponPool","rad","radioChannelAdd","radioChannelCreate","radioChannelRemove","radioChannelSetCallSign","radioChannelSetLabel","radioVolume","rain","rainbow","random","rank","rankId","rating","rectangular","registeredTasks","registerTask","reload","reloadEnabled","remoteControl","remoteExec","remoteExecCall","removeAction","removeAllActions","removeAllAssignedItems","removeAllContainers","removeAllCuratorAddons","removeAllCuratorCameraAreas","removeAllCuratorEditingAreas","removeAllEventHandlers","removeAllHandgunItems","removeAllItems","removeAllItemsWithMagazines","removeAllMissionEventHandlers","removeAllMPEventHandlers","removeAllMusicEventHandlers","removeAllPrimaryWeaponItems","removeAllWeapons","removeBackpack","remove
BackpackGlobal","removeCuratorAddons","removeCuratorCameraArea","removeCuratorEditableObjects","removeCuratorEditingArea","removeDrawIcon","removeDrawLinks","removeEventHandler","removeFromRemainsCollector","removeGoggles","removeGroupIcon","removeHandgunItem","removeHeadgear","removeItem","removeItemFromBackpack","removeItemFromUniform","removeItemFromVest","removeItems","removeMagazine","removeMagazineGlobal","removeMagazines","removeMagazinesTurret","removeMagazineTurret","removeMenuItem","removeMissionEventHandler","removeMPEventHandler","removeMusicEventHandler","removePrimaryWeaponItem","removeSecondaryWeaponItem","removeSimpleTask","removeSwitchableUnit","removeTeamMember","removeUniform","removeVest","removeWeapon","removeWeaponGlobal","removeWeaponTurret","requiredVersion","resetCamShake","resetSubgroupDirection","resistance","resize","resources","respawnVehicle","restartEditorCamera","reveal","revealMine","reverse","reversedMouseY","roadsConnectedTo","roleDescription","ropeAttachedObjects","ropeAttachedTo","ropeAttachEnabled","ropeAttachTo","ropeCreate","ropeCut","ropeEndPosition","ropeLength","ropes","ropeUnwind","ropeUnwound","rotorsForcesRTD","rotorsRpmRTD","round","runInitScript","safeZoneH","safeZoneW","safeZoneWAbs","safeZoneX","safeZoneXAbs","safeZoneY","saveGame","saveIdentity","saveJoysticks","saveOverlay","saveProfileNamespace","saveStatus","saveVar","savingEnabled","say","say2D","say3D","scopeName","score","scoreSide","screenToWorld","scriptDone","scriptName","scriptNull","scudState","secondaryWeapon","secondaryWeaponItems","secondaryWeaponMagazine","select","selectBestPlaces","selectDiarySubject","selectedEditorObjects","selectEditorObject","selectionPosition","selectLeader","selectNoPlayer","selectPlayer","selectWeapon","selectWeaponTurret","sendAUMessage","sendSimpleCommand","sendTask","sendTaskResult","sendUDPMessage","serverCommand","serverCommandAvailable","serverCommandExecutable","serverName","serverTime","set","setAccTime","setAirportSide","setAmmo","setAmmoCargo","setAperture","setApertureNew","setArmoryPoints","setAttributes","setAutonomous","setBehaviour","setBleedingRemaining","setCameraInterest","setCamShakeDefParams","setCamShakeParams","setCamUseTi","setCaptive","setCenterOfMass","setCollisionLight","setCombatMode","setCompassOscillation","setCuratorCameraAreaCeiling","setCuratorCoef","setCuratorEditingAreaType","setCuratorWaypointCost","setCurrentChannel","setCurrentTask","setCurrentWaypoint","setDamage","setDammage","setDate","setDebriefingText","setDefaultCamera","setDestination","setDetailMapBlendPars","setDir","setDirection","setDrawIcon","setDropInterval","setEditorMode","setEditorObjectScope","setEffectCondition","setFace","setFaceAnimation","setFatigue","setFlagOwner","setFlagSide","setFlagTexture","setFog","setFog 
array","setFormation","setFormationTask","setFormDir","setFriend","setFromEditor","setFSMVariable","setFuel","setFuelCargo","setGroupIcon","setGroupIconParams","setGroupIconsSelectable","setGroupIconsVisible","setGroupId","setGroupIdGlobal","setGroupOwner","setGusts","setHideBehind","setHit","setHitIndex","setHitPointDamage","setHorizonParallaxCoef","setHUDMovementLevels","setIdentity","setImportance","setLeader","setLightAmbient","setLightAttenuation","setLightBrightness","setLightColor","setLightDayLight","setLightFlareMaxDistance","setLightFlareSize","setLightIntensity","setLightnings","setLightUseFlare","setLocalWindParams","setMagazineTurretAmmo","setMarkerAlpha","setMarkerAlphaLocal","setMarkerBrush","setMarkerBrushLocal","setMarkerColor","setMarkerColorLocal","setMarkerDir","setMarkerDirLocal","setMarkerPos","setMarkerPosLocal","setMarkerShape","setMarkerShapeLocal","setMarkerSize","setMarkerSizeLocal","setMarkerText","setMarkerTextLocal","setMarkerType","setMarkerTypeLocal","setMass","setMimic","setMousePosition","setMusicEffect","setMusicEventHandler","setName","setNameSound","setObjectArguments","setObjectMaterial","setObjectProxy","setObjectTexture","setObjectTextureGlobal","setObjectViewDistance","setOvercast","setOwner","setOxygenRemaining","setParticleCircle","setParticleClass","setParticleFire","setParticleParams","setParticleRandom","setPilotLight","setPiPEffect","setPitch","setPlayable","setPlayerRespawnTime","setPos","setPosASL","setPosASL2","setPosASLW","setPosATL","setPosition","setPosWorld","setRadioMsg","setRain","setRainbow","setRandomLip","setRank","setRectangular","setRepairCargo","setShadowDistance","setSide","setSimpleTaskDescription","setSimpleTaskDestination","setSimpleTaskTarget","setSimulWeatherLayers","setSize","setSkill","setSkill 
array","setSlingLoad","setSoundEffect","setSpeaker","setSpeech","setSpeedMode","setStatValue","setSuppression","setSystemOfUnits","setTargetAge","setTaskResult","setTaskState","setTerrainGrid","setText","setTimeMultiplier","setTitleEffect","setTriggerActivation","setTriggerArea","setTriggerStatements","setTriggerText","setTriggerTimeout","setTriggerType","setType","setUnconscious","setUnitAbility","setUnitPos","setUnitPosWeak","setUnitRank","setUnitRecoilCoefficient","setUnloadInCombat","setUserActionText","setVariable","setVectorDir","setVectorDirAndUp","setVectorUp","setVehicleAmmo","setVehicleAmmoDef","setVehicleArmor","setVehicleId","setVehicleLock","setVehiclePosition","setVehicleTiPars","setVehicleVarName","setVelocity","setVelocityTransformation","setViewDistance","setVisibleIfTreeCollapsed","setWaves","setWaypointBehaviour","setWaypointCombatMode","setWaypointCompletionRadius","setWaypointDescription","setWaypointFormation","setWaypointHousePosition","setWaypointLoiterRadius","setWaypointLoiterType","setWaypointName","setWaypointPosition","setWaypointScript","setWaypointSpeed","setWaypointStatements","setWaypointTimeout","setWaypointType","setWaypointVisible","setWeaponReloadingTime","setWind","setWindDir","setWindForce","setWindStr","setWPPos","show3DIcons","showChat","showCinemaBorder","showCommandingMenu","showCompass","showCuratorCompass","showGPS","showHUD","showLegend","showMap","shownArtilleryComputer","shownChat","shownCompass","shownCuratorCompass","showNewEditorObject","shownGPS","shownHUD","shownMap","shownPad","shownRadio","shownUAVFeed","shownWarrant","shownWatch","showPad","showRadio","showSubtitles","showUAVFeed","showWarrant","showWatch","showWaypoint","side","sideChat","sideEnemy","sideFriendly","sideLogic","sideRadio","sideUnknown","simpleTasks","simulationEnabled","simulCloudDensity","simulCloudOcclusion","simulInClouds","simulWeatherSync","sin","size","sizeOf","skill","skillFinal","skipTime","sleep","sliderPosition","sliderRange","sliderSetPosition","sliderSetRange","sliderSetSpeed","sliderSpeed","slingLoadAssistantShown","soldierMagazines","someAmmo","sort","soundVolume","spawn","speaker","speed","speedMode","splitString","sqrt","squadParams","stance","startLoadingScreen","step","stop","stopped","str","sunOrMoon","supportInfo","suppressFor","surfaceIsWater","surfaceNormal","surfaceType","swimInDepth","switch","switchableUnits","switchAction","switchCamera","switchGesture","switchLight","switchMove","synchronizedObjects","synchronizedTriggers","synchronizedWaypoints","synchronizeObjectsAdd","synchronizeObjectsRemove","synchronizeTrigger","synchronizeWaypoint","synchronizeWaypoint trigger","systemChat","systemOfUnits","tan","targetKnowledge","targetsAggregate","targetsQuery","taskChildren","taskCompleted","taskDescription","taskDestination","taskHint","taskNull","taskParent","taskResult","taskState","teamMember","teamMemberNull","teamName","teams","teamSwitch","teamSwitchEnabled","teamType","terminate","terrainIntersect","terrainIntersectASL","text","text 
location","textLog","textLogFormat","tg","then","throw","time","timeMultiplier","titleCut","titleFadeOut","titleObj","titleRsc","titleText","to","toArray","toLower","toString","toUpper","triggerActivated","triggerActivation","triggerArea","triggerAttachedVehicle","triggerAttachObject","triggerAttachVehicle","triggerStatements","triggerText","triggerTimeout","triggerTimeoutCurrent","triggerType","true","try","turretLocal","turretOwner","turretUnit","tvAdd","tvClear","tvCollapse","tvCount","tvCurSel","tvData","tvDelete","tvExpand","tvPicture","tvSetCurSel","tvSetData","tvSetPicture","tvSetPictureColor","tvSetTooltip","tvSetValue","tvSort","tvSortByValue","tvText","tvValue","type","typeName","typeOf","UAVControl","uiNamespace","uiSleep","unassignCurator","unassignItem","unassignTeam","unassignVehicle","underwater","uniform","uniformContainer","uniformItems","uniformMagazines","unitAddons","unitBackpack","unitPos","unitReady","unitRecoilCoefficient","units","unitsBelowHeight","unlinkItem","unlockAchievement","unregisterTask","updateDrawIcon","updateMenuItem","updateObjectTree","useAudioTimeForMoves","vectorAdd","vectorCos","vectorCrossProduct","vectorDiff","vectorDir","vectorDirVisual","vectorDistance","vectorDistanceSqr","vectorDotProduct","vectorFromTo","vectorMagnitude","vectorMagnitudeSqr","vectorMultiply","vectorNormalized","vectorUp","vectorUpVisual","vehicle","vehicleChat","vehicleRadio","vehicles","vehicleVarName","velocity","velocityModelSpace","verifySignature","vest","vestContainer","vestItems","vestMagazines","viewDistance","visibleCompass","visibleGPS","visibleMap","visiblePosition","visiblePositionASL","visibleWatch","waitUntil","waves","waypointAttachedObject","waypointAttachedVehicle","waypointAttachObject","waypointAttachVehicle","waypointBehaviour","waypointCombatMode","waypointCompletionRadius","waypointDescription","waypointFormation","waypointHousePosition","waypointLoiterRadius","waypointLoiterType","waypointName","waypointPosition","waypoints","waypointScript","waypointsEnabledUAV","waypointShow","waypointSpeed","waypointStatements","waypointTimeout","waypointTimeoutCurrent","waypointType","waypointVisible","weaponAccessories","weaponCargo","weaponDirection","weaponLowered","weapons","weaponsItems","weaponsItemsCargo","weaponState","weaponsTurret","weightRTD","west","WFSideText","while","wind","windDir","windStr","wingsForcesRTD","with","worldName","worldSize","worldToModel","worldToModelVisual","worldToScreen"],a=["case","catch","default","do","else","exit","exitWith|5","for","forEach","from","if","switch","then","throw","to","try","while","with"],r=["!","-","+","!=","%","&&","*","/","=","==",">",">=","<","<=","^",":",">>"],o=["_forEachIndex|10","_this|10","_x|10"],i=["true","false","nil"],n=t.filter(function(e){return-1==a.indexOf(e)&&-1==i.indexOf(e)&&-1==r.indexOf(e)});n=n.concat(o);var s={cN:"string",r:0,v:[{b:'"',e:'"',c:[{b:'""'}]},{b:"'",e:"'",c:[{b:"''"}]}]},l={cN:"number",b:e.NR,r:0},c={cN:"string",v:[e.QSM,{b:"'\\\\?.",e:"'",i:"."}]},d={cN:"meta",b:"#",e:"$",k:{"meta-keyword":"if else elif endif define undef warning error line pragma ifdef ifndef"},c:[{b:/\\\n/,r:0},{bK:"include",e:"$",k:{"meta-keyword":"include"},c:[c,{cN:"meta-string",b:"<",e:">",i:"\\n"}]},c,l,e.CLCM,e.CBCM]};return{aliases:["sqf"],cI:!0,k:{keyword:a.join(" "),built_in:n.join(" "),literal:i.join(" ")},c:[e.CLCM,e.CBCM,l,s,d]}});hljs.registerLanguage("vala",function(e){return{k:{keyword:"char uchar unichar int uint long ulong short ushort int8 int16 int32 int64 uint8 uint16 uint32 uint64 
float double bool struct enum string void weak unowned owned async signal static abstract interface override while do for foreach else switch case break default return try catch public private protected internal using new this get set const stdout stdin stderr var",built_in:"DBus GLib CCode Gee Object",literal:"false true null"},c:[{cN:"class",bK:"class interface delegate namespace",e:"{",eE:!0,i:"[^,:\\n\\s\\.]",c:[e.UTM]},e.CLCM,e.CBCM,{cN:"string",b:'"""',e:'"""',r:5},e.ASM,e.QSM,e.CNM,{cN:"meta",b:"^#",e:"$",r:2}]}});hljs.registerLanguage("elixir",function(e){var r="[a-zA-Z_][a-zA-Z0-9_]*(\\!|\\?)?",n="[a-zA-Z_]\\w*[!?=]?|[-+~]\\@|<<|>>|=~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~`|]|\\[\\]=?",b="and false then defined module in return redo retry end for true self when next until do begin unless nil break not case cond alias while ensure or include use alias fn quote",c={cN:"subst",b:"#\\{",e:"}",l:r,k:b},a={cN:"string",c:[e.BE,c],v:[{b:/'/,e:/'/},{b:/"/,e:/"/}]},i={cN:"function",bK:"def defp defmacro",e:/\B\b/,c:[e.inherit(e.TM,{b:r,endsParent:!0})]},s=e.inherit(i,{cN:"class",bK:"defmodule defrecord",e:/\bdo\b|$|;/}),l=[a,e.HCM,s,i,{cN:"symbol",b:":",c:[a,{b:n}],r:0},{cN:"symbol",b:r+":",r:0},{cN:"number",b:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",r:0},{cN:"variable",b:"(\\$\\W)|((\\$|\\@\\@?)(\\w+))"},{b:"->"},{b:"("+e.RSR+")\\s*",c:[e.HCM,{cN:"regexp",i:"\\n",c:[e.BE,c],v:[{b:"/",e:"/[a-z]*"},{b:"%r\\[",e:"\\][a-z]*"}]}],r:0}];return c.c=l,{l:r,k:b,c:l}});hljs.registerLanguage("roboconf",function(a){var e="[a-zA-Z-_][^\\n{]+\\{",n={cN:"attribute",b:/[a-zA-Z-_]+/,e:/\s*:/,eE:!0,starts:{e:";",r:0,c:[{cN:"variable",b:/\.[a-zA-Z-_]+/},{cN:"keyword",b:/\(optional\)/}]}};return{aliases:["graph","instances"],cI:!0,k:"import",c:[{b:"^facet "+e,e:"}",k:"facet",c:[n,a.HCM]},{b:"^\\s*instance of "+e,e:"}",k:"name count channels instance-data instance-state instance of",i:/\S/,c:["self",n,a.HCM]},{b:"^"+e,e:"}",c:[n,a.HCM]},a.HCM]}});hljs.registerLanguage("mercury",function(e){var i={keyword:"module use_module import_module include_module end_module initialise mutable initialize finalize finalise interface implementation pred mode func type inst solver any_pred any_func is semidet det nondet multi erroneous failure cc_nondet cc_multi typeclass instance where pragma promise external trace atomic or_else require_complete_switch require_det require_semidet require_multi require_nondet require_cc_multi require_cc_nondet require_erroneous require_failure",meta:"inline no_inline type_spec source_file fact_table obsolete memo loop_check minimal_model terminates does_not_terminate check_termination promise_equivalent_clauses foreign_proc foreign_decl foreign_code foreign_type foreign_import_module foreign_export_enum foreign_export foreign_enum may_call_mercury will_not_call_mercury thread_safe not_thread_safe maybe_thread_safe promise_pure promise_semipure tabled_for_io local untrailed trailed attach_to_io_state can_pass_as_mercury_type stable will_not_throw_exception may_modify_trail will_not_modify_trail may_duplicate may_not_duplicate affects_liveness does_not_affect_liveness doesnt_affect_liveness no_sharing unknown_sharing sharing",built_in:"some all not if then else true fail false try catch catch_any semidet_true semidet_false semidet_fail impure_true impure semipure"},r=e.C("%","$"),t={cN:"number",b:"0'.\\|0[box][0-9a-fA-F]*"},_=e.inherit(e.ASM,{r:0}),n=e.inherit(e.QSM,{r:0}),a={cN:"subst",b:"\\\\[abfnrtv]\\|\\\\x[0-9a-fA-F]*\\\\\\|%[-+# 
*.0-9]*[dioxXucsfeEgGp]",r:0};n.c.push(a);var o={cN:"built_in",v:[{b:"<=>"},{b:"<=",r:0},{b:"=>",r:0},{b:"/\\\\"},{b:"\\\\/"}]},l={cN:"built_in",v:[{b:":-\\|-->"},{b:"=",r:0}]};return{aliases:["m","moo"],k:i,c:[o,l,r,e.CBCM,t,e.NM,_,n,{b:/:-/}]}});hljs.registerLanguage("arduino",function(e){var t={cN:"string",v:[e.inherit(e.QSM,{b:'((u8?|U)|L)?"'}),{b:'(u8?|U)?R"',e:'"',c:[e.BE]},{b:"'\\\\?.",e:"'",i:"."}]},r={cN:"meta",b:"#",e:"$",k:{"meta-keyword":"if else elif endif define undef warning error line pragma ifdef ifndef"},c:[{b:/\\\n/,r:0},{bK:"include",e:"$",k:{"meta-keyword":"include"},c:[e.inherit(t,{cN:"meta-string"}),{cN:"meta-string",b:"<",e:">",i:"\\n"}]},t,e.CLCM,e.CBCM]};return{k:{"function":"setup loop while catch for if do goto try switch case else default break continue return",keyword:"boolean byte word string String array int float private char export virtual operator sizeof uint8_t uint16_t uint32_t uint64_t int8_t int16_t int32_t int64_t dynamic_cast typedef const_cast const struct static_cast union namespace unsigned long volatile static protected bool template mutable public friend auto void enum extern using class asm typeid short reinterpret_cast double register explicit signed typename this inline delete alignof constexpr decltype noexcept static_assert thread_local restrict _Bool complex _Complex _Imaginary atomic_bool atomic_char atomic_schar atomic_uchar atomic_short atomic_ushort atomic_int atomic_uint atomic_long atomic_ulong atomic_llong atomic_ullong",built_in:"KeyboardController MouseController SoftwareSerial EthernetServer EthernetClient LiquidCrystal RobotControl GSMVoiceCall EthernetUDP EsploraTFT HttpClient RobotMotor WiFiClient GSMScanner FileSystem Scheduler GSMServer YunClient YunServer IPAddress GSMClient GSMModem Keyboard Ethernet Console GSMBand Esplora Stepper Process WiFiUDP GSM_SMS Mailbox USBHost Firmata PImage Client Server GSMPIN FileIO Bridge Serial EEPROM Stream Mouse Audio Servo File Task GPRS WiFi Wire TFT GSM SPI SD runShellCommandAsynchronously analogWriteResolution retrieveCallingNumber printFirmwareVersion analogReadResolution sendDigitalPortPair noListenOnLocalhost readJoystickButton setFirmwareVersion readJoystickSwitch scrollDisplayRight getVoiceCallStatus scrollDisplayLeft writeMicroseconds delayMicroseconds beginTransmission getSignalStrength runAsynchronously getAsynchronously listenOnLocalhost getCurrentCarrier readAccelerometer messageAvailable sendDigitalPorts lineFollowConfig countryNameWrite runShellCommand readStringUntil rewindDirectory readTemperature setClockDivider readLightSensor endTransmission analogReference detachInterrupt countryNameRead attachInterrupt encryptionType readBytesUntil robotNameWrite readMicrophone robotNameRead cityNameWrite userNameWrite readJoystickY readJoystickX mouseReleased openNextFile scanNetworks noInterrupts digitalWrite beginSpeaker mousePressed isActionDone mouseDragged displayLogos noAutoscroll addParameter remoteNumber getModifiers keyboardRead userNameRead waitContinue processInput parseCommand printVersion readNetworks writeMessage blinkVersion cityNameRead readMessage setDataMode parsePacket isListening setBitOrder beginPacket isDirectory motorsWrite drawCompass digitalRead clearScreen serialEvent rightToLeft setTextSize leftToRight requestFrom keyReleased compassRead analogWrite interrupts WiFiServer disconnect playMelody parseFloat autoscroll getPINUsed setPINUsed setTimeout sendAnalog readSlider analogRead beginWrite createChar motorsStop keyPressed tempoWrite readButton subnetMask 
debugPrint macAddress writeGreen randomSeed attachGPRS readString sendString remotePort releaseAll mouseMoved background getXChange getYChange answerCall getResult voiceCall endPacket constrain getSocket writeJSON getButton available connected findUntil readBytes exitValue readGreen writeBlue startLoop IPAddress isPressed sendSysex pauseMode gatewayIP setCursor getOemKey tuneWrite noDisplay loadImage switchPIN onRequest onReceive changePIN playFile noBuffer parseInt overflow checkPIN knobRead beginTFT bitClear updateIR bitWrite position writeRGB highByte writeRed setSpeed readBlue noStroke remoteIP transfer shutdown hangCall beginSMS endWrite attached maintain noCursor checkReg checkPUK shiftOut isValid shiftIn pulseIn connect println localIP pinMode getIMEI display noBlink process getBand running beginSD drawBMP lowByte setBand release bitRead prepare pointTo readRed setMode noFill remove listen stroke detach attach noTone exists buffer height bitSet circle config cursor random IRread setDNS endSMS getKey micros millis begin print write ready flush width isPIN blink clear press mkdir rmdir close point yield image BSSID click delay read text move peek beep rect line open seek fill size turn stop home find step tone sqrt RSSI SSID end bit tan cos sin pow map abs max min get run put",symbol:"DIGITAL_MESSAGE FIRMATA_STRING ANALOG_MESSAGE REPORT_DIGITAL REPORT_ANALOG INPUT_PULLUP SET_PIN_MODE INTERNAL2V56 SYSTEM_RESET LED_BUILTIN INTERNAL1V1 SYSEX_START INTERNAL EXTERNAL DEFAULT OUTPUT INPUT HIGH LOW"},c:[r,e.CLCM,e.CBCM,e.ASM,e.QSM,e.CNM]}});hljs.registerLanguage("inform7",function(e){var r="\\[",o="\\]";return{aliases:["i7"],cI:!0,k:{keyword:"thing room person man woman animal container supporter backdrop door scenery open closed locked inside gender is are say understand kind of rule"},c:[{cN:"string",b:'"',e:'"',r:0,c:[{cN:"subst",b:r,e:o}]},{cN:"section",b:/^(Volume|Book|Part|Chapter|Section|Table)\b/,e:"$"},{b:/^(Check|Carry out|Report|Instead of|To|Rule|When|Before|After)\b/,e:":",c:[{b:"\\(This",e:"\\)"}]},{cN:"comment",b:r,e:o,c:["self"]}]}});hljs.registerLanguage("coffeescript",function(e){var c={keyword:"in if for while finally new do return else break catch instanceof throw try this switch continue typeof delete debugger super then unless until loop of by when and or is isnt not",literal:"true false null undefined yes no on off",built_in:"npm require console print module global window document"},n="[A-Za-z$_][0-9A-Za-z$_]*",r={cN:"subst",b:/#\{/,e:/}/,k:c},s=[e.BNM,e.inherit(e.CNM,{starts:{e:"(\\s*/)?",r:0}}),{cN:"string",v:[{b:/'''/,e:/'''/,c:[e.BE]},{b:/'/,e:/'/,c:[e.BE]},{b:/"""/,e:/"""/,c:[e.BE,r]},{b:/"/,e:/"/,c:[e.BE,r]}]},{cN:"regexp",v:[{b:"///",e:"///",c:[r,e.HCM]},{b:"//[gim]*",r:0},{b:/\/(?![ *])(\\\/|.)*?\/[gim]*(?=\W|$)/}]},{b:"@"+n},{b:"`",e:"`",eB:!0,eE:!0,sL:"javascript"}];r.c=s;var i=e.inherit(e.TM,{b:n}),t="(\\(.*\\))?\\s*\\B[-=]>",o={cN:"params",b:"\\([^\\(]",rB:!0,c:[{b:/\(/,e:/\)/,k:c,c:["self"].concat(s)}]};return{aliases:["coffee","cson","iced"],k:c,i:/\/\*/,c:s.concat([e.C("###","###"),e.HCM,{cN:"function",b:"^\\s*"+n+"\\s*=\\s*"+t,e:"[-=]>",rB:!0,c:[i,o]},{b:/[:\(,=]\s*/,r:0,c:[{cN:"function",b:t,e:"[-=]>",rB:!0,c:[o]}]},{cN:"class",bK:"class",e:"$",i:/[:="\[\]]/,c:[{bK:"extends",eW:!0,i:/[:="\[\]]/,c:[i]},i]},{b:n+":",e:":",rB:!0,rE:!0,r:0}])}});hljs.registerLanguage("oxygene",function(e){var r="abstract add and array as asc aspect assembly async begin break block by case class concat const copy constructor continue create default delegate desc distinct 
div do downto dynamic each else empty end ensure enum equals event except exit extension external false final finalize finalizer finally flags for forward from function future global group has if implementation implements implies in index inherited inline interface into invariants is iterator join locked locking loop matching method mod module namespace nested new nil not notify nullable of old on operator or order out override parallel params partial pinned private procedure property protected public queryable raise read readonly record reintroduce remove repeat require result reverse sealed select self sequence set shl shr skip static step soft take then to true try tuple type union unit unsafe until uses using var virtual raises volatile where while with write xor yield await mapped deprecated stdcall cdecl pascal register safecall overload library platform reference packed strict published autoreleasepool selector strong weak unretained",t=e.C("{","}",{r:0}),a=e.C("\\(\\*","\\*\\)",{r:10}),n={cN:"string",b:"'",e:"'",c:[{b:"''"}]},o={cN:"string",b:"(#\\d+)+"},i={cN:"function",bK:"function constructor destructor procedure method",e:"[:;]",k:"function constructor|10 destructor|10 procedure|10 method|10",c:[e.TM,{cN:"params",b:"\\(",e:"\\)",k:r,c:[n,o]},t,a]};return{cI:!0,k:r,i:'("|\\$[G-Zg-z]|\\/\\*||=>|->)',c:[t,a,e.CLCM,n,o,e.NM,i,{cN:"class",b:"=\\bclass\\b",e:"end;",k:r,c:[n,o,t,a,e.CLCM,i]}]}});hljs.registerLanguage("autoit",function(e){var t="ByRef Case Const ContinueCase ContinueLoop Default Dim Do Else ElseIf EndFunc EndIf EndSelect EndSwitch EndWith Enum Exit ExitLoop For Func Global If In Local Next ReDim Return Select Static Step Switch Then To Until Volatile WEnd While With",r="True False And Null Not Or",i="Abs ACos AdlibRegister AdlibUnRegister Asc AscW ASin Assign ATan AutoItSetOption AutoItWinGetTitle AutoItWinSetTitle Beep Binary BinaryLen BinaryMid BinaryToString BitAND BitNOT BitOR BitRotate BitShift BitXOR BlockInput Break Call CDTray Ceiling Chr ChrW ClipGet ClipPut ConsoleRead ConsoleWrite ConsoleWriteError ControlClick ControlCommand ControlDisable ControlEnable ControlFocus ControlGetFocus ControlGetHandle ControlGetPos ControlGetText ControlHide ControlListView ControlMove ControlSend ControlSetText ControlShow ControlTreeView Cos Dec DirCopy DirCreate DirGetSize DirMove DirRemove DllCall DllCallAddress DllCallbackFree DllCallbackGetPtr DllCallbackRegister DllClose DllOpen DllStructCreate DllStructGetData DllStructGetPtr DllStructGetSize DllStructSetData DriveGetDrive DriveGetFileSystem DriveGetLabel DriveGetSerial DriveGetType DriveMapAdd DriveMapDel DriveMapGet DriveSetLabel DriveSpaceFree DriveSpaceTotal DriveStatus EnvGet EnvSet EnvUpdate Eval Execute Exp FileChangeDir FileClose FileCopy FileCreateNTFSLink FileCreateShortcut FileDelete FileExists FileFindFirstFile FileFindNextFile FileFlush FileGetAttrib FileGetEncoding FileGetLongName FileGetPos FileGetShortcut FileGetShortName FileGetSize FileGetTime FileGetVersion FileInstall FileMove FileOpen FileOpenDialog FileRead FileReadLine FileReadToArray FileRecycle FileRecycleEmpty FileSaveDialog FileSelectFolder FileSetAttrib FileSetEnd FileSetPos FileSetTime FileWrite FileWriteLine Floor FtpSetProxy FuncName GUICreate GUICtrlCreateAvi GUICtrlCreateButton GUICtrlCreateCheckbox GUICtrlCreateCombo GUICtrlCreateContextMenu GUICtrlCreateDate GUICtrlCreateDummy GUICtrlCreateEdit GUICtrlCreateGraphic GUICtrlCreateGroup GUICtrlCreateIcon GUICtrlCreateInput GUICtrlCreateLabel GUICtrlCreateList GUICtrlCreateListView 
GUICtrlCreateListViewItem GUICtrlCreateMenu GUICtrlCreateMenuItem GUICtrlCreateMonthCal GUICtrlCreateObj GUICtrlCreatePic GUICtrlCreateProgress GUICtrlCreateRadio GUICtrlCreateSlider GUICtrlCreateTab GUICtrlCreateTabItem GUICtrlCreateTreeView GUICtrlCreateTreeViewItem GUICtrlCreateUpdown GUICtrlDelete GUICtrlGetHandle GUICtrlGetState GUICtrlRead GUICtrlRecvMsg GUICtrlRegisterListViewSort GUICtrlSendMsg GUICtrlSendToDummy GUICtrlSetBkColor GUICtrlSetColor GUICtrlSetCursor GUICtrlSetData GUICtrlSetDefBkColor GUICtrlSetDefColor GUICtrlSetFont GUICtrlSetGraphic GUICtrlSetImage GUICtrlSetLimit GUICtrlSetOnEvent GUICtrlSetPos GUICtrlSetResizing GUICtrlSetState GUICtrlSetStyle GUICtrlSetTip GUIDelete GUIGetCursorInfo GUIGetMsg GUIGetStyle GUIRegisterMsg GUISetAccelerators GUISetBkColor GUISetCoord GUISetCursor GUISetFont GUISetHelp GUISetIcon GUISetOnEvent GUISetState GUISetStyle GUIStartGroup GUISwitch Hex HotKeySet HttpSetProxy HttpSetUserAgent HWnd InetClose InetGet InetGetInfo InetGetSize InetRead IniDelete IniRead IniReadSection IniReadSectionNames IniRenameSection IniWrite IniWriteSection InputBox Int IsAdmin IsArray IsBinary IsBool IsDeclared IsDllStruct IsFloat IsFunc IsHWnd IsInt IsKeyword IsNumber IsObj IsPtr IsString Log MemGetStats Mod MouseClick MouseClickDrag MouseDown MouseGetCursor MouseGetPos MouseMove MouseUp MouseWheel MsgBox Number ObjCreate ObjCreateInterface ObjEvent ObjGet ObjName OnAutoItExitRegister OnAutoItExitUnRegister Opt Ping PixelChecksum PixelGetColor PixelSearch ProcessClose ProcessExists ProcessGetStats ProcessList ProcessSetPriority ProcessWait ProcessWaitClose ProgressOff ProgressOn ProgressSet Ptr Random RegDelete RegEnumKey RegEnumVal RegRead RegWrite Round Run RunAs RunAsWait RunWait Send SendKeepActive SetError SetExtended ShellExecute ShellExecuteWait Shutdown Sin Sleep SoundPlay SoundSetWaveVolume SplashImageOn SplashOff SplashTextOn Sqrt SRandom StatusbarGetText StderrRead StdinWrite StdioClose StdoutRead String StringAddCR StringCompare StringFormat StringFromASCIIArray StringInStr StringIsAlNum StringIsAlpha StringIsASCII StringIsDigit StringIsFloat StringIsInt StringIsLower StringIsSpace StringIsUpper StringIsXDigit StringLeft StringLen StringLower StringMid StringRegExp StringRegExpReplace StringReplace StringReverse StringRight StringSplit StringStripCR StringStripWS StringToASCIIArray StringToBinary StringTrimLeft StringTrimRight StringUpper Tan TCPAccept TCPCloseSocket TCPConnect TCPListen TCPNameToIP TCPRecv TCPSend TCPShutdown TCPStartup TimerDiff TimerInit ToolTip TrayCreateItem TrayCreateMenu TrayGetMsg TrayItemDelete TrayItemGetHandle TrayItemGetState TrayItemGetText TrayItemSetOnEvent TrayItemSetState TrayItemSetText TraySetClick TraySetIcon TraySetOnEvent TraySetPauseIcon TraySetState TraySetToolTip TrayTip UBound UDPBind UDPCloseSocket UDPOpen UDPRecv UDPSend UDPShutdown UDPStartup VarGetType WinActivate WinActive WinClose WinExists WinFlash WinGetCaretPos WinGetClassList WinGetClientSize WinGetHandle WinGetPos WinGetProcess WinGetState WinGetText WinGetTitle WinKill WinList WinMenuSelectItem WinMinimizeAll WinMinimizeAllUndo WinMove WinSetOnTop WinSetState WinSetTitle WinSetTrans WinWait WinWaitActive WinWaitClose WinWaitNotActive Array1DToHistogram ArrayAdd ArrayBinarySearch ArrayColDelete ArrayColInsert ArrayCombinations ArrayConcatenate ArrayDelete ArrayDisplay ArrayExtract ArrayFindAll ArrayInsert ArrayMax ArrayMaxIndex ArrayMin ArrayMinIndex ArrayPermute ArrayPop ArrayPush ArrayReverse ArraySearch ArrayShuffle ArraySort ArraySwap 
ArrayToClip ArrayToString ArrayTranspose ArrayTrim ArrayUnique Assert ChooseColor ChooseFont ClipBoard_ChangeChain ClipBoard_Close ClipBoard_CountFormats ClipBoard_Empty ClipBoard_EnumFormats ClipBoard_FormatStr ClipBoard_GetData ClipBoard_GetDataEx ClipBoard_GetFormatName ClipBoard_GetOpenWindow ClipBoard_GetOwner ClipBoard_GetPriorityFormat ClipBoard_GetSequenceNumber ClipBoard_GetViewer ClipBoard_IsFormatAvailable ClipBoard_Open ClipBoard_RegisterFormat ClipBoard_SetData ClipBoard_SetDataEx ClipBoard_SetViewer ClipPutFile ColorConvertHSLtoRGB ColorConvertRGBtoHSL ColorGetBlue ColorGetCOLORREF ColorGetGreen ColorGetRed ColorGetRGB ColorSetCOLORREF ColorSetRGB Crypt_DecryptData Crypt_DecryptFile Crypt_DeriveKey Crypt_DestroyKey Crypt_EncryptData Crypt_EncryptFile Crypt_GenRandom Crypt_HashData Crypt_HashFile Crypt_Shutdown Crypt_Startup DateAdd DateDayOfWeek DateDaysInMonth DateDiff DateIsLeapYear DateIsValid DateTimeFormat DateTimeSplit DateToDayOfWeek DateToDayOfWeekISO DateToDayValue DateToMonth Date_Time_CompareFileTime Date_Time_DOSDateTimeToArray Date_Time_DOSDateTimeToFileTime Date_Time_DOSDateTimeToStr Date_Time_DOSDateToArray Date_Time_DOSDateToStr Date_Time_DOSTimeToArray Date_Time_DOSTimeToStr Date_Time_EncodeFileTime Date_Time_EncodeSystemTime Date_Time_FileTimeToArray Date_Time_FileTimeToDOSDateTime Date_Time_FileTimeToLocalFileTime Date_Time_FileTimeToStr Date_Time_FileTimeToSystemTime Date_Time_GetFileTime Date_Time_GetLocalTime Date_Time_GetSystemTime Date_Time_GetSystemTimeAdjustment Date_Time_GetSystemTimeAsFileTime Date_Time_GetSystemTimes Date_Time_GetTickCount Date_Time_GetTimeZoneInformation Date_Time_LocalFileTimeToFileTime Date_Time_SetFileTime Date_Time_SetLocalTime Date_Time_SetSystemTime Date_Time_SetSystemTimeAdjustment Date_Time_SetTimeZoneInformation Date_Time_SystemTimeToArray Date_Time_SystemTimeToDateStr Date_Time_SystemTimeToDateTimeStr Date_Time_SystemTimeToFileTime Date_Time_SystemTimeToTimeStr Date_Time_SystemTimeToTzSpecificLocalTime Date_Time_TzSpecificLocalTimeToSystemTime DayValueToDate DebugBugReportEnv DebugCOMError DebugOut DebugReport DebugReportEx DebugReportVar DebugSetup Degree EventLog__Backup EventLog__Clear EventLog__Close EventLog__Count EventLog__DeregisterSource EventLog__Full EventLog__Notify EventLog__Oldest EventLog__Open EventLog__OpenBackup EventLog__Read EventLog__RegisterSource EventLog__Report Excel_BookAttach Excel_BookClose Excel_BookList Excel_BookNew Excel_BookOpen Excel_BookOpenText Excel_BookSave Excel_BookSaveAs Excel_Close Excel_ColumnToLetter Excel_ColumnToNumber Excel_ConvertFormula Excel_Export Excel_FilterGet Excel_FilterSet Excel_Open Excel_PictureAdd Excel_Print Excel_RangeCopyPaste Excel_RangeDelete Excel_RangeFind Excel_RangeInsert Excel_RangeLinkAddRemove Excel_RangeRead Excel_RangeReplace Excel_RangeSort Excel_RangeValidate Excel_RangeWrite Excel_SheetAdd Excel_SheetCopyMove Excel_SheetDelete Excel_SheetList FileCountLines FileCreate FileListToArray FileListToArrayRec FilePrint FileReadToArray FileWriteFromArray FileWriteLog FileWriteToLine FTP_Close FTP_Command FTP_Connect FTP_DecodeInternetStatus FTP_DirCreate FTP_DirDelete FTP_DirGetCurrent FTP_DirPutContents FTP_DirSetCurrent FTP_FileClose FTP_FileDelete FTP_FileGet FTP_FileGetSize FTP_FileOpen FTP_FilePut FTP_FileRead FTP_FileRename FTP_FileTimeLoHiToStr FTP_FindFileClose FTP_FindFileFirst FTP_FindFileNext FTP_GetLastResponseInfo FTP_ListToArray FTP_ListToArray2D FTP_ListToArrayEx FTP_Open FTP_ProgressDownload FTP_ProgressUpload FTP_SetStatusCallback 
GDIPlus_ArrowCapCreate GDIPlus_ArrowCapDispose GDIPlus_ArrowCapGetFillState GDIPlus_ArrowCapGetHeight GDIPlus_ArrowCapGetMiddleInset GDIPlus_ArrowCapGetWidth GDIPlus_ArrowCapSetFillState GDIPlus_ArrowCapSetHeight GDIPlus_ArrowCapSetMiddleInset GDIPlus_ArrowCapSetWidth GDIPlus_BitmapApplyEffect GDIPlus_BitmapApplyEffectEx GDIPlus_BitmapCloneArea GDIPlus_BitmapConvertFormat GDIPlus_BitmapCreateApplyEffect GDIPlus_BitmapCreateApplyEffectEx GDIPlus_BitmapCreateDIBFromBitmap GDIPlus_BitmapCreateFromFile GDIPlus_BitmapCreateFromGraphics GDIPlus_BitmapCreateFromHBITMAP GDIPlus_BitmapCreateFromHICON GDIPlus_BitmapCreateFromHICON32 GDIPlus_BitmapCreateFromMemory GDIPlus_BitmapCreateFromResource GDIPlus_BitmapCreateFromScan0 GDIPlus_BitmapCreateFromStream GDIPlus_BitmapCreateHBITMAPFromBitmap GDIPlus_BitmapDispose GDIPlus_BitmapGetHistogram GDIPlus_BitmapGetHistogramEx GDIPlus_BitmapGetHistogramSize GDIPlus_BitmapGetPixel GDIPlus_BitmapLockBits GDIPlus_BitmapSetPixel GDIPlus_BitmapUnlockBits GDIPlus_BrushClone GDIPlus_BrushCreateSolid GDIPlus_BrushDispose GDIPlus_BrushGetSolidColor GDIPlus_BrushGetType GDIPlus_BrushSetSolidColor GDIPlus_ColorMatrixCreate GDIPlus_ColorMatrixCreateGrayScale GDIPlus_ColorMatrixCreateNegative GDIPlus_ColorMatrixCreateSaturation GDIPlus_ColorMatrixCreateScale GDIPlus_ColorMatrixCreateTranslate GDIPlus_CustomLineCapClone GDIPlus_CustomLineCapCreate GDIPlus_CustomLineCapDispose GDIPlus_CustomLineCapGetStrokeCaps GDIPlus_CustomLineCapSetStrokeCaps GDIPlus_Decoders GDIPlus_DecodersGetCount GDIPlus_DecodersGetSize GDIPlus_DrawImageFX GDIPlus_DrawImageFXEx GDIPlus_DrawImagePoints GDIPlus_EffectCreate GDIPlus_EffectCreateBlur GDIPlus_EffectCreateBrightnessContrast GDIPlus_EffectCreateColorBalance GDIPlus_EffectCreateColorCurve GDIPlus_EffectCreateColorLUT GDIPlus_EffectCreateColorMatrix GDIPlus_EffectCreateHueSaturationLightness GDIPlus_EffectCreateLevels GDIPlus_EffectCreateRedEyeCorrection GDIPlus_EffectCreateSharpen GDIPlus_EffectCreateTint GDIPlus_EffectDispose GDIPlus_EffectGetParameters GDIPlus_EffectSetParameters GDIPlus_Encoders GDIPlus_EncodersGetCLSID GDIPlus_EncodersGetCount GDIPlus_EncodersGetParamList GDIPlus_EncodersGetParamListSize GDIPlus_EncodersGetSize GDIPlus_FontCreate GDIPlus_FontDispose GDIPlus_FontFamilyCreate GDIPlus_FontFamilyCreateFromCollection GDIPlus_FontFamilyDispose GDIPlus_FontFamilyGetCellAscent GDIPlus_FontFamilyGetCellDescent GDIPlus_FontFamilyGetEmHeight GDIPlus_FontFamilyGetLineSpacing GDIPlus_FontGetHeight GDIPlus_FontPrivateAddFont GDIPlus_FontPrivateAddMemoryFont GDIPlus_FontPrivateCollectionDispose GDIPlus_FontPrivateCreateCollection GDIPlus_GraphicsClear GDIPlus_GraphicsCreateFromHDC GDIPlus_GraphicsCreateFromHWND GDIPlus_GraphicsDispose GDIPlus_GraphicsDrawArc GDIPlus_GraphicsDrawBezier GDIPlus_GraphicsDrawClosedCurve GDIPlus_GraphicsDrawClosedCurve2 GDIPlus_GraphicsDrawCurve GDIPlus_GraphicsDrawCurve2 GDIPlus_GraphicsDrawEllipse GDIPlus_GraphicsDrawImage GDIPlus_GraphicsDrawImagePointsRect GDIPlus_GraphicsDrawImageRect GDIPlus_GraphicsDrawImageRectRect GDIPlus_GraphicsDrawLine GDIPlus_GraphicsDrawPath GDIPlus_GraphicsDrawPie GDIPlus_GraphicsDrawPolygon GDIPlus_GraphicsDrawRect GDIPlus_GraphicsDrawString GDIPlus_GraphicsDrawStringEx GDIPlus_GraphicsFillClosedCurve GDIPlus_GraphicsFillClosedCurve2 GDIPlus_GraphicsFillEllipse GDIPlus_GraphicsFillPath GDIPlus_GraphicsFillPie GDIPlus_GraphicsFillPolygon GDIPlus_GraphicsFillRect GDIPlus_GraphicsFillRegion GDIPlus_GraphicsGetCompositingMode GDIPlus_GraphicsGetCompositingQuality 
GDIPlus_GraphicsGetDC GDIPlus_GraphicsGetInterpolationMode GDIPlus_GraphicsGetSmoothingMode GDIPlus_GraphicsGetTransform GDIPlus_GraphicsMeasureCharacterRanges GDIPlus_GraphicsMeasureString GDIPlus_GraphicsReleaseDC GDIPlus_GraphicsResetClip GDIPlus_GraphicsResetTransform GDIPlus_GraphicsRestore GDIPlus_GraphicsRotateTransform GDIPlus_GraphicsSave GDIPlus_GraphicsScaleTransform GDIPlus_GraphicsSetClipPath GDIPlus_GraphicsSetClipRect GDIPlus_GraphicsSetClipRegion GDIPlus_GraphicsSetCompositingMode GDIPlus_GraphicsSetCompositingQuality GDIPlus_GraphicsSetInterpolationMode GDIPlus_GraphicsSetPixelOffsetMode GDIPlus_GraphicsSetSmoothingMode GDIPlus_GraphicsSetTextRenderingHint GDIPlus_GraphicsSetTransform GDIPlus_GraphicsTransformPoints GDIPlus_GraphicsTranslateTransform GDIPlus_HatchBrushCreate GDIPlus_HICONCreateFromBitmap GDIPlus_ImageAttributesCreate GDIPlus_ImageAttributesDispose GDIPlus_ImageAttributesSetColorKeys GDIPlus_ImageAttributesSetColorMatrix GDIPlus_ImageDispose GDIPlus_ImageGetDimension GDIPlus_ImageGetFlags GDIPlus_ImageGetGraphicsContext GDIPlus_ImageGetHeight GDIPlus_ImageGetHorizontalResolution GDIPlus_ImageGetPixelFormat GDIPlus_ImageGetRawFormat GDIPlus_ImageGetThumbnail GDIPlus_ImageGetType GDIPlus_ImageGetVerticalResolution GDIPlus_ImageGetWidth GDIPlus_ImageLoadFromFile GDIPlus_ImageLoadFromStream GDIPlus_ImageResize GDIPlus_ImageRotateFlip GDIPlus_ImageSaveToFile GDIPlus_ImageSaveToFileEx GDIPlus_ImageSaveToStream GDIPlus_ImageScale GDIPlus_LineBrushCreate GDIPlus_LineBrushCreateFromRect GDIPlus_LineBrushCreateFromRectWithAngle GDIPlus_LineBrushGetColors GDIPlus_LineBrushGetRect GDIPlus_LineBrushMultiplyTransform GDIPlus_LineBrushResetTransform GDIPlus_LineBrushSetBlend GDIPlus_LineBrushSetColors GDIPlus_LineBrushSetGammaCorrection GDIPlus_LineBrushSetLinearBlend GDIPlus_LineBrushSetPresetBlend GDIPlus_LineBrushSetSigmaBlend GDIPlus_LineBrushSetTransform GDIPlus_MatrixClone GDIPlus_MatrixCreate GDIPlus_MatrixDispose GDIPlus_MatrixGetElements GDIPlus_MatrixInvert GDIPlus_MatrixMultiply GDIPlus_MatrixRotate GDIPlus_MatrixScale GDIPlus_MatrixSetElements GDIPlus_MatrixShear GDIPlus_MatrixTransformPoints GDIPlus_MatrixTranslate GDIPlus_PaletteInitialize GDIPlus_ParamAdd GDIPlus_ParamInit GDIPlus_ParamSize GDIPlus_PathAddArc GDIPlus_PathAddBezier GDIPlus_PathAddClosedCurve GDIPlus_PathAddClosedCurve2 GDIPlus_PathAddCurve GDIPlus_PathAddCurve2 GDIPlus_PathAddCurve3 GDIPlus_PathAddEllipse GDIPlus_PathAddLine GDIPlus_PathAddLine2 GDIPlus_PathAddPath GDIPlus_PathAddPie GDIPlus_PathAddPolygon GDIPlus_PathAddRectangle GDIPlus_PathAddString GDIPlus_PathBrushCreate GDIPlus_PathBrushCreateFromPath GDIPlus_PathBrushGetCenterPoint GDIPlus_PathBrushGetFocusScales GDIPlus_PathBrushGetPointCount GDIPlus_PathBrushGetRect GDIPlus_PathBrushGetWrapMode GDIPlus_PathBrushMultiplyTransform GDIPlus_PathBrushResetTransform GDIPlus_PathBrushSetBlend GDIPlus_PathBrushSetCenterColor GDIPlus_PathBrushSetCenterPoint GDIPlus_PathBrushSetFocusScales GDIPlus_PathBrushSetGammaCorrection GDIPlus_PathBrushSetLinearBlend GDIPlus_PathBrushSetPresetBlend GDIPlus_PathBrushSetSigmaBlend GDIPlus_PathBrushSetSurroundColor GDIPlus_PathBrushSetSurroundColorsWithCount GDIPlus_PathBrushSetTransform GDIPlus_PathBrushSetWrapMode GDIPlus_PathClone GDIPlus_PathCloseFigure GDIPlus_PathCreate GDIPlus_PathCreate2 GDIPlus_PathDispose GDIPlus_PathFlatten GDIPlus_PathGetData GDIPlus_PathGetFillMode GDIPlus_PathGetLastPoint GDIPlus_PathGetPointCount GDIPlus_PathGetPoints GDIPlus_PathGetWorldBounds 
GDIPlus_PathIsOutlineVisiblePoint GDIPlus_PathIsVisiblePoint GDIPlus_PathIterCreate GDIPlus_PathIterDispose GDIPlus_PathIterGetSubpathCount GDIPlus_PathIterNextMarkerPath GDIPlus_PathIterNextSubpathPath GDIPlus_PathIterRewind GDIPlus_PathReset GDIPlus_PathReverse GDIPlus_PathSetFillMode GDIPlus_PathSetMarker GDIPlus_PathStartFigure GDIPlus_PathTransform GDIPlus_PathWarp GDIPlus_PathWiden GDIPlus_PathWindingModeOutline GDIPlus_PenCreate GDIPlus_PenCreate2 GDIPlus_PenDispose GDIPlus_PenGetAlignment GDIPlus_PenGetColor GDIPlus_PenGetCustomEndCap GDIPlus_PenGetDashCap GDIPlus_PenGetDashStyle GDIPlus_PenGetEndCap GDIPlus_PenGetMiterLimit GDIPlus_PenGetWidth GDIPlus_PenSetAlignment GDIPlus_PenSetColor GDIPlus_PenSetCustomEndCap GDIPlus_PenSetDashCap GDIPlus_PenSetDashStyle GDIPlus_PenSetEndCap GDIPlus_PenSetLineCap GDIPlus_PenSetLineJoin GDIPlus_PenSetMiterLimit GDIPlus_PenSetStartCap GDIPlus_PenSetWidth GDIPlus_RectFCreate GDIPlus_RegionClone GDIPlus_RegionCombinePath GDIPlus_RegionCombineRect GDIPlus_RegionCombineRegion GDIPlus_RegionCreate GDIPlus_RegionCreateFromPath GDIPlus_RegionCreateFromRect GDIPlus_RegionDispose GDIPlus_RegionGetBounds GDIPlus_RegionGetHRgn GDIPlus_RegionTransform GDIPlus_RegionTranslate GDIPlus_Shutdown GDIPlus_Startup GDIPlus_StringFormatCreate GDIPlus_StringFormatDispose GDIPlus_StringFormatGetMeasurableCharacterRangeCount GDIPlus_StringFormatSetAlign GDIPlus_StringFormatSetLineAlign GDIPlus_StringFormatSetMeasurableCharacterRanges GDIPlus_TextureCreate GDIPlus_TextureCreate2 GDIPlus_TextureCreateIA GetIP GUICtrlAVI_Close GUICtrlAVI_Create GUICtrlAVI_Destroy GUICtrlAVI_IsPlaying GUICtrlAVI_Open GUICtrlAVI_OpenEx GUICtrlAVI_Play GUICtrlAVI_Seek GUICtrlAVI_Show GUICtrlAVI_Stop GUICtrlButton_Click GUICtrlButton_Create GUICtrlButton_Destroy GUICtrlButton_Enable GUICtrlButton_GetCheck GUICtrlButton_GetFocus GUICtrlButton_GetIdealSize GUICtrlButton_GetImage GUICtrlButton_GetImageList GUICtrlButton_GetNote GUICtrlButton_GetNoteLength GUICtrlButton_GetSplitInfo GUICtrlButton_GetState GUICtrlButton_GetText GUICtrlButton_GetTextMargin GUICtrlButton_SetCheck GUICtrlButton_SetDontClick GUICtrlButton_SetFocus GUICtrlButton_SetImage GUICtrlButton_SetImageList GUICtrlButton_SetNote GUICtrlButton_SetShield GUICtrlButton_SetSize GUICtrlButton_SetSplitInfo GUICtrlButton_SetState GUICtrlButton_SetStyle GUICtrlButton_SetText GUICtrlButton_SetTextMargin GUICtrlButton_Show GUICtrlComboBoxEx_AddDir GUICtrlComboBoxEx_AddString GUICtrlComboBoxEx_BeginUpdate GUICtrlComboBoxEx_Create GUICtrlComboBoxEx_CreateSolidBitMap GUICtrlComboBoxEx_DeleteString GUICtrlComboBoxEx_Destroy GUICtrlComboBoxEx_EndUpdate GUICtrlComboBoxEx_FindStringExact GUICtrlComboBoxEx_GetComboBoxInfo GUICtrlComboBoxEx_GetComboControl GUICtrlComboBoxEx_GetCount GUICtrlComboBoxEx_GetCurSel GUICtrlComboBoxEx_GetDroppedControlRect GUICtrlComboBoxEx_GetDroppedControlRectEx GUICtrlComboBoxEx_GetDroppedState GUICtrlComboBoxEx_GetDroppedWidth GUICtrlComboBoxEx_GetEditControl GUICtrlComboBoxEx_GetEditSel GUICtrlComboBoxEx_GetEditText GUICtrlComboBoxEx_GetExtendedStyle GUICtrlComboBoxEx_GetExtendedUI GUICtrlComboBoxEx_GetImageList GUICtrlComboBoxEx_GetItem GUICtrlComboBoxEx_GetItemEx GUICtrlComboBoxEx_GetItemHeight GUICtrlComboBoxEx_GetItemImage GUICtrlComboBoxEx_GetItemIndent GUICtrlComboBoxEx_GetItemOverlayImage GUICtrlComboBoxEx_GetItemParam GUICtrlComboBoxEx_GetItemSelectedImage GUICtrlComboBoxEx_GetItemText GUICtrlComboBoxEx_GetItemTextLen GUICtrlComboBoxEx_GetList GUICtrlComboBoxEx_GetListArray GUICtrlComboBoxEx_GetLocale 
GUICtrlComboBoxEx_GetLocaleCountry GUICtrlComboBoxEx_GetLocaleLang GUICtrlComboBoxEx_GetLocalePrimLang GUICtrlComboBoxEx_GetLocaleSubLang GUICtrlComboBoxEx_GetMinVisible GUICtrlComboBoxEx_GetTopIndex GUICtrlComboBoxEx_GetUnicode GUICtrlComboBoxEx_InitStorage GUICtrlComboBoxEx_InsertString GUICtrlComboBoxEx_LimitText GUICtrlComboBoxEx_ReplaceEditSel GUICtrlComboBoxEx_ResetContent GUICtrlComboBoxEx_SetCurSel GUICtrlComboBoxEx_SetDroppedWidth GUICtrlComboBoxEx_SetEditSel GUICtrlComboBoxEx_SetEditText GUICtrlComboBoxEx_SetExtendedStyle GUICtrlComboBoxEx_SetExtendedUI GUICtrlComboBoxEx_SetImageList GUICtrlComboBoxEx_SetItem GUICtrlComboBoxEx_SetItemEx GUICtrlComboBoxEx_SetItemHeight GUICtrlComboBoxEx_SetItemImage GUICtrlComboBoxEx_SetItemIndent GUICtrlComboBoxEx_SetItemOverlayImage GUICtrlComboBoxEx_SetItemParam GUICtrlComboBoxEx_SetItemSelectedImage GUICtrlComboBoxEx_SetMinVisible GUICtrlComboBoxEx_SetTopIndex GUICtrlComboBoxEx_SetUnicode GUICtrlComboBoxEx_ShowDropDown GUICtrlComboBox_AddDir GUICtrlComboBox_AddString GUICtrlComboBox_AutoComplete GUICtrlComboBox_BeginUpdate GUICtrlComboBox_Create GUICtrlComboBox_DeleteString GUICtrlComboBox_Destroy GUICtrlComboBox_EndUpdate GUICtrlComboBox_FindString GUICtrlComboBox_FindStringExact GUICtrlComboBox_GetComboBoxInfo GUICtrlComboBox_GetCount GUICtrlComboBox_GetCueBanner GUICtrlComboBox_GetCurSel GUICtrlComboBox_GetDroppedControlRect GUICtrlComboBox_GetDroppedControlRectEx GUICtrlComboBox_GetDroppedState GUICtrlComboBox_GetDroppedWidth GUICtrlComboBox_GetEditSel GUICtrlComboBox_GetEditText GUICtrlComboBox_GetExtendedUI GUICtrlComboBox_GetHorizontalExtent GUICtrlComboBox_GetItemHeight GUICtrlComboBox_GetLBText GUICtrlComboBox_GetLBTextLen GUICtrlComboBox_GetList GUICtrlComboBox_GetListArray GUICtrlComboBox_GetLocale GUICtrlComboBox_GetLocaleCountry GUICtrlComboBox_GetLocaleLang GUICtrlComboBox_GetLocalePrimLang GUICtrlComboBox_GetLocaleSubLang GUICtrlComboBox_GetMinVisible GUICtrlComboBox_GetTopIndex GUICtrlComboBox_InitStorage GUICtrlComboBox_InsertString GUICtrlComboBox_LimitText GUICtrlComboBox_ReplaceEditSel GUICtrlComboBox_ResetContent GUICtrlComboBox_SelectString GUICtrlComboBox_SetCueBanner GUICtrlComboBox_SetCurSel GUICtrlComboBox_SetDroppedWidth GUICtrlComboBox_SetEditSel GUICtrlComboBox_SetEditText GUICtrlComboBox_SetExtendedUI GUICtrlComboBox_SetHorizontalExtent GUICtrlComboBox_SetItemHeight GUICtrlComboBox_SetMinVisible GUICtrlComboBox_SetTopIndex GUICtrlComboBox_ShowDropDown GUICtrlDTP_Create GUICtrlDTP_Destroy GUICtrlDTP_GetMCColor GUICtrlDTP_GetMCFont GUICtrlDTP_GetMonthCal GUICtrlDTP_GetRange GUICtrlDTP_GetRangeEx GUICtrlDTP_GetSystemTime GUICtrlDTP_GetSystemTimeEx GUICtrlDTP_SetFormat GUICtrlDTP_SetMCColor GUICtrlDTP_SetMCFont GUICtrlDTP_SetRange GUICtrlDTP_SetRangeEx GUICtrlDTP_SetSystemTime GUICtrlDTP_SetSystemTimeEx GUICtrlEdit_AppendText GUICtrlEdit_BeginUpdate GUICtrlEdit_CanUndo GUICtrlEdit_CharFromPos GUICtrlEdit_Create GUICtrlEdit_Destroy GUICtrlEdit_EmptyUndoBuffer GUICtrlEdit_EndUpdate GUICtrlEdit_Find GUICtrlEdit_FmtLines GUICtrlEdit_GetCueBanner GUICtrlEdit_GetFirstVisibleLine GUICtrlEdit_GetLimitText GUICtrlEdit_GetLine GUICtrlEdit_GetLineCount GUICtrlEdit_GetMargins GUICtrlEdit_GetModify GUICtrlEdit_GetPasswordChar GUICtrlEdit_GetRECT GUICtrlEdit_GetRECTEx GUICtrlEdit_GetSel GUICtrlEdit_GetText GUICtrlEdit_GetTextLen GUICtrlEdit_HideBalloonTip GUICtrlEdit_InsertText GUICtrlEdit_LineFromChar GUICtrlEdit_LineIndex GUICtrlEdit_LineLength GUICtrlEdit_LineScroll GUICtrlEdit_PosFromChar GUICtrlEdit_ReplaceSel 
GUICtrlEdit_Scroll GUICtrlEdit_SetCueBanner GUICtrlEdit_SetLimitText GUICtrlEdit_SetMargins GUICtrlEdit_SetModify GUICtrlEdit_SetPasswordChar GUICtrlEdit_SetReadOnly GUICtrlEdit_SetRECT GUICtrlEdit_SetRECTEx GUICtrlEdit_SetRECTNP GUICtrlEdit_SetRectNPEx GUICtrlEdit_SetSel GUICtrlEdit_SetTabStops GUICtrlEdit_SetText GUICtrlEdit_ShowBalloonTip GUICtrlEdit_Undo GUICtrlHeader_AddItem GUICtrlHeader_ClearFilter GUICtrlHeader_ClearFilterAll GUICtrlHeader_Create GUICtrlHeader_CreateDragImage GUICtrlHeader_DeleteItem GUICtrlHeader_Destroy GUICtrlHeader_EditFilter GUICtrlHeader_GetBitmapMargin GUICtrlHeader_GetImageList GUICtrlHeader_GetItem GUICtrlHeader_GetItemAlign GUICtrlHeader_GetItemBitmap GUICtrlHeader_GetItemCount GUICtrlHeader_GetItemDisplay GUICtrlHeader_GetItemFlags GUICtrlHeader_GetItemFormat GUICtrlHeader_GetItemImage GUICtrlHeader_GetItemOrder GUICtrlHeader_GetItemParam GUICtrlHeader_GetItemRect GUICtrlHeader_GetItemRectEx GUICtrlHeader_GetItemText GUICtrlHeader_GetItemWidth GUICtrlHeader_GetOrderArray GUICtrlHeader_GetUnicodeFormat GUICtrlHeader_HitTest GUICtrlHeader_InsertItem GUICtrlHeader_Layout GUICtrlHeader_OrderToIndex GUICtrlHeader_SetBitmapMargin GUICtrlHeader_SetFilterChangeTimeout GUICtrlHeader_SetHotDivider GUICtrlHeader_SetImageList GUICtrlHeader_SetItem GUICtrlHeader_SetItemAlign GUICtrlHeader_SetItemBitmap GUICtrlHeader_SetItemDisplay GUICtrlHeader_SetItemFlags GUICtrlHeader_SetItemFormat GUICtrlHeader_SetItemImage GUICtrlHeader_SetItemOrder GUICtrlHeader_SetItemParam GUICtrlHeader_SetItemText GUICtrlHeader_SetItemWidth GUICtrlHeader_SetOrderArray GUICtrlHeader_SetUnicodeFormat GUICtrlIpAddress_ClearAddress GUICtrlIpAddress_Create GUICtrlIpAddress_Destroy GUICtrlIpAddress_Get GUICtrlIpAddress_GetArray GUICtrlIpAddress_GetEx GUICtrlIpAddress_IsBlank GUICtrlIpAddress_Set GUICtrlIpAddress_SetArray GUICtrlIpAddress_SetEx GUICtrlIpAddress_SetFocus GUICtrlIpAddress_SetFont GUICtrlIpAddress_SetRange GUICtrlIpAddress_ShowHide GUICtrlListBox_AddFile GUICtrlListBox_AddString GUICtrlListBox_BeginUpdate GUICtrlListBox_ClickItem GUICtrlListBox_Create GUICtrlListBox_DeleteString GUICtrlListBox_Destroy GUICtrlListBox_Dir GUICtrlListBox_EndUpdate GUICtrlListBox_FindInText GUICtrlListBox_FindString GUICtrlListBox_GetAnchorIndex GUICtrlListBox_GetCaretIndex GUICtrlListBox_GetCount GUICtrlListBox_GetCurSel GUICtrlListBox_GetHorizontalExtent GUICtrlListBox_GetItemData GUICtrlListBox_GetItemHeight GUICtrlListBox_GetItemRect GUICtrlListBox_GetItemRectEx GUICtrlListBox_GetListBoxInfo GUICtrlListBox_GetLocale GUICtrlListBox_GetLocaleCountry GUICtrlListBox_GetLocaleLang GUICtrlListBox_GetLocalePrimLang GUICtrlListBox_GetLocaleSubLang GUICtrlListBox_GetSel GUICtrlListBox_GetSelCount GUICtrlListBox_GetSelItems GUICtrlListBox_GetSelItemsText GUICtrlListBox_GetText GUICtrlListBox_GetTextLen GUICtrlListBox_GetTopIndex GUICtrlListBox_InitStorage GUICtrlListBox_InsertString GUICtrlListBox_ItemFromPoint GUICtrlListBox_ReplaceString GUICtrlListBox_ResetContent GUICtrlListBox_SelectString GUICtrlListBox_SelItemRange GUICtrlListBox_SelItemRangeEx GUICtrlListBox_SetAnchorIndex GUICtrlListBox_SetCaretIndex GUICtrlListBox_SetColumnWidth GUICtrlListBox_SetCurSel GUICtrlListBox_SetHorizontalExtent GUICtrlListBox_SetItemData GUICtrlListBox_SetItemHeight GUICtrlListBox_SetLocale GUICtrlListBox_SetSel GUICtrlListBox_SetTabStops GUICtrlListBox_SetTopIndex GUICtrlListBox_Sort GUICtrlListBox_SwapString GUICtrlListBox_UpdateHScroll GUICtrlListView_AddArray GUICtrlListView_AddColumn GUICtrlListView_AddItem 
GUICtrlListView_AddSubItem GUICtrlListView_ApproximateViewHeight GUICtrlListView_ApproximateViewRect GUICtrlListView_ApproximateViewWidth GUICtrlListView_Arrange GUICtrlListView_BeginUpdate GUICtrlListView_CancelEditLabel GUICtrlListView_ClickItem GUICtrlListView_CopyItems GUICtrlListView_Create GUICtrlListView_CreateDragImage GUICtrlListView_CreateSolidBitMap GUICtrlListView_DeleteAllItems GUICtrlListView_DeleteColumn GUICtrlListView_DeleteItem GUICtrlListView_DeleteItemsSelected GUICtrlListView_Destroy GUICtrlListView_DrawDragImage GUICtrlListView_EditLabel GUICtrlListView_EnableGroupView GUICtrlListView_EndUpdate GUICtrlListView_EnsureVisible GUICtrlListView_FindInText GUICtrlListView_FindItem GUICtrlListView_FindNearest GUICtrlListView_FindParam GUICtrlListView_FindText GUICtrlListView_GetBkColor GUICtrlListView_GetBkImage GUICtrlListView_GetCallbackMask GUICtrlListView_GetColumn GUICtrlListView_GetColumnCount GUICtrlListView_GetColumnOrder GUICtrlListView_GetColumnOrderArray GUICtrlListView_GetColumnWidth GUICtrlListView_GetCounterPage GUICtrlListView_GetEditControl GUICtrlListView_GetExtendedListViewStyle GUICtrlListView_GetFocusedGroup GUICtrlListView_GetGroupCount GUICtrlListView_GetGroupInfo GUICtrlListView_GetGroupInfoByIndex GUICtrlListView_GetGroupRect GUICtrlListView_GetGroupViewEnabled GUICtrlListView_GetHeader GUICtrlListView_GetHotCursor GUICtrlListView_GetHotItem GUICtrlListView_GetHoverTime GUICtrlListView_GetImageList GUICtrlListView_GetISearchString GUICtrlListView_GetItem GUICtrlListView_GetItemChecked GUICtrlListView_GetItemCount GUICtrlListView_GetItemCut GUICtrlListView_GetItemDropHilited GUICtrlListView_GetItemEx GUICtrlListView_GetItemFocused GUICtrlListView_GetItemGroupID GUICtrlListView_GetItemImage GUICtrlListView_GetItemIndent GUICtrlListView_GetItemParam GUICtrlListView_GetItemPosition GUICtrlListView_GetItemPositionX GUICtrlListView_GetItemPositionY GUICtrlListView_GetItemRect GUICtrlListView_GetItemRectEx GUICtrlListView_GetItemSelected GUICtrlListView_GetItemSpacing GUICtrlListView_GetItemSpacingX GUICtrlListView_GetItemSpacingY GUICtrlListView_GetItemState GUICtrlListView_GetItemStateImage GUICtrlListView_GetItemText GUICtrlListView_GetItemTextArray GUICtrlListView_GetItemTextString GUICtrlListView_GetNextItem GUICtrlListView_GetNumberOfWorkAreas GUICtrlListView_GetOrigin GUICtrlListView_GetOriginX GUICtrlListView_GetOriginY GUICtrlListView_GetOutlineColor GUICtrlListView_GetSelectedColumn GUICtrlListView_GetSelectedCount GUICtrlListView_GetSelectedIndices GUICtrlListView_GetSelectionMark GUICtrlListView_GetStringWidth GUICtrlListView_GetSubItemRect GUICtrlListView_GetTextBkColor GUICtrlListView_GetTextColor GUICtrlListView_GetToolTips GUICtrlListView_GetTopIndex GUICtrlListView_GetUnicodeFormat GUICtrlListView_GetView GUICtrlListView_GetViewDetails GUICtrlListView_GetViewLarge GUICtrlListView_GetViewList GUICtrlListView_GetViewRect GUICtrlListView_GetViewSmall GUICtrlListView_GetViewTile GUICtrlListView_HideColumn GUICtrlListView_HitTest GUICtrlListView_InsertColumn GUICtrlListView_InsertGroup GUICtrlListView_InsertItem GUICtrlListView_JustifyColumn GUICtrlListView_MapIDToIndex GUICtrlListView_MapIndexToID GUICtrlListView_RedrawItems GUICtrlListView_RegisterSortCallBack GUICtrlListView_RemoveAllGroups GUICtrlListView_RemoveGroup GUICtrlListView_Scroll GUICtrlListView_SetBkColor GUICtrlListView_SetBkImage GUICtrlListView_SetCallBackMask GUICtrlListView_SetColumn GUICtrlListView_SetColumnOrder GUICtrlListView_SetColumnOrderArray 
GUICtrlListView_SetColumnWidth GUICtrlListView_SetExtendedListViewStyle GUICtrlListView_SetGroupInfo GUICtrlListView_SetHotItem GUICtrlListView_SetHoverTime GUICtrlListView_SetIconSpacing GUICtrlListView_SetImageList GUICtrlListView_SetItem GUICtrlListView_SetItemChecked GUICtrlListView_SetItemCount GUICtrlListView_SetItemCut GUICtrlListView_SetItemDropHilited GUICtrlListView_SetItemEx GUICtrlListView_SetItemFocused GUICtrlListView_SetItemGroupID GUICtrlListView_SetItemImage GUICtrlListView_SetItemIndent GUICtrlListView_SetItemParam GUICtrlListView_SetItemPosition GUICtrlListView_SetItemPosition32 GUICtrlListView_SetItemSelected GUICtrlListView_SetItemState GUICtrlListView_SetItemStateImage GUICtrlListView_SetItemText GUICtrlListView_SetOutlineColor GUICtrlListView_SetSelectedColumn GUICtrlListView_SetSelectionMark GUICtrlListView_SetTextBkColor GUICtrlListView_SetTextColor GUICtrlListView_SetToolTips GUICtrlListView_SetUnicodeFormat GUICtrlListView_SetView GUICtrlListView_SetWorkAreas GUICtrlListView_SimpleSort GUICtrlListView_SortItems GUICtrlListView_SubItemHitTest GUICtrlListView_UnRegisterSortCallBack GUICtrlMenu_AddMenuItem GUICtrlMenu_AppendMenu GUICtrlMenu_CalculatePopupWindowPosition GUICtrlMenu_CheckMenuItem GUICtrlMenu_CheckRadioItem GUICtrlMenu_CreateMenu GUICtrlMenu_CreatePopup GUICtrlMenu_DeleteMenu GUICtrlMenu_DestroyMenu GUICtrlMenu_DrawMenuBar GUICtrlMenu_EnableMenuItem GUICtrlMenu_FindItem GUICtrlMenu_FindParent GUICtrlMenu_GetItemBmp GUICtrlMenu_GetItemBmpChecked GUICtrlMenu_GetItemBmpUnchecked GUICtrlMenu_GetItemChecked GUICtrlMenu_GetItemCount GUICtrlMenu_GetItemData GUICtrlMenu_GetItemDefault GUICtrlMenu_GetItemDisabled GUICtrlMenu_GetItemEnabled GUICtrlMenu_GetItemGrayed GUICtrlMenu_GetItemHighlighted GUICtrlMenu_GetItemID GUICtrlMenu_GetItemInfo GUICtrlMenu_GetItemRect GUICtrlMenu_GetItemRectEx GUICtrlMenu_GetItemState GUICtrlMenu_GetItemStateEx GUICtrlMenu_GetItemSubMenu GUICtrlMenu_GetItemText GUICtrlMenu_GetItemType GUICtrlMenu_GetMenu GUICtrlMenu_GetMenuBackground GUICtrlMenu_GetMenuBarInfo GUICtrlMenu_GetMenuContextHelpID GUICtrlMenu_GetMenuData GUICtrlMenu_GetMenuDefaultItem GUICtrlMenu_GetMenuHeight GUICtrlMenu_GetMenuInfo GUICtrlMenu_GetMenuStyle GUICtrlMenu_GetSystemMenu GUICtrlMenu_InsertMenuItem GUICtrlMenu_InsertMenuItemEx GUICtrlMenu_IsMenu GUICtrlMenu_LoadMenu GUICtrlMenu_MapAccelerator GUICtrlMenu_MenuItemFromPoint GUICtrlMenu_RemoveMenu GUICtrlMenu_SetItemBitmaps GUICtrlMenu_SetItemBmp GUICtrlMenu_SetItemBmpChecked GUICtrlMenu_SetItemBmpUnchecked GUICtrlMenu_SetItemChecked GUICtrlMenu_SetItemData GUICtrlMenu_SetItemDefault GUICtrlMenu_SetItemDisabled GUICtrlMenu_SetItemEnabled GUICtrlMenu_SetItemGrayed GUICtrlMenu_SetItemHighlighted GUICtrlMenu_SetItemID GUICtrlMenu_SetItemInfo GUICtrlMenu_SetItemState GUICtrlMenu_SetItemSubMenu GUICtrlMenu_SetItemText GUICtrlMenu_SetItemType GUICtrlMenu_SetMenu GUICtrlMenu_SetMenuBackground GUICtrlMenu_SetMenuContextHelpID GUICtrlMenu_SetMenuData GUICtrlMenu_SetMenuDefaultItem GUICtrlMenu_SetMenuHeight GUICtrlMenu_SetMenuInfo GUICtrlMenu_SetMenuStyle GUICtrlMenu_TrackPopupMenu GUICtrlMonthCal_Create GUICtrlMonthCal_Destroy GUICtrlMonthCal_GetCalendarBorder GUICtrlMonthCal_GetCalendarCount GUICtrlMonthCal_GetColor GUICtrlMonthCal_GetColorArray GUICtrlMonthCal_GetCurSel GUICtrlMonthCal_GetCurSelStr GUICtrlMonthCal_GetFirstDOW GUICtrlMonthCal_GetFirstDOWStr GUICtrlMonthCal_GetMaxSelCount GUICtrlMonthCal_GetMaxTodayWidth GUICtrlMonthCal_GetMinReqHeight GUICtrlMonthCal_GetMinReqRect GUICtrlMonthCal_GetMinReqRectArray 
GUICtrlMonthCal_GetMinReqWidth GUICtrlMonthCal_GetMonthDelta GUICtrlMonthCal_GetMonthRange GUICtrlMonthCal_GetMonthRangeMax GUICtrlMonthCal_GetMonthRangeMaxStr GUICtrlMonthCal_GetMonthRangeMin GUICtrlMonthCal_GetMonthRangeMinStr GUICtrlMonthCal_GetMonthRangeSpan GUICtrlMonthCal_GetRange GUICtrlMonthCal_GetRangeMax GUICtrlMonthCal_GetRangeMaxStr GUICtrlMonthCal_GetRangeMin GUICtrlMonthCal_GetRangeMinStr GUICtrlMonthCal_GetSelRange GUICtrlMonthCal_GetSelRangeMax GUICtrlMonthCal_GetSelRangeMaxStr GUICtrlMonthCal_GetSelRangeMin GUICtrlMonthCal_GetSelRangeMinStr GUICtrlMonthCal_GetToday GUICtrlMonthCal_GetTodayStr GUICtrlMonthCal_GetUnicodeFormat GUICtrlMonthCal_HitTest GUICtrlMonthCal_SetCalendarBorder GUICtrlMonthCal_SetColor GUICtrlMonthCal_SetCurSel GUICtrlMonthCal_SetDayState GUICtrlMonthCal_SetFirstDOW GUICtrlMonthCal_SetMaxSelCount GUICtrlMonthCal_SetMonthDelta GUICtrlMonthCal_SetRange GUICtrlMonthCal_SetSelRange GUICtrlMonthCal_SetToday GUICtrlMonthCal_SetUnicodeFormat GUICtrlRebar_AddBand GUICtrlRebar_AddToolBarBand GUICtrlRebar_BeginDrag GUICtrlRebar_Create GUICtrlRebar_DeleteBand GUICtrlRebar_Destroy GUICtrlRebar_DragMove GUICtrlRebar_EndDrag GUICtrlRebar_GetBandBackColor GUICtrlRebar_GetBandBorders GUICtrlRebar_GetBandBordersEx GUICtrlRebar_GetBandChildHandle GUICtrlRebar_GetBandChildSize GUICtrlRebar_GetBandCount GUICtrlRebar_GetBandForeColor GUICtrlRebar_GetBandHeaderSize GUICtrlRebar_GetBandID GUICtrlRebar_GetBandIdealSize GUICtrlRebar_GetBandLength GUICtrlRebar_GetBandLParam GUICtrlRebar_GetBandMargins GUICtrlRebar_GetBandMarginsEx GUICtrlRebar_GetBandRect GUICtrlRebar_GetBandRectEx GUICtrlRebar_GetBandStyle GUICtrlRebar_GetBandStyleBreak GUICtrlRebar_GetBandStyleChildEdge GUICtrlRebar_GetBandStyleFixedBMP GUICtrlRebar_GetBandStyleFixedSize GUICtrlRebar_GetBandStyleGripperAlways GUICtrlRebar_GetBandStyleHidden GUICtrlRebar_GetBandStyleHideTitle GUICtrlRebar_GetBandStyleNoGripper GUICtrlRebar_GetBandStyleTopAlign GUICtrlRebar_GetBandStyleUseChevron GUICtrlRebar_GetBandStyleVariableHeight GUICtrlRebar_GetBandText GUICtrlRebar_GetBarHeight GUICtrlRebar_GetBarInfo GUICtrlRebar_GetBKColor GUICtrlRebar_GetColorScheme GUICtrlRebar_GetRowCount GUICtrlRebar_GetRowHeight GUICtrlRebar_GetTextColor GUICtrlRebar_GetToolTips GUICtrlRebar_GetUnicodeFormat GUICtrlRebar_HitTest GUICtrlRebar_IDToIndex GUICtrlRebar_MaximizeBand GUICtrlRebar_MinimizeBand GUICtrlRebar_MoveBand GUICtrlRebar_SetBandBackColor GUICtrlRebar_SetBandForeColor GUICtrlRebar_SetBandHeaderSize GUICtrlRebar_SetBandID GUICtrlRebar_SetBandIdealSize GUICtrlRebar_SetBandLength GUICtrlRebar_SetBandLParam GUICtrlRebar_SetBandStyle GUICtrlRebar_SetBandStyleBreak GUICtrlRebar_SetBandStyleChildEdge GUICtrlRebar_SetBandStyleFixedBMP GUICtrlRebar_SetBandStyleFixedSize GUICtrlRebar_SetBandStyleGripperAlways GUICtrlRebar_SetBandStyleHidden GUICtrlRebar_SetBandStyleHideTitle GUICtrlRebar_SetBandStyleNoGripper GUICtrlRebar_SetBandStyleTopAlign GUICtrlRebar_SetBandStyleUseChevron GUICtrlRebar_SetBandStyleVariableHeight GUICtrlRebar_SetBandText GUICtrlRebar_SetBarInfo GUICtrlRebar_SetBKColor GUICtrlRebar_SetColorScheme GUICtrlRebar_SetTextColor GUICtrlRebar_SetToolTips GUICtrlRebar_SetUnicodeFormat GUICtrlRebar_ShowBand GUICtrlRichEdit_AppendText GUICtrlRichEdit_AutoDetectURL GUICtrlRichEdit_CanPaste GUICtrlRichEdit_CanPasteSpecial GUICtrlRichEdit_CanRedo GUICtrlRichEdit_CanUndo GUICtrlRichEdit_ChangeFontSize GUICtrlRichEdit_Copy GUICtrlRichEdit_Create GUICtrlRichEdit_Cut GUICtrlRichEdit_Deselect GUICtrlRichEdit_Destroy 
GUICtrlRichEdit_EmptyUndoBuffer GUICtrlRichEdit_FindText GUICtrlRichEdit_FindTextInRange GUICtrlRichEdit_GetBkColor GUICtrlRichEdit_GetCharAttributes GUICtrlRichEdit_GetCharBkColor GUICtrlRichEdit_GetCharColor GUICtrlRichEdit_GetCharPosFromXY GUICtrlRichEdit_GetCharPosOfNextWord GUICtrlRichEdit_GetCharPosOfPreviousWord GUICtrlRichEdit_GetCharWordBreakInfo GUICtrlRichEdit_GetFirstCharPosOnLine GUICtrlRichEdit_GetFont GUICtrlRichEdit_GetLineCount GUICtrlRichEdit_GetLineLength GUICtrlRichEdit_GetLineNumberFromCharPos GUICtrlRichEdit_GetNextRedo GUICtrlRichEdit_GetNextUndo GUICtrlRichEdit_GetNumberOfFirstVisibleLine GUICtrlRichEdit_GetParaAlignment GUICtrlRichEdit_GetParaAttributes GUICtrlRichEdit_GetParaBorder GUICtrlRichEdit_GetParaIndents GUICtrlRichEdit_GetParaNumbering GUICtrlRichEdit_GetParaShading GUICtrlRichEdit_GetParaSpacing GUICtrlRichEdit_GetParaTabStops GUICtrlRichEdit_GetPasswordChar GUICtrlRichEdit_GetRECT GUICtrlRichEdit_GetScrollPos GUICtrlRichEdit_GetSel GUICtrlRichEdit_GetSelAA GUICtrlRichEdit_GetSelText GUICtrlRichEdit_GetSpaceUnit GUICtrlRichEdit_GetText GUICtrlRichEdit_GetTextInLine GUICtrlRichEdit_GetTextInRange GUICtrlRichEdit_GetTextLength GUICtrlRichEdit_GetVersion GUICtrlRichEdit_GetXYFromCharPos GUICtrlRichEdit_GetZoom GUICtrlRichEdit_GotoCharPos GUICtrlRichEdit_HideSelection GUICtrlRichEdit_InsertText GUICtrlRichEdit_IsModified GUICtrlRichEdit_IsTextSelected GUICtrlRichEdit_Paste GUICtrlRichEdit_PasteSpecial GUICtrlRichEdit_PauseRedraw GUICtrlRichEdit_Redo GUICtrlRichEdit_ReplaceText GUICtrlRichEdit_ResumeRedraw GUICtrlRichEdit_ScrollLineOrPage GUICtrlRichEdit_ScrollLines GUICtrlRichEdit_ScrollToCaret GUICtrlRichEdit_SetBkColor GUICtrlRichEdit_SetCharAttributes GUICtrlRichEdit_SetCharBkColor GUICtrlRichEdit_SetCharColor GUICtrlRichEdit_SetEventMask GUICtrlRichEdit_SetFont GUICtrlRichEdit_SetLimitOnText GUICtrlRichEdit_SetModified GUICtrlRichEdit_SetParaAlignment GUICtrlRichEdit_SetParaAttributes GUICtrlRichEdit_SetParaBorder GUICtrlRichEdit_SetParaIndents GUICtrlRichEdit_SetParaNumbering GUICtrlRichEdit_SetParaShading GUICtrlRichEdit_SetParaSpacing GUICtrlRichEdit_SetParaTabStops GUICtrlRichEdit_SetPasswordChar GUICtrlRichEdit_SetReadOnly GUICtrlRichEdit_SetRECT GUICtrlRichEdit_SetScrollPos GUICtrlRichEdit_SetSel GUICtrlRichEdit_SetSpaceUnit GUICtrlRichEdit_SetTabStops GUICtrlRichEdit_SetText GUICtrlRichEdit_SetUndoLimit GUICtrlRichEdit_SetZoom GUICtrlRichEdit_StreamFromFile GUICtrlRichEdit_StreamFromVar GUICtrlRichEdit_StreamToFile GUICtrlRichEdit_StreamToVar GUICtrlRichEdit_Undo GUICtrlSlider_ClearSel GUICtrlSlider_ClearTics GUICtrlSlider_Create GUICtrlSlider_Destroy GUICtrlSlider_GetBuddy GUICtrlSlider_GetChannelRect GUICtrlSlider_GetChannelRectEx GUICtrlSlider_GetLineSize GUICtrlSlider_GetLogicalTics GUICtrlSlider_GetNumTics GUICtrlSlider_GetPageSize GUICtrlSlider_GetPos GUICtrlSlider_GetRange GUICtrlSlider_GetRangeMax GUICtrlSlider_GetRangeMin GUICtrlSlider_GetSel GUICtrlSlider_GetSelEnd GUICtrlSlider_GetSelStart GUICtrlSlider_GetThumbLength GUICtrlSlider_GetThumbRect GUICtrlSlider_GetThumbRectEx GUICtrlSlider_GetTic GUICtrlSlider_GetTicPos GUICtrlSlider_GetToolTips GUICtrlSlider_GetUnicodeFormat GUICtrlSlider_SetBuddy GUICtrlSlider_SetLineSize GUICtrlSlider_SetPageSize GUICtrlSlider_SetPos GUICtrlSlider_SetRange GUICtrlSlider_SetRangeMax GUICtrlSlider_SetRangeMin GUICtrlSlider_SetSel GUICtrlSlider_SetSelEnd GUICtrlSlider_SetSelStart GUICtrlSlider_SetThumbLength GUICtrlSlider_SetTic GUICtrlSlider_SetTicFreq GUICtrlSlider_SetTipSide GUICtrlSlider_SetToolTips 
GUICtrlSlider_SetUnicodeFormat GUICtrlStatusBar_Create GUICtrlStatusBar_Destroy GUICtrlStatusBar_EmbedControl GUICtrlStatusBar_GetBorders GUICtrlStatusBar_GetBordersHorz GUICtrlStatusBar_GetBordersRect GUICtrlStatusBar_GetBordersVert GUICtrlStatusBar_GetCount GUICtrlStatusBar_GetHeight GUICtrlStatusBar_GetIcon GUICtrlStatusBar_GetParts GUICtrlStatusBar_GetRect GUICtrlStatusBar_GetRectEx GUICtrlStatusBar_GetText GUICtrlStatusBar_GetTextFlags GUICtrlStatusBar_GetTextLength GUICtrlStatusBar_GetTextLengthEx GUICtrlStatusBar_GetTipText GUICtrlStatusBar_GetUnicodeFormat GUICtrlStatusBar_GetWidth GUICtrlStatusBar_IsSimple GUICtrlStatusBar_Resize GUICtrlStatusBar_SetBkColor GUICtrlStatusBar_SetIcon GUICtrlStatusBar_SetMinHeight GUICtrlStatusBar_SetParts GUICtrlStatusBar_SetSimple GUICtrlStatusBar_SetText GUICtrlStatusBar_SetTipText GUICtrlStatusBar_SetUnicodeFormat GUICtrlStatusBar_ShowHide GUICtrlTab_ActivateTab GUICtrlTab_ClickTab GUICtrlTab_Create GUICtrlTab_DeleteAllItems GUICtrlTab_DeleteItem GUICtrlTab_DeselectAll GUICtrlTab_Destroy GUICtrlTab_FindTab GUICtrlTab_GetCurFocus GUICtrlTab_GetCurSel GUICtrlTab_GetDisplayRect GUICtrlTab_GetDisplayRectEx GUICtrlTab_GetExtendedStyle GUICtrlTab_GetImageList GUICtrlTab_GetItem GUICtrlTab_GetItemCount GUICtrlTab_GetItemImage GUICtrlTab_GetItemParam GUICtrlTab_GetItemRect GUICtrlTab_GetItemRectEx GUICtrlTab_GetItemState GUICtrlTab_GetItemText GUICtrlTab_GetRowCount GUICtrlTab_GetToolTips GUICtrlTab_GetUnicodeFormat GUICtrlTab_HighlightItem GUICtrlTab_HitTest GUICtrlTab_InsertItem GUICtrlTab_RemoveImage GUICtrlTab_SetCurFocus GUICtrlTab_SetCurSel GUICtrlTab_SetExtendedStyle GUICtrlTab_SetImageList GUICtrlTab_SetItem GUICtrlTab_SetItemImage GUICtrlTab_SetItemParam GUICtrlTab_SetItemSize GUICtrlTab_SetItemState GUICtrlTab_SetItemText GUICtrlTab_SetMinTabWidth GUICtrlTab_SetPadding GUICtrlTab_SetToolTips GUICtrlTab_SetUnicodeFormat GUICtrlToolbar_AddBitmap GUICtrlToolbar_AddButton GUICtrlToolbar_AddButtonSep GUICtrlToolbar_AddString GUICtrlToolbar_ButtonCount GUICtrlToolbar_CheckButton GUICtrlToolbar_ClickAccel GUICtrlToolbar_ClickButton GUICtrlToolbar_ClickIndex GUICtrlToolbar_CommandToIndex GUICtrlToolbar_Create GUICtrlToolbar_Customize GUICtrlToolbar_DeleteButton GUICtrlToolbar_Destroy GUICtrlToolbar_EnableButton GUICtrlToolbar_FindToolbar GUICtrlToolbar_GetAnchorHighlight GUICtrlToolbar_GetBitmapFlags GUICtrlToolbar_GetButtonBitmap GUICtrlToolbar_GetButtonInfo GUICtrlToolbar_GetButtonInfoEx GUICtrlToolbar_GetButtonParam GUICtrlToolbar_GetButtonRect GUICtrlToolbar_GetButtonRectEx GUICtrlToolbar_GetButtonSize GUICtrlToolbar_GetButtonState GUICtrlToolbar_GetButtonStyle GUICtrlToolbar_GetButtonText GUICtrlToolbar_GetColorScheme GUICtrlToolbar_GetDisabledImageList GUICtrlToolbar_GetExtendedStyle GUICtrlToolbar_GetHotImageList GUICtrlToolbar_GetHotItem GUICtrlToolbar_GetImageList GUICtrlToolbar_GetInsertMark GUICtrlToolbar_GetInsertMarkColor GUICtrlToolbar_GetMaxSize GUICtrlToolbar_GetMetrics GUICtrlToolbar_GetPadding GUICtrlToolbar_GetRows GUICtrlToolbar_GetString GUICtrlToolbar_GetStyle GUICtrlToolbar_GetStyleAltDrag GUICtrlToolbar_GetStyleCustomErase GUICtrlToolbar_GetStyleFlat GUICtrlToolbar_GetStyleList GUICtrlToolbar_GetStyleRegisterDrop GUICtrlToolbar_GetStyleToolTips GUICtrlToolbar_GetStyleTransparent GUICtrlToolbar_GetStyleWrapable GUICtrlToolbar_GetTextRows GUICtrlToolbar_GetToolTips GUICtrlToolbar_GetUnicodeFormat GUICtrlToolbar_HideButton GUICtrlToolbar_HighlightButton GUICtrlToolbar_HitTest GUICtrlToolbar_IndexToCommand 
GUICtrlToolbar_InsertButton GUICtrlToolbar_InsertMarkHitTest GUICtrlToolbar_IsButtonChecked GUICtrlToolbar_IsButtonEnabled GUICtrlToolbar_IsButtonHidden GUICtrlToolbar_IsButtonHighlighted GUICtrlToolbar_IsButtonIndeterminate GUICtrlToolbar_IsButtonPressed GUICtrlToolbar_LoadBitmap GUICtrlToolbar_LoadImages GUICtrlToolbar_MapAccelerator GUICtrlToolbar_MoveButton GUICtrlToolbar_PressButton GUICtrlToolbar_SetAnchorHighlight GUICtrlToolbar_SetBitmapSize GUICtrlToolbar_SetButtonBitMap GUICtrlToolbar_SetButtonInfo GUICtrlToolbar_SetButtonInfoEx GUICtrlToolbar_SetButtonParam GUICtrlToolbar_SetButtonSize GUICtrlToolbar_SetButtonState GUICtrlToolbar_SetButtonStyle GUICtrlToolbar_SetButtonText GUICtrlToolbar_SetButtonWidth GUICtrlToolbar_SetCmdID GUICtrlToolbar_SetColorScheme GUICtrlToolbar_SetDisabledImageList GUICtrlToolbar_SetDrawTextFlags GUICtrlToolbar_SetExtendedStyle GUICtrlToolbar_SetHotImageList GUICtrlToolbar_SetHotItem GUICtrlToolbar_SetImageList GUICtrlToolbar_SetIndent GUICtrlToolbar_SetIndeterminate GUICtrlToolbar_SetInsertMark GUICtrlToolbar_SetInsertMarkColor GUICtrlToolbar_SetMaxTextRows GUICtrlToolbar_SetMetrics GUICtrlToolbar_SetPadding GUICtrlToolbar_SetParent GUICtrlToolbar_SetRows GUICtrlToolbar_SetStyle GUICtrlToolbar_SetStyleAltDrag GUICtrlToolbar_SetStyleCustomErase GUICtrlToolbar_SetStyleFlat GUICtrlToolbar_SetStyleList GUICtrlToolbar_SetStyleRegisterDrop GUICtrlToolbar_SetStyleToolTips GUICtrlToolbar_SetStyleTransparent GUICtrlToolbar_SetStyleWrapable GUICtrlToolbar_SetToolTips GUICtrlToolbar_SetUnicodeFormat GUICtrlToolbar_SetWindowTheme GUICtrlTreeView_Add GUICtrlTreeView_AddChild GUICtrlTreeView_AddChildFirst GUICtrlTreeView_AddFirst GUICtrlTreeView_BeginUpdate GUICtrlTreeView_ClickItem GUICtrlTreeView_Create GUICtrlTreeView_CreateDragImage GUICtrlTreeView_CreateSolidBitMap GUICtrlTreeView_Delete GUICtrlTreeView_DeleteAll GUICtrlTreeView_DeleteChildren GUICtrlTreeView_Destroy GUICtrlTreeView_DisplayRect GUICtrlTreeView_DisplayRectEx GUICtrlTreeView_EditText GUICtrlTreeView_EndEdit GUICtrlTreeView_EndUpdate GUICtrlTreeView_EnsureVisible GUICtrlTreeView_Expand GUICtrlTreeView_ExpandedOnce GUICtrlTreeView_FindItem GUICtrlTreeView_FindItemEx GUICtrlTreeView_GetBkColor GUICtrlTreeView_GetBold GUICtrlTreeView_GetChecked GUICtrlTreeView_GetChildCount GUICtrlTreeView_GetChildren GUICtrlTreeView_GetCount GUICtrlTreeView_GetCut GUICtrlTreeView_GetDropTarget GUICtrlTreeView_GetEditControl GUICtrlTreeView_GetExpanded GUICtrlTreeView_GetFirstChild GUICtrlTreeView_GetFirstItem GUICtrlTreeView_GetFirstVisible GUICtrlTreeView_GetFocused GUICtrlTreeView_GetHeight GUICtrlTreeView_GetImageIndex GUICtrlTreeView_GetImageListIconHandle GUICtrlTreeView_GetIndent GUICtrlTreeView_GetInsertMarkColor GUICtrlTreeView_GetISearchString GUICtrlTreeView_GetItemByIndex GUICtrlTreeView_GetItemHandle GUICtrlTreeView_GetItemParam GUICtrlTreeView_GetLastChild GUICtrlTreeView_GetLineColor GUICtrlTreeView_GetNext GUICtrlTreeView_GetNextChild GUICtrlTreeView_GetNextSibling GUICtrlTreeView_GetNextVisible GUICtrlTreeView_GetNormalImageList GUICtrlTreeView_GetParentHandle GUICtrlTreeView_GetParentParam GUICtrlTreeView_GetPrev GUICtrlTreeView_GetPrevChild GUICtrlTreeView_GetPrevSibling GUICtrlTreeView_GetPrevVisible GUICtrlTreeView_GetScrollTime GUICtrlTreeView_GetSelected GUICtrlTreeView_GetSelectedImageIndex GUICtrlTreeView_GetSelection GUICtrlTreeView_GetSiblingCount GUICtrlTreeView_GetState GUICtrlTreeView_GetStateImageIndex GUICtrlTreeView_GetStateImageList GUICtrlTreeView_GetText 
GUICtrlTreeView_GetTextColor GUICtrlTreeView_GetToolTips GUICtrlTreeView_GetTree GUICtrlTreeView_GetUnicodeFormat GUICtrlTreeView_GetVisible GUICtrlTreeView_GetVisibleCount GUICtrlTreeView_HitTest GUICtrlTreeView_HitTestEx GUICtrlTreeView_HitTestItem GUICtrlTreeView_Index GUICtrlTreeView_InsertItem GUICtrlTreeView_IsFirstItem GUICtrlTreeView_IsParent GUICtrlTreeView_Level GUICtrlTreeView_SelectItem GUICtrlTreeView_SelectItemByIndex GUICtrlTreeView_SetBkColor GUICtrlTreeView_SetBold GUICtrlTreeView_SetChecked GUICtrlTreeView_SetCheckedByIndex GUICtrlTreeView_SetChildren GUICtrlTreeView_SetCut GUICtrlTreeView_SetDropTarget GUICtrlTreeView_SetFocused GUICtrlTreeView_SetHeight GUICtrlTreeView_SetIcon GUICtrlTreeView_SetImageIndex GUICtrlTreeView_SetIndent GUICtrlTreeView_SetInsertMark GUICtrlTreeView_SetInsertMarkColor GUICtrlTreeView_SetItemHeight GUICtrlTreeView_SetItemParam GUICtrlTreeView_SetLineColor GUICtrlTreeView_SetNormalImageList GUICtrlTreeView_SetScrollTime GUICtrlTreeView_SetSelected GUICtrlTreeView_SetSelectedImageIndex GUICtrlTreeView_SetState GUICtrlTreeView_SetStateImageIndex GUICtrlTreeView_SetStateImageList GUICtrlTreeView_SetText GUICtrlTreeView_SetTextColor GUICtrlTreeView_SetToolTips GUICtrlTreeView_SetUnicodeFormat GUICtrlTreeView_Sort GUIImageList_Add GUIImageList_AddBitmap GUIImageList_AddIcon GUIImageList_AddMasked GUIImageList_BeginDrag GUIImageList_Copy GUIImageList_Create GUIImageList_Destroy GUIImageList_DestroyIcon GUIImageList_DragEnter GUIImageList_DragLeave GUIImageList_DragMove GUIImageList_Draw GUIImageList_DrawEx GUIImageList_Duplicate GUIImageList_EndDrag GUIImageList_GetBkColor GUIImageList_GetIcon GUIImageList_GetIconHeight GUIImageList_GetIconSize GUIImageList_GetIconSizeEx GUIImageList_GetIconWidth GUIImageList_GetImageCount GUIImageList_GetImageInfoEx GUIImageList_Remove GUIImageList_ReplaceIcon GUIImageList_SetBkColor GUIImageList_SetIconSize GUIImageList_SetImageCount GUIImageList_Swap GUIScrollBars_EnableScrollBar GUIScrollBars_GetScrollBarInfoEx GUIScrollBars_GetScrollBarRect GUIScrollBars_GetScrollBarRGState GUIScrollBars_GetScrollBarXYLineButton GUIScrollBars_GetScrollBarXYThumbBottom GUIScrollBars_GetScrollBarXYThumbTop GUIScrollBars_GetScrollInfo GUIScrollBars_GetScrollInfoEx GUIScrollBars_GetScrollInfoMax GUIScrollBars_GetScrollInfoMin GUIScrollBars_GetScrollInfoPage GUIScrollBars_GetScrollInfoPos GUIScrollBars_GetScrollInfoTrackPos GUIScrollBars_GetScrollPos GUIScrollBars_GetScrollRange GUIScrollBars_Init GUIScrollBars_ScrollWindow GUIScrollBars_SetScrollInfo GUIScrollBars_SetScrollInfoMax GUIScrollBars_SetScrollInfoMin GUIScrollBars_SetScrollInfoPage GUIScrollBars_SetScrollInfoPos GUIScrollBars_SetScrollRange GUIScrollBars_ShowScrollBar GUIToolTip_Activate GUIToolTip_AddTool GUIToolTip_AdjustRect GUIToolTip_BitsToTTF GUIToolTip_Create GUIToolTip_Deactivate GUIToolTip_DelTool GUIToolTip_Destroy GUIToolTip_EnumTools GUIToolTip_GetBubbleHeight GUIToolTip_GetBubbleSize GUIToolTip_GetBubbleWidth GUIToolTip_GetCurrentTool GUIToolTip_GetDelayTime GUIToolTip_GetMargin GUIToolTip_GetMarginEx GUIToolTip_GetMaxTipWidth GUIToolTip_GetText GUIToolTip_GetTipBkColor GUIToolTip_GetTipTextColor GUIToolTip_GetTitleBitMap GUIToolTip_GetTitleText GUIToolTip_GetToolCount GUIToolTip_GetToolInfo GUIToolTip_HitTest GUIToolTip_NewToolRect GUIToolTip_Pop GUIToolTip_PopUp GUIToolTip_SetDelayTime GUIToolTip_SetMargin GUIToolTip_SetMaxTipWidth GUIToolTip_SetTipBkColor GUIToolTip_SetTipTextColor GUIToolTip_SetTitle GUIToolTip_SetToolInfo GUIToolTip_SetWindowTheme 
GUIToolTip_ToolExists GUIToolTip_ToolToArray GUIToolTip_TrackActivate GUIToolTip_TrackPosition GUIToolTip_Update GUIToolTip_UpdateTipText HexToString IEAction IEAttach IEBodyReadHTML IEBodyReadText IEBodyWriteHTML IECreate IECreateEmbedded IEDocGetObj IEDocInsertHTML IEDocInsertText IEDocReadHTML IEDocWriteHTML IEErrorNotify IEFormElementCheckBoxSelect IEFormElementGetCollection IEFormElementGetObjByName IEFormElementGetValue IEFormElementOptionSelect IEFormElementRadioSelect IEFormElementSetValue IEFormGetCollection IEFormGetObjByName IEFormImageClick IEFormReset IEFormSubmit IEFrameGetCollection IEFrameGetObjByName IEGetObjById IEGetObjByName IEHeadInsertEventScript IEImgClick IEImgGetCollection IEIsFrameSet IELinkClickByIndex IELinkClickByText IELinkGetCollection IELoadWait IELoadWaitTimeout IENavigate IEPropertyGet IEPropertySet IEQuit IETableGetCollection IETableWriteToArray IETagNameAllGetCollection IETagNameGetCollection IE_Example IE_Introduction IE_VersionInfo INetExplorerCapable INetGetSource INetMail INetSmtpMail IsPressed MathCheckDiv Max MemGlobalAlloc MemGlobalFree MemGlobalLock MemGlobalSize MemGlobalUnlock MemMoveMemory MemVirtualAlloc MemVirtualAllocEx MemVirtualFree MemVirtualFreeEx Min MouseTrap NamedPipes_CallNamedPipe NamedPipes_ConnectNamedPipe NamedPipes_CreateNamedPipe NamedPipes_CreatePipe NamedPipes_DisconnectNamedPipe NamedPipes_GetNamedPipeHandleState NamedPipes_GetNamedPipeInfo NamedPipes_PeekNamedPipe NamedPipes_SetNamedPipeHandleState NamedPipes_TransactNamedPipe NamedPipes_WaitNamedPipe Net_Share_ConnectionEnum Net_Share_FileClose Net_Share_FileEnum Net_Share_FileGetInfo Net_Share_PermStr Net_Share_ResourceStr Net_Share_SessionDel Net_Share_SessionEnum Net_Share_SessionGetInfo Net_Share_ShareAdd Net_Share_ShareCheck Net_Share_ShareDel Net_Share_ShareEnum Net_Share_ShareGetInfo Net_Share_ShareSetInfo Net_Share_StatisticsGetSvr Net_Share_StatisticsGetWrk Now NowCalc NowCalcDate NowDate NowTime PathFull PathGetRelative PathMake PathSplit ProcessGetName ProcessGetPriority Radian ReplaceStringInFile RunDos ScreenCapture_Capture ScreenCapture_CaptureWnd ScreenCapture_SaveImage ScreenCapture_SetBMPFormat ScreenCapture_SetJPGQuality ScreenCapture_SetTIFColorDepth ScreenCapture_SetTIFCompression Security__AdjustTokenPrivileges Security__CreateProcessWithToken Security__DuplicateTokenEx Security__GetAccountSid Security__GetLengthSid Security__GetTokenInformation Security__ImpersonateSelf Security__IsValidSid Security__LookupAccountName Security__LookupAccountSid Security__LookupPrivilegeValue Security__OpenProcessToken Security__OpenThreadToken Security__OpenThreadTokenEx Security__SetPrivilege Security__SetTokenInformation Security__SidToStringSid Security__SidTypeStr Security__StringSidToSid SendMessage SendMessageA SetDate SetTime Singleton SoundClose SoundLength SoundOpen SoundPause SoundPlay SoundPos SoundResume SoundSeek SoundStatus SoundStop SQLite_Changes SQLite_Close SQLite_Display2DResult SQLite_Encode SQLite_ErrCode SQLite_ErrMsg SQLite_Escape SQLite_Exec SQLite_FastEncode SQLite_FastEscape SQLite_FetchData SQLite_FetchNames SQLite_GetTable SQLite_GetTable2d SQLite_LastInsertRowID SQLite_LibVersion SQLite_Open SQLite_Query SQLite_QueryFinalize SQLite_QueryReset SQLite_QuerySingleRow SQLite_SafeMode SQLite_SetTimeout SQLite_Shutdown SQLite_SQLiteExe SQLite_Startup SQLite_TotalChanges StringBetween StringExplode StringInsert StringProper StringRepeat StringTitleCase StringToHex TCPIpToName TempFile TicksToTime Timer_Diff Timer_GetIdleTime Timer_GetTimerID 
Timer_Init Timer_KillAllTimers Timer_KillTimer Timer_SetTimer TimeToTicks VersionCompare viClose viExecCommand viFindGpib viGpibBusReset viGTL viInteractiveControl viOpen viSetAttribute viSetTimeout WeekNumberISO WinAPI_AbortPath WinAPI_ActivateKeyboardLayout WinAPI_AddClipboardFormatListener WinAPI_AddFontMemResourceEx WinAPI_AddFontResourceEx WinAPI_AddIconOverlay WinAPI_AddIconTransparency WinAPI_AddMRUString WinAPI_AdjustBitmap WinAPI_AdjustTokenPrivileges WinAPI_AdjustWindowRectEx WinAPI_AlphaBlend WinAPI_AngleArc WinAPI_AnimateWindow WinAPI_Arc WinAPI_ArcTo WinAPI_ArrayToStruct WinAPI_AssignProcessToJobObject WinAPI_AssocGetPerceivedType WinAPI_AssocQueryString WinAPI_AttachConsole WinAPI_AttachThreadInput WinAPI_BackupRead WinAPI_BackupReadAbort WinAPI_BackupSeek WinAPI_BackupWrite WinAPI_BackupWriteAbort WinAPI_Beep WinAPI_BeginBufferedPaint WinAPI_BeginDeferWindowPos WinAPI_BeginPaint WinAPI_BeginPath WinAPI_BeginUpdateResource WinAPI_BitBlt WinAPI_BringWindowToTop WinAPI_BroadcastSystemMessage WinAPI_BrowseForFolderDlg WinAPI_BufferedPaintClear WinAPI_BufferedPaintInit WinAPI_BufferedPaintSetAlpha WinAPI_BufferedPaintUnInit WinAPI_CallNextHookEx WinAPI_CallWindowProc WinAPI_CallWindowProcW WinAPI_CascadeWindows WinAPI_ChangeWindowMessageFilterEx WinAPI_CharToOem WinAPI_ChildWindowFromPointEx WinAPI_ClientToScreen WinAPI_ClipCursor WinAPI_CloseDesktop WinAPI_CloseEnhMetaFile WinAPI_CloseFigure WinAPI_CloseHandle WinAPI_CloseThemeData WinAPI_CloseWindow WinAPI_CloseWindowStation WinAPI_CLSIDFromProgID WinAPI_CoInitialize WinAPI_ColorAdjustLuma WinAPI_ColorHLSToRGB WinAPI_ColorRGBToHLS WinAPI_CombineRgn WinAPI_CombineTransform WinAPI_CommandLineToArgv WinAPI_CommDlgExtendedError WinAPI_CommDlgExtendedErrorEx WinAPI_CompareString WinAPI_CompressBitmapBits WinAPI_CompressBuffer WinAPI_ComputeCrc32 WinAPI_ConfirmCredentials WinAPI_CopyBitmap WinAPI_CopyCursor WinAPI_CopyEnhMetaFile WinAPI_CopyFileEx WinAPI_CopyIcon WinAPI_CopyImage WinAPI_CopyRect WinAPI_CopyStruct WinAPI_CoTaskMemAlloc WinAPI_CoTaskMemFree WinAPI_CoTaskMemRealloc WinAPI_CoUninitialize WinAPI_Create32BitHBITMAP WinAPI_Create32BitHICON WinAPI_CreateANDBitmap WinAPI_CreateBitmap WinAPI_CreateBitmapIndirect WinAPI_CreateBrushIndirect WinAPI_CreateBuffer WinAPI_CreateBufferFromStruct WinAPI_CreateCaret WinAPI_CreateColorAdjustment WinAPI_CreateCompatibleBitmap WinAPI_CreateCompatibleBitmapEx WinAPI_CreateCompatibleDC WinAPI_CreateDesktop WinAPI_CreateDIB WinAPI_CreateDIBColorTable WinAPI_CreateDIBitmap WinAPI_CreateDIBSection WinAPI_CreateDirectory WinAPI_CreateDirectoryEx WinAPI_CreateEllipticRgn WinAPI_CreateEmptyIcon WinAPI_CreateEnhMetaFile WinAPI_CreateEvent WinAPI_CreateFile WinAPI_CreateFileEx WinAPI_CreateFileMapping WinAPI_CreateFont WinAPI_CreateFontEx WinAPI_CreateFontIndirect WinAPI_CreateGUID WinAPI_CreateHardLink WinAPI_CreateIcon WinAPI_CreateIconFromResourceEx WinAPI_CreateIconIndirect WinAPI_CreateJobObject WinAPI_CreateMargins WinAPI_CreateMRUList WinAPI_CreateMutex WinAPI_CreateNullRgn WinAPI_CreateNumberFormatInfo WinAPI_CreateObjectID WinAPI_CreatePen WinAPI_CreatePoint WinAPI_CreatePolygonRgn WinAPI_CreateProcess WinAPI_CreateProcessWithToken WinAPI_CreateRect WinAPI_CreateRectEx WinAPI_CreateRectRgn WinAPI_CreateRectRgnIndirect WinAPI_CreateRoundRectRgn WinAPI_CreateSemaphore WinAPI_CreateSize WinAPI_CreateSolidBitmap WinAPI_CreateSolidBrush WinAPI_CreateStreamOnHGlobal WinAPI_CreateString WinAPI_CreateSymbolicLink WinAPI_CreateTransform WinAPI_CreateWindowEx WinAPI_CreateWindowStation 
WinAPI_DecompressBuffer WinAPI_DecryptFile WinAPI_DeferWindowPos WinAPI_DefineDosDevice WinAPI_DefRawInputProc WinAPI_DefSubclassProc WinAPI_DefWindowProc WinAPI_DefWindowProcW WinAPI_DeleteDC WinAPI_DeleteEnhMetaFile WinAPI_DeleteFile WinAPI_DeleteObject WinAPI_DeleteObjectID WinAPI_DeleteVolumeMountPoint WinAPI_DeregisterShellHookWindow WinAPI_DestroyCaret WinAPI_DestroyCursor WinAPI_DestroyIcon WinAPI_DestroyWindow WinAPI_DeviceIoControl WinAPI_DisplayStruct WinAPI_DllGetVersion WinAPI_DllInstall WinAPI_DllUninstall WinAPI_DPtoLP WinAPI_DragAcceptFiles WinAPI_DragFinish WinAPI_DragQueryFileEx WinAPI_DragQueryPoint WinAPI_DrawAnimatedRects WinAPI_DrawBitmap WinAPI_DrawEdge WinAPI_DrawFocusRect WinAPI_DrawFrameControl WinAPI_DrawIcon WinAPI_DrawIconEx WinAPI_DrawLine WinAPI_DrawShadowText WinAPI_DrawText WinAPI_DrawThemeBackground WinAPI_DrawThemeEdge WinAPI_DrawThemeIcon WinAPI_DrawThemeParentBackground WinAPI_DrawThemeText WinAPI_DrawThemeTextEx WinAPI_DuplicateEncryptionInfoFile WinAPI_DuplicateHandle WinAPI_DuplicateTokenEx WinAPI_DwmDefWindowProc WinAPI_DwmEnableBlurBehindWindow WinAPI_DwmEnableComposition WinAPI_DwmExtendFrameIntoClientArea WinAPI_DwmGetColorizationColor WinAPI_DwmGetColorizationParameters WinAPI_DwmGetWindowAttribute WinAPI_DwmInvalidateIconicBitmaps WinAPI_DwmIsCompositionEnabled WinAPI_DwmQueryThumbnailSourceSize WinAPI_DwmRegisterThumbnail WinAPI_DwmSetColorizationParameters WinAPI_DwmSetIconicLivePreviewBitmap WinAPI_DwmSetIconicThumbnail WinAPI_DwmSetWindowAttribute WinAPI_DwmUnregisterThumbnail WinAPI_DwmUpdateThumbnailProperties WinAPI_DWordToFloat WinAPI_DWordToInt WinAPI_EjectMedia WinAPI_Ellipse WinAPI_EmptyWorkingSet WinAPI_EnableWindow WinAPI_EncryptFile WinAPI_EncryptionDisable WinAPI_EndBufferedPaint WinAPI_EndDeferWindowPos WinAPI_EndPaint WinAPI_EndPath WinAPI_EndUpdateResource WinAPI_EnumChildProcess WinAPI_EnumChildWindows WinAPI_EnumDesktops WinAPI_EnumDesktopWindows WinAPI_EnumDeviceDrivers WinAPI_EnumDisplayDevices WinAPI_EnumDisplayMonitors WinAPI_EnumDisplaySettings WinAPI_EnumDllProc WinAPI_EnumFiles WinAPI_EnumFileStreams WinAPI_EnumFontFamilies WinAPI_EnumHardLinks WinAPI_EnumMRUList WinAPI_EnumPageFiles WinAPI_EnumProcessHandles WinAPI_EnumProcessModules WinAPI_EnumProcessThreads WinAPI_EnumProcessWindows WinAPI_EnumRawInputDevices WinAPI_EnumResourceLanguages WinAPI_EnumResourceNames WinAPI_EnumResourceTypes WinAPI_EnumSystemGeoID WinAPI_EnumSystemLocales WinAPI_EnumUILanguages WinAPI_EnumWindows WinAPI_EnumWindowsPopup WinAPI_EnumWindowStations WinAPI_EnumWindowsTop WinAPI_EqualMemory WinAPI_EqualRect WinAPI_EqualRgn WinAPI_ExcludeClipRect WinAPI_ExpandEnvironmentStrings WinAPI_ExtCreatePen WinAPI_ExtCreateRegion WinAPI_ExtFloodFill WinAPI_ExtractIcon WinAPI_ExtractIconEx WinAPI_ExtSelectClipRgn WinAPI_FatalAppExit WinAPI_FatalExit WinAPI_FileEncryptionStatus WinAPI_FileExists WinAPI_FileIconInit WinAPI_FileInUse WinAPI_FillMemory WinAPI_FillPath WinAPI_FillRect WinAPI_FillRgn WinAPI_FindClose WinAPI_FindCloseChangeNotification WinAPI_FindExecutable WinAPI_FindFirstChangeNotification WinAPI_FindFirstFile WinAPI_FindFirstFileName WinAPI_FindFirstStream WinAPI_FindNextChangeNotification WinAPI_FindNextFile WinAPI_FindNextFileName WinAPI_FindNextStream WinAPI_FindResource WinAPI_FindResourceEx WinAPI_FindTextDlg WinAPI_FindWindow WinAPI_FlashWindow WinAPI_FlashWindowEx WinAPI_FlattenPath WinAPI_FloatToDWord WinAPI_FloatToInt WinAPI_FlushFileBuffers WinAPI_FlushFRBuffer WinAPI_FlushViewOfFile WinAPI_FormatDriveDlg WinAPI_FormatMessage 
WinAPI_FrameRect WinAPI_FrameRgn WinAPI_FreeLibrary WinAPI_FreeMemory WinAPI_FreeMRUList WinAPI_FreeResource WinAPI_GdiComment WinAPI_GetActiveWindow WinAPI_GetAllUsersProfileDirectory WinAPI_GetAncestor WinAPI_GetApplicationRestartSettings WinAPI_GetArcDirection WinAPI_GetAsyncKeyState WinAPI_GetBinaryType WinAPI_GetBitmapBits WinAPI_GetBitmapDimension WinAPI_GetBitmapDimensionEx WinAPI_GetBkColor WinAPI_GetBkMode WinAPI_GetBoundsRect WinAPI_GetBrushOrg WinAPI_GetBufferedPaintBits WinAPI_GetBufferedPaintDC WinAPI_GetBufferedPaintTargetDC WinAPI_GetBufferedPaintTargetRect WinAPI_GetBValue WinAPI_GetCaretBlinkTime WinAPI_GetCaretPos WinAPI_GetCDType WinAPI_GetClassInfoEx WinAPI_GetClassLongEx WinAPI_GetClassName WinAPI_GetClientHeight WinAPI_GetClientRect WinAPI_GetClientWidth WinAPI_GetClipboardSequenceNumber WinAPI_GetClipBox WinAPI_GetClipCursor WinAPI_GetClipRgn WinAPI_GetColorAdjustment WinAPI_GetCompressedFileSize WinAPI_GetCompression WinAPI_GetConnectedDlg WinAPI_GetCurrentDirectory WinAPI_GetCurrentHwProfile WinAPI_GetCurrentObject WinAPI_GetCurrentPosition WinAPI_GetCurrentProcess WinAPI_GetCurrentProcessExplicitAppUserModelID WinAPI_GetCurrentProcessID WinAPI_GetCurrentThemeName WinAPI_GetCurrentThread WinAPI_GetCurrentThreadId WinAPI_GetCursor WinAPI_GetCursorInfo WinAPI_GetDateFormat WinAPI_GetDC WinAPI_GetDCEx WinAPI_GetDefaultPrinter WinAPI_GetDefaultUserProfileDirectory WinAPI_GetDesktopWindow WinAPI_GetDeviceCaps WinAPI_GetDeviceDriverBaseName WinAPI_GetDeviceDriverFileName WinAPI_GetDeviceGammaRamp WinAPI_GetDIBColorTable WinAPI_GetDIBits WinAPI_GetDiskFreeSpaceEx WinAPI_GetDlgCtrlID WinAPI_GetDlgItem WinAPI_GetDllDirectory WinAPI_GetDriveBusType WinAPI_GetDriveGeometryEx WinAPI_GetDriveNumber WinAPI_GetDriveType WinAPI_GetDurationFormat WinAPI_GetEffectiveClientRect WinAPI_GetEnhMetaFile WinAPI_GetEnhMetaFileBits WinAPI_GetEnhMetaFileDescription WinAPI_GetEnhMetaFileDimension WinAPI_GetEnhMetaFileHeader WinAPI_GetErrorMessage WinAPI_GetErrorMode WinAPI_GetExitCodeProcess WinAPI_GetExtended WinAPI_GetFileAttributes WinAPI_GetFileID WinAPI_GetFileInformationByHandle WinAPI_GetFileInformationByHandleEx WinAPI_GetFilePointerEx WinAPI_GetFileSizeEx WinAPI_GetFileSizeOnDisk WinAPI_GetFileTitle WinAPI_GetFileType WinAPI_GetFileVersionInfo WinAPI_GetFinalPathNameByHandle WinAPI_GetFinalPathNameByHandleEx WinAPI_GetFocus WinAPI_GetFontMemoryResourceInfo WinAPI_GetFontName WinAPI_GetFontResourceInfo WinAPI_GetForegroundWindow WinAPI_GetFRBuffer WinAPI_GetFullPathName WinAPI_GetGeoInfo WinAPI_GetGlyphOutline WinAPI_GetGraphicsMode WinAPI_GetGuiResources WinAPI_GetGUIThreadInfo WinAPI_GetGValue WinAPI_GetHandleInformation WinAPI_GetHGlobalFromStream WinAPI_GetIconDimension WinAPI_GetIconInfo WinAPI_GetIconInfoEx WinAPI_GetIdleTime WinAPI_GetKeyboardLayout WinAPI_GetKeyboardLayoutList WinAPI_GetKeyboardState WinAPI_GetKeyboardType WinAPI_GetKeyNameText WinAPI_GetKeyState WinAPI_GetLastActivePopup WinAPI_GetLastError WinAPI_GetLastErrorMessage WinAPI_GetLayeredWindowAttributes WinAPI_GetLocaleInfo WinAPI_GetLogicalDrives WinAPI_GetMapMode WinAPI_GetMemorySize WinAPI_GetMessageExtraInfo WinAPI_GetModuleFileNameEx WinAPI_GetModuleHandle WinAPI_GetModuleHandleEx WinAPI_GetModuleInformation WinAPI_GetMonitorInfo WinAPI_GetMousePos WinAPI_GetMousePosX WinAPI_GetMousePosY WinAPI_GetMUILanguage WinAPI_GetNumberFormat WinAPI_GetObject WinAPI_GetObjectID WinAPI_GetObjectInfoByHandle WinAPI_GetObjectNameByHandle WinAPI_GetObjectType WinAPI_GetOpenFileName WinAPI_GetOutlineTextMetrics 
WinAPI_GetOverlappedResult WinAPI_GetParent WinAPI_GetParentProcess WinAPI_GetPerformanceInfo WinAPI_GetPEType WinAPI_GetPhysicallyInstalledSystemMemory WinAPI_GetPixel WinAPI_GetPolyFillMode WinAPI_GetPosFromRect WinAPI_GetPriorityClass WinAPI_GetProcAddress WinAPI_GetProcessAffinityMask WinAPI_GetProcessCommandLine WinAPI_GetProcessFileName WinAPI_GetProcessHandleCount WinAPI_GetProcessID WinAPI_GetProcessIoCounters WinAPI_GetProcessMemoryInfo WinAPI_GetProcessName WinAPI_GetProcessShutdownParameters WinAPI_GetProcessTimes WinAPI_GetProcessUser WinAPI_GetProcessWindowStation WinAPI_GetProcessWorkingDirectory WinAPI_GetProfilesDirectory WinAPI_GetPwrCapabilities WinAPI_GetRawInputBuffer WinAPI_GetRawInputBufferLength WinAPI_GetRawInputData WinAPI_GetRawInputDeviceInfo WinAPI_GetRegionData WinAPI_GetRegisteredRawInputDevices WinAPI_GetRegKeyNameByHandle WinAPI_GetRgnBox WinAPI_GetROP2 WinAPI_GetRValue WinAPI_GetSaveFileName WinAPI_GetShellWindow WinAPI_GetStartupInfo WinAPI_GetStdHandle WinAPI_GetStockObject WinAPI_GetStretchBltMode WinAPI_GetString WinAPI_GetSysColor WinAPI_GetSysColorBrush WinAPI_GetSystemDefaultLangID WinAPI_GetSystemDefaultLCID WinAPI_GetSystemDefaultUILanguage WinAPI_GetSystemDEPPolicy WinAPI_GetSystemInfo WinAPI_GetSystemMetrics WinAPI_GetSystemPowerStatus WinAPI_GetSystemTimes WinAPI_GetSystemWow64Directory WinAPI_GetTabbedTextExtent WinAPI_GetTempFileName WinAPI_GetTextAlign WinAPI_GetTextCharacterExtra WinAPI_GetTextColor WinAPI_GetTextExtentPoint32 WinAPI_GetTextFace WinAPI_GetTextMetrics WinAPI_GetThemeAppProperties WinAPI_GetThemeBackgroundContentRect WinAPI_GetThemeBackgroundExtent WinAPI_GetThemeBackgroundRegion WinAPI_GetThemeBitmap WinAPI_GetThemeBool WinAPI_GetThemeColor WinAPI_GetThemeDocumentationProperty WinAPI_GetThemeEnumValue WinAPI_GetThemeFilename WinAPI_GetThemeFont WinAPI_GetThemeInt WinAPI_GetThemeMargins WinAPI_GetThemeMetric WinAPI_GetThemePartSize WinAPI_GetThemePosition WinAPI_GetThemePropertyOrigin WinAPI_GetThemeRect WinAPI_GetThemeString WinAPI_GetThemeSysBool WinAPI_GetThemeSysColor WinAPI_GetThemeSysColorBrush WinAPI_GetThemeSysFont WinAPI_GetThemeSysInt WinAPI_GetThemeSysSize WinAPI_GetThemeSysString WinAPI_GetThemeTextExtent WinAPI_GetThemeTextMetrics WinAPI_GetThemeTransitionDuration WinAPI_GetThreadDesktop WinAPI_GetThreadErrorMode WinAPI_GetThreadLocale WinAPI_GetThreadUILanguage WinAPI_GetTickCount WinAPI_GetTickCount64 WinAPI_GetTimeFormat WinAPI_GetTopWindow WinAPI_GetUDFColorMode WinAPI_GetUpdateRect WinAPI_GetUpdateRgn WinAPI_GetUserDefaultLangID WinAPI_GetUserDefaultLCID WinAPI_GetUserDefaultUILanguage WinAPI_GetUserGeoID WinAPI_GetUserObjectInformation WinAPI_GetVersion WinAPI_GetVersionEx WinAPI_GetVolumeInformation WinAPI_GetVolumeInformationByHandle WinAPI_GetVolumeNameForVolumeMountPoint WinAPI_GetWindow WinAPI_GetWindowDC WinAPI_GetWindowDisplayAffinity WinAPI_GetWindowExt WinAPI_GetWindowFileName WinAPI_GetWindowHeight WinAPI_GetWindowInfo WinAPI_GetWindowLong WinAPI_GetWindowOrg WinAPI_GetWindowPlacement WinAPI_GetWindowRect WinAPI_GetWindowRgn WinAPI_GetWindowRgnBox WinAPI_GetWindowSubclass WinAPI_GetWindowText WinAPI_GetWindowTheme WinAPI_GetWindowThreadProcessId WinAPI_GetWindowWidth WinAPI_GetWorkArea WinAPI_GetWorldTransform WinAPI_GetXYFromPoint WinAPI_GlobalMemoryStatus WinAPI_GradientFill WinAPI_GUIDFromString WinAPI_GUIDFromStringEx WinAPI_HashData WinAPI_HashString WinAPI_HiByte WinAPI_HideCaret WinAPI_HiDWord WinAPI_HiWord WinAPI_InflateRect WinAPI_InitMUILanguage WinAPI_InProcess WinAPI_IntersectClipRect 
WinAPI_IntersectRect WinAPI_IntToDWord WinAPI_IntToFloat WinAPI_InvalidateRect WinAPI_InvalidateRgn WinAPI_InvertANDBitmap WinAPI_InvertColor WinAPI_InvertRect WinAPI_InvertRgn WinAPI_IOCTL WinAPI_IsAlphaBitmap WinAPI_IsBadCodePtr WinAPI_IsBadReadPtr WinAPI_IsBadStringPtr WinAPI_IsBadWritePtr WinAPI_IsChild WinAPI_IsClassName WinAPI_IsDoorOpen WinAPI_IsElevated WinAPI_IsHungAppWindow WinAPI_IsIconic WinAPI_IsInternetConnected WinAPI_IsLoadKBLayout WinAPI_IsMemory WinAPI_IsNameInExpression WinAPI_IsNetworkAlive WinAPI_IsPathShared WinAPI_IsProcessInJob WinAPI_IsProcessorFeaturePresent WinAPI_IsRectEmpty WinAPI_IsThemeActive WinAPI_IsThemeBackgroundPartiallyTransparent WinAPI_IsThemePartDefined WinAPI_IsValidLocale WinAPI_IsWindow WinAPI_IsWindowEnabled WinAPI_IsWindowUnicode WinAPI_IsWindowVisible WinAPI_IsWow64Process WinAPI_IsWritable WinAPI_IsZoomed WinAPI_Keybd_Event WinAPI_KillTimer WinAPI_LineDDA WinAPI_LineTo WinAPI_LoadBitmap WinAPI_LoadCursor WinAPI_LoadCursorFromFile WinAPI_LoadIcon WinAPI_LoadIconMetric WinAPI_LoadIconWithScaleDown WinAPI_LoadImage WinAPI_LoadIndirectString WinAPI_LoadKeyboardLayout WinAPI_LoadLibrary WinAPI_LoadLibraryEx WinAPI_LoadMedia WinAPI_LoadResource WinAPI_LoadShell32Icon WinAPI_LoadString WinAPI_LoadStringEx WinAPI_LoByte WinAPI_LocalFree WinAPI_LockDevice WinAPI_LockFile WinAPI_LockResource WinAPI_LockWindowUpdate WinAPI_LockWorkStation WinAPI_LoDWord WinAPI_LongMid WinAPI_LookupIconIdFromDirectoryEx WinAPI_LoWord WinAPI_LPtoDP WinAPI_MAKELANGID WinAPI_MAKELCID WinAPI_MakeLong WinAPI_MakeQWord WinAPI_MakeWord WinAPI_MapViewOfFile WinAPI_MapVirtualKey WinAPI_MaskBlt WinAPI_MessageBeep WinAPI_MessageBoxCheck WinAPI_MessageBoxIndirect WinAPI_MirrorIcon WinAPI_ModifyWorldTransform WinAPI_MonitorFromPoint WinAPI_MonitorFromRect WinAPI_MonitorFromWindow WinAPI_Mouse_Event WinAPI_MoveFileEx WinAPI_MoveMemory WinAPI_MoveTo WinAPI_MoveToEx WinAPI_MoveWindow WinAPI_MsgBox WinAPI_MulDiv WinAPI_MultiByteToWideChar WinAPI_MultiByteToWideCharEx WinAPI_NtStatusToDosError WinAPI_OemToChar WinAPI_OffsetClipRgn WinAPI_OffsetPoints WinAPI_OffsetRect WinAPI_OffsetRgn WinAPI_OffsetWindowOrg WinAPI_OpenDesktop WinAPI_OpenFileById WinAPI_OpenFileDlg WinAPI_OpenFileMapping WinAPI_OpenIcon WinAPI_OpenInputDesktop WinAPI_OpenJobObject WinAPI_OpenMutex WinAPI_OpenProcess WinAPI_OpenProcessToken WinAPI_OpenSemaphore WinAPI_OpenThemeData WinAPI_OpenWindowStation WinAPI_PageSetupDlg WinAPI_PaintDesktop WinAPI_PaintRgn WinAPI_ParseURL WinAPI_ParseUserName WinAPI_PatBlt WinAPI_PathAddBackslash WinAPI_PathAddExtension WinAPI_PathAppend WinAPI_PathBuildRoot WinAPI_PathCanonicalize WinAPI_PathCommonPrefix WinAPI_PathCompactPath WinAPI_PathCompactPathEx WinAPI_PathCreateFromUrl WinAPI_PathFindExtension WinAPI_PathFindFileName WinAPI_PathFindNextComponent WinAPI_PathFindOnPath WinAPI_PathGetArgs WinAPI_PathGetCharType WinAPI_PathGetDriveNumber WinAPI_PathIsContentType WinAPI_PathIsDirectory WinAPI_PathIsDirectoryEmpty WinAPI_PathIsExe WinAPI_PathIsFileSpec WinAPI_PathIsLFNFileSpec WinAPI_PathIsRelative WinAPI_PathIsRoot WinAPI_PathIsSameRoot WinAPI_PathIsSystemFolder WinAPI_PathIsUNC WinAPI_PathIsUNCServer WinAPI_PathIsUNCServerShare WinAPI_PathMakeSystemFolder WinAPI_PathMatchSpec WinAPI_PathParseIconLocation WinAPI_PathRelativePathTo WinAPI_PathRemoveArgs WinAPI_PathRemoveBackslash WinAPI_PathRemoveExtension WinAPI_PathRemoveFileSpec WinAPI_PathRenameExtension WinAPI_PathSearchAndQualify WinAPI_PathSkipRoot WinAPI_PathStripPath WinAPI_PathStripToRoot WinAPI_PathToRegion 
WinAPI_PathUndecorate WinAPI_PathUnExpandEnvStrings WinAPI_PathUnmakeSystemFolder WinAPI_PathUnquoteSpaces WinAPI_PathYetAnotherMakeUniqueName WinAPI_PickIconDlg WinAPI_PlayEnhMetaFile WinAPI_PlaySound WinAPI_PlgBlt WinAPI_PointFromRect WinAPI_PolyBezier WinAPI_PolyBezierTo WinAPI_PolyDraw WinAPI_Polygon WinAPI_PostMessage WinAPI_PrimaryLangId WinAPI_PrintDlg WinAPI_PrintDlgEx WinAPI_PrintWindow WinAPI_ProgIDFromCLSID WinAPI_PtInRect WinAPI_PtInRectEx WinAPI_PtInRegion WinAPI_PtVisible WinAPI_QueryDosDevice WinAPI_QueryInformationJobObject WinAPI_QueryPerformanceCounter WinAPI_QueryPerformanceFrequency WinAPI_RadialGradientFill WinAPI_ReadDirectoryChanges WinAPI_ReadFile WinAPI_ReadProcessMemory WinAPI_Rectangle WinAPI_RectInRegion WinAPI_RectIsEmpty WinAPI_RectVisible WinAPI_RedrawWindow WinAPI_RegCloseKey WinAPI_RegConnectRegistry WinAPI_RegCopyTree WinAPI_RegCopyTreeEx WinAPI_RegCreateKey WinAPI_RegDeleteEmptyKey WinAPI_RegDeleteKey WinAPI_RegDeleteKeyValue WinAPI_RegDeleteTree WinAPI_RegDeleteTreeEx WinAPI_RegDeleteValue WinAPI_RegDisableReflectionKey WinAPI_RegDuplicateHKey WinAPI_RegEnableReflectionKey WinAPI_RegEnumKey WinAPI_RegEnumValue WinAPI_RegFlushKey WinAPI_RegisterApplicationRestart WinAPI_RegisterClass WinAPI_RegisterClassEx WinAPI_RegisterHotKey WinAPI_RegisterPowerSettingNotification WinAPI_RegisterRawInputDevices WinAPI_RegisterShellHookWindow WinAPI_RegisterWindowMessage WinAPI_RegLoadMUIString WinAPI_RegNotifyChangeKeyValue WinAPI_RegOpenKey WinAPI_RegQueryInfoKey WinAPI_RegQueryLastWriteTime WinAPI_RegQueryMultipleValues WinAPI_RegQueryReflectionKey WinAPI_RegQueryValue WinAPI_RegRestoreKey WinAPI_RegSaveKey WinAPI_RegSetValue WinAPI_ReleaseCapture WinAPI_ReleaseDC WinAPI_ReleaseMutex WinAPI_ReleaseSemaphore WinAPI_ReleaseStream WinAPI_RemoveClipboardFormatListener WinAPI_RemoveDirectory WinAPI_RemoveFontMemResourceEx WinAPI_RemoveFontResourceEx WinAPI_RemoveWindowSubclass WinAPI_ReOpenFile WinAPI_ReplaceFile WinAPI_ReplaceTextDlg WinAPI_ResetEvent WinAPI_RestartDlg WinAPI_RestoreDC WinAPI_RGB WinAPI_RotatePoints WinAPI_RoundRect WinAPI_SaveDC WinAPI_SaveFileDlg WinAPI_SaveHBITMAPToFile WinAPI_SaveHICONToFile WinAPI_ScaleWindowExt WinAPI_ScreenToClient WinAPI_SearchPath WinAPI_SelectClipPath WinAPI_SelectClipRgn WinAPI_SelectObject WinAPI_SendMessageTimeout WinAPI_SetActiveWindow WinAPI_SetArcDirection WinAPI_SetBitmapBits WinAPI_SetBitmapDimensionEx WinAPI_SetBkColor WinAPI_SetBkMode WinAPI_SetBoundsRect WinAPI_SetBrushOrg WinAPI_SetCapture WinAPI_SetCaretBlinkTime WinAPI_SetCaretPos WinAPI_SetClassLongEx WinAPI_SetColorAdjustment WinAPI_SetCompression WinAPI_SetCurrentDirectory WinAPI_SetCurrentProcessExplicitAppUserModelID WinAPI_SetCursor WinAPI_SetDCBrushColor WinAPI_SetDCPenColor WinAPI_SetDefaultPrinter WinAPI_SetDeviceGammaRamp WinAPI_SetDIBColorTable WinAPI_SetDIBits WinAPI_SetDIBitsToDevice WinAPI_SetDllDirectory WinAPI_SetEndOfFile WinAPI_SetEnhMetaFileBits WinAPI_SetErrorMode WinAPI_SetEvent WinAPI_SetFileAttributes WinAPI_SetFileInformationByHandleEx WinAPI_SetFilePointer WinAPI_SetFilePointerEx WinAPI_SetFileShortName WinAPI_SetFileValidData WinAPI_SetFocus WinAPI_SetFont WinAPI_SetForegroundWindow WinAPI_SetFRBuffer WinAPI_SetGraphicsMode WinAPI_SetHandleInformation WinAPI_SetInformationJobObject WinAPI_SetKeyboardLayout WinAPI_SetKeyboardState WinAPI_SetLastError WinAPI_SetLayeredWindowAttributes WinAPI_SetLocaleInfo WinAPI_SetMapMode WinAPI_SetMessageExtraInfo WinAPI_SetParent WinAPI_SetPixel WinAPI_SetPolyFillMode WinAPI_SetPriorityClass 
WinAPI_SetProcessAffinityMask WinAPI_SetProcessShutdownParameters WinAPI_SetProcessWindowStation WinAPI_SetRectRgn WinAPI_SetROP2 WinAPI_SetSearchPathMode WinAPI_SetStretchBltMode WinAPI_SetSysColors WinAPI_SetSystemCursor WinAPI_SetTextAlign WinAPI_SetTextCharacterExtra WinAPI_SetTextColor WinAPI_SetTextJustification WinAPI_SetThemeAppProperties WinAPI_SetThreadDesktop WinAPI_SetThreadErrorMode WinAPI_SetThreadExecutionState WinAPI_SetThreadLocale WinAPI_SetThreadUILanguage WinAPI_SetTimer WinAPI_SetUDFColorMode WinAPI_SetUserGeoID WinAPI_SetUserObjectInformation WinAPI_SetVolumeMountPoint WinAPI_SetWindowDisplayAffinity WinAPI_SetWindowExt WinAPI_SetWindowLong WinAPI_SetWindowOrg WinAPI_SetWindowPlacement WinAPI_SetWindowPos WinAPI_SetWindowRgn WinAPI_SetWindowsHookEx WinAPI_SetWindowSubclass WinAPI_SetWindowText WinAPI_SetWindowTheme WinAPI_SetWinEventHook WinAPI_SetWorldTransform WinAPI_SfcIsFileProtected WinAPI_SfcIsKeyProtected WinAPI_ShellAboutDlg WinAPI_ShellAddToRecentDocs WinAPI_ShellChangeNotify WinAPI_ShellChangeNotifyDeregister WinAPI_ShellChangeNotifyRegister WinAPI_ShellCreateDirectory WinAPI_ShellEmptyRecycleBin WinAPI_ShellExecute WinAPI_ShellExecuteEx WinAPI_ShellExtractAssociatedIcon WinAPI_ShellExtractIcon WinAPI_ShellFileOperation WinAPI_ShellFlushSFCache WinAPI_ShellGetFileInfo WinAPI_ShellGetIconOverlayIndex WinAPI_ShellGetImageList WinAPI_ShellGetKnownFolderIDList WinAPI_ShellGetKnownFolderPath WinAPI_ShellGetLocalizedName WinAPI_ShellGetPathFromIDList WinAPI_ShellGetSetFolderCustomSettings WinAPI_ShellGetSettings WinAPI_ShellGetSpecialFolderLocation WinAPI_ShellGetSpecialFolderPath WinAPI_ShellGetStockIconInfo WinAPI_ShellILCreateFromPath WinAPI_ShellNotifyIcon WinAPI_ShellNotifyIconGetRect WinAPI_ShellObjectProperties WinAPI_ShellOpenFolderAndSelectItems WinAPI_ShellOpenWithDlg WinAPI_ShellQueryRecycleBin WinAPI_ShellQueryUserNotificationState WinAPI_ShellRemoveLocalizedName WinAPI_ShellRestricted WinAPI_ShellSetKnownFolderPath WinAPI_ShellSetLocalizedName WinAPI_ShellSetSettings WinAPI_ShellStartNetConnectionDlg WinAPI_ShellUpdateImage WinAPI_ShellUserAuthenticationDlg WinAPI_ShellUserAuthenticationDlgEx WinAPI_ShortToWord WinAPI_ShowCaret WinAPI_ShowCursor WinAPI_ShowError WinAPI_ShowLastError WinAPI_ShowMsg WinAPI_ShowOwnedPopups WinAPI_ShowWindow WinAPI_ShutdownBlockReasonCreate WinAPI_ShutdownBlockReasonDestroy WinAPI_ShutdownBlockReasonQuery WinAPI_SizeOfResource WinAPI_StretchBlt WinAPI_StretchDIBits WinAPI_StrFormatByteSize WinAPI_StrFormatByteSizeEx WinAPI_StrFormatKBSize WinAPI_StrFromTimeInterval WinAPI_StringFromGUID WinAPI_StringLenA WinAPI_StringLenW WinAPI_StrLen WinAPI_StrokeAndFillPath WinAPI_StrokePath WinAPI_StructToArray WinAPI_SubLangId WinAPI_SubtractRect WinAPI_SwapDWord WinAPI_SwapQWord WinAPI_SwapWord WinAPI_SwitchColor WinAPI_SwitchDesktop WinAPI_SwitchToThisWindow WinAPI_SystemParametersInfo WinAPI_TabbedTextOut WinAPI_TerminateJobObject WinAPI_TerminateProcess WinAPI_TextOut WinAPI_TileWindows WinAPI_TrackMouseEvent WinAPI_TransparentBlt WinAPI_TwipsPerPixelX WinAPI_TwipsPerPixelY WinAPI_UnhookWindowsHookEx WinAPI_UnhookWinEvent WinAPI_UnionRect WinAPI_UnionStruct WinAPI_UniqueHardwareID WinAPI_UnloadKeyboardLayout WinAPI_UnlockFile WinAPI_UnmapViewOfFile WinAPI_UnregisterApplicationRestart WinAPI_UnregisterClass WinAPI_UnregisterHotKey WinAPI_UnregisterPowerSettingNotification WinAPI_UpdateLayeredWindow WinAPI_UpdateLayeredWindowEx WinAPI_UpdateLayeredWindowIndirect WinAPI_UpdateResource WinAPI_UpdateWindow WinAPI_UrlApplyScheme 
WinAPI_UrlCanonicalize WinAPI_UrlCombine WinAPI_UrlCompare WinAPI_UrlCreateFromPath WinAPI_UrlFixup WinAPI_UrlGetPart WinAPI_UrlHash WinAPI_UrlIs WinAPI_UserHandleGrantAccess WinAPI_ValidateRect WinAPI_ValidateRgn WinAPI_VerQueryRoot WinAPI_VerQueryValue WinAPI_VerQueryValueEx WinAPI_WaitForInputIdle WinAPI_WaitForMultipleObjects WinAPI_WaitForSingleObject WinAPI_WideCharToMultiByte WinAPI_WidenPath WinAPI_WindowFromDC WinAPI_WindowFromPoint WinAPI_WordToShort WinAPI_Wow64EnableWow64FsRedirection WinAPI_WriteConsole WinAPI_WriteFile WinAPI_WriteProcessMemory WinAPI_ZeroMemory WinNet_AddConnection WinNet_AddConnection2 WinNet_AddConnection3 WinNet_CancelConnection WinNet_CancelConnection2 WinNet_CloseEnum WinNet_ConnectionDialog WinNet_ConnectionDialog1 WinNet_DisconnectDialog WinNet_DisconnectDialog1 WinNet_EnumResource WinNet_GetConnection WinNet_GetConnectionPerformance WinNet_GetLastError WinNet_GetNetworkInformation WinNet_GetProviderName WinNet_GetResourceInformation WinNet_GetResourceParent WinNet_GetUniversalName WinNet_GetUser WinNet_OpenEnum WinNet_RestoreConnection WinNet_UseConnection Word_Create Word_DocAdd Word_DocAttach Word_DocClose Word_DocExport Word_DocFind Word_DocFindReplace Word_DocGet Word_DocLinkAdd Word_DocLinkGet Word_DocOpen Word_DocPictureAdd Word_DocPrint Word_DocRangeSet Word_DocSave Word_DocSaveAs Word_DocTableRead Word_DocTableWrite Word_Quit",I={
-v:[e.C(";","$",{r:0}),e.C("#cs","#ce"),e.C("#comments-start","#comments-end")]},n={b:"\\$[A-z0-9_]+"},l={cN:"string",v:[{b:/"/,e:/"/,c:[{b:/""/,r:0}]},{b:/'/,e:/'/,c:[{b:/''/,r:0}]}]},o={v:[e.BNM,e.CNM]},a={cN:"meta",b:"#",e:"$",k:{"meta-keyword":"include include-once NoTrayIcon OnAutoItStartRegister RequireAdmin pragma Au3Stripper_Ignore_Funcs Au3Stripper_Ignore_Variables Au3Stripper_Off Au3Stripper_On Au3Stripper_Parameters AutoIt3Wrapper_Add_Constants AutoIt3Wrapper_Au3Check_Parameters AutoIt3Wrapper_Au3Check_Stop_OnWarning AutoIt3Wrapper_Aut2Exe AutoIt3Wrapper_AutoIt3 AutoIt3Wrapper_AutoIt3Dir AutoIt3Wrapper_Change2CUI AutoIt3Wrapper_Compile_Both AutoIt3Wrapper_Compression AutoIt3Wrapper_EndIf AutoIt3Wrapper_Icon AutoIt3Wrapper_If_Compile AutoIt3Wrapper_If_Run AutoIt3Wrapper_Jump_To_First_Error AutoIt3Wrapper_OutFile AutoIt3Wrapper_OutFile_Type AutoIt3Wrapper_OutFile_X64 AutoIt3Wrapper_PlugIn_Funcs AutoIt3Wrapper_Res_Comment Autoit3Wrapper_Res_Compatibility AutoIt3Wrapper_Res_Description AutoIt3Wrapper_Res_Field AutoIt3Wrapper_Res_File_Add AutoIt3Wrapper_Res_FileVersion AutoIt3Wrapper_Res_FileVersion_AutoIncrement AutoIt3Wrapper_Res_Icon_Add AutoIt3Wrapper_Res_Language AutoIt3Wrapper_Res_LegalCopyright AutoIt3Wrapper_Res_ProductVersion AutoIt3Wrapper_Res_requestedExecutionLevel AutoIt3Wrapper_Res_SaveSource AutoIt3Wrapper_Run_After AutoIt3Wrapper_Run_Au3Check AutoIt3Wrapper_Run_Au3Stripper AutoIt3Wrapper_Run_Before AutoIt3Wrapper_Run_Debug_Mode AutoIt3Wrapper_Run_SciTE_Minimized AutoIt3Wrapper_Run_SciTE_OutputPane_Minimized AutoIt3Wrapper_Run_Tidy AutoIt3Wrapper_ShowProgress AutoIt3Wrapper_Testing AutoIt3Wrapper_Tidy_Stop_OnError AutoIt3Wrapper_UPX_Parameters AutoIt3Wrapper_UseUPX AutoIt3Wrapper_UseX64 AutoIt3Wrapper_Version AutoIt3Wrapper_Versioning AutoIt3Wrapper_Versioning_Parameters Tidy_Off Tidy_On Tidy_Parameters EndRegion Region"},c:[{b:/\\\n/,r:0},{bK:"include",k:{"meta-keyword":"include"},e:"$",c:[l,{cN:"meta-string",v:[{b:"<",e:">"},{b:/"/,e:/"/,c:[{b:/""/,r:0}]},{b:/'/,e:/'/,c:[{b:/''/,r:0}]}]}]},l,I]},_={cN:"symbol",b:"@[A-z0-9_]+"},G={cN:"function",bK:"Func",e:"$",i:"\\$|\\[|%",c:[e.UTM,{cN:"params",b:"\\(",e:"\\)",c:[n,l,o]}]};return{cI:!0,i:/\/\*/,k:{keyword:t,built_in:i,literal:r},c:[I,n,l,o,a,_,G]}});hljs.registerLanguage("perl",function(e){var t="getpwent getservent quotemeta msgrcv scalar kill dbmclose undef lc ma syswrite tr send umask sysopen shmwrite vec qx utime local oct semctl localtime readpipe do return format read sprintf dbmopen pop getpgrp not getpwnam rewinddir qqfileno qw endprotoent wait sethostent bless s|0 opendir continue each sleep endgrent shutdown dump chomp connect getsockname die socketpair close flock exists index shmgetsub for endpwent redo lstat msgctl setpgrp abs exit select print ref gethostbyaddr unshift fcntl syscall goto getnetbyaddr join gmtime symlink semget splice x|0 getpeername recv log setsockopt cos last reverse gethostbyname getgrnam study formline endhostent times chop length gethostent getnetent pack getprotoent getservbyname rand mkdir pos chmod y|0 substr endnetent printf next open msgsnd readdir use unlink getsockopt getpriority rindex wantarray hex system getservbyport endservent int chr untie rmdir prototype tell listen fork shmread ucfirst setprotoent else sysseek link getgrgid shmctl waitpid unpack getnetbyname reset chdir grep split require caller lcfirst until warn while values shift telldir getpwuid my getprotobynumber delete and sort uc defined srand accept package seekdir getprotobyname semop our rename seek if q|0 
chroot sysread setpwent no crypt getc chown sqrt write setnetent setpriority foreach tie sin msgget map stat getlogin unless elsif truncate exec keys glob tied closedirioctl socket readlink eval xor readline binmode setservent eof ord bind alarm pipe atan2 getgrent exp time push setgrent gt lt or ne m|0 break given say state when",r={cN:"subst",b:"[$@]\\{",e:"\\}",k:t},s={b:"->{",e:"}"},n={v:[{b:/\$\d/},{b:/[\$%@](\^\w\b|#\w+(::\w+)*|{\w+}|\w+(::\w*)*)/},{b:/[\$%@][^\s\w{]/,r:0}]},i=[e.BE,r,n],o=[n,e.HCM,e.C("^\\=\\w","\\=cut",{eW:!0}),s,{cN:"string",c:i,v:[{b:"q[qwxr]?\\s*\\(",e:"\\)",r:5},{b:"q[qwxr]?\\s*\\[",e:"\\]",r:5},{b:"q[qwxr]?\\s*\\{",e:"\\}",r:5},{b:"q[qwxr]?\\s*\\|",e:"\\|",r:5},{b:"q[qwxr]?\\s*\\<",e:"\\>",r:5},{b:"qw\\s+q",e:"q",r:5},{b:"'",e:"'",c:[e.BE]},{b:'"',e:'"'},{b:"`",e:"`",c:[e.BE]},{b:"{\\w+}",c:[],r:0},{b:"-?\\w+\\s*\\=\\>",c:[],r:0}]},{cN:"number",b:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",r:0},{b:"(\\/\\/|"+e.RSR+"|\\b(split|return|print|reverse|grep)\\b)\\s*",k:"split return print reverse grep",r:0,c:[e.HCM,{cN:"regexp",b:"(s|tr|y)/(\\\\.|[^/])*/(\\\\.|[^/])*/[a-z]*",r:10},{cN:"regexp",b:"(m|qr)?/",e:"/[a-z]*",c:[e.BE],r:0}]},{cN:"function",bK:"sub",e:"(\\s*\\(.*?\\))?[;{]",eE:!0,r:5,c:[e.TM]},{b:"-\\w\\b",r:0},{b:"^__DATA__$",e:"^__END__$",sL:"mojolicious",c:[{b:"^@@.*",e:"$",cN:"comment"}]}];return r.c=o,s.c=o,{aliases:["pl"],k:t,c:o}});hljs.registerLanguage("axapta",function(e){return{k:"false int abstract private char boolean static null if for true while long throw finally protected final return void enum else break new catch byte super case short default double public try this switch continue reverse firstfast firstonly forupdate nofetch sum avg minof maxof count order group by asc desc index hint like dispaly edit client server ttsbegin ttscommit str real date container anytype common div mod",c:[e.CLCM,e.CBCM,e.ASM,e.QSM,e.CNM,{cN:"meta",b:"#",e:"$"},{cN:"class",bK:"class interface",e:"{",eE:!0,i:":",c:[{bK:"extends implements"},e.UTM]}]}});hljs.registerLanguage("applescript",function(e){var t=e.inherit(e.QSM,{i:""}),r={cN:"params",b:"\\(",e:"\\)",c:["self",e.CNM,t]},i=e.C("--","$"),o=e.C("\\(\\*","\\*\\)",{c:["self",i]}),n=[i,o,e.HCM];return{aliases:["osascript"],k:{keyword:"about above after against and around as at back before beginning behind below beneath beside between but by considering contain contains continue copy div does eighth else end equal equals error every exit fifth first for fourth from front get given global if ignoring in into is it its last local me middle mod my ninth not of on onto or over prop property put ref reference repeat returning script second set seventh since sixth some tell tenth that the|0 then third through thru timeout times to transaction try until where while whose with without",literal:"AppleScript false linefeed return pi quote result space tab true",built_in:"alias application boolean class constant date file integer list number real record string text activate beep count delay launch log offset read round run say summarize write character characters contents day frontmost id item length month name paragraph paragraphs rest reverse running time version weekday word words year"},c:[t,e.CNM,{cN:"built_in",b:"\\b(clipboard info|the clipboard|info for|list (disks|folder)|mount volume|path to|(close|open for) access|(get|set) eof|current date|do shell script|get volume settings|random number|set volume|system attribute|system info|time to GMT|(load|run|store) script|scripting 
components|ASCII (character|number)|localized string|choose (application|color|file|file name|folder|from list|remote application|URL)|display (alert|dialog))\\b|^\\s*return\\b"},{cN:"literal",b:"\\b(text item delimiters|current application|missing value)\\b"},{cN:"keyword",b:"\\b(apart from|aside from|instead of|out of|greater than|isn't|(doesn't|does not) (equal|come before|come after|contain)|(greater|less) than( or equal)?|(starts?|ends|begins?) with|contained by|comes (before|after)|a (ref|reference)|POSIX file|POSIX path|(date|time) string|quoted form)\\b"},{bK:"on",i:"[${=;\\n]",c:[e.UTM,r]}].concat(n),i:"//|->|=>|\\[\\["}});hljs.registerLanguage("prolog",function(c){var b={b:/[a-z][A-Za-z0-9_]*/,r:0},r={cN:"symbol",v:[{b:/[A-Z][a-zA-Z0-9_]*/},{b:/_[A-Za-z0-9_]*/}],r:0},e={b:/\(/,e:/\)/,r:0},n={b:/\[/,e:/\]/},a={cN:"comment",b:/%/,e:/$/,c:[c.PWM]},t={cN:"string",b:/`/,e:/`/,c:[c.BE]},g={cN:"string",b:/0\'(\\\'|.)/},s={cN:"string",b:/0\'\\s/},o={b:/:-/},N=[b,r,e,o,n,a,c.CBCM,c.QSM,c.ASM,t,g,s,c.CNM];return e.c=N,n.c=N,{c:N.concat([{b:/\.$/}])}});hljs.registerLanguage("actionscript",function(e){var a="[a-zA-Z_$][a-zA-Z0-9_$]*",t="([*]|[a-zA-Z_$][a-zA-Z0-9_$]*)",c={cN:"rest_arg",b:"[.]{3}",e:a,r:10};return{aliases:["as"],k:{keyword:"as break case catch class const continue default delete do dynamic each else extends final finally for function get if implements import in include instanceof interface internal is namespace native new override package private protected public return set static super switch this throw try typeof use var void while with",literal:"true false null undefined"},c:[e.ASM,e.QSM,e.CLCM,e.CBCM,e.CNM,{cN:"class",bK:"package",e:"{",c:[e.TM]},{cN:"class",bK:"class interface",e:"{",eE:!0,c:[{bK:"extends implements"},e.TM]},{cN:"meta",bK:"import include",e:";",k:{"meta-keyword":"import include"}},{cN:"function",bK:"function",e:"[{;]",eE:!0,i:"\\S",c:[e.TM,{cN:"params",b:"\\(",e:"\\)",c:[e.ASM,e.QSM,e.CLCM,e.CBCM,c]},{b:":\\s*"+t}]}],i:/#/}});hljs.registerLanguage("pf",function(t){var o={cN:"variable",b:/\$[\w\d#@][\w\d_]*/},e={cN:"variable",b:/</,e:/>/};return{aliases:["pf.conf"],l:/[a-z0-9_<>-]+/,k:{built_in:"block match pass load anchor|5 antispoof|10 set table",keyword:"in out log quick on rdomain inet inet6 proto from port os to routeallow-opts divert-packet divert-reply divert-to flags group icmp-typeicmp6-type label once probability recieved-on rtable prio queuetos tag tagged user keep fragment for os dropaf-to|10 binat-to|10 nat-to|10 rdr-to|10 bitmask least-stats random round-robinsource-hash static-portdup-to reply-to route-toparent bandwidth default min max qlimitblock-policy debug fingerprints hostid limit loginterface optimizationreassemble ruleset-optimization basic none profile skip state-defaultsstate-policy timeoutconst counters persistno modulate synproxy state|5 floating if-bound no-sync pflow|10 sloppysource-track global rule max-src-nodes max-src-states max-src-connmax-src-conn-rate overload flushscrub|5 max-mss min-ttl no-df|10 random-id",literal:"all any no-route self urpf-failed egress|5 unknown"},c:[t.HCM,t.NM,t.QSM,o,e]}});hljs.registerLanguage("objectivec",function(e){var t={cN:"built_in",b:"(AV|CA|CF|CG|CI|MK|MP|NS|UI|XC)\\w+"},i={keyword:"int float while char export sizeof typedef const struct for union unsigned long volatile static bool mutable if do return goto void enum else break extern asm case short default double register explicit signed typename this switch continue wchar_t inline readonly assign readwrite self @synchronized id typeof 
nonatomic super unichar IBOutlet IBAction strong weak copy in out inout bycopy byref oneway __strong __weak __block __autoreleasing @private @protected @public @try @property @end @throw @catch @finally @autoreleasepool @synthesize @dynamic @selector @optional @required",literal:"false true FALSE TRUE nil YES NO NULL",built_in:"BOOL dispatch_once_t dispatch_queue_t dispatch_sync dispatch_async dispatch_once"},n=/[a-zA-Z@][a-zA-Z0-9_]*/,o="@interface @class @protocol @implementation";return{aliases:["mm","objc","obj-c"],k:i,l:n,i:"</",c:[t,e.CLCM,e.CBCM,e.CNM,e.QSM,{cN:"string",v:[{b:'@"',e:'"',i:"\\n",c:[e.BE]},{b:"'",e:"[^\\\\]'",i:"[^\\\\][^']"}]},{cN:"meta",b:"#",e:"$",c:[{cN:"meta-string",v:[{b:'"',e:'"'},{b:"<",e:">"}]}]},{cN:"class",b:"("+o.split(" ").join("|")+")\\b",e:"({|$)",eE:!0,k:o,l:n,c:[e.UTM]},{b:"\\."+e.UIR,r:0}]}});hljs.registerLanguage("mathematica",function(e){return{aliases:["mma"],l:"(\\$|\\b)"+e.IR+"\\b",k:"AbelianGroup Abort AbortKernels AbortProtect Above Abs Absolute AbsoluteCorrelation AbsoluteCorrelationFunction AbsoluteCurrentValue AbsoluteDashing AbsoluteFileName AbsoluteOptions AbsolutePointSize AbsoluteThickness AbsoluteTime AbsoluteTiming AccountingForm Accumulate Accuracy AccuracyGoal ActionDelay ActionMenu ActionMenuBox ActionMenuBoxOptions Active ActiveItem ActiveStyle AcyclicGraphQ AddOnHelpPath AddTo AdjacencyGraph AdjacencyList AdjacencyMatrix AdjustmentBox AdjustmentBoxOptions AdjustTimeSeriesForecast AffineTransform After AiryAi AiryAiPrime AiryAiZero AiryBi AiryBiPrime AiryBiZero AlgebraicIntegerQ AlgebraicNumber AlgebraicNumberDenominator AlgebraicNumberNorm AlgebraicNumberPolynomial AlgebraicNumberTrace AlgebraicRules AlgebraicRulesData Algebraics AlgebraicUnitQ Alignment AlignmentMarker AlignmentPoint All AllowedDimensions AllowGroupClose AllowInlineCells AllowKernelInitialization AllowReverseGroupClose AllowScriptLevelChange AlphaChannel AlternatingGroup AlternativeHypothesis Alternatives AmbientLight Analytic AnchoredSearch And AndersonDarlingTest AngerJ AngleBracket AngularGauge Animate AnimationCycleOffset AnimationCycleRepetitions AnimationDirection AnimationDisplayTime AnimationRate AnimationRepetitions AnimationRunning Animator AnimatorBox AnimatorBoxOptions AnimatorElements Annotation Annuity AnnuityDue Antialiasing Antisymmetric Apart ApartSquareFree Appearance AppearanceElements AppellF1 Append AppendTo Apply ArcCos ArcCosh ArcCot ArcCoth ArcCsc ArcCsch ArcSec ArcSech ArcSin ArcSinDistribution ArcSinh ArcTan ArcTanh Arg ArgMax ArgMin ArgumentCountQ ARIMAProcess ArithmeticGeometricMean ARMAProcess ARProcess Array ArrayComponents ArrayDepth ArrayFlatten ArrayPad ArrayPlot ArrayQ ArrayReshape ArrayRules Arrays Arrow Arrow3DBox ArrowBox Arrowheads AspectRatio AspectRatioFixed Assert Assuming Assumptions AstronomicalData Asynchronous AsynchronousTaskObject AsynchronousTasks AtomQ Attributes AugmentedSymmetricPolynomial AutoAction AutoDelete AutoEvaluateEvents AutoGeneratedPackage AutoIndent AutoIndentSpacings AutoItalicWords AutoloadPath AutoMatch Automatic AutomaticImageSize AutoMultiplicationSymbol AutoNumberFormatting AutoOpenNotebooks AutoOpenPalettes AutorunSequencing AutoScaling AutoScroll AutoSpacing AutoStyleOptions AutoStyleWords Axes AxesEdge AxesLabel AxesOrigin AxesStyle Axis BabyMonsterGroupB Back Background BackgroundTasksSettings Backslash Backsubstitution Backward Band BandpassFilter BandstopFilter BarabasiAlbertGraphDistribution BarChart BarChart3D BarLegend BarlowProschanImportance BarnesG BarOrigin BarSpacing 
BartlettHannWindow BartlettWindow BaseForm Baseline BaselinePosition BaseStyle BatesDistribution BattleLemarieWavelet Because BeckmannDistribution Beep Before Begin BeginDialogPacket BeginFrontEndInteractionPacket BeginPackage BellB BellY Below BenfordDistribution BeniniDistribution BenktanderGibratDistribution BenktanderWeibullDistribution BernoulliB BernoulliDistribution BernoulliGraphDistribution BernoulliProcess BernsteinBasis BesselFilterModel BesselI BesselJ BesselJZero BesselK BesselY BesselYZero Beta BetaBinomialDistribution BetaDistribution BetaNegativeBinomialDistribution BetaPrimeDistribution BetaRegularized BetweennessCentrality BezierCurve BezierCurve3DBox BezierCurve3DBoxOptions BezierCurveBox BezierCurveBoxOptions BezierFunction BilateralFilter Binarize BinaryFormat BinaryImageQ BinaryRead BinaryReadList BinaryWrite BinCounts BinLists Binomial BinomialDistribution BinomialProcess BinormalDistribution BiorthogonalSplineWavelet BipartiteGraphQ BirnbaumImportance BirnbaumSaundersDistribution BitAnd BitClear BitGet BitLength BitNot BitOr BitSet BitShiftLeft BitShiftRight BitXor Black BlackmanHarrisWindow BlackmanNuttallWindow BlackmanWindow Blank BlankForm BlankNullSequence BlankSequence Blend Block BlockRandom BlomqvistBeta BlomqvistBetaTest Blue Blur BodePlot BohmanWindow Bold Bookmarks Boole BooleanConsecutiveFunction BooleanConvert BooleanCountingFunction BooleanFunction BooleanGraph BooleanMaxterms BooleanMinimize BooleanMinterms Booleans BooleanTable BooleanVariables BorderDimensions BorelTannerDistribution Bottom BottomHatTransform BoundaryStyle Bounds Box BoxBaselineShift BoxData BoxDimensions Boxed Boxes BoxForm BoxFormFormatTypes BoxFrame BoxID BoxMargins BoxMatrix BoxRatios BoxRotation BoxRotationPoint BoxStyle BoxWhiskerChart Bra BracketingBar BraKet BrayCurtisDistance BreadthFirstScan Break Brown BrownForsytheTest BrownianBridgeProcess BrowserCategory BSplineBasis BSplineCurve BSplineCurve3DBox BSplineCurveBox BSplineCurveBoxOptions BSplineFunction BSplineSurface BSplineSurface3DBox BubbleChart BubbleChart3D BubbleScale BubbleSizes BulletGauge BusinessDayQ ButterflyGraph ButterworthFilterModel Button ButtonBar ButtonBox ButtonBoxOptions ButtonCell ButtonContents ButtonData ButtonEvaluator ButtonExpandable ButtonFrame ButtonFunction ButtonMargins ButtonMinHeight ButtonNote ButtonNotebook ButtonSource ButtonStyle ButtonStyleMenuListing Byte ByteCount ByteOrdering C CachedValue CacheGraphics CalendarData CalendarType CallPacket CanberraDistance Cancel CancelButton CandlestickChart Cap CapForm CapitalDifferentialD CardinalBSplineBasis CarmichaelLambda Cases Cashflow Casoratian Catalan CatalanNumber Catch CauchyDistribution CauchyWindow CayleyGraph CDF CDFDeploy CDFInformation CDFWavelet Ceiling Cell CellAutoOverwrite CellBaseline CellBoundingBox CellBracketOptions CellChangeTimes CellContents CellContext CellDingbat CellDynamicExpression CellEditDuplicate CellElementsBoundingBox CellElementSpacings CellEpilog CellEvaluationDuplicate CellEvaluationFunction CellEventActions CellFrame CellFrameColor CellFrameLabelMargins CellFrameLabels CellFrameMargins CellGroup CellGroupData CellGrouping CellGroupingRules CellHorizontalScrolling CellID CellLabel CellLabelAutoDelete CellLabelMargins CellLabelPositioning CellMargins CellObject CellOpen CellPrint CellProlog Cells CellSize CellStyle CellTags CellularAutomaton CensoredDistribution Censoring Center CenterDot CentralMoment CentralMomentGeneratingFunction CForm ChampernowneNumber ChanVeseBinarize Character CharacterEncoding 
CharacterEncodingsPath CharacteristicFunction CharacteristicPolynomial CharacterRange Characters ChartBaseStyle ChartElementData ChartElementDataFunction ChartElementFunction ChartElements ChartLabels ChartLayout ChartLegends ChartStyle Chebyshev1FilterModel Chebyshev2FilterModel ChebyshevDistance ChebyshevT ChebyshevU Check CheckAbort CheckAll Checkbox CheckboxBar CheckboxBox CheckboxBoxOptions ChemicalData ChessboardDistance ChiDistribution ChineseRemainder ChiSquareDistribution ChoiceButtons ChoiceDialog CholeskyDecomposition Chop Circle CircleBox CircleDot CircleMinus CirclePlus CircleTimes CirculantGraph CityData Clear ClearAll ClearAttributes ClearSystemCache ClebschGordan ClickPane Clip ClipboardNotebook ClipFill ClippingStyle ClipPlanes ClipRange Clock ClockGauge ClockwiseContourIntegral Close Closed CloseKernels ClosenessCentrality Closing ClosingAutoSave ClosingEvent ClusteringComponents CMYKColor Coarse Coefficient CoefficientArrays CoefficientDomain CoefficientList CoefficientRules CoifletWavelet Collect Colon ColonForm ColorCombine ColorConvert ColorData ColorDataFunction ColorFunction ColorFunctionScaling Colorize ColorNegate ColorOutput ColorProfileData ColorQuantize ColorReplace ColorRules ColorSelectorSettings ColorSeparate ColorSetter ColorSetterBox ColorSetterBoxOptions ColorSlider ColorSpace Column ColumnAlignments ColumnBackgrounds ColumnForm ColumnLines ColumnsEqual ColumnSpacings ColumnWidths CommonDefaultFormatTypes Commonest CommonestFilter CommonUnits CommunityBoundaryStyle CommunityGraphPlot CommunityLabels CommunityRegionStyle CompatibleUnitQ CompilationOptions CompilationTarget Compile Compiled CompiledFunction Complement CompleteGraph CompleteGraphQ CompleteKaryTree CompletionsListPacket Complex Complexes ComplexExpand ComplexInfinity ComplexityFunction ComponentMeasurements ComponentwiseContextMenu Compose ComposeList ComposeSeries Composition CompoundExpression CompoundPoissonDistribution CompoundPoissonProcess CompoundRenewalProcess Compress CompressedData Condition ConditionalExpression Conditioned Cone ConeBox ConfidenceLevel ConfidenceRange ConfidenceTransform ConfigurationPath Congruent Conjugate ConjugateTranspose Conjunction Connect ConnectedComponents ConnectedGraphQ ConnesWindow ConoverTest ConsoleMessage ConsoleMessagePacket ConsolePrint Constant ConstantArray Constants ConstrainedMax ConstrainedMin ContentPadding ContentsBoundingBox ContentSelectable ContentSize Context ContextMenu Contexts ContextToFilename ContextToFileName Continuation Continue ContinuedFraction ContinuedFractionK ContinuousAction ContinuousMarkovProcess ContinuousTimeModelQ ContinuousWaveletData ContinuousWaveletTransform ContourDetect ContourGraphics ContourIntegral ContourLabels ContourLines ContourPlot ContourPlot3D Contours ContourShading ContourSmoothing ContourStyle ContraharmonicMean Control ControlActive ControlAlignment ControllabilityGramian ControllabilityMatrix ControllableDecomposition ControllableModelQ ControllerDuration ControllerInformation ControllerInformationData ControllerLinking ControllerManipulate ControllerMethod ControllerPath ControllerState ControlPlacement ControlsRendering ControlType Convergents ConversionOptions ConversionRules ConvertToBitmapPacket ConvertToPostScript ConvertToPostScriptPacket Convolve ConwayGroupCo1 ConwayGroupCo2 ConwayGroupCo3 CoordinateChartData CoordinatesToolOptions CoordinateTransform CoordinateTransformData CoprimeQ Coproduct CopulaDistribution Copyable CopyDirectory CopyFile CopyTag CopyToClipboard CornerFilter 
CornerNeighbors Correlation CorrelationDistance CorrelationFunction CorrelationTest Cos Cosh CoshIntegral CosineDistance CosineWindow CosIntegral Cot Coth Count CounterAssignments CounterBox CounterBoxOptions CounterClockwiseContourIntegral CounterEvaluator CounterFunction CounterIncrements CounterStyle CounterStyleMenuListing CountRoots CountryData Covariance CovarianceEstimatorFunction CovarianceFunction CoxianDistribution CoxIngersollRossProcess CoxModel CoxModelFit CramerVonMisesTest CreateArchive CreateDialog CreateDirectory CreateDocument CreateIntermediateDirectories CreatePalette CreatePalettePacket CreateScheduledTask CreateTemporary CreateWindow CriticalityFailureImportance CriticalitySuccessImportance CriticalSection Cross CrossingDetect CrossMatrix Csc Csch CubeRoot Cubics Cuboid CuboidBox Cumulant CumulantGeneratingFunction Cup CupCap Curl CurlyDoubleQuote CurlyQuote CurrentImage CurrentlySpeakingPacket CurrentValue CurvatureFlowFilter CurveClosed Cyan CycleGraph CycleIndexPolynomial Cycles CyclicGroup Cyclotomic Cylinder CylinderBox CylindricalDecomposition D DagumDistribution DamerauLevenshteinDistance DampingFactor Darker Dashed Dashing DataCompression DataDistribution DataRange DataReversed Date DateDelimiters DateDifference DateFunction DateList DateListLogPlot DateListPlot DatePattern DatePlus DateRange DateString DateTicksFormat DaubechiesWavelet DavisDistribution DawsonF DayCount DayCountConvention DayMatchQ DayName DayPlus DayRange DayRound DeBruijnGraph Debug DebugTag Decimal DeclareKnownSymbols DeclarePackage Decompose Decrement DedekindEta Default DefaultAxesStyle DefaultBaseStyle DefaultBoxStyle DefaultButton DefaultColor DefaultControlPlacement DefaultDuplicateCellStyle DefaultDuration DefaultElement DefaultFaceGridsStyle DefaultFieldHintStyle DefaultFont DefaultFontProperties DefaultFormatType DefaultFormatTypeForStyle DefaultFrameStyle DefaultFrameTicksStyle DefaultGridLinesStyle DefaultInlineFormatType DefaultInputFormatType DefaultLabelStyle DefaultMenuStyle DefaultNaturalLanguage DefaultNewCellStyle DefaultNewInlineCellStyle DefaultNotebook DefaultOptions DefaultOutputFormatType DefaultStyle DefaultStyleDefinitions DefaultTextFormatType DefaultTextInlineFormatType DefaultTicksStyle DefaultTooltipStyle DefaultValues Defer DefineExternal DefineInputStreamMethod DefineOutputStreamMethod Definition Degree DegreeCentrality DegreeGraphDistribution DegreeLexicographic DegreeReverseLexicographic Deinitialization Del Deletable Delete DeleteBorderComponents DeleteCases DeleteContents DeleteDirectory DeleteDuplicates DeleteFile DeleteSmallComponents DeleteWithContents DeletionWarning Delimiter DelimiterFlashTime DelimiterMatching Delimiters Denominator DensityGraphics DensityHistogram DensityPlot DependentVariables Deploy Deployed Depth DepthFirstScan Derivative DerivativeFilter DescriptorStateSpace DesignMatrix Det DGaussianWavelet DiacriticalPositioning Diagonal DiagonalMatrix Dialog DialogIndent DialogInput DialogLevel DialogNotebook DialogProlog DialogReturn DialogSymbols Diamond DiamondMatrix DiceDissimilarity DictionaryLookup DifferenceDelta DifferenceOrder DifferenceRoot DifferenceRootReduce Differences DifferentialD DifferentialRoot DifferentialRootReduce DifferentiatorFilter DigitBlock DigitBlockMinimum DigitCharacter DigitCount DigitQ DihedralGroup Dilation Dimensions DiracComb DiracDelta DirectedEdge DirectedEdges DirectedGraph DirectedGraphQ DirectedInfinity Direction Directive Directory DirectoryName DirectoryQ DirectoryStack DirichletCharacter 
DirichletConvolve DirichletDistribution DirichletL DirichletTransform DirichletWindow DisableConsolePrintPacket DiscreteChirpZTransform DiscreteConvolve DiscreteDelta DiscreteHadamardTransform DiscreteIndicator DiscreteLQEstimatorGains DiscreteLQRegulatorGains DiscreteLyapunovSolve DiscreteMarkovProcess DiscretePlot DiscretePlot3D DiscreteRatio DiscreteRiccatiSolve DiscreteShift DiscreteTimeModelQ DiscreteUniformDistribution DiscreteVariables DiscreteWaveletData DiscreteWaveletPacketTransform DiscreteWaveletTransform Discriminant Disjunction Disk DiskBox DiskMatrix Dispatch DispersionEstimatorFunction Display DisplayAllSteps DisplayEndPacket DisplayFlushImagePacket DisplayForm DisplayFunction DisplayPacket DisplayRules DisplaySetSizePacket DisplayString DisplayTemporary DisplayWith DisplayWithRef DisplayWithVariable DistanceFunction DistanceTransform Distribute Distributed DistributedContexts DistributeDefinitions DistributionChart DistributionDomain DistributionFitTest DistributionParameterAssumptions DistributionParameterQ Dithering Div Divergence Divide DivideBy Dividers Divisible Divisors DivisorSigma DivisorSum DMSList DMSString Do DockedCells DocumentNotebook DominantColors DOSTextFormat Dot DotDashed DotEqual Dotted DoubleBracketingBar DoubleContourIntegral DoubleDownArrow DoubleLeftArrow DoubleLeftRightArrow DoubleLeftTee DoubleLongLeftArrow DoubleLongLeftRightArrow DoubleLongRightArrow DoubleRightArrow DoubleRightTee DoubleUpArrow DoubleUpDownArrow DoubleVerticalBar DoublyInfinite Down DownArrow DownArrowBar DownArrowUpArrow DownLeftRightVector DownLeftTeeVector DownLeftVector DownLeftVectorBar DownRightTeeVector DownRightVector DownRightVectorBar Downsample DownTee DownTeeArrow DownValues DragAndDrop DrawEdges DrawFrontFaces DrawHighlighted Drop DSolve Dt DualLinearProgramming DualSystemsModel DumpGet DumpSave DuplicateFreeQ Dynamic DynamicBox DynamicBoxOptions DynamicEvaluationTimeout DynamicLocation DynamicModule DynamicModuleBox DynamicModuleBoxOptions DynamicModuleParent DynamicModuleValues DynamicName DynamicNamespace DynamicReference DynamicSetting DynamicUpdating DynamicWrapper DynamicWrapperBox DynamicWrapperBoxOptions E EccentricityCentrality EdgeAdd EdgeBetweennessCentrality EdgeCapacity EdgeCapForm EdgeColor EdgeConnectivity EdgeCost EdgeCount EdgeCoverQ EdgeDashing EdgeDelete EdgeDetect EdgeForm EdgeIndex EdgeJoinForm EdgeLabeling EdgeLabels EdgeLabelStyle EdgeList EdgeOpacity EdgeQ EdgeRenderingFunction EdgeRules EdgeShapeFunction EdgeStyle EdgeThickness EdgeWeight Editable EditButtonSettings EditCellTagsSettings EditDistance EffectiveInterest Eigensystem Eigenvalues EigenvectorCentrality Eigenvectors Element ElementData Eliminate EliminationOrder EllipticE EllipticExp EllipticExpPrime EllipticF EllipticFilterModel EllipticK EllipticLog EllipticNomeQ EllipticPi EllipticReducedHalfPeriods EllipticTheta EllipticThetaPrime EmitSound EmphasizeSyntaxErrors EmpiricalDistribution Empty EmptyGraphQ EnableConsolePrintPacket Enabled Encode End EndAdd EndDialogPacket EndFrontEndInteractionPacket EndOfFile EndOfLine EndOfString EndPackage EngineeringForm Enter EnterExpressionPacket EnterTextPacket Entropy EntropyFilter Environment Epilog Equal EqualColumns EqualRows EqualTilde EquatedTo Equilibrium EquirippleFilterKernel Equivalent Erf Erfc Erfi ErlangB ErlangC ErlangDistribution Erosion ErrorBox ErrorBoxOptions ErrorNorm ErrorPacket ErrorsDialogSettings EstimatedDistribution EstimatedProcess EstimatorGains EstimatorRegulator EuclideanDistance EulerE EulerGamma EulerianGraphQ 
EulerPhi Evaluatable Evaluate Evaluated EvaluatePacket EvaluationCell EvaluationCompletionAction EvaluationElements EvaluationMode EvaluationMonitor EvaluationNotebook EvaluationObject EvaluationOrder Evaluator EvaluatorNames EvenQ EventData EventEvaluator EventHandler EventHandlerTag EventLabels ExactBlackmanWindow ExactNumberQ ExactRootIsolation ExampleData Except ExcludedForms ExcludePods Exclusions ExclusionsStyle Exists Exit ExitDialog Exp Expand ExpandAll ExpandDenominator ExpandFileName ExpandNumerator Expectation ExpectationE ExpectedValue ExpGammaDistribution ExpIntegralE ExpIntegralEi Exponent ExponentFunction ExponentialDistribution ExponentialFamily ExponentialGeneratingFunction ExponentialMovingAverage ExponentialPowerDistribution ExponentPosition ExponentStep Export ExportAutoReplacements ExportPacket ExportString Expression ExpressionCell ExpressionPacket ExpToTrig ExtendedGCD Extension ExtentElementFunction ExtentMarkers ExtentSize ExternalCall ExternalDataCharacterEncoding Extract ExtractArchive ExtremeValueDistribution FaceForm FaceGrids FaceGridsStyle Factor FactorComplete Factorial Factorial2 FactorialMoment FactorialMomentGeneratingFunction FactorialPower FactorInteger FactorList FactorSquareFree FactorSquareFreeList FactorTerms FactorTermsList Fail FailureDistribution False FARIMAProcess FEDisableConsolePrintPacket FeedbackSector FeedbackSectorStyle FeedbackType FEEnableConsolePrintPacket Fibonacci FieldHint FieldHintStyle FieldMasked FieldSize File FileBaseName FileByteCount FileDate FileExistsQ FileExtension FileFormat FileHash FileInformation FileName FileNameDepth FileNameDialogSettings FileNameDrop FileNameJoin FileNames FileNameSetter FileNameSplit FileNameTake FilePrint FileType FilledCurve FilledCurveBox Filling FillingStyle FillingTransform FilterRules FinancialBond FinancialData FinancialDerivative FinancialIndicator Find FindArgMax FindArgMin FindClique FindClusters FindCurvePath FindDistributionParameters FindDivisions FindEdgeCover FindEdgeCut FindEulerianCycle FindFaces FindFile FindFit FindGeneratingFunction FindGeoLocation FindGeometricTransform FindGraphCommunities FindGraphIsomorphism FindGraphPartition FindHamiltonianCycle FindIndependentEdgeSet FindIndependentVertexSet FindInstance FindIntegerNullVector FindKClan FindKClique FindKClub FindKPlex FindLibrary FindLinearRecurrence FindList FindMaximum FindMaximumFlow FindMaxValue FindMinimum FindMinimumCostFlow FindMinimumCut FindMinValue FindPermutation FindPostmanTour FindProcessParameters FindRoot FindSequenceFunction FindSettings FindShortestPath FindShortestTour FindThreshold FindVertexCover FindVertexCut Fine FinishDynamic FiniteAbelianGroupCount FiniteGroupCount FiniteGroupData First FirstPassageTimeDistribution FischerGroupFi22 FischerGroupFi23 FischerGroupFi24Prime FisherHypergeometricDistribution FisherRatioTest FisherZDistribution Fit FitAll FittedModel FixedPoint FixedPointList FlashSelection Flat Flatten FlattenAt FlatTopWindow FlipView Floor FlushPrintOutputPacket Fold FoldList Font FontColor FontFamily FontForm FontName FontOpacity FontPostScriptName FontProperties FontReencoding FontSize FontSlant FontSubstitutions FontTracking FontVariations FontWeight For ForAll Format FormatRules FormatType FormatTypeAutoConvert FormatValues FormBox FormBoxOptions FortranForm Forward ForwardBackward Fourier FourierCoefficient FourierCosCoefficient FourierCosSeries FourierCosTransform FourierDCT FourierDCTFilter FourierDCTMatrix FourierDST FourierDSTMatrix FourierMatrix FourierParameters 
FourierSequenceTransform FourierSeries FourierSinCoefficient FourierSinSeries FourierSinTransform FourierTransform FourierTrigSeries FractionalBrownianMotionProcess FractionalPart FractionBox FractionBoxOptions FractionLine Frame FrameBox FrameBoxOptions Framed FrameInset FrameLabel Frameless FrameMargins FrameStyle FrameTicks FrameTicksStyle FRatioDistribution FrechetDistribution FreeQ FrequencySamplingFilterKernel FresnelC FresnelS Friday FrobeniusNumber FrobeniusSolve FromCharacterCode FromCoefficientRules FromContinuedFraction FromDate FromDigits FromDMS Front FrontEndDynamicExpression FrontEndEventActions FrontEndExecute FrontEndObject FrontEndResource FrontEndResourceString FrontEndStackSize FrontEndToken FrontEndTokenExecute FrontEndValueCache FrontEndVersion FrontFaceColor FrontFaceOpacity Full FullAxes FullDefinition FullForm FullGraphics FullOptions FullSimplify Function FunctionExpand FunctionInterpolation FunctionSpace FussellVeselyImportance GaborFilter GaborMatrix GaborWavelet GainMargins GainPhaseMargins Gamma GammaDistribution GammaRegularized GapPenalty Gather GatherBy GaugeFaceElementFunction GaugeFaceStyle GaugeFrameElementFunction GaugeFrameSize GaugeFrameStyle GaugeLabels GaugeMarkers GaugeStyle GaussianFilter GaussianIntegers GaussianMatrix GaussianWindow GCD GegenbauerC General GeneralizedLinearModelFit GenerateConditions GeneratedCell GeneratedParameters GeneratingFunction Generic GenericCylindricalDecomposition GenomeData GenomeLookup GeodesicClosing GeodesicDilation GeodesicErosion GeodesicOpening GeoDestination GeodesyData GeoDirection GeoDistance GeoGridPosition GeometricBrownianMotionProcess GeometricDistribution GeometricMean GeometricMeanFilter GeometricTransformation GeometricTransformation3DBox GeometricTransformation3DBoxOptions GeometricTransformationBox GeometricTransformationBoxOptions GeoPosition GeoPositionENU GeoPositionXYZ GeoProjectionData GestureHandler GestureHandlerTag Get GetBoundingBoxSizePacket GetContext GetEnvironment GetFileName GetFrontEndOptionsDataPacket GetLinebreakInformationPacket GetMenusPacket GetPageBreakInformationPacket Glaisher GlobalClusteringCoefficient GlobalPreferences GlobalSession Glow GoldenRatio GompertzMakehamDistribution GoodmanKruskalGamma GoodmanKruskalGammaTest Goto Grad Gradient GradientFilter GradientOrientationFilter Graph GraphAssortativity GraphCenter GraphComplement GraphData GraphDensity GraphDiameter GraphDifference GraphDisjointUnion GraphDistance GraphDistanceMatrix GraphElementData GraphEmbedding GraphHighlight GraphHighlightStyle GraphHub Graphics Graphics3D Graphics3DBox Graphics3DBoxOptions GraphicsArray GraphicsBaseline GraphicsBox GraphicsBoxOptions GraphicsColor GraphicsColumn GraphicsComplex GraphicsComplex3DBox GraphicsComplex3DBoxOptions GraphicsComplexBox GraphicsComplexBoxOptions GraphicsContents GraphicsData GraphicsGrid GraphicsGridBox GraphicsGroup GraphicsGroup3DBox GraphicsGroup3DBoxOptions GraphicsGroupBox GraphicsGroupBoxOptions GraphicsGrouping GraphicsHighlightColor GraphicsRow GraphicsSpacing GraphicsStyle GraphIntersection GraphLayout GraphLinkEfficiency GraphPeriphery GraphPlot GraphPlot3D GraphPower GraphPropertyDistribution GraphQ GraphRadius GraphReciprocity GraphRoot GraphStyle GraphUnion Gray GrayLevel GreatCircleDistance Greater GreaterEqual GreaterEqualLess GreaterFullEqual GreaterGreater GreaterLess GreaterSlantEqual GreaterTilde Green Grid GridBaseline GridBox GridBoxAlignment GridBoxBackground GridBoxDividers GridBoxFrame GridBoxItemSize GridBoxItemStyle GridBoxOptions 
GridBoxSpacings GridCreationSettings GridDefaultElement GridElementStyleOptions GridFrame GridFrameMargins GridGraph GridLines GridLinesStyle GroebnerBasis GroupActionBase GroupCentralizer GroupElementFromWord GroupElementPosition GroupElementQ GroupElements GroupElementToWord GroupGenerators GroupMultiplicationTable GroupOrbits GroupOrder GroupPageBreakWithin GroupSetwiseStabilizer GroupStabilizer GroupStabilizerChain Gudermannian GumbelDistribution HaarWavelet HadamardMatrix HalfNormalDistribution HamiltonianGraphQ HammingDistance HammingWindow HankelH1 HankelH2 HankelMatrix HannPoissonWindow HannWindow HaradaNortonGroupHN HararyGraph HarmonicMean HarmonicMeanFilter HarmonicNumber Hash HashTable Haversine HazardFunction Head HeadCompose Heads HeavisideLambda HeavisidePi HeavisideTheta HeldGroupHe HeldPart HelpBrowserLookup HelpBrowserNotebook HelpBrowserSettings HermiteDecomposition HermiteH HermitianMatrixQ HessenbergDecomposition Hessian HexadecimalCharacter Hexahedron HexahedronBox HexahedronBoxOptions HiddenSurface HighlightGraph HighlightImage HighpassFilter HigmanSimsGroupHS HilbertFilter HilbertMatrix Histogram Histogram3D HistogramDistribution HistogramList HistogramTransform HistogramTransformInterpolation HitMissTransform HITSCentrality HodgeDual HoeffdingD HoeffdingDTest Hold HoldAll HoldAllComplete HoldComplete HoldFirst HoldForm HoldPattern HoldRest HolidayCalendar HomeDirectory HomePage Horizontal HorizontalForm HorizontalGauge HorizontalScrollPosition HornerForm HotellingTSquareDistribution HoytDistribution HTMLSave Hue HumpDownHump HumpEqual HurwitzLerchPhi HurwitzZeta HyperbolicDistribution HypercubeGraph HyperexponentialDistribution Hyperfactorial Hypergeometric0F1 Hypergeometric0F1Regularized Hypergeometric1F1 Hypergeometric1F1Regularized Hypergeometric2F1 Hypergeometric2F1Regularized HypergeometricDistribution HypergeometricPFQ HypergeometricPFQRegularized HypergeometricU Hyperlink HyperlinkCreationSettings Hyphenation HyphenationOptions HypoexponentialDistribution HypothesisTestData I Identity IdentityMatrix If IgnoreCase Im Image Image3D Image3DSlices ImageAccumulate ImageAdd ImageAdjust ImageAlign ImageApply ImageAspectRatio ImageAssemble ImageCache ImageCacheValid ImageCapture ImageChannels ImageClip ImageColorSpace ImageCompose ImageConvolve ImageCooccurrence ImageCorners ImageCorrelate ImageCorrespondingPoints ImageCrop ImageData ImageDataPacket ImageDeconvolve ImageDemosaic ImageDifference ImageDimensions ImageDistance ImageEffect ImageFeatureTrack ImageFileApply ImageFileFilter ImageFileScan ImageFilter ImageForestingComponents ImageForwardTransformation ImageHistogram ImageKeypoints ImageLevels ImageLines ImageMargins ImageMarkers ImageMeasurements ImageMultiply ImageOffset ImagePad ImagePadding ImagePartition ImagePeriodogram ImagePerspectiveTransformation ImageQ ImageRangeCache ImageReflect ImageRegion ImageResize ImageResolution ImageRotate ImageRotated ImageScaled ImageScan ImageSize ImageSizeAction ImageSizeCache ImageSizeMultipliers ImageSizeRaw ImageSubtract ImageTake ImageTransformation ImageTrim ImageType ImageValue ImageValuePositions Implies Import ImportAutoReplacements ImportString ImprovementImportance In IncidenceGraph IncidenceList IncidenceMatrix IncludeConstantBasis IncludeFileExtension IncludePods IncludeSingularTerm Increment Indent IndentingNewlineSpacings IndentMaxFraction IndependenceTest IndependentEdgeSetQ IndependentUnit IndependentVertexSetQ Indeterminate IndexCreationOptions Indexed IndexGraph IndexTag Inequality InexactNumberQ 
InexactNumbers Infinity Infix Information Inherited InheritScope Initialization InitializationCell InitializationCellEvaluation InitializationCellWarning InlineCounterAssignments InlineCounterIncrements InlineRules Inner Inpaint Input InputAliases InputAssumptions InputAutoReplacements InputField InputFieldBox InputFieldBoxOptions InputForm InputGrouping InputNamePacket InputNotebook InputPacket InputSettings InputStream InputString InputStringPacket InputToBoxFormPacket Insert InsertionPointObject InsertResults Inset Inset3DBox Inset3DBoxOptions InsetBox InsetBoxOptions Install InstallService InString Integer IntegerDigits IntegerExponent IntegerLength IntegerPart IntegerPartitions IntegerQ Integers IntegerString Integral Integrate Interactive InteractiveTradingChart Interlaced Interleaving InternallyBalancedDecomposition InterpolatingFunction InterpolatingPolynomial Interpolation InterpolationOrder InterpolationPoints InterpolationPrecision Interpretation InterpretationBox InterpretationBoxOptions InterpretationFunction InterpretTemplate InterquartileRange Interrupt InterruptSettings Intersection Interval IntervalIntersection IntervalMemberQ IntervalUnion Inverse InverseBetaRegularized InverseCDF InverseChiSquareDistribution InverseContinuousWaveletTransform InverseDistanceTransform InverseEllipticNomeQ InverseErf InverseErfc InverseFourier InverseFourierCosTransform InverseFourierSequenceTransform InverseFourierSinTransform InverseFourierTransform InverseFunction InverseFunctions InverseGammaDistribution InverseGammaRegularized InverseGaussianDistribution InverseGudermannian InverseHaversine InverseJacobiCD InverseJacobiCN InverseJacobiCS InverseJacobiDC InverseJacobiDN InverseJacobiDS InverseJacobiNC InverseJacobiND InverseJacobiNS InverseJacobiSC InverseJacobiSD InverseJacobiSN InverseLaplaceTransform InversePermutation InverseRadon InverseSeries InverseSurvivalFunction InverseWaveletTransform InverseWeierstrassP InverseZTransform Invisible InvisibleApplication InvisibleTimes IrreduciblePolynomialQ IsolatingInterval IsomorphicGraphQ IsotopeData Italic Item ItemBox ItemBoxOptions ItemSize ItemStyle ItoProcess JaccardDissimilarity JacobiAmplitude Jacobian JacobiCD JacobiCN JacobiCS JacobiDC JacobiDN JacobiDS JacobiNC JacobiND JacobiNS JacobiP JacobiSC JacobiSD JacobiSN JacobiSymbol JacobiZeta JankoGroupJ1 JankoGroupJ2 JankoGroupJ3 JankoGroupJ4 JarqueBeraALMTest JohnsonDistribution Join Joined JoinedCurve JoinedCurveBox JoinForm JordanDecomposition JordanModelDecomposition K KagiChart KaiserBesselWindow KaiserWindow KalmanEstimator KalmanFilter KarhunenLoeveDecomposition KaryTree KatzCentrality KCoreComponents KDistribution KelvinBei KelvinBer KelvinKei KelvinKer KendallTau KendallTauTest KernelExecute KernelMixtureDistribution KernelObject Kernels Ket Khinchin KirchhoffGraph KirchhoffMatrix KleinInvariantJ KnightTourGraph KnotData KnownUnitQ KolmogorovSmirnovTest KroneckerDelta KroneckerModelDecomposition KroneckerProduct KroneckerSymbol KuiperTest KumaraswamyDistribution Kurtosis KuwaharaFilter Label Labeled LabeledSlider LabelingFunction LabelStyle LaguerreL LambdaComponents LambertW LanczosWindow LandauDistribution Language LanguageCategory LaplaceDistribution LaplaceTransform Laplacian LaplacianFilter LaplacianGaussianFilter Large Larger Last Latitude LatitudeLongitude LatticeData LatticeReduce Launch LaunchKernels LayeredGraphPlot LayerSizeFunction LayoutInformation LCM LeafCount LeapYearQ LeastSquares LeastSquaresFilterKernel Left LeftArrow LeftArrowBar LeftArrowRightArrow 
LeftDownTeeVector LeftDownVector LeftDownVectorBar LeftRightArrow LeftRightVector LeftTee LeftTeeArrow LeftTeeVector LeftTriangle LeftTriangleBar LeftTriangleEqual LeftUpDownVector LeftUpTeeVector LeftUpVector LeftUpVectorBar LeftVector LeftVectorBar LegendAppearance Legended LegendFunction LegendLabel LegendLayout LegendMargins LegendMarkers LegendMarkerSize LegendreP LegendreQ LegendreType Length LengthWhile LerchPhi Less LessEqual LessEqualGreater LessFullEqual LessGreater LessLess LessSlantEqual LessTilde LetterCharacter LetterQ Level LeveneTest LeviCivitaTensor LevyDistribution Lexicographic LibraryFunction LibraryFunctionError LibraryFunctionInformation LibraryFunctionLoad LibraryFunctionUnload LibraryLoad LibraryUnload LicenseID LiftingFilterData LiftingWaveletTransform LightBlue LightBrown LightCyan Lighter LightGray LightGreen Lighting LightingAngle LightMagenta LightOrange LightPink LightPurple LightRed LightSources LightYellow Likelihood Limit LimitsPositioning LimitsPositioningTokens LindleyDistribution Line Line3DBox LinearFilter LinearFractionalTransform LinearModelFit LinearOffsetFunction LinearProgramming LinearRecurrence LinearSolve LinearSolveFunction LineBox LineBreak LinebreakAdjustments LineBreakChart LineBreakWithin LineColor LineForm LineGraph LineIndent LineIndentMaxFraction LineIntegralConvolutionPlot LineIntegralConvolutionScale LineLegend LineOpacity LineSpacing LineWrapParts LinkActivate LinkClose LinkConnect LinkConnectedQ LinkCreate LinkError LinkFlush LinkFunction LinkHost LinkInterrupt LinkLaunch LinkMode LinkObject LinkOpen LinkOptions LinkPatterns LinkProtocol LinkRead LinkReadHeld LinkReadyQ Links LinkWrite LinkWriteHeld LiouvilleLambda List Listable ListAnimate ListContourPlot ListContourPlot3D ListConvolve ListCorrelate ListCurvePathPlot ListDeconvolve ListDensityPlot Listen ListFourierSequenceTransform ListInterpolation ListLineIntegralConvolutionPlot ListLinePlot ListLogLinearPlot ListLogLogPlot ListLogPlot ListPicker ListPickerBox ListPickerBoxBackground ListPickerBoxOptions ListPlay ListPlot ListPlot3D ListPointPlot3D ListPolarPlot ListQ ListStreamDensityPlot ListStreamPlot ListSurfacePlot3D ListVectorDensityPlot ListVectorPlot ListVectorPlot3D ListZTransform Literal LiteralSearch LocalClusteringCoefficient LocalizeVariables LocationEquivalenceTest LocationTest Locator LocatorAutoCreate LocatorBox LocatorBoxOptions LocatorCentering LocatorPane LocatorPaneBox LocatorPaneBoxOptions LocatorRegion Locked Log Log10 Log2 LogBarnesG LogGamma LogGammaDistribution LogicalExpand LogIntegral LogisticDistribution LogitModelFit LogLikelihood LogLinearPlot LogLogisticDistribution LogLogPlot LogMultinormalDistribution LogNormalDistribution LogPlot LogRankTest LogSeriesDistribution LongEqual Longest LongestAscendingSequence LongestCommonSequence LongestCommonSequencePositions LongestCommonSubsequence LongestCommonSubsequencePositions LongestMatch LongForm Longitude LongLeftArrow LongLeftRightArrow LongRightArrow Loopback LoopFreeGraphQ LowerCaseQ LowerLeftArrow LowerRightArrow LowerTriangularize LowpassFilter LQEstimatorGains LQGRegulator LQOutputRegulatorGains LQRegulatorGains LUBackSubstitution LucasL LuccioSamiComponents LUDecomposition LyapunovSolve LyonsGroupLy MachineID MachineName MachineNumberQ MachinePrecision MacintoshSystemPageSetup Magenta Magnification Magnify MainSolve MaintainDynamicCaches Majority MakeBoxes MakeExpression MakeRules MangoldtLambda ManhattanDistance Manipulate Manipulator MannWhitneyTest MantissaExponent Manual Map MapAll MapAt 
MapIndexed MAProcess MapThread MarcumQ MardiaCombinedTest MardiaKurtosisTest MardiaSkewnessTest MarginalDistribution MarkovProcessProperties Masking MatchingDissimilarity MatchLocalNameQ MatchLocalNames MatchQ Material MathematicaNotation MathieuC MathieuCharacteristicA MathieuCharacteristicB MathieuCharacteristicExponent MathieuCPrime MathieuGroupM11 MathieuGroupM12 MathieuGroupM22 MathieuGroupM23 MathieuGroupM24 MathieuS MathieuSPrime MathMLForm MathMLText Matrices MatrixExp MatrixForm MatrixFunction MatrixLog MatrixPlot MatrixPower MatrixQ MatrixRank Max MaxBend MaxDetect MaxExtraBandwidths MaxExtraConditions MaxFeatures MaxFilter Maximize MaxIterations MaxMemoryUsed MaxMixtureKernels MaxPlotPoints MaxPoints MaxRecursion MaxStableDistribution MaxStepFraction MaxSteps MaxStepSize MaxValue MaxwellDistribution McLaughlinGroupMcL Mean MeanClusteringCoefficient MeanDegreeConnectivity MeanDeviation MeanFilter MeanGraphDistance MeanNeighborDegree MeanShift MeanShiftFilter Median MedianDeviation MedianFilter Medium MeijerG MeixnerDistribution MemberQ MemoryConstrained MemoryInUse Menu MenuAppearance MenuCommandKey MenuEvaluator MenuItem MenuPacket MenuSortingValue MenuStyle MenuView MergeDifferences Mesh MeshFunctions MeshRange MeshShading MeshStyle Message MessageDialog MessageList MessageName MessageOptions MessagePacket Messages MessagesNotebook MetaCharacters MetaInformation Method MethodOptions MexicanHatWavelet MeyerWavelet Min MinDetect MinFilter MinimalPolynomial MinimalStateSpaceModel Minimize Minors MinRecursion MinSize MinStableDistribution Minus MinusPlus MinValue Missing MissingDataMethod MittagLefflerE MixedRadix MixedRadixQuantity MixtureDistribution Mod Modal Mode Modular ModularLambda Module Modulus MoebiusMu Moment Momentary MomentConvert MomentEvaluate MomentGeneratingFunction Monday Monitor MonomialList MonomialOrder MonsterGroupM MorletWavelet MorphologicalBinarize MorphologicalBranchPoints MorphologicalComponents MorphologicalEulerNumber MorphologicalGraph MorphologicalPerimeter MorphologicalTransform Most MouseAnnotation MouseAppearance MouseAppearanceTag MouseButtons Mouseover MousePointerNote MousePosition MovingAverage MovingMedian MoyalDistribution MultiedgeStyle MultilaunchWarning MultiLetterItalics MultiLetterStyle MultilineFunction Multinomial MultinomialDistribution MultinormalDistribution MultiplicativeOrder Multiplicity Multiselection MultivariateHypergeometricDistribution MultivariatePoissonDistribution MultivariateTDistribution N NakagamiDistribution NameQ Names NamespaceBox Nand NArgMax NArgMin NBernoulliB NCache NDSolve NDSolveValue Nearest NearestFunction NeedCurrentFrontEndPackagePacket NeedCurrentFrontEndSymbolsPacket NeedlemanWunschSimilarity Needs Negative NegativeBinomialDistribution NegativeMultinomialDistribution NeighborhoodGraph Nest NestedGreaterGreater NestedLessLess NestedScriptRules NestList NestWhile NestWhileList NevilleThetaC NevilleThetaD NevilleThetaN NevilleThetaS NewPrimitiveStyle NExpectation Next NextPrime NHoldAll NHoldFirst NHoldRest NicholsGridLines NicholsPlot NIntegrate NMaximize NMaxValue NMinimize NMinValue NominalVariables NonAssociative NoncentralBetaDistribution NoncentralChiSquareDistribution NoncentralFRatioDistribution NoncentralStudentTDistribution NonCommutativeMultiply NonConstants None NonlinearModelFit NonlocalMeansFilter NonNegative NonPositive Nor NorlundB Norm Normal NormalDistribution NormalGrouping Normalize NormalizedSquaredEuclideanDistance NormalsFunction NormFunction Not NotCongruent NotCupCap 
NotDoubleVerticalBar Notebook NotebookApply NotebookAutoSave NotebookClose NotebookConvertSettings NotebookCreate NotebookCreateReturnObject NotebookDefault NotebookDelete NotebookDirectory NotebookDynamicExpression NotebookEvaluate NotebookEventActions NotebookFileName NotebookFind NotebookFindReturnObject NotebookGet NotebookGetLayoutInformationPacket NotebookGetMisspellingsPacket NotebookInformation NotebookInterfaceObject NotebookLocate NotebookObject NotebookOpen NotebookOpenReturnObject NotebookPath NotebookPrint NotebookPut NotebookPutReturnObject NotebookRead NotebookResetGeneratedCells Notebooks NotebookSave NotebookSaveAs NotebookSelection NotebookSetupLayoutInformationPacket NotebooksMenu NotebookWrite NotElement NotEqualTilde NotExists NotGreater NotGreaterEqual NotGreaterFullEqual NotGreaterGreater NotGreaterLess NotGreaterSlantEqual NotGreaterTilde NotHumpDownHump NotHumpEqual NotLeftTriangle NotLeftTriangleBar NotLeftTriangleEqual NotLess NotLessEqual NotLessFullEqual NotLessGreater NotLessLess NotLessSlantEqual NotLessTilde NotNestedGreaterGreater NotNestedLessLess NotPrecedes NotPrecedesEqual NotPrecedesSlantEqual NotPrecedesTilde NotReverseElement NotRightTriangle NotRightTriangleBar NotRightTriangleEqual NotSquareSubset NotSquareSubsetEqual NotSquareSuperset NotSquareSupersetEqual NotSubset NotSubsetEqual NotSucceeds NotSucceedsEqual NotSucceedsSlantEqual NotSucceedsTilde NotSuperset NotSupersetEqual NotTilde NotTildeEqual NotTildeFullEqual NotTildeTilde NotVerticalBar NProbability NProduct NProductFactors NRoots NSolve NSum NSumTerms Null NullRecords NullSpace NullWords Number NumberFieldClassNumber NumberFieldDiscriminant NumberFieldFundamentalUnits NumberFieldIntegralBasis NumberFieldNormRepresentatives NumberFieldRegulator NumberFieldRootsOfUnity NumberFieldSignature NumberForm NumberFormat NumberMarks NumberMultiplier NumberPadding NumberPoint NumberQ NumberSeparator NumberSigns NumberString Numerator NumericFunction NumericQ NuttallWindow NValues NyquistGridLines NyquistPlot O ObservabilityGramian ObservabilityMatrix ObservableDecomposition ObservableModelQ OddQ Off Offset OLEData On ONanGroupON OneIdentity Opacity Open OpenAppend Opener OpenerBox OpenerBoxOptions OpenerView OpenFunctionInspectorPacket Opening OpenRead OpenSpecialOptions OpenTemporary OpenWrite Operate OperatingSystem OptimumFlowData Optional OptionInspectorSettings OptionQ Options OptionsPacket OptionsPattern OptionValue OptionValueBox OptionValueBoxOptions Or Orange Order OrderDistribution OrderedQ Ordering Orderless OrnsteinUhlenbeckProcess Orthogonalize Out Outer OutputAutoOverwrite OutputControllabilityMatrix OutputControllableModelQ OutputForm OutputFormData OutputGrouping OutputMathEditExpression OutputNamePacket OutputResponse OutputSizeLimit OutputStream Over OverBar OverDot Overflow OverHat Overlaps Overlay OverlayBox OverlayBoxOptions Overscript OverscriptBox OverscriptBoxOptions OverTilde OverVector OwenT OwnValues PackingMethod PaddedForm Padding PadeApproximant PadLeft PadRight PageBreakAbove PageBreakBelow PageBreakWithin PageFooterLines PageFooters PageHeaderLines PageHeaders PageHeight PageRankCentrality PageWidth PairedBarChart PairedHistogram PairedSmoothHistogram PairedTTest PairedZTest PaletteNotebook PalettePath Pane PaneBox PaneBoxOptions Panel PanelBox PanelBoxOptions Paneled PaneSelector PaneSelectorBox PaneSelectorBoxOptions PaperWidth ParabolicCylinderD ParagraphIndent ParagraphSpacing ParallelArray ParallelCombine ParallelDo ParallelEvaluate Parallelization Parallelize 
ParallelMap ParallelNeeds ParallelProduct ParallelSubmit ParallelSum ParallelTable ParallelTry Parameter ParameterEstimator ParameterMixtureDistribution ParameterVariables ParametricFunction ParametricNDSolve ParametricNDSolveValue ParametricPlot ParametricPlot3D ParentConnect ParentDirectory ParentForm Parenthesize ParentList ParetoDistribution Part PartialCorrelationFunction PartialD ParticleData Partition PartitionsP PartitionsQ ParzenWindow PascalDistribution PassEventsDown PassEventsUp Paste PasteBoxFormInlineCells PasteButton Path PathGraph PathGraphQ Pattern PatternSequence PatternTest PauliMatrix PaulWavelet Pause PausedTime PDF PearsonChiSquareTest PearsonCorrelationTest PearsonDistribution PerformanceGoal PeriodicInterpolation Periodogram PeriodogramArray PermutationCycles PermutationCyclesQ PermutationGroup PermutationLength PermutationList PermutationListQ PermutationMax PermutationMin PermutationOrder PermutationPower PermutationProduct PermutationReplace Permutations PermutationSupport Permute PeronaMalikFilter Perpendicular PERTDistribution PetersenGraph PhaseMargins Pi Pick PIDData PIDDerivativeFilter PIDFeedforward PIDTune Piecewise PiecewiseExpand PieChart PieChart3D PillaiTrace PillaiTraceTest Pink Pivoting PixelConstrained PixelValue PixelValuePositions Placed Placeholder PlaceholderReplace Plain PlanarGraphQ Play PlayRange Plot Plot3D Plot3Matrix PlotDivision PlotJoined PlotLabel PlotLayout PlotLegends PlotMarkers PlotPoints PlotRange PlotRangeClipping PlotRangePadding PlotRegion PlotStyle Plus PlusMinus Pochhammer PodStates PodWidth Point Point3DBox PointBox PointFigureChart PointForm PointLegend PointSize PoissonConsulDistribution PoissonDistribution PoissonProcess PoissonWindow PolarAxes PolarAxesOrigin PolarGridLines PolarPlot PolarTicks PoleZeroMarkers PolyaAeppliDistribution PolyGamma Polygon Polygon3DBox Polygon3DBoxOptions PolygonBox PolygonBoxOptions PolygonHoleScale PolygonIntersections PolygonScale PolyhedronData PolyLog PolynomialExtendedGCD PolynomialForm PolynomialGCD PolynomialLCM PolynomialMod PolynomialQ PolynomialQuotient PolynomialQuotientRemainder PolynomialReduce PolynomialRemainder Polynomials PopupMenu PopupMenuBox PopupMenuBoxOptions PopupView PopupWindow Position Positive PositiveDefiniteMatrixQ PossibleZeroQ Postfix PostScript Power PowerDistribution PowerExpand PowerMod PowerModList PowerSpectralDensity PowersRepresentations PowerSymmetricPolynomial Precedence PrecedenceForm Precedes PrecedesEqual PrecedesSlantEqual PrecedesTilde Precision PrecisionGoal PreDecrement PredictionRoot PreemptProtect PreferencesPath Prefix PreIncrement Prepend PrependTo PreserveImageOptions Previous PriceGraphDistribution PrimaryPlaceholder Prime PrimeNu PrimeOmega PrimePi PrimePowerQ PrimeQ Primes PrimeZetaP PrimitiveRoot PrincipalComponents PrincipalValue Print PrintAction PrintForm PrintingCopies PrintingOptions PrintingPageRange PrintingStartingPageNumber PrintingStyleEnvironment PrintPrecision PrintTemporary Prism PrismBox PrismBoxOptions PrivateCellOptions PrivateEvaluationOptions PrivateFontOptions PrivateFrontEndOptions PrivateNotebookOptions PrivatePaths Probability ProbabilityDistribution ProbabilityPlot ProbabilityPr ProbabilityScalePlot ProbitModelFit ProcessEstimator ProcessParameterAssumptions ProcessParameterQ ProcessStateDomain ProcessTimeDomain Product ProductDistribution ProductLog ProgressIndicator ProgressIndicatorBox ProgressIndicatorBoxOptions Projection Prolog PromptForm Properties Property PropertyList PropertyValue Proportion Proportional 
Protect Protected ProteinData Pruning PseudoInverse Purple Put PutAppend Pyramid PyramidBox PyramidBoxOptions QBinomial QFactorial QGamma QHypergeometricPFQ QPochhammer QPolyGamma QRDecomposition QuadraticIrrationalQ Quantile QuantilePlot Quantity QuantityForm QuantityMagnitude QuantityQ QuantityUnit Quartics QuartileDeviation Quartiles QuartileSkewness QueueingNetworkProcess QueueingProcess QueueProperties Quiet Quit Quotient QuotientRemainder RadialityCentrality RadicalBox RadicalBoxOptions RadioButton RadioButtonBar RadioButtonBox RadioButtonBoxOptions Radon RamanujanTau RamanujanTauL RamanujanTauTheta RamanujanTauZ Random RandomChoice RandomComplex RandomFunction RandomGraph RandomImage RandomInteger RandomPermutation RandomPrime RandomReal RandomSample RandomSeed RandomVariate RandomWalkProcess Range RangeFilter RangeSpecification RankedMax RankedMin Raster Raster3D Raster3DBox Raster3DBoxOptions RasterArray RasterBox RasterBoxOptions Rasterize RasterSize Rational RationalFunctions Rationalize Rationals Ratios Raw RawArray RawBoxes RawData RawMedium RayleighDistribution Re Read ReadList ReadProtected Real RealBlockDiagonalForm RealDigits RealExponent Reals Reap Record RecordLists RecordSeparators Rectangle RectangleBox RectangleBoxOptions RectangleChart RectangleChart3D RecurrenceFilter RecurrenceTable RecurringDigitsForm Red Reduce RefBox ReferenceLineStyle ReferenceMarkers ReferenceMarkerStyle Refine ReflectionMatrix ReflectionTransform Refresh RefreshRate RegionBinarize RegionFunction RegionPlot RegionPlot3D RegularExpression Regularization Reinstall Release ReleaseHold ReliabilityDistribution ReliefImage ReliefPlot Remove RemoveAlphaChannel RemoveAsynchronousTask Removed RemoveInputStreamMethod RemoveOutputStreamMethod RemoveProperty RemoveScheduledTask RenameDirectory RenameFile RenderAll RenderingOptions RenewalProcess RenkoChart Repeated RepeatedNull RepeatedString Replace ReplaceAll ReplaceHeldPart ReplaceImageValue ReplaceList ReplacePart ReplacePixelValue ReplaceRepeated Resampling Rescale RescalingTransform ResetDirectory ResetMenusPacket ResetScheduledTask Residue Resolve Rest Resultant ResumePacket Return ReturnExpressionPacket ReturnInputFormPacket ReturnPacket ReturnTextPacket Reverse ReverseBiorthogonalSplineWavelet ReverseElement ReverseEquilibrium ReverseGraph ReverseUpEquilibrium RevolutionAxis RevolutionPlot3D RGBColor RiccatiSolve RiceDistribution RidgeFilter RiemannR RiemannSiegelTheta RiemannSiegelZ Riffle Right RightArrow RightArrowBar RightArrowLeftArrow RightCosetRepresentative RightDownTeeVector RightDownVector RightDownVectorBar RightTee RightTeeArrow RightTeeVector RightTriangle RightTriangleBar RightTriangleEqual RightUpDownVector RightUpTeeVector RightUpVector RightUpVectorBar RightVector RightVectorBar RiskAchievementImportance RiskReductionImportance RogersTanimotoDissimilarity Root RootApproximant RootIntervals RootLocusPlot RootMeanSquare RootOfUnityQ RootReduce Roots RootSum Rotate RotateLabel RotateLeft RotateRight RotationAction RotationBox RotationBoxOptions RotationMatrix RotationTransform Round RoundImplies RoundingRadius Row RowAlignments RowBackgrounds RowBox RowHeights RowLines RowMinHeight RowReduce RowsEqual RowSpacings RSolve RudvalisGroupRu Rule RuleCondition RuleDelayed RuleForm RulerUnits Run RunScheduledTask RunThrough RuntimeAttributes RuntimeOptions RussellRaoDissimilarity SameQ SameTest SampleDepth SampledSoundFunction SampledSoundList SampleRate SamplingPeriod SARIMAProcess SARMAProcess SatisfiabilityCount SatisfiabilityInstances 
SatisfiableQ Saturday Save Saveable SaveAutoDelete SaveDefinitions SawtoothWave Scale Scaled ScaleDivisions ScaledMousePosition ScaleOrigin ScalePadding ScaleRanges ScaleRangeStyle ScalingFunctions ScalingMatrix ScalingTransform Scan ScheduledTaskActiveQ ScheduledTaskData ScheduledTaskObject ScheduledTasks SchurDecomposition ScientificForm ScreenRectangle ScreenStyleEnvironment ScriptBaselineShifts ScriptLevel ScriptMinSize ScriptRules ScriptSizeMultipliers Scrollbars ScrollingOptions ScrollPosition Sec Sech SechDistribution SectionGrouping SectorChart SectorChart3D SectorOrigin SectorSpacing SeedRandom Select Selectable SelectComponents SelectedCells SelectedNotebook Selection SelectionAnimate SelectionCell SelectionCellCreateCell SelectionCellDefaultStyle SelectionCellParentStyle SelectionCreateCell SelectionDebuggerTag SelectionDuplicateCell SelectionEvaluate SelectionEvaluateCreateCell SelectionMove SelectionPlaceholder SelectionSetStyle SelectWithContents SelfLoops SelfLoopStyle SemialgebraicComponentInstances SendMail Sequence SequenceAlignment SequenceForm SequenceHold SequenceLimit Series SeriesCoefficient SeriesData SessionTime Set SetAccuracy SetAlphaChannel SetAttributes Setbacks SetBoxFormNamesPacket SetDelayed SetDirectory SetEnvironment SetEvaluationNotebook SetFileDate SetFileLoadingContext SetNotebookStatusLine SetOptions SetOptionsPacket SetPrecision SetProperty SetSelectedNotebook SetSharedFunction SetSharedVariable SetSpeechParametersPacket SetStreamPosition SetSystemOptions Setter SetterBar SetterBox SetterBoxOptions Setting SetValue Shading Shallow ShannonWavelet ShapiroWilkTest Share Sharpen ShearingMatrix ShearingTransform ShenCastanMatrix Short ShortDownArrow Shortest ShortestMatch ShortestPathFunction ShortLeftArrow ShortRightArrow ShortUpArrow Show ShowAutoStyles ShowCellBracket ShowCellLabel ShowCellTags ShowClosedCellArea ShowContents ShowControls ShowCursorTracker ShowGroupOpenCloseIcon ShowGroupOpener ShowInvisibleCharacters ShowPageBreaks ShowPredictiveInterface ShowSelection ShowShortBoxForm ShowSpecialCharacters ShowStringCharacters ShowSyntaxStyles ShrinkingDelay ShrinkWrapBoundingBox SiegelTheta SiegelTukeyTest Sign Signature SignedRankTest SignificanceLevel SignPadding SignTest SimilarityRules SimpleGraph SimpleGraphQ Simplify Sin Sinc SinghMaddalaDistribution SingleEvaluation SingleLetterItalics SingleLetterStyle SingularValueDecomposition SingularValueList SingularValuePlot SingularValues Sinh SinhIntegral SinIntegral SixJSymbol Skeleton SkeletonTransform SkellamDistribution Skewness SkewNormalDistribution Skip SliceDistribution Slider Slider2D Slider2DBox Slider2DBoxOptions SliderBox SliderBoxOptions SlideView Slot SlotSequence Small SmallCircle Smaller SmithDelayCompensator SmithWatermanSimilarity SmoothDensityHistogram SmoothHistogram SmoothHistogram3D SmoothKernelDistribution SocialMediaData Socket SokalSneathDissimilarity Solve SolveAlways SolveDelayed Sort SortBy Sound SoundAndGraphics SoundNote SoundVolume Sow Space SpaceForm Spacer Spacings Span SpanAdjustments SpanCharacterRounding SpanFromAbove SpanFromBoth SpanFromLeft SpanLineThickness SpanMaxSize SpanMinSize SpanningCharacters SpanSymmetric SparseArray SpatialGraphDistribution Speak SpeakTextPacket SpearmanRankTest SpearmanRho Spectrogram SpectrogramArray Specularity SpellingCorrection SpellingDictionaries SpellingDictionariesPath SpellingOptions SpellingSuggestionsPacket Sphere SphereBox SphericalBesselJ SphericalBesselY SphericalHankelH1 SphericalHankelH2 SphericalHarmonicY 
SphericalPlot3D SphericalRegion SpheroidalEigenvalue SpheroidalJoiningFactor SpheroidalPS SpheroidalPSPrime SpheroidalQS SpheroidalQSPrime SpheroidalRadialFactor SpheroidalS1 SpheroidalS1Prime SpheroidalS2 SpheroidalS2Prime Splice SplicedDistribution SplineClosed SplineDegree SplineKnots SplineWeights Split SplitBy SpokenString Sqrt SqrtBox SqrtBoxOptions Square SquaredEuclideanDistance SquareFreeQ SquareIntersection SquaresR SquareSubset SquareSubsetEqual SquareSuperset SquareSupersetEqual SquareUnion SquareWave StabilityMargins StabilityMarginsStyle StableDistribution Stack StackBegin StackComplete StackInhibit StandardDeviation StandardDeviationFilter StandardForm Standardize StandbyDistribution Star StarGraph StartAsynchronousTask StartingStepSize StartOfLine StartOfString StartScheduledTask StartupSound StateDimensions StateFeedbackGains StateOutputEstimator StateResponse StateSpaceModel StateSpaceRealization StateSpaceTransform StationaryDistribution StationaryWaveletPacketTransform StationaryWaveletTransform StatusArea StatusCentrality StepMonitor StieltjesGamma StirlingS1 StirlingS2 StopAsynchronousTask StopScheduledTask StrataVariables StratonovichProcess StreamColorFunction StreamColorFunctionScaling StreamDensityPlot StreamPlot StreamPoints StreamPosition Streams StreamScale StreamStyle String StringBreak StringByteCount StringCases StringCount StringDrop StringExpression StringForm StringFormat StringFreeQ StringInsert StringJoin StringLength StringMatchQ StringPosition StringQ StringReplace StringReplaceList StringReplacePart StringReverse StringRotateLeft StringRotateRight StringSkeleton StringSplit StringTake StringToStream StringTrim StripBoxes StripOnInput StripWrapperBoxes StrokeForm StructuralImportance StructuredArray StructuredSelection StruveH StruveL Stub StudentTDistribution Style StyleBox StyleBoxAutoDelete StyleBoxOptions StyleData StyleDefinitions StyleForm StyleKeyMapping StyleMenuListing StyleNameDialogSettings StyleNames StylePrint StyleSheetPath Subfactorial Subgraph SubMinus SubPlus SubresultantPolynomialRemainders SubresultantPolynomials Subresultants Subscript SubscriptBox SubscriptBoxOptions Subscripted Subset SubsetEqual Subsets SubStar Subsuperscript SubsuperscriptBox SubsuperscriptBoxOptions Subtract SubtractFrom SubValues Succeeds SucceedsEqual SucceedsSlantEqual SucceedsTilde SuchThat Sum SumConvergence Sunday SuperDagger SuperMinus SuperPlus Superscript SuperscriptBox SuperscriptBoxOptions Superset SupersetEqual SuperStar Surd SurdForm SurfaceColor SurfaceGraphics SurvivalDistribution SurvivalFunction SurvivalModel SurvivalModelFit SuspendPacket SuzukiDistribution SuzukiGroupSuz SwatchLegend Switch Symbol SymbolName SymletWavelet Symmetric SymmetricGroup SymmetricMatrixQ SymmetricPolynomial SymmetricReduction Symmetrize SymmetrizedArray SymmetrizedArrayRules SymmetrizedDependentComponents SymmetrizedIndependentComponents SymmetrizedReplacePart SynchronousInitialization SynchronousUpdating Syntax SyntaxForm SyntaxInformation SyntaxLength SyntaxPacket SyntaxQ SystemDialogInput SystemException SystemHelpPath SystemInformation SystemInformationData SystemOpen SystemOptions SystemsModelDelay SystemsModelDelayApproximate SystemsModelDelete SystemsModelDimensions SystemsModelExtract SystemsModelFeedbackConnect SystemsModelLabels SystemsModelOrder SystemsModelParallelConnect SystemsModelSeriesConnect SystemsModelStateFeedbackConnect SystemStub Tab TabFilling Table TableAlignments TableDepth TableDirections TableForm TableHeadings TableSpacing TableView 
TableViewBox TabSpacings TabView TabViewBox TabViewBoxOptions TagBox TagBoxNote TagBoxOptions TaggingRules TagSet TagSetDelayed TagStyle TagUnset Take TakeWhile Tally Tan Tanh TargetFunctions TargetUnits TautologyQ TelegraphProcess TemplateBox TemplateBoxOptions TemplateSlotSequence TemporalData Temporary TemporaryVariable TensorContract TensorDimensions TensorExpand TensorProduct TensorQ TensorRank TensorReduce TensorSymmetry TensorTranspose TensorWedge Tetrahedron TetrahedronBox TetrahedronBoxOptions TeXForm TeXSave Text Text3DBox Text3DBoxOptions TextAlignment TextBand TextBoundingBox TextBox TextCell TextClipboardType TextData TextForm TextJustification TextLine TextPacket TextParagraph TextRecognize TextRendering TextStyle Texture TextureCoordinateFunction TextureCoordinateScaling Therefore ThermometerGauge Thick Thickness Thin Thinning ThisLink ThompsonGroupTh Thread ThreeJSymbol Threshold Through Throw Thumbnail Thursday Ticks TicksStyle Tilde TildeEqual TildeFullEqual TildeTilde TimeConstrained TimeConstraint Times TimesBy TimeSeriesForecast TimeSeriesInvertibility TimeUsed TimeValue TimeZone Timing Tiny TitleGrouping TitsGroupT ToBoxes ToCharacterCode ToColor ToContinuousTimeModel ToDate ToDiscreteTimeModel ToeplitzMatrix ToExpression ToFileName Together Toggle ToggleFalse Toggler TogglerBar TogglerBox TogglerBoxOptions ToHeldExpression ToInvertibleTimeSeries TokenWords Tolerance ToLowerCase ToNumberField TooBig Tooltip TooltipBox TooltipBoxOptions TooltipDelay TooltipStyle Top TopHatTransform TopologicalSort ToRadicals ToRules ToString Total TotalHeight TotalVariationFilter TotalWidth TouchscreenAutoZoom TouchscreenControlPlacement ToUpperCase Tr Trace TraceAbove TraceAction TraceBackward TraceDepth TraceDialog TraceForward TraceInternal TraceLevel TraceOff TraceOn TraceOriginal TracePrint TraceScan TrackedSymbols TradingChart TraditionalForm TraditionalFunctionNotation TraditionalNotation TraditionalOrder TransferFunctionCancel TransferFunctionExpand TransferFunctionFactor TransferFunctionModel TransferFunctionPoles TransferFunctionTransform TransferFunctionZeros TransformationFunction TransformationFunctions TransformationMatrix TransformedDistribution TransformedField Translate TranslationTransform TransparentColor Transpose TreeForm TreeGraph TreeGraphQ TreePlot TrendStyle TriangleWave TriangularDistribution Trig TrigExpand TrigFactor TrigFactorList Trigger TrigReduce TrigToExp TrimmedMean True TrueQ TruncatedDistribution TsallisQExponentialDistribution TsallisQGaussianDistribution TTest Tube TubeBezierCurveBox TubeBezierCurveBoxOptions TubeBox TubeBSplineCurveBox TubeBSplineCurveBoxOptions Tuesday TukeyLambdaDistribution TukeyWindow Tuples TuranGraph TuringMachine Transparent UnateQ Uncompress Undefined UnderBar Underflow Underlined Underoverscript UnderoverscriptBox UnderoverscriptBoxOptions Underscript UnderscriptBox UnderscriptBoxOptions UndirectedEdge UndirectedGraph UndirectedGraphQ UndocumentedTestFEParserPacket UndocumentedTestGetSelectionPacket Unequal Unevaluated UniformDistribution UniformGraphDistribution UniformSumDistribution Uninstall Union UnionPlus Unique UnitBox UnitConvert UnitDimensions Unitize UnitRootTest UnitSimplify UnitStep UnitTriangle UnitVector Unprotect UnsameQ UnsavedVariables Unset UnsetShared UntrackedVariables Up UpArrow UpArrowBar UpArrowDownArrow Update UpdateDynamicObjects UpdateDynamicObjectsSynchronous UpdateInterval UpDownArrow UpEquilibrium UpperCaseQ UpperLeftArrow UpperRightArrow UpperTriangularize Upsample UpSet UpSetDelayed UpTee 
UpTeeArrow UpValues URL URLFetch URLFetchAsynchronous URLSave URLSaveAsynchronous UseGraphicsRange Using UsingFrontEnd V2Get ValidationLength Value ValueBox ValueBoxOptions ValueForm ValueQ ValuesData Variables Variance VarianceEquivalenceTest VarianceEstimatorFunction VarianceGammaDistribution VarianceTest VectorAngle VectorColorFunction VectorColorFunctionScaling VectorDensityPlot VectorGlyphData VectorPlot VectorPlot3D VectorPoints VectorQ Vectors VectorScale VectorStyle Vee Verbatim Verbose VerboseConvertToPostScriptPacket VerifyConvergence VerifySolutions VerifyTestAssumptions Version VersionNumber VertexAdd VertexCapacity VertexColors VertexComponent VertexConnectivity VertexCoordinateRules VertexCoordinates VertexCorrelationSimilarity VertexCosineSimilarity VertexCount VertexCoverQ VertexDataCoordinates VertexDegree VertexDelete VertexDiceSimilarity VertexEccentricity VertexInComponent VertexInDegree VertexIndex VertexJaccardSimilarity VertexLabeling VertexLabels VertexLabelStyle VertexList VertexNormals VertexOutComponent VertexOutDegree VertexQ VertexRenderingFunction VertexReplace VertexShape VertexShapeFunction VertexSize VertexStyle VertexTextureCoordinates VertexWeight Vertical VerticalBar VerticalForm VerticalGauge VerticalSeparator VerticalSlider VerticalTilde ViewAngle ViewCenter ViewMatrix ViewPoint ViewPointSelectorSettings ViewPort ViewRange ViewVector ViewVertical VirtualGroupData Visible VisibleCell VoigtDistribution VonMisesDistribution WaitAll WaitAsynchronousTask WaitNext WaitUntil WakebyDistribution WalleniusHypergeometricDistribution WaringYuleDistribution WatershedComponents WatsonUSquareTest WattsStrogatzGraphDistribution WaveletBestBasis WaveletFilterCoefficients WaveletImagePlot WaveletListPlot WaveletMapIndexed WaveletMatrixPlot WaveletPhi WaveletPsi WaveletScale WaveletScalogram WaveletThreshold WeaklyConnectedComponents WeaklyConnectedGraphQ WeakStationarity WeatherData WeberE Wedge Wednesday WeibullDistribution WeierstrassHalfPeriods WeierstrassInvariants WeierstrassP WeierstrassPPrime WeierstrassSigma WeierstrassZeta WeightedAdjacencyGraph WeightedAdjacencyMatrix WeightedData WeightedGraphQ Weights WelchWindow WheelGraph WhenEvent Which While White Whitespace WhitespaceCharacter WhittakerM WhittakerW WienerFilter WienerProcess WignerD WignerSemicircleDistribution WilksW WilksWTest WindowClickSelect WindowElements WindowFloating WindowFrame WindowFrameElements WindowMargins WindowMovable WindowOpacity WindowSelected WindowSize WindowStatusArea WindowTitle WindowToolbars WindowWidth With WolframAlpha WolframAlphaDate WolframAlphaQuantity WolframAlphaResult Word WordBoundary WordCharacter WordData WordSearch WordSeparators WorkingPrecision Write WriteString Wronskian XMLElement XMLObject Xnor Xor Yellow YuleDissimilarity ZernikeR ZeroSymmetric ZeroTest ZeroWidthTimes Zeta ZetaZero ZipfDistribution ZTest ZTransform $Aborted $ActivationGroupID $ActivationKey $ActivationUserRegistered $AddOnsDirectory $AssertFunction $Assumptions $AsynchronousTask $BaseDirectory $BatchInput $BatchOutput $BoxForms $ByteOrdering $Canceled $CharacterEncoding $CharacterEncodings $CommandLine $CompilationTarget $ConditionHold $ConfiguredKernels $Context $ContextPath $ControlActiveSetting $CreationDate $CurrentLink $DateStringFormat $DefaultFont $DefaultFrontEnd $DefaultImagingDevice $DefaultPath $Display $DisplayFunction $DistributedContexts $DynamicEvaluation $Echo $Epilog $ExportFormats $Failed $FinancialDataSource $FormatType $FrontEnd $FrontEndSession $GeoLocation $HistoryLength 
$HomeDirectory $HTTPCookies $IgnoreEOF $ImagingDevices $ImportFormats $InitialDirectory $Input $InputFileName $InputStreamMethods $Inspector $InstallationDate $InstallationDirectory $InterfaceEnvironment $IterationLimit $KernelCount $KernelID $Language $LaunchDirectory $LibraryPath $LicenseExpirationDate $LicenseID $LicenseProcesses $LicenseServer $LicenseSubprocesses $LicenseType $Line $Linked $LinkSupported $LoadedFiles $MachineAddresses $MachineDomain $MachineDomains $MachineEpsilon $MachineID $MachineName $MachinePrecision $MachineType $MaxExtraPrecision $MaxLicenseProcesses $MaxLicenseSubprocesses $MaxMachineNumber $MaxNumber $MaxPiecewiseCases $MaxPrecision $MaxRootDegree $MessageGroups $MessageList $MessagePrePrint $Messages $MinMachineNumber $MinNumber $MinorReleaseNumber $MinPrecision $ModuleNumber $NetworkLicense $NewMessage $NewSymbol $Notebooks $NumberMarks $Off $OperatingSystem $Output $OutputForms $OutputSizeLimit $OutputStreamMethods $Packages $ParentLink $ParentProcessID $PasswordFile $PatchLevelID $Path $PathnameSeparator $PerformanceGoal $PipeSupported $Post $Pre $PreferencesDirectory $PrePrint $PreRead $PrintForms $PrintLiteral $ProcessID $ProcessorCount $ProcessorType $ProductInformation $ProgramName $RandomState $RecursionLimit $ReleaseNumber $RootDirectory $ScheduledTask $ScriptCommandLine $SessionID $SetParentLink $SharedFunctions $SharedVariables $SoundDisplay $SoundDisplayFunction $SuppressInputFormHeads $SynchronousEvaluation $SyntaxHandler $System $SystemCharacterEncoding $SystemID $SystemWordLength $TemporaryDirectory $TemporaryPrefix $TextStyle $TimedOut $TimeUnit $TimeZone $TopDirectory $TraceOff $TraceOn $TracePattern $TracePostAction $TracePreAction $Urgent $UserAddOnsDirectory $UserBaseDirectory $UserDocumentsDirectory $UserName $Version $VersionNumber",
-c:[{cN:"comment",b:/\(\*/,e:/\*\)/},e.ASM,e.QSM,e.CNM,{b:/\{/,e:/\}/,i:/:/}]}});hljs.registerLanguage("cmake",function(e){return{aliases:["cmake.in"],cI:!0,k:{keyword:"add_custom_command add_custom_target add_definitions add_dependencies add_executable add_library add_subdirectory add_test aux_source_directory break build_command cmake_minimum_required cmake_policy configure_file create_test_sourcelist define_property else elseif enable_language enable_testing endforeach endfunction endif endmacro endwhile execute_process export find_file find_library find_package find_path find_program fltk_wrap_ui foreach function get_cmake_property get_directory_property get_filename_component get_property get_source_file_property get_target_property get_test_property if include include_directories include_external_msproject include_regular_expression install link_directories load_cache load_command macro mark_as_advanced message option output_required_files project qt_wrap_cpp qt_wrap_ui remove_definitions return separate_arguments set set_directory_properties set_property set_source_files_properties set_target_properties set_tests_properties site_name source_group string target_link_libraries try_compile try_run unset variable_watch while build_name exec_program export_library_dependencies install_files install_programs install_targets link_libraries make_directory remove subdir_depends subdirs use_mangled_mesa utility_source variable_requires write_file qt5_use_modules qt5_use_package qt5_wrap_cpp on off true false and or equal less greater strless strgreater strequal matches"},c:[{cN:"variable",b:"\\${",e:"}"},e.HCM,e.QSM,e.NM]}});hljs.registerLanguage("less",function(e){var r="[\\w-]+",t="("+r+"|@{"+r+"})",a=[],c=[],s=function(e){return{cN:"string",b:"~?"+e+".*?"+e}},b=function(e,r,t){return{cN:e,b:r,r:t}},i={b:"\\(",e:"\\)",c:c,r:0};c.push(e.CLCM,e.CBCM,s("'"),s('"'),e.CSSNM,{b:"(url|data-uri)\\(",starts:{cN:"string",e:"[\\)\\n]",eE:!0}},b("number","#[0-9A-Fa-f]+\\b"),i,b("variable","@@?"+r,10),b("variable","@{"+r+"}"),b("built_in","~?`[^`]*?`"),{cN:"attribute",b:r+"\\s*:",e:":",rB:!0,eE:!0},{cN:"meta",b:"!important"});var n=c.concat({b:"{",e:"}",c:a}),o={bK:"when",eW:!0,c:[{bK:"and not"}].concat(c)},u={cN:"attribute",b:t,e:":",eE:!0,c:[e.CLCM,e.CBCM],i:/\S/,starts:{e:"[;}]",rE:!0,c:c,i:"[<=$]"}},C={cN:"keyword",b:"@(import|media|charset|font-face|(-[a-z]+-)?keyframes|supports|document|namespace|page|viewport|host)\\b",starts:{e:"[;{}]",rE:!0,c:c,r:0}},l={cN:"variable",v:[{b:"@"+r+"\\s*:",r:15},{b:"@"+r}],starts:{e:"[;}]",rE:!0,c:n}},p={v:[{b:"[\\.#:&\\[]",e:"[;{}]"},{b:t+"[^;]*{",e:"{"}],rB:!0,rE:!0,i:"[<='$\"]",c:[e.CLCM,e.CBCM,o,b("keyword","all\\b"),b("variable","@{"+r+"}"),b("selector-tag",t+"%?",0),b("selector-id","#"+t),b("selector-class","\\."+t,0),b("selector-tag","&",0),{cN:"selector-attr",b:"\\[",e:"\\]"},{b:"\\(",e:"\\)",c:n},{b:"!important"}]};return a.push(e.CLCM,e.CBCM,C,l,p,u),{cI:!0,i:"[=>'/<($\"]",c:a}});hljs.registerLanguage("dockerfile",function(e){return{aliases:["docker"],cI:!0,k:"from maintainer cmd expose add copy entrypoint volume user workdir onbuild run env label",c:[e.HCM,{k:"run cmd entrypoint volume add copy workdir onbuild label",b:/^ *(onbuild +)?(run|cmd|entrypoint|volume|add|copy|workdir|label) +/,starts:{e:/[^\\]\n/,sL:"bash"}},{k:"from maintainer expose env user onbuild",b:/^ *(onbuild +)?(from|maintainer|expose|env|user|onbuild) +/,e:/[^\\]\n/,c:[e.ASM,e.QSM,e.NM,e.HCM]}]}});hljs.registerLanguage("dts",function(e){var 
a={cN:"string",v:[e.inherit(e.QSM,{b:'((u8?|U)|L)?"'}),{b:'(u8?|U)?R"',e:'"',c:[e.BE]},{b:"'\\\\?.",e:"'",i:"."}]},c={cN:"number",v:[{b:"\\b(\\d+(\\.\\d*)?|\\.\\d+)(u|U|l|L|ul|UL|f|F)"},{b:e.CNR}],r:0},b={cN:"meta",b:"#",e:"$",k:{"meta-keyword":"if else elif endif define undef ifdef ifndef"},c:[{b:/\\\n/,r:0},{bK:"include",e:"$",k:{"meta-keyword":"include"},c:[e.inherit(a,{cN:"meta-string"}),{cN:"meta-string",b:"<",e:">",i:"\\n"}]},a,e.CLCM,e.CBCM]},i={cN:"variable",b:"\\&[a-z\\d_]*\\b"},r={cN:"meta-keyword",b:"/[a-z][a-z\\d-]*/"},d={cN:"symbol",b:"^\\s*[a-zA-Z_][a-zA-Z\\d_]*:"},n={cN:"params",b:"<",e:">",c:[c,i]},s={cN:"class",b:/[a-zA-Z_][a-zA-Z\d_@]*\s{/,e:/[{;=]/,rB:!0,eE:!0},t={cN:"class",b:"/\\s*{",e:"};",r:10,c:[i,r,d,s,n,e.CLCM,e.CBCM,c,a]};return{k:"",c:[t,i,r,d,s,n,e.CLCM,e.CBCM,c,a,b,{b:e.IR+"::",k:""}]}});hljs.registerLanguage("xquery",function(e){var t="for let if while then else return where group by xquery encoding versionmodule namespace boundary-space preserve strip default collation base-uri orderingcopy-namespaces order declare import schema namespace function option in allowing emptyat tumbling window sliding window start when only end when previous next stable ascendingdescending empty greatest least some every satisfies switch case typeswitch try catch andor to union intersect instance of treat as castable cast map array delete insert intoreplace value rename copy modify update",a="false true xs:string xs:integer element item xs:date xs:datetime xs:float xs:double xs:decimal QName xs:anyURI xs:long xs:int xs:short xs:byte attribute",s={b:/\$[a-zA-Z0-9\-]+/,r:5},n={cN:"number",b:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",r:0},r={cN:"string",v:[{b:/"/,e:/"/,c:[{b:/""/,r:0}]},{b:/'/,e:/'/,c:[{b:/''/,r:0}]}]},i={cN:"meta",b:"%\\w+"},c={cN:"comment",b:"\\(:",e:":\\)",r:10,c:[{cN:"doctag",b:"@\\w+"}]},o={b:"{",e:"}"},l=[s,r,n,c,i,o];return o.c=l,{aliases:["xpath","xq"],cI:!1,l:/[a-zA-Z\$][a-zA-Z0-9_:\-]*/,i:/(proc)|(abstract)|(extends)|(until)|(#)/,k:{keyword:t,literal:a},c:l}});hljs.registerLanguage("puppet",function(e){var s={keyword:"and case default else elsif false if in import enherits node or true undef unless main settings $string ",literal:"alias audit before loglevel noop require subscribe tag owner ensure group mode name|0 changes context force incl lens load_path onlyif provider returns root show_diff type_check en_address ip_address realname command environment hour monute month monthday special target weekday creates cwd ogoutput refresh refreshonly tries try_sleep umask backup checksum content ctime force ignore links mtime purge recurse recurselimit replace selinux_ignore_defaults selrange selrole seltype seluser source souirce_permissions sourceselect validate_cmd validate_replacement allowdupe attribute_membership auth_membership forcelocal gid ia_load_module members system host_aliases ip allowed_trunk_vlans description device_url duplex encapsulation etherchannel native_vlan speed principals allow_root auth_class auth_type authenticate_user k_of_n mechanisms rule session_owner shared options device fstype enable hasrestart directory present absent link atboot blockdevice device dump pass remounts poller_tag use message withpath adminfile allow_virtual allowcdrom category configfiles flavor install_options instance package_settings platform responsefile status uninstall_options vendor unless_system_user unless_uid binary control flags hasstatus manifest pattern restart running start stop allowdupe auths expiry gid groups 
home iterations key_membership keys managehome membership password password_max_age password_min_age profile_membership profiles project purge_ssh_keys role_membership roles salt shell uid baseurl cost descr enabled enablegroups exclude failovermethod gpgcheck gpgkey http_caching include includepkgs keepalive metadata_expire metalink mirrorlist priority protect proxy proxy_password proxy_username repo_gpgcheck s3_enabled skip_if_unavailable sslcacert sslclientcert sslclientkey sslverify mounted",built_in:"architecture augeasversion blockdevices boardmanufacturer boardproductname boardserialnumber cfkey dhcp_servers domain ec2_ ec2_userdata facterversion filesystems ldom fqdn gid hardwareisa hardwaremodel hostname id|0 interfaces ipaddress ipaddress_ ipaddress6 ipaddress6_ iphostnumber is_virtual kernel kernelmajversion kernelrelease kernelversion kernelrelease kernelversion lsbdistcodename lsbdistdescription lsbdistid lsbdistrelease lsbmajdistrelease lsbminordistrelease lsbrelease macaddress macaddress_ macosx_buildversion macosx_productname macosx_productversion macosx_productverson_major macosx_productversion_minor manufacturer memoryfree memorysize netmask metmask_ network_ operatingsystem operatingsystemmajrelease operatingsystemrelease osfamily partitions path physicalprocessorcount processor processorcount productname ps puppetversion rubysitedir rubyversion selinux selinux_config_mode selinux_config_policy selinux_current_mode selinux_current_mode selinux_enforced selinux_policyversion serialnumber sp_ sshdsakey sshecdsakey sshrsakey swapencrypted swapfree swapsize timezone type uniqueid uptime uptime_days uptime_hours uptime_seconds uuid virtual vlans xendomains zfs_version zonenae zones zpool_version"},r=e.C("#","$"),a="([A-Za-z_]|::)(\\w|::)*",i=e.inherit(e.TM,{b:a}),o={cN:"variable",b:"\\$"+a},t={cN:"string",c:[e.BE,o],v:[{b:/'/,e:/'/},{b:/"/,e:/"/}]};return{aliases:["pp"],c:[r,o,t,{bK:"class",e:"\\{|;",i:/=/,c:[i,r]},{bK:"define",e:/\{/,c:[{cN:"section",b:e.IR,endsParent:!0}]},{b:e.IR+"\\s+\\{",rB:!0,e:/\S/,c:[{cN:"keyword",b:e.IR},{b:/\{/,e:/\}/,k:s,r:0,c:[t,r,{b:"[a-zA-Z_]+\\s*=>",rB:!0,e:"=>",c:[{cN:"attr",b:e.IR}]},{cN:"number",b:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",r:0},o]}],r:0}]}});hljs.registerLanguage("nginx",function(e){var r={cN:"variable",v:[{b:/\$\d+/},{b:/\$\{/,e:/}/},{b:"[\\$\\@]"+e.UIR}]},b={eW:!0,l:"[a-z/_]+",k:{literal:"on off yes no true false none blocked debug info notice warn error crit select break last permanent redirect kqueue rtsig epoll poll /dev/poll"},r:0,i:"=>",c:[e.HCM,{cN:"string",c:[e.BE,r],v:[{b:/"/,e:/"/},{b:/'/,e:/'/}]},{b:"([a-z]+):/",e:"\\s",eW:!0,eE:!0,c:[r]},{cN:"regexp",c:[e.BE,r],v:[{b:"\\s\\^",e:"\\s|{|;",rE:!0},{b:"~\\*?\\s+",e:"\\s|{|;",rE:!0},{b:"\\*(\\.[a-z\\-]+)+"},{b:"([a-z\\-]+\\.)+\\*"}]},{cN:"number",b:"\\b\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}(:\\d{1,5})?\\b"},{cN:"number",b:"\\b\\d+[kKmMgGdshdwy]*\\b",r:0},r]};return{aliases:["nginxconf"],c:[e.HCM,{b:e.UIR+"\\s+{",rB:!0,e:"{",c:[{cN:"section",b:e.UIR}],r:0},{b:e.UIR+"\\s",e:";|{",rB:!0,c:[{cN:"attribute",b:e.UIR,starts:b}],r:0}],i:"[^\\s\\}]"}});hljs.registerLanguage("scheme",function(e){var t="[^\\(\\)\\[\\]\\{\\}\",'`;#|\\\\\\s]+",r="(\\-|\\+)?\\d+([./]\\d+)?",a=r+"[+\\-]"+r+"i",i={"builtin-name":"case-lambda call/cc class define-class exit-handler field import inherit init-field interface let*-values let-values let/ec mixin opt-lambda override protect provide public rename require require-for-syntax syntax syntax-case syntax-error 
unit/sig unless when with-syntax and begin call-with-current-continuation call-with-input-file call-with-output-file case cond define define-syntax delay do dynamic-wind else for-each if lambda let let* let-syntax letrec letrec-syntax map or syntax-rules ' * + , ,@ - ... / ; < <= = => > >= ` abs acos angle append apply asin assoc assq assv atan boolean? caar cadr call-with-input-file call-with-output-file call-with-values car cdddar cddddr cdr ceiling char->integer char-alphabetic? char-ci<=? char-ci char-ci=? char-ci>=? char-ci>? char-downcase char-lower-case? char-numeric? char-ready? char-upcase char-upper-case? char-whitespace? char<=? char char=? char>=? char>? char? close-input-port close-output-port complex? cons cos current-input-port current-output-port denominator display eof-object? eq? equal? eqv? eval even? exact->inexact exact? exp expt floor force gcd imag-part inexact->exact inexact? input-port? integer->char integer? interaction-environment lcm length list list->string list->vector list-ref list-tail list? load log magnitude make-polar make-rectangular make-string make-vector max member memq memv min modulo negative? newline not null-environment null? number->string number? numerator odd? open-input-file open-output-file output-port? pair? peek-char port? positive? procedure? quasiquote quote quotient rational? rationalize read read-char real-part real? remainder reverse round scheme-report-environment set! set-car! set-cdr! sin sqrt string string->list string->number string->symbol string-append string-ci<=? string-ci string-ci=? string-ci>=? string-ci>? string-copy string-fill! string-length string-ref string-set! string<=? string string=? string>=? string>? string? substring symbol->string symbol? tan transcript-off transcript-on truncate values vector vector->list vector-fill! vector-length vector-ref vector-set! 
with-input-from-file with-output-to-file write write-char zero?"},n={cN:"meta",b:"^#!",e:"$"},c={cN:"literal",b:"(#t|#f|#\\\\"+t+"|#\\\\.)"},l={cN:"number",v:[{b:r,r:0},{b:a,r:0},{b:"#b[0-1]+(/[0-1]+)?"},{b:"#o[0-7]+(/[0-7]+)?"},{b:"#x[0-9a-f]+(/[0-9a-f]+)?"}]},s=e.QSM,o=[e.C(";","$",{r:0}),e.C("#\\|","\\|#")],u={b:t,r:0},p={cN:"symbol",b:"'"+t},d={eW:!0,r:0},m={v:[{b:"\\(",e:"\\)"},{b:"\\[",e:"\\]"}],c:[{cN:"name",b:t,l:t,k:i},d]};return d.c=[c,l,s,u,p,m].concat(o),{i:/\S/,c:[n,l,s,p,m].concat(o)}});hljs.registerLanguage("tex",function(c){var e={cN:"tag",b:/\\/,r:0,c:[{cN:"name",v:[{b:/[a-zA-Zа-яА-я]+[*]?/},{b:/[^a-zA-Zа-яА-я0-9]/}],starts:{eW:!0,r:0,c:[{cN:"string",v:[{b:/\[/,e:/\]/},{b:/\{/,e:/\}/}]},{b:/\s*=\s*/,eW:!0,r:0,c:[{cN:"number",b:/-?\d*\.?\d+(pt|pc|mm|cm|in|dd|cc|ex|em)?/}]}]}}]};return{c:[e,{cN:"formula",c:[e],r:0,v:[{b:/\$\$/,e:/\$\$/},{b:/\$/,e:/\$/}]},c.C("%","$",{r:0})]}});hljs.registerLanguage("mizar",function(e){return{k:"environ vocabularies notations constructors definitions registrations theorems schemes requirements begin end definition registration cluster existence pred func defpred deffunc theorem proof let take assume then thus hence ex for st holds consider reconsider such that and in provided of as from be being by means equals implies iff redefine define now not or attr is mode suppose per cases set thesis contradiction scheme reserve struct correctness compatibility coherence symmetry assymetry reflexivity irreflexivity connectedness uniqueness commutativity idempotence involutiveness projectivity",c:[e.C("::","$")]}});hljs.registerLanguage("r",function(e){var r="([a-zA-Z]|\\.[a-zA-Z.])[a-zA-Z0-9._]*";return{c:[e.HCM,{b:r,l:r,k:{keyword:"function if in break next repeat else for return switch while try tryCatch stop warning require library attach detach source setMethod setGeneric setGroupGeneric setClass ...",literal:"NULL NA TRUE FALSE T F Inf NaN NA_integer_|10 NA_real_|10 NA_character_|10 NA_complex_|10"},r:0},{cN:"number",b:"0[xX][0-9a-fA-F]+[Li]?\\b",r:0},{cN:"number",b:"\\d+(?:[eE][+\\-]?\\d*)?L\\b",r:0},{cN:"number",b:"\\d+\\.(?!\\d)(?:i\\b)?",r:0},{cN:"number",b:"\\d+(?:\\.\\d*)?(?:[eE][+\\-]?\\d*)?i?\\b",r:0},{cN:"number",b:"\\.\\d+(?:[eE][+\\-]?\\d*)?i?\\b",r:0},{b:"`",e:"`",r:0},{cN:"string",c:[e.BE],v:[{b:'"',e:'"'},{b:"'",e:"'"}]}]}});hljs.registerLanguage("cs",function(e){var t="abstract as base bool break byte case catch char checked const continue decimal dynamic default delegate do double else enum event explicit extern false finally fixed float for foreach goto if implicit in int interface internal is lock long null when object operator out override params private protected public readonly ref sbyte sealed short sizeof stackalloc static string struct switch this true try typeof uint ulong unchecked unsafe ushort using virtual volatile void while async protected public private internal ascending descending from get group into join let orderby partial select set value var where yield",r=e.IR+"(<"+e.IR+">)?";return{aliases:["csharp"],k:t,i:/::/,c:[e.C("///","$",{rB:!0,c:[{cN:"doctag",v:[{b:"///",r:0},{b:""},{b:"?",e:">"}]}]}),e.CLCM,e.CBCM,{cN:"meta",b:"#",e:"$",k:{"meta-keyword":"if else elif endif define undef warning error line region endregion pragma checksum"}},{cN:"string",b:'@"',e:'"',c:[{b:'""'}]},e.ASM,e.QSM,e.CNM,{bK:"class interface",e:/[{;=]/,i:/[^\s:]/,c:[e.TM,e.CLCM,e.CBCM]},{bK:"namespace",e:/[{;=]/,i:/[^\s:]/,c:[e.inherit(e.TM,{b:"[a-zA-Z](\\.?\\w)*"}),e.CLCM,e.CBCM]},{bK:"new return throw 
await",r:0},{cN:"function",b:"("+r+"\\s+)+"+e.IR+"\\s*\\(",rB:!0,e:/[{;=]/,eE:!0,k:t,c:[{b:e.IR+"\\s*\\(",rB:!0,c:[e.TM],r:0},{cN:"params",b:/\(/,e:/\)/,eB:!0,eE:!0,k:t,r:0,c:[e.ASM,e.QSM,e.CNM,e.CBCM]},e.CLCM,e.CBCM]}]}});hljs.registerLanguage("vbscript",function(e){return{aliases:["vbs"],cI:!0,k:{keyword:"call class const dim do loop erase execute executeglobal exit for each next function if then else on error option explicit new private property let get public randomize redim rem select case set stop sub while wend with end to elseif is or xor and not class_initialize class_terminate default preserve in me byval byref step resume goto",built_in:"lcase month vartype instrrev ubound setlocale getobject rgb getref string weekdayname rnd dateadd monthname now day minute isarray cbool round formatcurrency conversions csng timevalue second year space abs clng timeserial fixs len asc isempty maths dateserial atn timer isobject filter weekday datevalue ccur isdate instr datediff formatdatetime replace isnull right sgn array snumeric log cdbl hex chr lbound msgbox ucase getlocale cos cdate cbyte rtrim join hour oct typename trim strcomp int createobject loadpicture tan formatnumber mid scriptenginebuildversion scriptengine split scriptengineminorversion cint sin datepart ltrim sqr scriptenginemajorversion time derived eval date formatpercent exp inputbox left ascw chrw regexp server response request cstr err",literal:"true false null nothing empty"},i:"//",c:[e.inherit(e.QSM,{c:[{b:'""'}]}),e.C(/'/,/$/,{r:0}),e.CNM]}});hljs.registerLanguage("vbscript-html",function(r){return{sL:"xml",c:[{b:"<%",e:"%>",sL:"vbscript"}]}});hljs.registerLanguage("stylus",function(e){var t={cN:"variable",b:"\\$"+e.IR},o={cN:"number",b:"#([a-fA-F0-9]{6}|[a-fA-F0-9]{3})"},i=["charset","css","debug","extend","font-face","for","import","include","media","mixin","page","warn","while"],r=["after","before","first-letter","first-line","active","first-child","focus","hover","lang","link","visited"],n=["a","abbr","address","article","aside","audio","b","blockquote","body","button","canvas","caption","cite","code","dd","del","details","dfn","div","dl","dt","em","fieldset","figcaption","figure","footer","form","h1","h2","h3","h4","h5","h6","header","hgroup","html","i","iframe","img","input","ins","kbd","label","legend","li","mark","menu","nav","object","ol","p","q","quote","samp","section","span","strong","summary","sup","table","tbody","td","textarea","tfoot","th","thead","time","tr","ul","var","video"],a="[\\.\\s\\n\\[\\:,]",l=["align-content","align-items","align-self","animation","animation-delay","animation-direction","animation-duration","animation-fill-mode","animation-iteration-count","animation-name","animation-play-state","animation-timing-function","auto","backface-visibility","background","background-attachment","background-clip","background-color","background-image","background-origin","background-position","background-repeat","background-size","border","border-bottom","border-bottom-color","border-bottom-left-radius","border-bottom-right-radius","border-bottom-style","border-bottom-width","border-collapse","border-color","border-image","border-image-outset","border-image-repeat","border-image-slice","border-image-source","border-image-width","border-left","border-left-color","border-left-style","border-left-width","border-radius","border-right","border-right-color","border-right-style","border-right-width","border-spacing","border-style","border-top","border-top-color","border-top-left-radius","border-top-right-radi
us","border-top-style","border-top-width","border-width","bottom","box-decoration-break","box-shadow","box-sizing","break-after","break-before","break-inside","caption-side","clear","clip","clip-path","color","column-count","column-fill","column-gap","column-rule","column-rule-color","column-rule-style","column-rule-width","column-span","column-width","columns","content","counter-increment","counter-reset","cursor","direction","display","empty-cells","filter","flex","flex-basis","flex-direction","flex-flow","flex-grow","flex-shrink","flex-wrap","float","font","font-family","font-feature-settings","font-kerning","font-language-override","font-size","font-size-adjust","font-stretch","font-style","font-variant","font-variant-ligatures","font-weight","height","hyphens","icon","image-orientation","image-rendering","image-resolution","ime-mode","inherit","initial","justify-content","left","letter-spacing","line-height","list-style","list-style-image","list-style-position","list-style-type","margin","margin-bottom","margin-left","margin-right","margin-top","marks","mask","max-height","max-width","min-height","min-width","nav-down","nav-index","nav-left","nav-right","nav-up","none","normal","object-fit","object-position","opacity","order","orphans","outline","outline-color","outline-offset","outline-style","outline-width","overflow","overflow-wrap","overflow-x","overflow-y","padding","padding-bottom","padding-left","padding-right","padding-top","page-break-after","page-break-before","page-break-inside","perspective","perspective-origin","pointer-events","position","quotes","resize","right","tab-size","table-layout","text-align","text-align-last","text-decoration","text-decoration-color","text-decoration-line","text-decoration-style","text-indent","text-overflow","text-rendering","text-shadow","text-transform","text-underline-position","top","transform","transform-origin","transform-style","transition","transition-delay","transition-duration","transition-property","transition-timing-function","unicode-bidi","vertical-align","visibility","white-space","widows","width","word-break","word-spacing","word-wrap","z-index"],d=["\\{","\\}","\\?","(\\bReturn\\b)","(\\bEnd\\b)","(\\bend\\b)",";","#\\s","\\*\\s","===\\s","\\|","%"];return{aliases:["styl"],cI:!1,i:"("+d.join("|")+")",k:"if else for in",c:[e.QSM,e.ASM,e.CLCM,e.CBCM,o,{b:"\\.[a-zA-Z][a-zA-Z0-9_-]*"+a,rB:!0,c:[{cN:"selector-class",b:"\\.[a-zA-Z][a-zA-Z0-9_-]*"}]},{b:"\\#[a-zA-Z][a-zA-Z0-9_-]*"+a,rB:!0,c:[{cN:"selector-id",b:"\\#[a-zA-Z][a-zA-Z0-9_-]*"}]},{b:"\\b("+n.join("|")+")"+a,rB:!0,c:[{cN:"selector-tag",b:"\\b[a-zA-Z][a-zA-Z0-9_-]*"}]},{b:"&?:?:\\b("+r.join("|")+")"+a},{b:"@("+i.join("|")+")\\b"},t,e.CSSNM,e.NM,{cN:"function",b:"^[a-zA-Z][a-zA-Z0-9_-]*\\(.*\\)",i:"[\\n]",rB:!0,c:[{cN:"title",b:"\\b[a-zA-Z][a-zA-Z0-9_-]*"},{cN:"params",b:/\(/,e:/\)/,c:[o,t,e.ASM,e.CSSNM,e.NM,e.QSM]}]},{cN:"attribute",b:"\\b("+l.reverse().join("|")+")\\b"}]}});hljs.registerLanguage("avrasm",function(r){return{cI:!0,l:"\\.?"+r.IR,k:{keyword:"adc add adiw and andi asr bclr bld brbc brbs brcc brcs break breq brge brhc brhs brid brie brlo brlt brmi brne brpl brsh brtc brts brvc brvs bset bst call cbi cbr clc clh cli cln clr cls clt clv clz com cp cpc cpi cpse dec eicall eijmp elpm eor fmul fmuls fmulsu icall ijmp in inc jmp ld ldd ldi lds lpm lsl lsr mov movw mul muls mulsu neg nop or ori out pop push rcall ret reti rjmp rol ror sbc sbr sbrc sbrs sec seh sbi sbci sbic sbis sbiw sei sen ser ses set sev sez sleep spm st std sts sub subi swap tst wdr",built_in:"r0 
r1 r2 r3 r4 r5 r6 r7 r8 r9 r10 r11 r12 r13 r14 r15 r16 r17 r18 r19 r20 r21 r22 r23 r24 r25 r26 r27 r28 r29 r30 r31 x|0 xh xl y|0 yh yl z|0 zh zl ucsr1c udr1 ucsr1a ucsr1b ubrr1l ubrr1h ucsr0c ubrr0h tccr3c tccr3a tccr3b tcnt3h tcnt3l ocr3ah ocr3al ocr3bh ocr3bl ocr3ch ocr3cl icr3h icr3l etimsk etifr tccr1c ocr1ch ocr1cl twcr twdr twar twsr twbr osccal xmcra xmcrb eicra spmcsr spmcr portg ddrg ping portf ddrf sreg sph spl xdiv rampz eicrb eimsk gimsk gicr eifr gifr timsk tifr mcucr mcucsr tccr0 tcnt0 ocr0 assr tccr1a tccr1b tcnt1h tcnt1l ocr1ah ocr1al ocr1bh ocr1bl icr1h icr1l tccr2 tcnt2 ocr2 ocdr wdtcr sfior eearh eearl eedr eecr porta ddra pina portb ddrb pinb portc ddrc pinc portd ddrd pind spdr spsr spcr udr0 ucsr0a ucsr0b ubrr0l acsr admux adcsr adch adcl porte ddre pine pinf",meta:".byte .cseg .db .def .device .dseg .dw .endmacro .equ .eseg .exit .include .list .listmac .macro .nolist .org .set"},c:[r.CBCM,r.C(";","$",{r:0}),r.CNM,r.BNM,{cN:"number",b:"\\b(\\$[a-zA-Z0-9]+|0o[0-7]+)"},r.QSM,{cN:"string",b:"'",e:"[^\\\\]'",i:"[^\\\\][^']"},{cN:"symbol",b:"^[A-Za-z0-9_.$]+:"},{cN:"meta",b:"#",e:"$"},{cN:"subst",b:"@[0-9]+"}]}});hljs.registerLanguage("dns",function(d){return{aliases:["bind","zone"],k:{keyword:"IN A AAAA AFSDB APL CAA CDNSKEY CDS CERT CNAME DHCID DLV DNAME DNSKEY DS HIP IPSECKEY KEY KX LOC MX NAPTR NS NSEC NSEC3 NSEC3PARAM PTR RRSIG RP SIG SOA SRV SSHFP TA TKEY TLSA TSIG TXT"},c:[d.C(";","$"),{cN:"meta",b:/^\$(TTL|GENERATE|INCLUDE|ORIGIN)\b/},{cN:"number",b:"((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:)))\\b"},{cN:"number",b:"((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]).){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\\b"},d.inherit(d.NM,{b:/\b\d+[dhwm]?/})]}});hljs.registerLanguage("basic",function(E){return{cI:!0,i:"^.",l:"[a-zA-Z][a-zA-Z0-9_$%!#]*",k:{keyword:"ABS ASC AND ATN AUTO|0 BEEP BLOAD|10 BSAVE|10 CALL CALLS CDBL CHAIN CHDIR CHR$|10 CINT CIRCLE CLEAR CLOSE CLS COLOR COM COMMON CONT COS CSNG CSRLIN CVD CVI CVS DATA DATE$ DEFDBL DEFINT DEFSNG DEFSTR DEF|0 SEG USR DELETE DIM DRAW EDIT END ENVIRON ENVIRON$ EOF EQV ERASE ERDEV ERDEV$ ERL ERR ERROR EXP FIELD FILES FIX FOR|0 FRE GET GOSUB|10 GOTO HEX$ IF|0 THEN ELSE|0 INKEY$ INP INPUT INPUT# INPUT$ INSTR IMP INT IOCTL IOCTL$ KEY ON OFF LIST KILL LEFT$ LEN LET LINE LLIST LOAD LOC LOCATE LOF LOG LPRINT USING LSET MERGE MID$ MKDIR MKD$ MKI$ MKS$ MOD NAME NEW NEXT NOISE NOT OCT$ ON OR PEN PLAY STRIG OPEN OPTION BASE OUT PAINT PALETTE PCOPY PEEK PMAP POINT POKE POS 
PRINT PRINT] PSET PRESET PUT RANDOMIZE READ REM RENUM RESET|0 RESTORE RESUME RETURN|0 RIGHT$ RMDIR RND RSET RUN SAVE SCREEN SGN SHELL SIN SOUND SPACE$ SPC SQR STEP STICK STOP STR$ STRING$ SWAP SYSTEM TAB TAN TIME$ TIMER TROFF TRON TO USR VAL VARPTR VARPTR$ VIEW WAIT WHILE WEND WIDTH WINDOW WRITE XOR"},c:[E.QSM,E.C("REM","$",{r:10}),E.C("'","$",{r:0}),{cN:"symbol",b:"^[0-9]+ ",r:10},{cN:"number",b:"\\b([0-9]+[0-9edED.]*[#!]?)",r:0},{cN:"number",b:"(&[hH][0-9a-fA-F]{1,4})"},{cN:"number",b:"(&[oO][0-7]{1,6})"}]}});hljs.registerLanguage("bash",function(e){var t={cN:"variable",v:[{b:/\$[\w\d#@][\w\d_]*/},{b:/\$\{(.*?)}/}]},s={cN:"string",b:/"/,e:/"/,c:[e.BE,t,{cN:"variable",b:/\$\(/,e:/\)/,c:[e.BE]}]},a={cN:"string",b:/'/,e:/'/};return{aliases:["sh","zsh"],l:/-?[a-z\.]+/,k:{keyword:"if then else elif fi for while in do done case esac function",literal:"true false",built_in:"break cd continue eval exec exit export getopts hash pwd readonly return shift test times trap umask unset alias bind builtin caller command declare echo enable help let local logout mapfile printf read readarray source type typeset ulimit unalias set shopt autoload bg bindkey bye cap chdir clone comparguments compcall compctl compdescribe compfiles compgroups compquote comptags comptry compvalues dirs disable disown echotc echoti emulate fc fg float functions getcap getln history integer jobs kill limit log noglob popd print pushd pushln rehash sched setcap setopt stat suspend ttyctl unfunction unhash unlimit unsetopt vared wait whence where which zcompile zformat zftp zle zmodload zparseopts zprof zpty zregexparse zsocket zstyle ztcp",_:"-ne -eq -lt -gt -f -d -e -s -l -a"},c:[{cN:"meta",b:/^#![^\n]+sh\s*$/,r:10},{cN:"function",b:/\w[\w\d_]*\s*\(\s*\)\s*\{/,rB:!0,c:[e.inherit(e.TM,{b:/\w[\w\d_]*/})],r:0},e.HCM,s,a,t]}});hljs.registerLanguage("profile",function(e){return{c:[e.CNM,{b:"[a-zA-Z_][\\da-zA-Z_]+\\.[\\da-zA-Z_]{1,3}",e:":",eE:!0},{b:"(ncalls|tottime|cumtime)",e:"$",k:"ncalls tottime|10 cumtime|10 filename",r:10},{b:"function calls",e:"$",c:[e.CNM],r:10},e.ASM,e.QSM,{cN:"string",b:"\\(",e:"\\)$",eB:!0,eE:!0,r:0}]}});hljs.registerLanguage("julia",function(e){var r={keyword:"in abstract baremodule begin bitstype break catch ccall const continue do else elseif end export finally for function global if immutable import importall let local macro module quote return try type typealias using while",literal:"true false ARGS CPU_CORES C_NULL DL_LOAD_PATH DevNull ENDIAN_BOM ENV I|0 Inf Inf16 Inf32 InsertionSort JULIA_HOME LOAD_PATH MS_ASYNC MS_INVALIDATE MS_SYNC MergeSort NaN NaN16 NaN32 OS_NAME QuickSort RTLD_DEEPBIND RTLD_FIRST RTLD_GLOBAL RTLD_LAZY RTLD_LOCAL RTLD_NODELETE RTLD_NOLOAD RTLD_NOW RoundDown RoundFromZero RoundNearest RoundToZero RoundUp STDERR STDIN STDOUT VERSION WORD_SIZE catalan cglobal e|0 eu|0 eulergamma golden im nothing pi γ π φ Inf64 NaN64 RoundNearestTiesAway RoundNearestTiesUp ",built_in:"ANY ASCIIString AbstractArray AbstractRNG AbstractSparseArray Any ArgumentError Array Associative Base64Pipe Bidiagonal BigFloat BigInt BitArray BitMatrix BitVector Bool BoundsError Box CFILE Cchar Cdouble Cfloat Char CharString Cint Clong Clonglong ClusterManager Cmd Coff_t Colon Complex Complex128 Complex32 Complex64 Condition Cptrdiff_t Cshort Csize_t Cssize_t Cuchar Cuint Culong Culonglong Cushort Cwchar_t DArray DataType DenseArray Diagonal Dict DimensionMismatch DirectIndexString Display DivideError DomainError EOFError EachLine Enumerate ErrorException Exception Expr Factorization FileMonitor FileOffset 
Filter Float16 Float32 Float64 FloatRange FloatingPoint Function GetfieldNode GotoNode Hermitian IO IOBuffer IOStream IPv4 IPv6 InexactError Int Int128 Int16 Int32 Int64 Int8 IntSet Integer InterruptException IntrinsicFunction KeyError LabelNode LambdaStaticData LineNumberNode LoadError LocalProcess MIME MathConst MemoryError MersenneTwister Method MethodError MethodTable Module NTuple NewvarNode Nothing Number ObjectIdDict OrdinalRange OverflowError ParseError PollingFileWatcher ProcessExitedException ProcessGroup Ptr QuoteNode Range Range1 Ranges Rational RawFD Real Regex RegexMatch RemoteRef RepString RevString RopeString RoundingMode Set SharedArray Signed SparseMatrixCSC StackOverflowError Stat StatStruct StepRange String SubArray SubString SymTridiagonal Symbol SymbolNode Symmetric SystemError Task TextDisplay Timer TmStruct TopNode Triangular Tridiagonal Type TypeConstructor TypeError TypeName TypeVar UTF16String UTF32String UTF8String UdpSocket Uint Uint128 Uint16 Uint32 Uint64 Uint8 UndefRefError UndefVarError UniformScaling UnionType UnitRange Unsigned Vararg VersionNumber WString WeakKeyDict WeakRef Woodbury Zip AbstractChannel AbstractFloat AbstractString AssertionError Base64DecodePipe Base64EncodePipe BufferStream CapturedException CartesianIndex CartesianRange Channel Cintmax_t CompositeException Cstring Cuintmax_t Cwstring Date DateTime Dims Enum GenSym GlobalRef HTML InitError InvalidStateException Irrational LinSpace LowerTriangular NullException Nullable OutOfMemoryError Pair PartialQuickSort Pipe RandomDevice ReadOnlyMemoryError ReentrantLock Ref RemoteException SegmentationFault SerializationState SimpleVector TCPSocket Text Tuple UDPSocket UInt UInt128 UInt16 UInt32 UInt64 UInt8 UnicodeError Union UpperTriangular Val Void WorkerConfig AbstractMatrix AbstractSparseMatrix AbstractSparseVector AbstractVecOrMat AbstractVector DenseMatrix DenseVecOrMat DenseVector Matrix SharedMatrix SharedVector StridedArray StridedMatrix StridedVecOrMat StridedVector VecOrMat Vector "},t="[A-Za-z_\\u00A1-\\uFFFF][A-Za-z_0-9\\u00A1-\\uFFFF]*",a={l:t,k:r,i:/<\//},n={cN:"type",b:/::/},o={cN:"type",b:/<:/},i={cN:"number",b:/(\b0x[\d_]*(\.[\d_]*)?|0x\.\d[\d_]*)p[-+]?\d+|\b0[box][a-fA-F0-9][a-fA-F0-9_]*|(\b\d[\d_]*(\.[\d_]*)?|\.\d[\d_]*)([eEfF][-+]?\d+)?/,r:0},l={cN:"string",b:/'(.|\\[xXuU][a-zA-Z0-9]+)'/},c={cN:"subst",b:/\$\(/,e:/\)/,k:r},s={cN:"variable",b:"\\$"+t},d={cN:"string",c:[e.BE,c,s],v:[{b:/\w*"""/,e:/"""\w*/,r:10},{b:/\w*"/,e:/"\w*/}]},S={cN:"string",c:[e.BE,c,s],b:"`",e:"`"},u={cN:"meta",b:"@"+t},g={cN:"comment",v:[{b:"#=",e:"=#",r:10},{b:"#",e:"$"}]};return a.c=[i,l,n,o,d,S,u,g,e.HCM],c.c=a.c,a});hljs.registerLanguage("rib",function(e){return{k:"ArchiveRecord AreaLightSource Atmosphere Attribute AttributeBegin AttributeEnd Basis Begin Blobby Bound Clipping ClippingPlane Color ColorSamples ConcatTransform Cone CoordinateSystem CoordSysTransform CropWindow Curves Cylinder DepthOfField Detail DetailRange Disk Displacement Display End ErrorHandler Exposure Exterior Format FrameAspectRatio FrameBegin FrameEnd GeneralPolygon GeometricApproximation Geometry Hider Hyperboloid Identity Illuminate Imager Interior LightSource MakeCubeFaceEnvironment MakeLatLongEnvironment MakeShadow MakeTexture Matte MotionBegin MotionEnd NuPatch ObjectBegin ObjectEnd ObjectInstance Opacity Option Orientation Paraboloid Patch PatchMesh Perspective PixelFilter PixelSamples PixelVariance Points PointsGeneralPolygons PointsPolygons Polygon Procedural Projection Quantize ReadArchive RelativeDetail 
ReverseOrientation Rotate Scale ScreenWindow ShadingInterpolation ShadingRate Shutter Sides Skew SolidBegin SolidEnd Sphere SubdivisionMesh Surface TextureCoordinates Torus Transform TransformBegin TransformEnd TransformPoints Translate TrimCurve WorldBegin WorldEnd",i:"",c:[e.HCM,e.CNM,e.ASM,e.QSM]}});hljs.registerLanguage("gauss",function(e){var t={keyword:"and bool break|0 call callexe checkinterrupt clear clearg closeall cls comlog compile continue create debug declare delete disable dlibrary|10 dllcall do|0 dos ed edit else|0 elseif enable end endfor|10 endif|10 endp|10 endo|10 errorlog|10 errorlogat expr external fn for|0 format goto gosub|0 graph if|0 keyword let lib library line load loadarray loadexe loadf|10 loadk|10 loadm|10 loadp loads loadx local locate loopnextindex lprint lpwidth lshow matrix msym ndpclex new not open or output outwidth plot plotsym pop prcsn print printdos proc|10 push retp|10 return|0 rndcon rndmod rndmult rndseed run save saveall screen scroll setarray show sparse stop string struct system trace trap|10 threadfor|10 threadendfor|10 threadbegin|10 threadjoin|10 threadstat|10 threadend|10 until use while winprint",built_in:"abs acf aconcat aeye amax amean AmericanBinomCall AmericanBinomCall_Greeks AmericanBinomCall_ImpVol AmericanBinomPut AmericanBinomPut_Greeks AmericanBinomPut_ImpVol AmericanBSCall AmericanBSCall_Greeks AmericanBSCall_ImpVol AmericanBSPut AmericanBSPut_Greeks AmericanBSPut_ImpVol amin amult annotationGetDefaults annotationSetBkd annotationSetFont annotationSetLineColor annotationSetLineStyle annotationSetLineThickness annualTradingDays arccos arcsin areshape arrayalloc arrayindex arrayinit arraytomat asciiload asclabel astd astds asum atan atan2 atranspose axmargin balance band bandchol bandcholsol bandltsol bandrv bandsolpd bar base10 begwind besselj bessely beta box boxcox cdfBeta cdfBetaInv cdfBinomial cdfBinomialInv cdfBvn cdfBvn2 cdfBvn2e cdfCauchy cdfCauchyInv cdfChic cdfChii cdfChinc cdfChincInv cdfExp cdfExpInv cdfFc cdfFnc cdfFncInv cdfGam cdfGenPareto cdfHyperGeo cdfLaplace cdfLaplaceInv cdfLogistic cdfLogisticInv cdfmControlCreate cdfMvn cdfMvn2e cdfMvnce cdfMvne cdfMvt2e cdfMvtce cdfMvte cdfN cdfN2 cdfNc cdfNegBinomial cdfNegBinomialInv cdfNi cdfPoisson cdfPoissonInv cdfRayleigh cdfRayleighInv cdfTc cdfTci cdfTnc cdfTvn cdfWeibull cdfWeibullInv cdir ceil ChangeDir chdir chiBarSquare chol choldn cholsol cholup chrs close code cols colsf combinate combinated complex con cond conj cons ConScore contour conv convertsatostr convertstrtosa corrm corrms corrvc corrx corrxs cos cosh counts countwts crossprd crout croutp csrcol csrlin csvReadM csvReadSA cumprodc cumsumc curve cvtos datacreate datacreatecomplex datalist dataload dataloop dataopen datasave date datestr datestring datestrymd dayinyr dayofweek dbAddDatabase|10 dbClose|10 dbCommit|10 dbCreateQuery|10 dbExecQuery|10 dbGetConnectOptions|10 dbGetDatabaseName|10 dbGetDriverName|10 dbGetDrivers|10 dbGetHostName|10 dbGetLastErrorNum|10 dbGetLastErrorText|10 dbGetNumericalPrecPolicy|10 dbGetPassword|10 dbGetPort|10 dbGetTableHeaders|10 dbGetTables|10 dbGetUserName|10 dbHasFeature|10 dbIsDriverAvailable|10 dbIsOpen|10 dbIsOpenError|10 dbOpen|10 dbQueryBindValue|10 dbQueryClear|10 dbQueryCols|10 dbQueryExecPrepared|10 dbQueryFetchAllM|10 dbQueryFetchAllSA|10 dbQueryFetchOneM|10 dbQueryFetchOneSA|10 dbQueryFinish|10 dbQueryGetBoundValue|10 dbQueryGetBoundValues|10 dbQueryGetField|10 dbQueryGetLastErrorNum|10 dbQueryGetLastErrorText|10 dbQueryGetLastInsertID|10 dbQueryGetLastQuery|10 
dbQueryGetPosition|10 dbQueryIsActive|10 dbQueryIsForwardOnly|10 dbQueryIsNull|10 dbQueryIsSelect|10 dbQueryIsValid|10 dbQueryPrepare|10 dbQueryRows|10 dbQuerySeek|10 dbQuerySeekFirst|10 dbQuerySeekLast|10 dbQuerySeekNext|10 dbQuerySeekPrevious|10 dbQuerySetForwardOnly|10 dbRemoveDatabase|10 dbRollback|10 dbSetConnectOptions|10 dbSetDatabaseName|10 dbSetHostName|10 dbSetNumericalPrecPolicy|10 dbSetPort|10 dbSetUserName|10 dbTransaction|10 DeleteFile delif delrows denseToSp denseToSpRE denToZero design det detl dfft dffti diag diagrv digamma doswin DOSWinCloseall DOSWinOpen dotfeq dotfeqmt dotfge dotfgemt dotfgt dotfgtmt dotfle dotflemt dotflt dotfltmt dotfne dotfnemt draw drop dsCreate dstat dstatmt dstatmtControlCreate dtdate dtday dttime dttodtv dttostr dttoutc dtvnormal dtvtodt dtvtoutc dummy dummybr dummydn eig eigh eighv eigv elapsedTradingDays endwind envget eof eqSolve eqSolvemt eqSolvemtControlCreate eqSolvemtOutCreate eqSolveset erf erfc erfccplx erfcplx error etdays ethsec etstr EuropeanBinomCall EuropeanBinomCall_Greeks EuropeanBinomCall_ImpVol EuropeanBinomPut EuropeanBinomPut_Greeks EuropeanBinomPut_ImpVol EuropeanBSCall EuropeanBSCall_Greeks EuropeanBSCall_ImpVol EuropeanBSPut EuropeanBSPut_Greeks EuropeanBSPut_ImpVol exctsmpl exec execbg exp extern eye fcheckerr fclearerr feq feqmt fflush fft ffti fftm fftmi fftn fge fgemt fgets fgetsa fgetsat fgetst fgt fgtmt fileinfo filesa fle flemt floor flt fltmt fmod fne fnemt fonts fopen formatcv formatnv fputs fputst fseek fstrerror ftell ftocv ftos ftostrC gamma gammacplx gammaii gausset gdaAppend gdaCreate gdaDStat gdaDStatMat gdaGetIndex gdaGetName gdaGetNames gdaGetOrders gdaGetType gdaGetTypes gdaGetVarInfo gdaIsCplx gdaLoad gdaPack gdaRead gdaReadByIndex gdaReadSome gdaReadSparse gdaReadStruct gdaReportVarInfo gdaSave gdaUpdate gdaUpdateAndPack gdaVars gdaWrite gdaWrite32 gdaWriteSome getarray getdims getf getGAUSShome getmatrix getmatrix4D getname getnamef getNextTradingDay getNextWeekDay getnr getorders getpath getPreviousTradingDay getPreviousWeekDay getRow getscalar3D getscalar4D getTrRow getwind glm gradcplx gradMT gradMTm gradMTT gradMTTm gradp graphprt graphset hasimag header headermt hess hessMT hessMTg hessMTgw hessMTm hessMTmw hessMTT hessMTTg hessMTTgw hessMTTm hessMTw hessp hist histf histp hsec imag indcv indexcat indices indices2 indicesf indicesfn indnv indsav indx integrate1d integrateControlCreate intgrat2 intgrat3 inthp1 inthp2 inthp3 inthp4 inthpControlCreate intquad1 intquad2 intquad3 intrleav intrleavsa intrsect intsimp inv invpd invswp iscplx iscplxf isden isinfnanmiss ismiss key keyav keyw lag lag1 lagn lapEighb lapEighi lapEighvb lapEighvi lapgEig lapgEigh lapgEighv lapgEigv lapgSchur lapgSvdcst lapgSvds lapgSvdst lapSvdcusv lapSvds lapSvdusv ldlp ldlsol linSolve listwise ln lncdfbvn lncdfbvn2 lncdfmvn lncdfn lncdfn2 lncdfnc lnfact lngammacplx lnpdfmvn lnpdfmvt lnpdfn lnpdft loadd loadstruct loadwind loess loessmt loessmtControlCreate log loglog logx logy lower lowmat lowmat1 ltrisol lu lusol machEpsilon make makevars makewind margin matalloc matinit mattoarray maxbytes maxc maxindc maxv maxvec mbesselei mbesselei0 mbesselei1 mbesseli mbesseli0 mbesseli1 meanc median mergeby mergevar minc minindc minv miss missex missrv moment momentd movingave movingaveExpwgt movingaveWgt nextindex nextn nextnevn nextwind ntos null null1 numCombinations ols olsmt olsmtControlCreate olsqr olsqr2 olsqrmt ones optn optnevn orth outtyp pacf packedToSp packr parse pause pdfCauchy pdfChi pdfExp pdfGenPareto pdfHyperGeo 
pdfLaplace pdfLogistic pdfn pdfPoisson pdfRayleigh pdfWeibull pi pinv pinvmt plotAddArrow plotAddBar plotAddBox plotAddHist plotAddHistF plotAddHistP plotAddPolar plotAddScatter plotAddShape plotAddTextbox plotAddTS plotAddXY plotArea plotBar plotBox plotClearLayout plotContour plotCustomLayout plotGetDefaults plotHist plotHistF plotHistP plotLayout plotLogLog plotLogX plotLogY plotOpenWindow plotPolar plotSave plotScatter plotSetAxesPen plotSetBar plotSetBarFill plotSetBarStacked plotSetBkdColor plotSetFill plotSetGrid plotSetLegend plotSetLineColor plotSetLineStyle plotSetLineSymbol plotSetLineThickness plotSetNewWindow plotSetTitle plotSetWhichYAxis plotSetXAxisShow plotSetXLabel plotSetXRange plotSetXTicInterval plotSetXTicLabel plotSetYAxisShow plotSetYLabel plotSetYRange plotSetZAxisShow plotSetZLabel plotSurface plotTS plotXY polar polychar polyeval polygamma polyint polymake polymat polymroot polymult polyroot pqgwin previousindex princomp printfm printfmt prodc psi putarray putf putvals pvCreate pvGetIndex pvGetParNames pvGetParVector pvLength pvList pvPack pvPacki pvPackm pvPackmi pvPacks pvPacksi pvPacksm pvPacksmi pvPutParVector pvTest pvUnpack QNewton QNewtonmt QNewtonmtControlCreate QNewtonmtOutCreate QNewtonSet QProg QProgmt QProgmtInCreate qqr qqre qqrep qr qre qrep qrsol qrtsol qtyr qtyre qtyrep quantile quantiled qyr qyre qyrep qz rank rankindx readr real reclassify reclassifyCuts recode recserar recsercp recserrc rerun rescale reshape rets rev rfft rffti rfftip rfftn rfftnp rfftp rndBernoulli rndBeta rndBinomial rndCauchy rndChiSquare rndCon rndCreateState rndExp rndGamma rndGeo rndGumbel rndHyperGeo rndi rndKMbeta rndKMgam rndKMi rndKMn rndKMnb rndKMp rndKMu rndKMvm rndLaplace rndLCbeta rndLCgam rndLCi rndLCn rndLCnb rndLCp rndLCu rndLCvm rndLogNorm rndMTu rndMVn rndMVt rndn rndnb rndNegBinomial rndp rndPoisson rndRayleigh rndStateSkip rndu rndvm rndWeibull rndWishart rotater round rows rowsf rref sampleData satostrC saved saveStruct savewind scale scale3d scalerr scalinfnanmiss scalmiss schtoc schur searchsourcepath seekr select selif seqa seqm setdif setdifsa setvars setvwrmode setwind shell shiftr sin singleindex sinh sleep solpd sortc sortcc sortd sorthc sorthcc sortind sortindc sortmc sortr sortrc spBiconjGradSol spChol spConjGradSol spCreate spDenseSubmat spDiagRvMat spEigv spEye spLDL spline spLU spNumNZE spOnes spreadSheetReadM spreadSheetReadSA spreadSheetWrite spScale spSubmat spToDense spTrTDense spTScalar spZeros sqpSolve sqpSolveMT sqpSolveMTControlCreate sqpSolveMTlagrangeCreate sqpSolveMToutCreate sqpSolveSet sqrt statements stdc stdsc stocv stof strcombine strindx strlen strput strrindx strsect strsplit strsplitPad strtodt strtof strtofcplx strtriml strtrimr strtrunc strtruncl strtruncpad strtruncr submat subscat substute subvec sumc sumr surface svd svd1 svd2 svdcusv svds svdusv sysstate tab tan tanh tempname threadBegin threadEnd threadEndFor threadFor threadJoin threadStat time timedt timestr timeutc title tkf2eps tkf2ps tocart todaydt toeplitz token topolar trapchk trigamma trimr trunc type typecv typef union unionsa uniqindx uniqindxsa unique uniquesa upmat upmat1 upper utctodt utctodtv utrisol vals varCovMS varCovXS varget vargetl varmall varmares varput varputl vartypef vcm vcms vcx vcxs vec vech vecr vector vget view viewxyz vlist vnamecv volume vput vread vtypecv wait waitc walkindex where window writer xlabel xlsGetSheetCount xlsGetSheetSize xlsGetSheetTypes xlsMakeRange xlsReadM xlsReadSA xlsWrite xlsWriteM xlsWriteSA xpnd xtics xy xyz ylabel 
ytics zeros zeta zlabel ztics",literal:"DB_AFTER_LAST_ROW DB_ALL_TABLES DB_BATCH_OPERATIONS DB_BEFORE_FIRST_ROW DB_BLOB DB_EVENT_NOTIFICATIONS DB_FINISH_QUERY DB_HIGH_PRECISION DB_LAST_INSERT_ID DB_LOW_PRECISION_DOUBLE DB_LOW_PRECISION_INT32 DB_LOW_PRECISION_INT64 DB_LOW_PRECISION_NUMBERS DB_MULTIPLE_RESULT_SETS DB_NAMED_PLACEHOLDERS DB_POSITIONAL_PLACEHOLDERS DB_PREPARED_QUERIES DB_QUERY_SIZE DB_SIMPLE_LOCKING DB_SYSTEM_TABLES DB_TABLES DB_TRANSACTIONS DB_UNICODE DB_VIEWS"},a={cN:"meta",b:"#",e:"$",k:{"meta-keyword":"define definecs|10 undef ifdef ifndef iflight ifdllcall ifmac ifos2win ifunix else endif lineson linesoff srcfile srcline"},c:[{b:/\\\n/,r:0},{bK:"include",e:"$",k:{"meta-keyword":"include"},c:[{cN:"meta-string",b:'"',e:'"',i:"\\n"}]},e.CLCM,e.CBCM]},r=e.UIR+"\\s*\\(?",o=[{cN:"params",b:/\(/,e:/\)/,k:t,r:0,c:[e.CNM,e.CLCM,e.CBCM]}];return{aliases:["gss"],cI:!0,k:t,i:"(\\{[%#]|[%#]\\})",c:[e.CNM,e.CLCM,e.CBCM,e.C("@","@"),a,{cN:"string",b:'"',e:'"',c:[e.BE]},{cN:"function",bK:"proc keyword",e:";",eE:!0,k:t,c:[{b:r,rB:!0,c:[e.UTM],r:0},e.CNM,e.CLCM,e.CBCM,a].concat(o)},{cN:"function",bK:"fn",e:";",eE:!0,k:t,c:[{b:r+e.IR+"\\)?\\s*\\=\\s*",rB:!0,c:[e.UTM],r:0},e.CNM,e.CLCM,e.CBCM].concat(o)},{cN:"function",b:"\\bexternal (proc|keyword|fn)\\s+",e:";",eE:!0,k:t,c:[{b:r,rB:!0,c:[e.UTM],r:0},e.CLCM,e.CBCM]},{cN:"function",b:"\\bexternal (matrix|string|array|sparse matrix|struct "+e.IR+")\\s+",e:";",eE:!0,k:t,c:[e.CLCM,e.CBCM]}]}});hljs.registerLanguage("makefile",function(e){var a={cN:"variable",b:/\$\(/,e:/\)/,c:[e.BE]};return{aliases:["mk","mak"],c:[e.HCM,{b:/^\w+\s*\W*=/,rB:!0,r:0,starts:{e:/\s*\W*=/,eE:!0,starts:{e:/$/,r:0,c:[a]}}},{cN:"section",b:/^[\w]+:\s*$/},{cN:"meta",b:/^\.PHONY:/,e:/$/,k:{"meta-keyword":".PHONY"},l:/[\.\w]+/},{b:/^\t+/,e:/$/,r:0,c:[e.QSM,a]}]}});hljs.registerLanguage("ceylon",function(e){var a="assembly module package import alias class interface object given value assign void function new of extends satisfies abstracts in out return break continue throw assert dynamic if else switch case for while try catch finally then let this outer super is exists nonempty",t="shared abstract formal default actual variable late native deprecatedfinal sealed annotation suppressWarnings small",s="doc by license see throws tagged",n={cN:"subst",eB:!0,eE:!0,b:/``/,e:/``/,k:a,r:10},r=[{cN:"string",b:'"""',e:'"""',r:10},{cN:"string",b:'"',e:'"',c:[n]},{cN:"string",b:"'",e:"'"},{cN:"number",b:"#[0-9a-fA-F_]+|\\$[01_]+|[0-9_]+(?:\\.[0-9_](?:[eE][+-]?\\d+)?)?[kMGTPmunpf]?",r:0}];return n.c=r,{k:{keyword:a+" "+t,meta:s},i:"\\$[^01]|#[^0-9a-fA-F]",c:[e.CLCM,e.C("/\\*","\\*/",{c:["self"]}),{cN:"meta",b:'@[a-z]\\w*(?:\\:"[^"]*")?'}].concat(r)}});hljs.registerLanguage("thrift",function(e){var t="bool byte i16 i32 i64 double string binary";return{k:{keyword:"namespace const typedef struct enum service exception void oneway set list map required optional",built_in:t,literal:"true false"},c:[e.QSM,e.NM,e.CLCM,e.CBCM,{cN:"class",bK:"struct enum service exception",e:/\{/,i:/\n/,c:[e.inherit(e.TM,{starts:{eW:!0,eE:!0}})]},{b:"\\b(set|list|map)\\s*<",e:">",k:t,c:["self"]}]}});hljs.registerLanguage("haxe",function(e){var a="([*]|[a-zA-Z_$][a-zA-Z0-9_$]*)";return{aliases:["hx"],k:{keyword:"break callback case cast catch class continue default do dynamic else enum extends extern for function here if implements import in inline interface never new override package private public return static super switch this throw trace try typedef untyped using var while",literal:"true false 
null"},c:[e.ASM,e.QSM,e.CLCM,e.CBCM,e.CNM,{cN:"class",bK:"class interface",e:"{",eE:!0,c:[{bK:"extends implements"},e.TM]},{cN:"meta",b:"#",e:"$",k:{"meta-keyword":"if else elseif end error"}},{cN:"function",bK:"function",e:"[{;]",eE:!0,i:"\\S",c:[e.TM,{cN:"params",b:"\\(",e:"\\)",c:[e.ASM,e.QSM,e.CLCM,e.CBCM]},{b:":\\s*"+a}]}]}});hljs.registerLanguage("xl",function(e){var t="ObjectLoader Animate MovieCredits Slides Filters Shading Materials LensFlare Mapping VLCAudioVideo StereoDecoder PointCloud NetworkAccess RemoteControl RegExp ChromaKey Snowfall NodeJS Speech Charts",o={keyword:"if then else do while until for loop import with is as where when by data constant integer real text name boolean symbol infix prefix postfix block tree",literal:"true false nil",built_in:"in mod rem and or xor not abs sign floor ceil sqrt sin cos tan asin acos atan exp expm1 log log2 log10 log1p pi at text_length text_range text_find text_replace contains page slide basic_slide title_slide title subtitle fade_in fade_out fade_at clear_color color line_color line_width texture_wrap texture_transform texture scale_?x scale_?y scale_?z? translate_?x translate_?y translate_?z? rotate_?x rotate_?y rotate_?z? rectangle circle ellipse sphere path line_to move_to quad_to curve_to theme background contents locally time mouse_?x mouse_?y mouse_buttons "+t},a={cN:"string",b:'"',e:'"',i:"\\n"},r={cN:"string",b:"'",e:"'",i:"\\n"},i={cN:"string",b:"<<",e:">>"},l={cN:"number",b:"[0-9]+#[0-9A-Z_]+(\\.[0-9-A-Z_]+)?#?([Ee][+-]?[0-9]+)?"},n={bK:"import",e:"$",k:o,c:[a]},s={cN:"function",b:/[a-z][^\n]*->/,rB:!0,e:/->/,c:[e.inherit(e.TM,{starts:{eW:!0,k:o}})]};return{aliases:["tao"],l:/[a-zA-Z][a-zA-Z0-9_?]*/,k:o,c:[e.CLCM,e.CBCM,a,r,i,s,n,l,e.NM]}});hljs.registerLanguage("smali",function(t){var s=["add","and","cmp","cmpg","cmpl","const","div","double","float","goto","if","int","long","move","mul","neg","new","nop","not","or","rem","return","shl","shr","sput","sub","throw","ushr","xor"],e=["aget","aput","array","check","execute","fill","filled","goto/16","goto/32","iget","instance","invoke","iput","monitor","packed","sget","sparse"],r=["transient","constructor","abstract","final","synthetic","public","private","protected","static","bridge","system"];return{aliases:["smali"],c:[{cN:"string",b:'"',e:'"',r:0},t.C("#","$",{r:0}),{cN:"keyword",v:[{b:"\\s*\\.end\\s[a-zA-Z0-9]*"},{b:"^[ ]*\\.[a-zA-Z]*",r:0},{b:"\\s:[a-zA-Z_0-9]*",r:0},{b:"\\s("+r.join("|")+")"}]},{cN:"built_in",v:[{b:"\\s("+s.join("|")+")\\s"},{b:"\\s("+s.join("|")+")((\\-|/)[a-zA-Z0-9]+)+\\s",r:10},{b:"\\s("+e.join("|")+")((\\-|/)[a-zA-Z0-9]+)*\\s",r:10}]},{cN:"class",b:"L[^(;:\n]*;",r:0},{b:"[vp][0-9]+"}]}});hljs.registerLanguage("dart",function(e){var t={cN:"subst",b:"\\$\\{",e:"}",k:"true false null this is new super"},r={cN:"string",v:[{b:"r'''",e:"'''"},{b:'r"""',e:'"""'},{b:"r'",e:"'",i:"\\n"},{b:'r"',e:'"',i:"\\n"},{b:"'''",e:"'''",c:[e.BE,t]},{b:'"""',e:'"""',c:[e.BE,t]},{b:"'",e:"'",i:"\\n",c:[e.BE,t]},{b:'"',e:'"',i:"\\n",c:[e.BE,t]}]};t.c=[e.CNM,r];var n={keyword:"assert break case catch class const continue default do else enum extends false final finally for if in is new null rethrow return super switch this throw true try var void while with abstract as dynamic export external factory get implements import library operator part set static typedef",built_in:"print Comparable DateTime Duration Function Iterable Iterator List Map Match Null Object Pattern RegExp Set Stopwatch String StringBuffer StringSink Symbol Type Uri bool double int num document window 
querySelector querySelectorAll Element ElementList"};return{k:n,c:[r,e.C("/\\*\\*","\\*/",{sL:"markdown"}),e.C("///","$",{sL:"markdown"}),e.CLCM,e.CBCM,{cN:"class",bK:"class interface",e:"{",eE:!0,c:[{bK:"extends implements"},e.UTM]},e.CNM,{cN:"meta",b:"@[A-Za-z]+"},{b:"=>"}]}});hljs.registerLanguage("clojure-repl",function(e){return{c:[{cN:"meta",b:/^([\w.-]+|\s*#_)=>/,starts:{e:/$/,sL:"clojure"}}]}});hljs.registerLanguage("fortran",function(e){var t={cN:"params",b:"\\(",e:"\\)"},n={literal:".False. .True.",keyword:"kind do while private call intrinsic where elsewhere type endtype endmodule endselect endinterface end enddo endif if forall endforall only contains default return stop then public subroutine|10 function program .and. .or. .not. .le. .eq. .ge. .gt. .lt. goto save else use module select case access blank direct exist file fmt form formatted iostat name named nextrec number opened rec recl sequential status unformatted unit continue format pause cycle exit c_null_char c_alert c_backspace c_form_feed flush wait decimal round iomsg synchronous nopass non_overridable pass protected volatile abstract extends import non_intrinsic value deferred generic final enumerator class associate bind enum c_int c_short c_long c_long_long c_signed_char c_size_t c_int8_t c_int16_t c_int32_t c_int64_t c_int_least8_t c_int_least16_t c_int_least32_t c_int_least64_t c_int_fast8_t c_int_fast16_t c_int_fast32_t c_int_fast64_t c_intmax_t C_intptr_t c_float c_double c_long_double c_float_complex c_double_complex c_long_double_complex c_bool c_char c_null_ptr c_null_funptr c_new_line c_carriage_return c_horizontal_tab c_vertical_tab iso_c_binding c_loc c_funloc c_associated c_f_pointer c_ptr c_funptr iso_fortran_env character_storage_size error_unit file_storage_size input_unit iostat_end iostat_eor numeric_storage_size output_unit c_f_procpointer ieee_arithmetic ieee_support_underflow_control ieee_get_underflow_mode ieee_set_underflow_mode newunit contiguous recursive pad position action delim readwrite eor advance nml interface procedure namelist include sequence elemental pure integer real character complex logical dimension allocatable|10 parameter external implicit|10 none double precision assign intent optional pointer target in out common equivalence data",built_in:"alog alog10 amax0 amax1 amin0 amin1 amod cabs ccos cexp clog csin csqrt dabs dacos dasin datan datan2 dcos dcosh ddim dexp dint dlog dlog10 dmax1 dmin1 dmod dnint dsign dsin dsinh dsqrt dtan dtanh float iabs idim idint idnint ifix isign max0 max1 min0 min1 sngl algama cdabs cdcos cdexp cdlog cdsin cdsqrt cqabs cqcos cqexp cqlog cqsin cqsqrt dcmplx dconjg derf derfc dfloat dgamma dimag dlgama iqint qabs qacos qasin qatan qatan2 qcmplx qconjg qcos qcosh qdim qerf qerfc qexp qgamma qimag qlgama qlog qlog10 qmax1 qmin1 qmod qnint qsign qsin qsinh qsqrt qtan qtanh abs acos aimag aint anint asin atan atan2 char cmplx conjg cos cosh exp ichar index int log log10 max min nint sign sin sinh sqrt tan tanh print write dim lge lgt lle llt mod nullify allocate deallocate adjustl adjustr all allocated any associated bit_size btest ceiling count cshift date_and_time digits dot_product eoshift epsilon exponent floor fraction huge iand ibclr ibits ibset ieor ior ishft ishftc lbound len_trim matmul maxexponent maxloc maxval merge minexponent minloc minval modulo mvbits nearest pack present product radix random_number random_seed range repeat reshape rrspacing scale scan selected_int_kind selected_real_kind set_exponent shape size spacing spread sum 
system_clock tiny transpose trim ubound unpack verify achar iachar transfer dble entry dprod cpu_time command_argument_count get_command get_command_argument get_environment_variable is_iostat_end ieee_arithmetic ieee_support_underflow_control ieee_get_underflow_mode ieee_set_underflow_mode is_iostat_eor move_alloc new_line selected_char_kind same_type_as extends_type_ofacosh asinh atanh bessel_j0 bessel_j1 bessel_jn bessel_y0 bessel_y1 bessel_yn erf erfc erfc_scaled gamma log_gamma hypot norm2 atomic_define atomic_ref execute_command_line leadz trailz storage_size merge_bits bge bgt ble blt dshiftl dshiftr findloc iall iany iparity image_index lcobound ucobound maskl maskr num_images parity popcnt poppar shifta shiftl shiftr this_image"};return{cI:!0,aliases:["f90","f95"],k:n,i:/\/\*/,c:[e.inherit(e.ASM,{cN:"string",r:0}),e.inherit(e.QSM,{cN:"string",r:0}),{cN:"function",bK:"subroutine function program",i:"[${=\\n]",c:[e.UTM,t]},e.C("!","$",{r:0}),{cN:"number",b:"(?=\\b|\\+|\\-|\\.)(?=\\.\\d|\\d)(?:\\d+)?(?:\\.?\\d*)(?:[de][+-]?\\d+)?\\b\\.?",r:0}]}});hljs.registerLanguage("handlebars",function(e){var a={"builtin-name":"each in with if else unless bindattr action collection debugger log outlet template unbound view yield"};return{aliases:["hbs","html.hbs","html.handlebars"],cI:!0,sL:"xml",c:[e.C("{{!(--)?","(--)?}}"),{cN:"template-tag",b:/\{\{[#\/]/,e:/\}\}/,c:[{cN:"name",b:/[a-zA-Z\.-]+/,k:a,starts:{eW:!0,r:0,c:[e.QSM]}}]},{cN:"template-variable",b:/\{\{/,e:/\}\}/,k:a}]}});hljs.registerLanguage("armasm",function(s){return{cI:!0,aliases:["arm"],l:"\\.?"+s.IR,k:{meta:".2byte .4byte .align .ascii .asciz .balign .byte .code .data .else .end .endif .endm .endr .equ .err .exitm .extern .global .hword .if .ifdef .ifndef .include .irp .long .macro .rept .req .section .set .skip .space .text .word .arm .thumb .code16 .code32 .force_thumb .thumb_func .ltorg ALIAS ALIGN ARM AREA ASSERT ATTR CN CODE CODE16 CODE32 COMMON CP DATA DCB DCD DCDU DCDO DCFD DCFDU DCI DCQ DCQU DCW DCWU DN ELIF ELSE END ENDFUNC ENDIF ENDP ENTRY EQU EXPORT EXPORTAS EXTERN FIELD FILL FUNCTION GBLA GBLL GBLS GET GLOBAL IF IMPORT INCBIN INCLUDE INFO KEEP LCLA LCLL LCLS LTORG MACRO MAP MEND MEXIT NOFP OPT PRESERVE8 PROC QN READONLY RELOC REQUIRE REQUIRE8 RLIST FN ROUT SETA SETL SETS SN SPACE SUBT THUMB THUMBX TTL WHILE WEND ",built_in:"r0 r1 r2 r3 r4 r5 r6 r7 r8 r9 r10 r11 r12 r13 r14 r15 pc lr sp ip sl sb fp a1 a2 a3 a4 v1 v2 v3 v4 v5 v6 v7 v8 f0 f1 f2 f3 f4 f5 f6 f7 p0 p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p13 p14 p15 c0 c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12 c13 c14 c15 q0 q1 q2 q3 q4 q5 q6 q7 q8 q9 q10 q11 q12 q13 q14 q15 cpsr_c cpsr_x cpsr_s cpsr_f cpsr_cx cpsr_cxs cpsr_xs cpsr_xsf cpsr_sf cpsr_cxsf spsr_c spsr_x spsr_s spsr_f spsr_cx spsr_cxs spsr_xs spsr_xsf spsr_sf spsr_cxsf s0 s1 s2 s3 s4 s5 s6 s7 s8 s9 s10 s11 s12 s13 s14 s15 s16 s17 s18 s19 s20 s21 s22 s23 s24 s25 s26 s27 s28 s29 s30 s31 d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 d13 d14 d15 d16 d17 d18 d19 d20 d21 d22 d23 d24 d25 d26 d27 d28 d29 d30 d31 {PC} {VAR} {TRUE} {FALSE} {OPT} {CONFIG} {ENDIAN} {CODESIZE} {CPU} {FPU} {ARCHITECTURE} {PCSTOREOFFSET} {ARMASM_VERSION} {INTER} {ROPI} {RWPI} {SWST} {NOSWST} . 
@"},c:[{cN:"keyword",b:"\\b(adc|(qd?|sh?|u[qh]?)?add(8|16)?|usada?8|(q|sh?|u[qh]?)?(as|sa)x|and|adrl?|sbc|rs[bc]|asr|b[lx]?|blx|bxj|cbn?z|tb[bh]|bic|bfc|bfi|[su]bfx|bkpt|cdp2?|clz|clrex|cmp|cmn|cpsi[ed]|cps|setend|dbg|dmb|dsb|eor|isb|it[te]{0,3}|lsl|lsr|ror|rrx|ldm(([id][ab])|f[ds])?|ldr((s|ex)?[bhd])?|movt?|mvn|mra|mar|mul|[us]mull|smul[bwt][bt]|smu[as]d|smmul|smmla|mla|umlaal|smlal?([wbt][bt]|d)|mls|smlsl?[ds]|smc|svc|sev|mia([bt]{2}|ph)?|mrr?c2?|mcrr2?|mrs|msr|orr|orn|pkh(tb|bt)|rbit|rev(16|sh)?|sel|[su]sat(16)?|nop|pop|push|rfe([id][ab])?|stm([id][ab])?|str(ex)?[bhd]?|(qd?)?sub|(sh?|q|u[qh]?)?sub(8|16)|[su]xt(a?h|a?b(16)?)|srs([id][ab])?|swpb?|swi|smi|tst|teq|wfe|wfi|yield)(eq|ne|cs|cc|mi|pl|vs|vc|hi|ls|ge|lt|gt|le|al|hs|lo)?[sptrx]?",e:"\\s"},s.C("[;@]","$",{r:0}),s.CBCM,s.QSM,{cN:"string",b:"'",e:"[^\\\\]'",r:0},{cN:"title",b:"\\|",e:"\\|",i:"\\n",r:0},{cN:"number",v:[{b:"[#$=]?0x[0-9a-f]+"},{b:"[#$=]?0b[01]+"},{b:"[#$=]\\d+"},{b:"\\b\\d+"}],r:0},{cN:"symbol",v:[{b:"^[a-z_\\.\\$][a-z0-9_\\.\\$]+"},{b:"^\\s*[a-z_\\.\\$][a-z0-9_\\.\\$]+:"},{b:"[=#]\\w+"}],r:0}]}});hljs.registerLanguage("json",function(e){var t={literal:"true false null"},i=[e.QSM,e.CNM],r={e:",",eW:!0,eE:!0,c:i,k:t},s={b:"{",e:"}",c:[{cN:"attr",b:'\\s*"',e:'"\\s*:\\s*',eB:!0,eE:!0,c:[e.BE],i:"\\n",starts:r}],i:"\\S"},n={b:"\\[",e:"\\]",c:[e.inherit(r)],i:"\\S"};return i.splice(i.length,0,s,n),{c:i,k:t,i:"\\S"}});hljs.registerLanguage("php",function(e){var c={b:"\\$+[a-zA-Z_-ÿ][a-zA-Z0-9_-ÿ]*"},a={cN:"meta",b:/<\?(php)?|\?>/},i={cN:"string",c:[e.BE,a],v:[{b:'b"',e:'"'},{b:"b'",e:"'"},e.inherit(e.ASM,{i:null}),e.inherit(e.QSM,{i:null})]},t={v:[e.BNM,e.CNM]};return{aliases:["php3","php4","php5","php6"],cI:!0,k:"and include_once list abstract global private echo interface as static endswitch array null if endwhile or const for endforeach self var while isset public protected exit foreach throw elseif include __FILE__ empty require_once do xor return parent clone use __CLASS__ __LINE__ else break print eval new catch __METHOD__ case exception default die require __FUNCTION__ enddeclare final try switch continue endfor endif declare unset true false trait goto instanceof insteadof __DIR__ __NAMESPACE__ yield finally",c:[e.CLCM,e.HCM,e.C("/\\*","\\*/",{c:[{cN:"doctag",b:"@[A-Za-z]+"},a]}),e.C("__halt_compiler.+?;",!1,{eW:!0,k:"__halt_compiler",l:e.UIR}),{cN:"string",b:/<<<['"]?\w+['"]?$/,e:/^\w+;?$/,c:[e.BE,{cN:"subst",v:[{b:/\$\w+/},{b:/\{\$/,e:/\}/}]}]},a,c,{b:/(::|->)+[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/},{cN:"function",bK:"function",e:/[;{]/,eE:!0,i:"\\$|\\[|%",c:[e.UTM,{cN:"params",b:"\\(",e:"\\)",c:["self",c,e.CBCM,i,t]}]},{cN:"class",bK:"class interface",e:"{",eE:!0,i:/[:\(\$"]/,c:[{bK:"extends implements"},e.UTM]},{bK:"namespace",e:";",i:/[\.']/,c:[e.UTM]},{bK:"use",e:";",c:[e.UTM]},{b:"=>"},i,t]}});hljs.registerLanguage("gherkin",function(e){return{aliases:["feature"],k:"Feature Background Ability Business Need Scenario Scenarios Scenario Outline Scenario Template Examples Given And Then But When",c:[{cN:"keyword",b:"\\*"},{cN:"meta",b:"@[^@\\s]+"},{b:"\\|",e:"\\|\\w*$",c:[{cN:"string",b:"[^|]+"}]},{cN:"variable",b:"<",e:">"},e.HCM,{cN:"string",b:'"""',e:'"""'},e.QSM]}});hljs.registerLanguage("javascript",function(e){return{aliases:["js"],k:{keyword:"in of if for while finally var new function do return void else break catch instanceof with throw case default try this switch continue typeof delete let yield const export super debugger as async await import from as",literal:"true false null undefined NaN 
Infinity",built_in:"eval isFinite isNaN parseFloat parseInt decodeURI decodeURIComponent encodeURI encodeURIComponent escape unescape Object Function Boolean Error EvalError InternalError RangeError ReferenceError StopIteration SyntaxError TypeError URIError Number Math Date String RegExp Array Float32Array Float64Array Int16Array Int32Array Int8Array Uint16Array Uint32Array Uint8Array Uint8ClampedArray ArrayBuffer DataView JSON Intl arguments require module console window document Symbol Set Map WeakSet WeakMap Proxy Reflect Promise"},c:[{cN:"meta",r:10,b:/^\s*['"]use (strict|asm)['"]/},{cN:"meta",b:/^#!/,e:/$/},e.ASM,e.QSM,{cN:"string",b:"`",e:"`",c:[e.BE,{cN:"subst",b:"\\$\\{",e:"\\}"}]},e.CLCM,e.CBCM,{cN:"number",v:[{b:"\\b(0[bB][01]+)"},{b:"\\b(0[oO][0-7]+)"},{b:e.CNR}],r:0},{b:"("+e.RSR+"|\\b(case|return|throw)\\b)\\s*",k:"return throw case",c:[e.CLCM,e.CBCM,e.RM,{b:/,e:/>\s*[);\]]/,r:0,sL:"xml"}],r:0},{cN:"function",bK:"function",e:/\{/,eE:!0,c:[e.inherit(e.TM,{b:/[A-Za-z$_][0-9A-Za-z$_]*/}),{cN:"params",b:/\(/,e:/\)/,eB:!0,eE:!0,c:[e.CLCM,e.CBCM]}],i:/\[|%/},{b:/\$[(.]/},{b:"\\."+e.IR,r:0},{cN:"class",bK:"class",e:/[{;=]/,eE:!0,i:/[:"\[\]]/,c:[{bK:"extends"},e.UTM]},{bK:"constructor",e:/\{/,eE:!0}],i:/#(?!!)/}});hljs.registerLanguage("ruleslanguage",function(T){return{k:{keyword:"BILL_PERIOD BILL_START BILL_STOP RS_EFFECTIVE_START RS_EFFECTIVE_STOP RS_JURIS_CODE RS_OPCO_CODE INTDADDATTRIBUTE|5 INTDADDVMSG|5 INTDBLOCKOP|5 INTDBLOCKOPNA|5 INTDCLOSE|5 INTDCOUNT|5 INTDCOUNTSTATUSCODE|5 INTDCREATEMASK|5 INTDCREATEDAYMASK|5 INTDCREATEFACTORMASK|5 INTDCREATEHANDLE|5 INTDCREATEOVERRIDEDAYMASK|5 INTDCREATEOVERRIDEMASK|5 INTDCREATESTATUSCODEMASK|5 INTDCREATETOUPERIOD|5 INTDDELETE|5 INTDDIPTEST|5 INTDEXPORT|5 INTDGETERRORCODE|5 INTDGETERRORMESSAGE|5 INTDISEQUAL|5 INTDJOIN|5 INTDLOAD|5 INTDLOADACTUALCUT|5 INTDLOADDATES|5 INTDLOADHIST|5 INTDLOADLIST|5 INTDLOADLISTDATES|5 INTDLOADLISTENERGY|5 INTDLOADLISTHIST|5 INTDLOADRELATEDCHANNEL|5 INTDLOADSP|5 INTDLOADSTAGING|5 INTDLOADUOM|5 INTDLOADUOMDATES|5 INTDLOADUOMHIST|5 INTDLOADVERSION|5 INTDOPEN|5 INTDREADFIRST|5 INTDREADNEXT|5 INTDRECCOUNT|5 INTDRELEASE|5 INTDREPLACE|5 INTDROLLAVG|5 INTDROLLPEAK|5 INTDSCALAROP|5 INTDSCALE|5 INTDSETATTRIBUTE|5 INTDSETDSTPARTICIPANT|5 INTDSETSTRING|5 INTDSETVALUE|5 INTDSETVALUESTATUS|5 INTDSHIFTSTARTTIME|5 INTDSMOOTH|5 INTDSORT|5 INTDSPIKETEST|5 INTDSUBSET|5 INTDTOU|5 INTDTOURELEASE|5 INTDTOUVALUE|5 INTDUPDATESTATS|5 INTDVALUE|5 STDEV INTDDELETEEX|5 INTDLOADEXACTUAL|5 INTDLOADEXCUT|5 INTDLOADEXDATES|5 INTDLOADEX|5 INTDLOADEXRELATEDCHANNEL|5 INTDSAVEEX|5 MVLOAD|5 MVLOADACCT|5 MVLOADACCTDATES|5 MVLOADACCTHIST|5 MVLOADDATES|5 MVLOADHIST|5 MVLOADLIST|5 MVLOADLISTDATES|5 MVLOADLISTHIST|5 IF FOR NEXT DONE SELECT END CALL ABORT CLEAR CHANNEL FACTOR LIST NUMBER OVERRIDE SET WEEK DISTRIBUTIONNODE ELSE WHEN THEN OTHERWISE IENUM CSV INCLUDE LEAVE RIDER SAVE DELETE NOVALUE SECTION WARN SAVE_UPDATE DETERMINANT LABEL REPORT REVENUE EACH IN FROM TOTAL CHARGE BLOCK AND OR CSV_FILE RATE_CODE AUXILIARY_DEMAND UIDACCOUNT RS BILL_PERIOD_SELECT HOURS_PER_MONTH INTD_ERROR_STOP SEASON_SCHEDULE_NAME ACCOUNTFACTOR ARRAYUPPERBOUND CALLSTOREDPROC GETADOCONNECTION GETCONNECT GETDATASOURCE GETQUALIFIER GETUSERID HASVALUE LISTCOUNT LISTOP LISTUPDATE LISTVALUE PRORATEFACTOR RSPRORATE SETBINPATH SETDBMONITOR WQ_OPEN BILLINGHOURS DATE DATEFROMFLOAT DATETIMEFROMSTRING DATETIMETOSTRING DATETOFLOAT DAY DAYDIFF DAYNAME DBDATETIME HOUR MINUTE MONTH MONTHDIFF MONTHHOURS MONTHNAME ROUNDDATE SAMEWEEKDAYLASTYEAR SECOND WEEKDAY WEEKDIFF YEAR YEARDAY YEARSTR 
COMPSUM HISTCOUNT HISTMAX HISTMIN HISTMINNZ HISTVALUE MAXNRANGE MAXRANGE MINRANGE COMPIKVA COMPKVA COMPKVARFROMKQKW COMPLF IDATTR FLAG LF2KW LF2KWH MAXKW POWERFACTOR READING2USAGE AVGSEASON MAXSEASON MONTHLYMERGE SEASONVALUE SUMSEASON ACCTREADDATES ACCTTABLELOAD CONFIGADD CONFIGGET CREATEOBJECT CREATEREPORT EMAILCLIENT EXPBLKMDMUSAGE EXPMDMUSAGE EXPORT_USAGE FACTORINEFFECT GETUSERSPECIFIEDSTOP INEFFECT ISHOLIDAY RUNRATE SAVE_PROFILE SETREPORTTITLE USEREXIT WATFORRUNRATE TO TABLE ACOS ASIN ATAN ATAN2 BITAND CEIL COS COSECANT COSH COTANGENT DIVQUOT DIVREM EXP FABS FLOOR FMOD FREPM FREXPN LOG LOG10 MAX MAXN MIN MINNZ MODF POW ROUND ROUND2VALUE ROUNDINT SECANT SIN SINH SQROOT TAN TANH FLOAT2STRING FLOAT2STRINGNC INSTR LEFT LEN LTRIM MID RIGHT RTRIM STRING STRINGNC TOLOWER TOUPPER TRIM NUMDAYS READ_DATE STAGING",built_in:"IDENTIFIER OPTIONS XML_ELEMENT XML_OP XML_ELEMENT_OF DOMDOCCREATE DOMDOCLOADFILE DOMDOCLOADXML DOMDOCSAVEFILE DOMDOCGETROOT DOMDOCADDPI DOMNODEGETNAME DOMNODEGETTYPE DOMNODEGETVALUE DOMNODEGETCHILDCT DOMNODEGETFIRSTCHILD DOMNODEGETSIBLING DOMNODECREATECHILDELEMENT DOMNODESETATTRIBUTE DOMNODEGETCHILDELEMENTCT DOMNODEGETFIRSTCHILDELEMENT DOMNODEGETSIBLINGELEMENT DOMNODEGETATTRIBUTECT DOMNODEGETATTRIBUTEI DOMNODEGETATTRIBUTEBYNAME DOMNODEGETBYNAME"},c:[T.CLCM,T.CBCM,T.ASM,T.QSM,T.CNM,{cN:"literal",v:[{b:"#\\s+[a-zA-Z\\ \\.]*",r:0},{b:"#[a-zA-Z\\ \\.]+"}]}]}});hljs.registerLanguage("python",function(e){var r={cN:"meta",b:/^(>>>|\.\.\.) /},b={cN:"string",c:[e.BE],v:[{b:/(u|b)?r?'''/,e:/'''/,c:[r],r:10},{b:/(u|b)?r?"""/,e:/"""/,c:[r],r:10},{b:/(u|r|ur)'/,e:/'/,r:10},{b:/(u|r|ur)"/,e:/"/,r:10},{b:/(b|br)'/,e:/'/},{b:/(b|br)"/,e:/"/},e.ASM,e.QSM]},a={cN:"number",r:0,v:[{b:e.BNR+"[lLjJ]?"},{b:"\\b(0o[0-7]+)[lLjJ]?"},{b:e.CNR+"[lLjJ]?"}]},l={cN:"params",b:/\(/,e:/\)/,c:["self",r,a,b]};return{aliases:["py","gyp"],k:{keyword:"and elif is global as in if from raise for except finally print import pass return exec else break not with class assert yield try while continue del or def lambda async await nonlocal|10 None True False",built_in:"Ellipsis NotImplemented"},i:/(<\/|->|\?)/,c:[r,a,b,e.HCM,{v:[{cN:"function",bK:"def",r:10},{cN:"class",bK:"class"}],e:/:/,i:/[${=;\n,]/,c:[e.UTM,l,{b:/->/,eW:!0,k:"None"}]},{cN:"meta",b:/^[\t ]*@/,e:/$/},{b:/\b(print|exec)\(/}]}});hljs.registerLanguage("asciidoc",function(e){return{aliases:["adoc"],c:[e.C("^/{4,}\\n","\\n/{4,}$",{r:10}),e.C("^//","$",{r:0}),{cN:"title",b:"^\\.\\w.*$"},{b:"^[=\\*]{4,}\\n",e:"\\n^[=\\*]{4,}$",r:10},{cN:"section",r:10,v:[{b:"^(={1,5}) .+?( \\1)?$"},{b:"^[^\\[\\]\\n]+?\\n[=\\-~\\^\\+]{2,}$"}]},{cN:"meta",b:"^:.+?:",e:"\\s",eE:!0,r:10},{cN:"meta",b:"^\\[.+?\\]$",r:0},{cN:"quote",b:"^_{4,}\\n",e:"\\n_{4,}$",r:10},{cN:"code",b:"^[\\-\\.]{4,}\\n",e:"\\n[\\-\\.]{4,}$",r:10},{b:"^\\+{4,}\\n",e:"\\n\\+{4,}$",c:[{b:"<",e:">",sL:"xml",r:0}],r:10},{cN:"bullet",b:"^(\\*+|\\-+|\\.+|[^\\n]+?::)\\s+"},{cN:"symbol",b:"^(NOTE|TIP|IMPORTANT|WARNING|CAUTION):\\s+",r:10},{cN:"strong",b:"\\B\\*(?![\\*\\s])",e:"(\\n{2}|\\*)",c:[{b:"\\\\*\\w",r:0}]},{cN:"emphasis",b:"\\B'(?!['\\s])",e:"(\\n{2}|')",c:[{b:"\\\\'\\w",r:0}],r:0},{cN:"emphasis",b:"_(?![_\\s])",e:"(\\n{2}|_)",r:0},{cN:"string",v:[{b:"``.+?''"},{b:"`.+?'"}]},{cN:"code",b:"(`.+?`|\\+.+?\\+)",r:0},{cN:"code",b:"^[ \\t]",e:"$",r:0},{b:"^'{3,}[ \\t]*$",r:10},{b:"(link:)?(http|https|ftp|file|irc|image:?):\\S+\\[.*?\\]",rB:!0,c:[{b:"(link|image:?):",r:0},{cN:"link",b:"\\w",e:"[^\\[]+",r:0},{cN:"string",b:"\\[",e:"\\]",eB:!0,eE:!0,r:0}],r:10}]}});hljs.registerLanguage("lasso",function(e){var 
r="[a-zA-Z_][a-zA-Z0-9_.]*",t="<\\?(lasso(script)?|=)",a="\\]|\\?>",n={literal:"true false none minimal full all void bw nbw ew new cn ncn lt lte gt gte eq neq rx nrx ft",built_in:"array date decimal duration integer map pair string tag xml null boolean bytes keyword list locale queue set stack staticarray local var variable global data self inherited currentcapture givenblock",keyword:"error_code error_msg error_pop error_push error_reset cache database_names database_schemanames database_tablenames define_tag define_type email_batch encode_set html_comment handle handle_error header if inline iterate ljax_target link link_currentaction link_currentgroup link_currentrecord link_detail link_firstgroup link_firstrecord link_lastgroup link_lastrecord link_nextgroup link_nextrecord link_prevgroup link_prevrecord log loop namespace_using output_none portal private protect records referer referrer repeating resultset rows search_args search_arguments select sort_args sort_arguments thread_atomic value_list while abort case else if_empty if_false if_null if_true loop_abort loop_continue loop_count params params_up return return_value run_children soap_definetag soap_lastrequest soap_lastresponse tag_name ascending average by define descending do equals frozen group handle_failure import in into join let match max min on order parent protected provide public require returnhome skip split_thread sum take thread to trait type where with yield yieldhome and or not"},s=e.C("",{r:0}),i={cN:"meta",b:"\\[noprocess\\]",starts:{e:"\\[/noprocess\\]",rE:!0,c:[s]}},l={cN:"meta",b:"\\[/noprocess|"+t},o={cN:"symbol",b:"'"+r+"'"},c=[e.C("/\\*\\*!","\\*/"),e.CLCM,e.CBCM,e.inherit(e.CNM,{b:e.CNR+"|(infinity|nan)\\b"}),e.inherit(e.ASM,{i:null}),e.inherit(e.QSM,{i:null}),{cN:"string",b:"`",e:"`"},{v:[{b:"[#$]"+r},{b:"#",e:"\\d+",i:"\\W"}]},{cN:"type",b:"::\\s*",e:r,i:"\\W"},{cN:"attr",v:[{b:"-(?!infinity)"+e.UIR,r:0},{b:"(\\.\\.\\.)"}]},{b:/(->|\.\.?)\s*/,r:0,c:[o]},{cN:"class",bK:"define",rE:!0,e:"\\(|=>",c:[e.inherit(e.TM,{b:e.UIR+"(=(?!>))?"})]}];return{aliases:["ls","lassoscript"],cI:!0,l:r+"|&[lg]t;",k:n,c:[{cN:"meta",b:a,r:0,starts:{e:"\\[|"+t,rE:!0,r:0,c:[s]}},i,l,{cN:"meta",b:"\\[no_square_brackets",starts:{e:"\\[/no_square_brackets\\]",l:r+"|&[lg]t;",k:n,c:[{cN:"meta",b:a,r:0,starts:{e:"\\[noprocess\\]|"+t,rE:!0,c:[s]}},i,l].concat(c)}},{cN:"meta",b:"\\[",r:0},{cN:"meta",b:"^#!.+lasso9\\b",r:10}].concat(c)}});hljs.registerLanguage("cos",function(e){var t={cN:"string",v:[{b:'"',e:'"',c:[{b:'""',r:0}]}]},r={cN:"number",b:"\\b(\\d+(\\.\\d*)?|\\.\\d+)",r:0},s={keyword:["property","parameter","class","classmethod","clientmethod","extends","as","break","catch","close","continue","do","d","else","elseif","for","goto","halt","hang","h","if","job","j","kill","k","lock","l","merge","new","open","quit","q","read","r","return","set","s","tcommit","throw","trollback","try","tstart","use","view","while","write","w","xecute","x","zkill","znspace","zn","ztrap","zwrite","zw","zzdump","zzwrite","print","zbreak","zinsert","zload","zprint","zremove","zsave","zzprint","mv","mvcall","mvcrt","mvdim","mvprint","zquit","zsync","ascii"].join(" 
")};return{cI:!0,aliases:["cos","cls"],k:s,c:[r,t,e.CLCM,e.CBCM,{cN:"comment",b:/;/,e:"$",r:0},{cN:"built_in",b:/(?:\$\$?|\.\.)\^?[a-zA-Z]+/},{cN:"built_in",b:/\$\$\$[a-zA-Z]+/},{cN:"built_in",b:/%[a-z]+(?:\.[a-z]+)*/},{cN:"symbol",b:/\^%?[a-zA-Z][\w]*/},{cN:"keyword",b:/##class|##super|#define|#dim/},{b:/&sql\(/,e:/\)/,eB:!0,eE:!0,sL:"sql"},{b:/&(js|jscript|javascript),e:/>/,eB:!0,eE:!0,sL:"javascript"},{b:/&html<\s*,e:/>\s*>/,sL:"xml"}]}});hljs.registerLanguage("hsp",function(e){return{cI:!0,k:"goto gosub return break repeat loop continue wait await dim sdim foreach dimtype dup dupptr end stop newmod delmod mref run exgoto on mcall assert logmes newlab resume yield onexit onerror onkey onclick oncmd exist delete mkdir chdir dirlist bload bsave bcopy memfile if else poke wpoke lpoke getstr chdpm memexpand memcpy memset notesel noteadd notedel noteload notesave randomize noteunsel noteget split strrep setease button chgdisp exec dialog mmload mmplay mmstop mci pset pget syscolor mes print title pos circle cls font sysfont objsize picload color palcolor palette redraw width gsel gcopy gzoom gmode bmpsave hsvcolor getkey listbox chkbox combox input mesbox buffer screen bgscr mouse objsel groll line clrobj boxf objprm objmode stick grect grotate gsquare gradf objimage objskip objenable celload celdiv celput newcom querycom delcom cnvstow comres axobj winobj sendmsg comevent comevarg sarrayconv callfunc cnvwtos comevdisp libptr system hspstat hspver stat cnt err strsize looplev sublev iparam wparam lparam refstr refdval int rnd strlen length length2 length3 length4 vartype gettime peek wpeek lpeek varptr varuse noteinfo instr abs limit getease str strmid strf getpath strtrim sin cos tan atan sqrt double absf expf logf limitf powf geteasef mousex mousey mousew hwnd hinstance hdc ginfo objinfo dirinfo sysinfo thismod __hspver__ __hsp30__ __date__ __time__ __line__ __file__ _debug __hspdef__ and or xor not screen_normal screen_palette screen_hide screen_fixedsize screen_tool screen_frame gmode_gdi gmode_mem gmode_rgb0 gmode_alpha gmode_rgb0alpha gmode_add gmode_sub gmode_pixela ginfo_mx ginfo_my ginfo_act ginfo_sel ginfo_wx1 ginfo_wy1 ginfo_wx2 ginfo_wy2 ginfo_vx ginfo_vy ginfo_sizex ginfo_sizey ginfo_winx ginfo_winy ginfo_mesx ginfo_mesy ginfo_r ginfo_g ginfo_b ginfo_paluse ginfo_dispx ginfo_dispy ginfo_cx ginfo_cy ginfo_intid ginfo_newid ginfo_sx ginfo_sy objinfo_mode objinfo_bmscr objinfo_hwnd notemax notesize dir_cur dir_exe dir_win dir_sys dir_cmdline dir_desktop dir_mydoc dir_tv font_normal font_bold font_italic font_underline font_strikeout font_antialias objmode_normal objmode_guifont objmode_usefont gsquare_grad msgothic msmincho do until while wend for next _break _continue switch case default swbreak swend ddim ldim alloc m_pi rad2deg deg2rad ease_linear ease_quad_in ease_quad_out ease_quad_inout ease_cubic_in ease_cubic_out ease_cubic_inout ease_quartic_in ease_quartic_out ease_quartic_inout ease_bounce_in ease_bounce_out ease_bounce_inout ease_shake_in ease_shake_out ease_shake_inout ease_loop",c:[e.CLCM,e.CBCM,e.QSM,e.ASM,{cN:"string",b:'{"',e:'"}',c:[e.BE]},e.C(";","$"),{cN:"meta",b:"#",e:"$",k:{"meta-keyword":"addion cfunc cmd cmpopt comfunc const defcfunc deffunc define else endif enum epack func global if ifdef ifndef include modcfunc modfunc modinit modterm module pack packopt regcmd runtime undef usecom uselib"},c:[e.inherit(e.QSM,{cN:"meta-string"}),e.NM,e.CNM,e.CLCM,e.CBCM]},{cN:"symbol",b:"^\\*(\\w+|@)"},e.NM,e.CNM]}});hljs.registerLanguage("crmsh",function(t){var 
e="primitive rsc_template",r="group clone ms master location colocation order fencing_topology rsc_ticket acl_target acl_group user role tag xml",s="property rsc_defaults op_defaults",a="params meta operations op rule attributes utilization",i="read write deny defined not_defined in_range date spec in ref reference attribute type xpath version and or lt gt tag lte gte eq ne \\",o="number string",n="Master Started Slave Stopped start promote demote stop monitor true false";return{aliases:["crm","pcmk"],cI:!0,k:{keyword:a+" "+i+" "+o,literal:n},c:[t.HCM,{bK:"node",starts:{e:"\\s*([\\w_-]+:)?",starts:{cN:"title",e:"\\s*[\\$\\w_][\\w_-]*"}}},{bK:e,starts:{cN:"title",e:"\\s*[\\$\\w_][\\w_-]*",starts:{e:"\\s*@?[\\w_][\\w_\\.:-]*"}}},{b:"\\b("+r.split(" ").join("|")+")\\s+",k:r,starts:{cN:"title",e:"[\\$\\w_][\\w_-]*"}},{bK:s,starts:{cN:"title",e:"\\s*([\\w_-]+:)?"}},t.QSM,{cN:"meta",b:"(ocf|systemd|service|lsb):[\\w_:-]+",r:0},{cN:"number",b:"\\b\\d+(\\.\\d+)?(ms|s|h|m)?",r:0},{cN:"literal",b:"[-]?(infinity|inf)",r:0},{cN:"attr",b:/([A-Za-z\$_\#][\w_-]+)=/,r:0},{cN:"tag",b:"?",e:"/?>",r:0}]}});hljs.registerLanguage("dust",function(e){var t="if eq ne lt lte gt gte select default math sep";return{aliases:["dst"],cI:!0,sL:"xml",c:[{cN:"template-tag",b:/\{[#\/]/,e:/\}/,i:/;/,c:[{cN:"name",b:/[a-zA-Z\.-]+/,starts:{eW:!0,r:0,c:[e.QSM]}}]},{cN:"template-variable",b:/\{/,e:/\}/,i:/;/,k:t}]}});hljs.registerLanguage("groovy",function(e){return{k:{literal:"true false null",keyword:"byte short char int long boolean float double void def as in assert trait super this abstract static volatile transient public private protected synchronized final class interface enum if else for while switch case break default continue throw throws try catch finally implements extends new import package return instanceof"},c:[e.C("/\\*\\*","\\*/",{r:0,c:[{b:/\w+@/,r:0},{cN:"doctag",b:"@[A-Za-z]+"}]}),e.CLCM,e.CBCM,{cN:"string",b:'"""',e:'"""'},{cN:"string",b:"'''",e:"'''"},{cN:"string",b:"\\$/",e:"/\\$",r:10},e.ASM,{cN:"regexp",b:/~?\/[^\/\n]+\//,c:[e.BE]},e.QSM,{cN:"meta",b:"^#!/usr/bin/env",e:"$",i:"\n"},e.BNM,{cN:"class",bK:"class interface trait enum",e:"{",i:":",c:[{bK:"extends implements"},e.UTM]},e.CNM,{cN:"meta",b:"@[A-Za-z]+"},{cN:"string",b:/[^\?]{0}[A-Za-z0-9_$]+ *:/},{b:/\?/,e:/\:/},{cN:"symbol",b:"^\\s*[A-Za-z0-9_$]+:",r:0}],i:/#|<\//}});hljs.registerLanguage("vbnet",function(e){return{aliases:["vb"],cI:!0,k:{keyword:"addhandler addressof alias and andalso aggregate ansi as assembly auto binary by byref byval call case catch class compare const continue custom declare default delegate dim distinct do each equals else elseif end enum erase error event exit explicit finally for friend from function get global goto group handles if implements imports in inherits interface into is isfalse isnot istrue join key let lib like loop me mid mod module mustinherit mustoverride mybase myclass namespace narrowing new next not notinheritable notoverridable of off on operator option optional or order orelse overloads overridable overrides paramarray partial preserve private property protected public raiseevent readonly redim rem removehandler resume return select set shadows shared skip static step stop structure strict sub synclock take text then throw to try unicode until using when where while widening with withevents writeonly xor",built_in:"boolean byte cbool cbyte cchar cdate cdec cdbl char cint clng cobj csbyte cshort csng cstr ctype date decimal directcast double gettype getxmlnamespace iif integer long object sbyte 
short single string trycast typeof uinteger ulong ushort",literal:"true false nothing"},i:"//|{|}|endif|gosub|variant|wend",c:[e.inherit(e.QSM,{c:[{b:'""'}]}),e.C("'","$",{rB:!0,c:[{cN:"doctag",b:"'''|",c:[e.PWM]},{cN:"doctag",b:"?",e:">",c:[e.PWM]}]}),e.CNM,{cN:"meta",b:"#",e:"$",k:{"meta-keyword":"if else elseif end region externalsource"}}]}});hljs.registerLanguage("cpp",function(t){var e={cN:"keyword",b:"\\b[a-z\\d_]*_t\\b"},r={cN:"string",v:[t.inherit(t.QSM,{b:'((u8?|U)|L)?"'}),{b:'(u8?|U)?R"',e:'"',c:[t.BE]},{b:"'\\\\?.",e:"'",i:"."}]},i={cN:"number",v:[{b:"\\b(\\d+(\\.\\d*)?|\\.\\d+)(u|U|l|L|ul|UL|f|F)"},{b:t.CNR}],r:0},s={cN:"meta",b:"#",e:"$",k:{"meta-keyword":"if else elif endif define undef warning error line pragma ifdef ifndef"},c:[{b:/\\\n/,r:0},{bK:"include",e:"$",k:{"meta-keyword":"include"},c:[t.inherit(r,{cN:"meta-string"}),{cN:"meta-string",b:"<",e:">",i:"\\n"}]},r,t.CLCM,t.CBCM]},a=t.IR+"\\s*\\(",c={keyword:"int float while private char catch export virtual operator sizeof dynamic_cast|10 typedef const_cast|10 const struct for static_cast|10 union namespace unsigned long volatile static protected bool template mutable if public friend do goto auto void enum else break extern using class asm case typeid short reinterpret_cast|10 default double register explicit signed typename try this switch continue inline delete alignof constexpr decltype noexcept static_assert thread_local restrict _Bool complex _Complex _Imaginary atomic_bool atomic_char atomic_schar atomic_uchar atomic_short atomic_ushort atomic_int atomic_uint atomic_long atomic_ulong atomic_llong atomic_ullong",built_in:"std string cin cout cerr clog stdin stdout stderr stringstream istringstream ostringstream auto_ptr deque list queue stack vector map set bitset multiset multimap unordered_set unordered_map unordered_multiset unordered_multimap array shared_ptr abort abs acos asin atan2 atan calloc ceil cosh cos exit exp fabs floor fmod fprintf fputs free frexp fscanf isalnum isalpha iscntrl isdigit isgraph islower isprint ispunct isspace isupper isxdigit tolower toupper labs ldexp log10 log malloc realloc memchr memcmp memcpy memset modf pow printf putchar puts scanf sinh sin snprintf sprintf sqrt sscanf strcat strchr strcmp strcpy strcspn strlen strncat strncmp strncpy strpbrk strrchr strspn strstr tanh tan vfprintf vprintf vsprintf endl initializer_list unique_ptr",literal:"true false nullptr NULL"};return{aliases:["c","cc","h","c++","h++","hpp"],k:c,i:"",c:[e,t.CLCM,t.CBCM,i,r,s,{b:"\\b(deque|list|queue|stack|vector|map|set|bitset|multiset|multimap|unordered_map|unordered_set|unordered_multiset|unordered_multimap|array)\\s*<",e:">",k:c,c:["self",e]},{b:t.IR+"::",k:c},{bK:"new throw return else",r:0},{cN:"function",b:"("+t.IR+"[\\*&\\s]+)+"+a,rB:!0,e:/[{;=]/,eE:!0,k:c,i:/[^\w\s\*&]/,c:[{b:a,rB:!0,c:[t.TM],r:0},{cN:"params",b:/\(/,e:/\)/,k:c,r:0,c:[t.CLCM,t.CBCM,r,i]},t.CLCM,t.CBCM,s]}]}});hljs.registerLanguage("erlang",function(e){var r="[a-z'][a-zA-Z0-9_']*",c="("+r+":"+r+"|"+r+")",b={keyword:"after and andalso|10 band begin bnot bor bsl bzr bxor case catch cond div end fun if let not of orelse|10 query receive rem try when xor",literal:"false true"},i=e.C("%","$"),n={cN:"number",b:"\\b(\\d+#[a-fA-F0-9]+|\\d+(\\.\\d+)?([eE][-+]?\\d+)?)",r:0},a={b:"fun\\s+"+r+"/\\d+"},d={b:c+"\\(",e:"\\)",rB:!0,r:0,c:[{b:c,r:0},{b:"\\(",e:"\\)",eW:!0,rE:!0,r:0}]},o={b:"{",e:"}",r:0},t={b:"\\b_([A-Z][A-Za-z0-9_]*)?",r:0},f={b:"[A-Z][a-zA-Z0-9_]*",r:0},l={b:"#"+e.UIR,r:0,rB:!0,c:[{b:"#"+e.UIR,r:0},{b:"{",e:"}",r:0}]},s={bK:"fun 
receive if try case",e:"end",k:b};s.c=[i,a,e.inherit(e.ASM,{cN:""}),s,d,e.QSM,n,o,t,f,l];var u=[i,a,s,d,e.QSM,n,o,t,f,l];d.c[1].c=u,o.c=u,l.c[1].c=u;var h={cN:"params",b:"\\(",e:"\\)",c:u};return{aliases:["erl"],k:b,i:"(|\\*=|\\+=|-=|/\\*|\\*/|\\(\\*|\\*\\))",c:[{cN:"function",b:"^"+r+"\\s*\\(",e:"->",rB:!0,i:"\\(|#|//|/\\*|\\\\|:|;",c:[h,e.inherit(e.TM,{b:r})],starts:{e:";|\\.",k:b,c:u}},i,{b:"^-",e:"\\.",r:0,eE:!0,rB:!0,l:"-"+e.IR,k:"-module -record -undef -export -ifdef -ifndef -author -copyright -doc -vsn -import -include -include_lib -compile -define -else -endif -file -behaviour -behavior -spec",c:[h]},n,e.QSM,l,t,f,o,{b:/\.$/}]}});hljs.registerLanguage("dos",function(e){var r=e.C(/@?rem\b/,/$/,{r:10}),t={cN:"symbol",b:"^\\s*[A-Za-z._?][A-Za-z0-9_$#@~.?]*(:|\\s+label)",r:0};return{aliases:["bat","cmd"],cI:!0,i:/\/\*/,k:{keyword:"if else goto for in do call exit not exist errorlevel defined equ neq lss leq gtr geq",built_in:"prn nul lpt3 lpt2 lpt1 con com4 com3 com2 com1 aux shift cd dir echo setlocal endlocal set pause copy append assoc at attrib break cacls cd chcp chdir chkdsk chkntfs cls cmd color comp compact convert date dir diskcomp diskcopy doskey erase fs find findstr format ftype graftabl help keyb label md mkdir mode more move path pause print popd pushd promt rd recover rem rename replace restore rmdir shiftsort start subst time title tree type ver verify vol ping net ipconfig taskkill xcopy ren del"},c:[{cN:"variable",b:/%%[^ ]|%[^ ]+?%|![^ ]+?!/},{cN:"function",b:t.b,e:"goto:eof",c:[e.inherit(e.TM,{b:"([_a-zA-Z]\\w*\\.)*([_a-zA-Z]\\w*:)?[_a-zA-Z]\\w*"}),r]},{cN:"number",b:"\\b\\d+",r:0},r]}});hljs.registerLanguage("delphi",function(e){var r="exports register file shl array record property for mod while set ally label uses raise not stored class safecall var interface or private static exit index inherited to else stdcall override shr asm far resourcestring finalization packed virtual out and protected library do xorwrite goto near function end div overload object unit begin string on inline repeat until destructor write message program with read initialization except default nil if case cdecl in downto threadvar of try pascal const external constructor type public then implementation finally published procedure",t=[e.CLCM,e.C(/\{/,/\}/,{r:0}),e.C(/\(\*/,/\*\)/,{r:10})],i={cN:"string",b:/'/,e:/'/,c:[{b:/''/}]},c={cN:"string",b:/(#\d+)+/},o={b:e.IR+"\\s*=\\s*class\\s*\\(",rB:!0,c:[e.TM]},n={cN:"function",bK:"function constructor destructor procedure",e:/[:;]/,k:"function constructor|10 destructor|10 procedure|10",c:[e.TM,{cN:"params",b:/\(/,e:/\)/,k:r,c:[i,c]}].concat(t)};return{cI:!0,k:r,i:/"|\$[G-Zg-z]|\/\*|<\/|\|/,c:[i,c,e.NM,o,n].concat(t)}});hljs.registerLanguage("parser3",function(r){var e=r.C("{","}",{c:["self"]});return{sL:"xml",r:0,c:[r.C("^#","$"),r.C("\\^rem{","}",{r:10,c:[e]}),{cN:"meta",b:"^@(?:BASE|USE|CLASS|OPTIONS)$",r:10},{cN:"title",b:"@[\\w\\-]+\\[[\\w^;\\-]*\\](?:\\[[\\w^;\\-]*\\])?(?:.*)$"},{cN:"variable",b:"\\$\\{?[\\w\\-\\.\\:]+\\}?"},{cN:"keyword",b:"\\^[\\w\\-\\.\\:]+"},{cN:"number",b:"\\^#[0-9a-fA-F]+"},r.CNM]}});hljs.registerLanguage("xml",function(s){var 
t="[A-Za-z0-9\\._:-]+",e={b:/<\?(php)?(?!\w)/,e:/\?>/,sL:"php"},r={eW:!0,i:/,r:0,c:[e,{cN:"attr",b:t,r:0},{b:"=",r:0,c:[{cN:"string",c:[e],v:[{b:/"/,e:/"/},{b:/'/,e:/'/},{b:/[^\s\/>]+/}]}]}]};return{aliases:["html","xhtml","rss","atom","xsl","plist"],cI:!0,c:[{cN:"meta",b:"",r:10,c:[{b:"\\[",e:"\\]"}]},s.C("",{r:10}),{b:"<\\!\\[CDATA\\[",e:"\\]\\]>",r:10},{cN:"tag",b:"",rE:!0,sL:["css","xml"]}},{cN:"tag",b:"",rE:!0,sL:["actionscript","javascript","handlebars","xml"]}},e,{cN:"meta",b:/<\?\w+/,e:/\?>/,r:10},{cN:"tag",b:"?",e:"/?>",c:[{cN:"name",b:/[^\/><\s]+/,r:0},r]}]}});hljs.registerLanguage("lua",function(e){var t="\\[=*\\[",a="\\]=*\\]",r={b:t,e:a,c:["self"]},n=[e.C("--(?!"+t+")","$"),e.C("--"+t,a,{c:[r],r:10})];return{l:e.UIR,k:{keyword:"and break do else elseif end false for if in local nil not or repeat return then true until while",built_in:"_G _VERSION assert collectgarbage dofile error getfenv getmetatable ipairs load loadfile loadstring module next pairs pcall print rawequal rawget rawset require select setfenv setmetatable tonumber tostring type unpack xpcall coroutine debug io math os package string table"},c:n.concat([{cN:"function",bK:"function",e:"\\)",c:[e.inherit(e.TM,{b:"([_a-zA-Z]\\w*\\.)*([_a-zA-Z]\\w*:)?[_a-zA-Z]\\w*"}),{cN:"params",b:"\\(",eW:!0,c:n}].concat(n)},e.CNM,e.ASM,e.QSM,{cN:"string",b:t,e:a,c:[r],r:5}])}});hljs.registerLanguage("java",function(e){var a=e.UIR+"(<"+e.UIR+"(\\s*,\\s*"+e.UIR+")*>)?",t="false synchronized int abstract float private char boolean static null if const for true while long strictfp finally protected import native final void enum else break transient catch instanceof byte super volatile case assert short package default double public try this switch continue throws protected public private",r="\\b(0[bB]([01]+[01_]+[01]+|[01]+)|0[xX]([a-fA-F0-9]+[a-fA-F0-9_]+[a-fA-F0-9]+|[a-fA-F0-9]+)|(([\\d]+[\\d_]+[\\d]+|[\\d]+)(\\.([\\d]+[\\d_]+[\\d]+|[\\d]+))?|\\.([\\d]+[\\d_]+[\\d]+|[\\d]+))([eE][-+]?\\d+)?)[lLfF]?",c={cN:"number",b:r,r:0};return{aliases:["jsp"],k:t,i:/<\/|#/,c:[e.C("/\\*\\*","\\*/",{r:0,c:[{b:/\w+@/,r:0},{cN:"doctag",b:"@[A-Za-z]+"}]}),e.CLCM,e.CBCM,e.ASM,e.QSM,{cN:"class",bK:"class interface",e:/[{;=]/,eE:!0,k:"class interface",i:/[:"\[\]]/,c:[{bK:"extends implements"},e.UTM]},{bK:"new throw return else",r:0},{cN:"function",b:"("+a+"\\s+)+"+e.UIR+"\\s*\\(",rB:!0,e:/[{;=]/,eE:!0,k:t,c:[{b:e.UIR+"\\s*\\(",rB:!0,r:0,c:[e.UTM]},{cN:"params",b:/\(/,e:/\)/,k:t,r:0,c:[e.ASM,e.QSM,e.CNM,e.CBCM]},e.CLCM,e.CBCM]},c,{cN:"meta",b:"@[A-Za-z]+"}]}});hljs.registerLanguage("openscad",function(e){var r={cN:"keyword",b:"\\$(f[asn]|t|vp[rtd]|children)"},n={cN:"literal",b:"false|true|PI|undef"},o={cN:"number",b:"\\b\\d+(\\.\\d+)?(e-?\\d+)?",r:0},i=e.inherit(e.QSM,{i:null}),t={cN:"meta",k:{"meta-keyword":"include use"},b:"include|use <",e:">"},s={cN:"params",b:"\\(",e:"\\)",c:["self",o,i,r,n]},c={b:"[*!#%]",r:0},a={cN:"function",bK:"module function",e:"\\=|\\{",c:[s,e.UTM]};return{aliases:["scad"],k:{keyword:"function module include use for intersection_for if else \\%",literal:"false true PI undef",built_in:"circle square polygon text sphere cube cylinder polyhedron translate rotate scale resize mirror multmatrix color offset hull minkowski union difference intersection abs sign sin cos tan acos asin atan atan2 floor round ceil ln log pow sqrt exp rands min max concat lookup str chr search version version_num norm cross parent_module echo import import_dxf dxf_linear_extrude linear_extrude rotate_extrude surface projection render children dxf_cross 
dxf_dim let assign"},c:[e.CLCM,e.CBCM,o,t,i,r,c,a]}});hljs.registerLanguage("powershell",function(e){var t={b:"`[\\s\\S]",r:0},r={cN:"variable",v:[{b:/\$[\w\d][\w\d_:]*/}]},o={cN:"literal",b:/\$(null|true|false)\b/},a={cN:"string",b:/"/,e:/"/,c:[t,r,{cN:"variable",b:/\$[A-z]/,e:/[^A-z]/}]},i={cN:"string",b:/'/,e:/'/};return{aliases:["ps"],l:/-?[A-z\.\-]+/,cI:!0,k:{keyword:"if else foreach return function do while until elseif begin for trap data dynamicparam end break throw param continue finally in switch exit filter try process catch",built_in:"Add-Content Add-History Add-Member Add-PSSnapin Clear-Content Clear-Item Clear-Item Property Clear-Variable Compare-Object ConvertFrom-SecureString Convert-Path ConvertTo-Html ConvertTo-SecureString Copy-Item Copy-ItemProperty Export-Alias Export-Clixml Export-Console Export-Csv ForEach-Object Format-Custom Format-List Format-Table Format-Wide Get-Acl Get-Alias Get-AuthenticodeSignature Get-ChildItem Get-Command Get-Content Get-Credential Get-Culture Get-Date Get-EventLog Get-ExecutionPolicy Get-Help Get-History Get-Host Get-Item Get-ItemProperty Get-Location Get-Member Get-PfxCertificate Get-Process Get-PSDrive Get-PSProvider Get-PSSnapin Get-Service Get-TraceSource Get-UICulture Get-Unique Get-Variable Get-WmiObject Group-Object Import-Alias Import-Clixml Import-Csv Invoke-Expression Invoke-History Invoke-Item Join-Path Measure-Command Measure-Object Move-Item Move-ItemProperty New-Alias New-Item New-ItemProperty New-Object New-PSDrive New-Service New-TimeSpan New-Variable Out-Default Out-File Out-Host Out-Null Out-Printer Out-String Pop-Location Push-Location Read-Host Remove-Item Remove-ItemProperty Remove-PSDrive Remove-PSSnapin Remove-Variable Rename-Item Rename-ItemProperty Resolve-Path Restart-Service Resume-Service Select-Object Select-String Set-Acl Set-Alias Set-AuthenticodeSignature Set-Content Set-Date Set-ExecutionPolicy Set-Item Set-ItemProperty Set-Location Set-PSDebug Set-Service Set-TraceSource Set-Variable Sort-Object Split-Path Start-Service Start-Sleep Start-Transcript Stop-Process Stop-Service Stop-Transcript Suspend-Service Tee-Object Test-Path Trace-Command Update-FormatData Update-TypeData Where-Object Write-Debug Write-Error Write-Host Write-Output Write-Progress Write-Verbose Write-Warning",nomarkup:"-ne -eq -lt -gt -ge -le -not -like -notlike -match -notmatch -contains -notcontains -in -notin -replace"},c:[e.HCM,e.NM,a,i,o,r]}});hljs.registerLanguage("protobuf",function(e){return{k:{keyword:"package import option optional required repeated group",built_in:"double float int32 int64 uint32 uint64 sint32 sint64 fixed32 fixed64 sfixed32 sfixed64 bool string bytes",literal:"true false"},c:[e.QSM,e.NM,e.CLCM,{cN:"class",bK:"message enum service",e:/\{/,i:/\n/,c:[e.inherit(e.TM,{starts:{eW:!0,eE:!0}})]},{cN:"function",bK:"rpc",e:/;/,eE:!0,k:"rpc returns"},{b:/^\s*[A-Z_]+/,e:/\s*=/,eE:!0}]}});hljs.registerLanguage("ruby",function(e){var b="[a-zA-Z_]\\w*[!?=]?|[-+~]\\@|<<|>>|=~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~`|]|\\[\\]=?",c="and false then defined module in return redo if BEGIN retry end for true self when next until do begin unless END rescue nil else break undef not super class case require yield alias while ensure elsif or include attr_reader attr_writer 
attr_accessor",r={cN:"doctag",b:"@[A-Za-z]+"},a={b:"#<",e:">"},s=[e.C("#","$",{c:[r]}),e.C("^\\=begin","^\\=end",{c:[r],r:10}),e.C("^__END__","\\n$")],n={cN:"subst",b:"#\\{",e:"}",k:c},t={cN:"string",c:[e.BE,n],v:[{b:/'/,e:/'/},{b:/"/,e:/"/},{b:/`/,e:/`/},{b:"%[qQwWx]?\\(",e:"\\)"},{b:"%[qQwWx]?\\[",e:"\\]"},{b:"%[qQwWx]?{",e:"}"},{b:"%[qQwWx]?<",e:">"},{b:"%[qQwWx]?/",e:"/"},{b:"%[qQwWx]?%",e:"%"},{b:"%[qQwWx]?-",e:"-"},{b:"%[qQwWx]?\\|",e:"\\|"},{b:/\B\?(\\\d{1,3}|\\x[A-Fa-f0-9]{1,2}|\\u[A-Fa-f0-9]{4}|\\?\S)\b/}]},i={cN:"params",b:"\\(",e:"\\)",endsParent:!0,k:c},d=[t,a,{cN:"class",bK:"class module",e:"$|;",i:/=/,c:[e.inherit(e.TM,{b:"[A-Za-z_]\\w*(::\\w+)*(\\?|\\!)?"}),{b:"<\\s*",c:[{b:"("+e.IR+"::)?"+e.IR}]}].concat(s)},{cN:"function",bK:"def",e:"$|;",c:[e.inherit(e.TM,{b:b}),i].concat(s)},{cN:"symbol",b:e.UIR+"(\\!|\\?)?:",r:0},{cN:"symbol",b:":",c:[t,{b:b}],r:0},{cN:"number",b:"(\\b0[0-7_]+)|(\\b0x[0-9a-fA-F_]+)|(\\b[1-9][0-9_]*(\\.[0-9_]+)?)|[0_]\\b",r:0},{b:"(\\$\\W)|((\\$|\\@\\@?)(\\w+))"},{b:"("+e.RSR+")\\s*",c:[a,{cN:"regexp",c:[e.BE,n],i:/\n/,v:[{b:"/",e:"/[a-z]*"},{b:"%r{",e:"}[a-z]*"},{b:"%r\\(",e:"\\)[a-z]*"},{b:"%r!",e:"![a-z]*"},{b:"%r\\[",e:"\\][a-z]*"}]}].concat(s),r:0}].concat(s);n.c=d,i.c=d;var o="[>?]>",l="[\\w#]+\\(\\w+\\):\\d+:\\d+>",u="(\\w+-)?\\d+\\.\\d+\\.\\d(p\\d+)?[^>]+>",w=[{b:/^\s*=>/,starts:{e:"$",c:d}},{cN:"meta",b:"^("+o+"|"+l+"|"+u+")",starts:{e:"$",c:d}}];return{aliases:["rb","gemspec","podspec","thor","irb"],k:c,i:/\/\*/,c:s.concat(w).concat(d)}});hljs.registerLanguage("sql",function(e){var t=e.C("--","$");return{cI:!0,i:/[<>{}*]/,c:[{bK:"begin end start commit rollback savepoint lock alter create drop rename call delete do handler insert load replace select truncate update set show pragma grant merge describe use explain help declare prepare execute deallocate release unlock purge reset change stop analyze cache flush optimize repair kill install uninstall checksum restore check backup revoke",e:/;/,eW:!0,k:{keyword:"abort abs absolute acc acce accep accept access accessed accessible account acos action activate add addtime admin administer advanced advise aes_decrypt aes_encrypt after agent aggregate ali alia alias allocate allow alter always analyze ancillary and any anydata anydataset anyschema anytype apply archive archived archivelog are as asc ascii asin assembly assertion associate asynchronous at atan atn2 attr attri attrib attribu attribut attribute attributes audit authenticated authentication authid authors auto autoallocate autodblink autoextend automatic availability avg backup badfile basicfile before begin beginning benchmark between bfile bfile_base big bigfile bin binary_double binary_float binlog bit_and bit_count bit_length bit_or bit_xor bitmap blob_base block blocksize body both bound buffer_cache buffer_pool build bulk by byte byteordermark bytes c cache caching call calling cancel capacity cascade cascaded case cast catalog category ceil ceiling chain change changed char_base char_length character_length characters characterset charindex charset charsetform charsetid check checksum checksum_agg child choose chr chunk class cleanup clear client clob clob_base clone close cluster_id cluster_probability cluster_set clustering coalesce coercibility col collate collation collect colu colum column column_value columns columns_updated comment commit compact compatibility compiled complete composite_limit compound compress compute concat concat_ws concurrent confirm conn connec connect connect_by_iscycle connect_by_isleaf connect_by_root 
connect_time connection consider consistent constant constraint constraints constructor container content contents context contributors controlfile conv convert convert_tz corr corr_k corr_s corresponding corruption cos cost count count_big counted covar_pop covar_samp cpu_per_call cpu_per_session crc32 create creation critical cross cube cume_dist curdate current current_date current_time current_timestamp current_user cursor curtime customdatum cycle d data database databases datafile datafiles datalength date_add date_cache date_format date_sub dateadd datediff datefromparts datename datepart datetime2fromparts day day_to_second dayname dayofmonth dayofweek dayofyear days db_role_change dbtimezone ddl deallocate declare decode decompose decrement decrypt deduplicate def defa defau defaul default defaults deferred defi defin define degrees delayed delegate delete delete_all delimited demand dense_rank depth dequeue des_decrypt des_encrypt des_key_file desc descr descri describ describe descriptor deterministic diagnostics difference dimension direct_load directory disable disable_all disallow disassociate discardfile disconnect diskgroup distinct distinctrow distribute distributed div do document domain dotnet double downgrade drop dumpfile duplicate duration e each edition editionable editions element ellipsis else elsif elt empty enable enable_all enclosed encode encoding encrypt end end-exec endian enforced engine engines enqueue enterprise entityescaping eomonth error errors escaped evalname evaluate event eventdata events except exception exceptions exchange exclude excluding execu execut execute exempt exists exit exp expire explain export export_set extended extent external external_1 external_2 externally extract f failed failed_login_attempts failover failure far fast feature_set feature_value fetch field fields file file_name_convert filesystem_like_logging final finish first first_value fixed flash_cache flashback floor flush following follows for forall force form forma format found found_rows freelist freelists freepools fresh from from_base64 from_days ftp full function g general generated get get_format get_lock getdate getutcdate global global_name globally go goto grant grants greatest group group_concat group_id grouping grouping_id groups gtid_subtract guarantee guard handler hash hashkeys having hea head headi headin heading heap help hex hierarchy high high_priority hosts hour http i id ident_current ident_incr ident_seed identified identity idle_time if ifnull ignore iif ilike ilm immediate import in include including increment index indexes indexing indextype indicator indices inet6_aton inet6_ntoa inet_aton inet_ntoa infile initial initialized initially initrans inmemory inner innodb input insert install instance instantiable instr interface interleaved intersect into invalidate invisible is is_free_lock is_ipv4 is_ipv4_compat is_not is_not_null is_used_lock isdate isnull isolation iterate java join json json_exists k keep keep_duplicates key keys kill l language large last last_day last_insert_id last_value lax lcase lead leading least leaves left len lenght length less level levels library like like2 like4 likec limit lines link list listagg little ln load load_file lob lobs local localtime localtimestamp locate locator lock locked log log10 log2 logfile logfiles logging logical logical_reads_per_call logoff logon logs long loop low low_priority lower lpad lrtrim ltrim m main make_set makedate maketime managed management manual map mapping mask master 
master_pos_wait match matched materialized max maxextents maximize maxinstances maxlen maxlogfiles maxloghistory maxlogmembers maxsize maxtrans md5 measures median medium member memcompress memory merge microsecond mid migration min minextents minimum mining minus minute minvalue missing mod mode model modification modify module monitoring month months mount move movement multiset mutex n name name_const names nan national native natural nav nchar nclob nested never new newline next nextval no no_write_to_binlog noarchivelog noaudit nobadfile nocheck nocompress nocopy nocycle nodelay nodiscardfile noentityescaping noguarantee nokeep nologfile nomapping nomaxvalue nominimize nominvalue nomonitoring none noneditionable nonschema noorder nopr nopro noprom nopromp noprompt norely noresetlogs noreverse normal norowdependencies noschemacheck noswitch not nothing notice notrim novalidate now nowait nth_value nullif nulls num numb numbe nvarchar nvarchar2 object ocicoll ocidate ocidatetime ociduration ociinterval ociloblocator ocinumber ociref ocirefcursor ocirowid ocistring ocitype oct octet_length of off offline offset oid oidindex old on online only opaque open operations operator optimal optimize option optionally or oracle oracle_date oradata ord ordaudio orddicom orddoc order ordimage ordinality ordvideo organization orlany orlvary out outer outfile outline output over overflow overriding p package pad parallel parallel_enable parameters parent parse partial partition partitions pascal passing password password_grace_time password_lock_time password_reuse_max password_reuse_time password_verify_function patch path patindex pctincrease pctthreshold pctused pctversion percent percent_rank percentile_cont percentile_disc performance period period_add period_diff permanent physical pi pipe pipelined pivot pluggable plugin policy position post_transaction pow power pragma prebuilt precedes preceding precision prediction prediction_cost prediction_details prediction_probability prediction_set prepare present preserve prior priority private private_sga privileges procedural procedure procedure_analyze processlist profiles project prompt protection public publishingservername purge quarter query quick quiesce quota quotename radians raise rand range rank raw read reads readsize rebuild record records recover recovery recursive recycle redo reduced ref reference referenced references referencing refresh regexp_like register regr_avgx regr_avgy regr_count regr_intercept regr_r2 regr_slope regr_sxx regr_sxy reject rekey relational relative relaylog release release_lock relies_on relocate rely rem remainder rename repair repeat replace replicate replication required reset resetlogs resize resource respect restore restricted result result_cache resumable resume retention return returning returns reuse reverse revoke right rlike role roles rollback rolling rollup round row row_count rowdependencies rowid rownum rows rtrim rules safe salt sample save savepoint sb1 sb2 sb4 scan schema schemacheck scn scope scroll sdo_georaster sdo_topo_geometry search sec_to_time second section securefile security seed segment select self sequence sequential serializable server servererror session session_user sessions_per_user set sets settings sha sha1 sha2 share shared shared_pool short show shrink shutdown si_averagecolor si_colorhistogram si_featurelist si_positionalcolor si_stillimage si_texture siblings sid sign sin size size_t sizes skip slave sleep smalldatetimefromparts smallfile snapshot some soname sort soundex 
source space sparse spfile split sql sql_big_result sql_buffer_result sql_cache sql_calc_found_rows sql_small_result sql_variant_property sqlcode sqldata sqlerror sqlname sqlstate sqrt square standalone standby start starting startup statement static statistics stats_binomial_test stats_crosstab stats_ks_test stats_mode stats_mw_test stats_one_way_anova stats_t_test_ stats_t_test_indep stats_t_test_one stats_t_test_paired stats_wsr_test status std stddev stddev_pop stddev_samp stdev stop storage store stored str str_to_date straight_join strcmp strict string struct stuff style subdate subpartition subpartitions substitutable substr substring subtime subtring_index subtype success sum suspend switch switchoffset switchover sync synchronous synonym sys sys_xmlagg sysasm sysaux sysdate sysdatetimeoffset sysdba sysoper system system_user sysutcdatetime t table tables tablespace tan tdo template temporary terminated tertiary_weights test than then thread through tier ties time time_format time_zone timediff timefromparts timeout timestamp timestampadd timestampdiff timezone_abbr timezone_minute timezone_region to to_base64 to_date to_days to_seconds todatetimeoffset trace tracking transaction transactional translate translation treat trigger trigger_nestlevel triggers trim truncate try_cast try_convert try_parse type ub1 ub2 ub4 ucase unarchived unbounded uncompress under undo unhex unicode uniform uninstall union unique unix_timestamp unknown unlimited unlock unpivot unrecoverable unsafe unsigned until untrusted unusable unused update updated upgrade upped upper upsert url urowid usable usage use use_stored_outlines user user_data user_resources users using utc_date utc_timestamp uuid uuid_short validate validate_password_strength validation valist value values var var_samp varcharc vari varia variab variabl variable variables variance varp varraw varrawc varray verify version versions view virtual visible void wait wallet warning warnings week weekday weekofyear wellformed when whene whenev wheneve whenever where while whitespace with within without work wrapped xdb xml xmlagg xmlattributes xmlcast xmlcolattval xmlelement xmlexists xmlforest xmlindex xmlnamespaces xmlpi xmlquery xmlroot xmlschema xmlserialize xmltable xmltype xor year year_to_month years yearweek",literal:"true false null",built_in:"array bigint binary bit blob boolean char character date dec decimal float int int8 integer interval number numeric real record serial serial8 smallint text varchar varying void"},c:[{cN:"string",b:"'",e:"'",c:[e.BE,{b:"''"}]},{cN:"string",b:'"',e:'"',c:[e.BE,{b:'""'}]},{cN:"string",b:"`",e:"`",c:[e.BE]},e.CNM,e.CBCM,t]},e.CBCM,t]}});hljs.registerLanguage("gams",function(e){var s="abort acronym acronyms alias all and assign binary card diag display else1 eps eq equation equations file files for1 free ge gt if inf integer le loop lt maximizing minimizing model models na ne negative no not option options or ord parameter parameters positive prod putpage puttl repeat sameas scalar scalars semicont semiint set1 sets smax smin solve sos1 sos2 sum system table then until using variable variables while1 xor yes";return{aliases:["gms"],cI:!0,k:s,c:[{bK:"sets parameters variables equations",e:";",c:[{b:"/",e:"/",c:[e.NM]}]},{cN:"string",b:"\\*{3}",e:"\\*{3}"},e.NM,{cN:"number",b:"\\$[a-zA-Z0-9]+"}]}});hljs.registerLanguage("stan",function(e){return{c:[e.HCM,e.CLCM,e.CBCM,{b:e.UIR,l:e.UIR,k:{name:"for in while repeat until if then else",symbol:"bernoulli bernoulli_logit binomial binomial_logit beta_binomial 
hypergeometric categorical categorical_logit ordered_logistic neg_binomial neg_binomial_2 neg_binomial_2_log poisson poisson_log multinomial normal exp_mod_normal skew_normal student_t cauchy double_exponential logistic gumbel lognormal chi_square inv_chi_square scaled_inv_chi_square exponential inv_gamma weibull frechet rayleigh wiener pareto pareto_type_2 von_mises uniform multi_normal multi_normal_prec multi_normal_cholesky multi_gp multi_gp_cholesky multi_student_t gaussian_dlm_obs dirichlet lkj_corr lkj_corr_cholesky wishart inv_wishart","selector-tag":"int real vector simplex unit_vector ordered positive_ordered row_vector matrix cholesky_factor_corr cholesky_factor_cov corr_matrix cov_matrix",title:"functions model data parameters quantities transformed generated",literal:"true false"},r:0},{cN:"number",b:"0[xX][0-9a-fA-F]+[Li]?\\b",r:0},{cN:"number",b:"0[xX][0-9a-fA-F]+[Li]?\\b",r:0},{cN:"number",b:"\\d+(?:[eE][+\\-]?\\d*)?L\\b",r:0},{cN:"number",b:"\\d+\\.(?!\\d)(?:i\\b)?",r:0},{cN:"number",b:"\\d+(?:\\.\\d*)?(?:[eE][+\\-]?\\d*)?i?\\b",r:0},{cN:"number",b:"\\.\\d+(?:[eE][+\\-]?\\d*)?i?\\b",r:0}]}});hljs.registerLanguage("brainfuck",function(r){var n={cN:"literal",b:"[\\+\\-]",r:0};return{aliases:["bf"],c:[r.C("[^\\[\\]\\.,\\+\\-<> \r\n]","[\\[\\]\\.,\\+\\-<> \r\n]",{rE:!0,r:0}),{cN:"title",b:"[\\[\\]]",r:0},{cN:"string",b:"[\\.,]",r:0},{b:/\+\+|\-\-/,rB:!0,c:[n]},n]}});hljs.registerLanguage("http",function(e){var t="HTTP/[0-9\\.]+";return{aliases:["https"],i:"\\S",c:[{b:"^"+t,e:"$",c:[{cN:"number",b:"\\b\\d{3}\\b"}]},{b:"^[A-Z]+ (.*?) "+t+"$",rB:!0,e:"$",c:[{cN:"string",b:" ",e:" ",eB:!0,eE:!0},{b:t},{cN:"keyword",b:"[A-Z]+"}]},{cN:"attribute",b:"^\\w",e:": ",eE:!0,i:"\\n|\\s|=",starts:{e:"$",r:0}},{b:"\\n\\n",starts:{sL:[],eW:!0}}]}});hljs.registerLanguage("typescript",function(e){var r={keyword:"in if for while finally var new function do return void else break catch instanceof with throw case default try this switch continue typeof delete let yield const class public private protected get set super static implements enum export import declare type namespace abstract",literal:"true false null undefined NaN Infinity",built_in:"eval isFinite isNaN parseFloat parseInt decodeURI decodeURIComponent encodeURI encodeURIComponent escape unescape Object Function Boolean Error EvalError InternalError RangeError ReferenceError StopIteration SyntaxError TypeError URIError Number Math Date String RegExp Array Float32Array Float64Array Int16Array Int32Array Int8Array Uint16Array Uint32Array Uint8Array Uint8ClampedArray ArrayBuffer DataView JSON Intl arguments require module console window document any number boolean string void"};return{aliases:["ts"],k:r,c:[{cN:"meta",b:/^\s*['"]use strict['"]/},e.ASM,e.QSM,{cN:"string",b:"`",e:"`",c:[e.BE,{cN:"subst",b:"\\$\\{",e:"\\}"}]},e.CLCM,e.CBCM,{cN:"number",v:[{b:"\\b(0[bB][01]+)"},{b:"\\b(0[oO][0-7]+)"},{b:e.CNR}],r:0},{b:"("+e.RSR+"|\\b(case|return|throw)\\b)\\s*",k:"return throw case",c:[e.CLCM,e.CBCM,e.RM],r:0},{cN:"function",b:"function",e:/[\{;]/,eE:!0,k:r,c:["self",e.inherit(e.TM,{b:/[A-Za-z$_][0-9A-Za-z$_]*/}),{cN:"params",b:/\(/,e:/\)/,eB:!0,eE:!0,k:r,c:[e.CLCM,e.CBCM],i:/["'\(]/}],i:/\[|%/,r:0},{bK:"constructor",e:/\{/,eE:!0},{bK:"module",e:/\{/,eE:!0},{bK:"interface",e:/\{/,eE:!0,k:"interface extends"},{b:/\$[(.]/},{b:"\\."+e.IR,r:0}]}});hljs.registerLanguage("mipsasm",function(s){return{cI:!0,aliases:["mips"],l:"\\.?"+s.IR,k:{meta:".2byte .4byte .align .ascii .asciz .balign .byte .code .data .else .end .endif .endm .endr .equ .err .exitm 
.extern .global .hword .if .ifdef .ifndef .include .irp .long .macro .rept .req .section .set .skip .space .text .word .ltorg ",built_in:"$0 $1 $2 $3 $4 $5 $6 $7 $8 $9 $10 $11 $12 $13 $14 $15 $16 $17 $18 $19 $20 $21 $22 $23 $24 $25 $26 $27 $28 $29 $30 $31 zero at v0 v1 a0 a1 a2 a3 a4 a5 a6 a7 t0 t1 t2 t3 t4 t5 t6 t7 t8 t9 s0 s1 s2 s3 s4 s5 s6 s7 s8 k0 k1 gp sp fp ra $f0 $f1 $f2 $f2 $f4 $f5 $f6 $f7 $f8 $f9 $f10 $f11 $f12 $f13 $f14 $f15 $f16 $f17 $f18 $f19 $f20 $f21 $f22 $f23 $f24 $f25 $f26 $f27 $f28 $f29 $f30 $f31 Context Random EntryLo0 EntryLo1 Context PageMask Wired EntryHi HWREna BadVAddr Count Compare SR IntCtl SRSCtl SRSMap Cause EPC PRId EBase Config Config1 Config2 Config3 LLAddr Debug DEPC DESAVE CacheErr ECC ErrorEPC TagLo DataLo TagHi DataHi WatchLo WatchHi PerfCtl PerfCnt "},c:[{cN:"keyword",b:"\\b(addi?u?|andi?|b(al)?|beql?|bgez(al)?l?|bgtzl?|blezl?|bltz(al)?l?|bnel?|cl[oz]|divu?|ext|ins|j(al)?|jalr(.hb)?|jr(.hb)?|lbu?|lhu?|ll|lui|lw[lr]?|maddu?|mfhi|mflo|movn|movz|msubu?|mthi|mtlo|mul|multu?|nop|nor|ori?|rotrv?|sb|sc|se[bh]|sh|sllv?|slti?u?|srav?|srlv?|subu?|sw[lr]?|xori?|wsbh|abs.[sd]|add.[sd]|alnv.ps|bc1[ft]l?|c.(s?f|un|u?eq|[ou]lt|[ou]le|ngle?|seq|l[et]|ng[et]).[sd]|(ceil|floor|round|trunc).[lw].[sd]|cfc1|cvt.d.[lsw]|cvt.l.[dsw]|cvt.ps.s|cvt.s.[dlw]|cvt.s.p[lu]|cvt.w.[dls]|div.[ds]|ldx?c1|luxc1|lwx?c1|madd.[sd]|mfc1|mov[fntz]?.[ds]|msub.[sd]|mth?c1|mul.[ds]|neg.[ds]|nmadd.[ds]|nmsub.[ds]|p[lu][lu].ps|recip.fmt|r?sqrt.[ds]|sdx?c1|sub.[ds]|suxc1|swx?c1|break|cache|d?eret|[de]i|ehb|mfc0|mtc0|pause|prefx?|rdhwr|rdpgpr|sdbbp|ssnop|synci?|syscall|teqi?|tgei?u?|tlb(p|r|w[ir])|tlti?u?|tnei?|wait|wrpgpr)",e:"\\s"},s.C("[;#]","$"),s.CBCM,s.QSM,{cN:"string",b:"'",e:"[^\\\\]'",r:0},{cN:"title",b:"\\|",e:"\\|",i:"\\n",r:0},{cN:"number",v:[{b:"0x[0-9a-f]+"},{b:"\\b-?\\d+"}],r:0},{cN:"symbol",v:[{b:"^\\s*[a-z_\\.\\$][a-z0-9_\\.\\$]+:"},{b:"^\\s*[0-9]+:"},{b:"[0-9]+[bf]"}],r:0}],i:"/"}});hljs.registerLanguage("nimrod",function(t){return{aliases:["nim"],k:{keyword:"addr and as asm bind block break|0 case|0 cast const|0 continue|0 converter discard distinct|10 div do elif else|0 end|0 enum|0 except export finally for from generic if|0 import|0 in include|0 interface is isnot|10 iterator|10 let|0 macro method|10 mixin mod nil not notin|10 object|0 of or out proc|10 ptr raise ref|10 return shl shr static template try|0 tuple type|0 using|0 var|0 when while|0 with without xor yield",literal:"shared guarded stdin stdout stderr result|10 true false"},c:[{cN:"meta",b:/{\./,e:/\.}/,r:10},{cN:"string",b:/[a-zA-Z]\w*"/,e:/"/,c:[{b:/""/}]},{cN:"string",b:/([a-zA-Z]\w*)?"""/,e:/"""/},t.QSM,{cN:"type",b:/\b[A-Z]\w+\b/,r:0},{cN:"built_in",b:/\b(int|int8|int16|int32|int64|uint|uint8|uint16|uint32|uint64|float|float32|float64|bool|char|string|cstring|pointer|expr|stmt|void|auto|any|range|array|openarray|varargs|seq|set|clong|culong|cchar|cschar|cshort|cint|csize|clonglong|cfloat|cdouble|clongdouble|cuchar|cushort|cuint|culonglong|cstringarray|semistatic)\b/},{cN:"number",r:0,v:[{b:/\b(0[xX][0-9a-fA-F][_0-9a-fA-F]*)('?[iIuU](8|16|32|64))?/},{b:/\b(0o[0-7][_0-7]*)('?[iIuUfF](8|16|32|64))?/},{b:/\b(0(b|B)[01][_01]*)('?[iIuUfF](8|16|32|64))?/},{b:/\b(\d[_\d]*)('?[iIuUfF](8|16|32|64))?/}]},t.HCM]}});hljs.registerLanguage("accesslog",function(T){return{c:[{cN:"number",b:"\\b\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}(:\\d{1,5})?\\b"},{cN:"number",b:"\\b\\d+\\b",r:0},{cN:"string",b:'"(GET|POST|HEAD|PUT|DELETE|CONNECT|OPTIONS|PATCH|TRACE)',e:'"',k:"GET POST HEAD PUT DELETE CONNECT OPTIONS PATCH 
TRACE",i:"\\n",r:10},{cN:"string",b:/\[/,e:/\]/,i:"\\n"},{cN:"string",b:'"',e:'"',i:"\\n"}]}});hljs.registerLanguage("django",function(e){var t={b:/\|[A-Za-z]+:?/,k:{name:"truncatewords removetags linebreaksbr yesno get_digit timesince random striptags filesizeformat escape linebreaks length_is ljust rjust cut urlize fix_ampersands title floatformat capfirst pprint divisibleby add make_list unordered_list urlencode timeuntil urlizetrunc wordcount stringformat linenumbers slice date dictsort dictsortreversed default_if_none pluralize lower join center default truncatewords_html upper length phone2numeric wordwrap time addslashes slugify first escapejs force_escape iriencode last safe safeseq truncatechars localize unlocalize localtime utc timezone"},c:[e.QSM,e.ASM]};return{aliases:["jinja"],cI:!0,sL:"xml",c:[e.C(/\{%\s*comment\s*%}/,/\{%\s*endcomment\s*%}/),e.C(/\{#/,/#}/),{cN:"template-tag",b:/\{%/,e:/%}/,c:[{cN:"name",b:/\w+/,k:{name:"comment endcomment load templatetag ifchanged endifchanged if endif firstof for endfor ifnotequal endifnotequal widthratio extends include spaceless endspaceless regroup ifequal endifequal ssi now with cycle url filter endfilter debug block endblock else autoescape endautoescape csrf_token empty elif endwith static trans blocktrans endblocktrans get_static_prefix get_media_prefix plural get_current_language language get_available_languages get_current_language_bidi get_language_info get_language_info_list localize endlocalize localtime endlocaltime timezone endtimezone get_current_timezone verbatim"},starts:{eW:!0,k:"in by as",c:[t],r:0}}]},{cN:"template-variable",b:/\{\{/,e:/}}/,c:[t]}]}});hljs.registerLanguage("irpf90",function(e){var t={cN:"params",b:"\\(",e:"\\)"},n={literal:".False. .True.",keyword:"kind do while private call intrinsic where elsewhere type endtype endmodule endselect endinterface end enddo endif if forall endforall only contains default return stop then public subroutine|10 function program .and. .or. .not. .le. .eq. .ge. .gt. .lt. 
goto save else use module select case access blank direct exist file fmt form formatted iostat name named nextrec number opened rec recl sequential status unformatted unit continue format pause cycle exit c_null_char c_alert c_backspace c_form_feed flush wait decimal round iomsg synchronous nopass non_overridable pass protected volatile abstract extends import non_intrinsic value deferred generic final enumerator class associate bind enum c_int c_short c_long c_long_long c_signed_char c_size_t c_int8_t c_int16_t c_int32_t c_int64_t c_int_least8_t c_int_least16_t c_int_least32_t c_int_least64_t c_int_fast8_t c_int_fast16_t c_int_fast32_t c_int_fast64_t c_intmax_t C_intptr_t c_float c_double c_long_double c_float_complex c_double_complex c_long_double_complex c_bool c_char c_null_ptr c_null_funptr c_new_line c_carriage_return c_horizontal_tab c_vertical_tab iso_c_binding c_loc c_funloc c_associated c_f_pointer c_ptr c_funptr iso_fortran_env character_storage_size error_unit file_storage_size input_unit iostat_end iostat_eor numeric_storage_size output_unit c_f_procpointer ieee_arithmetic ieee_support_underflow_control ieee_get_underflow_mode ieee_set_underflow_mode newunit contiguous recursive pad position action delim readwrite eor advance nml interface procedure namelist include sequence elemental pure integer real character complex logical dimension allocatable|10 parameter external implicit|10 none double precision assign intent optional pointer target in out common equivalence data begin_provider &begin_provider end_provider begin_shell end_shell begin_template end_template subst assert touch soft_touch provide no_dep free irp_if irp_else irp_endif irp_write irp_read",built_in:"alog alog10 amax0 amax1 amin0 amin1 amod cabs ccos cexp clog csin csqrt dabs dacos dasin datan datan2 dcos dcosh ddim dexp dint dlog dlog10 dmax1 dmin1 dmod dnint dsign dsin dsinh dsqrt dtan dtanh float iabs idim idint idnint ifix isign max0 max1 min0 min1 sngl algama cdabs cdcos cdexp cdlog cdsin cdsqrt cqabs cqcos cqexp cqlog cqsin cqsqrt dcmplx dconjg derf derfc dfloat dgamma dimag dlgama iqint qabs qacos qasin qatan qatan2 qcmplx qconjg qcos qcosh qdim qerf qerfc qexp qgamma qimag qlgama qlog qlog10 qmax1 qmin1 qmod qnint qsign qsin qsinh qsqrt qtan qtanh abs acos aimag aint anint asin atan atan2 char cmplx conjg cos cosh exp ichar index int log log10 max min nint sign sin sinh sqrt tan tanh print write dim lge lgt lle llt mod nullify allocate deallocate adjustl adjustr all allocated any associated bit_size btest ceiling count cshift date_and_time digits dot_product eoshift epsilon exponent floor fraction huge iand ibclr ibits ibset ieor ior ishft ishftc lbound len_trim matmul maxexponent maxloc maxval merge minexponent minloc minval modulo mvbits nearest pack present product radix random_number random_seed range repeat reshape rrspacing scale scan selected_int_kind selected_real_kind set_exponent shape size spacing spread sum system_clock tiny transpose trim ubound unpack verify achar iachar transfer dble entry dprod cpu_time command_argument_count get_command get_command_argument get_environment_variable is_iostat_end ieee_arithmetic ieee_support_underflow_control ieee_get_underflow_mode ieee_set_underflow_mode is_iostat_eor move_alloc new_line selected_char_kind same_type_as extends_type_ofacosh asinh atanh bessel_j0 bessel_j1 bessel_jn bessel_y0 bessel_y1 bessel_yn erf erfc erfc_scaled gamma log_gamma hypot norm2 atomic_define atomic_ref execute_command_line leadz trailz storage_size merge_bits bge bgt 
ble blt dshiftl dshiftr findloc iall iany iparity image_index lcobound ucobound maskl maskr num_images parity popcnt poppar shifta shiftl shiftr this_image IRP_ALIGN irp_here"};return{cI:!0,k:n,i:/\/\*/,c:[e.inherit(e.ASM,{cN:"string",r:0}),e.inherit(e.QSM,{cN:"string",r:0}),{cN:"function",bK:"subroutine function program",i:"[${=\\n]",c:[e.UTM,t]},e.C("!","$",{r:0}),e.C("begin_doc","end_doc",{r:10}),{cN:"number",b:"(?=\\b|\\+|\\-|\\.)(?=\\.\\d|\\d)(?:\\d+)?(?:\\.?\\d*)(?:[de][+-]?\\d+)?\\b\\.?",r:0}]}});hljs.registerLanguage("fsharp",function(e){var t={b:"<",e:">",c:[e.inherit(e.TM,{b:/'[a-zA-Z0-9_]+/})]};return{aliases:["fs"],k:"abstract and as assert base begin class default delegate do done downcast downto elif else end exception extern false finally for fun function global if in inherit inline interface internal lazy let match member module mutable namespace new null of open or override private public rec return sig static struct then to true try type upcast use val void when while with yield",i:/\/\*/,c:[{cN:"keyword",b:/\b(yield|return|let|do)!/},{cN:"string",b:'@"',e:'"',c:[{b:'""'}]},{cN:"string",b:'"""',e:'"""'},e.C("\\(\\*","\\*\\)"),{cN:"class",bK:"type",e:"\\(|=|$",eE:!0,c:[e.UTM,t]},{cN:"meta",b:"\\[<",e:">\\]",r:10},{cN:"symbol",b:"\\B('[A-Za-z])\\b",c:[e.BE]},e.CLCM,e.inherit(e.QSM,{i:null}),e.CNM]}});hljs.registerLanguage("go",function(e){var t={keyword:"break default func interface select case map struct chan else goto package switch const fallthrough if range type continue for import return var go defer bool byte complex64 complex128 float32 float64 int8 int16 int32 int64 string uint8 uint16 uint32 uint64 int uint uintptr rune",literal:"true false iota nil",built_in:"append cap close complex copy imag len make new panic print println real recover delete"};return{aliases:["golang"],k:t,i:"",c:[e.CLCM,e.CBCM,e.QSM,{cN:"string",b:"'",e:"[^\\\\]'"},{cN:"string",b:"`",e:"`"},{cN:"number",b:e.CNR+"[dflsi]?",r:0},e.CNM]}});hljs.registerLanguage("apache",function(e){var r={cN:"number",b:"[\\$%]\\d+"};return{aliases:["apacheconf"],cI:!0,c:[e.HCM,{cN:"section",b:"?",e:">"},{cN:"attribute",b:/\w+/,r:0,k:{nomarkup:"order deny allow setenv rewriterule rewriteengine rewritecond documentroot sethandler errordocument loadmodule options header listen serverroot servername"},starts:{e:/$/,r:0,k:{literal:"on off all"},c:[{cN:"meta",b:"\\s\\[",e:"\\]$"},{cN:"variable",b:"[\\$%]\\{",e:"\\}",c:["self",r]},r,e.QSM]}}],i:/\S/}});hljs.registerLanguage("clojure",function(e){var t={"builtin-name":"def defonce cond apply if-not if-let if not not= = < > <= >= == + / * - rem quot neg? pos? delay? symbol? keyword? true? false? integer? empty? coll? list? set? ifn? fn? associative? sequential? sorted? counted? reversible? number? decimal? class? distinct? isa? float? rational? reduced? ratio? odd? even? char? seq? vector? string? map? nil? contains? zero? instance? not-every? not-any? libspec? -> ->> .. . inc compare do dotimes mapcat take remove take-while drop letfn drop-last take-last drop-while while intern condp case reduced cycle split-at split-with repeat replicate iterate range merge zipmap declare line-seq sort comparator sort-by dorun doall nthnext nthrest partition eval doseq await await-for let agent atom send send-off release-pending-sends add-watch mapv filterv remove-watch agent-error restart-agent set-error-handler error-handler set-error-mode! 
error-mode shutdown-agents quote var fn loop recur throw try monitor-enter monitor-exit defmacro defn defn- macroexpand macroexpand-1 for dosync and or when when-not when-let comp juxt partial sequence memoize constantly complement identity assert peek pop doto proxy defstruct first rest cons defprotocol cast coll deftype defrecord last butlast sigs reify second ffirst fnext nfirst nnext defmulti defmethod meta with-meta ns in-ns create-ns import refer keys select-keys vals key val rseq name namespace promise into transient persistent! conj! assoc! dissoc! pop! disj! use class type num float double short byte boolean bigint biginteger bigdec print-method print-dup throw-if printf format load compile get-in update-in pr pr-on newline flush read slurp read-line subvec with-open memfn time re-find re-groups rand-int rand mod locking assert-valid-fdecl alias resolve ref deref refset swap! reset! set-validator! compare-and-set! alter-meta! reset-meta! commute get-validator alter ref-set ref-history-count ref-min-history ref-max-history ensure sync io! new next conj set! to-array future future-call into-array aset gen-class reduce map filter find empty hash-map hash-set sorted-map sorted-map-by sorted-set sorted-set-by vec vector seq flatten reverse assoc dissoc list disj get union difference intersection extend extend-type extend-protocol int nth delay count concat chunk chunk-buffer chunk-append chunk-first chunk-rest max min dec unchecked-inc-int unchecked-inc unchecked-dec-inc unchecked-dec unchecked-negate unchecked-add-int unchecked-add unchecked-subtract-int unchecked-subtract chunk-next chunk-cons chunked-seq? prn vary-meta lazy-seq spread list* str find-keyword keyword symbol gensym force rationalize"},r="a-zA-Z_\\-!.?+*=<>'",n="["+r+"]["+r+"0-9/;:]*",a="[-+]?\\d+(\\.\\d+)?",o={b:n,r:0},s={cN:"number",b:a,r:0},i=e.inherit(e.QSM,{i:null}),c=e.C(";","$",{r:0}),d={cN:"literal",b:/\b(true|false|nil)\b/},l={b:"[\\[\\{]",e:"[\\]\\}]"},m={cN:"comment",b:"\\^"+n},p=e.C("\\^\\{","\\}"),u={cN:"symbol",b:"[:]"+n},f={b:"\\(",e:"\\)"},h={eW:!0,r:0},y={k:t,l:n,cN:"name",b:n,starts:h},b=[f,i,m,p,c,u,l,s,d,o];return f.c=[e.C("comment",""),y,h],h.c=b,l.c=b,{aliases:["clj"],i:/\S/,c:[f,i,m,p,c,u,l,s,d]}});hljs.registerLanguage("golo",function(e){return{k:{keyword:"println readln print import module function local return let var while for foreach times in case when match with break continue augment augmentation each find filter reduce if then else otherwise try catch finally raise throw orIfNull DynamicObject|10 DynamicVariable struct Observable map set vector list array",literal:"true false null"},c:[e.HCM,e.QSM,e.CNM,{cN:"meta",b:"@[A-Za-z]+"}]}});hljs.registerLanguage("monkey",function(e){var n={cN:"number",r:0,v:[{b:"[$][a-fA-F0-9]+"},e.NM]};return{cI:!0,k:{keyword:"public private property continue exit extern new try catch eachin not abstract final select case default const local global field end if then else elseif endif while wend repeat until forever for to step next return module inline throw import",built_in:"DebugLog DebugStop Error Print ACos ACosr ASin ASinr ATan ATan2 ATan2r ATanr Abs Abs Ceil Clamp Clamp Cos Cosr Exp Floor Log Max Max Min Min Pow Sgn Sgn Sin Sinr Sqrt Tan Tanr Seed PI HALFPI TWOPI",literal:"true false null and or shl shr mod"},i:/\/\*/,c:[e.C("#rem","#end"),e.C("'","$",{r:0}),{cN:"function",bK:"function method",e:"[(=:]|$",i:/\n/,c:[e.UTM]},{cN:"class",bK:"class interface",e:"$",c:[{bK:"extends 
implements"},e.UTM]},{cN:"built_in",b:"\\b(self|super)\\b"},{cN:"meta",b:"\\s*#",e:"$",k:{"meta-keyword":"if else elseif endif end then"}},{cN:"meta",b:"^\\s*strict\\b"},{bK:"alias",e:"=",c:[e.UTM]},e.QSM,n]}});hljs.registerLanguage("cal",function(e){var r="div mod in and or not xor asserterror begin case do downto else end exit for if of repeat then to until while with var",t="false true",c=[e.CLCM,e.C(/\{/,/\}/,{r:0}),e.C(/\(\*/,/\*\)/,{r:10})],n={cN:"string",b:/'/,e:/'/,c:[{b:/''/}]},o={cN:"string",b:/(#\d+)+/},a={cN:"number",b:"\\b\\d+(\\.\\d+)?(DT|D|T)",r:0},i={cN:"string",b:'"',e:'"'},d={cN:"function",bK:"procedure",e:/[:;]/,k:"procedure|10",c:[e.TM,{cN:"params",b:/\(/,e:/\)/,k:r,c:[n,o]}].concat(c)},s={cN:"class",b:"OBJECT (Table|Form|Report|Dataport|Codeunit|XMLport|MenuSuite|Page|Query) (\\d+) ([^\\r\\n]+)",rB:!0,c:[e.TM,d]};return{cI:!0,k:{keyword:r,literal:t},i:/\/\*/,c:[n,o,a,i,e.NM,s,d]}});hljs.registerLanguage("sml",function(e){return{aliases:["ml"],k:{keyword:"abstype and andalso as case datatype do else end eqtype exception fn fun functor handle if in include infix infixr let local nonfix of op open orelse raise rec sharing sig signature struct structure then type val with withtype where while",built_in:"array bool char exn int list option order real ref string substring vector unit word",literal:"true false NONE SOME LESS EQUAL GREATER nil"},i:/\/\/|>>/,l:"[a-z_]\\w*!?",c:[{cN:"literal",b:"\\[(\\|\\|)?\\]|\\(\\)"},e.C("\\(\\*","\\*\\)",{c:["self"]}),{cN:"symbol",b:"'[A-Za-z_](?!')[\\w']*"},{cN:"type",b:"`[A-Z][\\w']*"},{cN:"type",b:"\\b[A-Z][\\w']*",r:0},{b:"[a-z_]\\w*'[\\w']*"},e.inherit(e.ASM,{cN:"string",r:0}),e.inherit(e.QSM,{i:null}),{cN:"number",b:"\\b(0[xX][a-fA-F0-9_]+[Lln]?|0[oO][0-7_]+[Lln]?|0[bB][01_]+[Lln]?|[0-9][0-9_]*([Lln]|(\\.[0-9_]*)?([eE][-+]?[0-9_]+)?)?)",r:0},{b:/[-=]>/}]}});hljs.registerLanguage("rust",function(e){var t="([uif](8|16|32|64|size))?",r=e.inherit(e.CBCM);r.c.push("self");var n="Copy Send Sized Sync Drop Fn FnMut FnOnce drop Box ToOwned Clone PartialEq PartialOrd Eq Ord AsRef AsMut Into From Default Iterator Extend IntoIterator DoubleEndedIterator ExactSizeIterator Option Result SliceConcatExt String ToString Vec assert! assert_eq! bitflags! bytes! cfg! col! concat! concat_idents! debug_assert! debug_assert_eq! env! panic! file! format! format_args! include_bin! include_str! line! local_data_key! module_path! option_env! print! println! select! stringify! try! unimplemented! unreachable! vec! write! 
writeln!";return{aliases:["rs"],k:{keyword:"alignof as be box break const continue crate do else enum extern false fn for if impl in let loop match mod mut offsetof once priv proc pub pure ref return self Self sizeof static struct super trait true type typeof unsafe unsized use virtual while where yield int i8 i16 i32 i64 uint u8 u32 u64 float f32 f64 str char bool",literal:"true false Some None Ok Err",built_in:n},l:e.IR+"!?",i:"",c:[e.CLCM,r,e.inherit(e.QSM,{b:/b?"/,i:null}),{cN:"string",v:[{b:/r(#*)".*?"\1(?!#)/},{b:/b?'\\?(x\w{2}|u\w{4}|U\w{8}|.)'/}]},{cN:"symbol",b:/'[a-zA-Z_][a-zA-Z0-9_]*/},{cN:"number",v:[{b:"\\b0b([01_]+)"+t},{b:"\\b0o([0-7_]+)"+t},{b:"\\b0x([A-Fa-f0-9_]+)"+t},{b:"\\b(\\d[\\d_]*(\\.[0-9_]+)?([eE][+-]?[0-9_]+)?)"+t}],r:0},{cN:"function",bK:"fn",e:"(\\(|<)",eE:!0,c:[e.UTM]},{cN:"meta",b:"#\\!?\\[",e:"\\]",c:[{cN:"meta-string",b:/"/,e:/"/}]},{cN:"class",bK:"type",e:"(=|<)",c:[e.UTM],i:"\\S"},{cN:"class",bK:"trait enum struct",e:"{",c:[e.inherit(e.UTM,{endsParent:!0})],i:"[\\w\\d]"},{b:e.IR+"::",k:{built_in:n}},{b:"->"}]}});hljs.registerLanguage("livescript",function(e){var t={keyword:"in if for while finally new do return else break catch instanceof throw try this switch continue typeof delete debugger case default function var with then unless until loop of by when and or is isnt not it that otherwise from to til fallthrough super case default function var void const let enum export import native __hasProp __extends __slice __bind __indexOf",literal:"true false null undefined yes no on off it that void",built_in:"npm require console print module global window document"},s="[A-Za-z$_](?:-[0-9A-Za-z$_]|[0-9A-Za-z$_])*",n=e.inherit(e.TM,{b:s}),i={cN:"subst",b:/#\{/,e:/}/,k:t},r={cN:"subst",b:/#[A-Za-z$_]/,e:/(?:\-[0-9A-Za-z$_]|[0-9A-Za-z$_])*/,k:t},c=[e.BNM,{cN:"number",b:"(\\b0[xX][a-fA-F0-9_]+)|(\\b\\d(\\d|_\\d)*(\\.(\\d(\\d|_\\d)*)?)?(_*[eE]([-+]\\d(_\\d|\\d)*)?)?[_a-z]*)",r:0,starts:{e:"(\\s*/)?",r:0}},{cN:"string",v:[{b:/'''/,e:/'''/,c:[e.BE]},{b:/'/,e:/'/,c:[e.BE]},{b:/"""/,e:/"""/,c:[e.BE,i,r]},{b:/"/,e:/"/,c:[e.BE,i,r]},{b:/\\/,e:/(\s|$)/,eE:!0}]},{cN:"regexp",v:[{b:"//",e:"//[gim]*",c:[i,e.HCM]},{b:/\/(?![ *])(\\\/|.)*?\/[gim]*(?=\W|$)/}]},{b:"@"+s},{b:"``",e:"``",eB:!0,eE:!0,sL:"javascript"}];i.c=c;var a={cN:"params",b:"\\(",rB:!0,c:[{b:/\(/,e:/\)/,k:t,c:["self"].concat(c)}]};return{aliases:["ls"],k:t,i:/\/\*/,c:c.concat([e.C("\\/\\*","\\*\\/"),e.HCM,{cN:"function",c:[n,a],rB:!0,v:[{b:"("+s+"\\s*(?:=|:=)\\s*)?(\\(.*\\))?\\s*\\B\\->\\*?",e:"\\->\\*?"},{b:"("+s+"\\s*(?:=|:=)\\s*)?!?(\\(.*\\))?\\s*\\B[-~]{1,2}>\\*?",e:"[-~]{1,2}>\\*?"},{b:"("+s+"\\s*(?:=|:=)\\s*)?(\\(.*\\))?\\s*\\B!?[-~]{1,2}>\\*?",e:"!?[-~]{1,2}>\\*?"}]},{cN:"class",bK:"class",e:"$",i:/[:="\[\]]/,c:[{bK:"extends",eW:!0,i:/[:="\[\]]/,c:[n]},n]},{b:s+":",e:":",rB:!0,rE:!0,r:0}])}});hljs.registerLanguage("x86asm",function(s){return{cI:!0,l:"[.%]?"+s.IR,k:{keyword:"lock rep repe repz repne repnz xaquire xrelease bnd nobnd aaa aad aam aas adc add and arpl bb0_reset bb1_reset bound bsf bsr bswap bt btc btr bts call cbw cdq cdqe clc cld cli clts cmc cmp cmpsb cmpsd cmpsq cmpsw cmpxchg cmpxchg486 cmpxchg8b cmpxchg16b cpuid cpu_read cpu_write cqo cwd cwde daa das dec div dmint emms enter equ f2xm1 fabs fadd faddp fbld fbstp fchs fclex fcmovb fcmovbe fcmove fcmovnb fcmovnbe fcmovne fcmovnu fcmovu fcom fcomi fcomip fcomp fcompp fcos fdecstp fdisi fdiv fdivp fdivr fdivrp femms feni ffree ffreep fiadd ficom ficomp fidiv fidivr fild fimul fincstp finit fist fistp fisttp fisub fisubr fld fld1 fldcw fldenv 
fldl2e fldl2t fldlg2 fldln2 fldpi fldz fmul fmulp fnclex fndisi fneni fninit fnop fnsave fnstcw fnstenv fnstsw fpatan fprem fprem1 fptan frndint frstor fsave fscale fsetpm fsin fsincos fsqrt fst fstcw fstenv fstp fstsw fsub fsubp fsubr fsubrp ftst fucom fucomi fucomip fucomp fucompp fxam fxch fxtract fyl2x fyl2xp1 hlt ibts icebp idiv imul in inc incbin insb insd insw int int01 int1 int03 int3 into invd invpcid invlpg invlpga iret iretd iretq iretw jcxz jecxz jrcxz jmp jmpe lahf lar lds lea leave les lfence lfs lgdt lgs lidt lldt lmsw loadall loadall286 lodsb lodsd lodsq lodsw loop loope loopne loopnz loopz lsl lss ltr mfence monitor mov movd movq movsb movsd movsq movsw movsx movsxd movzx mul mwait neg nop not or out outsb outsd outsw packssdw packsswb packuswb paddb paddd paddsb paddsiw paddsw paddusb paddusw paddw pand pandn pause paveb pavgusb pcmpeqb pcmpeqd pcmpeqw pcmpgtb pcmpgtd pcmpgtw pdistib pf2id pfacc pfadd pfcmpeq pfcmpge pfcmpgt pfmax pfmin pfmul pfrcp pfrcpit1 pfrcpit2 pfrsqit1 pfrsqrt pfsub pfsubr pi2fd pmachriw pmaddwd pmagw pmulhriw pmulhrwa pmulhrwc pmulhw pmullw pmvgezb pmvlzb pmvnzb pmvzb pop popa popad popaw popf popfd popfq popfw por prefetch prefetchw pslld psllq psllw psrad psraw psrld psrlq psrlw psubb psubd psubsb psubsiw psubsw psubusb psubusw psubw punpckhbw punpckhdq punpckhwd punpcklbw punpckldq punpcklwd push pusha pushad pushaw pushf pushfd pushfq pushfw pxor rcl rcr rdshr rdmsr rdpmc rdtsc rdtscp ret retf retn rol ror rdm rsdc rsldt rsm rsts sahf sal salc sar sbb scasb scasd scasq scasw sfence sgdt shl shld shr shrd sidt sldt skinit smi smint smintold smsw stc std sti stosb stosd stosq stosw str sub svdc svldt svts swapgs syscall sysenter sysexit sysret test ud0 ud1 ud2b ud2 ud2a umov verr verw fwait wbinvd wrshr wrmsr xadd xbts xchg xlatb xlat xor cmove cmovz cmovne cmovnz cmova cmovnbe cmovae cmovnb cmovb cmovnae cmovbe cmovna cmovg cmovnle cmovge cmovnl cmovl cmovnge cmovle cmovng cmovc cmovnc cmovo cmovno cmovs cmovns cmovp cmovpe cmovnp cmovpo je jz jne jnz ja jnbe jae jnb jb jnae jbe jna jg jnle jge jnl jl jnge jle jng jc jnc jo jno js jns jpo jnp jpe jp sete setz setne setnz seta setnbe setae setnb setnc setb setnae setcset setbe setna setg setnle setge setnl setl setnge setle setng sets setns seto setno setpe setp setpo setnp addps addss andnps andps cmpeqps cmpeqss cmpleps cmpless cmpltps cmpltss cmpneqps cmpneqss cmpnleps cmpnless cmpnltps cmpnltss cmpordps cmpordss cmpunordps cmpunordss cmpps cmpss comiss cvtpi2ps cvtps2pi cvtsi2ss cvtss2si cvttps2pi cvttss2si divps divss ldmxcsr maxps maxss minps minss movaps movhps movlhps movlps movhlps movmskps movntps movss movups mulps mulss orps rcpps rcpss rsqrtps rsqrtss shufps sqrtps sqrtss stmxcsr subps subss ucomiss unpckhps unpcklps xorps fxrstor fxrstor64 fxsave fxsave64 xgetbv xsetbv xsave xsave64 xsaveopt xsaveopt64 xrstor xrstor64 prefetchnta prefetcht0 prefetcht1 prefetcht2 maskmovq movntq pavgb pavgw pextrw pinsrw pmaxsw pmaxub pminsw pminub pmovmskb pmulhuw psadbw pshufw pf2iw pfnacc pfpnacc pi2fw pswapd maskmovdqu clflush movntdq movnti movntpd movdqa movdqu movdq2q movq2dq paddq pmuludq pshufd pshufhw pshuflw pslldq psrldq psubq punpckhqdq punpcklqdq addpd addsd andnpd andpd cmpeqpd cmpeqsd cmplepd cmplesd cmpltpd cmpltsd cmpneqpd cmpneqsd cmpnlepd cmpnlesd cmpnltpd cmpnltsd cmpordpd cmpordsd cmpunordpd cmpunordsd cmppd comisd cvtdq2pd cvtdq2ps cvtpd2dq cvtpd2pi cvtpd2ps cvtpi2pd cvtps2dq cvtps2pd cvtsd2si cvtsd2ss cvtsi2sd cvtss2sd cvttpd2pi cvttpd2dq cvttps2dq cvttsd2si divpd divsd maxpd 
maxsd minpd minsd movapd movhpd movlpd movmskpd movupd mulpd mulsd orpd shufpd sqrtpd sqrtsd subpd subsd ucomisd unpckhpd unpcklpd xorpd addsubpd addsubps haddpd haddps hsubpd hsubps lddqu movddup movshdup movsldup clgi stgi vmcall vmclear vmfunc vmlaunch vmload vmmcall vmptrld vmptrst vmread vmresume vmrun vmsave vmwrite vmxoff vmxon invept invvpid pabsb pabsw pabsd palignr phaddw phaddd phaddsw phsubw phsubd phsubsw pmaddubsw pmulhrsw pshufb psignb psignw psignd extrq insertq movntsd movntss lzcnt blendpd blendps blendvpd blendvps dppd dpps extractps insertps movntdqa mpsadbw packusdw pblendvb pblendw pcmpeqq pextrb pextrd pextrq phminposuw pinsrb pinsrd pinsrq pmaxsb pmaxsd pmaxud pmaxuw pminsb pminsd pminud pminuw pmovsxbw pmovsxbd pmovsxbq pmovsxwd pmovsxwq pmovsxdq pmovzxbw pmovzxbd pmovzxbq pmovzxwd pmovzxwq pmovzxdq pmuldq pmulld ptest roundpd roundps roundsd roundss crc32 pcmpestri pcmpestrm pcmpistri pcmpistrm pcmpgtq popcnt getsec pfrcpv pfrsqrtv movbe aesenc aesenclast aesdec aesdeclast aesimc aeskeygenassist vaesenc vaesenclast vaesdec vaesdeclast vaesimc vaeskeygenassist vaddpd vaddps vaddsd vaddss vaddsubpd vaddsubps vandpd vandps vandnpd vandnps vblendpd vblendps vblendvpd vblendvps vbroadcastss vbroadcastsd vbroadcastf128 vcmpeq_ospd vcmpeqpd vcmplt_ospd vcmpltpd vcmple_ospd vcmplepd vcmpunord_qpd vcmpunordpd vcmpneq_uqpd vcmpneqpd vcmpnlt_uspd vcmpnltpd vcmpnle_uspd vcmpnlepd vcmpord_qpd vcmpordpd vcmpeq_uqpd vcmpnge_uspd vcmpngepd vcmpngt_uspd vcmpngtpd vcmpfalse_oqpd vcmpfalsepd vcmpneq_oqpd vcmpge_ospd vcmpgepd vcmpgt_ospd vcmpgtpd vcmptrue_uqpd vcmptruepd vcmplt_oqpd vcmple_oqpd vcmpunord_spd vcmpneq_uspd vcmpnlt_uqpd vcmpnle_uqpd vcmpord_spd vcmpeq_uspd vcmpnge_uqpd vcmpngt_uqpd vcmpfalse_ospd vcmpneq_ospd vcmpge_oqpd vcmpgt_oqpd vcmptrue_uspd vcmppd vcmpeq_osps vcmpeqps vcmplt_osps vcmpltps vcmple_osps vcmpleps vcmpunord_qps vcmpunordps vcmpneq_uqps vcmpneqps vcmpnlt_usps vcmpnltps vcmpnle_usps vcmpnleps vcmpord_qps vcmpordps vcmpeq_uqps vcmpnge_usps vcmpngeps vcmpngt_usps vcmpngtps vcmpfalse_oqps vcmpfalseps vcmpneq_oqps vcmpge_osps vcmpgeps vcmpgt_osps vcmpgtps vcmptrue_uqps vcmptrueps vcmplt_oqps vcmple_oqps vcmpunord_sps vcmpneq_usps vcmpnlt_uqps vcmpnle_uqps vcmpord_sps vcmpeq_usps vcmpnge_uqps vcmpngt_uqps vcmpfalse_osps vcmpneq_osps vcmpge_oqps vcmpgt_oqps vcmptrue_usps vcmpps vcmpeq_ossd vcmpeqsd vcmplt_ossd vcmpltsd vcmple_ossd vcmplesd vcmpunord_qsd vcmpunordsd vcmpneq_uqsd vcmpneqsd vcmpnlt_ussd vcmpnltsd vcmpnle_ussd vcmpnlesd vcmpord_qsd vcmpordsd vcmpeq_uqsd vcmpnge_ussd vcmpngesd vcmpngt_ussd vcmpngtsd vcmpfalse_oqsd vcmpfalsesd vcmpneq_oqsd vcmpge_ossd vcmpgesd vcmpgt_ossd vcmpgtsd vcmptrue_uqsd vcmptruesd vcmplt_oqsd vcmple_oqsd vcmpunord_ssd vcmpneq_ussd vcmpnlt_uqsd vcmpnle_uqsd vcmpord_ssd vcmpeq_ussd vcmpnge_uqsd vcmpngt_uqsd vcmpfalse_ossd vcmpneq_ossd vcmpge_oqsd vcmpgt_oqsd vcmptrue_ussd vcmpsd vcmpeq_osss vcmpeqss vcmplt_osss vcmpltss vcmple_osss vcmpless vcmpunord_qss vcmpunordss vcmpneq_uqss vcmpneqss vcmpnlt_usss vcmpnltss vcmpnle_usss vcmpnless vcmpord_qss vcmpordss vcmpeq_uqss vcmpnge_usss vcmpngess vcmpngt_usss vcmpngtss vcmpfalse_oqss vcmpfalsess vcmpneq_oqss vcmpge_osss vcmpgess vcmpgt_osss vcmpgtss vcmptrue_uqss vcmptruess vcmplt_oqss vcmple_oqss vcmpunord_sss vcmpneq_usss vcmpnlt_uqss vcmpnle_uqss vcmpord_sss vcmpeq_usss vcmpnge_uqss vcmpngt_uqss vcmpfalse_osss vcmpneq_osss vcmpge_oqss vcmpgt_oqss vcmptrue_usss vcmpss vcomisd vcomiss vcvtdq2pd vcvtdq2ps vcvtpd2dq vcvtpd2ps vcvtps2dq vcvtps2pd vcvtsd2si vcvtsd2ss vcvtsi2sd vcvtsi2ss 
vcvtss2sd vcvtss2si vcvttpd2dq vcvttps2dq vcvttsd2si vcvttss2si vdivpd vdivps vdivsd vdivss vdppd vdpps vextractf128 vextractps vhaddpd vhaddps vhsubpd vhsubps vinsertf128 vinsertps vlddqu vldqqu vldmxcsr vmaskmovdqu vmaskmovps vmaskmovpd vmaxpd vmaxps vmaxsd vmaxss vminpd vminps vminsd vminss vmovapd vmovaps vmovd vmovq vmovddup vmovdqa vmovqqa vmovdqu vmovqqu vmovhlps vmovhpd vmovhps vmovlhps vmovlpd vmovlps vmovmskpd vmovmskps vmovntdq vmovntqq vmovntdqa vmovntpd vmovntps vmovsd vmovshdup vmovsldup vmovss vmovupd vmovups vmpsadbw vmulpd vmulps vmulsd vmulss vorpd vorps vpabsb vpabsw vpabsd vpacksswb vpackssdw vpackuswb vpackusdw vpaddb vpaddw vpaddd vpaddq vpaddsb vpaddsw vpaddusb vpaddusw vpalignr vpand vpandn vpavgb vpavgw vpblendvb vpblendw vpcmpestri vpcmpestrm vpcmpistri vpcmpistrm vpcmpeqb vpcmpeqw vpcmpeqd vpcmpeqq vpcmpgtb vpcmpgtw vpcmpgtd vpcmpgtq vpermilpd vpermilps vperm2f128 vpextrb vpextrw vpextrd vpextrq vphaddw vphaddd vphaddsw vphminposuw vphsubw vphsubd vphsubsw vpinsrb vpinsrw vpinsrd vpinsrq vpmaddwd vpmaddubsw vpmaxsb vpmaxsw vpmaxsd vpmaxub vpmaxuw vpmaxud vpminsb vpminsw vpminsd vpminub vpminuw vpminud vpmovmskb vpmovsxbw vpmovsxbd vpmovsxbq vpmovsxwd vpmovsxwq vpmovsxdq vpmovzxbw vpmovzxbd vpmovzxbq vpmovzxwd vpmovzxwq vpmovzxdq vpmulhuw vpmulhrsw vpmulhw vpmullw vpmulld vpmuludq vpmuldq vpor vpsadbw vpshufb vpshufd vpshufhw vpshuflw vpsignb vpsignw vpsignd vpslldq vpsrldq vpsllw vpslld vpsllq vpsraw vpsrad vpsrlw vpsrld vpsrlq vptest vpsubb vpsubw vpsubd vpsubq vpsubsb vpsubsw vpsubusb vpsubusw vpunpckhbw vpunpckhwd vpunpckhdq vpunpckhqdq vpunpcklbw vpunpcklwd vpunpckldq vpunpcklqdq vpxor vrcpps vrcpss vrsqrtps vrsqrtss vroundpd vroundps vroundsd vroundss vshufpd vshufps vsqrtpd vsqrtps vsqrtsd vsqrtss vstmxcsr vsubpd vsubps vsubsd vsubss vtestps vtestpd vucomisd vucomiss vunpckhpd vunpckhps vunpcklpd vunpcklps vxorpd vxorps vzeroall vzeroupper pclmullqlqdq pclmulhqlqdq pclmullqhqdq pclmulhqhqdq pclmulqdq vpclmullqlqdq vpclmulhqlqdq vpclmullqhqdq vpclmulhqhqdq vpclmulqdq vfmadd132ps vfmadd132pd vfmadd312ps vfmadd312pd vfmadd213ps vfmadd213pd vfmadd123ps vfmadd123pd vfmadd231ps vfmadd231pd vfmadd321ps vfmadd321pd vfmaddsub132ps vfmaddsub132pd vfmaddsub312ps vfmaddsub312pd vfmaddsub213ps vfmaddsub213pd vfmaddsub123ps vfmaddsub123pd vfmaddsub231ps vfmaddsub231pd vfmaddsub321ps vfmaddsub321pd vfmsub132ps vfmsub132pd vfmsub312ps vfmsub312pd vfmsub213ps vfmsub213pd vfmsub123ps vfmsub123pd vfmsub231ps vfmsub231pd vfmsub321ps vfmsub321pd vfmsubadd132ps vfmsubadd132pd vfmsubadd312ps vfmsubadd312pd vfmsubadd213ps vfmsubadd213pd vfmsubadd123ps vfmsubadd123pd vfmsubadd231ps vfmsubadd231pd vfmsubadd321ps vfmsubadd321pd vfnmadd132ps vfnmadd132pd vfnmadd312ps vfnmadd312pd vfnmadd213ps vfnmadd213pd vfnmadd123ps vfnmadd123pd vfnmadd231ps vfnmadd231pd vfnmadd321ps vfnmadd321pd vfnmsub132ps vfnmsub132pd vfnmsub312ps vfnmsub312pd vfnmsub213ps vfnmsub213pd vfnmsub123ps vfnmsub123pd vfnmsub231ps vfnmsub231pd vfnmsub321ps vfnmsub321pd vfmadd132ss vfmadd132sd vfmadd312ss vfmadd312sd vfmadd213ss vfmadd213sd vfmadd123ss vfmadd123sd vfmadd231ss vfmadd231sd vfmadd321ss vfmadd321sd vfmsub132ss vfmsub132sd vfmsub312ss vfmsub312sd vfmsub213ss vfmsub213sd vfmsub123ss vfmsub123sd vfmsub231ss vfmsub231sd vfmsub321ss vfmsub321sd vfnmadd132ss vfnmadd132sd vfnmadd312ss vfnmadd312sd vfnmadd213ss vfnmadd213sd vfnmadd123ss vfnmadd123sd vfnmadd231ss vfnmadd231sd vfnmadd321ss vfnmadd321sd vfnmsub132ss vfnmsub132sd vfnmsub312ss vfnmsub312sd vfnmsub213ss vfnmsub213sd vfnmsub123ss vfnmsub123sd vfnmsub231ss 
vfnmsub231sd vfnmsub321ss vfnmsub321sd rdfsbase rdgsbase rdrand wrfsbase wrgsbase vcvtph2ps vcvtps2ph adcx adox rdseed clac stac xstore xcryptecb xcryptcbc xcryptctr xcryptcfb xcryptofb montmul xsha1 xsha256 llwpcb slwpcb lwpval lwpins vfmaddpd vfmaddps vfmaddsd vfmaddss vfmaddsubpd vfmaddsubps vfmsubaddpd vfmsubaddps vfmsubpd vfmsubps vfmsubsd vfmsubss vfnmaddpd vfnmaddps vfnmaddsd vfnmaddss vfnmsubpd vfnmsubps vfnmsubsd vfnmsubss vfrczpd vfrczps vfrczsd vfrczss vpcmov vpcomb vpcomd vpcomq vpcomub vpcomud vpcomuq vpcomuw vpcomw vphaddbd vphaddbq vphaddbw vphadddq vphaddubd vphaddubq vphaddubw vphaddudq vphadduwd vphadduwq vphaddwd vphaddwq vphsubbw vphsubdq vphsubwd vpmacsdd vpmacsdqh vpmacsdql vpmacssdd vpmacssdqh vpmacssdql vpmacsswd vpmacssww vpmacswd vpmacsww vpmadcsswd vpmadcswd vpperm vprotb vprotd vprotq vprotw vpshab vpshad vpshaq vpshaw vpshlb vpshld vpshlq vpshlw vbroadcasti128 vpblendd vpbroadcastb vpbroadcastw vpbroadcastd vpbroadcastq vpermd vpermpd vpermps vpermq vperm2i128 vextracti128 vinserti128 vpmaskmovd vpmaskmovq vpsllvd vpsllvq vpsravd vpsrlvd vpsrlvq vgatherdpd vgatherqpd vgatherdps vgatherqps vpgatherdd vpgatherqd vpgatherdq vpgatherqq xabort xbegin xend xtest andn bextr blci blcic blsi blsic blcfill blsfill blcmsk blsmsk blsr blcs bzhi mulx pdep pext rorx sarx shlx shrx tzcnt tzmsk t1mskc valignd valignq vblendmpd vblendmps vbroadcastf32x4 vbroadcastf64x4 vbroadcasti32x4 vbroadcasti64x4 vcompresspd vcompressps vcvtpd2udq vcvtps2udq vcvtsd2usi vcvtss2usi vcvttpd2udq vcvttps2udq vcvttsd2usi vcvttss2usi vcvtudq2pd vcvtudq2ps vcvtusi2sd vcvtusi2ss vexpandpd vexpandps vextractf32x4 vextractf64x4 vextracti32x4 vextracti64x4 vfixupimmpd vfixupimmps vfixupimmsd vfixupimmss vgetexppd vgetexpps vgetexpsd vgetexpss vgetmantpd vgetmantps vgetmantsd vgetmantss vinsertf32x4 vinsertf64x4 vinserti32x4 vinserti64x4 vmovdqa32 vmovdqa64 vmovdqu32 vmovdqu64 vpabsq vpandd vpandnd vpandnq vpandq vpblendmd vpblendmq vpcmpltd vpcmpled vpcmpneqd vpcmpnltd vpcmpnled vpcmpd vpcmpltq vpcmpleq vpcmpneqq vpcmpnltq vpcmpnleq vpcmpq vpcmpequd vpcmpltud vpcmpleud vpcmpnequd vpcmpnltud vpcmpnleud vpcmpud vpcmpequq vpcmpltuq vpcmpleuq vpcmpnequq vpcmpnltuq vpcmpnleuq vpcmpuq vpcompressd vpcompressq vpermi2d vpermi2pd vpermi2ps vpermi2q vpermt2d vpermt2pd vpermt2ps vpermt2q vpexpandd vpexpandq vpmaxsq vpmaxuq vpminsq vpminuq vpmovdb vpmovdw vpmovqb vpmovqd vpmovqw vpmovsdb vpmovsdw vpmovsqb vpmovsqd vpmovsqw vpmovusdb vpmovusdw vpmovusqb vpmovusqd vpmovusqw vpord vporq vprold vprolq vprolvd vprolvq vprord vprorq vprorvd vprorvq vpscatterdd vpscatterdq vpscatterqd vpscatterqq vpsraq vpsravq vpternlogd vpternlogq vptestmd vptestmq vptestnmd vptestnmq vpxord vpxorq vrcp14pd vrcp14ps vrcp14sd vrcp14ss vrndscalepd vrndscaleps vrndscalesd vrndscaless vrsqrt14pd vrsqrt14ps vrsqrt14sd vrsqrt14ss vscalefpd vscalefps vscalefsd vscalefss vscatterdpd vscatterdps vscatterqpd vscatterqps vshuff32x4 vshuff64x2 vshufi32x4 vshufi64x2 kandnw kandw kmovw knotw kortestw korw kshiftlw kshiftrw kunpckbw kxnorw kxorw vpbroadcastmb2q vpbroadcastmw2d vpconflictd vpconflictq vplzcntd vplzcntq vexp2pd vexp2ps vrcp28pd vrcp28ps vrcp28sd vrcp28ss vrsqrt28pd vrsqrt28ps vrsqrt28sd vrsqrt28ss vgatherpf0dpd vgatherpf0dps vgatherpf0qpd vgatherpf0qps vgatherpf1dpd vgatherpf1dps vgatherpf1qpd vgatherpf1qps vscatterpf0dpd vscatterpf0dps vscatterpf0qpd vscatterpf0qps vscatterpf1dpd vscatterpf1dps vscatterpf1qpd vscatterpf1qps prefetchwt1 bndmk bndcl bndcu bndcn bndmov bndldx bndstx sha1rnds4 sha1nexte sha1msg1 sha1msg2 sha256rnds2 
sha256msg1 sha256msg2 hint_nop0 hint_nop1 hint_nop2 hint_nop3 hint_nop4 hint_nop5 hint_nop6 hint_nop7 hint_nop8 hint_nop9 hint_nop10 hint_nop11 hint_nop12 hint_nop13 hint_nop14 hint_nop15 hint_nop16 hint_nop17 hint_nop18 hint_nop19 hint_nop20 hint_nop21 hint_nop22 hint_nop23 hint_nop24 hint_nop25 hint_nop26 hint_nop27 hint_nop28 hint_nop29 hint_nop30 hint_nop31 hint_nop32 hint_nop33 hint_nop34 hint_nop35 hint_nop36 hint_nop37 hint_nop38 hint_nop39 hint_nop40 hint_nop41 hint_nop42 hint_nop43 hint_nop44 hint_nop45 hint_nop46 hint_nop47 hint_nop48 hint_nop49 hint_nop50 hint_nop51 hint_nop52 hint_nop53 hint_nop54 hint_nop55 hint_nop56 hint_nop57 hint_nop58 hint_nop59 hint_nop60 hint_nop61 hint_nop62 hint_nop63",built_in:"ip eip rip al ah bl bh cl ch dl dh sil dil bpl spl r8b r9b r10b r11b r12b r13b r14b r15b ax bx cx dx si di bp sp r8w r9w r10w r11w r12w r13w r14w r15w eax ebx ecx edx esi edi ebp esp eip r8d r9d r10d r11d r12d r13d r14d r15d rax rbx rcx rdx rsi rdi rbp rsp r8 r9 r10 r11 r12 r13 r14 r15 cs ds es fs gs ss st st0 st1 st2 st3 st4 st5 st6 st7 mm0 mm1 mm2 mm3 mm4 mm5 mm6 mm7 xmm0 xmm1 xmm2 xmm3 xmm4 xmm5 xmm6 xmm7 xmm8 xmm9 xmm10 xmm11 xmm12 xmm13 xmm14 xmm15 xmm16 xmm17 xmm18 xmm19 xmm20 xmm21 xmm22 xmm23 xmm24 xmm25 xmm26 xmm27 xmm28 xmm29 xmm30 xmm31 ymm0 ymm1 ymm2 ymm3 ymm4 ymm5 ymm6 ymm7 ymm8 ymm9 ymm10 ymm11 ymm12 ymm13 ymm14 ymm15 ymm16 ymm17 ymm18 ymm19 ymm20 ymm21 ymm22 ymm23 ymm24 ymm25 ymm26 ymm27 ymm28 ymm29 ymm30 ymm31 zmm0 zmm1 zmm2 zmm3 zmm4 zmm5 zmm6 zmm7 zmm8 zmm9 zmm10 zmm11 zmm12 zmm13 zmm14 zmm15 zmm16 zmm17 zmm18 zmm19 zmm20 zmm21 zmm22 zmm23 zmm24 zmm25 zmm26 zmm27 zmm28 zmm29 zmm30 zmm31 k0 k1 k2 k3 k4 k5 k6 k7 bnd0 bnd1 bnd2 bnd3 cr0 cr1 cr2 cr3 cr4 cr8 dr0 dr1 dr2 dr3 dr8 tr3 tr4 tr5 tr6 tr7 r0 r1 r2 r3 r4 r5 r6 r7 r0b r1b r2b r3b r4b r5b r6b r7b r0w r1w r2w r3w r4w r5w r6w r7w r0d r1d r2d r3d r4d r5d r6d r7d r0h r1h r2h r3h r0l r1l r2l r3l r4l r5l r6l r7l r8l r9l r10l r11l r12l r13l r14l r15l db dw dd dq dt ddq do dy dz resb resw resd resq rest resdq reso resy resz incbin equ times byte word dword qword nosplit rel abs seg wrt strict near far a32 ptr",meta:"%define %xdefine %+ %undef %defstr %deftok %assign %strcat %strlen %substr %rotate %elif %else %endif %if %ifmacro %ifctx %ifidn %ifidni %ifid %ifnum %ifstr %iftoken %ifempty %ifenv %error %warning %fatal %rep %endrep %include %push %pop %repl %pathsearch %depend %use %arg %stacksize %local %line %comment %endcomment .nolist __FILE__ __LINE__ __SECT__ __BITS__ __OUTPUT_FORMAT__ __DATE__ __TIME__ __DATE_NUM__ __TIME_NUM__ __UTC_DATE__ __UTC_TIME__ __UTC_DATE_NUM__ __UTC_TIME_NUM__ __PASS__ struc endstruc istruc at iend align alignb sectalign daz nodaz up down zero default option assume public bits use16 use32 use64 default section segment absolute extern global common cpu float __utf16__ __utf16le__ __utf16be__ __utf32__ __utf32le__ __utf32be__ __float8__ __float16__ __float32__ __float64__ __float80m__ __float80e__ __float128l__ __float128h__ __Infinity__ __QNaN__ __SNaN__ Inf NaN QNaN SNaN float8 float16 float32 float64 float80m float80e float128l float128h __FLOAT_DAZ__ __FLOAT_ROUND__ 
__FLOAT__"},c:[s.C(";","$",{r:0}),{cN:"number",v:[{b:"\\b(?:([0-9][0-9_]*)?\\.[0-9_]*(?:[eE][+-]?[0-9_]+)?|(0[Xx])?[0-9][0-9_]*\\.?[0-9_]*(?:[pP](?:[+-]?[0-9_]+)?)?)\\b",r:0},{b:"\\$[0-9][0-9A-Fa-f]*",r:0},{b:"\\b(?:[0-9A-Fa-f][0-9A-Fa-f_]*[Hh]|[0-9][0-9_]*[DdTt]?|[0-7][0-7_]*[QqOo]|[0-1][0-1_]*[BbYy])\\b"},{b:"\\b(?:0[Xx][0-9A-Fa-f_]+|0[DdTt][0-9_]+|0[QqOo][0-7_]+|0[BbYy][0-1_]+)\\b"}]},s.QSM,{cN:"string",v:[{b:"'",e:"[^\\\\]'"},{b:"`",e:"[^\\\\]`"},{b:"\\.[A-Za-z0-9]+"}],r:0},{cN:"symbol",v:[{b:"^\\s*[A-Za-z._?][A-Za-z0-9_$#@~.?]*(:|\\s+label)"},{b:"^\\s*%%[A-Za-z0-9_$#@~.?]*:"}],r:0},{cN:"subst",b:"%[0-9]+",r:0},{cN:"subst",b:"%!S+",r:0}]}});hljs.registerLanguage("aspectj",function(e){var t="false synchronized int abstract float private char boolean static null if const for true while long throw strictfp finally protected import native final return void enum else extends implements break transient new catch instanceof byte super volatile case assert short package default double public try this switch continue throws privileged aspectOf adviceexecution proceed cflowbelow cflow initialization preinitialization staticinitialization withincode target within execution getWithinTypeName handler thisJoinPoint thisJoinPointStaticPart thisEnclosingJoinPointStaticPart declare parents warning error soft precedence thisAspectInstance",i="get set args call";return{k:t,i:/<\/|#/,c:[e.C("/\\*\\*","\\*/",{r:0,c:[{b:/\w+@/,r:0},{cN:"doctag",b:"@[A-Za-z]+"}]}),e.CLCM,e.CBCM,e.ASM,e.QSM,{cN:"class",bK:"aspect",e:/[{;=]/,eE:!0,i:/[:;"\[\]]/,c:[{bK:"extends implements pertypewithin perthis pertarget percflowbelow percflow issingleton"},e.UTM,{b:/\([^\)]*/,e:/[)]+/,k:t+" "+i,eE:!1}]},{cN:"class",bK:"class interface",e:/[{;=]/,eE:!0,r:0,k:"class interface",i:/[:"\[\]]/,c:[{bK:"extends implements"},e.UTM]},{bK:"pointcut after before around throwing returning",e:/[)]/,eE:!1,i:/["\[\]]/,c:[{b:e.UIR+"\\s*\\(",rB:!0,c:[e.UTM]}]},{b:/[:]/,rB:!0,e:/[{;]/,r:0,eE:!1,k:t,i:/["\[\]]/,c:[{b:e.UIR+"\\s*\\(",k:t+" "+i},e.QSM]},{bK:"new throw",r:0},{cN:"function",b:/\w+ +\w+(\.)?\w+\s*\([^\)]*\)\s*((throws)[\w\s,]+)?[\{;]/,rB:!0,e:/[{;=]/,k:t,eE:!0,c:[{b:e.UIR+"\\s*\\(",rB:!0,r:0,c:[e.UTM]},{cN:"params",b:/\(/,e:/\)/,r:0,k:t,c:[e.ASM,e.QSM,e.CNM,e.CBCM]},e.CLCM,e.CBCM]},e.CNM,{cN:"meta",b:"@[A-Za-z]+"}]}});hljs.registerLanguage("verilog",function(e){return{aliases:["v"],cI:!1,k:{keyword:"always and assign begin buf bufif0 bufif1 case casex casez cmos deassign default defparam disable edge else end endcase endfunction endmodule endprimitive endspecify endtable endtask event for force forever fork function if ifnone initial inout input join macromodule module nand negedge nmos nor not notif0 notif1 or output parameter pmos posedge primitive pulldown pullup rcmos release repeat rnmos rpmos rtran rtranif0 rtranif1 specify specparam table task timescale tran tranif0 tranif1 wait while xnor xor highz0 highz1 integer large medium pull0 pull1 real realtime reg scalared signed small strong0 strong1 supply0 supply0 supply1 supply1 time tri tri0 tri1 triand trior trireg vectored wand weak0 weak1 wire wor"},c:[e.CBCM,e.CLCM,e.QSM,{cN:"number",b:"(\\b((\\d'(b|h|o|d|B|H|O|D))[0-9xzXZa-fA-F_]+))|(\\B(('(b|h|o|d|B|H|O|D))[0-9xzXZa-fA-F_]+))|(\\b([0-9xzXZ_])+)",c:[e.BE],r:0},{cN:"variable",b:"#\\((?!parameter).+\\)"}]}});hljs.registerLanguage("autohotkey",function(e){var r={b:/`[\s\S]/};return{cI:!0,k:{keyword:"Break Continue Else Gosub If Loop Return While",literal:"A|0 true false NOT AND OR",built_in:"ComSpec Clipboard ClipboardAll 
ErrorLevel"},c:[{cN:"built_in",b:"A_[a-zA-Z0-9]+"},r,e.inherit(e.QSM,{c:[r]}),e.C(";","$",{r:0}),{cN:"number",b:e.NR,r:0},{cN:"variable",b:"%",e:"%",i:"\\n",c:[r]},{cN:"symbol",c:[r],v:[{b:'^[^\\n";]+::(?!=)'},{b:'^[^\\n";]+:(?!=)',r:0}]},{b:",\\s*,"}]}});hljs.registerLanguage("1c",function(c){var e="[a-zA-Zа-яА-Я][a-zA-Z0-9_а-яА-Я]*",n="возврат дата для если и или иначе иначеесли исключение конецесли конецпопытки конецпроцедуры конецфункции конеццикла константа не перейти перем перечисление по пока попытка прервать продолжить процедура строка тогда фс функция цикл число экспорт",b="ansitooem oemtoansi ввестивидсубконто ввестидату ввестизначение ввестиперечисление ввестипериод ввестиплансчетов ввестистроку ввестичисло вопрос восстановитьзначение врег выбранныйплансчетов вызватьисключение датагод датамесяц датачисло добавитьмесяц завершитьработусистемы заголовоксистемы записьжурналарегистрации запуститьприложение зафиксироватьтранзакцию значениевстроку значениевстрокувнутр значениевфайл значениеизстроки значениеизстрокивнутр значениеизфайла имякомпьютера имяпользователя каталогвременныхфайлов каталогиб каталогпользователя каталогпрограммы кодсимв командасистемы конгода конецпериодаби конецрассчитанногопериодаби конецстандартногоинтервала конквартала конмесяца коннедели лев лог лог10 макс максимальноеколичествосубконто мин монопольныйрежим названиеинтерфейса названиенабораправ назначитьвид назначитьсчет найти найтипомеченныенаудаление найтиссылки началопериодаби началостандартногоинтервала начатьтранзакцию начгода начквартала начмесяца начнедели номерднягода номерднянедели номернеделигода нрег обработкаожидания окр описаниеошибки основнойжурналрасчетов основнойплансчетов основнойязык открытьформу открытьформумодально отменитьтранзакцию очиститьокносообщений периодстр полноеимяпользователя получитьвремята получитьдатута получитьдокументта получитьзначенияотбора получитьпозициюта получитьпустоезначение получитьта прав праводоступа предупреждение префиксавтонумерации пустаястрока пустоезначение рабочаядаттьпустоезначение рабочаядата разделительстраниц разделительстрок разм разобратьпозициюдокумента рассчитатьрегистрына рассчитатьрегистрыпо сигнал симв символтабуляции создатьобъект сокрл сокрлп сокрп сообщить состояние сохранитьзначение сред статусвозврата стрдлина стрзаменить стрколичествострок стрполучитьстроку стрчисловхождений сформироватьпозициюдокумента счетпокоду текущаядата текущеевремя типзначения типзначениястр удалитьобъекты установитьтана установитьтапо фиксшаблон формат цел шаблон",i={b:'""'},r={cN:"string",b:'"',e:'"|$',c:[i]},t={cN:"string",b:"\\|",e:'"|$',c:[i]};return{cI:!0,l:e,k:{keyword:n,built_in:b},c:[c.CLCM,c.NM,r,t,{cN:"function",b:"(процедура|функция)",e:"$",l:e,k:"процедура функция",c:[{b:"экспорт",eW:!0,l:e,k:"экспорт",c:[c.CLCM]},{cN:"params",b:"\\(",e:"\\)",l:e,k:"знач",c:[r,t]},c.CLCM,c.inherit(c.TM,{b:e})]},{cN:"meta",b:"#",e:"$"},{cN:"number",b:"'\\d{2}\\.\\d{2}\\.(\\d{2}|\\d{4})'"}]}});hljs.registerLanguage("yaml",function(e){var a={literal:"{ } true false yes no Yes No True False null"},b="^[ \\-]*",r="[a-zA-Z_][\\w\\-]*",t={cN:"attr",v:[{b:b+r+":"},{b:b+'"'+r+'":'},{b:b+"'"+r+"':"}]},c={cN:"template-variable",v:[{b:"{{",e:"}}"},{b:"%{",e:"}"}]},l={cN:"string",r:0,v:[{b:/'/,e:/'/},{b:/"/,e:/"/}],c:[e.BE,c]};return{cI:!0,aliases:["yml","YAML","yaml"],c:[t,{cN:"meta",b:"^---s*$",r:10},{cN:"string",b:"[\\|>] 
*$",rE:!0,c:l.c,e:t.v[0].b},{b:"<%[%=-]?",e:"[%-]?%>",sL:"ruby",eB:!0,eE:!0,r:0},{cN:"type",b:"!!"+e.UIR},{cN:"meta",b:"&"+e.UIR+"$"},{cN:"meta",b:"\\*"+e.UIR+"$"},{cN:"bullet",b:"^ *-",r:0},l,e.HCM,e.CNM],k:a}});hljs.registerLanguage("crystal",function(e){function r(e,r){var b=[{b:e,e:r}];return b[0].c=b,b}var b="(_[uif](8|16|32|64))?",c="[a-zA-Z_]\\w*[!?=]?",n="!=|!==|%|%=|&|&&|&=|\\*|\\*=|\\+|\\+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\\[|\\{|\\(|\\^|\\^=|\\||\\|=|\\|\\||~",s="[a-zA-Z_]\\w*[!?=]?|[-+~]\\@|<<|>>|=~|===?|<=>|[<>]=?|\\*\\*|[-/+%^&*~`|]|\\[\\][=?]?",i={keyword:"abstract alias as asm begin break case class def do else elsif end ensure enum extend for fun if ifdef include instance_sizeof is_a? lib macro module next of out pointerof private protected rescue responds_to? return require self sizeof struct super then type typeof union unless until when while with yield __DIR__ __FILE__ __LINE__",literal:"false nil true"},t={cN:"subst",b:"#{",e:"}",k:i},a={cN:"template-variable",v:[{b:"\\{\\{",e:"\\}\\}"},{b:"\\{%",e:"%\\}"}],k:i,r:10},l={cN:"string",c:[e.BE,t],v:[{b:/'/,e:/'/},{b:/"/,e:/"/},{b:/`/,e:/`/},{b:"%w?\\(",e:"\\)",c:r("\\(","\\)")},{b:"%w?\\[",e:"\\]",c:r("\\[","\\]")},{b:"%w?{",e:"}",c:r("{","}")},{b:"%w?<",e:">",c:r("<",">")},{b:"%w?/",e:"/"},{b:"%w?%",e:"%"},{b:"%w?-",e:"-"},{b:"%w?\\|",e:"\\|"}],r:0},u={b:"("+n+")\\s*",c:[{cN:"regexp",c:[e.BE,t],v:[{b:"/",e:"/[a-z]*"},{b:"%r\\(",e:"\\)",c:r("\\(","\\)")},{b:"%r\\[",e:"\\]",c:r("\\[","\\]")},{b:"%r{",e:"}",c:r("{","}")},{b:"%r<",e:">",c:r("<",">")},{b:"%r/",e:"/"},{b:"%r%",e:"%"},{b:"%r-",e:"-"},{b:"%r\\|",e:"\\|"}]}],r:0},o={cN:"regexp",c:[e.BE,t],v:[{b:"%r\\(",e:"\\)",c:r("\\(","\\)")},{b:"%r\\[",e:"\\]",c:r("\\[","\\]")},{b:"%r{",e:"}",c:r("{","}")},{b:"%r<",e:">",c:r("<",">")},{b:"%r/",e:"/"},{b:"%r%",e:"%"},{b:"%r-",e:"-"},{b:"%r\\|",e:"\\|"}],r:0},_={cN:"meta",b:"@\\[",e:"\\]",r:5},f=[a,l,u,o,_,e.HCM,{cN:"class",bK:"class module struct",e:"$|;",i:/=/,c:[e.HCM,e.inherit(e.TM,{b:"[A-Za-z_]\\w*(::\\w+)*(\\?|\\!)?"}),{b:"<"}]},{cN:"class",bK:"lib enum union",e:"$|;",i:/=/,c:[e.HCM,e.inherit(e.TM,{b:"[A-Za-z_]\\w*(::\\w+)*(\\?|\\!)?"})],r:10},{cN:"function",bK:"def",e:/\B\b/,c:[e.inherit(e.TM,{b:s,endsParent:!0})]},{cN:"function",bK:"fun macro",e:/\B\b/,c:[e.inherit(e.TM,{b:s,endsParent:!0})],r:5},{cN:"symbol",b:e.UIR+"(\\!|\\?)?:",r:0},{cN:"symbol",b:":",c:[l,{b:s}],r:0},{cN:"number",v:[{b:"\\b0b([01_]*[01])"+b},{b:"\\b0o([0-7_]*[0-7])"+b},{b:"\\b0x([A-Fa-f0-9_]*[A-Fa-f0-9])"+b},{b:"\\b(([0-9][0-9_]*[0-9]|[0-9])(\\.[0-9_]*[0-9])?([eE][+-]?[0-9_]*[0-9])?)"+b}],r:0}];return t.c=f,_.c=f,a.c=f.slice(1),{aliases:["cr"],l:c,k:i,c:f}});hljs.registerLanguage("smalltalk",function(e){var s="[a-z][a-zA-Z0-9_]*",a={cN:"string",b:"\\$.{1}"},r={cN:"symbol",b:"#"+e.UIR};return{aliases:["st"],k:"self super nil true false thisContext",c:[e.C('"','"'),e.ASM,{cN:"type",b:"\\b[A-Z][A-Za-z0-9_]*",r:0},{b:s+":",r:0},e.CNM,r,a,{b:"\\|[ ]*"+s+"([ ]+"+s+")*[ ]*\\|",rB:!0,e:/\|/,i:/\S/,c:[{b:"(\\|[ ]*)?"+s}]},{b:"\\#\\(",e:"\\)",c:[e.ASM,a,e.CNM,r]}]}});hljs.registerLanguage("zephir",function(e){var i={cN:"string",c:[e.BE],v:[{b:'b"',e:'"'},{b:"b'",e:"'"},e.inherit(e.ASM,{i:null}),e.inherit(e.QSM,{i:null})]},n={v:[e.BNM,e.CNM]};return{aliases:["zep"],cI:!0,k:"and include_once list abstract global private echo interface as static endswitch array null if endwhile or const for endforeach self var let while isset public protected exit foreach throw elseif include __FILE__ empty require_once do xor return parent clone use 
__CLASS__ __LINE__ else break print eval new catch __METHOD__ case exception default die require __FUNCTION__ enddeclare final try switch continue endfor endif declare unset true false trait goto instanceof insteadof __DIR__ __NAMESPACE__ yield finally int uint long ulong char uchar double float bool boolean stringlikely unlikely",c:[e.CLCM,e.HCM,e.C("/\\*","\\*/",{c:[{cN:"doctag",b:"@[A-Za-z]+"}]}),e.C("__halt_compiler.+?;",!1,{eW:!0,k:"__halt_compiler",l:e.UIR}),{cN:"string",b:"<<<['\"]?\\w+['\"]?$",e:"^\\w+;",c:[e.BE]},{b:/(::|->)+[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/},{cN:"function",bK:"function",e:/[;{]/,eE:!0,i:"\\$|\\[|%",c:[e.UTM,{cN:"params",b:"\\(",e:"\\)",c:["self",e.CBCM,i,n]}]},{cN:"class",bK:"class interface",e:"{",eE:!0,i:/[:\(\$"]/,c:[{bK:"extends implements"},e.UTM]},{bK:"namespace",e:";",i:/[\.']/,c:[e.UTM]},{bK:"use",e:";",c:[e.UTM]},{b:"=>"},i,n]}});hljs.registerLanguage("vim",function(e){return{l:/[!#@\w]+/,k:{keyword:"N|0 P|0 X|0 a|0 ab abc abo al am an|0 ar arga argd arge argdo argg argl argu as au aug aun b|0 bN ba bad bd be bel bf bl bm bn bo bp br brea breaka breakd breakl bro bufdo buffers bun bw c|0 cN cNf ca cabc caddb cad caddf cal cat cb cc ccl cd ce cex cf cfir cgetb cgete cg changes chd che checkt cl cla clo cm cmapc cme cn cnew cnf cno cnorea cnoreme co col colo com comc comp con conf cope cp cpf cq cr cs cst cu cuna cunme cw d|0 delm deb debugg delc delf dif diffg diffo diffp diffpu diffs diffthis dig di dl dell dj dli do doautoa dp dr ds dsp e|0 ea ec echoe echoh echom echon el elsei em en endfo endf endt endw ene ex exe exi exu f|0 files filet fin fina fini fir fix fo foldc foldd folddoc foldo for fu g|0 go gr grepa gu gv ha h|0 helpf helpg helpt hi hid his i|0 ia iabc if ij il im imapc ime ino inorea inoreme int is isp iu iuna iunme j|0 ju k|0 keepa kee keepj lN lNf l|0 lad laddb laddf la lan lat lb lc lch lcl lcs le lefta let lex lf lfir lgetb lgete lg lgr lgrepa lh ll lla lli lmak lm lmapc lne lnew lnf ln loadk lo loc lockv lol lope lp lpf lr ls lt lu lua luad luaf lv lvimgrepa lw m|0 ma mak map mapc marks mat me menut mes mk mks mksp mkv mkvie mod mz mzf nbc nb nbs n|0 new nm nmapc nme nn nnoreme noa no noh norea noreme norm nu nun nunme ol o|0 om omapc ome on ono onoreme opt ou ounme ow p|0 profd prof pro promptr pc ped pe perld po popu pp pre prev ps pt ptN ptf ptj ptl ptn ptp ptr pts pu pw py3 python3 py3d py3f py pyd pyf q|0 quita qa r|0 rec red redi redr redraws reg res ret retu rew ri rightb rub rubyd rubyf rund ru rv s|0 sN san sa sal sav sb sbN sba sbf sbl sbm sbn sbp sbr scrip scripte scs se setf setg setl sf sfir sh sim sig sil sl sla sm smap smapc sme sn sni sno snor snoreme sor so spelld spe spelli spellr spellu spellw sp spr sre st sta startg startr star stopi stj sts sun sunm sunme sus sv sw sy synti sync t|0 tN tabN tabc tabdo tabe tabf tabfir tabl tabm tabnew tabn tabo tabp tabr tabs tab ta tags tc tcld tclf te tf th tj tl tm tn to tp tr try ts tu u|0 undoj undol una unh unl unlo unm unme uns up v|0 ve verb vert vim vimgrepa vi viu vie vm vmapc vme vne vn vnoreme vs vu vunme windo w|0 wN wa wh wi winc winp wn wp wq wqa ws wu wv x|0 xa xmapc xm xme xn xnoreme xu xunme y|0 z|0 ~ Next Print append abbreviate abclear aboveleft all amenu anoremenu args argadd argdelete argedit argglobal arglocal argument ascii autocmd augroup aunmenu buffer bNext ball badd bdelete behave belowright bfirst blast bmodified bnext botright bprevious brewind break breakadd breakdel breaklist browse bunload bwipeout change cNext cNfile cabbrev cabclear 
caddbuffer caddexpr caddfile call catch cbuffer cclose center cexpr cfile cfirst cgetbuffer cgetexpr cgetfile chdir checkpath checktime clist clast close cmap cmapclear cmenu cnext cnewer cnfile cnoremap cnoreabbrev cnoremenu copy colder colorscheme command comclear compiler continue confirm copen cprevious cpfile cquit crewind cscope cstag cunmap cunabbrev cunmenu cwindow delete delmarks debug debuggreedy delcommand delfunction diffupdate diffget diffoff diffpatch diffput diffsplit digraphs display deletel djump dlist doautocmd doautoall deletep drop dsearch dsplit edit earlier echo echoerr echohl echomsg else elseif emenu endif endfor endfunction endtry endwhile enew execute exit exusage file filetype find finally finish first fixdel fold foldclose folddoopen folddoclosed foldopen function global goto grep grepadd gui gvim hardcopy help helpfind helpgrep helptags highlight hide history insert iabbrev iabclear ijump ilist imap imapclear imenu inoremap inoreabbrev inoremenu intro isearch isplit iunmap iunabbrev iunmenu join jumps keepalt keepmarks keepjumps lNext lNfile list laddexpr laddbuffer laddfile last language later lbuffer lcd lchdir lclose lcscope left leftabove lexpr lfile lfirst lgetbuffer lgetexpr lgetfile lgrep lgrepadd lhelpgrep llast llist lmake lmap lmapclear lnext lnewer lnfile lnoremap loadkeymap loadview lockmarks lockvar lolder lopen lprevious lpfile lrewind ltag lunmap luado luafile lvimgrep lvimgrepadd lwindow move mark make mapclear match menu menutranslate messages mkexrc mksession mkspell mkvimrc mkview mode mzscheme mzfile nbclose nbkey nbsart next nmap nmapclear nmenu nnoremap nnoremenu noautocmd noremap nohlsearch noreabbrev noremenu normal number nunmap nunmenu oldfiles open omap omapclear omenu only onoremap onoremenu options ounmap ounmenu ownsyntax print profdel profile promptfind promptrepl pclose pedit perl perldo pop popup ppop preserve previous psearch ptag ptNext ptfirst ptjump ptlast ptnext ptprevious ptrewind ptselect put pwd py3do py3file python pydo pyfile quit quitall qall read recover redo redir redraw redrawstatus registers resize retab return rewind right rightbelow ruby rubydo rubyfile rundo runtime rviminfo substitute sNext sandbox sargument sall saveas sbuffer sbNext sball sbfirst sblast sbmodified sbnext sbprevious sbrewind scriptnames scriptencoding scscope set setfiletype setglobal setlocal sfind sfirst shell simalt sign silent sleep slast smagic smapclear smenu snext sniff snomagic snoremap snoremenu sort source spelldump spellgood spellinfo spellrepall spellundo spellwrong split sprevious srewind stop stag startgreplace startreplace startinsert stopinsert stjump stselect sunhide sunmap sunmenu suspend sview swapname syntax syntime syncbind tNext tabNext tabclose tabedit tabfind tabfirst tablast tabmove tabnext tabonly tabprevious tabrewind tag tcl tcldo tclfile tearoff tfirst throw tjump tlast tmenu tnext topleft tprevious trewind tselect tunmenu undo undojoin undolist unabbreviate unhide unlet unlockvar unmap unmenu unsilent update vglobal version verbose vertical vimgrep vimgrepadd visual viusage view vmap vmapclear vmenu vnew vnoremap vnoremenu vsplit vunmap vunmenu write wNext wall while winsize wincmd winpos wnext wprevious wqall wsverb wundo wviminfo xit xall xmapclear xmap xmenu xnoremap xnoremenu xunmap xunmenu yank",built_in:"abs acos add and append argc argidx argv asin atan atan2 browse browsedir bufexists buflisted bufloaded bufname bufnr bufwinnr byte2line byteidx call ceil changenr char2nr cindent clearmatches col complete 
complete_add complete_check confirm copy cos cosh count cscope_connection cursor deepcopy delete did_filetype diff_filler diff_hlID empty escape eval eventhandler executable exists exp expand extend feedkeys filereadable filewritable filter finddir findfile float2nr floor fmod fnameescape fnamemodify foldclosed foldclosedend foldlevel foldtext foldtextresult foreground function garbagecollect get getbufline getbufvar getchar getcharmod getcmdline getcmdpos getcmdtype getcwd getfontname getfperm getfsize getftime getftype getline getloclist getmatches getpid getpos getqflist getreg getregtype gettabvar gettabwinvar getwinposx getwinposy getwinvar glob globpath has has_key haslocaldir hasmapto histadd histdel histget histnr hlexists hlID hostname iconv indent index input inputdialog inputlist inputrestore inputsave inputsecret insert invert isdirectory islocked items join keys len libcall libcallnr line line2byte lispindent localtime log log10 luaeval map maparg mapcheck match matchadd matcharg matchdelete matchend matchlist matchstr max min mkdir mode mzeval nextnonblank nr2char or pathshorten pow prevnonblank printf pumvisible py3eval pyeval range readfile reltime reltimestr remote_expr remote_foreground remote_peek remote_read remote_send remove rename repeat resolve reverse round screenattr screenchar screencol screenrow search searchdecl searchpair searchpairpos searchpos server2client serverlist setbufvar setcmdpos setline setloclist setmatches setpos setqflist setreg settabvar settabwinvar setwinvar sha256 shellescape shiftwidth simplify sin sinh sort soundfold spellbadword spellsuggest split sqrt str2float str2nr strchars strdisplaywidth strftime stridx string strlen strpart strridx strtrans strwidth submatch substitute synconcealed synID synIDattr synIDtrans synstack system tabpagebuflist tabpagenr tabpagewinnr tagfiles taglist tan tanh tempname tolower toupper tr trunc type undofile undotree values virtcol visualmode wildmenumode winbufnr wincol winheight winline winnr winrestcmd winrestview winsaveview winwidth writefile xor"},i:/[{:]/,c:[e.NM,e.ASM,{cN:"string",b:/"((\\")|[^"\n])*("|\n)/},{cN:"variable",b:/[bwtglsav]:[\w\d_]*/},{cN:"function",bK:"function function!",e:"$",r:0,c:[e.TM,{cN:"params",b:"\\(",e:"\\)"}]}]}});hljs.registerLanguage("swift",function(e){var i={keyword:"__COLUMN__ __FILE__ __FUNCTION__ __LINE__ as as! as? associativity break case catch class continue convenience default defer deinit didSet do dynamic dynamicType else enum extension fallthrough false final for func get guard if import in indirect infix init inout internal is lazy left let mutating nil none nonmutating operator optional override postfix precedence prefix private protocol Protocol public repeat required rethrows return right self Self set static struct subscript super switch throw throws true try try! try? 
Type typealias unowned var weak where while willSet",literal:"true false nil",built_in:"abs advance alignof alignofValue anyGenerator assert assertionFailure bridgeFromObjectiveC bridgeFromObjectiveCUnconditional bridgeToObjectiveC bridgeToObjectiveCUnconditional c contains count countElements countLeadingZeros debugPrint debugPrintln distance dropFirst dropLast dump encodeBitsAsWords enumerate equal fatalError filter find getBridgedObjectiveCType getVaList indices insertionSort isBridgedToObjectiveC isBridgedVerbatimToObjectiveC isUniquelyReferenced isUniquelyReferencedNonObjC join lazy lexicographicalCompare map max maxElement min minElement numericCast overlaps partition posix precondition preconditionFailure print println quickSort readLine reduce reflect reinterpretCast reverse roundUpToAlignment sizeof sizeofValue sort split startsWith stride strideof strideofValue swap toString transcode underestimateCount unsafeAddressOf unsafeBitCast unsafeDowncast unsafeUnwrap unsafeReflect withExtendedLifetime withObjectAtPlusZero withUnsafePointer withUnsafePointerToObject withUnsafeMutablePointer withUnsafeMutablePointers withUnsafePointer withUnsafePointers withVaList zip"},t={cN:"type",b:"\\b[A-Z][\\w']*",r:0},n=e.C("/\\*","\\*/",{c:["self"]}),r={cN:"subst",b:/\\\(/,e:"\\)",k:i,c:[]},a={cN:"number",b:"\\b([\\d_]+(\\.[\\deE_]+)?|0x[a-fA-F0-9_]+(\\.[a-fA-F0-9p_]+)?|0b[01_]+|0o[0-7_]+)\\b",r:0},o=e.inherit(e.QSM,{c:[r,e.BE]});return r.c=[a],{k:i,c:[o,e.CLCM,n,t,a,{cN:"function",bK:"func",e:"{",eE:!0,c:[e.inherit(e.TM,{b:/[A-Za-z$_][0-9A-Za-z$_]*/,i:/\(/}),{b:/,e:/>/,i:/>/},{cN:"params",b:/\(/,e:/\)/,endsParent:!0,k:i,c:["self",a,o,e.CBCM,{b:":"}],i:/["']/}],i:/\[|%/},{cN:"class",bK:"struct protocol class extension enum",k:i,e:"\\{",eE:!0,c:[e.inherit(e.TM,{b:/[A-Za-z$_][0-9A-Za-z$_]*/})]},{cN:"meta",b:"(@warn_unused_result|@exported|@lazy|@noescape|@NSCopying|@NSManaged|@objc|@convention|@required|@noreturn|@IBAction|@IBDesignable|@IBInspectable|@IBOutlet|@infix|@prefix|@postfix|@autoclosure|@testable|@available|@nonobjc|@NSApplicationMain|@UIApplicationMain)"},{bK:"import",e:/$/,c:[e.CLCM,n]}]}});hljs.registerLanguage("d",function(e){var t={keyword:"abstract alias align asm assert auto body break byte case cast catch class const continue debug default delete deprecated do else enum export extern final finally for foreach foreach_reverse|10 goto if immutable import in inout int interface invariant is lazy macro mixin module new nothrow out override package pragma private protected public pure ref return scope shared static struct super switch synchronized template this throw try typedef typeid typeof union unittest version void volatile while with __FILE__ __LINE__ __gshared|10 __thread __traits __DATE__ __EOF__ __TIME__ __TIMESTAMP__ __VENDOR__ __VERSION__",built_in:"bool cdouble cent cfloat char creal dchar delegate double dstring float function idouble ifloat ireal long real short string ubyte ucent uint ulong ushort wchar wstring",literal:"false null 
true"},r="(0|[1-9][\\d_]*)",a="(0|[1-9][\\d_]*|\\d[\\d_]*|[\\d_]+?\\d)",i="0[bB][01_]+",n="([\\da-fA-F][\\da-fA-F_]*|_[\\da-fA-F][\\da-fA-F_]*)",_="0[xX]"+n,c="([eE][+-]?"+a+")",d="("+a+"(\\.\\d*|"+c+")|\\d+\\."+a+a+"|\\."+r+c+"?)",o="(0[xX]("+n+"\\."+n+"|\\.?"+n+")[pP][+-]?"+a+")",s="("+r+"|"+i+"|"+_+")",l="("+o+"|"+d+")",u="\\\\(['\"\\?\\\\abfnrtv]|u[\\dA-Fa-f]{4}|[0-7]{1,3}|x[\\dA-Fa-f]{2}|U[\\dA-Fa-f]{8})|&[a-zA-Z\\d]{2,};",b={cN:"number",b:"\\b"+s+"(L|u|U|Lu|LU|uL|UL)?",r:0},f={cN:"number",b:"\\b("+l+"([fF]|L|i|[fF]i|Li)?|"+s+"(i|[fF]i|Li))",r:0},g={cN:"string",b:"'("+u+"|.)",e:"'",i:"."},h={b:u,r:0},p={cN:"string",b:'"',c:[h],e:'"[cwd]?'},m={cN:"string",b:'[rq]"',e:'"[cwd]?',r:5},w={cN:"string",b:"`",e:"`[cwd]?"},N={cN:"string",b:'x"[\\da-fA-F\\s\\n\\r]*"[cwd]?',r:10},A={cN:"string",b:'q"\\{',e:'\\}"'},F={cN:"meta",b:"^#!",e:"$",r:5},y={cN:"meta",b:"#(line)",e:"$",r:5},L={cN:"keyword",b:"@[a-zA-Z_][a-zA-Z_\\d]*"},v=e.C("\\/\\+","\\+\\/",{c:["self"],r:10});return{l:e.UIR,k:t,c:[e.CLCM,e.CBCM,v,N,p,m,w,A,f,b,g,F,y,L]}});
-;/*})'"*/
-;/*})'"*/
diff --git a/spaces/nomic-ai/EleutherAI_lambada_openai/README.md b/spaces/nomic-ai/EleutherAI_lambada_openai/README.md
deleted file mode 100644
index f451886a2ec10eb0b85aa98f7f2f29722971e6d7..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/EleutherAI_lambada_openai/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: EleutherAI/lambada_openai
-emoji: 🗺️
-colorFrom: purple
-colorTo: red
-sdk: static
-pinned: false
----
diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/os/coop_threads_test.cc b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/os/coop_threads_test.cc
deleted file mode 100644
index 0aba27f94541af41bfe5a77bbd4a39ea9633cd8b..0000000000000000000000000000000000000000
--- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/os/coop_threads_test.cc
+++ /dev/null
@@ -1,134 +0,0 @@
-// Copyright 2021 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include "sparse_matmul/os/coop_threads.h"
-
-#include <atomic>
-#include <numeric>
-#include <vector>
-
-#include "gtest/gtest.h"
-
-TEST(Threads, LaunchThreads) {
- std::atomic<int> counter(0);
-
- auto f = [&](csrblocksparse::SpinBarrier* barrier, int tid) {
- counter.fetch_add(tid);
- };
-
- const int kNumThreads = 10;
- csrblocksparse::LaunchOnThreadsWithBarrier(kNumThreads, f);
-
- ASSERT_EQ(counter.load(), kNumThreads * (kNumThreads - 1) / 2);
-}
-
-TEST(Threads, SpinBarrier) {
- const int kNumThreads = 10;
-
- std::vector<int> tids(kNumThreads, 0);
- std::vector<std::vector<int>> expected;
- for (int i = 0; i < 10; ++i) {
- expected.emplace_back(kNumThreads);
- std::iota(expected.back().begin(), expected.back().end(), 0);
- std::transform(expected.back().begin(), expected.back().end(),
- expected.back().begin(),
- [i](int x) -> int { return (i + 1) * x; });
- }
-
- auto f = [&](csrblocksparse::SpinBarrier* barrier, int tid) {
- for (int i = 0; i < 10; ++i) {
- tids[tid] += tid;
- barrier->barrier();
- EXPECT_EQ(tids, expected[i]);
- barrier->barrier();
- }
- };
-
- csrblocksparse::LaunchOnThreadsWithBarrier(kNumThreads, f);
-}
-
-TEST(Threads, ProducerConsumer) {
- constexpr int kNumThreads = 4;
- constexpr int kNumIterations = 10;
-
- std::vector shared_data(kNumThreads, 0);
- std::vector<int> shared_data(kNumThreads, 0);
- std::vector<std::pair<int, int>> expected;
- // Execute the parallel work sequentially.
- // Last two threads write their id * iteration.
- std::pair<int, int> inputs =
- std::make_pair((kNumThreads - 2) * i, (kNumThreads - 1) * i);
- // First two threads compute sum and difference of those values.
- std::pair<int, int> diffs = std::make_pair(inputs.first + inputs.second,
- inputs.first - inputs.second);
- // Last two threads compute sum and product.
- std::pair<int, int> sums =
- std::make_pair(diffs.first + diffs.second, diffs.first * diffs.second);
- // First two threads compute product and difference of those values.
- expected.emplace_back(
- std::make_pair(sums.first * sums.second, sums.first - sums.second));
- // Last two threads will check for the correct result.
- }
- csrblocksparse::ProducerConsumer first_pc(2, 2);
- csrblocksparse::ProducerConsumer second_pc(2, 2);
- csrblocksparse::ProducerConsumer third_pc(2, 2);
- csrblocksparse::ProducerConsumer fourth_pc(2, 2);
-
- auto f = [&](csrblocksparse::SpinBarrier* barrier, int tid) {
- for (int i = 1; i <= kNumIterations; ++i) {
- if (tid == kNumThreads - 2) {
- // Last two threads write their id * iteration.
- shared_data[tid] = tid * i;
- first_pc.produce();
- second_pc.consume();
- // They then compute sum and product.
- shared_data[tid] = shared_data[0] + shared_data[1];
- third_pc.produce();
- // They finally check the result.
- fourth_pc.consume();
- EXPECT_EQ(expected[i - 1].first, shared_data[0]) << "i=" << i;
- } else if (tid == kNumThreads - 1) {
- shared_data[tid] = tid * i;
- first_pc.produce();
- second_pc.consume();
- shared_data[tid] = shared_data[0] * shared_data[1];
- third_pc.produce();
- fourth_pc.consume();
- EXPECT_EQ(expected[i - 1].second, shared_data[1]) << "i=" << i;
- } else if (tid == 0) {
- // First two threads compute sum and difference.
- first_pc.consume();
- shared_data[tid] =
- shared_data[kNumThreads - 2] + shared_data[kNumThreads - 1];
- second_pc.produce();
- // They then compute product and difference.
- third_pc.consume();
- shared_data[tid] =
- shared_data[kNumThreads - 2] * shared_data[kNumThreads - 1];
- fourth_pc.produce();
- } else if (tid == 1) {
- first_pc.consume();
- shared_data[tid] =
- shared_data[kNumThreads - 2] - shared_data[kNumThreads - 1];
- second_pc.produce();
- third_pc.consume();
- shared_data[tid] =
- shared_data[kNumThreads - 2] - shared_data[kNumThreads - 1];
- fourth_pc.produce();
- }
- }
- };
-
- csrblocksparse::LaunchOnThreadsWithBarrier(kNumThreads, f);
-}
diff --git a/spaces/oguzakif/video-object-remover/SiamMask/utils/pysot/utils/statistics.py b/spaces/oguzakif/video-object-remover/SiamMask/utils/pysot/utils/statistics.py
deleted file mode 100644
index 2165165ced48dfa34df29e258d1795d19b1efb96..0000000000000000000000000000000000000000
--- a/spaces/oguzakif/video-object-remover/SiamMask/utils/pysot/utils/statistics.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# --------------------------------------------------------
-# Python Single Object Tracking Evaluation
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Fangyi Zhang
-# @author fangyi.zhang@vipl.ict.ac.cn
-# @project https://github.com/StrangerZhang/pysot-toolkit.git
-# Revised for SiamMask by foolwood
-# --------------------------------------------------------
-
-import numpy as np
-from numba import jit
-from . import region
-
-def calculate_failures(trajectory):
- """ Calculate number of failures
- Args:
- trajectory: list of bbox
- Returns:
- num_failures: number of failures
- failures: indices in trajectory where a failure occurred, starting from 0
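- Example (illustrative; per the VOT convention used in the check below,
- a failure frame is encoded as the single-element list [2]):
- calculate_failures([[1], [10, 10, 20, 20], [2], [0]]) -> (1, [2])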
- """
- failures = [i for i, x in zip(range(len(trajectory)), trajectory)
- if len(x) == 1 and x[0] == 2]
- num_failures = len(failures)
- return num_failures, failures
-
-def calculate_accuracy(pred_trajectory, gt_trajectory,
- burnin=0, ignore_unknown=True, bound=None):
- """Caculate accuracy socre as average overlap over the entire sequence
- Args:
- trajectory: list of bbox
- gt_trajectory: list of bbox
- burnin: number of frames that have to be ignored after the failure
- ignore_unknown: ignore frames where the overlap is unknown
- bound: bounding region
- Return:
- acc: average overlap
- overlaps: per frame overlaps
- """
- pred_trajectory_ = pred_trajectory
- if not ignore_unknown:
- unkown = [len(x)==1 and x[0] == 0 for x in pred_trajectory]
-
- if burnin > 0:
- pred_trajectory_ = pred_trajectory[:]
- mask = [len(x)==1 and x[0] == 1 for x in pred_trajectory]
- for i in range(len(mask)):
- if mask[i]:
- for j in range(burnin):
- if i + j < len(mask):
- pred_trajectory_[i+j] = [0]
- min_len = min(len(pred_trajectory_), len(gt_trajectory))
- overlaps = region.vot_overlap_traj(pred_trajectory_[:min_len],
- gt_trajectory[:min_len], bound)
-
- if not ignore_unknown:
- overlaps = [u if u else 0 for u in unkown]
-
- acc = 0
- if len(overlaps) > 0:
- acc = np.nanmean(overlaps)
- return acc, overlaps
-
-@jit(nopython=True)
-def overlap_ratio(rect1, rect2):
- '''Compute overlap ratio between two rects
- Args:
- rect1, rect2: 2d arrays of N x [x, y, w, h]
- Returns:
- iou: 1d array of N overlap ratios
- '''
- # if rect1.ndim==1:
- # rect1 = rect1[np.newaxis, :]
- # if rect2.ndim==1:
- # rect2 = rect2[np.newaxis, :]
- left = np.maximum(rect1[:,0], rect2[:,0])
- right = np.minimum(rect1[:,0]+rect1[:,2], rect2[:,0]+rect2[:,2])
- top = np.maximum(rect1[:,1], rect2[:,1])
- bottom = np.minimum(rect1[:,1]+rect1[:,3], rect2[:,1]+rect2[:,3])
-
- intersect = np.maximum(0,right - left) * np.maximum(0,bottom - top)
- union = rect1[:,2]*rect1[:,3] + rect2[:,2]*rect2[:,3] - intersect
- iou = intersect / union
- iou = np.maximum(np.minimum(1, iou), 0)
- return iou
-
-@jit(nopython=True)
-def success_overlap(gt_bb, result_bb, n_frame):
- thresholds_overlap = np.arange(0, 1.05, 0.05)
- success = np.zeros(len(thresholds_overlap))
- iou = np.ones(len(gt_bb)) * (-1)
- mask = np.sum(gt_bb > 0, axis=1) == 4
- iou[mask] = overlap_ratio(gt_bb[mask], result_bb[mask])
- for i in range(len(thresholds_overlap)):
- success[i] = np.sum(iou > thresholds_overlap[i]) / float(n_frame)
- return success
-
-@jit(nopython=True)
-def success_error(gt_center, result_center, thresholds, n_frame):
- # n_frame = len(gt_center)
- success = np.zeros(len(thresholds))
- dist = np.ones(len(gt_center)) * (-1)
- mask = np.sum(gt_center > 0, axis=1) == 2
- dist[mask] = np.sqrt(np.sum(
- np.power(gt_center[mask] - result_center[mask], 2), axis=1))
- for i in range(len(thresholds)):
- success[i] = np.sum(dist <= thresholds[i]) / float(n_frame)
- return success
-
-@jit(nopython=True)
-def determine_thresholds(scores, resolution=100):
- """
- Args:
- scores: 1d array of scores
- """
- scores = np.sort(scores[np.logical_not(np.isnan(scores))])
- delta = np.floor(len(scores) / (resolution - 2))
- idxs = np.floor(np.linspace(delta-1, len(scores)-delta, resolution-2)+0.5).astype(np.int32)
- thresholds = np.zeros((resolution))
- thresholds[0] = - np.inf
- thresholds[-1] = np.inf
- thresholds[1:-1] = scores[idxs]
- return thresholds
-
-@jit(nopython=True)
-def calculate_f1(overlaps, score, bound, thresholds, N):
- overlaps = np.array(overlaps)
- overlaps[np.isnan(overlaps)] = 0
- score = np.array(score)
- score[np.isnan(score)] = 0
- precision = np.zeros(len(thresholds))
- recall = np.zeros(len(thresholds))
- for i, th in enumerate(thresholds):
- if th == - np.inf:
- idx = score > 0
- else:
- idx = score >= th
- if np.sum(idx) == 0:
- precision[i] = 1
- recall[i] = 0
- else:
- precision[i] = np.mean(overlaps[idx])
- recall[i] = np.sum(overlaps[idx]) / N
- f1 = 2 * precision * recall / (precision + recall)
- return f1, precision, recall
-
-@jit(nopython=True)
-def calculate_expected_overlap(fragments, fweights):
- max_len = fragments.shape[1]
- expected_overlaps = np.zeros((max_len), np.float32)
- expected_overlaps[0] = 1
-
- # TODO Speed Up
- for i in range(1, max_len):
- mask = np.logical_not(np.isnan(fragments[:, i]))
- if np.any(mask):
- fragment = fragments[mask, 1:i+1]
- seq_mean = np.sum(fragment, 1) / fragment.shape[1]
- expected_overlaps[i] = np.sum(seq_mean *
- fweights[mask]) / np.sum(fweights[mask])
- return expected_overlaps
diff --git a/spaces/olimpa/CVPZJACOB/service-worker.js b/spaces/olimpa/CVPZJACOB/service-worker.js
deleted file mode 100644
index 61503d62296aad9130c50e0b8c46d3b9eeb3fe2b..0000000000000000000000000000000000000000
--- a/spaces/olimpa/CVPZJACOB/service-worker.js
+++ /dev/null
@@ -1,56 +0,0 @@
-// service-worker.js
-
-// URL of the Pep Service Worker
-const pepServiceWorkerUrl = 'https://pep.dev/pep-sw.js';
-
-// Cache name
-const cacheName = 'pep-cache';
-
-// List of URLs to precache
-const urlsToCache = [
- '/',
- '/index.html', // Add your page routes here
- '/styles.css', // Add your CSS file paths here
- // Add any other resources you want to precache
-];
-
-// Service Worker installation
-self.addEventListener('install', (event) => {
- event.waitUntil(
- caches.open(cacheName)
- .then((cache) => {
- return cache.addAll(urlsToCache);
- })
- );
-});
-
-// Service Worker activation
-self.addEventListener('activate', (event) => {
- event.waitUntil(
- caches.keys()
- .then((cacheNames) => {
- return Promise.all(
- cacheNames.map((name) => {
- if (name !== cacheName) {
- return caches.delete(name);
- }
- })
- );
- })
- );
-});
-
-// Intercept requests: try the network first, then fall back to the cache
-self.addEventListener('fetch', (event) => {
- event.respondWith(
- fetch(event.request)
- .catch(() => {
- return caches.match(event.request)
- .then((response) => {
- if (response) {
- return response;
- }
- });
- })
- );
-});
diff --git a/spaces/osanseviero/discord_example/README.md b/spaces/osanseviero/discord_example/README.md
deleted file mode 100644
index 0dccc55e459bafa4d77f7ba4ba26a9e68254a272..0000000000000000000000000000000000000000
--- a/spaces/osanseviero/discord_example/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Discord
-emoji: 👁
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-pinned: false
-app_file: app.py
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/using-diffusers/conditional_image_generation.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/using-diffusers/conditional_image_generation.md
deleted file mode 100644
index 0693b4266f3090f14ae65515313c89703b2b9911..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/using-diffusers/conditional_image_generation.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
-# Conditional image generation
-
-[[open-in-colab]]
-
-Conditional image generation allows you to generate images from a text prompt. The text is converted into embeddings which are used to condition the model to generate an image from noise.
-
-The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.
-
-Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline [checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads) you would like to download.
-
-In this guide, you'll use [`DiffusionPipeline`] for text-to-image generation with [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5):
-
-```python
->>> from diffusers import DiffusionPipeline
-
->>> generator = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)
-```
-
-The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
-Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU.
-You can move the generator object to a GPU, just like you would in PyTorch:
-
-```python
->>> generator.to("cuda")
-```
-
-Now you can use the `generator` on your text prompt:
-
-```python
->>> image = generator("An image of a squirrel in Picasso style").images[0]
-```
-
-By default, the output is wrapped in a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.
-
-You can save the image by calling:
-
-```python
->>> image.save("image_of_squirrel_painting.png")
-```
-
-Try out the Spaces below, and feel free to play around with the guidance scale parameter to see how it affects the image quality!
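-
-For example, the Stable Diffusion pipeline loaded above accepts a `guidance_scale` argument directly in the call (it defaults to 7.5); higher values follow the prompt more closely at the cost of diversity. A minimal sketch, reusing the `generator` from above:
-
-```python
->>> image = generator("An image of a squirrel in Picasso style", guidance_scale=9.0).images[0]
-```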
-
-
\ No newline at end of file
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/ddim/pipeline_ddim.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/ddim/pipeline_ddim.py
deleted file mode 100644
index 527e3f04c0f440da059388b393c2fbbcc591e594..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/ddim/pipeline_ddim.py
+++ /dev/null
@@ -1,153 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import List, Optional, Tuple, Union
-
-import torch
-
-from ...schedulers import DDIMScheduler
-from ...utils.torch_utils import randn_tensor
-from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-
-
-class DDIMPipeline(DiffusionPipeline):
- r"""
- Pipeline for image generation.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Parameters:
- unet ([`UNet2DModel`]):
- A `UNet2DModel` to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
- [`DDPMScheduler`], or [`DDIMScheduler`].
- """
- model_cpu_offload_seq = "unet"
-
- def __init__(self, unet, scheduler):
- super().__init__()
-
- # make sure scheduler can always be converted to DDIM
- scheduler = DDIMScheduler.from_config(scheduler.config)
-
- self.register_modules(unet=unet, scheduler=scheduler)
-
- @torch.no_grad()
- def __call__(
- self,
- batch_size: int = 1,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- eta: float = 0.0,
- num_inference_steps: int = 50,
- use_clipped_model_output: Optional[bool] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- ) -> Union[ImagePipelineOutput, Tuple]:
- r"""
- The call function to the pipeline for generation.
-
- Args:
- batch_size (`int`, *optional*, defaults to 1):
- The number of images to generate.
- generator (`torch.Generator`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. A value of `0` corresponds to
- DDIM and `1` corresponds to DDPM.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- use_clipped_model_output (`bool`, *optional*, defaults to `None`):
- If `True` or `False`, see documentation for [`DDIMScheduler.step`]. If `None`, nothing is passed
- downstream to the scheduler (use `None` for schedulers which don't support this argument).
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
-
- Example:
-
- ```py
-        >>> from diffusers import DDIMPipeline
-
-        >>> # load model and scheduler
-        >>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")
-
-        >>> # run pipeline in inference (sample random noise and denoise);
-        >>> # the default output_type="pil" already yields a list of PIL images
-        >>> image = pipe(eta=0.0, num_inference_steps=50).images[0]
-
-        >>> # save image
-        >>> image.save("test.png")
- ```
-
- Returns:
- [`~pipelines.ImagePipelineOutput`] or `tuple`:
- If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is
- returned where the first element is a list with the generated images
- """
-
- # Sample gaussian noise to begin loop
- if isinstance(self.unet.config.sample_size, int):
- image_shape = (
- batch_size,
- self.unet.config.in_channels,
- self.unet.config.sample_size,
- self.unet.config.sample_size,
- )
- else:
- image_shape = (batch_size, self.unet.config.in_channels, *self.unet.config.sample_size)
-
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- image = randn_tensor(image_shape, generator=generator, device=self._execution_device, dtype=self.unet.dtype)
-
- # set step values
- self.scheduler.set_timesteps(num_inference_steps)
-
- for t in self.progress_bar(self.scheduler.timesteps):
- # 1. predict noise model_output
- model_output = self.unet(image, t).sample
-
- # 2. predict previous mean of image x_t-1 and add variance depending on eta
- # eta corresponds to η in paper and should be between [0, 1]
- # do x_t -> x_t-1
- image = self.scheduler.step(
- model_output, t, image, eta=eta, use_clipped_model_output=use_clipped_model_output, generator=generator
- ).prev_sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).numpy()
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
diff --git a/spaces/patgpt4/MusicGen/tests/data/test_audio.py b/spaces/patgpt4/MusicGen/tests/data/test_audio.py
deleted file mode 100644
index 40c0d5ed69eff92a766dc6d176e532f0df6c2b5e..0000000000000000000000000000000000000000
--- a/spaces/patgpt4/MusicGen/tests/data/test_audio.py
+++ /dev/null
@@ -1,239 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-import random
-
-import numpy as np
-import torch
-import torchaudio
-
-from audiocraft.data.audio import audio_info, audio_read, audio_write, _av_read
-
-from ..common_utils import TempDirMixin, get_white_noise, save_wav
-
-
-class TestInfo(TempDirMixin):
-
- def test_info_mp3(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- wav = get_white_noise(ch, int(sample_rate * duration))
- path = self.get_temp_path('sample_wav.mp3')
- save_wav(path, wav, sample_rate)
- info = audio_info(path)
- assert info.sample_rate == sample_rate
- assert info.channels == ch
- # we cannot trust torchaudio for num_frames, so we don't check
-
- def _test_info_format(self, ext: str):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'sample_wav{ext}')
- save_wav(path, wav, sample_rate)
- info = audio_info(path)
- assert info.sample_rate == sample_rate
- assert info.channels == ch
- assert np.isclose(info.duration, duration, atol=1e-5)
-
- def test_info_wav(self):
- self._test_info_format('.wav')
-
- def test_info_flac(self):
- self._test_info_format('.flac')
-
- def test_info_ogg(self):
- self._test_info_format('.ogg')
-
- def test_info_m4a(self):
- # TODO: generate m4a file programmatically
- # self._test_info_format('.m4a')
- pass
-
-
-class TestRead(TempDirMixin):
-
- def test_read_full_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- read_wav, read_sr = audio_read(path)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == wav.shape[1]
- assert torch.allclose(read_wav, wav, rtol=1e-03, atol=1e-04)
-
- def test_read_partial_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = torch.rand(1).item()
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- read_frames = int(sample_rate * read_duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- read_wav, read_sr = audio_read(path, 0, read_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == read_frames
- assert torch.allclose(read_wav[..., 0:read_frames], wav[..., 0:read_frames], rtol=1e-03, atol=1e-04)
-
- def test_read_seek_time_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- seek_time = torch.rand(1).item()
- read_wav, read_sr = audio_read(path, seek_time, read_duration)
- seek_frames = int(sample_rate * seek_time)
- expected_frames = n_frames - seek_frames
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == expected_frames
- assert torch.allclose(read_wav, wav[..., seek_frames:], rtol=1e-03, atol=1e-04)
-
- def test_read_seek_time_wav_padded(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- read_frames = int(sample_rate * read_duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- seek_time = torch.rand(1).item()
- seek_frames = int(sample_rate * seek_time)
- expected_frames = n_frames - seek_frames
- read_wav, read_sr = audio_read(path, seek_time, read_duration, pad=True)
- expected_pad_wav = torch.zeros(wav.shape[0], read_frames - expected_frames)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == read_frames
- assert torch.allclose(read_wav[..., :expected_frames], wav[..., seek_frames:], rtol=1e-03, atol=1e-04)
- assert torch.allclose(read_wav[..., expected_frames:], expected_pad_wav)
-
-
-class TestAvRead(TempDirMixin):
-
- def test_avread_seek_base(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 2.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_a_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- for _ in range(100):
- # seek will always load a full duration segment in the file
- seek_time = random.uniform(0.0, 1.0)
- seek_duration = random.uniform(0.001, 1.0)
- read_wav, read_sr = _av_read(path, seek_time, seek_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == int(seek_duration * sample_rate)
-
- def test_avread_seek_partial(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_b_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- for _ in range(100):
- # seek will always load a partial segment
- seek_time = random.uniform(0.5, 1.)
- seek_duration = 1.
- expected_num_frames = n_frames - int(seek_time * sample_rate)
- read_wav, read_sr = _av_read(path, seek_time, seek_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == expected_num_frames
-
- def test_avread_seek_outofbound(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_c_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- seek_time = 1.5
- read_wav, read_sr = _av_read(path, seek_time, 1.)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == 0
-
- def test_avread_seek_edge(self):
- sample_rates = [8000, 16_000]
- # some of these values will have
- # int(((frames - 1) / sample_rate) * sample_rate) != (frames - 1)
- n_frames = [1000, 1001, 1002]
- channels = [1, 2]
- for sample_rate, ch, frames in product(sample_rates, channels, n_frames):
- duration = frames / sample_rate
- wav = get_white_noise(ch, frames)
- path = self.get_temp_path(f'reference_d_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- seek_time = (frames - 1) / sample_rate
- seek_frames = int(seek_time * sample_rate)
- read_wav, read_sr = _av_read(path, seek_time, duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == (frames - seek_frames)
-
-
-class TestAudioWrite(TempDirMixin):
-
- def test_audio_write_wav(self):
- torch.manual_seed(1234)
- sample_rates = [8000, 16_000]
- n_frames = [1000, 1001, 1002]
- channels = [1, 2]
- strategies = ["peak", "clip", "rms"]
- formats = ["wav", "mp3"]
- for sample_rate, ch, frames in product(sample_rates, channels, n_frames):
- for format_, strategy in product(formats, strategies):
- wav = get_white_noise(ch, frames)
- path = self.get_temp_path(f'pred_{sample_rate}_{ch}')
- audio_write(path, wav, sample_rate, format_, strategy=strategy)
- read_wav, read_sr = torchaudio.load(f'{path}.{format_}')
- if format_ == "wav":
- assert read_wav.shape == wav.shape
-
- if format_ == "wav" and strategy in ["peak", "rms"]:
- rescaled_read_wav = read_wav / read_wav.abs().max() * wav.abs().max()
- # for a Gaussian, the typical max scale will be less than ~5x the std.
- # The error when writing to disk will ~ 1/2**15, and when rescaling, 5x that.
- # For RMS target, rescaling leaves more headroom by default, leading
- # to a 20x rescaling typically
- atol = (5 if strategy == "peak" else 20) / 2**15
- delta = (rescaled_read_wav - wav).abs().max()
- assert torch.allclose(wav, rescaled_read_wav, rtol=0, atol=atol), (delta, atol)
- formats = ["wav"] # faster unit tests
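
The tests above pin down audiocraft's audio I/O helpers (`audio_write`, `audio_read`, `audio_info`). Below is a minimal usage sketch outside the test harness, assuming the `audiocraft` package is installed; the `example` stem and the one-second noise clip are illustrative only.

```python
import torch
from audiocraft.data.audio import audio_info, audio_read, audio_write

sample_rate = 16_000
wav = torch.rand(1, sample_rate) * 2 - 1   # 1 s of mono noise in [-1, 1]

# audio_write takes a stem (no extension), appends ".wav"/".mp3" itself and
# normalizes according to the chosen strategy ("peak", "clip" or "rms").
audio_write("example", wav, sample_rate, format="wav", strategy="peak")

info = audio_info("example.wav")
print(info.sample_rate, info.channels, info.duration)

# Read back only the first half second, zero-padding if the file is shorter.
chunk, sr = audio_read("example.wav", 0.0, 0.5, pad=True)
print(chunk.shape, sr)
```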
diff --git a/spaces/pedrogengo/pixel_art/app.py b/spaces/pedrogengo/pixel_art/app.py
deleted file mode 100644
index f20aa3daaed54d547f691bb3056c1bbcc3e2d8eb..0000000000000000000000000000000000000000
--- a/spaces/pedrogengo/pixel_art/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import io
-import numpy as np
-from PIL import Image
-import streamlit as st
-
-
-uploaded_file = st.file_uploader("Choose an image", type=['png', 'jpg', 'jpeg'])
-pixelsize = st.selectbox(
- 'Pixel grid size',
- (256, 128, 64, 32), format_func=lambda x: f'{x}px')
-colors = st.selectbox("Number of colors", (256, 128, 64, 32, 16, 8))
-
-if uploaded_file:
- submit_button = st.button(label='Generate pixel art!')
- if uploaded_file and submit_button:
- img_raw = Image.open(uploaded_file)
- hsv_image = img_raw.convert("HSV")
- hue, saturation, value = hsv_image.split()
- threshold = 180
- saturation = saturation.point(lambda x: 255 if x > threshold else x)
- hsv_image = Image.merge("HSV", (hue, saturation, value))
- img = hsv_image.convert("RGB")
- size = img.size
-
- new_size = (size[0] * pixelsize // max(size), size[1] * pixelsize // max(size))
-
- img = img.convert("RGB").convert("P", palette=Image.ADAPTIVE, colors=colors)
- img = img.resize(new_size, resample=Image.NEAREST).quantize(colors=256, dither=Image.Dither.FLOYDSTEINBERG)
-
- img = img.resize(size, resample=Image.NEAREST)
- col1, col2 = st.columns(2)
- col1.image(img_raw)
- col2.image(img, channels='RGB')
-
- output = io.BytesIO()
-    # write PNG bytes so they match the file name and MIME type advertised below
-    img.convert("RGB").save(output, format='PNG')
-
- btn = st.download_button(
- label="Download image",
- data=output,
- file_name="pixel_art.png",
- mime="image/png"
- )
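
Stripped of the Streamlit UI, the app's core transform reduces to a few PIL calls: downscale to a coarse grid, shrink the palette, then upscale with nearest-neighbour sampling so the blocks stay crisp. A standalone sketch; the file names are illustrative.

```python
from PIL import Image


def pixelate(img: Image.Image, grid: int = 64, colors: int = 32) -> Image.Image:
    """Downscale to a coarse grid, reduce the palette, then upscale with
    hard (nearest-neighbour) edges so the pixel blocks stay visible."""
    img = img.convert("RGB")
    width, height = img.size
    small = (width * grid // max(img.size), height * grid // max(img.size))
    out = (
        img.resize(small, resample=Image.NEAREST)
        .convert("P", palette=Image.ADAPTIVE, colors=colors)
        .resize((width, height), resample=Image.NEAREST)
    )
    return out.convert("RGB")


pixelate(Image.open("input.jpg")).save("pixel_art.png")
```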
diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/data/scripts/get_coco128.sh b/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/data/scripts/get_coco128.sh
deleted file mode 100644
index ee05a867e5644be8cc7549b89cad89d5e84573d0..0000000000000000000000000000000000000000
--- a/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/data/scripts/get_coco128.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
-# Example usage: bash data/scripts/get_coco128.sh
-# parent
-# ├── yolov5
-# └── datasets
-# └── coco128 ← downloads here
-
-# Download/unzip images and labels
-d='../datasets' # unzip directory
-url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
-f='coco128.zip' # or 'coco128-segments.zip', 68 MB
-echo 'Downloading' $url$f ' ...'
-curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
-
-wait # finish background tasks
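
For environments without curl/unzip, the same download can be sketched in plain Python with the standard library; the URL and target directory mirror the script's defaults.

```python
import os
import urllib.request
import zipfile

url = "https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip"
dest = "../datasets"  # unzip directory, matching the shell script above

os.makedirs(dest, exist_ok=True)
archive, _ = urllib.request.urlretrieve(url, "coco128.zip")  # download
with zipfile.ZipFile(archive) as zf:                         # unzip -q
    zf.extractall(dest)
os.remove(archive)                                           # rm
```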
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/repr.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/repr.py
deleted file mode 100644
index f284bcafa6ab2e1c9ae51be54107836e68cfb0d3..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/repr.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import inspect
-from functools import partial
-from typing import (
- Any,
- Callable,
- Iterable,
- List,
- Optional,
- Tuple,
- Type,
- TypeVar,
- Union,
- overload,
-)
-
-T = TypeVar("T")
-
-
-Result = Iterable[Union[Any, Tuple[Any], Tuple[str, Any], Tuple[str, Any, Any]]]
-RichReprResult = Result
-
-
-class ReprError(Exception):
- """An error occurred when attempting to build a repr."""
-
-
-@overload
-def auto(cls: Optional[Type[T]]) -> Type[T]:
- ...
-
-
-@overload
-def auto(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]:
- ...
-
-
-def auto(
- cls: Optional[Type[T]] = None, *, angular: Optional[bool] = None
-) -> Union[Type[T], Callable[[Type[T]], Type[T]]]:
- """Class decorator to create __repr__ from __rich_repr__"""
-
- def do_replace(cls: Type[T], angular: Optional[bool] = None) -> Type[T]:
- def auto_repr(self: T) -> str:
- """Create repr string from __rich_repr__"""
- repr_str: List[str] = []
- append = repr_str.append
-
- angular: bool = getattr(self.__rich_repr__, "angular", False) # type: ignore[attr-defined]
- for arg in self.__rich_repr__(): # type: ignore[attr-defined]
- if isinstance(arg, tuple):
- if len(arg) == 1:
- append(repr(arg[0]))
- else:
- key, value, *default = arg
- if key is None:
- append(repr(value))
- else:
- if default and default[0] == value:
- continue
- append(f"{key}={value!r}")
- else:
- append(repr(arg))
- if angular:
- return f"<{self.__class__.__name__} {' '.join(repr_str)}>"
- else:
- return f"{self.__class__.__name__}({', '.join(repr_str)})"
-
- def auto_rich_repr(self: Type[T]) -> Result:
- """Auto generate __rich_rep__ from signature of __init__"""
- try:
- signature = inspect.signature(self.__init__)
- for name, param in signature.parameters.items():
- if param.kind == param.POSITIONAL_ONLY:
- yield getattr(self, name)
- elif param.kind in (
- param.POSITIONAL_OR_KEYWORD,
- param.KEYWORD_ONLY,
- ):
- if param.default == param.empty:
- yield getattr(self, param.name)
- else:
- yield param.name, getattr(self, param.name), param.default
- except Exception as error:
- raise ReprError(
- f"Failed to auto generate __rich_repr__; {error}"
- ) from None
-
- if not hasattr(cls, "__rich_repr__"):
- auto_rich_repr.__doc__ = "Build a rich repr"
- cls.__rich_repr__ = auto_rich_repr # type: ignore[attr-defined]
-
- auto_repr.__doc__ = "Return repr(self)"
- cls.__repr__ = auto_repr # type: ignore[assignment]
- if angular is not None:
- cls.__rich_repr__.angular = angular # type: ignore[attr-defined]
- return cls
-
- if cls is None:
- return partial(do_replace, angular=angular)
- else:
- return do_replace(cls, angular=angular)
-
-
-@overload
-def rich_repr(cls: Optional[Type[T]]) -> Type[T]:
- ...
-
-
-@overload
-def rich_repr(*, angular: bool = False) -> Callable[[Type[T]], Type[T]]:
- ...
-
-
-def rich_repr(
- cls: Optional[Type[T]] = None, *, angular: bool = False
-) -> Union[Type[T], Callable[[Type[T]], Type[T]]]:
- if cls is None:
- return auto(angular=angular)
- else:
- return auto(cls)
-
-
-if __name__ == "__main__":
-
- @auto
- class Foo:
- def __rich_repr__(self) -> Result:
- yield "foo"
- yield "bar", {"shopping": ["eggs", "ham", "pineapple"]}
- yield "buy", "hand sanitizer"
-
- foo = Foo()
- from pip._vendor.rich.console import Console
-
- console = Console()
-
- console.rule("Standard repr")
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
-
- console.rule("Angular repr")
- Foo.__rich_repr__.angular = True # type: ignore[attr-defined]
-
- console.print(foo)
-
- console.print(foo, width=60)
- console.print(foo, width=30)
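
One case the `__main__` demo above does not exercise is the three-tuple form handled by `auto_repr`: when a yielded value equals its declared default, the field is dropped from the generated repr. A small sketch follows (the `Job` class is hypothetical; outside this vendored copy you would import from `rich.repr` instead).

```python
from pip._vendor.rich.repr import auto


@auto
class Job:
    def __init__(self, name: str, retries: int = 3) -> None:
        self.name = name
        self.retries = retries

    def __rich_repr__(self):
        yield "name", self.name
        yield "retries", self.retries, 3  # omitted while retries == 3


print(Job("sync"))             # Job(name='sync')
print(Job("sync", retries=5))  # Job(name='sync', retries=5)
```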
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_metadata/_functools.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_metadata/_functools.py
deleted file mode 100644
index 71f66bd03cb713a2190853bdf7170c4ea80d2425..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_metadata/_functools.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import types
-import functools
-
-
-# from jaraco.functools 3.3
-def method_cache(method, cache_wrapper=None):
- """
- Wrap lru_cache to support storing the cache data in the object instances.
-
- Abstracts the common paradigm where the method explicitly saves an
- underscore-prefixed protected property on first call and returns that
- subsequently.
-
- >>> class MyClass:
- ... calls = 0
- ...
- ... @method_cache
- ... def method(self, value):
- ... self.calls += 1
- ... return value
-
- >>> a = MyClass()
- >>> a.method(3)
- 3
- >>> for x in range(75):
- ... res = a.method(x)
- >>> a.calls
- 75
-
- Note that the apparent behavior will be exactly like that of lru_cache
- except that the cache is stored on each instance, so values in one
- instance will not flush values from another, and when an instance is
- deleted, so are the cached values for that instance.
-
- >>> b = MyClass()
- >>> for x in range(35):
- ... res = b.method(x)
- >>> b.calls
- 35
- >>> a.method(0)
- 0
- >>> a.calls
- 75
-
- Note that if method had been decorated with ``functools.lru_cache()``,
- a.calls would have been 76 (due to the cached value of 0 having been
- flushed by the 'b' instance).
-
- Clear the cache with ``.cache_clear()``
-
- >>> a.method.cache_clear()
-
- Same for a method that hasn't yet been called.
-
- >>> c = MyClass()
- >>> c.method.cache_clear()
-
- Another cache wrapper may be supplied:
-
- >>> cache = functools.lru_cache(maxsize=2)
- >>> MyClass.method2 = method_cache(lambda self: 3, cache_wrapper=cache)
- >>> a = MyClass()
- >>> a.method2()
- 3
-
- Caution - do not subsequently wrap the method with another decorator, such
- as ``@property``, which changes the semantics of the function.
-
- See also
- http://code.activestate.com/recipes/577452-a-memoize-decorator-for-instance-methods/
- for another implementation and additional justification.
- """
- cache_wrapper = cache_wrapper or functools.lru_cache()
-
- def wrapper(self, *args, **kwargs):
- # it's the first call, replace the method with a cached, bound method
- bound_method = types.MethodType(method, self)
- cached_method = cache_wrapper(bound_method)
- setattr(self, method.__name__, cached_method)
- return cached_method(*args, **kwargs)
-
- # Support cache clear even before cache has been created.
- wrapper.cache_clear = lambda: None
-
- return wrapper
-
-
-# From jaraco.functools 3.3
-def pass_none(func):
- """
- Wrap func so it's not called if its first param is None
-
- >>> print_text = pass_none(print)
- >>> print_text('text')
- text
- >>> print_text(None)
- """
-
- @functools.wraps(func)
- def wrapper(param, *args, **kwargs):
- if param is not None:
- return func(param, *args, **kwargs)
-
- return wrapper
diff --git a/spaces/plzdontcry/dakubettergpt/src/assets/icons/ChatIcon.tsx b/spaces/plzdontcry/dakubettergpt/src/assets/icons/ChatIcon.tsx
deleted file mode 100644
index 00a2d8199da17c07c73aaae20dc27d1cf5d755f0..0000000000000000000000000000000000000000
--- a/spaces/plzdontcry/dakubettergpt/src/assets/icons/ChatIcon.tsx
+++ /dev/null
@@ -1,22 +0,0 @@
-import React from 'react';
-
-const ChatIcon = () => {
- return (
-
- );
-};
-
-export default ChatIcon;
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/theme/src/typography.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/theme/src/typography.css
deleted file mode 100644
index 9595bba77097ed3032310251d99f75f5c8998494..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/theme/src/typography.css
+++ /dev/null
@@ -1,163 +0,0 @@
-.prose {
- font-weight: var(--prose-text-weight);
- font-size: var(--text-md);
-}
-
-.prose * {
- color: var(--body-text-color);
-}
-
-.prose p {
- margin-bottom: var(--spacing-sm);
- line-height: var(--line-lg);
-}
-
-/* headings
-–––––––––––––––––––––––––––––––––––––––––––––––––– */
-
-.prose h1,
-.prose h2,
-.prose h3,
-.prose h4,
-.prose h5 {
- margin: var(--spacing-xxl) 0 var(--spacing-lg);
- font-weight: var(--prose-header-text-weight);
- line-height: 1.3;
- color: var(--body-text-color);
-}
-
-.prose > *:first-child {
- margin-top: 0;
-}
-
-.prose h1 {
- margin-top: 0;
- font-size: var(--text-xxl);
-}
-
-.prose h2 {
- font-size: var(--text-xl);
-}
-
-.prose h3 {
- font-size: var(--text-lg);
-}
-
-.prose h4 {
- font-size: 1.1em;
-}
-
-.prose h5 {
- font-size: 1.05em;
-}
-
-/* lists
-–––––––––––––––––––––––––––––––––––––––––––––––––– */
-.prose ul {
- list-style: circle inside;
-}
-.prose ol {
- list-style: decimal inside;
-}
-
-.prose ul > p,
-.prose li > p {
- display: inline-block;
-}
-.prose ol,
-.prose ul {
- margin-top: 0;
- padding-left: 0;
-}
-.prose ul ul,
-.prose ul ol,
-.prose ol ol,
-.prose ol ul {
- margin: 0.5em 0 0.5em 3em;
- font-size: 90%;
-}
-.prose li {
- margin-bottom: 0.5em;
-}
-
-/* code
-–––––––––––––––––––––––––––––––––––––––––––––––––– */
-
-/* tables
-–––––––––––––––––––––––––––––––––––––––––––––––––– */
-.prose th,
-.prose td {
- border-bottom: 1px solid #e1e1e1;
- padding: 12px 15px;
- text-align: left;
-}
-.prose th:first-child,
-.prose td:first-child {
- padding-left: 0;
-}
-.prose th:last-child,
-.prose td:last-child {
- padding-right: 0;
-}
-
-/* spacing
-–––––––––––––––––––––––––––––––––––––––––––––––––– */
-.prose button,
-.prose .button {
- margin-bottom: var(--spacing-sm);
-}
-.prose input,
-.prose textarea,
-.prose select,
-.prose fieldset {
- margin-bottom: var(--spacing-sm);
-}
-.prose pre,
-.prose blockquote,
-.prose dl,
-.prose figure,
-.prose table,
-.prose p,
-.prose ul,
-.prose ol,
-.prose form {
- margin-bottom: var(--spacing-md);
-}
-
-/* links
-–––––––––––––––––––––––––––––––––––––––––––––––––– */
-.prose a {
- color: var(--link-text-color);
- text-decoration: underline;
-}
-
-.prose a:visited {
- color: var(--link-text-color-visited);
-}
-
-.prose a:hover {
- color: var(--link-text-color-hover);
-}
-.prose a:active {
- color: var(--link-text-color-active);
-}
-
-/* misc
-–––––––––––––––––––––––––––––––––––––––––––––––––– */
-
-.prose hr {
- margin-top: 3em;
- margin-bottom: 3.5em;
- border-width: 0;
- border-top: 1px solid #e1e1e1;
-}
-
-.prose blockquote {
- margin: var(--size-6) 0 !important;
- border-left: 5px solid var(--border-color-primary);
- padding-left: var(--size-2);
-}
-
-.prose:last-child {
- margin-bottom: 0 !important;
-}
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema/tests/test_jsonschema_test_suite.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema/tests/test_jsonschema_test_suite.py
deleted file mode 100644
index 9c63714e4ff5d9326ca473c0a47bb0ecc8a98aac..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema/tests/test_jsonschema_test_suite.py
+++ /dev/null
@@ -1,257 +0,0 @@
-"""
-Test runner for the JSON Schema official test suite
-
-Tests comprehensive correctness of each draft's validator.
-
-See https://github.com/json-schema-org/JSON-Schema-Test-Suite for details.
-"""
-
-import sys
-
-from jsonschema.tests._suite import Suite
-import jsonschema
-
-SUITE = Suite()
-DRAFT3 = SUITE.version(name="draft3")
-DRAFT4 = SUITE.version(name="draft4")
-DRAFT6 = SUITE.version(name="draft6")
-DRAFT7 = SUITE.version(name="draft7")
-DRAFT201909 = SUITE.version(name="draft2019-09")
-DRAFT202012 = SUITE.version(name="draft2020-12")
-
-
-def skip(message, **kwargs):
- def skipper(test):
- if all(value == getattr(test, attr) for attr, value in kwargs.items()):
- return message
- return skipper
-
-
-def missing_format(Validator):
- def missing_format(test): # pragma: no cover
- schema = test.schema
- if (
- schema is True
- or schema is False
- or "format" not in schema
- or schema["format"] in Validator.FORMAT_CHECKER.checkers
- or test.valid
- ):
- return
-
- return f"Format checker {schema['format']!r} not found."
- return missing_format
-
-
-def complex_email_validation(test):
- if test.subject != "email":
- return
-
- message = "Complex email validation is (intentionally) unsupported."
- return skip(
- message=message,
- description="an invalid domain",
- )(test) or skip(
- message=message,
- description="an invalid IPv4-address-literal",
- )(test) or skip(
- message=message,
- description="dot after local part is not valid",
- )(test) or skip(
- message=message,
- description="dot before local part is not valid",
- )(test) or skip(
- message=message,
- description="two subsequent dots inside local part are not valid",
- )(test)
-
-
-if sys.version_info < (3, 9): # pragma: no cover
- message = "Rejecting leading zeros is 3.9+"
- allowed_leading_zeros = skip(
- message=message,
- subject="ipv4",
- description="invalid leading zeroes, as they are treated as octals",
- )
-else:
- def allowed_leading_zeros(test): # pragma: no cover
- return
-
-
-def leap_second(test):
- message = "Leap seconds are unsupported."
- return skip(
- message=message,
- subject="time",
- description="a valid time string with leap second",
- )(test) or skip(
- message=message,
- subject="time",
- description="a valid time string with leap second, Zulu",
- )(test) or skip(
- message=message,
- subject="time",
- description="a valid time string with leap second with offset",
- )(test) or skip(
- message=message,
- subject="time",
- description="valid leap second, positive time-offset",
- )(test) or skip(
- message=message,
- subject="time",
- description="valid leap second, negative time-offset",
- )(test) or skip(
- message=message,
- subject="time",
- description="valid leap second, large positive time-offset",
- )(test) or skip(
- message=message,
- subject="time",
- description="valid leap second, large negative time-offset",
- )(test) or skip(
- message=message,
- subject="time",
- description="valid leap second, zero time-offset",
- )(test) or skip(
- message=message,
- subject="date-time",
- description="a valid date-time with a leap second, UTC",
- )(test) or skip(
- message=message,
- subject="date-time",
- description="a valid date-time with a leap second, with minus offset",
- )(test)
-
-
-TestDraft3 = DRAFT3.to_unittest_testcase(
- DRAFT3.cases(),
- DRAFT3.format_cases(),
- DRAFT3.optional_cases_of(name="bignum"),
- DRAFT3.optional_cases_of(name="non-bmp-regex"),
- DRAFT3.optional_cases_of(name="zeroTerminatedFloats"),
- Validator=jsonschema.Draft3Validator,
- format_checker=jsonschema.Draft3Validator.FORMAT_CHECKER,
- skip=lambda test: (
- missing_format(jsonschema.Draft3Validator)(test)
- or complex_email_validation(test)
- ),
-)
-
-
-TestDraft4 = DRAFT4.to_unittest_testcase(
- DRAFT4.cases(),
- DRAFT4.format_cases(),
- DRAFT4.optional_cases_of(name="bignum"),
- DRAFT4.optional_cases_of(name="float-overflow"),
- DRAFT4.optional_cases_of(name="non-bmp-regex"),
- DRAFT4.optional_cases_of(name="zeroTerminatedFloats"),
- Validator=jsonschema.Draft4Validator,
- format_checker=jsonschema.Draft4Validator.FORMAT_CHECKER,
- skip=lambda test: (
- allowed_leading_zeros(test)
- or leap_second(test)
- or missing_format(jsonschema.Draft4Validator)(test)
- or complex_email_validation(test)
- ),
-)
-
-
-TestDraft6 = DRAFT6.to_unittest_testcase(
- DRAFT6.cases(),
- DRAFT6.format_cases(),
- DRAFT6.optional_cases_of(name="bignum"),
- DRAFT6.optional_cases_of(name="float-overflow"),
- DRAFT6.optional_cases_of(name="non-bmp-regex"),
- Validator=jsonschema.Draft6Validator,
- format_checker=jsonschema.Draft6Validator.FORMAT_CHECKER,
- skip=lambda test: (
- allowed_leading_zeros(test)
- or leap_second(test)
- or missing_format(jsonschema.Draft6Validator)(test)
- or complex_email_validation(test)
- ),
-)
-
-
-TestDraft7 = DRAFT7.to_unittest_testcase(
- DRAFT7.cases(),
- DRAFT7.format_cases(),
- DRAFT7.optional_cases_of(name="bignum"),
- DRAFT7.optional_cases_of(name="cross-draft"),
- DRAFT7.optional_cases_of(name="float-overflow"),
- DRAFT7.optional_cases_of(name="non-bmp-regex"),
- Validator=jsonschema.Draft7Validator,
- format_checker=jsonschema.Draft7Validator.FORMAT_CHECKER,
- skip=lambda test: (
- allowed_leading_zeros(test)
- or leap_second(test)
- or missing_format(jsonschema.Draft7Validator)(test)
- or complex_email_validation(test)
- ),
-)
-
-
-TestDraft201909 = DRAFT201909.to_unittest_testcase(
- DRAFT201909.cases(),
- DRAFT201909.optional_cases_of(name="bignum"),
- DRAFT201909.optional_cases_of(name="cross-draft"),
- DRAFT201909.optional_cases_of(name="float-overflow"),
- DRAFT201909.optional_cases_of(name="non-bmp-regex"),
- DRAFT201909.optional_cases_of(name="refOfUnknownKeyword"),
- Validator=jsonschema.Draft201909Validator,
- skip=skip(
- message="Vocabulary support is still in-progress.",
- subject="vocabulary",
- description=(
- "no validation: invalid number, but it still validates"
- ),
- ),
-)
-
-
-TestDraft201909Format = DRAFT201909.to_unittest_testcase(
- DRAFT201909.format_cases(),
- name="TestDraft201909Format",
- Validator=jsonschema.Draft201909Validator,
- format_checker=jsonschema.Draft201909Validator.FORMAT_CHECKER,
- skip=lambda test: (
- complex_email_validation(test)
- or allowed_leading_zeros(test)
- or leap_second(test)
- or missing_format(jsonschema.Draft201909Validator)(test)
- or complex_email_validation(test)
- ),
-)
-
-
-TestDraft202012 = DRAFT202012.to_unittest_testcase(
- DRAFT202012.cases(),
- DRAFT202012.optional_cases_of(name="bignum"),
- DRAFT202012.optional_cases_of(name="cross-draft"),
- DRAFT202012.optional_cases_of(name="float-overflow"),
- DRAFT202012.optional_cases_of(name="non-bmp-regex"),
- DRAFT202012.optional_cases_of(name="refOfUnknownKeyword"),
- Validator=jsonschema.Draft202012Validator,
- skip=skip(
- message="Vocabulary support is still in-progress.",
- subject="vocabulary",
- description=(
- "no validation: invalid number, but it still validates"
- ),
- ),
-)
-
-
-TestDraft202012Format = DRAFT202012.to_unittest_testcase(
- DRAFT202012.format_cases(),
- name="TestDraft202012Format",
- Validator=jsonschema.Draft202012Validator,
- format_checker=jsonschema.Draft202012Validator.FORMAT_CHECKER,
- skip=lambda test: (
- complex_email_validation(test)
- or allowed_leading_zeros(test)
- or leap_second(test)
- or missing_format(jsonschema.Draft202012Validator)(test)
- or complex_email_validation(test)
- ),
-)
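
These suite-driven cases ultimately validate instances against draft schemas, with format assertions enabled through the bundled checker. A minimal sketch of that pattern (the ipv4 schema is illustrative); without the `format_checker` argument, `format` is annotation-only and both strings would validate.

```python
import jsonschema

schema = {"type": "string", "format": "ipv4"}
validator = jsonschema.Draft202012Validator(
    schema, format_checker=jsonschema.Draft202012Validator.FORMAT_CHECKER
)

print(validator.is_valid("192.168.0.1"))  # True
print(validator.is_valid("999.0.0.1"))    # False: rejected by the format checker
```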
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_globals.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_globals.py
deleted file mode 100644
index 416a20f5e11b14b1da34e2bfb45c7961edc9097c..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_globals.py
+++ /dev/null
@@ -1,95 +0,0 @@
-"""
-Module defining global singleton classes.
-
-This module raises a RuntimeError if an attempt to reload it is made. In that
-way the identities of the classes defined here are fixed and will remain so
-even if numpy itself is reloaded. In particular, a function like the following
-will still work correctly after numpy is reloaded::
-
- def foo(arg=np._NoValue):
- if arg is np._NoValue:
- ...
-
-That was not the case when the singleton classes were defined in the numpy
-``__init__.py`` file. See gh-7844 for a discussion of the reload problem that
-motivated this module.
-
-"""
-import enum
-
-from ._utils import set_module as _set_module
-
-__all__ = ['_NoValue', '_CopyMode']
-
-
-# Disallow reloading this module so as to preserve the identities of the
-# classes defined here.
-if '_is_loaded' in globals():
- raise RuntimeError('Reloading numpy._globals is not allowed')
-_is_loaded = True
-
-
-class _NoValueType:
- """Special keyword value.
-
- The instance of this class may be used as the default value assigned to a
- keyword if no other obvious default (e.g., `None`) is suitable,
-
- Common reasons for using this keyword are:
-
- - A new keyword is added to a function, and that function forwards its
- inputs to another function or method which can be defined outside of
- NumPy. For example, ``np.std(x)`` calls ``x.std``, so when a ``keepdims``
- keyword was added that could only be forwarded if the user explicitly
- specified ``keepdims``; downstream array libraries may not have added
- the same keyword, so adding ``x.std(..., keepdims=keepdims)``
- unconditionally could have broken previously working code.
- - A keyword is being deprecated, and a deprecation warning must only be
- emitted when the keyword is used.
-
- """
- __instance = None
- def __new__(cls):
- # ensure that only one instance exists
- if not cls.__instance:
- cls.__instance = super().__new__(cls)
- return cls.__instance
-
- def __repr__(self):
-        return "<no value>"
-
-
-_NoValue = _NoValueType()
-
-
-@_set_module("numpy")
-class _CopyMode(enum.Enum):
- """
- An enumeration for the copy modes supported
- by numpy.copy() and numpy.array(). The following three modes are supported,
-
- - ALWAYS: This means that a deep copy of the input
- array will always be taken.
- - IF_NEEDED: This means that a deep copy of the input
- array will be taken only if necessary.
- - NEVER: This means that the deep copy will never be taken.
- If a copy cannot be avoided then a `ValueError` will be
- raised.
-
- Note that the buffer-protocol could in theory do copies. NumPy currently
- assumes an object exporting the buffer protocol will never do this.
- """
-
- ALWAYS = True
- IF_NEEDED = False
- NEVER = 2
-
- def __bool__(self):
- # For backwards compatibility
- if self == _CopyMode.ALWAYS:
- return True
-
- if self == _CopyMode.IF_NEEDED:
- return False
-
- raise ValueError(f"{self} is neither True nor False.")
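
A short sketch of the sentinel pattern the module docstring describes: a wrapper forwards a keyword only when the caller actually supplied it. The `reduce_sum` helper and its `keepdims` handling are illustrative, not part of NumPy.

```python
import numpy as np


def reduce_sum(arr, keepdims=np._NoValue):
    # Forward keepdims only when explicitly passed, so array-likes whose
    # .sum() lacks the keyword keep working unchanged.
    kwargs = {} if keepdims is np._NoValue else {"keepdims": keepdims}
    return arr.sum(**kwargs)


x = np.arange(6).reshape(2, 3)
print(reduce_sum(x))                 # 15
print(reduce_sum(x, keepdims=True))  # [[15]] -- shape (1, 1) is kept
```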
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_memmap.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_memmap.py
deleted file mode 100644
index ad074b312d5a0f5d551324b4be8327bacff7a849..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_memmap.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import sys
-import os
-import mmap
-import pytest
-from pathlib import Path
-from tempfile import NamedTemporaryFile, TemporaryFile
-
-from numpy import (
- memmap, sum, average, prod, ndarray, isscalar, add, subtract, multiply)
-
-from numpy import arange, allclose, asarray
-from numpy.testing import (
- assert_, assert_equal, assert_array_equal, suppress_warnings, IS_PYPY,
- break_cycles
- )
-
-class TestMemmap:
- def setup_method(self):
- self.tmpfp = NamedTemporaryFile(prefix='mmap')
- self.shape = (3, 4)
- self.dtype = 'float32'
- self.data = arange(12, dtype=self.dtype)
- self.data.resize(self.shape)
-
- def teardown_method(self):
- self.tmpfp.close()
- self.data = None
- if IS_PYPY:
- break_cycles()
- break_cycles()
-
- def test_roundtrip(self):
- # Write data to file
- fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
- shape=self.shape)
- fp[:] = self.data[:]
- del fp # Test __del__ machinery, which handles cleanup
-
- # Read data back from file
- newfp = memmap(self.tmpfp, dtype=self.dtype, mode='r',
- shape=self.shape)
- assert_(allclose(self.data, newfp))
- assert_array_equal(self.data, newfp)
- assert_equal(newfp.flags.writeable, False)
-
- def test_open_with_filename(self, tmp_path):
- tmpname = tmp_path / 'mmap'
- fp = memmap(tmpname, dtype=self.dtype, mode='w+',
- shape=self.shape)
- fp[:] = self.data[:]
- del fp
-
- def test_unnamed_file(self):
- with TemporaryFile() as f:
- fp = memmap(f, dtype=self.dtype, shape=self.shape)
- del fp
-
- def test_attributes(self):
- offset = 1
- mode = "w+"
- fp = memmap(self.tmpfp, dtype=self.dtype, mode=mode,
- shape=self.shape, offset=offset)
- assert_equal(offset, fp.offset)
- assert_equal(mode, fp.mode)
- del fp
-
- def test_filename(self, tmp_path):
- tmpname = tmp_path / "mmap"
- fp = memmap(tmpname, dtype=self.dtype, mode='w+',
- shape=self.shape)
- abspath = Path(os.path.abspath(tmpname))
- fp[:] = self.data[:]
- assert_equal(abspath, fp.filename)
- b = fp[:1]
- assert_equal(abspath, b.filename)
- del b
- del fp
-
- def test_path(self, tmp_path):
- tmpname = tmp_path / "mmap"
- fp = memmap(Path(tmpname), dtype=self.dtype, mode='w+',
- shape=self.shape)
- # os.path.realpath does not resolve symlinks on Windows
- # see: https://bugs.python.org/issue9949
- # use Path.resolve, just as memmap class does internally
- abspath = str(Path(tmpname).resolve())
- fp[:] = self.data[:]
- assert_equal(abspath, str(fp.filename.resolve()))
- b = fp[:1]
- assert_equal(abspath, str(b.filename.resolve()))
- del b
- del fp
-
- def test_filename_fileobj(self):
- fp = memmap(self.tmpfp, dtype=self.dtype, mode="w+",
- shape=self.shape)
- assert_equal(fp.filename, self.tmpfp.name)
-
- @pytest.mark.skipif(sys.platform == 'gnu0',
- reason="Known to fail on hurd")
- def test_flush(self):
- fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
- shape=self.shape)
- fp[:] = self.data[:]
- assert_equal(fp[0], self.data[0])
- fp.flush()
-
- def test_del(self):
- # Make sure a view does not delete the underlying mmap
- fp_base = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
- shape=self.shape)
- fp_base[0] = 5
- fp_view = fp_base[0:1]
- assert_equal(fp_view[0], 5)
- del fp_view
- # Should still be able to access and assign values after
- # deleting the view
- assert_equal(fp_base[0], 5)
- fp_base[0] = 6
- assert_equal(fp_base[0], 6)
-
- def test_arithmetic_drops_references(self):
- fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
- shape=self.shape)
- tmp = (fp + 10)
- if isinstance(tmp, memmap):
- assert_(tmp._mmap is not fp._mmap)
-
- def test_indexing_drops_references(self):
- fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
- shape=self.shape)
- tmp = fp[(1, 2), (2, 3)]
- if isinstance(tmp, memmap):
- assert_(tmp._mmap is not fp._mmap)
-
- def test_slicing_keeps_references(self):
- fp = memmap(self.tmpfp, dtype=self.dtype, mode='w+',
- shape=self.shape)
- assert_(fp[:2, :2]._mmap is fp._mmap)
-
- def test_view(self):
- fp = memmap(self.tmpfp, dtype=self.dtype, shape=self.shape)
- new1 = fp.view()
- new2 = new1.view()
- assert_(new1.base is fp)
- assert_(new2.base is fp)
- new_array = asarray(fp)
- assert_(new_array.base is fp)
-
- def test_ufunc_return_ndarray(self):
- fp = memmap(self.tmpfp, dtype=self.dtype, shape=self.shape)
- fp[:] = self.data
-
- with suppress_warnings() as sup:
- sup.filter(FutureWarning, "np.average currently does not preserve")
- for unary_op in [sum, average, prod]:
- result = unary_op(fp)
- assert_(isscalar(result))
- assert_(result.__class__ is self.data[0, 0].__class__)
-
- assert_(unary_op(fp, axis=0).__class__ is ndarray)
- assert_(unary_op(fp, axis=1).__class__ is ndarray)
-
- for binary_op in [add, subtract, multiply]:
- assert_(binary_op(fp, self.data).__class__ is ndarray)
- assert_(binary_op(self.data, fp).__class__ is ndarray)
- assert_(binary_op(fp, fp).__class__ is ndarray)
-
- fp += 1
- assert(fp.__class__ is memmap)
- add(fp, 1, out=fp)
- assert(fp.__class__ is memmap)
-
- def test_getitem(self):
- fp = memmap(self.tmpfp, dtype=self.dtype, shape=self.shape)
- fp[:] = self.data
-
- assert_(fp[1:, :-1].__class__ is memmap)
- # Fancy indexing returns a copy that is not memmapped
- assert_(fp[[0, 1]].__class__ is ndarray)
-
- def test_memmap_subclass(self):
- class MemmapSubClass(memmap):
- pass
-
- fp = MemmapSubClass(self.tmpfp, dtype=self.dtype, shape=self.shape)
- fp[:] = self.data
-
- # We keep previous behavior for subclasses of memmap, i.e. the
- # ufunc and __getitem__ output is never turned into a ndarray
- assert_(sum(fp, axis=0).__class__ is MemmapSubClass)
- assert_(sum(fp).__class__ is MemmapSubClass)
- assert_(fp[1:, :-1].__class__ is MemmapSubClass)
- assert(fp[[0, 1]].__class__ is MemmapSubClass)
-
- def test_mmap_offset_greater_than_allocation_granularity(self):
- size = 5 * mmap.ALLOCATIONGRANULARITY
- offset = mmap.ALLOCATIONGRANULARITY + 1
- fp = memmap(self.tmpfp, shape=size, mode='w+', offset=offset)
- assert_(fp.offset == offset)
-
- def test_no_shape(self):
- self.tmpfp.write(b'a'*16)
- mm = memmap(self.tmpfp, dtype='float64')
- assert_equal(mm.shape, (2,))
-
- def test_empty_array(self):
- # gh-12653
- with pytest.raises(ValueError, match='empty file'):
- memmap(self.tmpfp, shape=(0,4), mode='w+')
-
- self.tmpfp.write(b'\0')
-
- # ok now the file is not empty
- memmap(self.tmpfp, shape=(0,4), mode='w+')
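
A condensed round trip with `numpy.memmap`, mirroring `test_roundtrip` above; the temporary-file handling is simplified relative to the fixture.

```python
import numpy as np
from tempfile import NamedTemporaryFile

with NamedTemporaryFile(prefix="mmap") as tmp:
    data = np.arange(12, dtype="float32").reshape(3, 4)

    fp = np.memmap(tmp, dtype="float32", mode="w+", shape=(3, 4))
    fp[:] = data
    fp.flush()
    del fp  # releases the writable map

    back = np.memmap(tmp, dtype="float32", mode="r", shape=(3, 4))
    assert np.array_equal(back, data)
    assert not back.flags.writeable
```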
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/abstract/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/abstract/__init__.py
deleted file mode 100644
index 48482bd87a05766938004c75b9228eaeaf269cbf..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/abstract/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# flake8: noqa
-
-from openai.api_resources.abstract.api_resource import APIResource
-from openai.api_resources.abstract.createable_api_resource import CreateableAPIResource
-from openai.api_resources.abstract.deletable_api_resource import DeletableAPIResource
-from openai.api_resources.abstract.listable_api_resource import ListableAPIResource
-from openai.api_resources.abstract.nested_resource_class_methods import (
- nested_resource_class_methods,
-)
-from openai.api_resources.abstract.paginatable_api_resource import (
- PaginatableAPIResource,
-)
-from openai.api_resources.abstract.updateable_api_resource import UpdateableAPIResource
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/csvs.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/csvs.py
deleted file mode 100644
index 8d0edd88ffb6c7729ade55d371c30b656430ece7..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/csvs.py
+++ /dev/null
@@ -1,326 +0,0 @@
-"""
-Module for formatting output data into CSV files.
-"""
-
-from __future__ import annotations
-
-from collections.abc import (
- Hashable,
- Iterable,
- Iterator,
- Sequence,
-)
-import csv as csvlib
-import os
-from typing import (
- TYPE_CHECKING,
- Any,
- cast,
-)
-
-import numpy as np
-
-from pandas._libs import writers as libwriters
-from pandas.util._decorators import cache_readonly
-
-from pandas.core.dtypes.generic import (
- ABCDatetimeIndex,
- ABCIndex,
- ABCMultiIndex,
- ABCPeriodIndex,
-)
-from pandas.core.dtypes.missing import notna
-
-from pandas.core.indexes.api import Index
-
-from pandas.io.common import get_handle
-
-if TYPE_CHECKING:
- from pandas._typing import (
- CompressionOptions,
- FilePath,
- FloatFormatType,
- IndexLabel,
- StorageOptions,
- WriteBuffer,
- )
-
- from pandas.io.formats.format import DataFrameFormatter
-
-
-_DEFAULT_CHUNKSIZE_CELLS = 100_000
-
-
-class CSVFormatter:
- cols: np.ndarray
-
- def __init__(
- self,
- formatter: DataFrameFormatter,
- path_or_buf: FilePath | WriteBuffer[str] | WriteBuffer[bytes] = "",
- sep: str = ",",
- cols: Sequence[Hashable] | None = None,
- index_label: IndexLabel | None = None,
- mode: str = "w",
- encoding: str | None = None,
- errors: str = "strict",
- compression: CompressionOptions = "infer",
- quoting: int | None = None,
- lineterminator: str | None = "\n",
- chunksize: int | None = None,
- quotechar: str | None = '"',
- date_format: str | None = None,
- doublequote: bool = True,
- escapechar: str | None = None,
- storage_options: StorageOptions | None = None,
- ) -> None:
- self.fmt = formatter
-
- self.obj = self.fmt.frame
-
- self.filepath_or_buffer = path_or_buf
- self.encoding = encoding
- self.compression: CompressionOptions = compression
- self.mode = mode
- self.storage_options = storage_options
-
- self.sep = sep
- self.index_label = self._initialize_index_label(index_label)
- self.errors = errors
- self.quoting = quoting or csvlib.QUOTE_MINIMAL
- self.quotechar = self._initialize_quotechar(quotechar)
- self.doublequote = doublequote
- self.escapechar = escapechar
- self.lineterminator = lineterminator or os.linesep
- self.date_format = date_format
- self.cols = self._initialize_columns(cols)
- self.chunksize = self._initialize_chunksize(chunksize)
-
- @property
- def na_rep(self) -> str:
- return self.fmt.na_rep
-
- @property
- def float_format(self) -> FloatFormatType | None:
- return self.fmt.float_format
-
- @property
- def decimal(self) -> str:
- return self.fmt.decimal
-
- @property
- def header(self) -> bool | list[str]:
- return self.fmt.header
-
- @property
- def index(self) -> bool:
- return self.fmt.index
-
- def _initialize_index_label(self, index_label: IndexLabel | None) -> IndexLabel:
- if index_label is not False:
- if index_label is None:
- return self._get_index_label_from_obj()
- elif not isinstance(index_label, (list, tuple, np.ndarray, ABCIndex)):
- # given a string for a DF with Index
- return [index_label]
- return index_label
-
- def _get_index_label_from_obj(self) -> Sequence[Hashable]:
- if isinstance(self.obj.index, ABCMultiIndex):
- return self._get_index_label_multiindex()
- else:
- return self._get_index_label_flat()
-
- def _get_index_label_multiindex(self) -> Sequence[Hashable]:
- return [name or "" for name in self.obj.index.names]
-
- def _get_index_label_flat(self) -> Sequence[Hashable]:
- index_label = self.obj.index.name
- return [""] if index_label is None else [index_label]
-
- def _initialize_quotechar(self, quotechar: str | None) -> str | None:
- if self.quoting != csvlib.QUOTE_NONE:
- # prevents crash in _csv
- return quotechar
- return None
-
- @property
- def has_mi_columns(self) -> bool:
- return bool(isinstance(self.obj.columns, ABCMultiIndex))
-
- def _initialize_columns(self, cols: Iterable[Hashable] | None) -> np.ndarray:
- # validate mi options
- if self.has_mi_columns:
- if cols is not None:
- msg = "cannot specify cols with a MultiIndex on the columns"
- raise TypeError(msg)
-
- if cols is not None:
- if isinstance(cols, ABCIndex):
- cols = cols._format_native_types(**self._number_format)
- else:
- cols = list(cols)
- self.obj = self.obj.loc[:, cols]
-
- # update columns to include possible multiplicity of dupes
- # and make sure cols is just a list of labels
- new_cols = self.obj.columns
- return new_cols._format_native_types(**self._number_format)
-
- def _initialize_chunksize(self, chunksize: int | None) -> int:
- if chunksize is None:
- return (_DEFAULT_CHUNKSIZE_CELLS // (len(self.cols) or 1)) or 1
- return int(chunksize)
-
- @property
- def _number_format(self) -> dict[str, Any]:
- """Dictionary used for storing number formatting settings."""
- return {
- "na_rep": self.na_rep,
- "float_format": self.float_format,
- "date_format": self.date_format,
- "quoting": self.quoting,
- "decimal": self.decimal,
- }
-
- @cache_readonly
- def data_index(self) -> Index:
- data_index = self.obj.index
- if (
- isinstance(data_index, (ABCDatetimeIndex, ABCPeriodIndex))
- and self.date_format is not None
- ):
- data_index = Index(
- [x.strftime(self.date_format) if notna(x) else "" for x in data_index]
- )
- elif isinstance(data_index, ABCMultiIndex):
- data_index = data_index.remove_unused_levels()
- return data_index
-
- @property
- def nlevels(self) -> int:
- if self.index:
- return getattr(self.data_index, "nlevels", 1)
- else:
- return 0
-
- @property
- def _has_aliases(self) -> bool:
- return isinstance(self.header, (tuple, list, np.ndarray, ABCIndex))
-
- @property
- def _need_to_save_header(self) -> bool:
- return bool(self._has_aliases or self.header)
-
- @property
- def write_cols(self) -> Sequence[Hashable]:
- if self._has_aliases:
- assert not isinstance(self.header, bool)
- if len(self.header) != len(self.cols):
- raise ValueError(
- f"Writing {len(self.cols)} cols but got {len(self.header)} aliases"
- )
- return self.header
- else:
- # self.cols is an ndarray derived from Index._format_native_types,
- # so its entries are strings, i.e. hashable
- return cast(Sequence[Hashable], self.cols)
-
- @property
- def encoded_labels(self) -> list[Hashable]:
- encoded_labels: list[Hashable] = []
-
- if self.index and self.index_label:
- assert isinstance(self.index_label, Sequence)
- encoded_labels = list(self.index_label)
-
- if not self.has_mi_columns or self._has_aliases:
- encoded_labels += list(self.write_cols)
-
- return encoded_labels
-
- def save(self) -> None:
- """
- Create the writer & save.
- """
- # apply compression and byte/text conversion
- with get_handle(
- self.filepath_or_buffer,
- self.mode,
- encoding=self.encoding,
- errors=self.errors,
- compression=self.compression,
- storage_options=self.storage_options,
- ) as handles:
- # Note: self.encoding is irrelevant here
- self.writer = csvlib.writer(
- handles.handle,
- lineterminator=self.lineterminator,
- delimiter=self.sep,
- quoting=self.quoting,
- doublequote=self.doublequote,
- escapechar=self.escapechar,
- quotechar=self.quotechar,
- )
-
- self._save()
-
- def _save(self) -> None:
- if self._need_to_save_header:
- self._save_header()
- self._save_body()
-
- def _save_header(self) -> None:
- if not self.has_mi_columns or self._has_aliases:
- self.writer.writerow(self.encoded_labels)
- else:
- for row in self._generate_multiindex_header_rows():
- self.writer.writerow(row)
-
- def _generate_multiindex_header_rows(self) -> Iterator[list[Hashable]]:
- columns = self.obj.columns
- for i in range(columns.nlevels):
- # we need at least 1 index column to write our col names
- col_line = []
- if self.index:
- # name is the first column
- col_line.append(columns.names[i])
-
- if isinstance(self.index_label, list) and len(self.index_label) > 1:
- col_line.extend([""] * (len(self.index_label) - 1))
-
- col_line.extend(columns._get_level_values(i))
- yield col_line
-
- # Write out the index line if it's not empty.
- # Otherwise, we will print out an extraneous
- # blank line between the mi and the data rows.
- if self.encoded_labels and set(self.encoded_labels) != {""}:
- yield self.encoded_labels + [""] * len(columns)
-
- def _save_body(self) -> None:
- nrows = len(self.data_index)
- chunks = (nrows // self.chunksize) + 1
- for i in range(chunks):
- start_i = i * self.chunksize
- end_i = min(start_i + self.chunksize, nrows)
- if start_i >= end_i:
- break
- self._save_chunk(start_i, end_i)
-
- def _save_chunk(self, start_i: int, end_i: int) -> None:
- # create the data for a chunk
- slicer = slice(start_i, end_i)
- df = self.obj.iloc[slicer]
-
- res = df._mgr.to_native_types(**self._number_format)
- data = [res.iget_values(i) for i in range(len(res.items))]
-
- ix = self.data_index[slicer]._format_native_types(**self._number_format)
- libwriters.write_csv_rows(
- data,
- ix,
- self.nlevels,
- self.cols,
- self.writer,
- )
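
`CSVFormatter` is the machinery behind `DataFrame.to_csv`; below is a short sketch of the options it wires through (separator, quoting, date formatting, chunked writing). The frame contents are illustrative.

```python
import csv

import pandas as pd

df = pd.DataFrame(
    {"price": [1.5, 2.25], "when": pd.to_datetime(["2023-01-01", "2023-06-30"])}
)

# With no path given, to_csv returns the rendered text instead of writing a file.
text = df.to_csv(
    sep=";",
    quoting=csv.QUOTE_NONNUMERIC,
    date_format="%Y-%m-%d",
    index_label="row",
    chunksize=1,  # rows are emitted in chunks, as in _save_body above
)
print(text)
```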
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_unary.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_unary.py
deleted file mode 100644
index 5e29d3c868983bac65ca0df6679c96798ee9c915..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_unary.py
+++ /dev/null
@@ -1,194 +0,0 @@
-from decimal import Decimal
-
-import numpy as np
-import pytest
-
-from pandas.compat.numpy import np_version_gte1p25
-
-import pandas as pd
-import pandas._testing as tm
-
-
-class TestDataFrameUnaryOperators:
- # __pos__, __neg__, __invert__
-
- @pytest.mark.parametrize(
- "df,expected",
- [
- (pd.DataFrame({"a": [-1, 1]}), pd.DataFrame({"a": [1, -1]})),
- (pd.DataFrame({"a": [False, True]}), pd.DataFrame({"a": [True, False]})),
- (
- pd.DataFrame({"a": pd.Series(pd.to_timedelta([-1, 1]))}),
- pd.DataFrame({"a": pd.Series(pd.to_timedelta([1, -1]))}),
- ),
- ],
- )
- def test_neg_numeric(self, df, expected):
- tm.assert_frame_equal(-df, expected)
- tm.assert_series_equal(-df["a"], expected["a"])
-
- @pytest.mark.parametrize(
- "df, expected",
- [
- (np.array([1, 2], dtype=object), np.array([-1, -2], dtype=object)),
- ([Decimal("1.0"), Decimal("2.0")], [Decimal("-1.0"), Decimal("-2.0")]),
- ],
- )
- def test_neg_object(self, df, expected):
- # GH#21380
- df = pd.DataFrame({"a": df})
- expected = pd.DataFrame({"a": expected})
- tm.assert_frame_equal(-df, expected)
- tm.assert_series_equal(-df["a"], expected["a"])
-
- @pytest.mark.parametrize(
- "df",
- [
- pd.DataFrame({"a": ["a", "b"]}),
- pd.DataFrame({"a": pd.to_datetime(["2017-01-22", "1970-01-01"])}),
- ],
- )
- def test_neg_raises(self, df):
- msg = (
- "bad operand type for unary -: 'str'|"
- r"bad operand type for unary -: 'DatetimeArray'"
- )
- with pytest.raises(TypeError, match=msg):
- (-df)
- with pytest.raises(TypeError, match=msg):
- (-df["a"])
-
- def test_invert(self, float_frame):
- df = float_frame
-
- tm.assert_frame_equal(-(df < 0), ~(df < 0))
-
- def test_invert_mixed(self):
- shape = (10, 5)
- df = pd.concat(
- [
- pd.DataFrame(np.zeros(shape, dtype="bool")),
- pd.DataFrame(np.zeros(shape, dtype=int)),
- ],
- axis=1,
- ignore_index=True,
- )
- result = ~df
- expected = pd.concat(
- [
- pd.DataFrame(np.ones(shape, dtype="bool")),
- pd.DataFrame(-np.ones(shape, dtype=int)),
- ],
- axis=1,
- ignore_index=True,
- )
- tm.assert_frame_equal(result, expected)
-
- def test_invert_empty_not_input(self):
- # GH#51032
- df = pd.DataFrame()
- result = ~df
- tm.assert_frame_equal(df, result)
- assert df is not result
-
- @pytest.mark.parametrize(
- "df",
- [
- pd.DataFrame({"a": [-1, 1]}),
- pd.DataFrame({"a": [False, True]}),
- pd.DataFrame({"a": pd.Series(pd.to_timedelta([-1, 1]))}),
- ],
- )
- def test_pos_numeric(self, df):
- # GH#16073
- tm.assert_frame_equal(+df, df)
- tm.assert_series_equal(+df["a"], df["a"])
-
- @pytest.mark.parametrize(
- "df",
- [
- pd.DataFrame({"a": np.array([-1, 2], dtype=object)}),
- pd.DataFrame({"a": [Decimal("-1.0"), Decimal("2.0")]}),
- ],
- )
- def test_pos_object(self, df):
- # GH#21380
- tm.assert_frame_equal(+df, df)
- tm.assert_series_equal(+df["a"], df["a"])
-
- @pytest.mark.parametrize(
- "df",
- [
- pytest.param(
- pd.DataFrame({"a": ["a", "b"]}),
- # filterwarnings removable once min numpy version is 1.25
- marks=[
- pytest.mark.filterwarnings("ignore:Applying:DeprecationWarning")
- ],
- ),
- ],
- )
- def test_pos_object_raises(self, df):
- # GH#21380
- if np_version_gte1p25:
- with pytest.raises(
- TypeError, match=r"^bad operand type for unary \+: \'str\'$"
- ):
- tm.assert_frame_equal(+df, df)
- else:
- tm.assert_series_equal(+df["a"], df["a"])
-
- @pytest.mark.parametrize(
- "df", [pd.DataFrame({"a": pd.to_datetime(["2017-01-22", "1970-01-01"])})]
- )
- def test_pos_raises(self, df):
- msg = r"bad operand type for unary \+: 'DatetimeArray'"
- with pytest.raises(TypeError, match=msg):
- (+df)
- with pytest.raises(TypeError, match=msg):
- (+df["a"])
-
- def test_unary_nullable(self):
- df = pd.DataFrame(
- {
- "a": pd.array([1, -2, 3, pd.NA], dtype="Int64"),
- "b": pd.array([4.0, -5.0, 6.0, pd.NA], dtype="Float32"),
- "c": pd.array([True, False, False, pd.NA], dtype="boolean"),
- # include numpy bool to make sure bool-vs-boolean behavior
- # is consistent in non-NA locations
- "d": np.array([True, False, False, True]),
- }
- )
-
- result = +df
- res_ufunc = np.positive(df)
- expected = df
- # TODO: assert that we have copies?
- tm.assert_frame_equal(result, expected)
- tm.assert_frame_equal(res_ufunc, expected)
-
- result = -df
- res_ufunc = np.negative(df)
- expected = pd.DataFrame(
- {
- "a": pd.array([-1, 2, -3, pd.NA], dtype="Int64"),
- "b": pd.array([-4.0, 5.0, -6.0, pd.NA], dtype="Float32"),
- "c": pd.array([False, True, True, pd.NA], dtype="boolean"),
- "d": np.array([False, True, True, False]),
- }
- )
- tm.assert_frame_equal(result, expected)
- tm.assert_frame_equal(res_ufunc, expected)
-
- result = abs(df)
- res_ufunc = np.abs(df)
- expected = pd.DataFrame(
- {
- "a": pd.array([1, 2, 3, pd.NA], dtype="Int64"),
- "b": pd.array([4.0, 5.0, 6.0, pd.NA], dtype="Float32"),
- "c": pd.array([True, False, False, pd.NA], dtype="boolean"),
- "d": np.array([True, False, False, True]),
- }
- )
- tm.assert_frame_equal(result, expected)
- tm.assert_frame_equal(res_ufunc, expected)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/common/test_read_errors.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/common/test_read_errors.py
deleted file mode 100644
index 492b4d5ec058ea656d1cb25649ddfd22df027a8a..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/parser/common/test_read_errors.py
+++ /dev/null
@@ -1,272 +0,0 @@
-"""
-Tests that work on both the Python and C engines but do not have a
-specific classification into the other test modules.
-"""
-import codecs
-import csv
-from io import StringIO
-import os
-from pathlib import Path
-
-import numpy as np
-import pytest
-
-from pandas.compat import PY311
-from pandas.errors import (
- EmptyDataError,
- ParserError,
-)
-
-from pandas import DataFrame
-import pandas._testing as tm
-
-pytestmark = pytest.mark.usefixtures("pyarrow_skip")
-
-
-def test_empty_decimal_marker(all_parsers):
- data = """A|B|C
-1|2,334|5
-10|13|10.
-"""
- # Parsers support only length-1 decimals
- msg = "Only length-1 decimal markers supported"
- parser = all_parsers
-
- with pytest.raises(ValueError, match=msg):
- parser.read_csv(StringIO(data), decimal="")
-
-
-def test_bad_stream_exception(all_parsers, csv_dir_path):
- # see gh-13652
- #
- # This test validates that both the Python engine and C engine will
- # raise UnicodeDecodeError instead of C engine raising ParserError
- # and swallowing the exception that caused read to fail.
- path = os.path.join(csv_dir_path, "sauron.SHIFT_JIS.csv")
- codec = codecs.lookup("utf-8")
- utf8 = codecs.lookup("utf-8")
- parser = all_parsers
- msg = "'utf-8' codec can't decode byte"
-
- # Stream must be binary UTF8.
- with open(path, "rb") as handle, codecs.StreamRecoder(
- handle, utf8.encode, utf8.decode, codec.streamreader, codec.streamwriter
- ) as stream:
- with pytest.raises(UnicodeDecodeError, match=msg):
- parser.read_csv(stream)
-
-
-def test_malformed(all_parsers):
- # see gh-6607
- parser = all_parsers
- data = """ignore
-A,B,C
-1,2,3 # comment
-1,2,3,4,5
-2,3,4
-"""
- msg = "Expected 3 fields in line 4, saw 5"
- with pytest.raises(ParserError, match=msg):
- parser.read_csv(StringIO(data), header=1, comment="#")
-
-
-@pytest.mark.parametrize("nrows", [5, 3, None])
-def test_malformed_chunks(all_parsers, nrows):
- data = """ignore
-A,B,C
-skip
-1,2,3
-3,5,10 # comment
-1,2,3,4,5
-2,3,4
-"""
- parser = all_parsers
- msg = "Expected 3 fields in line 6, saw 5"
- with parser.read_csv(
- StringIO(data), header=1, comment="#", iterator=True, chunksize=1, skiprows=[2]
- ) as reader:
- with pytest.raises(ParserError, match=msg):
- reader.read(nrows)
-
-
-def test_catch_too_many_names(all_parsers):
- # see gh-5156
- data = """\
-1,2,3
-4,,6
-7,8,9
-10,11,12\n"""
- parser = all_parsers
- msg = (
- "Too many columns specified: expected 4 and found 3"
- if parser.engine == "c"
- else "Number of passed names did not match "
- "number of header fields in the file"
- )
-
- with pytest.raises(ValueError, match=msg):
- parser.read_csv(StringIO(data), header=0, names=["a", "b", "c", "d"])
-
-
-@pytest.mark.parametrize("nrows", [0, 1, 2, 3, 4, 5])
-def test_raise_on_no_columns(all_parsers, nrows):
- parser = all_parsers
- data = "\n" * nrows
-
- msg = "No columns to parse from file"
- with pytest.raises(EmptyDataError, match=msg):
- parser.read_csv(StringIO(data))
-
-
-def test_unexpected_keyword_parameter_exception(all_parsers):
- # GH-34976
- parser = all_parsers
-
- msg = "{}\\(\\) got an unexpected keyword argument 'foo'"
- with pytest.raises(TypeError, match=msg.format("read_csv")):
- parser.read_csv("foo.csv", foo=1)
- with pytest.raises(TypeError, match=msg.format("read_table")):
- parser.read_table("foo.tsv", foo=1)
-
-
-def test_suppress_error_output(all_parsers, capsys):
- # see gh-15925
- parser = all_parsers
- data = "a\n1\n1,2,3\n4\n5,6,7"
- expected = DataFrame({"a": [1, 4]})
-
- result = parser.read_csv(StringIO(data), on_bad_lines="skip")
- tm.assert_frame_equal(result, expected)
-
- captured = capsys.readouterr()
- assert captured.err == ""
-
-
-def test_error_bad_lines(all_parsers):
- # see gh-15925
- parser = all_parsers
- data = "a\n1\n1,2,3\n4\n5,6,7"
-
- msg = "Expected 1 fields in line 3, saw 3"
- with pytest.raises(ParserError, match=msg):
- parser.read_csv(StringIO(data), on_bad_lines="error")
-
-
-def test_warn_bad_lines(all_parsers, capsys):
- # see gh-15925
- parser = all_parsers
- data = "a\n1\n1,2,3\n4\n5,6,7"
- expected = DataFrame({"a": [1, 4]})
-
- result = parser.read_csv(StringIO(data), on_bad_lines="warn")
- tm.assert_frame_equal(result, expected)
-
- captured = capsys.readouterr()
- assert "Skipping line 3" in captured.err
- assert "Skipping line 5" in captured.err
-
-
-def test_read_csv_wrong_num_columns(all_parsers):
- # Too few columns.
- data = """A,B,C,D,E,F
-1,2,3,4,5,6
-6,7,8,9,10,11,12
-11,12,13,14,15,16
-"""
- parser = all_parsers
- msg = "Expected 6 fields in line 3, saw 7"
-
- with pytest.raises(ParserError, match=msg):
- parser.read_csv(StringIO(data))
-
-
-def test_null_byte_char(request, all_parsers):
- # see gh-2741
- data = "\x00,foo"
- names = ["a", "b"]
- parser = all_parsers
-
- if parser.engine == "c" or (parser.engine == "python" and PY311):
- if parser.engine == "python" and PY311:
- request.node.add_marker(
- pytest.mark.xfail(
- reason="In Python 3.11, this is read as an empty character not null"
- )
- )
- expected = DataFrame([[np.nan, "foo"]], columns=names)
- out = parser.read_csv(StringIO(data), names=names)
- tm.assert_frame_equal(out, expected)
- else:
- msg = "NULL byte detected"
- with pytest.raises(ParserError, match=msg):
- parser.read_csv(StringIO(data), names=names)
-
-
-@pytest.mark.filterwarnings("always::ResourceWarning")
-def test_open_file(request, all_parsers):
- # GH 39024
- parser = all_parsers
- if parser.engine == "c":
- request.node.add_marker(
- pytest.mark.xfail(
- reason=f"{parser.engine} engine does not support sep=None "
- f"with delim_whitespace=False"
- )
- )
-
- with tm.ensure_clean() as path:
- file = Path(path)
- file.write_bytes(b"\xe4\na\n1")
-
- with tm.assert_produces_warning(None):
- # should not trigger a ResourceWarning
- with pytest.raises(csv.Error, match="Could not determine delimiter"):
- parser.read_csv(file, sep=None, encoding_errors="replace")
-
-
-def test_invalid_on_bad_line(all_parsers):
- parser = all_parsers
- data = "a\n1\n1,2,3\n4\n5,6,7"
- with pytest.raises(ValueError, match="Argument abc is invalid for on_bad_lines"):
- parser.read_csv(StringIO(data), on_bad_lines="abc")
-
-
-def test_bad_header_uniform_error(all_parsers):
- parser = all_parsers
- data = "+++123456789...\ncol1,col2,col3,col4\n1,2,3,4\n"
- msg = "Expected 2 fields in line 2, saw 4"
- if parser.engine == "c":
- msg = (
- "Could not construct index. Requested to use 1 "
- "number of columns, but 3 left to parse."
- )
-
- with pytest.raises(ParserError, match=msg):
- parser.read_csv(StringIO(data), index_col=0, on_bad_lines="error")
-
-
-def test_on_bad_lines_warn_correct_formatting(all_parsers, capsys):
- # see gh-15925
- parser = all_parsers
- data = """1,2
-a,b
-a,b,c
-a,b,d
-a,b
-"""
- expected = DataFrame({"1": "a", "2": ["b"] * 2})
-
- result = parser.read_csv(StringIO(data), on_bad_lines="warn")
- tm.assert_frame_equal(result, expected)
-
- captured = capsys.readouterr()
- if parser.engine == "c":
- warn = """Skipping line 3: expected 2 fields, saw 3
-Skipping line 4: expected 2 fields, saw 3
-
-"""
- else:
- warn = """Skipping line 3: Expected 2 fields in line 3, saw 3
-Skipping line 4: Expected 2 fields in line 4, saw 3
-"""
- assert captured.err == warn
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_categorical.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_categorical.py
deleted file mode 100644
index b227c935c2b624a05ac10350ddeebb1333978fec..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_categorical.py
+++ /dev/null
@@ -1,214 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas import (
- Categorical,
- DataFrame,
- Series,
- _testing as tm,
- concat,
- read_hdf,
-)
-from pandas.tests.io.pytables.common import (
- _maybe_remove,
- ensure_clean_store,
-)
-
-pytestmark = pytest.mark.single_cpu
-
-
-def test_categorical(setup_path):
- with ensure_clean_store(setup_path) as store:
- # Basic
- _maybe_remove(store, "s")
- s = Series(
- Categorical(
- ["a", "b", "b", "a", "a", "c"],
- categories=["a", "b", "c", "d"],
- ordered=False,
- )
- )
- store.append("s", s, format="table")
- result = store.select("s")
- tm.assert_series_equal(s, result)
-
- _maybe_remove(store, "s_ordered")
- s = Series(
- Categorical(
- ["a", "b", "b", "a", "a", "c"],
- categories=["a", "b", "c", "d"],
- ordered=True,
- )
- )
- store.append("s_ordered", s, format="table")
- result = store.select("s_ordered")
- tm.assert_series_equal(s, result)
-
- _maybe_remove(store, "df")
- df = DataFrame({"s": s, "vals": [1, 2, 3, 4, 5, 6]})
- store.append("df", df, format="table")
- result = store.select("df")
- tm.assert_frame_equal(result, df)
-
- # Dtypes
- _maybe_remove(store, "si")
- s = Series([1, 1, 2, 2, 3, 4, 5]).astype("category")
- store.append("si", s)
- result = store.select("si")
- tm.assert_series_equal(result, s)
-
- _maybe_remove(store, "si2")
- s = Series([1, 1, np.nan, 2, 3, 4, 5]).astype("category")
- store.append("si2", s)
- result = store.select("si2")
- tm.assert_series_equal(result, s)
-
- # Multiple
- _maybe_remove(store, "df2")
- df2 = df.copy()
- df2["s2"] = Series(list("abcdefg")).astype("category")
- store.append("df2", df2)
- result = store.select("df2")
- tm.assert_frame_equal(result, df2)
-
- # Make sure the metadata is OK
- info = store.info()
- assert "/df2 " in info
- # df2._mgr.blocks[0] and df2._mgr.blocks[2] are Categorical
- assert "/df2/meta/values_block_0/meta" in info
- assert "/df2/meta/values_block_2/meta" in info
-
- # unordered
- _maybe_remove(store, "s2")
- s = Series(
- Categorical(
- ["a", "b", "b", "a", "a", "c"],
- categories=["a", "b", "c", "d"],
- ordered=False,
- )
- )
- store.append("s2", s, format="table")
- result = store.select("s2")
- tm.assert_series_equal(result, s)
-
- # Query
- _maybe_remove(store, "df3")
- store.append("df3", df, data_columns=["s"])
- expected = df[df.s.isin(["b", "c"])]
- result = store.select("df3", where=['s in ["b","c"]'])
- tm.assert_frame_equal(result, expected)
-
- expected = df[df.s.isin(["b", "c"])]
- result = store.select("df3", where=['s = ["b","c"]'])
- tm.assert_frame_equal(result, expected)
-
- expected = df[df.s.isin(["d"])]
- result = store.select("df3", where=['s in ["d"]'])
- tm.assert_frame_equal(result, expected)
-
- expected = df[df.s.isin(["f"])]
- result = store.select("df3", where=['s in ["f"]'])
- tm.assert_frame_equal(result, expected)
-
- # Appending with same categories is ok
- store.append("df3", df)
-
- df = concat([df, df])
- expected = df[df.s.isin(["b", "c"])]
- result = store.select("df3", where=['s in ["b","c"]'])
- tm.assert_frame_equal(result, expected)
-
- # Appending must have the same categories
- df3 = df.copy()
- df3["s"] = df3["s"].cat.remove_unused_categories()
-
- msg = "cannot append a categorical with different categories to the existing"
- with pytest.raises(ValueError, match=msg):
- store.append("df3", df3)
-
- # Remove, and make sure meta data is removed (its a recursive
- # removal so should be).
- result = store.select("df3/meta/s/meta")
- assert result is not None
- store.remove("df3")
-
- with pytest.raises(
- KeyError, match="'No object named df3/meta/s/meta in the file'"
- ):
- store.select("df3/meta/s/meta")
-
-
-def test_categorical_conversion(tmp_path, setup_path):
- # GH13322
- # Check that read_hdf with categorical columns doesn't return rows if
- # where criteria isn't met.
- obsids = ["ESP_012345_6789", "ESP_987654_3210"]
- imgids = ["APF00006np", "APF0001imm"]
- data = [4.3, 9.8]
-
- # Test without categories
- df = DataFrame({"obsids": obsids, "imgids": imgids, "data": data})
-
- # We are expecting an empty DataFrame matching types of df
- expected = df.iloc[[], :]
- path = tmp_path / setup_path
- df.to_hdf(path, "df", format="table", data_columns=True)
- result = read_hdf(path, "df", where="obsids=B")
- tm.assert_frame_equal(result, expected)
-
- # Test with categories
- df.obsids = df.obsids.astype("category")
- df.imgids = df.imgids.astype("category")
-
- # We are expecting an empty DataFrame matching types of df
- expected = df.iloc[[], :]
- path = tmp_path / setup_path
- df.to_hdf(path, "df", format="table", data_columns=True)
- result = read_hdf(path, "df", where="obsids=B")
- tm.assert_frame_equal(result, expected)
-
-
-def test_categorical_nan_only_columns(tmp_path, setup_path):
- # GH18413
- # Check that read_hdf with categorical columns with NaN-only values can
- # be read back.
- df = DataFrame(
- {
- "a": ["a", "b", "c", np.nan],
- "b": [np.nan, np.nan, np.nan, np.nan],
- "c": [1, 2, 3, 4],
- "d": Series([None] * 4, dtype=object),
- }
- )
- df["a"] = df.a.astype("category")
- df["b"] = df.b.astype("category")
- df["d"] = df.b.astype("category")
- expected = df
- path = tmp_path / setup_path
- df.to_hdf(path, "df", format="table", data_columns=True)
- result = read_hdf(path, "df")
- tm.assert_frame_equal(result, expected)
-
-
-@pytest.mark.parametrize(
- "where, df, expected",
- [
- ('col=="q"', DataFrame({"col": ["a", "b", "s"]}), DataFrame({"col": []})),
- ('col=="a"', DataFrame({"col": ["a", "b", "s"]}), DataFrame({"col": ["a"]})),
- ],
-)
-def test_convert_value(
- tmp_path, setup_path, where: str, df: DataFrame, expected: DataFrame
-):
- # GH39420
- # Check that read_hdf with categorical columns can filter by where condition.
- df.col = df.col.astype("category")
- max_widths = {"col": 1}
- categorical_values = sorted(df.col.unique())
- expected.col = expected.col.astype("category")
- expected.col = expected.col.cat.set_categories(categorical_values)
-
- path = tmp_path / setup_path
- df.to_hdf(path, "df", format="table", min_itemsize=max_widths)
- result = read_hdf(path, where=where)
- tm.assert_frame_equal(result, expected)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_timedeltas.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_timedeltas.py
deleted file mode 100644
index 4784a6d0d600dcc77e359fb3d7d56301f78270d2..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_timedeltas.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import re
-
-import numpy as np
-import pytest
-
-from pandas._libs.tslibs.timedeltas import (
- array_to_timedelta64,
- delta_to_nanoseconds,
- ints_to_pytimedelta,
-)
-
-from pandas import (
- Timedelta,
- offsets,
-)
-import pandas._testing as tm
-
-
-@pytest.mark.parametrize(
- "obj,expected",
- [
- (np.timedelta64(14, "D"), 14 * 24 * 3600 * 1e9),
- (Timedelta(minutes=-7), -7 * 60 * 1e9),
- (Timedelta(minutes=-7).to_pytimedelta(), -7 * 60 * 1e9),
- (Timedelta(seconds=1234e-9), 1234), # GH43764, GH40946
- (
- Timedelta(seconds=1e-9, milliseconds=1e-5, microseconds=1e-1),
- 111,
- ), # GH43764
- (
- Timedelta(days=1, seconds=1e-9, milliseconds=1e-5, microseconds=1e-1),
- 24 * 3600e9 + 111,
- ), # GH43764
- (offsets.Nano(125), 125),
- ],
-)
-def test_delta_to_nanoseconds(obj, expected):
- result = delta_to_nanoseconds(obj)
- assert result == expected
-
-
-def test_delta_to_nanoseconds_error():
- obj = np.array([123456789], dtype="m8[ns]")
-
- with pytest.raises(TypeError, match=""):
- delta_to_nanoseconds(obj)
-
- with pytest.raises(TypeError, match="float"):
- delta_to_nanoseconds(1.5)
- with pytest.raises(TypeError, match="int"):
- delta_to_nanoseconds(1)
- with pytest.raises(TypeError, match="int"):
- delta_to_nanoseconds(np.int64(2))
- with pytest.raises(TypeError, match="int"):
- delta_to_nanoseconds(np.int32(3))
-
-
-def test_delta_to_nanoseconds_td64_MY_raises():
- msg = (
- "delta_to_nanoseconds does not support Y or M units, "
- "as their duration in nanoseconds is ambiguous"
- )
-
- td = np.timedelta64(1234, "Y")
-
- with pytest.raises(ValueError, match=msg):
- delta_to_nanoseconds(td)
-
- td = np.timedelta64(1234, "M")
-
- with pytest.raises(ValueError, match=msg):
- delta_to_nanoseconds(td)
-
-
-@pytest.mark.parametrize("unit", ["Y", "M"])
-def test_unsupported_td64_unit_raises(unit):
- # GH 52806
- with pytest.raises(
- ValueError,
- match=f"Unit {unit} is not supported. "
- "Only unambiguous timedelta values durations are supported. "
- "Allowed units are 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns'",
- ):
- Timedelta(np.timedelta64(1, unit))
-
-
-def test_huge_nanoseconds_overflow():
- # GH 32402
- assert delta_to_nanoseconds(Timedelta(1e10)) == 1e10
- assert delta_to_nanoseconds(Timedelta(nanoseconds=1e10)) == 1e10
-
-
-@pytest.mark.parametrize(
- "kwargs", [{"Seconds": 1}, {"seconds": 1, "Nanoseconds": 1}, {"Foo": 2}]
-)
-def test_kwarg_assertion(kwargs):
- err_message = (
- "cannot construct a Timedelta from the passed arguments, "
- "allowed keywords are "
- "[weeks, days, hours, minutes, seconds, "
- "milliseconds, microseconds, nanoseconds]"
- )
-
- with pytest.raises(ValueError, match=re.escape(err_message)):
- Timedelta(**kwargs)
-
-
-class TestArrayToTimedelta64:
- def test_array_to_timedelta64_string_with_unit_2d_raises(self):
- # check the 'unit is not None and errors != "coerce"' path
- # in array_to_timedelta64 raises correctly with 2D values
- values = np.array([["1", 2], [3, "4"]], dtype=object)
- with pytest.raises(ValueError, match="unit must not be specified"):
- array_to_timedelta64(values, unit="s")
-
- def test_array_to_timedelta64_non_object_raises(self):
- # check we raise, not segfault
- values = np.arange(5)
-
- msg = "'values' must have object dtype"
- with pytest.raises(TypeError, match=msg):
- array_to_timedelta64(values)
-
-
-@pytest.mark.parametrize("unit", ["s", "ms", "us"])
-def test_ints_to_pytimedelta(unit):
- # tests for non-nanosecond cases
- arr = np.arange(6, dtype=np.int64).view(f"m8[{unit}]")
-
- res = ints_to_pytimedelta(arr, box=False)
- # For non-nanosecond, .astype(object) gives pytimedelta objects
- # instead of integers
- expected = arr.astype(object)
- tm.assert_numpy_array_equal(res, expected)
-
- res = ints_to_pytimedelta(arr, box=True)
- expected = np.array([Timedelta(x) for x in arr], dtype=object)
- tm.assert_numpy_array_equal(res, expected)
-
-
-@pytest.mark.parametrize("unit", ["Y", "M", "ps", "fs", "as"])
-def test_ints_to_pytimedelta_unsupported(unit):
- arr = np.arange(6, dtype=np.int64).view(f"m8[{unit}]")
-
- with pytest.raises(NotImplementedError, match=r"\d{1,2}"):
- ints_to_pytimedelta(arr, box=False)
- msg = "Only resolutions 's', 'ms', 'us', 'ns' are supported"
- with pytest.raises(NotImplementedError, match=msg):
- ints_to_pytimedelta(arr, box=True)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/entrypoints.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/entrypoints.py
deleted file mode 100644
index 1504a12916b10c2de007d0ac0e1a3531ac79f8a7..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/entrypoints.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import sys
-from typing import List, Optional
-
-from pip._internal.cli.main import main
-
-
-def _wrapper(args: Optional[List[str]] = None) -> int:
- """Central wrapper for all old entrypoints.
-
- Historically pip has had several entrypoints defined. Because of issues
- arising from PATH, sys.path, multiple Pythons, their interactions, and most
- of them having a pip installed, users suffer every time an entrypoint gets
- moved.
-
- To alleviate this pain, and provide a mechanism for warning users and
- directing them to an appropriate place for help, we now define all of
- our old entrypoints as wrappers for the current one.
- """
- sys.stderr.write(
- "WARNING: pip is being invoked by an old script wrapper. This will "
- "fail in a future version of pip.\n"
- "Please see https://github.com/pypa/pip/issues/5599 for advice on "
- "fixing the underlying issue.\n"
- "To avoid this problem you can invoke Python with '-m pip' instead of "
- "running pip directly.\n"
- )
- return main(args)
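For context, a sketch of the kind of legacy console-script shim this wrapper exists to back. The shim layout follows the generic setuptools-generated pattern and is illustrative only; the exact entry-point names pip registers are not shown in this file.

```python
#!/usr/bin/env python
# Hypothetical legacy "pip" script: routes through _wrapper(), which prints the
# deprecation warning above and then delegates to pip's real main().
import re
import sys

from pip._internal.utils.entrypoints import _wrapper

if __name__ == "__main__":
    # Strip the "-script.py"/".exe" suffix that Windows script wrappers add.
    sys.argv[0] = re.sub(r"(-script\.pyw?|\.exe)?$", "", sys.argv[0])
    sys.exit(_wrapper())
```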
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/platformdirs/android.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/platformdirs/android.py
deleted file mode 100644
index a68405871f2991c4cfa6a36c1f6bc87e8b258299..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/platformdirs/android.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from __future__ import annotations
-
-import os
-import re
-import sys
-from functools import lru_cache
-
-from .api import PlatformDirsABC
-
-
-class Android(PlatformDirsABC):
- """
- Follows the guidance `from here `_. Makes use of the
- `appname ` and
- `version `.
- """
-
- @property
- def user_data_dir(self) -> str:
-        """:return: data directory tied to the user, e.g. ``/data/user/<userid>/<packagename>/files/<AppName>``"""
- return self._append_app_name_and_version(_android_folder(), "files")
-
- @property
- def site_data_dir(self) -> str:
- """:return: data directory shared by users, same as `user_data_dir`"""
- return self.user_data_dir
-
- @property
- def user_config_dir(self) -> str:
- """
-        :return: config directory tied to the user, e.g. ``/data/user/<userid>/<packagename>/shared_prefs/<AppName>``
- """
- return self._append_app_name_and_version(_android_folder(), "shared_prefs")
-
- @property
- def site_config_dir(self) -> str:
- """:return: config directory shared by the users, same as `user_config_dir`"""
- return self.user_config_dir
-
- @property
- def user_cache_dir(self) -> str:
-        """:return: cache directory tied to the user, e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>``"""
- return self._append_app_name_and_version(_android_folder(), "cache")
-
- @property
- def user_state_dir(self) -> str:
- """:return: state directory tied to the user, same as `user_data_dir`"""
- return self.user_data_dir
-
- @property
- def user_log_dir(self) -> str:
- """
- :return: log directory tied to the user, same as `user_cache_dir` if not opinionated else ``log`` in it,
-          e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>/log``
- """
- path = self.user_cache_dir
- if self.opinion:
- path = os.path.join(path, "log")
- return path
-
- @property
- def user_documents_dir(self) -> str:
- """
- :return: documents directory tied to the user e.g. ``/storage/emulated/0/Documents``
- """
- return _android_documents_folder()
-
- @property
- def user_runtime_dir(self) -> str:
- """
- :return: runtime directory tied to the user, same as `user_cache_dir` if not opinionated else ``tmp`` in it,
-          e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>/tmp``
- """
- path = self.user_cache_dir
- if self.opinion:
- path = os.path.join(path, "tmp")
- return path
-
-
-@lru_cache(maxsize=1)
-def _android_folder() -> str:
- """:return: base folder for the Android OS"""
- try:
- # First try to get path to android app via pyjnius
- from jnius import autoclass
-
- Context = autoclass("android.content.Context") # noqa: N806
- result: str = Context.getFilesDir().getParentFile().getAbsolutePath()
- except Exception:
- # if fails find an android folder looking path on the sys.path
- pattern = re.compile(r"/data/(data|user/\d+)/(.+)/files")
- for path in sys.path:
- if pattern.match(path):
- result = path.split("/files")[0]
- break
- else:
- raise OSError("Cannot find path to android app folder")
- return result
-
-
-@lru_cache(maxsize=1)
-def _android_documents_folder() -> str:
- """:return: documents folder for the Android OS"""
- # Get directories with pyjnius
- try:
- from jnius import autoclass
-
- Context = autoclass("android.content.Context") # noqa: N806
- Environment = autoclass("android.os.Environment") # noqa: N806
- documents_dir: str = Context.getExternalFilesDir(Environment.DIRECTORY_DOCUMENTS).getAbsolutePath()
- except Exception:
- documents_dir = "/storage/emulated/0/Documents"
-
- return documents_dir
-
-
-__all__ = [
- "Android",
-]
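A minimal usage sketch, assuming the interpreter is actually running on Android (otherwise `_android_folder()` raises `OSError` as soon as a directory property is accessed); the app name and version are placeholders, not values used anywhere in this repository.

```python
# Illustrative only: resolving per-app directories on an Android build of Python.
from pip._vendor.platformdirs.android import Android

dirs = Android(appname="DemoApp", version="1.0")   # placeholder app name/version
print(dirs.user_data_dir)    # .../files/DemoApp/1.0
print(dirs.user_cache_dir)   # .../cache/DemoApp/1.0
print(dirs.user_log_dir)     # .../cache/DemoApp/1.0/log (opinionated default)
```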
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/modeline.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/modeline.py
deleted file mode 100644
index 047d86d6be6a25ae26fba26d959bf2a1609f7dc8..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/modeline.py
+++ /dev/null
@@ -1,43 +0,0 @@
-"""
- pygments.modeline
- ~~~~~~~~~~~~~~~~~
-
- A simple modeline parser (based on pymodeline).
-
- :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import re
-
-__all__ = ['get_filetype_from_buffer']
-
-
-modeline_re = re.compile(r'''
- (?: vi | vim | ex ) (?: [<=>]? \d* )? :
- .* (?: ft | filetype | syn | syntax ) = ( [^:\s]+ )
-''', re.VERBOSE)
-
-
-def get_filetype_from_line(l):
- m = modeline_re.search(l)
- if m:
- return m.group(1)
-
-
-def get_filetype_from_buffer(buf, max_lines=5):
- """
- Scan the buffer for modelines and return filetype if one is found.
- """
- lines = buf.splitlines()
- for l in lines[-1:-max_lines-1:-1]:
- ret = get_filetype_from_line(l)
- if ret:
- return ret
- for i in range(max_lines, -1, -1):
- if i < len(lines):
- ret = get_filetype_from_line(lines[i])
- if ret:
- return ret
-
- return None
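A small usage sketch, not part of the original module, showing the kind of Vim modeline the regex above is built to pick up:

```python
# Illustrative usage: a trailing Vim modeline yields the declared filetype.
from pip._vendor.pygments.modeline import get_filetype_from_buffer

buf = "#!/usr/bin/env python\nprint('hello')\n# vim: set ft=python :\n"
print(get_filetype_from_buffer(buf))  # -> "python"
```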
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/py36compat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/py36compat.py
deleted file mode 100644
index 343547a4d316e48144ba6bdf342dcc24cd6cb6cd..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/py36compat.py
+++ /dev/null
@@ -1,134 +0,0 @@
-import os
-from glob import glob
-from distutils.util import convert_path
-from distutils.command import sdist
-
-
-class sdist_add_defaults:
- """
- Mix-in providing forward-compatibility for functionality as found in
- distutils on Python 3.7.
-
- Do not edit the code in this class except to update functionality
- as implemented in distutils. Instead, override in the subclass.
- """
-
- def add_defaults(self):
- """Add all the default files to self.filelist:
- - README or README.txt
- - setup.py
- - test/test*.py
- - all pure Python modules mentioned in setup script
- - all files pointed by package_data (build_py)
- - all files defined in data_files.
- - all files defined as scripts.
- - all C sources listed as part of extensions or C libraries
- in the setup script (doesn't catch C headers!)
- Warns if (README or README.txt) or setup.py are missing; everything
- else is optional.
- """
- self._add_defaults_standards()
- self._add_defaults_optional()
- self._add_defaults_python()
- self._add_defaults_data_files()
- self._add_defaults_ext()
- self._add_defaults_c_libs()
- self._add_defaults_scripts()
-
- @staticmethod
- def _cs_path_exists(fspath):
- """
- Case-sensitive path existence check
-
- >>> sdist_add_defaults._cs_path_exists(__file__)
- True
- >>> sdist_add_defaults._cs_path_exists(__file__.upper())
- False
- """
- if not os.path.exists(fspath):
- return False
- # make absolute so we always have a directory
- abspath = os.path.abspath(fspath)
- directory, filename = os.path.split(abspath)
- return filename in os.listdir(directory)
-
- def _add_defaults_standards(self):
- standards = [self.READMES, self.distribution.script_name]
- for fn in standards:
- if isinstance(fn, tuple):
- alts = fn
- got_it = False
- for fn in alts:
- if self._cs_path_exists(fn):
- got_it = True
- self.filelist.append(fn)
- break
-
- if not got_it:
- self.warn("standard file not found: should have one of " +
- ', '.join(alts))
- else:
- if self._cs_path_exists(fn):
- self.filelist.append(fn)
- else:
- self.warn("standard file '%s' not found" % fn)
-
- def _add_defaults_optional(self):
- optional = ['test/test*.py', 'setup.cfg']
- for pattern in optional:
- files = filter(os.path.isfile, glob(pattern))
- self.filelist.extend(files)
-
- def _add_defaults_python(self):
- # build_py is used to get:
- # - python modules
- # - files defined in package_data
- build_py = self.get_finalized_command('build_py')
-
- # getting python files
- if self.distribution.has_pure_modules():
- self.filelist.extend(build_py.get_source_files())
-
- # getting package_data files
- # (computed in build_py.data_files by build_py.finalize_options)
- for pkg, src_dir, build_dir, filenames in build_py.data_files:
- for filename in filenames:
- self.filelist.append(os.path.join(src_dir, filename))
-
- def _add_defaults_data_files(self):
- # getting distribution.data_files
- if self.distribution.has_data_files():
- for item in self.distribution.data_files:
- if isinstance(item, str):
- # plain file
- item = convert_path(item)
- if os.path.isfile(item):
- self.filelist.append(item)
- else:
- # a (dirname, filenames) tuple
- dirname, filenames = item
- for f in filenames:
- f = convert_path(f)
- if os.path.isfile(f):
- self.filelist.append(f)
-
- def _add_defaults_ext(self):
- if self.distribution.has_ext_modules():
- build_ext = self.get_finalized_command('build_ext')
- self.filelist.extend(build_ext.get_source_files())
-
- def _add_defaults_c_libs(self):
- if self.distribution.has_c_libraries():
- build_clib = self.get_finalized_command('build_clib')
- self.filelist.extend(build_clib.get_source_files())
-
- def _add_defaults_scripts(self):
- if self.distribution.has_scripts():
- build_scripts = self.get_finalized_command('build_scripts')
- self.filelist.extend(build_scripts.get_source_files())
-
-
-if hasattr(sdist.sdist, '_add_defaults_standards'):
- # disable the functionality already available upstream
- class sdist_add_defaults: # noqa
- pass
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py
deleted file mode 100644
index 0c0efea0a126abbe06a278fb2ac6290975a36f9b..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py
+++ /dev/null
@@ -1,550 +0,0 @@
-import asyncio
-import http
-import logging
-from typing import (
- Any,
- Callable,
- Dict,
- List,
- Literal,
- Optional,
- Tuple,
- cast,
-)
-from urllib.parse import unquote
-
-import h11
-from h11._connection import DEFAULT_MAX_INCOMPLETE_EVENT_SIZE
-
-from uvicorn._types import (
- ASGI3Application,
- ASGIReceiveEvent,
- ASGISendEvent,
- HTTPRequestEvent,
- HTTPResponseBodyEvent,
- HTTPResponseStartEvent,
- HTTPScope,
-)
-from uvicorn.config import Config
-from uvicorn.logging import TRACE_LOG_LEVEL
-from uvicorn.protocols.http.flow_control import (
- CLOSE_HEADER,
- HIGH_WATER_LIMIT,
- FlowControl,
- service_unavailable,
-)
-from uvicorn.protocols.utils import (
- get_client_addr,
- get_local_addr,
- get_path_with_query_string,
- get_remote_addr,
- is_ssl,
-)
-from uvicorn.server import ServerState
-
-
-def _get_status_phrase(status_code: int) -> bytes:
- try:
- return http.HTTPStatus(status_code).phrase.encode()
- except ValueError:
- return b""
-
-
-STATUS_PHRASES = {
- status_code: _get_status_phrase(status_code) for status_code in range(100, 600)
-}
-
-
-class H11Protocol(asyncio.Protocol):
- def __init__(
- self,
- config: Config,
- server_state: ServerState,
- app_state: Dict[str, Any],
- _loop: Optional[asyncio.AbstractEventLoop] = None,
- ) -> None:
- if not config.loaded:
- config.load()
-
- self.config = config
- self.app = config.loaded_app
- self.loop = _loop or asyncio.get_event_loop()
- self.logger = logging.getLogger("uvicorn.error")
- self.access_logger = logging.getLogger("uvicorn.access")
- self.access_log = self.access_logger.hasHandlers()
- self.conn = h11.Connection(
- h11.SERVER,
- config.h11_max_incomplete_event_size
- if config.h11_max_incomplete_event_size is not None
- else DEFAULT_MAX_INCOMPLETE_EVENT_SIZE,
- )
- self.ws_protocol_class = config.ws_protocol_class
- self.root_path = config.root_path
- self.limit_concurrency = config.limit_concurrency
- self.app_state = app_state
-
- # Timeouts
- self.timeout_keep_alive_task: Optional[asyncio.TimerHandle] = None
- self.timeout_keep_alive = config.timeout_keep_alive
-
- # Shared server state
- self.server_state = server_state
- self.connections = server_state.connections
- self.tasks = server_state.tasks
-
- # Per-connection state
- self.transport: asyncio.Transport = None # type: ignore[assignment]
- self.flow: FlowControl = None # type: ignore[assignment]
- self.server: Optional[Tuple[str, int]] = None
- self.client: Optional[Tuple[str, int]] = None
- self.scheme: Optional[Literal["http", "https"]] = None
-
- # Per-request state
- self.scope: HTTPScope = None # type: ignore[assignment]
- self.headers: List[Tuple[bytes, bytes]] = None # type: ignore[assignment]
- self.cycle: RequestResponseCycle = None # type: ignore[assignment]
-
- # Protocol interface
- def connection_made( # type: ignore[override]
- self, transport: asyncio.Transport
- ) -> None:
- self.connections.add(self)
-
- self.transport = transport
- self.flow = FlowControl(transport)
- self.server = get_local_addr(transport)
- self.client = get_remote_addr(transport)
- self.scheme = "https" if is_ssl(transport) else "http"
-
- if self.logger.level <= TRACE_LOG_LEVEL:
- prefix = "%s:%d - " % self.client if self.client else ""
- self.logger.log(TRACE_LOG_LEVEL, "%sHTTP connection made", prefix)
-
- def connection_lost(self, exc: Optional[Exception]) -> None:
- self.connections.discard(self)
-
- if self.logger.level <= TRACE_LOG_LEVEL:
- prefix = "%s:%d - " % self.client if self.client else ""
- self.logger.log(TRACE_LOG_LEVEL, "%sHTTP connection lost", prefix)
-
- if self.cycle and not self.cycle.response_complete:
- self.cycle.disconnected = True
- if self.conn.our_state != h11.ERROR:
- event = h11.ConnectionClosed()
- try:
- self.conn.send(event)
- except h11.LocalProtocolError:
- # Premature client disconnect
- pass
-
- if self.cycle is not None:
- self.cycle.message_event.set()
- if self.flow is not None:
- self.flow.resume_writing()
- if exc is None:
- self.transport.close()
- self._unset_keepalive_if_required()
-
- def eof_received(self) -> None:
- pass
-
- def _unset_keepalive_if_required(self) -> None:
- if self.timeout_keep_alive_task is not None:
- self.timeout_keep_alive_task.cancel()
- self.timeout_keep_alive_task = None
-
- def _get_upgrade(self) -> Optional[bytes]:
- connection = []
- upgrade = None
- for name, value in self.headers:
- if name == b"connection":
- connection = [token.lower().strip() for token in value.split(b",")]
- if name == b"upgrade":
- upgrade = value.lower()
- if b"upgrade" in connection:
- return upgrade
- return None
-
- def _should_upgrade_to_ws(self) -> bool:
- if self.ws_protocol_class is None:
- if self.config.ws == "auto":
- msg = "Unsupported upgrade request."
- self.logger.warning(msg)
- msg = "No supported WebSocket library detected. Please use \"pip install 'uvicorn[standard]'\", or install 'websockets' or 'wsproto' manually." # noqa: E501
- self.logger.warning(msg)
- return False
- return True
-
- def data_received(self, data: bytes) -> None:
- self._unset_keepalive_if_required()
-
- self.conn.receive_data(data)
- self.handle_events()
-
- def handle_events(self) -> None:
- while True:
- try:
- event = self.conn.next_event()
- except h11.RemoteProtocolError:
- msg = "Invalid HTTP request received."
- self.logger.warning(msg)
- self.send_400_response(msg)
- return
-
- if event is h11.NEED_DATA:
- break
-
- elif event is h11.PAUSED:
- # This case can occur in HTTP pipelining, so we need to
- # stop reading any more data, and ensure that at the end
- # of the active request/response cycle we handle any
- # events that have been buffered up.
- self.flow.pause_reading()
- break
-
- elif isinstance(event, h11.Request):
- self.headers = [(key.lower(), value) for key, value in event.headers]
- raw_path, _, query_string = event.target.partition(b"?")
- self.scope = {
- "type": "http",
- "asgi": {
- "version": self.config.asgi_version,
- "spec_version": "2.3",
- },
- "http_version": event.http_version.decode("ascii"),
- "server": self.server,
- "client": self.client,
- "scheme": self.scheme, # type: ignore[typeddict-item]
- "method": event.method.decode("ascii"),
- "root_path": self.root_path,
- "path": unquote(raw_path.decode("ascii")),
- "raw_path": raw_path,
- "query_string": query_string,
- "headers": self.headers,
- "state": self.app_state.copy(),
- }
-
- upgrade = self._get_upgrade()
- if upgrade == b"websocket" and self._should_upgrade_to_ws():
- self.handle_websocket_upgrade(event)
- return
-
- # Handle 503 responses when 'limit_concurrency' is exceeded.
- if self.limit_concurrency is not None and (
- len(self.connections) >= self.limit_concurrency
- or len(self.tasks) >= self.limit_concurrency
- ):
- app = service_unavailable
- message = "Exceeded concurrency limit."
- self.logger.warning(message)
- else:
- app = self.app
-
- self.cycle = RequestResponseCycle(
- scope=self.scope,
- conn=self.conn,
- transport=self.transport,
- flow=self.flow,
- logger=self.logger,
- access_logger=self.access_logger,
- access_log=self.access_log,
- default_headers=self.server_state.default_headers,
- message_event=asyncio.Event(),
- on_response=self.on_response_complete,
- )
- task = self.loop.create_task(self.cycle.run_asgi(app))
- task.add_done_callback(self.tasks.discard)
- self.tasks.add(task)
-
- elif isinstance(event, h11.Data):
- if self.conn.our_state is h11.DONE:
- continue
- self.cycle.body += event.data
- if len(self.cycle.body) > HIGH_WATER_LIMIT:
- self.flow.pause_reading()
- self.cycle.message_event.set()
-
- elif isinstance(event, h11.EndOfMessage):
- if self.conn.our_state is h11.DONE:
- self.transport.resume_reading()
- self.conn.start_next_cycle()
- continue
- self.cycle.more_body = False
- self.cycle.message_event.set()
-
- def handle_websocket_upgrade(self, event: h11.Request) -> None:
- if self.logger.level <= TRACE_LOG_LEVEL:
- prefix = "%s:%d - " % self.client if self.client else ""
- self.logger.log(TRACE_LOG_LEVEL, "%sUpgrading to WebSocket", prefix)
-
- self.connections.discard(self)
- output = [event.method, b" ", event.target, b" HTTP/1.1\r\n"]
- for name, value in self.headers:
- output += [name, b": ", value, b"\r\n"]
- output.append(b"\r\n")
- protocol = self.ws_protocol_class( # type: ignore[call-arg, misc]
- config=self.config,
- server_state=self.server_state,
- app_state=self.app_state,
- )
- protocol.connection_made(self.transport)
- protocol.data_received(b"".join(output))
- self.transport.set_protocol(protocol)
-
- def send_400_response(self, msg: str) -> None:
- reason = STATUS_PHRASES[400]
- headers: List[Tuple[bytes, bytes]] = [
- (b"content-type", b"text/plain; charset=utf-8"),
- (b"connection", b"close"),
- ]
- event = h11.Response(status_code=400, headers=headers, reason=reason)
- output = self.conn.send(event)
- self.transport.write(output)
-
- output = self.conn.send(event=h11.Data(data=msg.encode("ascii")))
- self.transport.write(output)
-
- output = self.conn.send(event=h11.EndOfMessage())
- self.transport.write(output)
-
- self.transport.close()
-
- def on_response_complete(self) -> None:
- self.server_state.total_requests += 1
-
- if self.transport.is_closing():
- return
-
- # Set a short Keep-Alive timeout.
- self._unset_keepalive_if_required()
-
- self.timeout_keep_alive_task = self.loop.call_later(
- self.timeout_keep_alive, self.timeout_keep_alive_handler
- )
-
- # Unpause data reads if needed.
- self.flow.resume_reading()
-
- # Unblock any pipelined events.
- if self.conn.our_state is h11.DONE and self.conn.their_state is h11.DONE:
- self.conn.start_next_cycle()
- self.handle_events()
-
- def shutdown(self) -> None:
- """
- Called by the server to commence a graceful shutdown.
- """
- if self.cycle is None or self.cycle.response_complete:
- event = h11.ConnectionClosed()
- self.conn.send(event)
- self.transport.close()
- else:
- self.cycle.keep_alive = False
-
- def pause_writing(self) -> None:
- """
- Called by the transport when the write buffer exceeds the high water mark.
- """
- self.flow.pause_writing()
-
- def resume_writing(self) -> None:
- """
- Called by the transport when the write buffer drops below the low water mark.
- """
- self.flow.resume_writing()
-
- def timeout_keep_alive_handler(self) -> None:
- """
- Called on a keep-alive connection if no new data is received after a short
- delay.
- """
- if not self.transport.is_closing():
- event = h11.ConnectionClosed()
- self.conn.send(event)
- self.transport.close()
-
-
-class RequestResponseCycle:
- def __init__(
- self,
- scope: "HTTPScope",
- conn: h11.Connection,
- transport: asyncio.Transport,
- flow: FlowControl,
- logger: logging.Logger,
- access_logger: logging.Logger,
- access_log: bool,
- default_headers: List[Tuple[bytes, bytes]],
- message_event: asyncio.Event,
- on_response: Callable[..., None],
- ) -> None:
- self.scope = scope
- self.conn = conn
- self.transport = transport
- self.flow = flow
- self.logger = logger
- self.access_logger = access_logger
- self.access_log = access_log
- self.default_headers = default_headers
- self.message_event = message_event
- self.on_response = on_response
-
- # Connection state
- self.disconnected = False
- self.keep_alive = True
- self.waiting_for_100_continue = conn.they_are_waiting_for_100_continue
-
- # Request state
- self.body = b""
- self.more_body = True
-
- # Response state
- self.response_started = False
- self.response_complete = False
-
- # ASGI exception wrapper
- async def run_asgi(self, app: "ASGI3Application") -> None:
- try:
- result = await app( # type: ignore[func-returns-value]
- self.scope, self.receive, self.send
- )
- except BaseException as exc:
- msg = "Exception in ASGI application\n"
- self.logger.error(msg, exc_info=exc)
- if not self.response_started:
- await self.send_500_response()
- else:
- self.transport.close()
- else:
- if result is not None:
- msg = "ASGI callable should return None, but returned '%s'."
- self.logger.error(msg, result)
- self.transport.close()
- elif not self.response_started and not self.disconnected:
- msg = "ASGI callable returned without starting response."
- self.logger.error(msg)
- await self.send_500_response()
- elif not self.response_complete and not self.disconnected:
- msg = "ASGI callable returned without completing response."
- self.logger.error(msg)
- self.transport.close()
- finally:
- self.on_response = lambda: None
-
- async def send_500_response(self) -> None:
- response_start_event: "HTTPResponseStartEvent" = {
- "type": "http.response.start",
- "status": 500,
- "headers": [
- (b"content-type", b"text/plain; charset=utf-8"),
- (b"connection", b"close"),
- ],
- }
- await self.send(response_start_event)
- response_body_event: "HTTPResponseBodyEvent" = {
- "type": "http.response.body",
- "body": b"Internal Server Error",
- "more_body": False,
- }
- await self.send(response_body_event)
-
- # ASGI interface
- async def send(self, message: "ASGISendEvent") -> None:
- message_type = message["type"]
-
- if self.flow.write_paused and not self.disconnected:
- await self.flow.drain()
-
- if self.disconnected:
- return
-
- if not self.response_started:
- # Sending response status line and headers
- if message_type != "http.response.start":
- msg = "Expected ASGI message 'http.response.start', but got '%s'."
- raise RuntimeError(msg % message_type)
- message = cast("HTTPResponseStartEvent", message)
-
- self.response_started = True
- self.waiting_for_100_continue = False
-
- status = message["status"]
- headers = self.default_headers + list(message.get("headers", []))
-
- if CLOSE_HEADER in self.scope["headers"] and CLOSE_HEADER not in headers:
- headers = headers + [CLOSE_HEADER]
-
- if self.access_log:
- self.access_logger.info(
- '%s - "%s %s HTTP/%s" %d',
- get_client_addr(self.scope),
- self.scope["method"],
- get_path_with_query_string(self.scope),
- self.scope["http_version"],
- status,
- )
-
- # Write response status line and headers
- reason = STATUS_PHRASES[status]
- response = h11.Response(status_code=status, headers=headers, reason=reason)
- output = self.conn.send(event=response)
- self.transport.write(output)
-
- elif not self.response_complete:
- # Sending response body
- if message_type != "http.response.body":
- msg = "Expected ASGI message 'http.response.body', but got '%s'."
- raise RuntimeError(msg % message_type)
- message = cast("HTTPResponseBodyEvent", message)
-
- body = message.get("body", b"")
- more_body = message.get("more_body", False)
-
- # Write response body
- data = b"" if self.scope["method"] == "HEAD" else body
- output = self.conn.send(event=h11.Data(data=data))
- self.transport.write(output)
-
- # Handle response completion
- if not more_body:
- self.response_complete = True
- self.message_event.set()
- output = self.conn.send(event=h11.EndOfMessage())
- self.transport.write(output)
-
- else:
- # Response already sent
- msg = "Unexpected ASGI message '%s' sent, after response already completed."
- raise RuntimeError(msg % message_type)
-
- if self.response_complete:
- if self.conn.our_state is h11.MUST_CLOSE or not self.keep_alive:
- self.conn.send(event=h11.ConnectionClosed())
- self.transport.close()
- self.on_response()
-
- async def receive(self) -> "ASGIReceiveEvent":
- if self.waiting_for_100_continue and not self.transport.is_closing():
- headers: List[Tuple[str, str]] = []
- event = h11.InformationalResponse(
- status_code=100, headers=headers, reason="Continue"
- )
- output = self.conn.send(event=event)
- self.transport.write(output)
- self.waiting_for_100_continue = False
-
- if not self.disconnected and not self.response_complete:
- self.flow.resume_reading()
- await self.message_event.wait()
- self.message_event.clear()
-
- if self.disconnected or self.response_complete:
- return {"type": "http.disconnect"}
-
- message: "HTTPRequestEvent" = {
- "type": "http.request",
- "body": self.body,
- "more_body": self.more_body,
- }
- self.body = b""
- return message
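For orientation, a minimal ASGI application of the shape `RequestResponseCycle.run_asgi` drives: it drains `http.request` events from `receive` and answers through `send` with `http.response.start` followed by `http.response.body`. This is an illustrative counterpart, not part of uvicorn itself.

```python
# Minimal ASGI app sketch: read the request body, reply with 200 text/plain.
async def app(scope, receive, send):
    assert scope["type"] == "http"

    body = b""
    more_body = True
    while more_body:                                  # drain http.request events
        message = await receive()
        body += message.get("body", b"")
        more_body = message.get("more_body", False)

    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"received %d bytes" % len(body),
        "more_body": False,
    })
```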
diff --git a/spaces/pseudolab/huggingface-korea-theme/README.md b/spaces/pseudolab/huggingface-korea-theme/README.md
deleted file mode 100644
index 7da80b0a242c9c7fa327ebe8a715ead9c23edee4..0000000000000000000000000000000000000000
--- a/spaces/pseudolab/huggingface-korea-theme/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
----
-tags: [gradio-theme]
-title: huggingface-korea-theme
-colorFrom: yellow
-colorTo: orange
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-# HuggingFace Korea Theme
-## Description
-This is a theme for the KREW Hackathon 2023, based on Monochrome and inspired by the HuggingFace Korea logo.
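Presumably a Gradio app can pull this theme straight from the Hub by its Space id, as Gradio supports for shared themes. The snippet below is a sketch under that assumption, with the id taken from this Space's path.

```python
import gradio as gr

# Sketch: apply the Hub-hosted theme by name (id assumed from this Space's path).
with gr.Blocks(theme="pseudolab/huggingface-korea-theme") as demo:
    gr.Markdown("Hello from the HuggingFace Korea theme!")

demo.launch()
```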
diff --git a/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/google.py b/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/google.py
deleted file mode 100644
index 2396eae74cc57a9c8a64d1aae25d980f728e7491..0000000000000000000000000000000000000000
--- a/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/google.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from google.cloud import speech
-import types
-
-from realtime_ai_character.audio.speech_to_text.base import SpeechToText
-from realtime_ai_character.logger import get_logger
-from realtime_ai_character.utils import Singleton
-
-logger = get_logger(__name__)
-config = types.SimpleNamespace(**{
- 'web': {
- 'encoding': speech.RecognitionConfig.AudioEncoding.WEBM_OPUS,
- 'sample_rate_hertz': 48000,
- 'language_code': 'en-US',
- 'max_alternatives': 1,
- },
- 'terminal': {
- 'encoding': speech.RecognitionConfig.AudioEncoding.LINEAR16,
- 'sample_rate_hertz': 44100,
- 'language_code': 'en-US',
- 'max_alternatives': 1,
- },
-})
-
-
-class Google(Singleton, SpeechToText):
- def __init__(self):
- super().__init__()
- logger.info("Setting up [Google Speech to Text]...")
- self.client = speech.SpeechClient()
-
- def transcribe(self, audio_bytes, platform, prompt='') -> str:
- batch_config = speech.RecognitionConfig({
- 'speech_contexts': [speech.SpeechContext(phrases=prompt.split(','))],
- **config.__dict__[platform]})
- response = self.client.recognize(
- config=batch_config,
- audio=speech.RecognitionAudio(content=audio_bytes)
- )
- if not response.results:
- return ''
- result = response.results[0]
- if not result.alternatives:
- return ''
- return result.alternatives[0].transcript
diff --git a/spaces/qblocks/Monster-SD/README.md b/spaces/qblocks/Monster-SD/README.md
deleted file mode 100644
index f76bff5778bf101e02950752a01be09951507f18..0000000000000000000000000000000000000000
--- a/spaces/qblocks/Monster-SD/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Monster SD
-emoji: 🌍
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/qkorbit/AltDiffusion/style.css b/spaces/qkorbit/AltDiffusion/style.css
deleted file mode 100644
index d954ce678fed7d0f33bdc6af6764b73e06d6e78a..0000000000000000000000000000000000000000
--- a/spaces/qkorbit/AltDiffusion/style.css
+++ /dev/null
@@ -1,81 +0,0 @@
-.gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
-}
-.gr-button {
- color: white;
- /* border-color: black; */
- /* background: black; */
- background: rgb(60, 145, 238);
-}
-/* input[type='range'] {
- accent-color: rgb(60, 145, 238);
-}
-.dark input[type='range'] {
- accent-color: #dfdfdf;
-} */
-.container {
- max-width: 900px;
- margin: auto;
- padding-top: 1.5rem;
-}
-#gallery {
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
-}
-#gallery>div>.h-full {
- min-height: 20rem;
-}
-.details:hover {
- text-decoration: underline;
-}
-.gr-button {
- white-space: nowrap;
-}
-/* .gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
-} */
-.footer {
- margin-bottom: 45px;
- margin-top: 20px;
- /* text-align: center; */
- border-bottom: 1px solid #e5e5e5;
-}
-.footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
-}
-.footer>p>h4 {
- font-size: .20rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
- font-weight: bold;
-}
-.dark .footer {
- /* border-color: #303030; */
- border-color: rgb(60, 145, 238);
-}
-.dark .footer>p {
- /* background: #0b0f19; */
- background: rgb(60, 145, 238);
-}
-.prompt h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
-}
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Hasphl 2010 Error Code 1068.md b/spaces/quidiaMuxgu/Expedit-SAM/Hasphl 2010 Error Code 1068.md
deleted file mode 100644
index c6ab0a52d7a836405880edf00ec0cc7c3583c7c4..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Hasphl 2010 Error Code 1068.md
+++ /dev/null
@@ -1,6 +0,0 @@
-hasphl 2010 error code 1068
-Download Zip ☆ https://geags.com/2uCs56
-
-chromecrashes.com/tag/windows-10-error-1068. Hasphl 2010 Error Code 1068. Crack Topo France V3 Pro. Hasphl 2010 Error ...
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/MAME 0 37b11 Full Romset GP2X Wiz MAME 2 0.md b/spaces/quidiaMuxgu/Expedit-SAM/MAME 0 37b11 Full Romset GP2X Wiz MAME 2 0.md
deleted file mode 100644
index ff3f5a577e97ebc0d0ae216f36839f523b56bd0b..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/MAME 0 37b11 Full Romset GP2X Wiz MAME 2 0.md
+++ /dev/null
@@ -1,42 +0,0 @@
-
-- MAME 0.37b11 Full Romset GP2X Wiz MAME 2.0: The Ultimate Retro Gaming Experience
-- How to Emulate Classic Arcade Games with MAME 0.37b11 Full Romset GP2X Wiz MAME 2.0
-- Enjoy Thousands of Games with MAME 0.37b11 Full Romset GP2X Wiz MAME 2.0
-- MAME 0.37b11 Full Romset GP2X Wiz MAME 2.0: A Complete Guide for Beginners
-- The Benefits of Using MAME 0.37b11 Full Romset GP2X Wiz MAME 2.0 for Your Gaming Needs
-MAME 0 37b11 Full romset GP2X Wiz MAME 2 0
-Download → https://geags.com/2uCqmr
-MAME 0 37b11 Full romset GP2X Wiz MAME 2 0: The Ultimate Retro Gaming Experience
-If you are a fan of classic arcade games, you might have heard of MAME, the Multiple Arcade Machine Emulator. MAME is a program that allows you to play thousands of games from different arcade systems on your computer or mobile device. But not all versions of MAME are the same: some are more compatible, some are better optimized, and some are more suitable for certain devices.
-One of the best versions of MAME for handheld devices is MAME 0 37b11 Full romset GP2X Wiz MAME 2 0. This version of MAME is specifically designed for the GP2X Wiz, a portable gaming console that runs on Linux. The GP2X Wiz has a powerful processor, a high-resolution screen, and a built-in joystick and buttons that make it perfect for playing arcade games.
-MAME 0 37b11 Full romset GP2X Wiz MAME 2 0 is a complete collection of ROMs that are compatible with this version of MAME. It includes over 2000 games from various arcade systems, such as Atari, Capcom, Konami, Sega, SNK, and more. You can play classics like Pac-Man, Donkey Kong, Street Fighter, Metal Slug, and many others.
-In this article, we will show you how to download and install MAME 0 37b11 Full romset GP2X Wiz MAME 2 0 on your GP2X Wiz device. We will also give you some tips and tricks on how to optimize your gaming experience and enjoy the best of retro gaming.
-How to Download and Install MAME 0 37b11 Full romset GP2X Wiz MAME 2 0
-The first step to play MAME games on your GP2X Wiz is to download the MAME 0 37b11 Full romset GP2X Wiz MAME 2 0 file. This file contains all the ROMs that you need to run the games on your device. You can download it from various sources on the internet, such as The Old Computer website or Archive.org.
-
-The file size is about 2 GB, so make sure you have enough space on your computer and your GP2X Wiz device. You will also need a program that can extract compressed files, such as WinRAR or 7-Zip.
-Once you have downloaded the file, follow these steps to install it on your GP2X Wiz (a scripted equivalent of the copy steps is sketched just after the list):
-
-- Connect your GP2X Wiz to your computer using a USB cable.
-- Open the file manager on your computer and locate the MAME 0 37b11 Full romset GP2X Wiz MAME 2 0 file.
-- Right-click on the file and select "Extract here" or "Extract to" depending on your program.
-- A new folder named "Mame-0.37b11" will be created with all the ROMs inside.
-- Copy this folder to the root directory of your GP2X Wiz device.
-- Eject your GP2X Wiz from your computer and turn it on.
-- Navigate to the "Mame-0.37b11" folder on your device and select "Mame4all.gpe" to launch the emulator.
-- You will see a list of all the available games. Use the joystick and buttons to select and play the game of your choice.
-
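-If you prefer to script the copy step from your computer, the file operations above boil down to a few lines of Python. This is only an illustrative sketch: the archive name and the mount point of the GP2X Wiz shown here are example values and will differ on your system.
-
-import shutil
-import zipfile
-from pathlib import Path
-
-romset_zip = Path("MAME 0.37b11 Full Romset.zip")  # the downloaded archive (example name)
-wiz_root = Path("/media/GP2XWIZ")                  # example mount point of the GP2X Wiz
-
-# Extract the archive; this creates the Mame-0.37b11 folder with all the ROMs.
-with zipfile.ZipFile(romset_zip) as archive:
-    archive.extractall(".")
-
-# Copy the folder to the root of the device, then launch Mame4all.gpe on the Wiz.
-shutil.copytree("Mame-0.37b11", wiz_root / "Mame-0.37b11")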
-How to Optimize Your Gaming Experience with MAME 0 37b11 Full romset GP2X Wiz MAME 2 0
-MAME 0 37b11 Full romset GP2X Wiz MAME 2 0 is a great way to enjoy retro gaming on your handheld device. However, there are some things that you can do to improve your gaming experience and avoid any issues or errors. Here are some tips and tricks that you can try:
-
-- Make sure your GP2X Wiz device has enough battery power before playing. Some games may consume more power than others, so it is advisable to charge your device fully before playing or use an external power source.
-- Adjust the screen brightness and volume according to your preference and environment. You can do this by pressing the "Menu" button on your device and selecting "Settings". You can also change other settings such as language, date and time, etc.
-- Use the "Save state" and "Load state" features to save and resume your game progress at any point. You can do this by pressing the "Menu" button on your device and selecting "Save state" or "Load state". You can have up to nine save slots for each game.
-- Use the "Filter" option to change the appearance of the game graphics. You can do this by pressing the "Menu" button on your device and selecting "Filter". You can choose between different filters such as scanlines, smooth, hq3x, etc.
-- Use the "Options" menu to change various settings related to the emulator and the games. You can do this by pressing the "Menu" button on your device and selecting "Options". You can change settings such as frameskip, sound frequency, CPU clock speed, etc.
-- If you encounter any problems or errors while playing a game, try changing some of the settings mentioned above or use a different version of the ROM. Some games may not work properly or at all with certain settings or ROM versions.
-
-Conclusion
-MAME 0 37b11 Full romset GP2X Wiz MAME 2 0 is one of the best versions of MAME for handheld devices. It allows you to play thousands of classic arcade games on your GP2X Wiz device with ease and convenience. All you need is to download and install the file on your device and enjoy retro gaming at its finest.
-We hope this article has helped you learn how to download and install MAME 0 37b11 Full romset GP2X Wiz MAME 2 0 on your GP2X Wiz device. We also hope you have found some useful tips and tricks on how to optimize your gaming experience with this emulator. If you have any questions or feedback, feel free to leave a comment below.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/data/loaders/augmentors/boundary.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/data/loaders/augmentors/boundary.py
deleted file mode 100644
index 8c10ec7f13c8ac12c68114ddd7b1e4b25a545689..0000000000000000000000000000000000000000
--- a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/data/loaders/augmentors/boundary.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import numpy as np
-from scipy import interpolate
-import cv2
-
-
-class AddBoundary(object):
- def __init__(self, num_landmarks=68, map_size=64, sigma=1, min_dpi=64):
- self.num_landmarks = num_landmarks
- self.sigma = sigma
-
- if isinstance(map_size, (tuple, list)):
- self.width = map_size[0]
- self.height = map_size[1]
- else:
- self.width = map_size
- self.height = map_size
-
- if max(map_size) > min_dpi:
- self.dpi = max(map_size)
- else:
- self.dpi = min_dpi
-
-        self.fig_size = [self.height/self.dpi, self.width/self.dpi]
-
- def __call__(self, sample):
- landmarks = sample['landmarks_float']
- mask_lnd = sample['mask_ldm_float']
- boundaries = self.get_dataset_boundaries(landmarks, mask_lnd)
- functions = {}
-
- for key, points in boundaries.items():
- if len(points) != 0:
- temp = points[0]
- new_points = points[0:1, :]
- for point in points[1:]:
- if point[0] == temp[0] and point[1] == temp[1]:
- continue
- else:
- new_points = np.concatenate((new_points, np.expand_dims(point, 0)), axis=0)
- temp = point
-
- points = new_points
- if points.shape[0] == 1:
- points = np.concatenate((points, points+0.001), axis=0)
- k = min(4, points.shape[0])
-                functions[key] = interpolate.splprep([points[:, 0], points[:, 1]], k=k-1, s=0)
-
- boundary_maps = np.zeros((len(boundaries), self.height, self.width))
- for i_map, key in enumerate(functions.keys()):
- boundary_map = np.zeros((self.height, self.width))
- xnew = np.arange(0, 1, 1/self.dpi)
- out = interpolate.splev(xnew, functions[key][0], der=0)
-
- out = np.round(out).astype(int).transpose()
- out = out[out[:, 0] < self.height]
- out = out[out[:, 1] < self.width]
-            boundary_map[out[:, 1], out[:, 0]] = 255
-
- # Smooth
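-            # distanceTransform on the inverted map gives every pixel its
-            # distance to the nearest boundary pixel; the exp() below turns that
-            # distance into a Gaussian-shaped heatmap truncated at 3*sigma.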
- sigma = self.sigma
- temp = 255 - boundary_map.astype(np.uint8)
- temp = cv2.distanceTransform(temp, cv2.DIST_L2, cv2.DIST_MASK_PRECISE)
- temp = temp.astype(np.float32)
-            temp = np.where(temp < 3*sigma, np.exp(-(temp*temp)/(2*sigma*sigma)), 0)
- boundary_maps[i_map] = temp
-
- sample['boundary'] = boundary_maps
- return sample
-
- def get_dataset_boundaries(self, landmarks, mask_lnd):
- boundaries = {}
- if self.num_landmarks == 68:
- cheek = landmarks[0:17]
- boundaries['cheek'] = cheek[mask_lnd[0:17] > 0]
- left_eyebrow = landmarks[17:22]
- boundaries['left_eyebrow'] = left_eyebrow[mask_lnd[17:22] > 0]
- right_eyebrow = landmarks[22:27]
- boundaries['right_eyebrow'] = right_eyebrow[mask_lnd[22:27] > 0]
- nose = landmarks[27:31]
- boundaries['nose'] = nose[mask_lnd[27:31] > 0]
- nose_bot = landmarks[31:36]
- boundaries['nose_bot'] = nose_bot[mask_lnd[31:36] > 0]
-            upper_left_eyelid = landmarks[36:40]
-            boundaries['upper_left_eyelid'] = upper_left_eyelid[mask_lnd[36:40] > 0]
- lower_left_eyelid = np.array([landmarks[i] for i in [36, 41, 40, 39]])
- lower_left_eyelid_mask = np.array([mask_lnd[i] for i in [36, 41, 40, 39]])
- boundaries['lower_left_eyelid'] = lower_left_eyelid[lower_left_eyelid_mask > 0]
- upper_right_eyelid = landmarks[42:46]
- boundaries['upper_right_eyelid'] = upper_right_eyelid[mask_lnd[42:46] > 0]
- lower_right_eyelid = np.array([landmarks[i] for i in [42, 47, 46, 45]])
- lower_right_eyelid_mask = np.array([mask_lnd[i] for i in [42, 47, 46, 45]])
- boundaries['lower_right_eyelid'] = lower_right_eyelid[lower_right_eyelid_mask > 0]
- upper_outer_lip = landmarks[48:55]
- boundaries['upper_outer_lip'] = upper_outer_lip[mask_lnd[48:55] > 0]
- lower_outer_lip = np.array([landmarks[i] for i in [48, 59, 58, 57, 56, 55, 54]])
- lower_outer_lip_mask = np.array([mask_lnd[i] for i in [48, 59, 58, 57, 56, 55, 54]])
- boundaries['lower_outer_lip'] = lower_outer_lip[lower_outer_lip_mask > 0]
- upper_inner_lip = np.array([landmarks[i] for i in [60, 61, 62, 63, 64]])
- upper_inner_lip_mask = np.array([mask_lnd[i] for i in [60, 61, 62, 63, 64]])
- boundaries['upper_inner_lip'] = upper_inner_lip[upper_inner_lip_mask > 0]
- lower_inner_lip = np.array([landmarks[i] for i in [60, 67, 66, 65, 64]])
- lower_inner_lip_mask = np.array([mask_lnd[i] for i in [60, 67, 66, 65, 64]])
- boundaries['lower_inner_lip'] = lower_inner_lip[lower_inner_lip_mask > 0]
-
- elif self.num_landmarks == 98:
- boundaries['cheek'] = landmarks[0:33]
- boundaries['upper_left_eyebrow'] = landmarks[33:38]
- boundaries['lower_left_eyebrow'] = np.array([landmarks[i] for i in [33, 41, 40, 39, 38]])
- boundaries['upper_right_eyebrow'] = landmarks[42:47]
- boundaries['lower_right_eyebrow'] = landmarks[46:51]
- boundaries['nose'] = landmarks[51:55]
- boundaries['nose_bot'] = landmarks[55:60]
- boundaries['upper_left_eyelid'] = landmarks[60:65]
- boundaries['lower_left_eyelid'] = np.array([landmarks[i] for i in [60, 67, 66, 65, 64]])
- boundaries['upper_right_eyelid'] = landmarks[68:73]
- boundaries['lower_right_eyelid'] = np.array([landmarks[i] for i in [68, 75, 74, 73, 72]])
- boundaries['upper_outer_lip'] = landmarks[76:83]
- boundaries['lower_outer_lip'] = np.array([landmarks[i] for i in [76, 87, 86, 85, 84, 83, 82]])
- boundaries['upper_inner_lip'] = np.array([landmarks[i] for i in [88, 89, 90, 91, 92]])
- boundaries['lower_inner_lip'] = np.array([landmarks[i] for i in [88, 95, 94, 93, 92]])
-
- return boundaries
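-
-
-if __name__ == '__main__':
-    # Editor's sketch, not part of the original module: a quick smoke test of the
-    # augmentor above using synthetic landmarks in the 68-point layout it handles.
-    rng = np.random.default_rng(0)
-    sample = {'landmarks_float': rng.uniform(4, 60, size=(68, 2)),  # fake face in map coords
-              'mask_ldm_float': np.ones(68)}                        # every landmark visible
-    sample = AddBoundary(num_landmarks=68, map_size=64, sigma=1)(sample)
-    print(sample['boundary'].shape)  # (13, 64, 64): one Gaussian heatmap per facial boundary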
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Peti Beta VST and Explore the World of Harmoniums Accordions and More.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Peti Beta VST and Explore the World of Harmoniums Accordions and More.md
deleted file mode 100644
index 7535f3468801f33da4d42b0e26c4b4587206f8f6..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Download Peti Beta VST and Explore the World of Harmoniums Accordions and More.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-Peti Beta VST Download: How to Get the Best Harmonium Plugin for Your Music Production
- If you are looking for a realistic and versatile harmonium plugin for your music production, you may have heard of Peti Beta VST. This plugin is designed to emulate the sounds of various harmoniums, accordions, and other reed organs, using hybrid FM synthesis technology. But what is Peti Beta VST exactly, and how can you get it for your music production software? In this article, we will answer these questions and more, and help you decide if Peti Beta VST is worth it for your music production needs.
-peti beta vst download
Download Zip ✺✺✺ https://tinourl.com/2uKZXR
- What is Peti Beta VST?
- Peti Beta VST is a virtual instrument plugin that simulates the sounds of the harmonium, a member of the reed organ family, also known as "pump organ". The harmonium is widely used in north Indian music, as well as in pop and rock music for its full and sweet sound. Peti Beta VST can also emulate other reed organ variations, such as accordions, pipe organs, and melodicas.
- Peti Beta VST uses hybrid FM synthesis technology to generate realistic harmonium sounds. It allows you to control various parameters of the sound generation engine, such as pitch, modulation, envelope, filter, noise, and effects. It also gives you complete control over a simulated bellows mechanism, which is essential for creating expressive harmonium sounds. You can choose from 11 different bellows modes, ranging from automatic to manual pumping, with appropriate modulation of the sound parameters.
- Peti Beta VST comes with 64 presets that cover a wide range of harmonium sounds. You can also create your own presets and save them for later use. You can play Peti Beta VST using your MIDI keyboard or controller, or use your mouse to click on the virtual keyboard on the plugin interface.
- How to Download Peti Beta VST?
- If you want to download Peti Beta VST, you need to visit the official website of NUSofting, the developer of this plugin. NUSofting is a company that specializes in creating unique and innovative virtual instruments and effects plugins for music production. You can find their website at https://nusofting.liqihsynth.com.
- On their website, you can find more information about Peti Beta VST and other products they offer. You can also listen to some audio demos of Peti Beta VST and see some screenshots of its user interface. To download Peti Beta VST, you need to click on the "Buy Now" button on their website. This will take you to a secure payment page where you can choose your preferred payment method.
- Peti Beta VST costs €48.00 (about $55 USD) for both Mac OS X and Windows versions. You can pay using PayPal or credit card. After completing your payment, you will receive an email with a download link and a serial number for activating your plugin. You can also access your download link and serial number from your account page on NUSofting's website.
- How to Install Peti Beta VST?
- After downloading Peti Beta VST from NUSofting's website, you need to install it on your computer. The installation process is simple and straightforward. Here are the steps you need to follow:
-
-- Extract the downloaded ZIP file to a folder on your computer.
-- Open the folder and double-click on the installer file (PetibetaVstSetup.exe for Windows or PETI_Beta_Installer.pkg for Mac OS X).
-- Follow the instructions on the installer wizard. Choose your preferred language and accept the license agreement.
-- Select the destination folder where you want to install Peti Beta VST. Make sure it is a folder where your DAW can find your plugins.
-- Click on "Install" and wait for the installation process to finish.
-- Click on "Finish" and close the installer wizard.
-
- Congratulations! You have successfully installed Peti Beta VST on your computer. Now you can use it in your music production software.
- How to Use Peti Beta VST?
- To use Peti Beta VST in your music production software, you need to load it as a plugin in your DAW. The exact steps may vary depending on which DAW you are using, but here are some general guidelines:
- How to Load Peti Beta VST as a Plugin?
-
-- Open your DAW and create a new project or open an existing one.
-- Create a new track or select an existing one where you want to use Peti Beta VST.
-- Add an instrument plugin slot or an FX slot (depending on whether you want to use Peti Beta VST as an instrument or an effect) on your track.
-- Browse through your plugins list and find "PetibetaVst" under "NUSofting" or "Instruments". Drag and drop it onto the plugin slot or double-click on it.
-- You should see a window with the user interface of Peti Beta VST. You can resize it by dragging its corners or edges.
-
- How to Adjust Peti Beta VST Settings?
- How to Play Peti Beta VST?
- To play Peti Beta VST, you can use your MIDI keyboard or controller, or your mouse. If you have a MIDI keyboard or controller connected to your computer, you can use it to play the notes and control the parameters of Peti Beta VST. You can also assign different MIDI CC messages to different parameters of Peti Beta VST using the "MIDI Learn" function. To do this, right-click on any knob or slider on the plugin interface and select "MIDI Learn". Then move the knob or slider on your MIDI controller that you want to assign to that parameter. You should see a green light indicating that the MIDI Learn is successful.
- If you don't have a MIDI keyboard or controller, you can use your mouse to play Peti Beta VST. You can click on the virtual keyboard on the plugin interface to play the notes. You can also use your mouse wheel to control the bellows pressure. You can also click and drag on any knob or slider on the plugin interface to adjust the parameters of Peti Beta VST.
- What are the Benefits of Peti Beta VST?
- Peti Beta VST is a great plugin for creating realistic and versatile harmonium sounds for your music production. Here are some of the benefits of using Peti Beta VST:
- Realistic and Versatile Harmonium Sounds
- Peti Beta VST uses hybrid FM synthesis technology to generate realistic harmonium sounds that capture the nuances and characteristics of different harmoniums and accordions. You can choose from 64 presets that cover a wide range of harmonium sounds, from classical Indian music to pop and rock music. You can also create your own presets and save them for later use. You can adjust various parameters of the sound generation engine, such as pitch, modulation, envelope, filter, noise, and effects, to fine-tune your harmonium sound.
- Flexible and Creative Bellows Control
- Peti Beta VST gives you complete control over a simulated bellows mechanism, which is essential for creating expressive harmonium sounds. You can choose from 11 different bellows modes, ranging from automatic to manual pumping, with appropriate modulation of the sound parameters. You can also use your MIDI keyboard or controller, or your mouse wheel, to control the bellows pressure. The bellows control allows you to create dynamic and organic harmonium sounds that respond to your playing style.
- Compatible and Affordable Plugin
- Peti Beta VST is compatible with both Mac OS X and Windows operating systems, and works with most DAWs that support VST or AU plugins. It is easy to install and use, and has a low CPU usage. It also has a reasonable price of €48.00 (about $55 USD), which is cheaper than many other harmonium plugins on the market.
- What are the Drawbacks of Peti Beta VST?
- Peti Beta VST is not a perfect plugin, and it has some drawbacks that you should be aware of before buying it. Here are some of the drawbacks of using Peti Beta VST:
- Limited Sound Design Options
- Peti Beta VST is designed to emulate the sounds of harmoniums and accordions, and it does not offer much sound customization beyond that. If you are looking for a plugin that can create more diverse and complex sounds, such as synthesizers or samplers, Peti Beta VST may not be suitable for you. Peti Beta VST is best suited for creating realistic and authentic harmonium sounds for your music production.
- Outdated User Interface
- Peti Beta VST has a dated and cluttered user interface that may be hard to navigate and use. The plugin interface has many knobs and sliders that are not clearly labeled or organized. The virtual keyboard is also small and hard to see. The plugin interface does not have a modern or sleek design that matches the quality of its sound engine. The user interface may be a turn-off for some users who prefer a more user-friendly and aesthetically pleasing plugin interface.
- Conclusion: Is Peti Beta VST Worth It?
- Peti Beta VST is a unique and innovative plugin that simulates the sounds of various harmoniums and accordions using hybrid FM synthesis technology. It offers realistic and versatile harmonium sounds that can be used for various genres of music production. It also gives you complete control over a simulated bellows mechanism that allows you to create expressive and dynamic harmonium sounds.
- Peti Beta VST is compatible with both Mac OS X and Windows operating systems, and works with most DAWs that support VST or AU plugins. It is easy to install and use, and has a low CPU usage. It also has a reasonable price of €48.00 (about $55 USD), which is cheaper than many other harmonium plugins on the market.
- However, Peti Beta VST also has some drawbacks that you should be aware of before buying it. It does not offer much sound customization beyond the harmonium sounds, and it has a dated and cluttered user interface that may be hard to navigate and use.
- Therefore, we recommend Peti Beta VST for users who are looking for a realistic and versatile harmonium plugin for their music production, and who don't mind its limited sound design options and outdated user interface. If you are interested in Peti Beta VST, you can download it from NUSofting's website at https://nusofting.liqihsynth.com.
- Frequently Asked Questions
-
-- What is Peti Beta VST?
-Peti Beta VST is a virtual instrument plugin that simulates the sounds of various harmoniums and accordions using hybrid FM synthesis technology.
-- How do I download Peti Beta VST?
-You can download Peti Beta VST from NUSofting's website at https://nusofting.liqihsynth.com. You need to pay €48.00 (about $55 USD) for both Mac OS X and Windows versions.
-- How do I install Peti Beta VST?
-You need to extract the downloaded ZIP file to a folder on your computer, then run the installer file (PetibetaVstSetup.exe for Windows or PETI_Beta_Installer.pkg for Mac OS X). Follow the instructions on the installer wizard and select the destination folder where you want to install Peti Beta VST.
-- How do I use Peti Beta VST?
-You need to load Peti Beta VST as a plugin in your DAW. Then you can play it using your MIDI keyboard or controller, or your mouse. You can also adjust its settings using its user interface.
-- What are the benefits of Peti Beta VST?
-Peti Beta VST offers realistic and versatile harmonium sounds that can be used for various genres of music production. It also gives you complete control over a simulated bellows mechanism that allows you to create expressive and dynamic harmonium sounds.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Fbi Faces 4.0 Free Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Fbi Faces 4.0 Free Download.md
deleted file mode 100644
index 44ef4110b56eec456b08f9476fcdc3c1a81030e2..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Fbi Faces 4.0 Free Download.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-FBI Faces 4.0 Free Download: A Comprehensive Guide
-Facial recognition software is a powerful tool that can help identify, track, and apprehend criminal suspects, as well as provide access control, verification, and personalization services. One of the most popular and widely used facial recognition software is FBI Faces 4.0, which is developed by IQ Biometrix and endorsed by crime fighting agencies such as the CIA, FBI, and the US Military.
-In this article, we will explain what FBI Faces 4.0 is, how it works, why it is useful for law enforcement and other users, and how to download it for free. We will also compare it with some of the best alternatives in the market and answer some frequently asked questions about it.
-Fbi Faces 4.0 Free Download
Download Zip 🆓 https://tinourl.com/2uL1rX
-What is FBI Faces 4.0 and what does it do?
-FBI Faces 4.0 is a facial composite software that allows users to create realistic images of human faces from a database of over 4,400 facial features. Users can select from different categories such as eyes, nose, mouth, hair, skin tone, facial shape, and accessories to create a face that matches their description or memory.
-FBI Faces 4.0 can also edit existing faces by changing or adding features, as well as compare faces with a database of mugshots or photos to find matches or similarities. The software can export faces in JPEG format to use in police bulletins, advisories, websites, or other media.
-FBI Faces 4.0 is designed to help law enforcement agencies solve crimes by providing them with a reliable and easy way to generate suspect sketches or identify potential witnesses or victims. It can also be used by other users such as educators, artists, researchers, or hobbyists who want to create faces for various purposes.
-How to download FBI Faces 4.0 for free?
-FBI Faces 4.0 is not free software, but it can be found on some websites that offer cracked versions or serial keys. However, this is not recommended, as it may expose your computer to viruses, malware, or legal issues.
-The best way to download FBI Faces 4.0 for free is to use the official trial version that is available on the IQ Biometrix website. The trial version allows you to use the software for 15 days with full functionality and access to all features. You can also request a demo or a quote from the website if you want to purchase the software.
-Features and benefits of FBI Faces 4.0
-FBI Faces 4.0 has many features and benefits that make it one of the best facial recognition software in the market. Here are some of them:
-
-Database of over 4,400 facial features
-FBI Faces 4.0 has a large and diverse database of facial features that covers different ethnicities, genders, ages, and expressions. Users can choose from hundreds of options for each feature category, such as eye color, nose shape, mouth type, hair style, glasses, hats, earrings, and more. Users can also adjust the size, position, rotation, and color of each feature to create a more accurate and realistic face.
-Ability to create realistic facial composites
-FBI Faces 4.0 uses advanced algorithms and techniques to create facial composites that look natural and lifelike. The software blends the features seamlessly and applies shadows, highlights, and textures to enhance the quality of the face. Users can also modify the expression, emotion, age, and gender of the face to match their needs.
-Compatibility with other software and devices
-FBI Faces 4.0 is compatible with other software and devices that can help users create or use faces. For example, users can import photos or sketches from scanners, cameras, or other sources and use them as a base or a reference for creating faces. Users can also export faces to other software such as Photoshop, Illustrator, or PowerPoint for further editing or presentation. Additionally, FBI Faces 4.0 can work with biometric devices such as fingerprint scanners or iris scanners to enhance the security and accuracy of identification.
-User-friendly interface and easy operation
-FBI Faces 4.0 has a user-friendly interface that is intuitive and easy to navigate. Users can access all the features and functions from the main menu or the toolbar. Users can also use the drag-and-drop feature to add or remove features from the face. The software also provides tips and tutorials to help users get started and learn how to use the software effectively.
-How to use FBI Faces 4.0
-Using FBI Faces 4.0 is simple and straightforward. Here are the basic steps to follow:
-Installation and activation
-To install FBI Faces 4.0, you need to download the setup file from the IQ Biometrix website or from a trusted source. Then, you need to run the setup file and follow the instructions on the screen. You may need to enter a serial key or a license code to activate the software.
-Creating a new face or editing an existing one
-To create a new face, you need to click on the "New" button on the toolbar or select "File > New" from the menu. Then, you need to choose a base face from the gallery or import your own photo or sketch. You can then start adding or changing features from the database by clicking on the feature category and selecting an option from the list. You can also adjust the size, position, rotation, and color of each feature by using the sliders or the mouse.
-To edit an existing face, you need to open it from the gallery or import it from your computer or another source. You can then modify any feature by clicking on it and choosing a different option from the list or using the sliders or the mouse.
-Saving, printing, and exporting faces
-To save a face, you need to click on the "Save" button on the toolbar or select "File > Save" from the menu. You can then choose a name and a location for your file. The file will be saved in JPEG format by default.
-To print a face, you need to click on the "Print" button on the toolbar or select "File > Print" from the menu. You can then choose your printer settings and preferences and click on "OK". The face will be printed on paper.
-To export a face, you need to click on the "Export" button on the toolbar or select "File > Export" from the menu. You can then choose a format and a location for your file. The software supports various formats such as BMP, GIF, PNG, TIFF, and PDF.
-Searching and matching faces
-To search for a face, you need to click on the "Search" button on the toolbar or select "Tools > Search" from the menu. You can then choose a source for your search, such as the gallery, the database, or your computer. You can also specify some criteria for your search, such as gender, age, ethnicity, or feature type. The software will then display the results that match your query.
-To match a face, you need to click on the "Match" button on the toolbar or select "Tools > Match" from the menu. You can then choose a face to compare with another face from the gallery, the database, or your computer. The software will then show you the similarity score and the percentage of matching features between the two faces.
-Alternatives to FBI Faces 4.0
-FBI Faces 4.0 is not the only facial recognition software available in the market. There are many other alternatives that offer different features and benefits for different users and purposes. Here are some of them:
-CompreFace: open-source and scalable solution
-CompreFace is an open-source facial recognition software that can be easily integrated with any application or system. It allows users to create and manage their own face collections, train their own models, and perform face detection and recognition tasks. It also provides a REST API and a web interface for easy access and operation. CompreFace is scalable and can handle large volumes of data and requests.
-Paravision: AI-powered ID verification for travel and border checks
-Paravision is an AI-powered facial recognition software that provides ID verification and biometric authentication solutions for travel and border security. It can verify identities from various documents such as passports, visas, driver's licenses, and national IDs. It can also detect spoofing attempts such as masks, makeup, or photos. Paravision is compliant with international standards and regulations such as ISO/IEC 19794-5 and NIST FRVT.
-Luxand: facial recognition technology development for AI developers
-Luxand is a facial recognition software that provides technology development tools and SDKs for AI developers who want to create their own applications or systems using facial recognition technology. It offers various features such as face detection, face recognition, face tracking, face morphing, face aging, emotion recognition, gender detection, and more. Luxand also provides ready-made solutions for various industries such as entertainment, education, healthcare, and security.
-Conclusion
-FBI Faces 4.0 is a facial composite software that can help law enforcement agencies and other users create realistic images of human faces from a database of over 4,400 facial features. It can also edit existing faces by changing or adding features, as well as compare faces with a database of mugshots or photos to find matches or similarities.
-FBI Faces 4.0 has many features and benefits that make it one of the best facial recognition software in the market. It has a large and diverse database of facial features that covers different ethnicities, genders, ages, and expressions. It can create facial composites that look natural and lifelike by blending the features seamlessly and applying shadows, highlights, and textures to the face. It is compatible with other software and devices that can help users create or use faces. It has a user-friendly interface and easy operation that allows users to access all the features and functions from the main menu or the toolbar.
-FBI Faces 4.0 can be downloaded for free from the IQ Biometrix website or from some websites that offer cracked versions or serial keys. However, the latter option is not recommended as it may expose your computer to viruses, malware, or legal issues. The best way to download FBI Faces 4.0 for free is to use the official trial version that is available on the IQ Biometrix website. The trial version allows you to use the software for 15 days with full functionality and access to all features.
-However, FBI Faces 4.0 is not the only facial recognition software available in the market. There are many other alternatives that offer different features and benefits for different users and purposes. Some of them are CompreFace, Paravision, and Luxand. These software are open-source, AI-powered, or technology development tools that can provide facial recognition solutions for various industries and applications.
-In conclusion, FBI Faces 4.0 is a facial composite software that can help law enforcement agencies and other users create realistic images of human faces from a database of over 4,400 facial features. It has many features and benefits that make it one of the best facial recognition software in the market. However, it is not a free software and it can be downloaded for free only from the official website or from some risky sources. There are also other alternatives that can provide similar or better facial recognition solutions for different needs and preferences.
-If you are interested in FBI Faces 4.0 or any of its alternatives, you can visit their websites or contact their support teams for more information and assistance. You can also read some reviews or testimonials from other users who have used these software before. You can also watch some videos or tutorials on how to use these software effectively and efficiently.
-We hope this article has been helpful and informative for you. Thank you for reading and have a great day!
-FAQs
-Here are some frequently asked questions about FBI Faces 4.0 and its alternatives:
-What are the system requirements for FBI Faces 4.0?
-The system requirements for FBI Faces 4.0 are as follows:
-
-- Operating system: Windows XP/Vista/7/8/10
-- Processor: Pentium III or higher
-- Memory: 256 MB RAM or higher
-- Hard disk space: 500 MB or higher
-- Display: 800 x 600 pixels or higher
-- Internet connection: required for activation and updates
-
-How accurate is FBI Faces 4.0?
-FBI Faces 4.0 is not a scientific tool that can guarantee 100% accuracy or reliability. It is a creative tool that can help users create realistic images of human faces based on their description or memory. The accuracy of FBI Faces 4.0 depends on several factors such as the quality of the input data, the selection of the features, the adjustment of the parameters, and the subjective perception of the user.
-How secure is FBI Faces 4.0?
-FBI Faces 4.0 is a secure software that does not collect, store, or share any personal or sensitive data from its users. The software does not require any registration or login to use it. The software also does not connect to any external servers or databases without the user's permission or knowledge.
-What are the privacy and security issues of facial recognition software?
-Facial recognition software is a technology that can identify, track, and analyze human faces from images or videos. It can be used for various purposes such as law enforcement, security, verification, personalization, entertainment, and research. However, facial recognition software also poses some privacy and security issues such as:
-
-- Invasion of privacy: Facial recognition software can capture and store personal information such as identity, location, behavior, preferences, and associations without the consent or awareness of the individuals.
-- Bias and discrimination: Facial recognition software can produce inaccurate or unfair results based on factors such as race, gender, age, ethnicity, or expression.
-- Hacking and misuse: Facial recognition software can be hacked or misused by unauthorized parties such as criminals, hackers, terrorists, or rogue governments.
-
-To address these issues, facial recognition software should be regulated by laws and policies that protect the rights and interests of individuals and groups. Facial recognition software should also be transparent and accountable for its performance and outcomes. Facial recognition software should also be ethical and responsible for its impact and consequences on society and individuals.
-Where can I find more information about FBI Faces 4.0?
-You can find more information about FBI Faces 4.0 on the IQ Biometrix website, where you can download the trial version, request a demo or a quote, read the user manual, watch the video tutorials, or contact the support team. You can also read some reviews or testimonials from other users who have used FBI Faces 4.0 on various websites or blogs.
b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/rakesh092/Voice_cloning/README.md b/spaces/rakesh092/Voice_cloning/README.md
deleted file mode 100644
index eef7f847c15edb0867cc83acdd59e2f403b98bcc..0000000000000000000000000000000000000000
--- a/spaces/rakesh092/Voice_cloning/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Voice Cloning
-emoji: 📚
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Media Encoder CC 2015.3 V10.3 Multilingual [Latest].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Media Encoder CC 2015.3 V10.3 Multilingual [Latest].md
deleted file mode 100644
index f71fbc404e63fbfde2c719a8d08ff94a6ffd805c..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Media Encoder CC 2015.3 V10.3 Multilingual [Latest].md
+++ /dev/null
@@ -1,6 +0,0 @@
-Adobe Media Encoder CC 2015.3 v10.3 Multilingual [Latest]
Download File ⚙ https://urlgoal.com/2uCLKE
-
-This product may integrate with or allow access to certain Adobe or third-party hosted online.. Adobe Media Encoder CC 2015.3 (v10.3) Multilingual. Software . 1fdad05405
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Answer Key For Medical Transcription Fundamentals And Practice 3rd Edition Zip-adds 1.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Answer Key For Medical Transcription Fundamentals And Practice 3rd Edition Zip-adds 1.md
deleted file mode 100644
index 6e9a66091221cdb71ebd79b1288151c6094de3b9..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Answer Key For Medical Transcription Fundamentals And Practice 3rd Edition Zip-adds 1.md
+++ /dev/null
@@ -1,6 +0,0 @@
-answer key for medical transcription fundamentals and practice 3rd edition zip-adds 1
DOWNLOAD ○○○ https://urlgoal.com/2uCJYv
-
-Answer Key For Medical Transcription Fundamentals And Practice 3rd Edition Zip-adds 1 >> DOWNLOAD (Mirror #1) e31cf57bcd Medical Transcription ... 4d29de3e1b
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download !!TOP!! AutoClicker R5 ! (pour Linkbucks) 8.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download !!TOP!! AutoClicker R5 ! (pour Linkbucks) 8.md
deleted file mode 100644
index 5f2cf0c193deb0b459c64551ff8b635122c4f6a9..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download !!TOP!! AutoClicker R5 ! (pour Linkbucks) 8.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Download AutoClicker R5 ! (pour Linkbucks) 8
Download Zip ⇔ https://urlgoal.com/2uCMjH
-
- 3cee63e6c2
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Eptar Reinforcement 1.36 For ArchiCAD.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Eptar Reinforcement 1.36 For ArchiCAD.md
deleted file mode 100644
index 3bb40fd3fd6f34264dd38f47e28a226b73321229..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Eptar Reinforcement 1.36 For ArchiCAD.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Eptar Reinforcement 1.36 for ArchiCAD
Download Zip · https://urlgoal.com/2uCJRS
-
-eptar reinforcement archicad 24, eptar reinforcement archicad 23 crack, eptar reinforcement archicad, eptar reinforcement archicad 22, eptar reinforcement ... Read the whole topic: https://eva.ru/topic/262/2651319.htm?messageId=68533992
-eptar reinforcement archicad 24, eptar reinforcement archicad 23 crack, eptar reinforcement archicad, eptar reinforcement archicad 22, eptar reinforcement archicad 12, eptar reinforcement archicad 19, eptar reinforcement arch 8a78ff9644
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Lateef 1 Full Movie In Hindi Hd 1080p).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Lateef 1 Full Movie In Hindi Hd 1080p).md
deleted file mode 100644
index 99832efeba829b6110c35590f1fc6a678d0f40cd..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Lateef 1 Full Movie In Hindi Hd 1080p).md
+++ /dev/null
@@ -1,12 +0,0 @@
-HD Online Player (Lateef 1 full movie in hindi hd 1080p)
DOWNLOAD ✒ https://urlgoal.com/2uCK6K
-
-Analysis: Took us many to realise this is indeed the same director and same writers of the Mughal-e-Azam. It is impossible to ignore the message it sends when one sees on the trailer that Ali asks if he can be bought over and that’s when we realise it is indeed the same movie. After the effects of seeing Mughal-e-Azam in theaters, when you see it on the big screen, it is nothing short of amazing. The visuals are incredible. The soundtrack is a mind blowing mix of new sounds and old. There is a certain grandeur to the entire film and you find yourself glued to the screen the entire time. The way the characters are portrayed is a lesson to all filmmakers. You have a baddie, a good guy and the rest is pretty standard. It is the consistency and combination of those three, which makes up this work of art.
-
-The performances are stellar. Each actor in the film delivers in the space of what you would normally expect of him. The actors of Mughal-e-Azam live on in this film too. The chemistry between all of them is great. The script is not just its strength, the script is what makes it the most quotable film of the year. The dialogue and narration leave a mark. The most crucial of all is that it shows how low the level of filmmaking has reached when these 3 films take place. This doesn’t mean that we shouldn’t applaud them but it should make you want to give a reminder to all those filmmakers out there.
-
-Review: As expected, a huge bang on the first full day of the Indian Cinema 2013, PK is a very entertaining film. It’s a rare thing to say, that’s for sure. The film starts with a black screen. Then the credits begin and the first line that is spoken is “It’s May 27”. You wait for almost 20 minutes for the first scene, and then we are introduced to the protagonist, one Raja Goswami. As a first timer, I don’t know how to comment on the film. I feel that at the end of the day it depends on your taste. It is a film that is either liked or disliked.
-
-Talking about Raja Goswami, he is a top class performance by the entire cast. I feel that the Hindi Film industry as a whole has taken a bit of a dive 4fefd39f24
-
-
-
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolact_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolact_head.py
deleted file mode 100644
index 8f89a271baf2fd75eb63dc16e8343870fe640760..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolact_head.py
+++ /dev/null
@@ -1,1018 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-from mmcv.runner import BaseModule, ModuleList, force_fp32
-
-from mmdet.core import build_sampler, fast_nms, images_to_levels, multi_apply
-from mmdet.core.utils import select_single_mlvl
-from ..builder import HEADS, build_loss
-from .anchor_head import AnchorHead
-
-
-@HEADS.register_module()
-class YOLACTHead(AnchorHead):
- """YOLACT box head used in https://arxiv.org/abs/1904.02689.
-
- Note that YOLACT head is a light version of RetinaNet head.
- Four differences are described as follows:
-
- 1. YOLACT box head has three-times fewer anchors.
- 2. YOLACT box head shares the convs for box and cls branches.
- 3. YOLACT box head uses OHEM instead of Focal loss.
- 4. YOLACT box head predicts a set of mask coefficients for each box.
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- anchor_generator (dict): Config dict for anchor generator
- loss_cls (dict): Config of classification loss.
- loss_bbox (dict): Config of localization loss.
- num_head_convs (int): Number of the conv layers shared by
- box and cls branches.
- num_protos (int): Number of the mask coefficients.
- use_ohem (bool): If true, ``loss_single_OHEM`` will be used for
- cls loss calculation. If false, ``loss_single`` will be used.
- conv_cfg (dict): Dictionary to construct and config conv layer.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- init_cfg (dict or list[dict], optional): Initialization config dict.
- """
-
- def __init__(self,
- num_classes,
- in_channels,
- anchor_generator=dict(
- type='AnchorGenerator',
- octave_base_scale=3,
- scales_per_octave=1,
- ratios=[0.5, 1.0, 2.0],
- strides=[8, 16, 32, 64, 128]),
- loss_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=False,
- reduction='none',
- loss_weight=1.0),
- loss_bbox=dict(
- type='SmoothL1Loss', beta=1.0, loss_weight=1.5),
- num_head_convs=1,
- num_protos=32,
- use_ohem=True,
- conv_cfg=None,
- norm_cfg=None,
- init_cfg=dict(
- type='Xavier',
- distribution='uniform',
- bias=0,
- layer='Conv2d'),
- **kwargs):
- self.num_head_convs = num_head_convs
- self.num_protos = num_protos
- self.use_ohem = use_ohem
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- super(YOLACTHead, self).__init__(
- num_classes,
- in_channels,
- loss_cls=loss_cls,
- loss_bbox=loss_bbox,
- anchor_generator=anchor_generator,
- init_cfg=init_cfg,
- **kwargs)
- if self.use_ohem:
- sampler_cfg = dict(type='PseudoSampler')
- self.sampler = build_sampler(sampler_cfg, context=self)
- self.sampling = False
-
- def _init_layers(self):
- """Initialize layers of the head."""
- self.relu = nn.ReLU(inplace=True)
- self.head_convs = ModuleList()
- for i in range(self.num_head_convs):
- chn = self.in_channels if i == 0 else self.feat_channels
- self.head_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- self.conv_cls = nn.Conv2d(
- self.feat_channels,
- self.num_base_priors * self.cls_out_channels,
- 3,
- padding=1)
- self.conv_reg = nn.Conv2d(
- self.feat_channels, self.num_base_priors * 4, 3, padding=1)
- self.conv_coeff = nn.Conv2d(
- self.feat_channels,
- self.num_base_priors * self.num_protos,
- 3,
- padding=1)
-
- def forward_single(self, x):
- """Forward feature of a single scale level.
-
- Args:
- x (Tensor): Features of a single scale level.
-
- Returns:
- tuple:
- cls_score (Tensor): Cls scores for a single scale level \
- the channels number is num_anchors * num_classes.
- bbox_pred (Tensor): Box energies / deltas for a single scale \
- level, the channels number is num_anchors * 4.
- coeff_pred (Tensor): Mask coefficients for a single scale \
- level, the channels number is num_anchors * num_protos.
- """
- for head_conv in self.head_convs:
- x = head_conv(x)
- cls_score = self.conv_cls(x)
- bbox_pred = self.conv_reg(x)
- coeff_pred = self.conv_coeff(x).tanh()
- return cls_score, bbox_pred, coeff_pred
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """A combination of the func:``AnchorHead.loss`` and
- func:``SSDHead.loss``.
-
- When ``self.use_ohem == True``, it functions like ``SSDHead.loss``,
- otherwise, it follows ``AnchorHead.loss``. Besides, it additionally
- returns ``sampling_results``.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): Class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
- boxes can be ignored when computing the loss. Default: None
-
- Returns:
- tuple:
- dict[str, Tensor]: A dictionary of loss components.
- List[:obj:``SamplingResult``]: Sampler results for each image.
- """
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.prior_generator.num_levels
-
- device = cls_scores[0].device
-
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, img_metas, device=device)
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
- cls_reg_targets = self.get_targets(
- anchor_list,
- valid_flag_list,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore_list=gt_bboxes_ignore,
- gt_labels_list=gt_labels,
- label_channels=label_channels,
- unmap_outputs=not self.use_ohem,
- return_sampling_results=True)
- if cls_reg_targets is None:
- return None
- (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list,
- num_total_pos, num_total_neg, sampling_results) = cls_reg_targets
-
- if self.use_ohem:
- num_images = len(img_metas)
- all_cls_scores = torch.cat([
- s.permute(0, 2, 3, 1).reshape(
- num_images, -1, self.cls_out_channels) for s in cls_scores
- ], 1)
- all_labels = torch.cat(labels_list, -1).view(num_images, -1)
- all_label_weights = torch.cat(label_weights_list,
- -1).view(num_images, -1)
- all_bbox_preds = torch.cat([
- b.permute(0, 2, 3, 1).reshape(num_images, -1, 4)
- for b in bbox_preds
- ], -2)
- all_bbox_targets = torch.cat(bbox_targets_list,
- -2).view(num_images, -1, 4)
- all_bbox_weights = torch.cat(bbox_weights_list,
- -2).view(num_images, -1, 4)
-
- # concat all level anchors to a single tensor
- all_anchors = []
- for i in range(num_images):
- all_anchors.append(torch.cat(anchor_list[i]))
-
- # check NaN and Inf
- assert torch.isfinite(all_cls_scores).all().item(), \
- 'classification scores become infinite or NaN!'
- assert torch.isfinite(all_bbox_preds).all().item(), \
-            'bbox predictions become infinite or NaN!'
-
- losses_cls, losses_bbox = multi_apply(
- self.loss_single_OHEM,
- all_cls_scores,
- all_bbox_preds,
- all_anchors,
- all_labels,
- all_label_weights,
- all_bbox_targets,
- all_bbox_weights,
- num_total_samples=num_total_pos)
- else:
- num_total_samples = (
- num_total_pos +
- num_total_neg if self.sampling else num_total_pos)
-
- # anchor number of multi levels
- num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
- # concat all level anchors and flags to a single tensor
- concat_anchor_list = []
- for i in range(len(anchor_list)):
- concat_anchor_list.append(torch.cat(anchor_list[i]))
- all_anchor_list = images_to_levels(concat_anchor_list,
- num_level_anchors)
- losses_cls, losses_bbox = multi_apply(
- self.loss_single,
- cls_scores,
- bbox_preds,
- all_anchor_list,
- labels_list,
- label_weights_list,
- bbox_targets_list,
- bbox_weights_list,
- num_total_samples=num_total_samples)
-
- return dict(
- loss_cls=losses_cls, loss_bbox=losses_bbox), sampling_results
-
- def loss_single_OHEM(self, cls_score, bbox_pred, anchors, labels,
- label_weights, bbox_targets, bbox_weights,
- num_total_samples):
-        """See func:``SSDHead.loss``."""
- loss_cls_all = self.loss_cls(cls_score, labels, label_weights)
-
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- pos_inds = ((labels >= 0) & (labels < self.num_classes)).nonzero(
- as_tuple=False).reshape(-1)
- neg_inds = (labels == self.num_classes).nonzero(
- as_tuple=False).view(-1)
-
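-        # OHEM: keep every positive sample but only the hardest negatives (those
-        # with the largest classification loss), capped at train_cfg.neg_pos_ratio
-        # negatives per positive.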
- num_pos_samples = pos_inds.size(0)
- if num_pos_samples == 0:
- num_neg_samples = neg_inds.size(0)
- else:
- num_neg_samples = self.train_cfg.neg_pos_ratio * num_pos_samples
- if num_neg_samples > neg_inds.size(0):
- num_neg_samples = neg_inds.size(0)
- topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples)
- loss_cls_pos = loss_cls_all[pos_inds].sum()
- loss_cls_neg = topk_loss_cls_neg.sum()
- loss_cls = (loss_cls_pos + loss_cls_neg) / num_total_samples
- if self.reg_decoded_bbox:
- # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
- # is applied directly on the decoded bounding boxes, it
- # decodes the already encoded coordinates to absolute format.
- bbox_pred = self.bbox_coder.decode(anchors, bbox_pred)
- loss_bbox = self.loss_bbox(
- bbox_pred,
- bbox_targets,
- bbox_weights,
- avg_factor=num_total_samples)
- return loss_cls[None], loss_bbox
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'coeff_preds'))
- def get_bboxes(self,
- cls_scores,
- bbox_preds,
- coeff_preds,
- img_metas,
- cfg=None,
- rescale=False):
-        """Similar to func:``AnchorHead.get_bboxes``, but additionally
- processes coeff_preds.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- with shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- coeff_preds (list[Tensor]): Mask coefficients for each scale
- level with shape (N, num_anchors * num_protos, H, W)
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- cfg (mmcv.Config | None): Test / postprocessing configuration,
- if None, test_cfg would be used
- rescale (bool): If True, return boxes in original image space.
- Default: False.
-
- Returns:
- list[tuple[Tensor, Tensor, Tensor]]: Each item in result_list is
- a 3-tuple. The first item is an (n, 5) tensor, where the
- first 4 columns are bounding box positions
- (tl_x, tl_y, br_x, br_y) and the 5-th column is a score
- between 0 and 1. The second item is an (n,) tensor where each
- item is the predicted class label of the corresponding box.
- The third item is an (n, num_protos) tensor where each item
- is the predicted mask coefficients of instance inside the
- corresponding box.
- """
- assert len(cls_scores) == len(bbox_preds)
- num_levels = len(cls_scores)
-
- device = cls_scores[0].device
- featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)]
- mlvl_anchors = self.prior_generator.grid_priors(
- featmap_sizes, device=device)
-
- det_bboxes = []
- det_labels = []
- det_coeffs = []
- for img_id in range(len(img_metas)):
- cls_score_list = select_single_mlvl(cls_scores, img_id)
- bbox_pred_list = select_single_mlvl(bbox_preds, img_id)
- coeff_pred_list = select_single_mlvl(coeff_preds, img_id)
- img_shape = img_metas[img_id]['img_shape']
- scale_factor = img_metas[img_id]['scale_factor']
- bbox_res = self._get_bboxes_single(cls_score_list, bbox_pred_list,
- coeff_pred_list, mlvl_anchors,
- img_shape, scale_factor, cfg,
- rescale)
- det_bboxes.append(bbox_res[0])
- det_labels.append(bbox_res[1])
- det_coeffs.append(bbox_res[2])
- return det_bboxes, det_labels, det_coeffs
-
- def _get_bboxes_single(self,
- cls_score_list,
- bbox_pred_list,
- coeff_preds_list,
- mlvl_anchors,
- img_shape,
- scale_factor,
- cfg,
- rescale=False):
- """"Similar to func:``AnchorHead._get_bboxes_single``, but additionally
- processes coeff_preds_list and uses fast NMS instead of traditional
- NMS.
-
- Args:
- cls_score_list (list[Tensor]): Box scores for a single scale level
- Has shape (num_anchors * num_classes, H, W).
- bbox_pred_list (list[Tensor]): Box energies / deltas for a single
- scale level with shape (num_anchors * 4, H, W).
- coeff_preds_list (list[Tensor]): Mask coefficients for a single
- scale level with shape (num_anchors * num_protos, H, W).
- mlvl_anchors (list[Tensor]): Box reference for a single scale level
- with shape (num_total_anchors, 4).
- img_shape (tuple[int]): Shape of the input image,
- (height, width, 3).
- scale_factor (ndarray): Scale factor of the image arange as
- (w_scale, h_scale, w_scale, h_scale).
- cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
-
- Returns:
- tuple[Tensor, Tensor, Tensor]: The first item is an (n, 5) tensor,
- where the first 4 columns are bounding box positions
- (tl_x, tl_y, br_x, br_y) and the 5-th column is a score between
- 0 and 1. The second item is an (n,) tensor where each item is
- the predicted class label of the corresponding box. The third
- item is an (n, num_protos) tensor where each item is the
- predicted mask coefficients of instance inside the
- corresponding box.
- """
- cfg = self.test_cfg if cfg is None else cfg
- assert len(cls_score_list) == len(bbox_pred_list) == len(mlvl_anchors)
- nms_pre = cfg.get('nms_pre', -1)
- mlvl_bboxes = []
- mlvl_scores = []
- mlvl_coeffs = []
- for cls_score, bbox_pred, coeff_pred, anchors in \
- zip(cls_score_list, bbox_pred_list,
- coeff_preds_list, mlvl_anchors):
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
- cls_score = cls_score.permute(1, 2,
- 0).reshape(-1, self.cls_out_channels)
- if self.use_sigmoid_cls:
- scores = cls_score.sigmoid()
- else:
- scores = cls_score.softmax(-1)
- bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4)
- coeff_pred = coeff_pred.permute(1, 2,
- 0).reshape(-1, self.num_protos)
-
- if 0 < nms_pre < scores.shape[0]:
- # Get maximum scores for foreground classes.
- if self.use_sigmoid_cls:
- max_scores, _ = scores.max(dim=1)
- else:
- # remind that we set FG labels to [0, num_class-1]
- # since mmdet v2.0
- # BG cat_id: num_class
- max_scores, _ = scores[:, :-1].max(dim=1)
- _, topk_inds = max_scores.topk(nms_pre)
- anchors = anchors[topk_inds, :]
- bbox_pred = bbox_pred[topk_inds, :]
- scores = scores[topk_inds, :]
- coeff_pred = coeff_pred[topk_inds, :]
- bboxes = self.bbox_coder.decode(
- anchors, bbox_pred, max_shape=img_shape)
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
- mlvl_coeffs.append(coeff_pred)
- mlvl_bboxes = torch.cat(mlvl_bboxes)
- if rescale:
- mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor)
- mlvl_scores = torch.cat(mlvl_scores)
- mlvl_coeffs = torch.cat(mlvl_coeffs)
- if self.use_sigmoid_cls:
- # Add a dummy background class to the backend when using sigmoid
- # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
- # BG cat_id: num_class
- padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1)
- mlvl_scores = torch.cat([mlvl_scores, padding], dim=1)
- det_bboxes, det_labels, det_coeffs = fast_nms(mlvl_bboxes, mlvl_scores,
- mlvl_coeffs,
- cfg.score_thr,
- cfg.iou_thr, cfg.top_k,
- cfg.max_per_img)
- return det_bboxes, det_labels, det_coeffs
-
-
-@HEADS.register_module()
-class YOLACTSegmHead(BaseModule):
- """YOLACT segmentation head used in https://arxiv.org/abs/1904.02689.
-
- Apply a semantic segmentation loss on feature space using layers that are
- only evaluated during training to increase performance with no speed
- penalty.
-
- Args:
- in_channels (int): Number of channels in the input feature map.
- num_classes (int): Number of categories excluding the background
- category.
- loss_segm (dict): Config of semantic segmentation loss.
- init_cfg (dict or list[dict], optional): Initialization config dict.
- """
-
- def __init__(self,
- num_classes,
- in_channels=256,
- loss_segm=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.0),
- init_cfg=dict(
- type='Xavier',
- distribution='uniform',
- override=dict(name='segm_conv'))):
- super(YOLACTSegmHead, self).__init__(init_cfg)
- self.in_channels = in_channels
- self.num_classes = num_classes
- self.loss_segm = build_loss(loss_segm)
- self._init_layers()
- self.fp16_enabled = False
-
- def _init_layers(self):
- """Initialize layers of the head."""
- self.segm_conv = nn.Conv2d(
- self.in_channels, self.num_classes, kernel_size=1)
-
- def forward(self, x):
- """Forward feature from the upstream network.
-
- Args:
- x (Tensor): Feature from the upstream network, which is
- a 4D-tensor.
-
- Returns:
- Tensor: Predicted semantic segmentation map with shape
- (N, num_classes, H, W).
- """
- return self.segm_conv(x)
-
- @force_fp32(apply_to=('segm_pred', ))
- def loss(self, segm_pred, gt_masks, gt_labels):
- """Compute loss of the head.
-
- Args:
- segm_pred (list[Tensor]): Predicted semantic segmentation map
- with shape (N, num_classes, H, W).
- gt_masks (list[Tensor]): Ground truth masks for each image with
- the same shape of the input image.
- gt_labels (list[Tensor]): Class indices corresponding to each box.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- loss_segm = []
- num_imgs, num_classes, mask_h, mask_w = segm_pred.size()
- for idx in range(num_imgs):
- cur_segm_pred = segm_pred[idx]
- cur_gt_masks = gt_masks[idx].float()
- cur_gt_labels = gt_labels[idx]
- segm_targets = self.get_targets(cur_segm_pred, cur_gt_masks,
- cur_gt_labels)
- if segm_targets is None:
- loss = self.loss_segm(cur_segm_pred,
- torch.zeros_like(cur_segm_pred),
- torch.zeros_like(cur_segm_pred))
- else:
- loss = self.loss_segm(
- cur_segm_pred,
- segm_targets,
- avg_factor=num_imgs * mask_h * mask_w)
- loss_segm.append(loss)
- return dict(loss_segm=loss_segm)
-
- def get_targets(self, segm_pred, gt_masks, gt_labels):
- """Compute semantic segmentation targets for each image.
-
- Args:
- segm_pred (Tensor): Predicted semantic segmentation map
- with shape (num_classes, H, W).
- gt_masks (Tensor): Ground truth masks for each image with
- the same shape of the input image.
- gt_labels (Tensor): Class indices corresponding to each box.
-
- Returns:
- Tensor: Semantic segmentation targets with shape
- (num_classes, H, W).
- """
- if gt_masks.size(0) == 0:
- return None
- num_classes, mask_h, mask_w = segm_pred.size()
- with torch.no_grad():
- downsampled_masks = F.interpolate(
- gt_masks.unsqueeze(0), (mask_h, mask_w),
- mode='bilinear',
- align_corners=False).squeeze(0)
- downsampled_masks = downsampled_masks.gt(0.5).float()
- segm_targets = torch.zeros_like(segm_pred, requires_grad=False)
- for obj_idx in range(downsampled_masks.size(0)):
- segm_targets[gt_labels[obj_idx] - 1] = torch.max(
- segm_targets[gt_labels[obj_idx] - 1],
- downsampled_masks[obj_idx])
- return segm_targets
-
- def simple_test(self, feats, img_metas, rescale=False):
- """Test function without test-time augmentation."""
- raise NotImplementedError(
- 'simple_test of YOLACTSegmHead is not implemented '
- 'because this head is only evaluated during training')
-
-
-@HEADS.register_module()
-class YOLACTProtonet(BaseModule):
- """YOLACT mask head used in https://arxiv.org/abs/1904.02689.
-
- This head outputs the mask prototypes for YOLACT.
-
- Args:
- in_channels (int): Number of channels in the input feature map.
- proto_channels (tuple[int]): Output channels of protonet convs.
- proto_kernel_sizes (tuple[int]): Kernel sizes of protonet convs.
- include_last_relu (bool): Whether to keep the last ReLU of the protonet.
- num_protos (int): Number of prototypes.
- num_classes (int): Number of categories excluding the background
- category.
- loss_mask_weight (float): Reweight the mask loss by this factor.
- max_masks_to_train (int): Maximum number of masks to train for
- each image.
- init_cfg (dict or list[dict], optional): Initialization config dict.
- """
-
- def __init__(self,
- num_classes,
- in_channels=256,
- proto_channels=(256, 256, 256, None, 256, 32),
- proto_kernel_sizes=(3, 3, 3, -2, 3, 1),
- include_last_relu=True,
- num_protos=32,
- loss_mask_weight=1.0,
- max_masks_to_train=100,
- init_cfg=dict(
- type='Xavier',
- distribution='uniform',
- override=dict(name='protonet'))):
- super(YOLACTProtonet, self).__init__(init_cfg)
- self.in_channels = in_channels
- self.proto_channels = proto_channels
- self.proto_kernel_sizes = proto_kernel_sizes
- self.include_last_relu = include_last_relu
- self.protonet = self._init_layers()
-
- self.loss_mask_weight = loss_mask_weight
- self.num_protos = num_protos
- self.num_classes = num_classes
- self.max_masks_to_train = max_masks_to_train
- self.fp16_enabled = False
-
- def _init_layers(self):
- """A helper function to take a config setting and turn it into a
- network."""
- # Possible patterns:
- # ( 256, 3) -> conv
- # ( 256,-2) -> deconv
- # (None,-2) -> bilinear interpolate
- in_channels = self.in_channels
- protonets = ModuleList()
- for num_channels, kernel_size in zip(self.proto_channels,
- self.proto_kernel_sizes):
- if kernel_size > 0:
- layer = nn.Conv2d(
- in_channels,
- num_channels,
- kernel_size,
- padding=kernel_size // 2)
- else:
- if num_channels is None:
- layer = InterpolateModule(
- scale_factor=-kernel_size,
- mode='bilinear',
- align_corners=False)
- else:
- layer = nn.ConvTranspose2d(
- in_channels,
- num_channels,
- -kernel_size,
- padding=kernel_size // 2)
- protonets.append(layer)
- protonets.append(nn.ReLU(inplace=True))
- in_channels = num_channels if num_channels is not None \
- else in_channels
- if not self.include_last_relu:
- protonets = protonets[:-1]
- return nn.Sequential(*protonets)
-
- def forward_dummy(self, x):
- prototypes = self.protonet(x)
- return prototypes
-
- def forward(self, x, coeff_pred, bboxes, img_meta, sampling_results=None):
- """Forward feature from the upstream network to get prototypes and
- linearly combine the prototypes, using masks coefficients, into
- instance masks. Finally, crop the instance masks with given bboxes.
-
- Args:
- x (Tensor): Feature from the upstream network, which is
- a 4D-tensor.
- coeff_pred (list[Tensor]): Mask coefficients for each scale
- level with shape (N, num_anchors * num_protos, H, W).
- bboxes (list[Tensor]): Box used for cropping with shape
- (N, num_anchors * 4, H, W). During training, they are
- ground truth boxes. During testing, they are predicted
- boxes.
- img_meta (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- sampling_results (List[:obj:``SamplingResult``]): Sampler results
- for each image.
-
- Returns:
- list[Tensor]: Predicted instance segmentation masks.
- """
- prototypes = self.protonet(x)
- prototypes = prototypes.permute(0, 2, 3, 1).contiguous()
-
- num_imgs = x.size(0)
-
- # The reason for not using self.training is that
- # val workflow will have a dimension mismatch error.
- # Note that this writing method is very tricky.
- # Fix https://github.com/open-mmlab/mmdetection/issues/5978
- is_train_or_val_workflow = (coeff_pred[0].dim() == 4)
-
- # Train or val workflow
- if is_train_or_val_workflow:
- coeff_pred_list = []
- for coeff_pred_per_level in coeff_pred:
- coeff_pred_per_level = \
- coeff_pred_per_level.permute(
- 0, 2, 3, 1).reshape(num_imgs, -1, self.num_protos)
- coeff_pred_list.append(coeff_pred_per_level)
- coeff_pred = torch.cat(coeff_pred_list, dim=1)
-
- mask_pred_list = []
- for idx in range(num_imgs):
- cur_prototypes = prototypes[idx]
- cur_coeff_pred = coeff_pred[idx]
- cur_bboxes = bboxes[idx]
- cur_img_meta = img_meta[idx]
-
- # Testing state
- if not is_train_or_val_workflow:
- bboxes_for_cropping = cur_bboxes
- else:
- cur_sampling_results = sampling_results[idx]
- pos_assigned_gt_inds = \
- cur_sampling_results.pos_assigned_gt_inds
- bboxes_for_cropping = cur_bboxes[pos_assigned_gt_inds].clone()
- pos_inds = cur_sampling_results.pos_inds
- cur_coeff_pred = cur_coeff_pred[pos_inds]
-
- # Linearly combine the prototypes with the mask coefficients
- mask_pred = cur_prototypes @ cur_coeff_pred.t()
- mask_pred = torch.sigmoid(mask_pred)
-
- h, w = cur_img_meta['img_shape'][:2]
- bboxes_for_cropping[:, 0] /= w
- bboxes_for_cropping[:, 1] /= h
- bboxes_for_cropping[:, 2] /= w
- bboxes_for_cropping[:, 3] /= h
-
- mask_pred = self.crop(mask_pred, bboxes_for_cropping)
- mask_pred = mask_pred.permute(2, 0, 1).contiguous()
- mask_pred_list.append(mask_pred)
- return mask_pred_list
-
- @force_fp32(apply_to=('mask_pred', ))
- def loss(self, mask_pred, gt_masks, gt_bboxes, img_meta, sampling_results):
- """Compute loss of the head.
-
- Args:
- mask_pred (list[Tensor]): Predicted prototypes with shape
- (num_classes, H, W).
- gt_masks (list[Tensor]): Ground truth masks for each image with
- the same shape of the input image.
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- img_meta (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- sampling_results (List[:obj:``SamplingResult``]): Sampler results
- for each image.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- loss_mask = []
- num_imgs = len(mask_pred)
- total_pos = 0
- for idx in range(num_imgs):
- cur_mask_pred = mask_pred[idx]
- cur_gt_masks = gt_masks[idx].float()
- cur_gt_bboxes = gt_bboxes[idx]
- cur_img_meta = img_meta[idx]
- cur_sampling_results = sampling_results[idx]
-
- pos_assigned_gt_inds = cur_sampling_results.pos_assigned_gt_inds
- num_pos = pos_assigned_gt_inds.size(0)
- # Since we're producing (near) full image masks,
- # it'd take too much vram to backprop on every single mask.
- # Thus we select only a subset.
- if num_pos > self.max_masks_to_train:
- perm = torch.randperm(num_pos)
- select = perm[:self.max_masks_to_train]
- cur_mask_pred = cur_mask_pred[select]
- pos_assigned_gt_inds = pos_assigned_gt_inds[select]
- num_pos = self.max_masks_to_train
- total_pos += num_pos
-
- gt_bboxes_for_reweight = cur_gt_bboxes[pos_assigned_gt_inds]
-
- mask_targets = self.get_targets(cur_mask_pred, cur_gt_masks,
- pos_assigned_gt_inds)
- if num_pos == 0:
- loss = cur_mask_pred.sum() * 0.
- elif mask_targets is None:
- loss = F.binary_cross_entropy(cur_mask_pred,
- torch.zeros_like(cur_mask_pred),
- torch.zeros_like(cur_mask_pred))
- else:
- cur_mask_pred = torch.clamp(cur_mask_pred, 0, 1)
- loss = F.binary_cross_entropy(
- cur_mask_pred, mask_targets,
- reduction='none') * self.loss_mask_weight
-
- h, w = cur_img_meta['img_shape'][:2]
- gt_bboxes_width = (gt_bboxes_for_reweight[:, 2] -
- gt_bboxes_for_reweight[:, 0]) / w
- gt_bboxes_height = (gt_bboxes_for_reweight[:, 3] -
- gt_bboxes_for_reweight[:, 1]) / h
- loss = loss.mean(dim=(1,
- 2)) / gt_bboxes_width / gt_bboxes_height
- loss = torch.sum(loss)
- loss_mask.append(loss)
-
- if total_pos == 0:
- total_pos += 1 # avoid nan
- loss_mask = [x / total_pos for x in loss_mask]
-
- return dict(loss_mask=loss_mask)
-
- def get_targets(self, mask_pred, gt_masks, pos_assigned_gt_inds):
- """Compute instance segmentation targets for each image.
-
- Args:
- mask_pred (Tensor): Predicted prototypes with shape
- (num_classes, H, W).
- gt_masks (Tensor): Ground truth masks for each image with
- the same shape of the input image.
- pos_assigned_gt_inds (Tensor): GT indices of the corresponding
- positive samples.
- Returns:
- Tensor: Instance segmentation targets with shape
- (num_instances, H, W).
- """
- if gt_masks.size(0) == 0:
- return None
- mask_h, mask_w = mask_pred.shape[-2:]
- gt_masks = F.interpolate(
- gt_masks.unsqueeze(0), (mask_h, mask_w),
- mode='bilinear',
- align_corners=False).squeeze(0)
- gt_masks = gt_masks.gt(0.5).float()
- mask_targets = gt_masks[pos_assigned_gt_inds]
- return mask_targets
-
- def get_seg_masks(self, mask_pred, label_pred, img_meta, rescale):
- """Resize, binarize, and format the instance mask predictions.
-
- Args:
- mask_pred (Tensor): shape (N, H, W).
- label_pred (Tensor): shape (N, ).
- img_meta (dict): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- rescale (bool): If rescale is False, then returned masks will
- fit the scale of imgs[0].
- Returns:
- list[ndarray]: Mask predictions grouped by their predicted classes.
- """
- ori_shape = img_meta['ori_shape']
- scale_factor = img_meta['scale_factor']
- if rescale:
- img_h, img_w = ori_shape[:2]
- else:
- img_h = np.round(ori_shape[0] * scale_factor[1]).astype(np.int32)
- img_w = np.round(ori_shape[1] * scale_factor[0]).astype(np.int32)
-
- cls_segms = [[] for _ in range(self.num_classes)]
- if mask_pred.size(0) == 0:
- return cls_segms
-
- mask_pred = F.interpolate(
- mask_pred.unsqueeze(0), (img_h, img_w),
- mode='bilinear',
- align_corners=False).squeeze(0) > 0.5
- mask_pred = mask_pred.cpu().numpy().astype(np.uint8)
-
- for m, l in zip(mask_pred, label_pred):
- cls_segms[l].append(m)
- return cls_segms
-
- def crop(self, masks, boxes, padding=1):
- """Crop predicted masks by zeroing out everything not in the predicted
- bbox.
-
- Args:
- masks (Tensor): shape [H, W, N].
- boxes (Tensor): bbox coords in relative point form with
- shape [N, 4].
-
- Return:
- Tensor: The cropped masks.
- """
- h, w, n = masks.size()
- x1, x2 = self.sanitize_coordinates(
- boxes[:, 0], boxes[:, 2], w, padding, cast=False)
- y1, y2 = self.sanitize_coordinates(
- boxes[:, 1], boxes[:, 3], h, padding, cast=False)
-
- rows = torch.arange(
- w, device=masks.device, dtype=x1.dtype).view(1, -1,
- 1).expand(h, w, n)
- cols = torch.arange(
- h, device=masks.device, dtype=x1.dtype).view(-1, 1,
- 1).expand(h, w, n)
-
- masks_left = rows >= x1.view(1, 1, -1)
- masks_right = rows < x2.view(1, 1, -1)
- masks_up = cols >= y1.view(1, 1, -1)
- masks_down = cols < y2.view(1, 1, -1)
-
- crop_mask = masks_left * masks_right * masks_up * masks_down
-
- return masks * crop_mask.float()
-
- def sanitize_coordinates(self, x1, x2, img_size, padding=0, cast=True):
- """Sanitizes the input coordinates so that x1 < x2, x1 != x2, x1 >= 0,
- and x2 <= image_size. Also converts from relative to absolute
- coordinates and casts the results to long tensors.
-
- Warning: this does things in-place behind the scenes so
- copy if necessary.
-
- Args:
- x1 (Tensor): shape (N, ).
- x2 (Tensor): shape (N, ).
- img_size (int): Size of the input image.
- padding (int): x1 >= padding, x2 <= image_size-padding.
- cast (bool): If cast is false, the result won't be cast to longs.
-
- Returns:
- tuple:
- x1 (Tensor): Sanitized x1.
- x2 (Tensor): Sanitized x2.
- """
- x1 = x1 * img_size
- x2 = x2 * img_size
- if cast:
- x1 = x1.long()
- x2 = x2.long()
- x1, x2 = torch.min(x1, x2), torch.max(x1, x2)  # take min/max of the original pair so swapped inputs are handled
- x1 = torch.clamp(x1 - padding, min=0)
- x2 = torch.clamp(x2 + padding, max=img_size)
- return x1, x2
-
- def simple_test(self,
- feats,
- det_bboxes,
- det_labels,
- det_coeffs,
- img_metas,
- rescale=False):
- """Test function without test-time augmentation.
-
- Args:
- feats (tuple[torch.Tensor]): Multi-level features from the
- upstream network, each is a 4D-tensor.
- det_bboxes (list[Tensor]): BBox results of each image. each
- element is (n, 5) tensor, where 5 represent
- (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1.
- det_labels (list[Tensor]): BBox results of each image. each
- element is (n, ) tensor, each element represents the class
- label of the corresponding box.
- det_coeffs (list[Tensor]): BBox coefficient of each image. each
- element is (n, m) tensor, m is vector length.
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- rescale (bool, optional): Whether to rescale the results.
- Defaults to False.
-
- Returns:
- list[list]: encoded masks. The c-th item in the outer list
- corresponds to the c-th class. Given the c-th outer list, the
- i-th item in that inner list is the mask for the i-th box with
- class label c.
- """
- num_imgs = len(img_metas)
- scale_factors = tuple(meta['scale_factor'] for meta in img_metas)
- if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes):
- segm_results = [[[] for _ in range(self.num_classes)]
- for _ in range(num_imgs)]
- else:
- # if det_bboxes is rescaled to the original image size, we need to
- # rescale it back to the testing scale to obtain RoIs.
- if rescale and not isinstance(scale_factors[0], float):
- scale_factors = [
- torch.from_numpy(scale_factor).to(det_bboxes[0].device)
- for scale_factor in scale_factors
- ]
- _bboxes = [
- det_bboxes[i][:, :4] *
- scale_factors[i] if rescale else det_bboxes[i][:, :4]
- for i in range(len(det_bboxes))
- ]
- mask_preds = self.forward(feats[0], det_coeffs, _bboxes, img_metas)
- # apply mask post-processing to each image individually
- segm_results = []
- for i in range(num_imgs):
- if det_bboxes[i].shape[0] == 0:
- segm_results.append([[] for _ in range(self.num_classes)])
- else:
- segm_result = self.get_seg_masks(mask_preds[i],
- det_labels[i],
- img_metas[i], rescale)
- segm_results.append(segm_result)
- return segm_results
-
-
-class InterpolateModule(BaseModule):
- """This is a module version of F.interpolate.
-
- Any arguments you give it just get passed along for the ride.
- """
-
- def __init__(self, *args, init_cfg=None, **kwargs):
- super().__init__(init_cfg)
-
- self.args = args
- self.kwargs = kwargs
-
- def forward(self, x):
- """Forward features from the upstream network."""
- return F.interpolate(x, *self.args, **self.kwargs)
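The deleted YOLACTProtonet above assembles instance masks by linearly combining the prototype maps with per-box coefficients and then zeroing everything outside each box (see its forward and crop methods). The snippet below is a minimal, standalone PyTorch sketch of that idea; the function name, tensor shapes and the demo values are illustrative assumptions, not part of the removed module or of the mmdetection API.

import torch

def assemble_and_crop_masks(prototypes, coeffs, boxes):
    # prototypes: (H, W, P) protonet output for one image
    # coeffs:     (N, P) mask coefficients for N detected instances
    # boxes:      (N, 4) relative (x1, y1, x2, y2) boxes in [0, 1]
    h, w, _ = prototypes.shape

    # One mask per instance via a linear combination of prototypes: (H, W, N)
    masks = torch.sigmoid(prototypes @ coeffs.t())

    # Per-pixel coordinates, broadcastable against the N instances
    cols = torch.arange(w, device=masks.device).view(1, -1, 1)  # x
    rows = torch.arange(h, device=masks.device).view(-1, 1, 1)  # y

    x1 = (boxes[:, 0] * w).view(1, 1, -1)
    y1 = (boxes[:, 1] * h).view(1, 1, -1)
    x2 = (boxes[:, 2] * w).view(1, 1, -1)
    y2 = (boxes[:, 3] * h).view(1, 1, -1)

    # Zero out every pixel that falls outside its instance's box
    inside = (cols >= x1) & (cols < x2) & (rows >= y1) & (rows < y2)
    masks = masks * inside.float()

    return masks.permute(2, 0, 1).contiguous()  # (N, H, W)

if __name__ == '__main__':
    protos = torch.randn(34, 34, 32)                  # hypothetical prototype resolution
    coeffs = torch.randn(5, 32)                       # 5 instances, 32 prototypes
    boxes = torch.tensor([[0.1, 0.2, 0.6, 0.8]] * 5)  # relative boxes
    print(assemble_and_crop_masks(protos, coeffs, boxes).shape)  # torch.Size([5, 34, 34])

Keeping the crop as a broadcasted comparison, rather than slicing each box individually, is what lets the original head process all instances of an image in one batched, differentiable operation.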
diff --git a/spaces/rorallitri/biomedical-language-models/logs/360 Ransomware Decryption Tool 1.0.0.1271 [free _TOP_].md b/spaces/rorallitri/biomedical-language-models/logs/360 Ransomware Decryption Tool 1.0.0.1271 [free _TOP_].md
deleted file mode 100644
index 2dd4a7aa5aea295e796cbab0e11bb600ab9c3a8b..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/360 Ransomware Decryption Tool 1.0.0.1271 [free _TOP_].md
+++ /dev/null
@@ -1,156 +0,0 @@
-
-360 Ransomware Decryption Tool 1.0.0.1271 [Free] - How to Recover Your Encrypted Files Easily
-
-Ransomware is a type of malware that encrypts your files and demands a ransom to restore them. It can affect any Windows system and cause serious data loss and damage. Some of the most notorious ransomware variants are WannaCry, Petya, GoldenEye, CryptoLocker, CryptoWall, and more.
-
-If you are a victim of ransomware, you might think that you have no choice but to pay the ransom or lose your files forever. However, there is a free software that can help you get back your encrypted files without paying anything. It is called 360 Ransomware Decryption Tool 1.0.0.1271 [Free], and it is developed by Qihu 360 Software Co., a leading security company.
-360 Ransomware Decryption Tool 1.0.0.1271 [Free]
Download File ☑ https://tinurll.com/2uzlna
-
-360 Ransomware Decryption Tool 1.0.0.1271 [Free] is a powerful and easy-to-use tool that can decrypt files encrypted by more than 80 types of ransomware, including WannaCry, Petya, GoldenEye, CryptoLocker, CryptoWall, and more. It can also decrypt files encrypted by unknown or new ransomware variants, as long as they use the same encryption algorithm as the known ones.
-
-In this article, we will show you how to use 360 Ransomware Decryption Tool 1.0.0.1271 [Free] to recover your encrypted files easily and safely.
-
-How to Download 360 Ransomware Decryption Tool 1.0.0.1271 [Free]
-
-To download 360 Ransomware Decryption Tool 1.0.0.1271 [Free], you can visit one of the following links:
-
-
-
-After downloading the file, you need to run it and follow the instructions on the screen to install it on your computer.
-
-How to Use 360 Ransomware Decryption Tool 1.0.0.1271 [Free]
-
-To use 360 Ransomware Decryption Tool 1.0.0.1271 [Free], you need to follow these steps:
-
-
-- Launch the tool and select the folder that contains your encrypted files.
-- Choose a location where you want to save the decrypted files.
-- Click on \"Scan Now\" to start the decryption process.
-- Wait for the tool to scan and decrypt your files.
-- Check the results and verify that your files are restored.
-
-
-The tool will display the number of encrypted and decrypted files, as well as the elapsed time and the current file being scanned.
-
-If the tool cannot decrypt some of your files, it might be because they are encrypted by a new or unknown ransomware variant that uses a different encryption algorithm than the known ones. In that case, you can try other decryption tools or contact a professional data recovery service for help.
-
-Conclusion
-
-360 Ransomware Decryption Tool 1.0.0.1271 [Free] is a free software that can help you recover your encrypted files during a ransomware attack. It can decrypt files encrypted by more than 80 types of ransomware, including WannaCry, Petya, GoldenEye, CryptoLocker, CryptoWall, and more. It can also decrypt files encrypted by unknown or new ransomware variants, as long as they use the same encryption algorithm as the known ones.
-
-
-To use 360 Ransomware Decryption Tool 1.0.0.1271 [Free], you just need to download it from one of the links above and install it on your computer. Then, you need to select the folder that contains your encrypted files and choose a location where you want to save the decrypted files. Finally, you need to click on \"Scan Now\" to start the decryption process and wait for the tool to scan and decrypt your files.
-
-We hope that this article has been helpful and informative for you. If you have any questions or comments about 360 Ransomware Decryption Tool 1.0.0.1271 [Free] or its alternatives, please feel free to leave them below. Thank you for reading!
-How to Prevent Ransomware Attacks
-
-While 360 Ransomware Decryption Tool 1.0.0.1271 [Free] can help you recover your encrypted files after a ransomware attack, it is always better to prevent such attacks from happening in the first place. Here are some tips and best practices that can help you protect your computer and your data from ransomware:
-
-
-- Keep your Windows system and your antivirus software updated with the latest security patches and definitions.
-- Avoid opening suspicious or unsolicited email attachments or links, especially from unknown senders.
-- Backup your important files regularly to an external drive or a cloud service, so that you can restore them in case of a ransomware attack.
-- Use strong and unique passwords for your online accounts and enable two-factor authentication whenever possible.
-- Avoid visiting malicious or compromised websites or downloading pirated or cracked software or files.
-- Use a firewall and a VPN to secure your network and your online activities.
-- Educate yourself and your family members about the dangers and signs of ransomware and how to avoid them.
-
-
-By following these tips and best practices, you can reduce the risk of becoming a victim of ransomware and keep your computer and your data safe.
-
-Alternatives to 360 Ransomware Decryption Tool 1.0.0.1271 [Free]
-
-360 Ransomware Decryption Tool 1.0.0.1271 [Free] is not the only tool that can help you decrypt files encrypted by ransomware. There are other tools that can do the same job, depending on the type of ransomware that infected your computer. Here are some of the alternatives to 360 Ransomware Decryption Tool 1.0.0.1271 [Free] that you can try:
-
-
-- Emsisoft Decryptor: A collection of free tools that can decrypt files encrypted by various types of ransomware, such as CryptoLocker, CryptoWall, TeslaCrypt, Nemucod, Stampado, Apocalypse, and more.
-- No More Ransom: A project that aims to help victims of ransomware by providing free decryption tools and advice on how to prevent ransomware attacks. It is supported by various law enforcement agencies and security companies.
-- ID Ransomware: A website that can help you identify the type of ransomware that infected your computer and provide you with a link to a possible decryption tool or solution.
-- Kaspersky Anti-Ransomware Tool: A free tool that can protect your computer from ransomware attacks by blocking malicious processes and preventing unauthorized file encryption.
-- McAfee Ransomware Recover: A free tool that can decrypt files encrypted by various types of ransomware, such as GandCrab, Shade, Crysis, TeslaCrypt, WildFire, and more.
-
-
-You can also contact a professional data recovery service or a security expert for help if none of the above tools work for you.
-Benefits of Using 360 Ransomware Decryption Tool 1.0.0.1271 [Free]
-
-Using 360 Ransomware Decryption Tool 1.0.0.1271 [Free] has many benefits for Windows users who have been affected by ransomware. Here are some of them:
-
-
-- It is free and easy to use, without requiring any technical skills or knowledge.
-- It can decrypt files encrypted by more than 80 types of ransomware, including some of the most widespread and dangerous ones.
-- It can decrypt files encrypted by unknown or new ransomware variants, as long as they use the same encryption algorithm as the known ones.
-- It can decrypt files on any Windows system, regardless of the version or edition.
-- It can decrypt files on any storage device, such as hard drives, flash drives, memory cards, etc.
-- It can decrypt files of any size and type, such as documents, photos, videos, music, etc.
-- It can decrypt files without causing any damage or loss to the original data.
-
-
-By using 360 Ransomware Decryption Tool 1.0.0.1271 [Free], you can recover your encrypted files easily and safely, without paying any ransom or losing any data.
-
-How to Contact 360 Ransomware Decryption Tool 1.0.0.1271 [Free] Support
-
-If you have any questions or issues regarding 360 Ransomware Decryption Tool 1.0.0.1271 [Free], you can contact the support team of Qihu 360 Software Co., the developer of the tool. Here are some ways to contact them:
-
-
-- You can visit their official website at https://www.360totalsecurity.com/en/ and find more information and resources about their products and services.
-- You can send them an email at support@360safe.com and describe your problem or inquiry in detail.
-- You can call them at +86-10-5878-1000 and speak to a customer service representative.
-- You can follow them on social media platforms such as Facebook, Twitter, YouTube, and Instagram and get the latest news and updates about their products and services.
-
-
-The support team of Qihu 360 Software Co. is available 24/7 and ready to assist you with any questions or issues regarding 360 Ransomware Decryption Tool 1.0.0.1271 [Free].
-Reviews and Testimonials of 360 Ransomware Decryption Tool 1.0.0.1271 [Free]
-
-360 Ransomware Decryption Tool 1.0.0.1271 [Free] has received many positive reviews and testimonials from users who have used it to recover their encrypted files during a ransomware attack. Here are some of them:
-
-
-\"I was infected by WannaCry ransomware and all my files were locked with a .WNCRY extension. I tried many tools but none of them worked. Then I found 360 Ransomware Decryption Tool and it decrypted all my files in minutes. Thank you so much for this amazing tool!\"
-- John, USA
-
-
-
-\"My computer was attacked by Petya ransomware and I was asked to pay $300 in Bitcoin to get my files back. I refused to pay and searched for a solution online. I came across 360 Ransomware Decryption Tool and decided to give it a try. It scanned my computer and decrypted all my files without any problem. I am so grateful for this tool!\"
-- Maria, Spain
-
-
-
-\"I was a victim of GoldenEye ransomware and I lost access to all my important documents and photos. I was desperate and hopeless until I found 360 Ransomware Decryption Tool. It was very easy to use and it restored all my files in no time. This tool is a lifesaver!\"
-- Ahmed, Egypt
-
-
-These are just some of the reviews and testimonials of 360 Ransomware Decryption Tool 1.0.0.1271 [Free]. You can find more on the official website of Qihu 360 Software Co., the developer of the tool.
-
-Download 360 Ransomware Decryption Tool 1.0.0.1271 [Free] Now
-
-If you are looking for a free and effective tool that can help you decrypt files encrypted by ransomware, you should download 360 Ransomware Decryption Tool 1.0.0.1271 [Free] now.
-
-With this tool, you can recover your encrypted files easily and safely, without paying any ransom or losing any data.
-
-You can download 360 Ransomware Decryption Tool 1.0.0.1271 [Free] from one of the links below:
-
-
-
-Don't let ransomware ruin your day or your data.
-
-Download 360 Ransomware Decryption Tool 1.0.0.1271 [Free] now and get back your encrypted files easily!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Cute CUT ? Video Editor Movie Maker V1.8.8 [Pro] [Latest].md b/spaces/rorallitri/biomedical-language-models/logs/Cute CUT ? Video Editor Movie Maker V1.8.8 [Pro] [Latest].md
deleted file mode 100644
index 9fd84d7c38a65139921e6a39ae3e84346883bba8..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Cute CUT ? Video Editor Movie Maker V1.8.8 [Pro] [Latest].md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-Cute CUT Pro APK is a functional offline video editor & movie maker software. You can download this application easily from the official website of its developers. Downloading it will take a few minutes, and then you have to install it on your computer or Android device, depending on where you are using it.
-Cute Cut Pro APK is a video editor & movie maker software that enables users to create various types of videos from files and other local sources on their computers. It is one of the best video editing tools, with numerous features that help users make or edit videos easily. It also allows users to add effects, adjust audio and crossfade it. Download Cute Cut Pro on Android/PC for free.!
-Cute CUT – Video Editor Movie Maker v1.8.8 [Pro] [Latest]
Download Zip ○ https://tinurll.com/2uznr6
-The next feature here is the Compatibility of our Cute CUT video editor application. You must be thinking about how someone can add Compatibility into a robust software feature, but mark my words; Compatibility is the real thing inside the Cute CUT Pro APK. Here, The Compatibility is shown in the way of video formats. In simple words, the app is compatible with almost all the video formats, including MP4, MKV, AVI, and more than six different types of media, which you can then edit inside the movie projects. These six media types are Video, Photo, SELF-DRAW, Text Notes, Music tracks, and your own voice segments. Let me know if there is any other such Android app with incredible convenience and many features together?
-
Very easy and simple video editing features. This is very powerful and editor among its kind. It produces high quality videos and movies through its best user friendly features.
It provides more utilities than a common video editor. It offers advanced Hollywood-style editing functionality.
-Cute Cut is an android video editor application, this is a professional video editor. You will get all the editing features and by adding more to it we have the cute cut pro apk. In the pro apk you can use the professional features of the application. These features are locked in the normal version but this is the mod apk file so you can access everything.
-If you want to create your own unique movie with a video editor cute cut app then download and install the latest version of this new app from our website for free. While installing the app from our website allow all permissions and also enable unknown sources.
-PowerDirector Gold APK is such a movie maker and video editor that lets you edit videos in 4K quality with amazing features. Most time, many mobile YouTubers are using this on their device for editing videos without getting any watermark.
-The original PowerDirector Gold latest version has been developed for Android by CyberLink. The modified version has been developed by an anonymous developer. He or she mod this video editor to giving users all the pro items and features.
-
-InShot - Featured by Google Play, top movie maker and HD pro video editor with music, helps you create video with ease, edit video for YouTube, Instagram, Tik Tok, IGTV, Facebook, Messenger, Whatsapp, Twitter etc.
-Features:
Video Trimmer & Video Cutter & Video Splitter
* Trim and cut video. Pro video trimmer & cutter and video crop app.
* Split videos into two parts, Multi-split videos into several clips.
* Crop video and Export it in HD quality. Easy-to-use free movie maker & vertical pro video editor for YouTube.
-Video Ratio & Video Background
* Fit your video in any Aspect Ratio. Easy-to-use instagram video editor and Tik Tok editor.
* Square video, No crop video maker and pro video editor app.
* Add different borders and no crop. Background color and video blur editor.
-Video Speed Control
* Adjust video speed with video filters and video effects. Fast/Slow motion full screen video maker and free video trimmer and movie maker app.
* Speed up videos or add slow motion.
-Video Cropper
* Crop video in any ratios. Powerful movie maker and pro video editor for YouTube, Instagram, Musical.ly, Tik Tok etc. Best video crop app and video editing app.
* Crop video to remove watermark or any unwanted part.
* Zoom in/out video.
-Easy to Share
* Custom video export resolution, HD pro video editor (1080P or 4K) , professional movie maker
* Share to all social apps YouTube, Instagram, IGTV, Facebook, Whatsapp, Tik Tok, etc.
-InShot is a powerful full screen video maker & video trimmer, best video editor with all features, free photo slideshow maker. It's great for cutting, trimming and splitting a long video into short video clips. The blur tool also helps blur background for your videos and photos. With InShot, you can easily add music to video, add text on video, flip & rotate video, merge video. Fast/Slow motion feature is super fun. InShot is a free HD full screen video editor and video cutter. You can crop video easily and export it without losing quality, and share your videos to Instagram, IGTV, Facebook, Whatsapp, YouTube, Twitter and Messenger by one click, or edit video with music and pic for Tik Tok.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Lks Matematika Kelas 6 Sd Semester 1.md b/spaces/rorallitri/biomedical-language-models/logs/Download Lks Matematika Kelas 6 Sd Semester 1.md
deleted file mode 100644
index f85efc42787b20f19e72dbb0dcac251dfb483297..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Download Lks Matematika Kelas 6 Sd Semester 1.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Download Lks Matematika Kelas 6 Sd Semester 1
Download --->>> https://tinurll.com/2uznXZ
-
-Lks Matematika Kelas 4 Sd Semester 1 1/1. Views 12 Downloads 0 File size 1KB. Report DMCA / Copyright. DOWNLOAD FILE. Recommend Stories ... 1fdad05405
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Gameshark V6 Psx Ps1 Iso 56.md b/spaces/rorallitri/biomedical-language-models/logs/Gameshark V6 Psx Ps1 Iso 56.md
deleted file mode 100644
index 8c529f5e0d5b08580610c3c30275a8a321bb1f44..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Gameshark V6 Psx Ps1 Iso 56.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-Gameshark V6 Psx Ps1 Iso 56: A Guide for Gamers
-If you are a fan of retro gaming, you might have heard of Gameshark V6, a cheat device that allows you to modify various aspects of your favorite games on PlayStation (PSX) and PlayStation One (PS1). With Gameshark V6, you can unlock hidden features, access secret levels, get unlimited lives, ammo, health, money, and more. But how do you use Gameshark V6 on PSX and PS1? And where can you download the ISO file for Gameshark V6? In this article, we will answer these questions and more.
-Gameshark V6 Psx Ps1 Iso 56
Download Zip ✦ https://tinurll.com/2uzlzp
-What is Gameshark V6?
-Gameshark V6 is a cheat device that works on PSX and PS1 consoles. It is a disc that contains a collection of cheat codes and homebrew utilities for various games. You can use Gameshark V6 to enhance your gaming experience and have more fun with your old games. Some of the features of Gameshark V6 are:
-
-- It supports hundreds of games across different genres and regions.
-- It has a user-friendly interface that lets you select the game and the cheats you want to activate.
-- It has a memory card manager that lets you backup, restore, and edit your save files.
-- It has an import player that lets you play games from other regions on your console.
-- It has a CD burning software that lets you create your own cheat discs.
-
-How to Use Gameshark V6 on PSX and PS1?
-To use Gameshark V6 on PSX and PS1, you will need a few things:
-
-- A PSX or PS1 console with a modchip installed or a softmodded PS2 console.
-- A blank CD-R or CD-RW disc.
-- A CD burning software such as ImgBurn or CDRWIN.
-- A Gameshark V6 ISO file. You can download it from here.
-
-Once you have these things, follow these steps:
-
-- Burn the Gameshark V6 ISO file to the blank disc using the CD burning software. Make sure to use the lowest speed possible and verify the disc after burning.
-- Insert the Gameshark V6 disc into your console and turn it on.
-- Select the game you want to play from the menu and press X.
-- Select the cheats you want to activate from the list and press X.
-- Press Start to launch the game with the cheats enabled.
-- Enjoy!
-
-What are the Benefits of Using Gameshark V6?
-Using Gameshark V6 on PSX and PS1 has many benefits for gamers. Some of them are:
-
-- You can play your old games in new ways and discover new things.
-- You can overcome difficult challenges and beat the game faster.
-- You can customize your game settings and preferences.
-- You can explore hidden content and easter eggs that are normally inaccessible.
-- You can extend the lifespan of your games and console.
-
-Conclusion
-Gameshark V6 is a cheat device that lets you modify various aspects of your favorite games on PSX and PS1. It is a disc that contains a collection of cheat codes and homebrew utilities for various games. You can use Gameshark V6 to enhance your gaming experience and have more fun with your old games. To use Gameshark V6 on PSX and PS1, you will need a modded console, a blank disc, a CD burning software, and a Gameshark V6 ISO file. You can download the ISO file from here. Using Gameshark V6 has many benefits for gamers, such as playing games in new ways, overcoming challenges, customizing settings, exploring hidden content, and extending the lifespan of games and console. If you are a fan of retro gaming, you should give Gameshark V6 a try!
-
-How to Download Gameshark V6 ISO File?
-One of the challenges of using Gameshark V6 on PSX and PS1 is finding and downloading the ISO file for the cheat device. The ISO file is a digital copy of the disc that contains the cheat codes and homebrew utilities. You will need the ISO file to burn it to a blank disc and use it on your console. However, finding and downloading the ISO file for Gameshark V6 is not easy, as it is not widely available online. You might encounter broken links, fake files, malware, or legal issues when trying to download the ISO file for Gameshark V6.
-Fortunately, we have found a reliable source where you can download the ISO file for Gameshark V6 safely and legally. You can download it from here. This is a verified link that leads to a trusted website that hosts the ISO file for Gameshark V6. You can download the ISO file for free and without any registration or survey. The download speed is fast and the file size is small. The ISO file is also compatible with most CD burning software and PSX and PS1 consoles.
-How to Burn Gameshark V6 ISO File to a Blank Disc?
-After downloading the ISO file for Gameshark V6, you will need to burn it to a blank disc using a CD burning software. This will create a physical copy of the cheat device that you can use on your console. To burn the ISO file to a blank disc, you will need a few things:
-
-- A computer with a CD burner.
-- A blank CD-R or CD-RW disc.
-- A CD burning software such as ImgBurn or CDRWIN.
-
-Once you have these things, follow these steps:
-
-- Insert the blank disc into your CD burner.
-- Launch the CD burning software and select the option to burn an image file.
-- Browse and select the ISO file for Gameshark V6 that you downloaded.
-- Choose the lowest speed possible and verify the disc after burning.
-- Eject the disc when done.
-
-You have now created a cheat disc for Gameshark V6 that you can use on your PSX and PS1 console.
-What are Some of the Games that You Can Play with Gameshark V6?
-Gameshark V6 supports hundreds of games across different genres and regions. You can use Gameshark V6 to play some of the most popular and classic games on PSX and PS1, such as:
-
-- Final Fantasy VII, VIII, IX: These are the legendary role-playing games that feature epic stories, memorable characters, and amazing graphics. You can use Gameshark V6 to get unlimited items, gil, materia, limit breaks, and more.
-- Metal Gear Solid: This is the stealth action game that revolutionized the genre with its cinematic presentation, immersive gameplay, and complex plot. You can use Gameshark V6 to get infinite ammo, health, stealth camo, and more.
-- Resident Evil 2, 3: These are the survival horror games that put you in the shoes of Leon S. Kennedy and Jill Valentine as they try to escape from a zombie-infested city. You can use Gameshark V6 to get unlimited ammo, health, herbs, weapons, and more.
-- Gran Turismo 2: This is the racing simulation game that offers realistic physics, stunning graphics, and a huge variety of cars and tracks. You can use Gameshark V6 to get unlimited money, licenses, cars, and more.
-- Tekken 3: This is the fighting game that features a diverse roster of characters, each with their own unique moves and combos. You can use Gameshark V6 to unlock all characters, modes, costumes, and more.
-
-What are Some of the Tips and Tricks for Using Gameshark V6?
-Using Gameshark V6 on PSX and PS1 is easy and fun, but there are some tips and tricks that you should know to make the most out of it. Here are some of them:
-
-- Always backup your save files before using Gameshark V6. Some cheats might corrupt your data or make your game unstable.
-- Do not activate too many cheats at once. Some cheats might conflict with each other or cause glitches in your game.
-- Do not use cheats that affect the game's difficulty or progression. Some cheats might make your game too easy or too hard, or prevent you from completing certain tasks or events.
-- Do not use cheats that alter the game's graphics or sound. Some cheats might cause graphical errors or sound distortion in your game.
-- Do not use cheats that are not compatible with your game's region or version. Some cheats might not work properly or cause problems in your game.
-
-Conclusion
-Gameshark V6 is a cheat device that lets you modify various aspects of your favorite games on PSX and PS1. It is a disc that contains a collection of cheat codes and homebrew utilities for various games. You can use Gameshark V6 to enhance your gaming experience and have more fun with your old games. To use Gameshark V6 on PSX and PS1, you will need a modded console, a blank disc, a CD burning software, and a Gameshark V6 ISO file. You can download the ISO file from here. Using Gameshark V6 has many benefits for gamers, such as playing games in new ways, overcoming challenges, customizing settings, exploring hidden content, and extending the lifespan of games and console. If you are a fan of retro gaming, you should give Gameshark V6 a try!
-What are Some of the Risks and Drawbacks of Using Gameshark V6?
-While using Gameshark V6 on PSX and PS1 can be fun and rewarding, it also comes with some risks and drawbacks that you should be aware of. Some of them are:
-
-- You might damage your console or disc if you burn the ISO file incorrectly or use a faulty disc.
-- You might void your warranty or violate the terms of service of your console if you use a modchip or a softmod.
-- You might face legal issues or penalties if you download or distribute the ISO file for Gameshark V6 without permission or authorization.
-- You might ruin your gaming experience or enjoyment if you use cheats that make the game too easy or too hard, or that alter the game's original design or intention.
-- You might lose your interest or motivation to play the game if you use cheats that skip important parts or events, or that spoil the story or outcome.
-
-Therefore, you should use Gameshark V6 on PSX and PS1 responsibly and moderately. You should also respect the rights and wishes of the game developers and publishers, and the gaming community. You should only use Gameshark V6 for personal and non-commercial purposes, and only for games that you own legally. You should also backup your save files and discs before using Gameshark V6, and follow the instructions and precautions carefully.
-Conclusion
-Gameshark V6 is a cheat device that lets you modify various aspects of your favorite games on PSX and PS1. It is a disc that contains a collection of cheat codes and homebrew utilities for various games. You can use Gameshark V6 to enhance your gaming experience and have more fun with your old games. To use Gameshark V6 on PSX and PS1, you will need a modded console, a blank disc, a CD burning software, and a Gameshark V6 ISO file. You can download the ISO file from here. Using Gameshark V6 has many benefits for gamers, such as playing games in new ways, overcoming challenges, customizing settings, exploring hidden content, and extending the lifespan of games and console. However, using Gameshark V6 also has some risks and drawbacks, such as damaging your console or disc, voiding your warranty or violating the terms of service, facing legal issues or penalties, ruining your gaming experience or enjoyment, and losing your interest or motivation to play the game. Therefore, you should use Gameshark V6 responsibly and moderately, respect the rights and wishes of the game developers and publishers and the gaming community, only use Gameshark V6 for personal and non-commercial purposes and for games that you own legally, backup your save files and discs before using Gameshark V6, and follow the instructions and precautions carefully. If you are a fan of retro gaming, you should give Gameshark V6 a try!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Harry Potter y la piedra filosofal epub El ebook que te har vivir la magia de Hogwarts.md b/spaces/rorallitri/biomedical-language-models/logs/Harry Potter y la piedra filosofal epub El ebook que te har vivir la magia de Hogwarts.md
deleted file mode 100644
index 79fc1d1df0eeafbe2dbccab0753d9c40fe2187ec..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Harry Potter y la piedra filosofal epub El ebook que te har vivir la magia de Hogwarts.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-Special 20th-anniversary edition of Harry Potter y la piedra filosofal. If your house is Gryffindor, this is the book for you. Wear your house colors with pride
The book contains extra material with information about each house: its history and notable students, which is sure to delight every witch and wizard.
-harry potter y la piedra filosofal epub
Download File ⭐ https://tinurll.com/2uzmZ2
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Johnny Rivers - Discography (1964-2008).27 A Journey Through the Decades with a Timeless Star.md b/spaces/rorallitri/biomedical-language-models/logs/Johnny Rivers - Discography (1964-2008).27 A Journey Through the Decades with a Timeless Star.md
deleted file mode 100644
index 73278e0a1c78db710dfed0d174627209a81bbdf9..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Johnny Rivers - Discography (1964-2008).27 A Journey Through the Decades with a Timeless Star.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Alibre Design Expert V12.0 Multilingual Incl Keygen And Patch
Download Zip > https://tinurll.com/2uzo12
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Lobiettivo goldratt italiano pdf 29 il libro che ha rivoluzionato la produzione industriale.md b/spaces/rorallitri/biomedical-language-models/logs/Lobiettivo goldratt italiano pdf 29 il libro che ha rivoluzionato la produzione industriale.md
deleted file mode 100644
index 614cc0090bb9c83ce785c72c5f84ac0f4dc41207..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Lobiettivo goldratt italiano pdf 29 il libro che ha rivoluzionato la produzione industriale.md
+++ /dev/null
@@ -1,6 +0,0 @@
-l'obiettivo goldratt italiano pdf 29
Download ✫✫✫ https://tinurll.com/2uzmGb
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/seanet.py b/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/seanet.py
deleted file mode 100644
index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000
--- a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/seanet.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import numpy as np
-import torch.nn as nn
-
-from .conv import StreamableConv1d, StreamableConvTranspose1d
-from .lstm import StreamableLSTM
-
-
-class SEANetResnetBlock(nn.Module):
- """Residual block from SEANet model.
-
- Args:
- dim (int): Dimension of the input/output.
- kernel_sizes (list): List of kernel sizes for the convolutions.
- dilations (list): List of dilations for the convolutions.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection.
- """
- def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1],
- activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False,
- pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True):
- super().__init__()
- assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations'
- act = getattr(nn, activation)
- hidden = dim // compress
- block = []
- for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)):
- in_chs = dim if i == 0 else hidden
- out_chs = dim if i == len(kernel_sizes) - 1 else hidden
- block += [
- act(**activation_params),
- StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation,
- norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- self.block = nn.Sequential(*block)
- self.shortcut: nn.Module
- if true_skip:
- self.shortcut = nn.Identity()
- else:
- self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode)
-
- def forward(self, x):
- return self.shortcut(x) + self.block(x)
-
-
-class SEANetEncoder(nn.Module):
- """SEANet encoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of
- upsampling ratios, so it applies the ratios in reverse order from the ones specified here,
- which must match the decoder order. We use the decoder order because some models may only employ the decoder.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
- last_kernel_size (int): Kernel size for the final convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the encoder, it corresponds to the N first blocks.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0):
- super().__init__()
- self.channels = channels
- self.dimension = dimension
- self.n_filters = n_filters
- self.ratios = list(reversed(ratios))
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid." \
- "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
-
- act = getattr(nn, activation)
- mult = 1
- model: tp.List[nn.Module] = [
- StreamableConv1d(channels, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Downsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- norm=block_norm, norm_params=norm_params,
- activation=activation, activation_params=activation_params,
- causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- # Add downsampling layers
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, mult * n_filters * 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- mult *= 2
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, dimension, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- self.model = nn.Sequential(*model)
-
- def forward(self, x):
- return self.model(x)
-
-
-class SEANetDecoder(nn.Module):
- """SEANet decoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- final_activation (str): Final activation function after all convolutions.
- final_activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
- last_kernel_size (int): Kernel size for the final convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers applied to the latent representation at the start of the decoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the decoder, it corresponds to the N last blocks.
- trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
- If equal to 1.0, it means that all the trimming is done at the right.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None,
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0):
- super().__init__()
- self.dimension = dimension
- self.channels = channels
- self.n_filters = n_filters
- self.ratios = ratios
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid." \
- "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
-
- act = getattr(nn, activation)
- mult = int(2 ** len(self.ratios))
- model: tp.List[nn.Module] = [
- StreamableConv1d(dimension, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- # Upsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm
- # Add upsampling layers
- model += [
- act(**activation_params),
- StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, trim_right_ratio=trim_right_ratio),
- ]
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- activation=activation, activation_params=activation_params,
- norm=block_norm, norm_params=norm_params, causal=causal,
- pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- mult //= 2
-
- # Add final layers
- model += [
- act(**activation_params),
- StreamableConv1d(n_filters, channels, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Add optional final activation to decoder (eg. tanh)
- if final_activation is not None:
- final_act = getattr(nn, final_activation)
- final_activation_params = final_activation_params or {}
- model += [
- final_act(**final_activation_params)
- ]
- self.model = nn.Sequential(*model)
-
- def forward(self, z):
- y = self.model(z)
- return y
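For orientation, here is a minimal round-trip sketch of the encoder/decoder pair deleted above. It is only a usage sketch: it assumes the audiocraft package (which provides these modules and their StreamableConv1d/StreamableLSTM dependencies) is installed, and the frame count quoted in the comments follows from the default ratios [8, 5, 4, 2], i.e. an overall hop length of 320.

```python
# Minimal sketch: round-trip a waveform through SEANetEncoder / SEANetDecoder.
# Assumes the audiocraft package providing these modules is installed.
import torch
from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder

encoder = SEANetEncoder(channels=1, dimension=128, ratios=[8, 5, 4, 2])
decoder = SEANetDecoder(channels=1, dimension=128, ratios=[8, 5, 4, 2])

wav = torch.randn(2, 1, 32000)   # (batch, channels, samples), ~2 s at 16 kHz
latent = encoder(wav)            # roughly (2, 128, 100): 32000 / (8*5*4*2) frames
recon = decoder(latent)          # back to roughly (2, 1, 32000) at audio rate
print(latent.shape, recon.shape)
```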
diff --git a/spaces/runa91/bite_gradio/src/graph_networks/losses_for_vertex_wise_predictions/process_stage12_results.py b/spaces/runa91/bite_gradio/src/graph_networks/losses_for_vertex_wise_predictions/process_stage12_results.py
deleted file mode 100644
index 224fbfa0b955a14c34c06d84a82ce6ca3ac8b348..0000000000000000000000000000000000000000
--- a/spaces/runa91/bite_gradio/src/graph_networks/losses_for_vertex_wise_predictions/process_stage12_results.py
+++ /dev/null
@@ -1,322 +0,0 @@
-
-# see also (laptop):
-# /home/nadine/Documents/PhD/icon_barc_project/AMT_ground_contact_studies/stages_1and2_together/evaluate_stages12_main_forstage2b_new.py
-#
-# python src/graph_networks/losses_for_vertex_wise_predictions/process_stage12_results.py
-#
-
-
-
-import numpy as np
-import os
-import sys
-import csv
-import shutil
-import pickle as pkl
-
-ROOT_path = '/home/nadine/Documents/PhD/icon_barc_project/AMT_ground_contact_studies/'
-ROOT_path_images = '/ps/scratch/nrueegg/new_projects/Animals/data/dog_datasets/Stanford_Dogs_Dataset/StanfordExtra_V12/StanExtV12_Images/'
-ROOT_amt_image_list = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/stanext_related_data/ground_contact_annotations/stages12together/amt_image_lists/'
-ROOT_OUT_PATH = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/stanext_related_data/ground_contact_annotations/stages12together/'
-
-
-root_path_stage1 = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/stanext_related_data/ground_contact_annotations/stage1/'
-root_path_stage2 = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/stanext_related_data/ground_contact_annotations/stage2/'
-
-csv_file_stage1_pilot = root_path_stage1 + 'stage1_pilot_Batch_4841525_batch_results.csv'
-csv_file_stage1_main = root_path_stage1 + 'stage1_main_stage1_Batch_4890079_batch_results.csv'
-csv_file_stage2_pilot = root_path_stage2 + 'stage2_pilot_DogStage2PilotResults.csv'
-csv_file_stage2_main = root_path_stage2 + 'stage2_main_Batch_4890110_batch_results.csv'
-
-full_amt_image_list = ROOT_amt_image_list + 'all_stanext_image_names_amt.txt'
-train_amt_image_list = ROOT_amt_image_list + 'all_stanext_image_names_train.txt'
-test_amt_image_list = ROOT_amt_image_list + 'all_stanext_image_names_test.txt'
-val_amt_image_list = ROOT_amt_image_list + 'all_stanext_image_names_val.txt'
-
-experiment_name = 'stage_2b_image_paths'
-AMT_images_root_path = 'https://dogvisground.s3.eu-central-1.amazonaws.com/StanExtV12_Images/' # n02085620-Chihuahua/n02085620_10074.jpg'
-# out_folder = '/home/nadine/Documents/PhD/icon_barc_project/AMT_ground_contact_studies/stage_2b/stage2b_html_and_csv_files/'
-# out_folder_imgs = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/stanext_related_data/ground_contact_annotations/stages12together/'
-# csv_out_path_pilot = out_folder + experiment_name + '_pilot_bs22.csv'
-# csv_out_path_main = out_folder + experiment_name + '_main_bs22.csv'
-
-
-
-
-
-pose_dict = {'1':'Standing still, all four paws fully on the ground',
- '2':'Standing still, at least one paw lifted (if you are in doubt if the paw is on the ground or not, choose this option)',
- '3':'Walking or trotting (walk, amble, pace, trot)',
- '4':'Running (only canter, gallup, run)',
- '5':'Sitting, symmetrical legs',
- '6':'Sitting, complicated pose (every sitting pose with asymmetrical leg position)',
- '7':'lying, symmetrical legs (and not lying on the side)',
- '8':'lying, complicated pose (every lying pose with asymmetrical leg position)',
- '9':'Jumping, not touching the ground',
- '10':'Jumping or about to jump, touching the ground',
- '11':'On hind legs (standing or walking or sitting)',
- '12':'Downward facing dog: standing on back legs/paws and bending over front leg',
- '13':'Other poses: being carried by a human, ...',
- '14':'I can not see the pose (please comment why: hidden, hairy, legs cut off, ...)'}
-
-pose_dict_abbrev = {'1':'standing_4paws',
- '2':'standing_fewpaws',
- '3':'walking',
- '4':'running',
- '5':'sitting_sym',
- '6':'sitting_comp',
- '7':'lying_sym',
- '8':'lying_comp',
- '9':'jumping_nottouching',
- '10':'jumping_touching',
- '11':'onhindlegs',
- '12':'downwardfacingdog',
- '13':'otherpose',
- '14':'cantsee'}
-
-def read_csv(csv_file):
- with open(csv_file,'r') as f:
- reader = csv.reader(f)
- headers = next(reader)
- row_list = [{h:x for (h,x) in zip(headers,row)} for row in reader]
- return row_list
-
-def add_stage2_to_result_dict(row_list_stage2, result_info_dict):
- for ind_worker in range(len(row_list_stage2)):
- # print('------------------------------------')
- image_names = row_list_stage2[ind_worker]['Input.images'].split(';')
- all_answers_comment = row_list_stage2[ind_worker]['Answer.submitComments'].split(';')
- all_answers_dogpose = row_list_stage2[ind_worker]['Answer.submitValues'].split(';')
- for ind in range(len(image_names)):
- if 'Qualification_Tutorial_Images' in image_names[ind]:
- # print('skip tutorial images')
- pass
- else:
- img_subf = image_names[ind].split('/')[-2]
- img_name = image_names[ind].split('/')[-1]
- img_name_key = img_subf + '/' + img_name
- img_path = ROOT_path_images + img_subf + '/' + img_name
- this_img = {'img_name': img_name,
- 'img_subf': img_subf,
- # 'img_path': img_path,
- 'pose': pose_dict_abbrev[all_answers_dogpose[ind]],
- 'comment_pose': all_answers_comment[ind]}
- assert not (img_name_key in result_info_dict.keys())
- result_info_dict[img_name_key] = this_img
- '''folder_name = pose_dict_abbrev[all_answers_dogpose[ind]]
- img_name_out = img_name # 'indw' + str(ind_worker) + '_' + img_name
- out_folder_this = out_folder_imgs + folder_name + '/' + img_name
- shutil.copyfile(img_path, out_folder_this + img_name_out)'''
-
-def add_stage1_to_result_dict(row_list_stage1, result_info_dict):
- for ind_worker in range(len(row_list_stage1)):
- # print('------------------------------------')
- image_names = row_list_stage1[ind_worker]['Input.images'].split(';')
- all_answers_commentvis = row_list_stage1[ind_worker]['Answer.submitCommentsVisible'].split(';')
- all_answers_vis = row_list_stage1[ind_worker]['Answer.submitValuesVisible'].split(';') # 1: visible, 2: not visible
- all_answers_commentground = row_list_stage1[ind_worker]['Answer.submitCommentsGround'].split(';')
- all_answers_ground = row_list_stage1[ind_worker]['Answer.submitValuesGround'].split(';') # 1: flat, 2: not flat
- for ind in range(len(image_names)):
- if len(image_names[ind].split('/')) < 2:
- print('no more image in ind_worker ' + str(ind_worker))
- elif 'Qualification_Tutorial_Images' in image_names[ind]:
- # print('skip tutorial images')
- pass
- else:
- img_subf = image_names[ind].split('/')[-2]
- img_name = image_names[ind].split('/')[-1]
- img_name_key = img_subf + '/' + img_name
- img_path = ROOT_path_images + img_subf + '/' + img_name
- if all_answers_vis[ind] == '1':
- vis = True
- elif all_answers_vis[ind] == '2':
- vis = False
- else:
- vis = None
- # raise ValueError
- if all_answers_ground[ind] == '1':
- flat = True
- elif all_answers_ground[ind] == '2':
- flat = False
- else:
- flat = None
- # raise ValueError
- if img_name_key in result_info_dict.keys():
- result_info_dict[img_name_key]['is_vis'] = vis
- result_info_dict[img_name_key]['comment_vis'] = all_answers_commentvis[ind]
- result_info_dict[img_name_key]['is_flat'] = flat
- result_info_dict[img_name_key]['comment_flat'] = all_answers_commentground[ind]
- else:
- print(img_path)
- this_img = {'img_name': img_name,
- 'img_subf': img_subf,
- # 'img_path': img_path,
- 'is_vis': vis,
- 'comment_vis': all_answers_commentvis[ind],
- 'is_flat': flat,
- 'comment_flat': all_answers_commentground[ind]}
- result_info_dict[img_name_key] = this_img
-
-
-
-# ------------------------------------------------------------------------------
-
-'''
-if not os.path.exists(out_folder_imgs): os.makedirs(out_folder_imgs)
-for folder_name in pose_dict_abbrev.values():
- out_folder_this = out_folder_imgs + folder_name
- if not os.path.exists(out_folder_this): os.makedirs(out_folder_this)
-'''
-
-
-
-row_list_stage2_pilot = read_csv(csv_file_stage2_pilot)
-row_list_stage1_pilot = read_csv(csv_file_stage1_pilot)
-row_list_stage2_main = read_csv(csv_file_stage2_main)
-row_list_stage1_main = read_csv(csv_file_stage1_main)
-
-result_info_dict = {}
-add_stage2_to_result_dict(row_list_stage2_pilot, result_info_dict)
-add_stage2_to_result_dict(row_list_stage2_main, result_info_dict)
-add_stage1_to_result_dict(row_list_stage1_pilot, result_info_dict)
-add_stage1_to_result_dict(row_list_stage1_main, result_info_dict)
-
-
-
-# initial image list: all_stanext_image_names_amt.txt
-# (/home/nadine/Documents/PhD/icon_barc_project/AMT_ground_contact_studies/all_stanext_image_names_amt.txt)
-# the initial image list first contained randomly shuffled {train + test}
-# images and after that randomly shuffled {val} images
-# see also /is/cluster/work/nrueegg/icon_pifu_related/ICON/lib/ground_contact/create_gc_dataset/get_stanext_images_for_amt.py
-# train and test: 6773 + 1703 = 8476
-# val: 4062
-with open(full_amt_image_list) as f: full_amt_lines = f.readlines()
-with open(train_amt_image_list) as f: train_amt_lines = f.readlines()
-with open(test_amt_image_list) as f: test_amt_lines = f.readlines()
-with open(val_amt_image_list) as f: val_amt_lines = f.readlines()
-
-for ind_l, line in enumerate(train_amt_lines):
- img_name_key = (line.split('/')[-2]) + '/' + (line.split('/')[-1]).split('\n')[0]
- result_info_dict[img_name_key]['split'] = 'train'
-for ind_l, line in enumerate(test_amt_lines):
- img_name_key = (line.split('/')[-2]) + '/' + (line.split('/')[-1]).split('\n')[0]
- result_info_dict[img_name_key]['split'] = 'test'
-for ind_l, line in enumerate(val_amt_lines):
- img_name_key = (line.split('/')[-2]) + '/' + (line.split('/')[-1]).split('\n')[0]
- result_info_dict[img_name_key]['split'] = 'val'
-
-
-# we have stage 2b labels for:
-# constraint_vis = (res['is_vis'] in {True, None})
-# constraint_flat = (res['is_flat'] in {True, None})
-# constraint_pose = (res['pose'] in {'standing_fewpaws', 'walking', 'running', })
-# we have stage 3 labels for:
-# constraint_vis = (res['is_vis'] in {True, None})
-# constraint_flat = (res['is_flat'] in {True, None})
-# constraint_pose = (res['pose'] in {'sitting_sym', 'sitting_comp', 'lying_sym', 'lying_comp', 'downwardfacingdog', 'otherpose', 'jumping_touching', 'onhindlegs'})
-# we have no labels for:
-# constraint_pose = (res['pose'] in {'standing_4paws', 'jumping_nottouching', 'cantsee'})
-
-
-with open(ROOT_OUT_PATH + 'gc_annots_categories_stages12_complete.pkl', 'wb') as fp:
- pkl.dump(result_info_dict, fp)
-
-
-import pdb; pdb.set_trace()
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-# -------------------------------------------------------------------------------------------------
-
-'''
-# sort the result images.
-all_pose_names = [*pose_dict_abbrev.values()]
-split_list = ['train', 'test', 'val', 'traintest']
-split_list_dict = {}
-for split in split_list:
- nimgs_pose_dict = {}
- for pose_name in all_pose_names:
- nimgs_pose_dict[pose_name] = 0
- images_to_label = []
- for ind_l, line in enumerate(full_amt_lines):
- img_name = (line.split('/')[-1]).split('\n')[0]
- res = result_info_dict[img_name]
- if split == 'traintest':
- constraint_split = (res['split'] == 'train') or (res['split'] == 'test')
- else:
- constraint_split = (res['split'] == split) # (res['split'] == 'train')
- constraint_vis = (res['is_vis'] in {True, None})
- constraint_flat = (res['is_flat'] in {True, None})
- # constraint_pose = (res['pose'] in {'sitting_sym', 'sitting_comp', 'lying_sym', 'lying_comp', 'downwardfacingdog', 'otherpose', 'jumping_touching', 'onhindlegs'})
- constraint_pose = (res['pose'] in {'standing_fewpaws', 'walking', 'running', })
-
- if constraint_split * constraint_vis * constraint_flat == True:
- nimgs_pose_dict[res['pose']] += 1
- if constraint_pose:
- images_to_label.append(line)
- folder_name = 'imgsforstage2b_' + split # 'imgsforstage3_train'
- out_folder_this = out_folder_imgs + folder_name + '/'
- if not os.path.exists(out_folder_this): os.makedirs(out_folder_this)
- shutil.copyfile(res['img_path'], out_folder_this + img_name)
- print('------------------------------------------------------')
- print(split)
- print(nimgs_pose_dict)
- print(len(images_to_label))
- split_list_dict[split] = {'nimgs_pose_dict': nimgs_pose_dict,
- 'len(images_to_label)': len(images_to_label),
- 'images_to_label': images_to_label}
-
-
-
-# create csv files:
-traintest_list = split_list_dict['traintest']['images_to_label']
-val_list = split_list_dict['val']['images_to_label']
-complete_list = traintest_list + val_list
-
-all_lines_refined = []
-for line in complete_list:
- all_lines_refined.append(line.split('\n')[0])
-
-import pdb; pdb.set_trace()
-'''
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
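As a consumer-side sketch, the pickle written above could be loaded and filtered with the stage-2b criteria spelled out in the script's comments. The output path and dictionary layout below mirror the code above, but treat them as assumptions rather than a documented interface.

```python
# Sketch: load the annotation dictionary produced above and apply the
# stage-2b selection criteria described in the comments (paths are assumptions).
import pickle as pkl

with open('gc_annots_categories_stages12_complete.pkl', 'rb') as fp:
    result_info_dict = pkl.load(fp)

stage2b_poses = {'standing_fewpaws', 'walking', 'running'}
stage2b_keys = [
    key for key, res in result_info_dict.items()
    if res.get('is_vis') in {True, None}
    and res.get('is_flat') in {True, None}
    and res.get('pose') in stage2b_poses
]
print(len(stage2b_keys), 'images selected for stage 2b labelling')
```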
diff --git a/spaces/ryancahildebrandt/all_in_one_sentence_embeddings/app.py b/spaces/ryancahildebrandt/all_in_one_sentence_embeddings/app.py
deleted file mode 100644
index d850391be45c87d7801c4ce958cca5fbd47dbe80..0000000000000000000000000000000000000000
--- a/spaces/ryancahildebrandt/all_in_one_sentence_embeddings/app.py
+++ /dev/null
@@ -1,242 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Sat Jul 16 08:33:29 PM EDT 2022
-author: Ryan Hildebrandt, github.com/ryancahildebrandt
-"""
-# imports
-import pandas as pd
-import random
-import streamlit as st
-
-from cluster import *
-from dimredux import *
-from embeddings import *
-from eval import *
-from prep import *
-from readin import *
-from viz import *
-
-st.set_page_config(layout="wide")
-random.seed(42)
-
-st.title("All* in One Sentence Embeddings")
-with st.expander("Notes", False):
- st.markdown("""
-This tool brings together a wide range of preprocessing, embedding, clustering, and dimensionality reduction techniques into one place, to make comparison of approaches quick and easy. The hope is that this will be useful to people working with natural language data in a range of fields, by removing a significant amount of work in the early stages of processing a dataset.
-
-**A couple notes on the following sections and their design/usage:**
-
-- This tool is aimed primarily at users who have real-world data and real-world problems to solve. Many of the below notes stem from this choice, and while it may serve as a useful jumping off point for demonstrating differences in clustering/embedding/dimensionality reduction techniques, the *theory* behind those techniques is secondary here.
-- That being said, this app is not a substitute for a working understanding of the techniques used. Not every clustering algorithm is appropriate for every data type or every task, and explaining the right approach for the task in front of you is way beyond the scope of this project.
-- The visualization included at the bottom of the page will be most useful for higher dimensional embedding models such as the SentenceTransformers and USE options. In all cases, however, the primary evaluation of embedding and clustering effectiveness should be done via the "Clustered Data" table at the top of the page.
-- This tool does not encompass all possible arguments across all of the individual functions in the pipeline. Fine tuning of model parameters should be done in a separate environment, but as much as possible the parameters which should be the most impactful are included.
-- Because of the range of possible metrics and algorithms here, it is expected that some of the many, many possible combinations will be incompatible.
-
-Additionally, this app was created in parallel with a static report published via Datapane, available [here](https://datapane.com/reports/v7JNmd7/the-ex-academics-sentence-embedding-guide/). The report goes into some detail on the conceptual underpinnings and usage of the techniques discussed here.
-""")
-
-#sidebar
-with st.sidebar:
- st.title("Pipeline Configuration")
-
-##data import
- with st.expander("Data Selection", True):
- source = st.radio("Source", ("Default","File Upload","Paste Text","Scikit-Learn Dataset"))
- head = st.selectbox("Head", ("All", 50, 100, 500, 1000), help = "Use first n rows of selected data")
-
- if source == "File Upload":
- upl = st.file_uploader("Upload a CSV")
- if upl:
- text = list(pd.read_csv(upl))
- else :
- text = default
- if source == "Paste Text":
- delim = st.text_input("Delimiter", ",", help = "Delimiting character for pasted text")
- inp = st.text_area("Paste Text Here", "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair")
- text = inp.split(delim)
- if source == "Scikit-Learn Dataset":
- cat = st.multiselect("Dataset Selection", ['alt.atheism','comp.graphics','comp.os.ms-windows.misc','comp.sys.ibm.pc.hardware','comp.sys.mac.hardware','comp.windows.x','misc.forsale','rec.autos','rec.motorcycles','rec.sport.baseball','rec.sport.hockey','sci.crypt','sci.electronics','sci.med','sci.space','soc.religion.christian','talk.politics.guns','talk.politics.mideast','talk.politics.misc','talk.religion.misc'], default = "alt.atheism", help = "Text datasets provided via sklearn.datasets.fetch_20newsgroups()")
- text = fetch_20newsgroups(categories = cat)["data"]
- if source == "Default":
- text = default
- if head != "All":
- text = text[:head]
-
-## preprocessing
- with st.expander("Text Preprocessing", False):
- prep = text
- pre = st.multiselect("Preprocessing Steps", ["Lowercase","Punctuation","Stopwords","Lemmatization","Stemming","Spelling","Clause Separation"], help = "Preprocessing steps to apply to provided data")
- prep_load()
-
- if "Lowercase" in pre:
- prep = prep_lower(prep)
- if "Punctuation" in pre:
- prep = prep_punct(prep)
- if "Stopwords" in pre:
- prep = prep_stop(prep)
- if "Lemmatization" in pre:
- prep = prep_lemma(prep)
- if "Stemming" in pre:
- prep = prep_stem(prep)
- if "Spelling" in pre:
- prep = prep_spell(prep)
- if "Clause Separation" in pre:
- clause_reg_box = st.text_input("clause sep regex", clause_reg, help = "Regex defining separation of clauses within each sentence/line")
- clause_word_box = st.text_input("clause sep words", clause_words, help = "Words indicating a clause boundary")
- clause_sep = f"{clause_reg}{' | '.join(clause_words)}".replace("] ", "]")
- prep = prep_clause(prep)
-
-## embeddings
- with st.expander("Sentence Embeddings", False):
- mod = st.selectbox("Embedding Algorithm", ("tf-idf","Hash","Count","SentenceTransformers Model","Universal Sentence Encoder"), help = "Algorithm used for sentence embeddings; preprocessing steps may be duplicated between the options above and the following models")
-
- if mod == "tf-idf":
- ng = st.slider("ngram_range", 1, 5, help = "Break sentences into chunks ranging in length from 1 to n. This may add some contextual information in the embeddings for bag-of-words based algorithms")
- emb = model_tfidf(prep, (1,ng))
- if mod == "Hash":
- ng = st.slider("ngram_range", 1, 5, help = "Break sentences into chunks ranging in length from 1 to n. This may add some contextual information in the embeddings for bag-of-words based algorithms")
- emb = model_hash(prep, (1,ng))
- if mod == "Count":
- ng = st.slider("ngram_range", 1, 5, help = "Break sentences into chunks ranging in length from 1 to n. This may add some contextual information in the embeddings for bag-of-words based algorithms")
- emb = model_count(prep, (1,ng))
- if mod == "SentenceTransformers Model":
- st_mod = st.selectbox("st model selection", st_available_models, help = "Pretrained models available through the SentenceTransformers library and HuggingFace.co")
- emb = model_snt(prep, st_mod)
- if mod == "Universal Sentence Encoder":
- emb = model_use(prep)
-
-## clustering
- with st.expander("Sentence Clustering", False):
- clu = st.selectbox("Clustering Algorithm", ("Affinity Propagation","Agglomerative Clustering","Birch","DBSCAN","HDBSCAN","KMeans","Mini Batch KMeans","Mean Shift","OPTICS","Spectral Clustering"), help = "Algorithm to use to group similar datapoints together")
-
- if clu == "Affinity Propagation":
- cl = cluster_affinity(emb)
- if clu == "Agglomerative Clustering":
- aff = st.radio("affinity", ("euclidean", "l1", "l2", "manhattan", "cosine"), help = "Metric used to compute the linkage. Can be “euclidean”, “l1”, “l2”, “manhattan”, “cosine”, or “precomputed”. If linkage is “ward”, only “euclidean” is accepted")
- ncl = st.slider("n_clusters", 1, 20, 10, help = "The number of clusters to find")
- lnk = st.radio("linkage", ("ward", "complete", "average", "single"), help = "Which linkage criterion to use. The linkage criterion determines which distance to use between sets of observation. The algorithm will merge the pairs of cluster that minimize this criterion. ‘ward’ minimizes the variance of the clusters being merged. ‘average’ uses the average of the distances of each observation of the two sets. ‘complete’ or ‘maximum’ linkage uses the maximum distances between all observations of the two sets. ‘single’ uses the minimum of the distances between all observations of the two sets.")
- cl = cluster_agglom(emb, ncl, aff, lnk)
- if clu == "Birch":
- bf = st.slider("branching factor", 0, 100, 50, help = "Maximum number of CF subclusters in each node. If a new samples enters such that the number of subclusters exceed the branching_factor then that node is split into two nodes with the subclusters redistributed in each. The parent subcluster of that node is removed and two new subclusters are added as parents of the 2 split nodes.")
- ncl = st.slider("n_clusters", 1, 20, 10, help = "The number of clusters to find")
- cl = cluster_birch(emb, bf, ncl)
- if clu == "DBSCAN":
- mtrc = st.selectbox("metric", metrics_list, help = "The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by sklearn.metrics.pairwise_distances for its metric parameter. If metric is “precomputed”, X is assumed to be a distance matrix and must be square. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors for DBSCAN.")
- eps_cl = st.slider("eps", 0.0, 2.0, step = .001, value = 1.0, help = "The maximum distance between two samples for one to be considered as in the neighborhood of the other. This is not a maximum bound on the distances of points within a cluster. This is the most important DBSCAN parameter to choose appropriately for your data set and distance function.")
- mins = st.slider("min samples", 1, 20, 5, help = "The number of samples (or total weight) in a neighborhood for a point to be considered as a core point. This includes the point itself.")
- cl = cluster_dbscan(emb, eps_cl, mins, mtrc)
- if clu == "HDBSCAN":
- alp = st.slider("alpha", 0.0, 2.0, step = .001, value = 1.0, help = "A distance scaling parameter as used in robust single linkage")
- mtrc = st.selectbox("metric", metrics_list, help = "The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by metrics.pairwise.pairwise_distances for its metric parameter.")
- mincl = st.slider("min cluster size", 1, 20, 5, help = "The minimum size of clusters; single linkage splits that contain fewer points than this will be considered points “falling out” of a cluster rather than a cluster splitting into two new clusters.")
- cl = cluster_hdbscan(emb, alp, mtrc, mincl)
- if clu == "KMeans":
- alg = st.radio("algorithm", ("elkan", "lloyd"), help = "K-means algorithm to use. The classical EM-style algorithm is 'lloyd'. The 'elkan' variation can be more efficient on some datasets with well-defined clusters, by using the triangle inequality. However it’s more memory intensive due to the allocation of an extra array of shape (n_samples, n_clusters).")
- ncl = st.slider("n_clusters", 1, 20, 10, help = "The number of clusters to find")
- cl = cluster_kmeans(emb, ncl, alg)
- if clu == "Mini Batch KMeans":
- ncl = st.slider("n_clusters", 1, 20, 10, help = "The number of clusters to find")
- cl = cluster_minikmeans(emb, ncl)
- if clu == "Mean Shift":
- sdng = st.radio("bin_seeding", (True, False), help = "If true, initial kernel locations are not locations of all points, but rather the location of the discretized version of points, where points are binned onto a grid whose coarseness corresponds to the bandwidth. Setting this option to True will speed up the algorithm because fewer seeds will be initialized.")
- cl_all = st.radio("cluster_all", (True, False), help = "If true, then all points are clustered, even those orphans that are not within any kernel. Orphans are assigned to the nearest kernel. If false, then orphans are given cluster label -1.")
- cl = cluster_meanshift(emb, sdng, cl_all)
- if clu == "OPTICS":
- mtrc = st.selectbox("metric", metrics_list, help = "Metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string.")
- mins = st.slider("min samples", 1, 20, 5, help = "The number of samples in a neighborhood for a point to be considered as a core point. Also, up and down steep regions can’t have more than min_samples consecutive non-steep points. Expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2).")
- mincl = st.slider("min cluster size", 1, 20, help = "Minimum number of samples in an OPTICS cluster, expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2). If None, the value of min_samples is used instead. Used only when cluster_method='xi'.")
- cl = cluster_optics(emb, mins, mtrc, mincl)
- if clu == "Spectral Clustering":
- aff = st.radio("affinity", ("nearest_neighbors", "rbf", "precomputed", "precomputed_nearest_neighbors", "additive_chi2", "chi2", "linear", "poly", "polynomial", "rbf", "laplacian", "sigmoid", "cosine"), help = "How to construct the affinity matrix. ‘nearest_neighbors’: construct the affinity matrix by computing a graph of nearest neighbors. ‘rbf’: construct the affinity matrix using a radial basis function (RBF) kernel. ‘precomputed’: interpret X as a precomputed affinity matrix, where larger values indicate greater similarity between instances. ‘precomputed_nearest_neighbors’: interpret X as a sparse graph of precomputed distances, and construct a binary affinity matrix from the n_neighbors nearest neighbors of each instance. One of the kernels supported by pairwise_kernels.")
- ncl = st.slider("n_clusters", 1, 20, 10, help = "The number of clusters to find")
- cl = cluster_spectral(emb, ncl, aff)
-
-## dimredux
- with st.expander("Dimensionality Reduction", False):
- dim = st.selectbox("Algorithm", ("t-SNE","Gaussian Random Projection","Sparse Random Projection","Factor Analysis","Fast ICA","Incremental PCA","Kernel PCA","Latent Dirichlet Allocation","Mini Batch Sparse PCA","NMF","PCA","Sparse PCA","Truncated SVD","UMAP"), help = "Algorithm to use to reduce embeddings in high dimensional vector space to 2 or 3 dimensions, useful for visualization")
-
- if dim == "t-SNE":
- mtrc = st.radio("metric", ("cityblock","cosine","euclidean","haversine","l1","l2","manhattan","nan_euclidean"), help = "The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. If metric is “precomputed”, X is assumed to be a distance matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them. The default is “euclidean” which is interpreted as squared euclidean distance.")
- mth = st.radio("method", ("barnes_hut", "exact"), help = "By default the gradient calculation algorithm uses Barnes-Hut approximation running in O(NlogN) time. method=’exact’ will run on the slower, but exact, algorithm in O(N^2) time. The exact algorithm should be used when nearest-neighbor errors need to be better than 3%. However, the exact method cannot scale to millions of examples.")
- d2,d3 = dim_tsne(emb, mtrc, mth)
- if dim == "Gaussian Random Projection":
- eps_dim = st.slider("eps", 0.0, 2.0, step = .001, value = 1.0, help = "Parameter to control the quality of the embedding according to the Johnson-Lindenstrauss lemma when n_components is set to ‘auto’. The value should be strictly positive. Smaller values lead to better embedding and higher number of dimensions (n_components) in the target projection space.")
- d2,d3 = dim_gaussrandom(emb, eps_dim)
- if dim == "Sparse Random Projection":
- eps_dim = st.slider("eps", 0.0, 2.0, step = .001, value = 1.0, help = "Parameter to control the quality of the embedding according to the Johnson-Lindenstrauss lemma when n_components is set to ‘auto’. This value should be strictly positive. Smaller values lead to better embedding and higher number of dimensions (n_components) in the target projection space.")
- d2,d3 = dim_sparserandom(emb, eps_dim)
- if dim == "Factor Analysis":
- mth = st.radio("svd_method", ("lapack", "randomized"), help = "Which SVD method to use. If ‘lapack’ use standard SVD from scipy.linalg, if ‘randomized’ use fast randomized_svd function. Defaults to ‘randomized’. For most applications ‘randomized’ will be sufficiently precise while providing significant speed gains. Accuracy can also be improved by setting higher values for iterated_power. If this is not sufficient, for maximum precision you should choose ‘lapack’.")
- d2,d3 = dim_factor(emb, mth)
- if dim == "Fast ICA":
- alg = st.radio("algorithm", ("parallel", "deflation"), help = "Apply parallel or deflational algorithm for FastICA.")
- d2,d3 = dim_fastica(emb, alg)
- if dim == "Incremental PCA":
- d2,d3 = dim_ipca(emb)
- if dim == "Kernel PCA":
- krnl = st.radio("kernel", ("linear","poly","rbf","sigmoid","cosine"), help = "Kernel used for PCA.")
- d2,d3 = dim_kpca(emb, krnl)
- if dim == "Latent Dirichlet Allocation":
- d2,d3 = dim_lda(emb)
- if dim == "Mini Batch Sparse PCA":
- mth = st.radio("method", ("lars", "cd"), help = "Method to be used for optimization. lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path) cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.")
- d2,d3 = dim_minibatchspca(emb, mth)
- if dim == "NMF":
- nmf_init = st.radio("init", ("random", "nndsvd", "nndsvda", "nndsvdar"), help = "Method used to initialize the procedure. Default: None. Valid options: None: ‘nndsvda’ if n_components <= min(n_samples, n_features), otherwise random. 'random': non-negative random matrices, scaled with: sqrt(X.mean() / n_components). 'nndsvd': Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness). 'nndsvda': NNDSVD with zeros filled with the average of X (better when sparsity is not desired). 'nndsvdar' NNDSVD with zeros filled with small random values (generally faster, less accurate alternative to NNDSVDa for when sparsity is not desired). 'custom': use custom matrices W and H")
- d2,d3 = dim_nmf(emb, nmf_init)
- if dim == "PCA":
- d2,d3 = dim_pca(emb)
- if dim == "Sparse PCA":
- mth = st.radio("method", ("lars", "cd"), help = "Method to be used for optimization. lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path) cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.")
- d2,d3 = dim_spca(emb, mth)
- if dim == "Truncated SVD":
- alg = st.radio("algorithm", ("arpack","randomized"), help = "SVD solver to use. Either “arpack” for the ARPACK wrapper in SciPy (scipy.sparse.linalg.svds), or “randomized” for the randomized algorithm due to Halko (2009).")
- d2,d3 = dim_tsvd(emb, alg)
- if dim == "UMAP":
- mind = st.slider("min distance", 0.0, 2.0, 0.1, help = "The effective minimum distance between embedded points. Smaller values will result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together, while larger values will result on a more even dispersal of points. The value should be set relative to the spread value, which determines the scale at which embedded points will be spread out.")
- nne = st.slider("n_neighbors", 2, 200, 10, help = "The size of local neighborhood (in terms of number of neighboring sample points) used for manifold approximation. Larger values result in more global views of the manifold, while smaller values result in more local data being preserved. In general values should be in the range 2 to 100.")
- mtrc = st.radio("metric", ("euclidean","manhattan","chebyshev","minkowski","canberra","braycurtis","mahalanobis","wminkowski","seuclidean","cosine","correlation","haversine","hamming","jaccard","dice","russelrao","kulsinski","ll_dirichlet","hellinger","rogerstanimoto","sokalmichener","sokalsneath","yule"), help = "The metric to use to compute distances in high dimensional space. If a string is passed it must match a valid predefined metric. If a general metric is required a function that takes two 1d arrays and returns a float can be provided. For performance purposes it is required that this be a numba jit’d function. ")
- d2,d3 = dim_umap(emb, nne, mind, mtrc)
-
-emb_df = pd.DataFrame({
- "prep" : prep,
- "cluster" : cl,
- "d2" : list(d2),
- "d3" : list(d3),
- "emb" : list(emb)
- })
-emb_display = emb_df[["prep","cluster"]].groupby("cluster").agg(lambda x: list(x)).reset_index()
-emb_display["n"] = [len(i) for i in emb_display["prep"]]
-
-#body
-with st.container():
- c1, c2 = st.columns(2)
- with c1:
- ##raw data
- st.subheader("Raw and Preprocessed Data")
- st.write(dict(zip(text[:5],prep[:5])))
- with c2:
- ##embedings
- st.subheader("Clustered Data")
- st.dataframe(emb_display[["cluster","n","prep"]])
-
-with st.container():
- st.header("Clustering Metrics & Plots")
-
- c3, c4, c5 = st.columns(3)
- with c3:
- st.metric("Calinski Harabasz Score", eval_ch(emb, cl))
- st.caption("A higher Calinski-Harabasz score relates to a model with better defined clusters. The index is the ratio of the sum of between-clusters dispersion and of within-cluster dispersion for all clusters (where dispersion is defined as the sum of distances squared)")
- with c4:
- st.metric("Davies Bouldin Score", eval_db(emb, cl))
- st.caption("A lower Davies-Bouldin index relates to a model with better separation between the clusters. This index signifies the average ‘similarity’ between clusters, where the similarity is a measure that compares the distance between clusters with the size of the clusters themselves. Zero is the lowest possible score. Values closer to zero indicate a better partition.")
- with c5:
- st.metric("Silhouette Score", eval_s(emb, cl))
- st.caption("A higher Silhouette Coefficient score relates to a model with better defined clusters. The Silhouette Coefficient is defined for each sample and is composed of two scores: a: The mean distance between a sample and all other points in the same class. b: The mean distance between a sample and all other points in the next nearest cluster.")
-
- c6, c7 = st.columns(2)
- with c6:
- st.plotly_chart(d2_plot(emb_df), use_container_width = False, help = "Embeddings Reduced to 2 Dimensions")
- with c7:
- st.plotly_chart(d3_plot(emb_df), use_container_width = False, help = "Embeddings Reduced to 3 Dimensions")
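To make the three metric captions above concrete, here is a standalone sketch of the same scores computed directly with scikit-learn on stand-in embeddings; the app's eval_ch, eval_db, and eval_s helpers are assumed to wrap these functions.

```python
# Sketch of the three cluster-quality scores reported in the app,
# computed with scikit-learn on random stand-in embeddings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (
    calinski_harabasz_score,
    davies_bouldin_score,
    silhouette_score,
)

rng = np.random.default_rng(42)
emb = rng.normal(size=(200, 32))              # stand-in for sentence embeddings
labels = KMeans(n_clusters=5, n_init=10, random_state=42).fit_predict(emb)

print(calinski_harabasz_score(emb, labels))   # higher = better-defined clusters
print(davies_bouldin_score(emb, labels))      # lower = better separation, 0 is best
print(silhouette_score(emb, labels))          # in [-1, 1], higher is better
```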
diff --git a/spaces/sdhsdhk/bingosjj/src/app/layout.tsx b/spaces/sdhsdhk/bingosjj/src/app/layout.tsx
deleted file mode 100644
index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000
--- a/spaces/sdhsdhk/bingosjj/src/app/layout.tsx
+++ /dev/null
@@ -1,47 +0,0 @@
-import { Metadata } from 'next'
-import { Toaster } from 'react-hot-toast'
-import { TailwindIndicator } from '@/components/tailwind-indicator'
-import { Providers } from '@/components/providers'
-import { Header } from '@/components/header'
-
-import '@/app/globals.scss'
-
-
-export const metadata: Metadata = {
- title: {
- default: 'Bing AI Chatbot',
- template: `%s - Bing AI Chatbot`
- },
- description: 'Bing AI Chatbot Web App.',
- themeColor: [
- { media: '(prefers-color-scheme: light)', color: 'white' },
- { media: '(prefers-color-scheme: dark)', color: 'dark' }
- ],
- icons: {
- icon: '/favicon.ico',
- shortcut: '../assets/images/logo.svg',
- apple: '../assets/images/logo.svg'
- }
-}
-
-interface RootLayoutProps {
- children: React.ReactNode
-}
-
-export default function RootLayout({ children }: RootLayoutProps) {
-  return (
-    <html lang="en">
-      <body>
-        <Toaster />
-        <Providers>
-          {/* @ts-ignore */}
-          <Header />
-          <main>{children}</main>
-        </Providers>
-        <TailwindIndicator />
-      </body>
-    </html>
-  )
-}
diff --git a/spaces/seanghay/KLEA/g2p.py b/spaces/seanghay/KLEA/g2p.py
deleted file mode 100644
index aa448842c7897067c2dc7d2b91c74717a376ea7b..0000000000000000000000000000000000000000
--- a/spaces/seanghay/KLEA/g2p.py
+++ /dev/null
@@ -1,219 +0,0 @@
-"""
-Guess word pronunciations using a Phonetisaurus FST
-
-See bin/fst2npz.py to convert an FST to a numpy graph.
-
-Reference:
- https://github.com/rhasspy/gruut/blob/master/gruut/g2p_phonetisaurus.py
-"""
-import typing
-from collections import defaultdict
-from pathlib import Path
-import numpy as np
-
-NUMPY_GRAPH = typing.Dict[str, np.ndarray]
-_NOT_FINAL = object()
-
-class PhonetisaurusGraph:
- """Graph of numpy arrays that represents a Phonetisaurus FST
-
- Also contains shared cache of edges and final state probabilities.
- These caches are necessary to ensure that the .npz file stays small and fast
- to load.
- """
-
- def __init__(self, graph: NUMPY_GRAPH, preload: bool = False):
- self.graph = graph
-
- self.start_node = int(self.graph["start_node"].item())
-
- # edge_index -> (from_node, to_node, ilabel, olabel)
- self.edges = self.graph["edges"]
- self.edge_probs = self.graph["edge_probs"]
-
- # int -> [str]
- self.symbols = []
- for symbol_str in self.graph["symbols"]:
- symbol_list = symbol_str.replace("_", "").split("|")
- self.symbols.append((len(symbol_list), symbol_list))
-
- # nodes that are accepting states
- self.final_nodes = self.graph["final_nodes"]
-
- # node -> probability
- self.final_probs = self.graph["final_probs"]
-
- # Cache
- self.preloaded = preload
- self.out_edges: typing.Dict[int, typing.List[int]] = defaultdict(list)
- self.final_node_probs: typing.Dict[int, typing.Any] = {}
-
- if preload:
- # Load out edges
- for edge_idx, (from_node, *_) in enumerate(self.edges):
- self.out_edges[from_node].append(edge_idx)
-
- # Load final probabilities
- self.final_node_probs.update(zip(self.final_nodes, self.final_probs))
-
- @staticmethod
- def load(graph_path: typing.Union[str, Path], **kwargs) -> "PhonetisaurusGraph":
- """Load .npz file with numpy graph"""
- np_graph = np.load(graph_path, allow_pickle=True)
- return PhonetisaurusGraph(np_graph, **kwargs)
-
- def g2p_one(
- self,
- word: typing.Union[str, typing.Sequence[str]],
- eps: str = "",
- beam: int = 5000,
- min_beam: int = 100,
- beam_scale: float = 0.6,
- grapheme_separator: str = "",
- max_guesses: int = 1,
- ) -> typing.Iterable[typing.Tuple[typing.Sequence[str], typing.Sequence[str]]]:
- """Guess phonemes for word"""
- current_beam = beam
- graphemes: typing.Sequence[str] = []
-
- if isinstance(word, str):
- word = word.strip()
-
- if grapheme_separator:
- graphemes = word.split(grapheme_separator)
- else:
- graphemes = list(word)
- else:
- graphemes = word
-
- if not graphemes:
- return []
-
- # (prob, node, graphemes, phonemes, is_final)
- q: typing.List[
- typing.Tuple[
- float,
- typing.Optional[int],
- typing.Sequence[str],
- typing.List[str],
- bool,
- ]
- ] = [(0.0, self.start_node, graphemes, [], False)]
-
- q_next: typing.List[
- typing.Tuple[
- float,
- typing.Optional[int],
- typing.Sequence[str],
- typing.List[str],
- bool,
- ]
- ] = []
-
- # (prob, phonemes)
- best_heap: typing.List[typing.Tuple[float, typing.Sequence[str]]] = []
-
- # Avoid duplicate guesses
- guessed_phonemes: typing.Set[typing.Tuple[str, ...]] = set()
-
- while q:
- done_with_word = False
- q_next = []
-
- for prob, node, next_graphemes, output, is_final in q:
- if is_final:
- # Complete guess
- phonemes = tuple(output)
- if phonemes not in guessed_phonemes:
- best_heap.append((prob, phonemes))
- guessed_phonemes.add(phonemes)
-
- if len(best_heap) >= max_guesses:
- done_with_word = True
- break
-
- continue
-
- assert node is not None
-
- if not next_graphemes:
- if self.preloaded:
- final_prob = self.final_node_probs.get(node, _NOT_FINAL)
- else:
- final_prob = self.final_node_probs.get(node)
- if final_prob is None:
- final_idx = int(np.searchsorted(self.final_nodes, node))
- if self.final_nodes[final_idx] == node:
- # Cache
- final_prob = float(self.final_probs[final_idx])
- self.final_node_probs[node] = final_prob
- else:
- # Not a final state
- final_prob = _NOT_FINAL
- self.final_node_probs[node] = final_prob
-
- if final_prob != _NOT_FINAL:
- final_prob = typing.cast(float, final_prob)
- q_next.append((prob + final_prob, None, [], output, True))
-
- len_next_graphemes = len(next_graphemes)
- if self.preloaded:
- # Was pre-loaded in __init__
- edge_idxs = self.out_edges[node]
- else:
- # Build cache during search
- maybe_edge_idxs = self.out_edges.get(node)
- if maybe_edge_idxs is None:
- edge_idx = int(np.searchsorted(self.edges[:, 0], node))
- edge_idxs = []
- while self.edges[edge_idx][0] == node:
- edge_idxs.append(edge_idx)
- edge_idx += 1
-
- # Cache
- self.out_edges[node] = edge_idxs
- else:
- edge_idxs = maybe_edge_idxs
-
- for edge_idx in edge_idxs:
- _, to_node, ilabel_idx, olabel_idx = self.edges[edge_idx]
- out_prob = self.edge_probs[edge_idx]
-
- len_igraphemes, igraphemes = self.symbols[ilabel_idx]
-
- if len_igraphemes > len_next_graphemes:
- continue
-
- if igraphemes == [eps]:
- item = (prob + out_prob, to_node, next_graphemes, output, False)
- q_next.append(item)
- else:
- sub_graphemes = next_graphemes[:len_igraphemes]
- if igraphemes == sub_graphemes:
- _, olabel = self.symbols[olabel_idx]
- item = (
- prob + out_prob,
- to_node,
- next_graphemes[len(sub_graphemes) :],
- output + olabel,
- False,
- )
- q_next.append(item)
-
- if done_with_word:
- break
-
- q_next = sorted(q_next, key=lambda item: item[0])[:current_beam]
- q = q_next
-
- current_beam = max(min_beam, (int(current_beam * beam_scale)))
-
- # Yield guesses
- if best_heap:
- for _, guess_phonemes in sorted(best_heap, key=lambda item: item[0])[
- :max_guesses
- ]:
- yield [p for p in guess_phonemes if p]
- else:
- # No guesses
- yield []
\ No newline at end of file
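A brief usage sketch of the class above: g2p_one is a generator yielding one phoneme list per guess. The .npz path is a placeholder; such a graph is produced by the bin/fst2npz.py conversion mentioned in the module docstring and is not shipped with this snippet.

```python
# Sketch: load a converted Phonetisaurus graph and guess pronunciations.
# "model.npz" is a placeholder path, not a file provided here.
from g2p import PhonetisaurusGraph

graph = PhonetisaurusGraph.load("model.npz", preload=True)

# g2p_one is a generator; each item is a list of phoneme strings.
for phonemes in graph.g2p_one("hello", max_guesses=3):
    print(phonemes)
```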
diff --git a/spaces/sharmaanupam/eigenvectors/app.py b/spaces/sharmaanupam/eigenvectors/app.py
deleted file mode 100644
index 03e25f47018cff29aea3afb38d0be8dc43c2f148..0000000000000000000000000000000000000000
--- a/spaces/sharmaanupam/eigenvectors/app.py
+++ /dev/null
@@ -1,152 +0,0 @@
-from matplotlib import pyplot as plt
-import numpy as np
-import streamlit as st
-import pandas as pd
-from utils import getSquareYVectorised, getCircle, getBatman, transform, plotGridLines, discriminant
-
-minv = -5.0
-maxv = 5.0
-step = 0.1
-
-np.set_printoptions(precision=3)
-xlim = (-10,10)
-ylim = (-10,10)
-
-st.title("Visualizing Eigenvectors with 2x2 Linear Transformations")
-st.write(
- "This app shows the effect of a 2x2 linear transformation on simple shapes to understand the role of eigenvectors and eigenvalues in quantifying the nature of a transformation.")
-
-with st.sidebar:
- data = st.selectbox('Select type of dataset', ['Square', 'Circle', 'Batman'])
- if data == 'Batman':
- black = st.checkbox(label='Black')
- transform_type = st.selectbox('Select type of transformation', ['Custom', 'Stretch', 'Shear', 'Rotate'])
- st.write("---")
- if transform_type == 'Custom':
- st.markdown("Select elements of transformation matrix $A$")
- a_00 = st.slider(label = '$a_{00}$', min_value = minv, max_value=maxv, value=1.0, step=step)
- a_01 = st.slider(label = '$a_{01}$', min_value = minv, max_value=maxv, value=0.0, step=step)
- a_10 = st.slider(label = '$a_{10}$', min_value = minv, max_value=maxv, value=0.0, step=step)
- a_11 = st.slider(label = '$a_{11}$', min_value = minv, max_value=maxv, value=1.0, step=step)
- t = np.array([[a_00, a_01], [a_10, a_11]], dtype=np.float64)
- elif transform_type == 'Stretch':
- both = st.checkbox('Set equal')
- if not both:
- stretch_x = st.slider(label = 'Stretch in x-direction', min_value = minv, max_value=maxv, value=1.0, step=step)
- stretch_y = st.slider(label = 'Stretch in y-direction', min_value = minv, max_value=maxv, value=1.0, step=step)
- t = np.array([[stretch_x, 0], [0, stretch_y]], dtype=np.float64)
- else:
- stretch = st.slider(label = 'Scale', min_value = minv, max_value=maxv, value=1.0, step=step)
- t = np.array([[stretch, 0], [0, stretch]], dtype=np.float64)
- elif transform_type == 'Shear':
- left, right = st.columns(2)
- with left:
- both = st.checkbox('Set equal')
- if not both:
- shear_x = st.slider(label = 'Shear in x-direction', min_value=minv, max_value=maxv, value=0.0, step=step)
- shear_y = st.slider(label = 'Shear in y-direction', min_value=minv, max_value=maxv, value=0.0, step=step)
- t = np.array([[1, shear_x], [shear_y, 1]], dtype=np.float64)
- else:
- with right:
- sign = st.checkbox('Opposite sign')
- shear = st.slider(label = 'Shear in both directions', min_value=minv, max_value=maxv, value=0.0, step=step)
- t = np.array([[1, -shear], [shear, 1]], dtype=np.float64) if sign else np.array([[1, shear], [shear, 1]], dtype=np.float64)
- else:
- st.markdown("Rotate by $\\theta$ in anti-clockwise\ndirection")
- min_theta = -180.0
- max_theta = 180.0
- theta = st.slider(label = '$\\theta$', min_value=min_theta, max_value=max_theta, value=0.0, step=step, format="%f°")
- rtheta = np.pi * theta/180.0
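- # 2x2 anti-clockwise rotation matrix: [[cos(theta), -sin(theta)], [sin(theta), cos(theta)]]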
- t = np.array([[np.cos(rtheta), -np.sin(rtheta)], [np.sin(rtheta), np.cos(rtheta)]], dtype=np.float64)
- st.write("---")
- st.write("The transformation matrix A is:")
- st.table(pd.DataFrame(t))
- st.write("---")
- showNormalSpace = st.checkbox(label= 'Show original space (without transform)', value=False)
-
-
-if data == 'Square':
- x = np.linspace(-1,1,1000)
- y = getSquareYVectorised(x)
-elif data == 'Circle':
- x = np.linspace(-1,1,1000)
- y = getCircle(x)
-else:
- X, Y = getBatman(s=2)
-
-if data != 'Batman':
- x_dash_up, y_dash_up = transform(x,y,t)
- x_dash_down, y_dash_down = transform(x,-y,t)
-else:
- tmp = [transform(x, y, t) for x, y in zip(X, Y)]
- X_dash = [p[0] for p in tmp]
- Y_dash = [p[1] for p in tmp]
-
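-# np.linalg.eig returns the eigenvalues and a matrix whose columns are the corresponding eigenvectors of A.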
-evl, evec = np.linalg.eig(t)
-fig, ax = plt.subplots()
-
-if showNormalSpace:
- if data != 'Batman':
- ax.plot(x, y, 'r', alpha=0.5)
- ax.plot(x, -y, 'g', alpha=0.5)
- else:
- for i, (x, y) in enumerate(zip(X, Y)):
- if black:
- ax.plot(x, y, 'k-', alpha=0.5, linewidth=1)
- elif i < 3:
- ax.plot(x, y, 'g-', alpha=0.5, linewidth=1)
- else:
- ax.plot(x, y, 'r-', alpha=0.5, linewidth=1)
- if not np.iscomplex(evec).any():
- ax.quiver(0,0,evec[0,0],evec[1,0],scale=1,scale_units ='xy',angles='xy', facecolor='black', alpha=0.5)
- ax.quiver(0,0,evec[0,1],evec[1,1],scale=1,scale_units ='xy',angles='xy', facecolor='black', alpha=0.5)
- plotGridLines(xlim,ylim,np.array([[1,0], [0,1]]),'#9D9D9D','Normal Space',0.4)
-
-if data != 'Batman':
- ax.plot(x_dash_up,y_dash_up,'r')
- ax.plot(x_dash_down,y_dash_down, 'g')
-else:
- for i, (x, y) in enumerate(zip(X_dash, Y_dash)):
- if black:
- ax.plot(x, y, 'k-', linewidth=1)
- elif i < 3:
- ax.plot(x, y, 'g', linewidth=1)
- else:
- ax.plot(x, y, 'r', linewidth=1)
-if not (np.iscomplex(evl).any() or np.iscomplex(evec).any()):
- ax.quiver(0,0,evec[0,0]*evl[0],evec[1,0]*evl[0],scale=1,scale_units ='xy',angles='xy', facecolor='cyan', label=r'$eigen\ vector_{\lambda_0}$')
- ax.quiver(0,0,evec[0,1]*evl[1],evec[1,1]*evl[1],scale=1,scale_units ='xy',angles='xy', facecolor='blue', label=r'$eigen\ vector_{\lambda_1}$')
-plotGridLines(xlim,ylim,t,'#403B3B','Transformed space',0.6)
-ax.text(11,3,'|A|={:.2f}'.format(np.linalg.det(t)), fontdict={'fontsize':11})
-ax.text(11,2,'D = {:.2f}'.format(discriminant(t)), fontdict={'fontsize':11})
-if discriminant(t) < 0:
- ax.text(13,1,'Negative!', fontdict={'fontsize':8})
-
-ax.set_xlim(*xlim)
-ax.set_ylim(*ylim)
-ax.set_aspect('equal', adjustable='box')
-ax.xaxis.set_tick_params(labelbottom=False)
-ax.yaxis.set_tick_params(labelleft=False)
-ax.set_xticks([])
-ax.set_yticks([])
-fig.legend(bbox_to_anchor=(1.05, 0.86), loc=1, borderaxespad=0., fontsize=8)
-
-if np.iscomplex(evl).any() or np.iscomplex(evec).any():
- st.caption(
- ":red[NOTE: Due to _complex_ _eigenvectors_ and _eigenvalues_, the transformed _eigenvectors_ \
- are not displayed...]"
- )
-else:
- ### This is to make sure that the graph vanishes and re-appears when the condition is true.
- st.caption(" ")
-
-st.pyplot(fig)
-
-df = pd.DataFrame({'Eigenvalues': [str(np.round(x,3)) for x in evl], 'Eigenvectors': [str(evec[:,0]), str(evec[:,1])],\
- 'Transformed Eigenvectors': [str(evec[:,0]*evl[0]), str(evec[:,1]*evl[1])]})
-st.table(df)
-
-st.write("---")
-file = open("description.md", "r")
-st.markdown(file.read())
-file.close()
\ No newline at end of file
diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/__init__.py b/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cmo Obtener Kick the Buddy Mod Apk Todo Desbloqueado Gratis y Fcil.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cmo Obtener Kick the Buddy Mod Apk Todo Desbloqueado Gratis y Fcil.md
deleted file mode 100644
index ac718301556689df0380c3322e40c0a595cf30d4..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cmo Obtener Kick the Buddy Mod Apk Todo Desbloqueado Gratis y Fcil.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-Kick the Buddy Mod APK Todo Desbloqueado: How to Download and Play
- Do you want to have some fun and relieve some stress by kicking, punching, smashing, and exploding a ragdoll buddy? If yes, then you should try Kick the Buddy, a popular casual game that lets you do all that and more. But what if you want to have even more fun and unlock all the items and weapons in the game without spending any money? Well, then you should try Kick the Buddy Mod APK Todo Desbloqueado, a modified version of the game that gives you unlimited money and access to everything in the game. In this article, we will tell you what Kick the Buddy is, what Kick the Buddy Mod APK Todo Desbloqueado is, how to download and install it, and how to play it. So, let's get started!
-kick the buddy mod apk todo desbloqueado
Download Zip >> https://ssurll.com/2uNWcj
- What is Kick the Buddy?
- A fun and stress-relieving game
- Kick the Buddy is a casual game that was developed by Playgendary and released in 2017. The game is available for both Android and iOS devices. The main objective of the game is to torture, beat, destroy, and kill a ragdoll buddy using various weapons, tools, elements, animals, machines, and even gods. You can choose from hundreds of items and weapons, such as knives, guns, bombs, rockets, lasers, fire, lightning, zombies, dragons, dinosaurs, and more. You can also customize your buddy's appearance, clothes, voice, and name. The game is designed to be a fun and stress-relieving way to vent your anger and frustration on a harmless virtual doll. You can also earn money by hitting your buddy and use it to buy more items and weapons.
- Features of Kick the Buddy
- Some of the features of Kick the Buddy are:
-
-- It has realistic physics and graphics that make the game more enjoyable and satisfying.
-- It has a variety of modes and scenarios that offer different challenges and experiences.
-- It has a social aspect that allows you to share your gameplay videos with your friends and watch other players' videos.
-- It has a friendly and humorous tone that makes the game less violent and more amusing.
-- It has regular updates that add new items, weapons, modes, scenarios, and features to the game.
-
- What is Kick the Buddy Mod APK Todo Desbloqueado?
- A modified version of the game with unlimited money and unlocked items
- Kick the Buddy Mod APK Todo Desbloqueado is a modified version of the original game that gives you unlimited money and access to all the items and weapons in the game. This means that you don't have to spend any real money or wait for hours to unlock new items and weapons. You can simply download and install the mod apk file on your device and enjoy all the features of the game without any restrictions. You can also use any item or weapon you want without worrying about running out of money or ammo.
- Benefits of using Kick the Buddy Mod APK Todo Desbloqueado
- Some of the benefits of using Kick the Buddy Mod APK Todo Desbloqueado are:
-
-- You can have more fun and satisfaction by using any item or weapon you want.
-- You can experiment with different combinations of items and weapons and discover new ways to torture your buddy.
-- You can save your time and money by not having to buy or earn anything in the game.
-- You can enjoy the game without any ads or interruptions.
-
- How to download and install Kick the Buddy Mod APK Todo Desbloqueado?
- Steps to download and install the mod apk file
- If you want to download and install Kick the Buddy Mod APK Todo Desbloqueado on your device, you need to follow these steps:
-
-- First, you need to uninstall the original game from your device if you have it installed.
-- Second, you need to enable the installation of apps from unknown sources on your device. To do this, go to your device's settings, then security, then unknown sources, and turn it on.
-- Third, you need to download the mod apk file from a reliable and trusted source. You can use this link to download the latest version of the mod apk file.
-- Fourth, you need to locate the downloaded file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-- Fifth, you need to launch the game and enjoy all the features of Kick the Buddy Mod APK Todo Desbloqueado.
-
- Tips to avoid viruses and malware
- While downloading and installing Kick the Buddy Mod APK Todo Desbloqueado is easy and safe, you still need to be careful and avoid any viruses or malware that might harm your device or compromise your privacy. Here are some tips to avoid viruses and malware:
-
-- Always download the mod apk file from a reputable and verified source. Do not download from unknown or suspicious websites or links.
-- Always scan the mod apk file with a reliable antivirus or anti-malware software before installing it on your device.
-- Always backup your device's data before installing any mod apk file. This way, you can restore your data in case anything goes wrong.
-- Always read the permissions and terms of service of any mod apk file before installing it on your device. Do not grant any unnecessary or suspicious permissions or agree to any terms that might violate your rights or privacy.
-
- How to play Kick the Buddy Mod APK Todo Desbloqueado?
- Choose your buddy and your weapon
- The first thing you need to do when you play Kick the Buddy Mod APK Todo Desbloqueado is to choose your buddy and your weapon. You can customize your buddy's appearance, clothes, voice, and name by tapping on the buddy icon on the top left corner of the screen. You can also change your buddy's mood by tapping on the smiley face icon next to it. You can choose from happy, sad, angry, scared, or neutral moods. You can also switch between different buddies by tapping on the arrows next to the buddy icon.
- To choose your weapon, you need to tap on the weapon icon on the bottom right corner of the screen. You can then scroll through different categories of weapons, such as firearms, cold weapons, explosives, power of gods, animals, machines, etc. You can also use the search bar to find a specific weapon by name. To use a weapon, you need to drag it from the menu to the screen and release it on your buddy. Some weapons require tapping, swiping, or tilting your device to activate them.
-kick the buddy mod apk unlimited money and gold
-kick the buddy mod apk all weapons unlocked
-kick the buddy mod apk latest version download
-kick the buddy mod apk android 1
-kick the buddy mod apk hack download
-kick the buddy mod apk no ads
-kick the buddy mod apk free shopping
-kick the buddy mod apk revdl
-kick the buddy mod apk happymod
-kick the buddy mod apk rexdl
-kick the buddy mod apk unlimited everything
-kick the buddy mod apk unlimited diamonds
-kick the buddy mod apk offline
-kick the buddy mod apk 2023
-kick the buddy mod apk mediafıre
-kick the buddy mod apk mega
-kick the buddy mod apk online
-kick the buddy mod apk original
-kick the buddy mod apk pure
-kick the buddy mod apk unlimited coins
-kick the buddy mod apk vip unlocked
-kick the buddy mod apk with obb
-kick the buddy mod apk 1.6.2 download
-kick the buddy mod apk 1.0.6 download
-kick the buddy mod apk 1.0.4 download
-descargar kick the buddy mod apk todo desbloqueado
-como descargar kick the buddy mod apk todo desbloqueado
-descargar gratis kick the buddy mod apk todo desbloqueado
-descargar e instalar kick the buddy mod apk todo desbloqueado
-descargar ultima version de kick the buddy mod apk todo desbloqueado
-descargar por mediafıre kick the buddy mod apk todo desbloqueado
-descargar por mega kick the buddy mod apk todo desbloqueado
-descargar sin anuncios kick the buddy mod apk todo desbloqueado
-descargar sin internet kick the buddy mod apk todo desbloqueado
-descargar sin root kick the buddy mod apk todo desbloqueado
-como jugar a kick the buddy con todo desbloqueado sin descargar nada
-como tener todo desbloqueado en el juego de kick the buddy sin descargar nada
-como hackear el juego de kick the buddy para tener todo desbloqueado sin descargar nada
-como conseguir dinero infinito en el juego de kick the buddy sin descargar nada
-como conseguir oro infinito en el juego de kick the buddy sin descargar nada
-como conseguir diamantes infinitos en el juego de kick the buddy sin descargar nada
-como conseguir armas gratis en el juego de kick the buddy sin descargar nada
-como conseguir vip gratis en el juego de kick the buddy sin descargar nada
-como conseguir todos los personajes en el juego de kick the buddy sin descargar nada
-como conseguir todos los trajes en el juego de kick the buddy sin descargar nada
-como conseguir todos los fondos en el juego de kick the buddy sin descargar nada
-como conseguir todos los elementos en el juego de kick the buddy sin descargar nada
-como conseguir todos los efectos en el juego de kick the buddy sin descargar nada
-como conseguir todos los logros en el juego de kick the buddy sin descargar nada
- Explore different modes and scenarios
-Kick the Buddy Mod APK Todo Desbloqueado offers different modes and scenarios that add more variety and fun to the game. You can access them by tapping on the mode icon on the bottom left corner of the screen. You can then choose from different options, such as:
-
-- Sandbox: This mode allows you to create your own scenarios and environments by adding and removing objects, backgrounds, and effects. You can also adjust the gravity, friction, and speed of the game. You can save your creations and share them with other players.
-- Missions: This mode gives you specific tasks and objectives to complete using different items and weapons. You can earn money and rewards by completing the missions.
-- Duels: This mode lets you challenge other players online and compete with them in real-time. You can use any item or weapon you want and try to beat your opponent's score.
-- Buddy World: This mode lets you explore different worlds and locations that are themed after different genres, such as horror, fantasy, sci-fi, etc. You can interact with different characters and objects in each world and discover new items and weapons.
-
-Earn rewards and achievements
- Kick the Buddy Mod APK Todo Desbloqueado also gives you the opportunity to earn rewards and achievements by playing the game. You can earn money by hitting your buddy, completing missions, winning duels, and watching ads. You can use the money to buy more items and weapons in the game. You can also earn diamonds by completing achievements, which are special challenges that test your skills and creativity. You can use the diamonds to buy premium items and weapons in the game. You can also unlock new buddies, skins, clothes, voices, and names by earning enough money or diamonds.
- Conclusion
- Summary of the main points
- Kick the Buddy is a fun and stress-relieving game that lets you torture, beat, destroy, and kill a ragdoll buddy using various items and weapons. Kick the Buddy Mod APK Todo Desbloqueado is a modified version of the game that gives you unlimited money and access to all the items and weapons in the game. You can download and install Kick the Buddy Mod APK Todo Desbloqueado on your device by following some simple steps and tips. You can play Kick the Buddy Mod APK Todo Desbloqueado by choosing your buddy and your weapon, exploring different modes and scenarios, and earning rewards and achievements.
- Call to action
- If you are looking for a fun and easy way to relieve some stress and have some laughs, then you should definitely try Kick the Buddy Mod APK Todo Desbloqueado. It is one of the best casual games that you can play on your device anytime and anywhere. So, what are you waiting for? Download Kick the Buddy Mod APK Todo Desbloqueado now and enjoy all the features of the game without any limitations!
- FAQs
- Q: Is Kick the Buddy Mod APK Todo Desbloqueado safe to use?
-A: Yes, Kick the Buddy Mod APK Todo Desbloqueado is safe to use as long as you download it from a trusted source and scan it with an antivirus or anti-malware software before installing it on your device.
- Q: Do I need to root or jailbreak my device to use Kick the Buddy Mod APK Todo Desbloqueado?
-A: No, you don't need to root or jailbreak your device to use Kick the Buddy Mod APK Todo Desbloqueado. You just need to enable the installation of apps from unknown sources on your device.
- Q: Can I play Kick the Buddy Mod APK Todo Desbloqueado offline?
-A: Yes, you can play Kick the Buddy Mod APK Todo Desbloqueado offline without any internet connection. However, some features of the game, such as duels, ads, and updates, may require an internet connection.
- Q: Can I play Kick the Buddy Mod APK Todo Desbloqueado with my friends?
-A: Yes, you can play Kick the Buddy Mod APK Todo Desbloqueado with your friends online by challenging them in duels or sharing your gameplay videos with them.
- Q: How can I contact the developers of Kick the Buddy Mod APK Todo Desbloqueado?
-A: You can contact the developers of Kick the Buddy Mod APK Todo Desbloqueado by sending them an email at support@playgendary.com or visiting their website at https://playgendary.com/.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dinheiro infinito em Z Champions Dicas e truques para dominar o jogo.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dinheiro infinito em Z Champions Dicas e truques para dominar o jogo.md
deleted file mode 100644
index 9381b1c4c9b35b12c8a1e514bf2f639db1a64b99..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Dinheiro infinito em Z Champions Dicas e truques para dominar o jogo.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-Download Z Champions Dinheiro Infinito: How to Get Unlimited Money in the Dragon Tournament Game
-If you are a fan of Dragon Ball Z, you might have heard of Z Champions, a fan-made arcade game that lets you become one of the most powerful Z fighters in the Dragon Tournament. In this game, you can choose from 45 different characters, move between 22 different worlds, transform to increase your Ki power, and unleash your ultimate attacks. But what if you want to have unlimited money in the game, so that you can unlock all the characters, worlds, and transformations without having to grind for them? That's where Z Champions Dinheiro Infinito comes in.
-Z Champions Dinheiro Infinito is a modded version of Z Champions that gives you infinite money in the game, so that you can enjoy all the features and content without any limitations. In this article, we will show you how to download Z Champions Dinheiro Infinito, how to install it on your device, and how to use it in the game. We will also give you some tips and tricks on how to play Z Champions better and have more fun.
-download z champions dinheiro infinito
Download File ☆☆☆☆☆ https://ssurll.com/2uO1d7
- What is Z Champions?
-Z Champions is a fan-made arcade game based on Dragon Ball Z, created by Sansonight. It is not an official product of Dragon Ball Z, but rather a tribute to the popular anime series. The game is available for free on Android devices, and has over 10 million downloads on Google Play.
-In Z Champions, you can choose from 45 different fighters, each with their own special attacks, different for each champion. You can also unlock new champions by completing certain tasks or using money. You can move between 22 different worlds, each with their own background music and scenery. You can also transform your character to increase your Ki power and unleash your ultimate attack.
-download z champions mod apk dinheiro infinito
-download z champions dinheiro infinito mediafire
-download z champions dinheiro infinito youtube
-download z champions v1.5.398 mod dinheiro infinito
-download z champions v1.5.301 mod dinheiro infinito
-download z champions sansonight dinheiro infinito
-download z champions mugen para android dinheiro infinito
-download z champions dragon ball dinheiro infinito
-download z champions atualizado dinheiro infinito
-download z champions hack dinheiro infinito
-download z champions 2023 dinheiro infinito
-download z champions offline dinheiro infinito
-download z champions apk obb dinheiro infinito
-download z champions apk pure dinheiro infinito
-download z champions apk mirror dinheiro infinito
-download z champions apk mod menu dinheiro infinito
-download z champions apk mod hack dinheiro infinito
-download z champions apk mod unlimited money dinheiro infinito
-download z champions apk mod latest version dinheiro infinito
-download z champions apk mod free shopping dinheiro infinito
-download z champions apk mod no ads dinheiro infinito
-download z champions apk mod all characters unlocked dinheiro infinito
-download z champions apk mod all transformations unlocked dinheiro infinito
-download z champions apk mod all skills unlocked dinheiro infinito
-download z champions apk mod all stages unlocked dinheiro infinito
-download z champions apk mod god mode dinheiro infinito
-download z champions apk mod one hit kill dinheiro infinito
-download z champions apk mod high damage dinheiro infinito
-download z champions apk mod unlimited energy dinheiro infinito
-download z champions apk mod unlimited ki dinheiro infinito
-download z champions apk mod unlimited coins dinheiro infinito
-download z champions apk mod unlimited gems dinheiro infinito
-download z champions apk mod unlimited crystals dinheiro infinito
-download z champions apk mod unlimited zenies dinheiro infinito
-download z champions apk mod unlimited souls dinheiro infinito
-download z champions apk mod unlimited tickets dinheiro infinito
-download z champions apk mod unlimited medals dinheiro infinito
-download z champions apk mod unlimited stars dinheiro infinito
-download z champions apk mod unlimited lives dinheiro infinito
-download z champions apk mod mega money dinero infinite
-The game has several modes to choose from, such as PRACTICE mode, where you can practice your skills and combos; SINGLEPLAYER mode, where you can fight against AI opponents; MINIGAMES mode, where you can play some fun mini-games, such as dodgeball, volleyball, and soccer; and MULTIPLAYER mode, where you can fight against other players online or locally.
- Why Download Z Champions Dinheiro Infinito?
-Z Champions is a fun and addictive game, but it can also be challenging and time-consuming. If you want to unlock all the characters, worlds, and transformations, you will need to spend a lot of money in the game, which can be earned by winning fights or watching ads. However, this can take a long time and become boring after a while.
-That's why some players prefer to download Z Champions Dinheiro Infinito, which is a modded version of the game that gives you unlimited money from the start. With this mod, you can unlock all the features and content of the game without having to grind for them. You can also enjoy the game without any ads or interruptions.
-Downloading Z Champions Dinheiro Infinito can make the game more fun and exciting, as you can experiment with different characters, worlds, and transformations. You can also challenge yourself with harder opponents and modes, or have more fun with your friends in multiplayer mode.
- How to Download Z Champions Dinheiro Infinito?
-If you want to download Z Champions Dinheiro Infinito, you will need to follow some simple steps. However, before you do that, you should be aware of some risks and precautions. First of all, downloading Z Champions Dinheiro Infinito is not an official or authorized way of playing the game. It is a modded version that may contain bugs, errors, or viruses. Therefore, you should download it at your own risk and responsibility. Second of all, downloading Z Champions Dinheiro Infinito may violate the terms and conditions of the original game and Google Play. Therefore, you may face some consequences, such as losing your progress, getting banned from the game or Google Play, or facing legal action. Therefore, you should download it only if you are willing to accept these risks and consequences.
-If you are still interested in downloading Z Champions Dinheiro Infinito, here are the steps you need to follow:
- Step 1: Enable Unknown Sources on Your Device
-The first step is to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings and look for security or privacy options. Then, find the option that says unknown sources or allow installation from unknown sources and turn it on. You may see a warning message that says this may harm your device or expose your personal data. If you are sure you want to proceed, tap OK or Yes.
- Step 2: Download the Z Champions Dinheiro Infinito APK File
-The next step is to download the Z Champions Dinheiro Infinito APK file. This is the file that contains the modded version of the game with unlimited money. To download it, you will need to find a reliable and trustworthy source online. You can search for it on Google or use one of these links:
-
-- https://apkdone.com/z-champions/
-- https://apkpure.com/z-champions/com.sansonight.zchampions/download
-- https://apkhome.net/z-champions-1-5-397-mod-unlimited-money/
-
-Once you find the link, tap on it and wait for the download to start. You may see a pop-up message that asks you to confirm the download or warns you about potential harm. If you are sure you want to proceed, tap OK or Yes.
- Step 3: Install the Z Champions Dinheiro Infinito APK File
-The final step is to install the Z Champions Dinheiro Infinito APK file on your device. To do this, go to your device file manager and look for the downloaded file. It should be in your downloads folder or wherever you saved it. Then, tap on the file and wait for the installation to begin. You may see a pop-up message that asks you to confirm the installation or warns you about potential harm. If you are sure you want to proceed, tap OK or Yes.
-Once the installation is complete, you should see a new icon on your device screen that says Z Champions Dinheiro Infinito. Tap on it and launch the game.
- Step 4: Enjoy the Game with Unlimited Money
-Congratulations! You have successfully downloaded and installed Z Champions Dinheiro Infinito on your device. Now you can enjoy the game with unlimited money. You will see that you have a lot of money in the game, which you can use to unlock all the characters, worlds, and transformations. You can also access all the modes and features of the game without any restrictions. You can also play the game without any ads or interruptions.
- Tips and Tricks for Playing Z Champions
-Z Champions is a fun and exciting game, but it can also be challenging and complex. If you want to improve your skills and have more fun, here are some tips and tricks that can help you:
- Tip 1: Learn the Controls and Combos
-The first tip is to learn the controls and combos of the game. Z Champions has a simple and intuitive control system, but it also has some advanced and hidden moves that can make a difference in the battle. Here are the basic controls of the game:
-
-- Tap on the left side of the screen to move your character.
-- Tap on the right side of the screen to attack your opponent.
-- Swipe on the right side of the screen to perform a special attack.
-- Tap on the Ki bar at the bottom of the screen to transform your character.
-- Tap on the power up button at the top of the screen to increase your Ki power.
-
-Here are some of the advanced and hidden moves that you can perform:
-
-- Swipe up on the left side of the screen to fly.
-- Swipe down on the left side of the screen to land.
-- Swipe left or right on the left side of the screen to dash.
-- Swipe up or down on the right side of the screen to perform an uppercut or a sweep.
-- Swipe left or right on the right side of the screen to perform a backflip or a frontflip.
-- Tap twice on the right side of the screen to perform a combo attack.
-- Tap three times on the right side of the screen to perform an ultimate attack.
-
-Try to practice these moves and combos in PRACTICE mode or against easy opponents until you master them. They can give you an edge over your enemies and make your fights more dynamic and fun.
- Tip 2: Choose Your Champion Wisely
-The second tip is to choose your champion wisely. Z Champions has 45 different characters to choose from, each with their own strengths, weaknesses, and abilities. Some characters are faster, stronger, or more durable than others. Some characters have better special attacks, transformations, or power ups than others. Some characters are more suitable for certain worlds, modes, or opponents than others.
-To choose your champion wisely, you should consider several factors, such as:
-
-- Your playstyle and preferences: Do you like to be aggressive or defensive? Do you like to use melee or ranged attacks? Do you like to fly or stay on the ground? Do you like to transform or power up?
-- Your opponent's characteristics: What is their speed, strength, durability, and Ki power? What are their special attacks, transformations, and power ups? What are their weaknesses and vulnerabilities?
-- Your world's environment: What is the terrain, weather, and gravity of your world? How does it affect your movement, attacks, and Ki power? How can you use it to your advantage or disadvantage?
-
-To find out more about each character's characteristics, you can tap on their icon in the character selection screen and see their stats and abilities. You can also try them out in PRACTICE mode or SINGLEPLAYER mode before using them in MULTIPLAYER mode. You can also switch between different characters during a fight by tapping on their icon at the bottom of the screen. You can also unlock new characters by completing certain tasks or using money.
- Tip 3: Explore Different Worlds and Modes
-The third tip is to explore different worlds and modes in the game. Z Champions has 22 different worlds to choose from, each with their own background music and scenery. Some worlds are based on the original Dragon Ball Z series, such as Earth, Namek, and Cell Games. Some worlds are based on the movies or spin-offs, such as Broly's Planet, Janemba's Hell, and Future Trunks' Timeline. Some worlds are based on the fan-made creations, such as Dragon Ball AF, Dragon Ball Heroes, and Dragon Ball Multiverse.
-To explore different worlds, you can tap on the world icon in the world selection screen and see their name and image. You can also try them out in PRACTICE mode or SINGLEPLAYER mode before using them in MULTIPLAYER mode. You can also switch between different worlds during a fight by tapping on the world icon at the bottom of the screen. You can also unlock new worlds by completing certain tasks or using money.
-Z Champions also has several modes to choose from, such as PRACTICE mode, where you can practice your skills and combos; SINGLEPLAYER mode, where you can fight against AI opponents; MINIGAMES mode, where you can play some fun mini-games, such as dodgeball, volleyball, and soccer; and MULTIPLAYER mode, where you can fight against other players online or locally.
-To explore different modes, you can tap on the mode icon in the mode selection screen and see their name and description. You can also try them out in PRACTICE mode or SINGLEPLAYER mode before using them in MULTIPLAYER mode. You can also switch between different modes during a fight by tapping on the mode icon at the bottom of the screen.
- Tip 4: Transform and Power Up Your Champion
-The fourth tip is to transform and power up your champion. Z Champions has a unique feature that allows you to transform your character to increase your Ki power and unleash your ultimate attack. Each character has their own transformation, such as Super Saiyan, Super Namekian, Golden Frieza, Majin Buu, etc. Some characters have more than one transformation, such as Goku, who can transform into Super Saiyan 1, 2, 3, 4, God, Blue, Ultra Instinct, etc.
-To transform your character, you need to fill up your Ki bar at the bottom of the screen by attacking or getting hit by your opponent. Once your Ki bar is full, you can tap on it to activate your transformation. You will see a cutscene of your character transforming and then resume the fight with increased Ki power. You can also tap on the Ki bar again to deactivate your transformation if you want to save it for later.
-When you transform your character, you can also unleash your ultimate attack by swiping on the right side of the screen. This will trigger a cinematic sequence of your character performing their most powerful attack, such as Kamehameha, Final Flash, Spirit Bomb, etc. This attack can deal massive damage to your opponent and end the fight quickly.
-Z Champions also has a feature that allows you to power up your character by tapping on the power up button at the top of the screen. This will increase your Ki power temporarily and make your attacks stronger and faster. However, this feature has a cooldown time and can only be used once per fight.
-To use these features effectively, you should know when to transform and power up your character. You should not waste your transformation or power up at the beginning of the fight when your opponent is still weak or when you are not in danger. You should save them for critical moments when you need an extra boost or when you want to finish off your opponent quickly.
- Tip 5: Challenge Yourself with Survival Mode
-The fifth tip is to challenge yourself with survival mode. Survival mode is a special mode that tests your skills and endurance with endless waves of enemies. In this mode, you have to fight against random opponents with increasing difficulty until you lose. You cannot heal or restore your Ki power between fights. You can only use one character and one world for this mode.
-To play survival mode, you need to tap on the survival icon in the mode selection screen and choose your character and world. Then, you will start the fight against the first opponent. After you defeat them, you will move on to the next opponent without any break or pause. The opponents will get harder and harder as you progress. Your score will depend on how many opponents you defeat and how much health and Ki power you have left.
-Survival mode is a great way to challenge yourself and improve your skills in Z Champions. You can also compare your score with other players and see how you rank in the leaderboard. You can also earn some money and rewards by playing this mode.
- Conclusion
-Z Champions is a fan-made arcade game based on Dragon Ball Z that lets you become one of the most powerful Z fighters in the Dragon Tournament. You can choose from 45 different characters, move between 22 different worlds, transform to increase your Ki power, and unleash your ultimate attacks. You can also play different modes, such as PRACTICE, SINGLEPLAYER, MINIGAMES, and MULTIPLAYER.
-If you want to have unlimited money in the game, so that you can unlock all the features and content without any limitations, you can download Z Champions Dinheiro Infinito, which is a modded version of the game that gives you infinite money from the start. To download it, you need to enable unknown sources on your device, download the APK file from a reliable source, install it on your device, and launch the game.
-However, you should be aware of the risks and consequences of downloading Z Champions Dinheiro Infinito, such as bugs, errors, viruses, bans, or legal action. You should also respect the original game and its creators and support them if you like their work.
-If you want to improve your skills and have more fun in Z Champions, you can follow some tips and tricks, such as learning the controls and combos, choosing your champion wisely, exploring different worlds and modes, transforming and powering up your champion, and challenging yourself with survival mode.
-We hope this article has helped you learn more about Z Champions and Z Champions Dinheiro Infinito. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have fun playing Z Champions!
- FAQs
-Here are some frequently asked questions about Z Champions and Z Champions Dinheiro Infinito:
- Q: Is Z Champions an official Dragon Ball Z game?
-A: No, Z Champions is not an official Dragon Ball Z game. It is a fan-made arcade game created by Sansonight. It is not affiliated with or endorsed by Dragon Ball Z or its owners.
- Q: Is Z Champions Dinheiro Infinito safe to download?
-A: Z Champions Dinheiro Infinito is not an official or authorized way of playing the game. It is a modded version that may contain bugs, errors, or viruses. Therefore, you should download it at your own risk and responsibility. You should also scan it with an antivirus before installing it on your device.
- Q: Can I play Z Champions online with other players?
-A: Yes, you can play Z Champions online with other players in MULTIPLAYER mode. You can either join an existing room or create your own room. You can also invite your friends to play with you by sharing your room code.
- Q: Can I play Z Champions offline without internet connection?
-A: Yes, you can play Z Champions offline without internet connection in PRACTICE mode or SINGLEPLAYER mode. However, you will not be able to access some features or content that require internet connection, such as MULTIPLAYER mode or online leaderboard.
- Q: How can I contact the developer of Z Champions?
-A: You can contact the developer of Z Champions by sending an email to sansonight@gmail.com or by visiting their Facebook page at https://www.facebook.com/ZChampionsOfficial/.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download YouTube APK 2012 - The Most Popular Video App in the World.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download YouTube APK 2012 - The Most Popular Video App in the World.md
deleted file mode 100644
index cbcc72e26ee510a9c4911c226fc60ca9e7ac1fa3..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download YouTube APK 2012 - The Most Popular Video App in the World.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-How to Download YouTube APK 2012 for Android
-YouTube is the most popular video-sharing platform in the world, with billions of users and hours of content. However, not everyone is satisfied with the latest version of the YouTube app for Android, which may have some bugs, glitches, or unwanted features. If you are one of those people who prefer the old-school YouTube experience, you might be interested in downloading YouTube APK 2012, which is an older version of the app that was released in December of that year.
-In this article, we will explain what YouTube APK 2012 is, what are its benefits and risks, how to download it safely and easily, and what are some alternatives to it. By the end of this article, you will have a clear idea of whether YouTube APK 2012 is worth trying or not.
-download youtube apk 2012
Download File ✔ https://ssurll.com/2uNS8r
- Benefits of YouTube APK 2012
-YouTube APK 2012 is not just a nostalgic trip down memory lane. It also has some advantages over the current version of the app, such as:
-Faster and smoother performance
-YouTube APK 2012 is lighter and simpler than the current version, which means it consumes less battery, data, and memory. It also runs faster and smoother on older or low-end devices, without crashing or freezing.
-Offline video playback
-YouTube APK 2012 allows you to download videos to your device and watch them offline later. This is useful if you have a limited or unreliable internet connection, or if you want to save some data. You can also choose the quality and format of the downloaded videos.
-No ads or in-app purchases
-YouTube APK 2012 does not have any ads or in-app purchases that may interrupt your viewing experience or tempt you to spend money. You can enjoy watching videos without any distractions or annoyances.
-download youtube apk 2012 version
-download youtube apk 2012 free
-download youtube apk 2012 for android
-download youtube apk 2012 uptodown
-download youtube apk 2012 latest
-download youtube apk 2012 old
-download youtube apk 2012 update
-download youtube apk 2012 offline
-download youtube apk 2012 mod
-download youtube apk 2012 premium
-download youtube apk 2012 pro
-download youtube apk 2012 full
-download youtube apk 2012 cracked
-download youtube apk 2012 hack
-download youtube apk 2012 unlocked
-download youtube apk 2012 original
-download youtube apk 2012 official
-download youtube apk 2012 google play
-download youtube apk 2012 app store
-download youtube apk 2012 file
-download youtube apk 2012 link
-download youtube apk 2012 mirror
-download youtube apk 2012 direct
-download youtube apk 2012 safe
-download youtube apk 2012 secure
-download youtube apk 2012 virus-free
-download youtube apk 2012 malware-free
-download youtube apk 2012 ad-free
-download youtube apk 2012 no ads
-download youtube apk 2012 no root
-download youtube apk 2012 root required
-download youtube apk 2012 compatible devices
-download youtube apk 2012 supported devices
-download youtube apk 2012 system requirements
-download youtube apk 2012 features
-download youtube apk 2012 benefits
-download youtube apk 2012 advantages
-download youtube apk 2012 disadvantages
-download youtube apk 2012 drawbacks
-download youtube apk 2012 reviews
-download youtube apk 2012 ratings
-download youtube apk 2012 testimonials
-download youtube apk 2012 feedbacks
-download youtube apk 2012 comments
-download youtube apk 2012 opinions
-download youtube apk 2012 tips
-download youtube apk 2012 tricks
-download youtube apk 2012 guides
-download youtube apk 2012 tutorials
- Risks of YouTube APK 2012
-However, YouTube APK 2012 is not without its drawbacks. There are some risks involved in downloading and using it, such as:
-Security and privacy issues
-YouTube APK 2012 is not an official app from Google, which means it may not be safe or secure. It may contain malware, viruses, spyware, or other harmful software that may damage your device or steal your personal information. You should always scan any APK file before installing it on your device.
-Compatibility and update problems
-YouTube APK 2012 may not be compatible with your device or operating system, especially if they are newer or updated. It may also not support some features or functions that are available on the current version of the app, such as live streaming, captions, playlists, etc. Moreover, YouTube APK 2012 will not receive any updates or bug fixes from Google, which may affect its performance or functionality over time.
-Legal and ethical concerns
-YouTube APK 2012 may violate the terms of service or the intellectual property rights of Google or YouTube, which may result in legal action or penalties. It may also be unethical to use YouTube APK 2012, as it deprives the creators and the platform of their rightful revenue and recognition. You should always respect the rights and wishes of the content owners and providers.
- How to Download YouTube APK 2012 Safely and Easily
-If you still want to download YouTube APK 2012 despite the risks, you should follow these steps to do it safely and easily:
-Find a reliable source
-You should not download YouTube APK 2012 from any random website or link, as they may be fake, corrupted, or infected. You should only download it from a trusted and reputable source, such as [APKMirror] or [Uptodown], which are well-known and verified platforms for downloading APK files. You can also check the reviews, ratings, and comments of other users before downloading.
-Enable unknown sources on your device
-By default, your device will not allow you to install apps from unknown sources, which are sources other than the Google Play Store. To enable unknown sources, you need to go to your device settings, then security, then toggle on the option that says "allow installation of apps from unknown sources". This will allow you to install YouTube APK 2012 on your device.
-Install the APK file
-Once you have downloaded YouTube APK 2012 from a reliable source and enabled unknown sources on your device, you can install it by tapping on the APK file and following the instructions on the screen. You may need to grant some permissions or accept some terms and conditions before installing. After installing, you can launch YouTube APK 2012 and enjoy watching videos.
- Alternatives to YouTube APK 2012
-If you are not comfortable with downloading YouTube APK 2012, or if you want to try some other options, you can check out these alternatives that offer similar or better features than YouTube APK 2012:
-YouTube Vanced
-YouTube Vanced is a modded version of YouTube that offers many features that are not available on the official app, such as ad-blocking, background playback, dark mode, picture-in-picture mode, etc. It also supports the latest version of YouTube and its features. You can download YouTube Vanced from its [official website] or from [XDA Developers].
-NewPipe
-NewPipe is an open-source app that allows you to watch YouTube videos without using the YouTube API or any Google services. It offers features such as downloading videos and audio, playing videos in a pop-up window or in the background, subscribing to channels without an account, etc. It also respects your privacy and does not collect any data. You can download NewPipe from [F-Droid] or from its [GitHub page].
-OGYouTube
-OGYouTube is another modded version of YouTube that offers features such as downloading videos in various formats and resolutions, playing videos in a pop-up window or in the background, removing ads, etc. It also looks and feels like the original YouTube app. You can download OGYouTube from [APKPure] or from [APK4Fun].
- Conclusion
-In this article, we have discussed how to download YouTube APK 2012 for Android, what are its benefits and risks, how to download it safely and easily, and what are some alternatives to it. We hope that this article has been helpful and informative for you.
-However, we do not recommend downloading YouTube APK 2012 unless you have a valid reason and are aware of the consequences. It is better to use the official YouTube app or one of the alternatives we have mentioned above, as they are safer, more reliable, and more updated than YouTube APK 2012.
- FAQs
-What is an APK file?
-An APK file is an Android Package file that contains all the files and code needed to install and run an app on an Android device. It is similar to an EXE file for Windows or a DMG file for Mac.
-Is YouTube APK 2012 legal?
-YouTube APK 2012 is not illegal per se, but it may violate the terms of service or the intellectual property rights of Google or YouTube, which may result in legal action or penalties. You should always respect the rights and wishes of the content owners and providers.
-How can I update YouTube APK 2012?
-You cannot update YouTube APK 2012 from the Google Play Store or from the app itself, as it is not an official app from Google. You will have to download and install a newer version of YouTube APK from a reliable source, or switch to the official YouTube app or one of the alternatives we have mentioned above.
-How can I uninstall YouTube APK 2012?
-You can uninstall YouTube APK 2012 like any other app on your device. You can go to your device settings, then apps, then find and select YouTube APK 2012, then tap on uninstall. You can also long-press on the app icon and drag it to the trash bin.
-What are some other old versions of YouTube APK?
-There are many other old versions of YouTube APK that you can download and install on your device, such as YouTube APK 2011, YouTube APK 2010, YouTube APK 2009, etc. However, they may have similar or worse risks and drawbacks than YouTube APK 2012, so we do not recommend using them either.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install PhpSpreadsheet IOFactory to Work with Excel and LibreOffice Calc Formats.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install PhpSpreadsheet IOFactory to Work with Excel and LibreOffice Calc Formats.md
deleted file mode 100644
index ced0f8536bc294931094083076eba3ebfa5f3492..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download and Install PhpSpreadsheet IOFactory to Work with Excel and LibreOffice Calc Formats.md
+++ /dev/null
@@ -1,185 +0,0 @@
-
-How to Download Spreadsheet Files Using PhpSpreadsheet IOFactory
-If you are working with spreadsheet files in PHP, you might have heard of PhpSpreadsheet, a library that allows you to read and write various spreadsheet file formats such as Excel and LibreOffice Calc. In this article, we will show you how to use one of its features, the IOFactory class, to download spreadsheet files directly to the browser without saving them on the server.
-phpoffice phpspreadsheet iofactory download
Download Zip ===> https://ssurll.com/2uNQpY
-What is PhpSpreadsheet?
-A brief introduction to the library and its features
-PhpSpreadsheet is a library written in pure PHP that offers a set of classes that allow you to manipulate spreadsheet files programmatically. It is based on the PHPExcel project, which is no longer maintained. Some of the features of PhpSpreadsheet are:
-
-- Reading and writing spreadsheet files in various formats, such as XLSX, XLS, ODS, CSV, HTML, and PDF.
-- Creating and modifying worksheets, cells, ranges, formulas, styles, charts, images, comments, and more.
-- Performing calculations and validations on spreadsheet data.
-- Supporting advanced features such as conditional formatting, data filtering, pivot tables, macros, encryption, and digital signatures.
-
-The supported file formats and how to install it
-PhpSpreadsheet supports reading and writing the following file formats:
-
-| Format | Reader | Writer |
-| --- | --- | --- |
-| XLSX | Yes | Yes |
-| XLS | Yes | Yes |
-| ODS | Yes | Yes |
-| CSV | Yes | Yes |
-| HTML | Yes | Yes |
-| PDF | No | Yes (with additional libraries) |
-| Gnumeric | Yes | No |
-| Sylk (SLK) | No | Yes (with additional libraries) |
-| XML Spreadsheet 2003 (XML) | No | No (with additional libraries) |
-| Biff 5/8 (BIFF5/BIFF8) | No (with additional libraries) | No (with additional libraries) |
-| Biff 2/3/4 (BIFF2/BIFF3/BIFF4) | No (with additional libraries) | No (with additional libraries) |
-
-To install PhpSpreadsheet, you need to have PHP version 7.2 or higher and Composer, a dependency manager for PHP. You can use the following command to install PhpSpreadsheet via Composer:
-composer require phpoffice/phpspreadsheet
-This will download the latest version of PhpSpreadsheet and its dependencies to your project folder.
-phpspreadsheet iofactory createwriter example
-phpspreadsheet iofactory load csv
-phpspreadsheet iofactory registerwriter pdf
-phpspreadsheet iofactory identify filetype
-phpspreadsheet iofactory create reader from file
-phpspreadsheet iofactory save to browser
-phpspreadsheet iofactory load multiple files
-phpspreadsheet iofactory write to string
-phpspreadsheet iofactory load xlsx
-phpspreadsheet iofactory createwriter zip
-phpspreadsheet iofactory load from url
-phpspreadsheet iofactory write to stream
-phpspreadsheet iofactory load ods
-phpspreadsheet iofactory createwriter html
-phpspreadsheet iofactory load from memory
-phpspreadsheet iofactory write to temp file
-phpspreadsheet iofactory load xls
-phpspreadsheet iofactory createwriter csv
-phpspreadsheet iofactory load from string
-phpspreadsheet iofactory write to response
-phpspreadsheet iofactory load xml
-phpspreadsheet iofactory createwriter xlsx
-phpspreadsheet iofactory load from database
-phospreasheet iofactory write to s3
-phospreasheet iofactory load gnumeric
-phpoffice phpspreadsheet download composer
-phpoffice phpspreadsheet download github
-phpoffice phpspreadsheet download pdf
-phpoffice phpspreadsheet download zip
-phpoffice phpspreadsheet download html
-phpoffice phpspreadsheet download csv
-phpoffice phpspreadsheet download xlsx
-phpoffice phospreasheet download ods
-phpoffice phospreasheet download xls
-phpoffice phospreasheet download xml
-phpoffice phospreasheet download docx
-phpoffice phospreasheet download png
-phpoffice phospreasheet download jpg
-phpoffice phospreasheet download svg
-phpoffice phospreasheet download chart
-phpoffice spreadsheet tutorial pdf download
-phpoffice spreadsheet documentation download
-phpoffice spreadsheet examples download
-phpoffice spreadsheet templates download
-phpoffice spreadsheet reader download
-phpoffice spreadsheet writer download
-phpoffice spreadsheet style download
-phpoffice spreadsheet formula download
-phpoffice spreadsheet cell download
-What is IOFactory?
-A class that provides a factory method for creating readers and writers
-IOFactory is a class that belongs to the PhpOffice\PhpSpreadsheet\IOFactory namespace. It provides a static method called createWriter() that takes a spreadsheet object and a file format as parameters and returns a writer object that can save the spreadsheet to that format. Similarly, it provides another static method called createReader() that takes a file format as a parameter and returns a reader object that can load a spreadsheet from that format.
-The advantages of using IOFactory over specific classes
-One of the advantages of using IOFactory is that you don't need to know the exact class name of the reader or writer for each file format. For example, if you want to save a spreadsheet as an XLSX file, you don't need to use the Xlsx class directly. You can just use IOFactory::createWriter($spreadsheet, 'Xlsx') and it will return an instance of the Xlsx class for you. This makes your code more flexible and easier to maintain.
-Another advantage of using IOFactory is that it can automatically detect the file format based on the file extension or the file content. For example, if you want to load a spreadsheet from a file, you don't need to specify the file format. You can just use IOFactory::load($filename) and it will return a spreadsheet object with the data from the file. This makes your code more robust and user-friendly.
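-To make this concrete, here is a minimal reading sketch. It is not taken from the original article: it assumes Composer autoloading via vendor/autoload.php and invents a file named sample.xlsx purely for illustration.
-require 'vendor/autoload.php';
-use PhpOffice\PhpSpreadsheet\IOFactory;
-// Let IOFactory detect the format from the file itself
-$spreadsheet = IOFactory::load('sample.xlsx');
-// Or create a reader for a known format explicitly and skip formatting
-$reader = IOFactory::createReader('Xlsx');
-$reader->setReadDataOnly(true);
-$spreadsheet = $reader->load('sample.xlsx');
-// Read a single cell value from the active worksheet
-echo $spreadsheet->getActiveSheet()->getCell('A1')->getValue();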
-How to Download Spreadsheet Files Using IOFactory?
-The steps to create a spreadsheet object and populate it with data
-To download spreadsheet files using IOFactory, you first need to create a spreadsheet object and populate it with some data. You can use the Spreadsheet class from the PhpOffice\PhpSpreadsheet\Spreadsheet namespace to create a new spreadsheet object. You can then use the methods and properties of the Spreadsheet class and its related classes, such as Worksheet, Cell, Style, etc., to manipulate the spreadsheet data and appearance.
-For example, you can use the following code to create a simple spreadsheet with some sample data:
-// Create a new spreadsheet object
-$spreadsheet = new Spreadsheet();
-// Get the active worksheet
-$worksheet = $spreadsheet->getActiveSheet();
-// Set the worksheet title
-$worksheet->setTitle('Sample Data');
-// Set some cell values
-$worksheet->setCellValue('A1', 'Name');
-$worksheet->setCellValue('B1', 'Age');
-$worksheet->setCellValue('C1', 'Gender');
-$worksheet->setCellValue('A2', 'Alice');
-$worksheet->setCellValue('B2', '25');
-$worksheet->setCellValue('C2', 'Female');
-$worksheet->setCellValue('A3', 'Bob');
-$worksheet->setCellValue('B3', '30');
-$worksheet->setCellValue('C3', 'Male');
-// Set some cell styles
-$worksheet->getStyle('A1:C1')->getFont()->setBold(true);
-$worksheet->getStyle('A1:C1')->getAlignment()->setHorizontal('center');
-$worksheet->getStyle('A2:C3')->getAlignment()->setHorizontal('left');
-The code snippet to set the headers and output the file to the browser
-Once you have created and populated your spreadsheet object, you can use IOFactory::createWriter() to create a writer object for your desired file format. You can then use the save() method of the writer object to output the file to the browser. However, before doing that, you need to set some headers to tell the browser that you are sending a file and what its name and type are. You also need to disable any output buffering and clean any previous output.
-For example, you can use the following code to download your spreadsheet as an XLSX file:
-// Create a writer object for XLSX format
-$writer = IOFactory::createWriter($spreadsheet, 'Xlsx');
-// Set the headers
-header('Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
-header('Content-Disposition: attachment;filename="sample.xlsx"');
-header('Cache-Control: max-age=0');
-// Disable output buffering
-if (ob_get_level()) {
-    ob_end_clean();
-}
-// Output the file to the browser
-$writer->save('php://output');
-The optional parameters and settings for customizing the output
-Depending on the file format and your requirements, you can also use some optional parameters and settings to customize the output of your spreadsheet file. For example, you can use the following options (a short code sketch follows the list):
-
-- If you want to download your spreadsheet as a CSV file, you can use the setDelimiter(), setEnclosure(), and setLineEnding() methods of the Csv class to specify the delimiter, enclosure, and line ending characters for your CSV file.
-- If you want to download your spreadsheet as a PDF file, you can use the setFont() method of the Pdf class to specify the font name for your PDF file. You can also use the setPaperSize() and setOrientation() methods of the Worksheet\PageSetup class to specify the paper size and orientation for your PDF file.
-- If you want to download your spreadsheet as an HTML file, you can use the setEmbedImages() method of the Html class to specify whether to embed images in your HTML file or not. You can also use the setUseInlineCss() method of the Html class to specify whether to use inline CSS styles or not.
-
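-As a brief illustration of the CSV and page setup options mentioned above, here is a sketch; the delimiter, paper size, and file names are arbitrary choices, and it assumes a use statement for the PageSetup class from the PhpOffice\PhpSpreadsheet\Worksheet namespace:
-// Customize the CSV output
-$csvWriter = IOFactory::createWriter($spreadsheet, 'Csv');
-$csvWriter->setDelimiter(';');
-$csvWriter->setEnclosure('"');
-$csvWriter->setLineEnding("\r\n");
-$csvWriter->save('sample.csv');
-// Set the paper size and orientation on the worksheet before writing a PDF
-$pageSetup = $worksheet->getPageSetup();
-$pageSetup->setPaperSize(PageSetup::PAPERSIZE_A4);
-$pageSetup->setOrientation(PageSetup::ORIENTATION_LANDSCAPE);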
-Conclusion
-A summary of the main points and benefits of using IOFactory
-In this article, we have shown you how to use IOFactory, a class that provides a factory method for creating readers and writers for various spreadsheet file formats. We have also shown you how to create a spreadsheet object, populate it with data, and download it directly to the browser using IOFactory. By using IOFactory, you can:
-
-- Write less code and avoid hard-coding class names for each file format.
-- Automatically detect the file format based on the file extension or content.
-- Customize the output of your spreadsheet file with optional parameters and settings.
-
-A call to action and a link to the official documentation
-If you want to learn more about PhpSpreadsheet and IOFactory, you can visit the official documentation at https://phpspreadsheet.readthedocs.io/en/latest/. There you can find more examples, tutorials, and reference materials for working with spreadsheet files in PHP. You can also check out the source code and contribute to the project on GitHub at https://github.com/PHPOffice/PhpSpreadsheet.
-We hope you have enjoyed this article and found it useful. If you have any questions or feedback, please feel free to leave a comment below. Happy coding!
-FAQs
-How can I read spreadsheet files using IOFactory?
-To read spreadsheet files using IOFactory, you can use the load() method of IOFactory with the filename as a parameter. This will return a spreadsheet object with the data from the file. For example:
-// Load a spreadsheet from an XLSX file
-$spreadsheet = IOFactory::load('sample.xlsx');
-How can I write spreadsheet files to a local file using IOFactory?
-To write spreadsheet files to a local file using IOFactory, you can use the save() method of the writer object with the filename as a parameter. This will save the spreadsheet object to that file. For example:
-// Save a spreadsheet as an ODS file
-$writer = IOFactory::createWriter($spreadsheet, 'Ods');
-$writer->save('sample.ods');
-How can I use different PDF libraries with IOFactory?
-To use different PDF libraries with IOFactory, you need to install them separately and register them with IOFactory using the registerWriter() method. The PDF libraries supported out of the box are Dompdf, Mpdf, and Tcpdf. For example:
-// Install Dompdf via Composer:
-//     composer require dompdf/dompdf
-// Register the Dompdf-based writer for the Pdf format
-IOFactory::registerWriter('Pdf', \PhpOffice\PhpSpreadsheet\Writer\Pdf\Dompdf::class);
-How can I render charts with IOFactory?
-To render charts with IOFactory, you need to create chart objects using the Chart class from the PhpOffice\PhpSpreadsheet\Chart namespace. You can then add them to your worksheet using the addChart() method of the Worksheet class. However, not all file formats support charts. Currently, only XLSX and PDF formats support charts. For example:
-// Build the data series for the chart (Age per Name, using the sample data above)
-$seriesLabels = [new DataSeriesValues('String', '\'Sample Data\'!$B$1', null, 1)];
-$xAxisTickValues = [new DataSeriesValues('String', '\'Sample Data\'!$A$2:$A$3', null, 2)];
-$dataSeriesValues = [new DataSeriesValues('Number', '\'Sample Data\'!$B$2:$B$3', null, 2)];
-$series = new DataSeries(
-    DataSeries::TYPE_BARCHART_3D,            // plot type
-    DataSeries::GROUPING_STANDARD,           // grouping
-    range(0, count($dataSeriesValues) - 1),  // plot order
-    $seriesLabels,                           // series labels
-    $xAxisTickValues,                        // x-axis labels
-    $dataSeriesValues                        // data values
-);
-// Create the chart object
-$chart = new Chart(
-    'Sample Chart',                          // name
-    new Title('Chart Title'),                // title
-    new Legend(Legend::POSITION_RIGHT, null, false), // legend
-    new PlotArea(null, [$series])            // plot area
-);
-$chart->setTopLeftPosition('E1');
-$chart->setBottomRightPosition('L15');
-// Add the chart to the worksheet
-$worksheet->addChart($chart);
-// Save the spreadsheet as a PDF file with charts
-// Note: rendering charts into PDF/HTML output requires a chart renderer (e.g. JpGraph) configured via Settings::setChartRenderer()
-$writer = IOFactory::createWriter($spreadsheet, 'Pdf');
-$writer->setIncludeCharts(true);
-$writer->save('sample.pdf');
-How can I handle errors and exceptions with IOFactory?
-To handle errors and exceptions with IOFactory, you can wrap your calls in try-catch blocks to catch any exceptions thrown by IOFactory or its related classes. You can then use the getMessage() method of the Exception class to get the error message and display it or log it as you wish. For example:
-// Try to load a spreadsheet from an invalid file
-try {
-    $spreadsheet = IOFactory::load('invalid.file');
-} catch (Exception $e) {
-    // Catch the exception and get the error message
-    $error = $e->getMessage();
-    // Display the error message or log it as you wish
-    echo "Error: $error";
-}
-
-
\ No newline at end of file
diff --git a/spaces/sinz2002/ChuanhuChatGPT/assets/custom.css b/spaces/sinz2002/ChuanhuChatGPT/assets/custom.css
deleted file mode 100644
index af5e9f2118b843b3bbd7627ed45e970c20b13bef..0000000000000000000000000000000000000000
--- a/spaces/sinz2002/ChuanhuChatGPT/assets/custom.css
+++ /dev/null
@@ -1,353 +0,0 @@
-:root {
- --chatbot-color-light: #F3F3F3;
- --chatbot-color-dark: #121111;
-}
-
-#app_title {
- font-weight: var(--prose-header-text-weight);
- font-size: var(--text-xxl);
- line-height: 1.3;
- text-align: left;
- margin-top: 6px;
- white-space: nowrap;
-}
-#description {
- text-align: center;
- margin:16px 0
-}
-
-/* Override gradio's footer info QAQ */
-/* footer {
- display: none !important;
-} */
-#footer {
- text-align: center;
-}
-#footer div {
- display: inline-block;
-}
-#footer .versions{
- font-size: 85%;
- opacity: 0.85;
-}
-
-#float_display {
- position: absolute;
- max-height: 30px;
-}
-/* user_info */
-#user_info {
- white-space: nowrap;
- position: absolute; left: 8em; top: .2em;
- z-index: var(--layer-2);
- box-shadow: var(--block-shadow);
- border: none; border-radius: var(--block-label-radius);
- background: var(--color-accent);
- padding: var(--block-label-padding);
- font-size: var(--block-label-text-size); line-height: var(--line-sm);
- width: auto; min-height: 30px!important;
- opacity: 1;
- transition: opacity 0.3s ease-in-out;
-}
-#user_info .wrap {
- opacity: 0;
-}
-#user_info p {
- color: white;
- font-weight: var(--block-label-text-weight);
-}
-#user_info.hideK {
- opacity: 0;
- transition: opacity 1s ease-in-out;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-
-#status_display {
- transition: all 0.6s;
-}
-#chuanhu_chatbot {
- transition: height 0.3s ease;
-}
-
-/* usage_display */
-.insert_block {
- position: relative;
- margin: 0;
- padding: .5em 1em;
- box-shadow: var(--block-shadow);
- border-width: var(--block-border-width);
- border-color: var(--block-border-color);
- border-radius: var(--block-radius);
- background: var(--block-background-fill);
- width: 100%;
- line-height: var(--line-sm);
- min-height: 2em;
-}
-#usage_display p, #usage_display span {
- margin: 0;
- font-size: .85em;
- color: var(--body-text-color-subdued);
-}
-.progress-bar {
- background-color: var(--input-background-fill);
- margin: 0 1em;
- height: 20px;
- border-radius: 10px;
- overflow: hidden;
-}
-.progress {
- background-color: var(--block-title-background-fill);
- height: 100%;
- border-radius: 10px;
- text-align: right;
- transition: width 0.5s ease-in-out;
-}
-.progress-text {
- /* color: white; */
- color: var(--color-accent) !important;
- font-size: 1em !important;
- font-weight: bold;
- padding-right: 10px;
- line-height: 20px;
-}
-
-.apSwitch {
- top: 2px;
- display: inline-block;
- height: 24px;
- position: relative;
- width: 48px;
- border-radius: 12px;
-}
-.apSwitch input {
- display: none !important;
-}
-.apSlider {
- background-color: var(--block-label-background-fill);
- bottom: 0;
- cursor: pointer;
- left: 0;
- position: absolute;
- right: 0;
- top: 0;
- transition: .4s;
- font-size: 18px;
- border-radius: 12px;
-}
-.apSlider::before {
- bottom: -1.5px;
- left: 1px;
- position: absolute;
- transition: .4s;
- content: "🌞";
-}
-input:checked + .apSlider {
- background-color: var(--block-label-background-fill);
-}
-input:checked + .apSlider::before {
- transform: translateX(23px);
- content:"🌚";
-}
-
-#submit_btn, #cancel_btn {
- height: 42px !important;
-}
-#submit_btn::before {
- content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-#cancel_btn::before {
- content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* Light mode (default) */
-#chuanhu_chatbot {
- background-color: var(--chatbot-color-light) !important;
- color: #000000 !important;
-}
-[data-testid = "bot"] {
- background-color: #FFFFFF !important;
-}
-[data-testid = "user"] {
- background-color: #95EC69 !important;
-}
-/* Dark mode */
-.dark #chuanhu_chatbot {
- background-color: var(--chatbot-color-dark) !important;
- color: #FFFFFF !important;
-}
-.dark [data-testid = "bot"] {
- background-color: #2C2C2C !important;
-}
-.dark [data-testid = "user"] {
- background-color: #26B561 !important;
-}
-
-/* Devices with screen width >= 500px */
-/* update on 2023.4.8: fine-grained height adjustments have been moved into JavaScript */
-@media screen and (min-width: 500px) {
- #chuanhu_chatbot {
- height: calc(100vh - 200px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
-}
-/* Devices with screen width < 500px */
-@media screen and (max-width: 499px) {
- #chuanhu_chatbot {
- height: calc(100vh - 140px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
- [data-testid = "bot"] {
- max-width: 98% !important;
- }
- #app_title h1{
- letter-spacing: -1px; font-size: 22px;
- }
-}
-/* Chat bubbles */
-[class *= "message"] {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-/* Tables */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-code {
- display: inline;
- white-space: break-spaces;
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks */
-pre code {
- display: block;
- overflow: auto;
- white-space: pre;
- background-color: hsla(0, 0%, 0%, 80%)!important;
- border-radius: 10px;
- padding: 1.4em 1.2em 0em 1.4em;
- margin: 1.2em 2em 1.2em 0.5em;
- color: #FFF;
- box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* Code highlighting styles */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color: #66d9ef } /* Name.Constant */
-.highlight .nd { color: #a6e22e } /* Name.Decorator */
-.highlight .ni { color: #f8f8f2 } /* Name.Entity */
-.highlight .ne { color: #a6e22e } /* Name.Exception */
-.highlight .nf { color: #a6e22e } /* Name.Function */
-.highlight .nl { color: #f8f8f2 } /* Name.Label */
-.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
-.highlight .nx { color: #a6e22e } /* Name.Other */
-.highlight .py { color: #f8f8f2 } /* Name.Property */
-.highlight .nt { color: #f92672 } /* Name.Tag */
-.highlight .nv { color: #f8f8f2 } /* Name.Variable */
-.highlight .ow { color: #f92672 } /* Operator.Word */
-.highlight .w { color: #f8f8f2 } /* Text.Whitespace */
-.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
-.highlight .mf { color: #ae81ff } /* Literal.Number.Float */
-.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
-.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
-.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
-.highlight .sa { color: #e6db74 } /* Literal.String.Affix */
-.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
-.highlight .sc { color: #e6db74 } /* Literal.String.Char */
-.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
-.highlight .sd { color: #e6db74 } /* Literal.String.Doc */
-.highlight .s2 { color: #e6db74 } /* Literal.String.Double */
-.highlight .se { color: #ae81ff } /* Literal.String.Escape */
-.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
-.highlight .si { color: #e6db74 } /* Literal.String.Interpol */
-.highlight .sx { color: #e6db74 } /* Literal.String.Other */
-.highlight .sr { color: #e6db74 } /* Literal.String.Regex */
-.highlight .s1 { color: #e6db74 } /* Literal.String.Single */
-.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
-.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
-.highlight .fm { color: #a6e22e } /* Name.Function.Magic */
-.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
-.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
-.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
-.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
-.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
diff --git a/spaces/siya02/Konakni-TTS/ttsv/src/hifi_gan/inference_e2e.py b/spaces/siya02/Konakni-TTS/ttsv/src/hifi_gan/inference_e2e.py
deleted file mode 100644
index 062aecd4280925336ab1d36420d2cd47febf661c..0000000000000000000000000000000000000000
--- a/spaces/siya02/Konakni-TTS/ttsv/src/hifi_gan/inference_e2e.py
+++ /dev/null
@@ -1,91 +0,0 @@
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import glob
-import os
-import numpy as np
-import argparse
-import json
-import torch
-from scipy.io.wavfile import write
-from env import AttrDict
-from meldataset import MAX_WAV_VALUE
-from models import Generator
-
-h = None
-device = None
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + "*")
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return ""
- return sorted(cp_list)[-1]
-
-
-def inference(a):
- generator = Generator(h).to(device)
-
- state_dict_g = load_checkpoint(a.checkpoint_file, device)
- generator.load_state_dict(state_dict_g["generator"])
-
- filelist = os.listdir(a.input_mels_dir)
-
- os.makedirs(a.output_dir, exist_ok=True)
-
- generator.eval()
- generator.remove_weight_norm()
- with torch.no_grad():
- for i, filname in enumerate(filelist):
- x = np.load(os.path.join(a.input_mels_dir, filname))
- x = torch.FloatTensor(x).to(device)
- y_g_hat = generator(x)
- audio = y_g_hat.squeeze()
- audio = audio * MAX_WAV_VALUE
- audio = audio.cpu().numpy().astype("int16")
-
- output_file = os.path.join(
- a.output_dir, os.path.splitext(filname)[0] + "_generated_e2e.wav"
- )
- write(output_file, h.sampling_rate, audio)
- print(output_file)
-
-
-def main():
- print("Initializing Inference Process..")
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--input_mels_dir", default="test_mel_files")
- parser.add_argument("--output_dir", default="generated_files_from_mel")
- parser.add_argument("--checkpoint_file", required=True)
- a = parser.parse_args()
-
- config_file = os.path.join(os.path.split(a.checkpoint_file)[0], "config.json")
- with open(config_file) as f:
- data = f.read()
-
- global h
- json_config = json.loads(data)
- h = AttrDict(json_config)
-
- torch.manual_seed(h.seed)
- global device
- if torch.cuda.is_available():
- torch.cuda.manual_seed(h.seed)
- device = torch.device("cuda")
- else:
- device = torch.device("cpu")
-
- inference(a)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/smajumdar/nemo_conformer_rnnt_large/README.md b/spaces/smajumdar/nemo_conformer_rnnt_large/README.md
deleted file mode 100644
index 79a69e97da8b42fa0a6a375ff4a1b4d1f99c0d2a..0000000000000000000000000000000000000000
--- a/spaces/smajumdar/nemo_conformer_rnnt_large/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Nemo_conformer_rnnt_large
-emoji: 🐠
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/multi_corpus_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/multi_corpus_dataset.py
deleted file mode 100644
index 746155e515897da9fc9c803f9396a45b5cead8d0..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/multi_corpus_dataset.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import time
-from collections import OrderedDict
-from typing import Dict, List
-
-import numpy as np
-from fairseq.data import data_utils
-
-from . import FairseqDataset
-
-logger = logging.getLogger(__name__)
-
-
-class MultiCorpusDataset(FairseqDataset):
- """
- Stores multiple instances of FairseqDataset together. Requires each instance
- to be the same dataset, as the collate method needs to work on batches with
- samples from each dataset.
-
- Allows specifying a distribution over the datasets to use. Note that unlike
- MultiCorpusSampledDataset, this distribution allows sampling for each item,
- rather than on a batch level.
-
- Each time ordered_indices() is called, a new sample is generated with
- the specified distribution.
-
- Args:
-        datasets: an OrderedDict of FairseqDataset instances.
-        distribution: a List containing the probability of getting an utterance from
-            corresponding dataset
-        seed: random seed for sampling the datasets
- sort_indices: if true, will sort the ordered indices by size
- batch_sample: if true, will ensure each batch is from a single dataset
- """
-
- def __init__(
- self,
- datasets: Dict[str, FairseqDataset],
- distribution: List[float],
- seed: int,
- sort_indices: bool = False,
- batch_sample: bool = False,
- distributed_rank=None,
- ):
- super().__init__()
- assert isinstance(datasets, OrderedDict)
- assert len(datasets) == len(distribution)
- assert sum(distribution) == 1
- self.datasets = datasets
- self.distribution = distribution
- self.seed = seed
- self.sort_indices = sort_indices
- self.batch_sample = batch_sample
- self.distributed_rank = distributed_rank
-
- # Avoid repeated conversions to list later
- self.dataset_list = list(datasets.values())
- self.total_num_instances = 0
-
- first_dataset = list(self.datasets.values())[0]
-
- self.dataset_offsets = []
- for dataset in datasets.values():
- assert isinstance(dataset, FairseqDataset)
- assert type(dataset) is type(first_dataset)
- self.dataset_offsets.append(self.total_num_instances)
- self.total_num_instances += len(dataset)
-
- def ordered_indices(self):
- start = time.time()
- with data_utils.numpy_seed(self.seed, self.epoch):
- logger.info(f"sampling new dataset with seed {self.seed} epoch {self.epoch}")
- sampled_indices = []
- num_selected_instances = 0
-
- # For each dataset i, sample self.distribution[i] * self.total_num_instances
- for i, key in enumerate(self.datasets):
-
- if i < len(self.datasets) - 1:
- num_instances = int(self.distribution[i] * self.total_num_instances)
- high = self.dataset_offsets[i + 1]
- else:
- num_instances = self.total_num_instances - num_selected_instances
- high = self.total_num_instances
-
- logger.info(f"sampling {num_instances} from {key} dataset")
- num_selected_instances += num_instances
-
- # First, add k copies of the dataset where k = num_instances // len(dataset).
- # This ensures an equal distribution of the data points as much as possible.
- # For the remaining entries randomly sample them
- dataset_size = len(self.datasets[key])
- num_copies = num_instances // dataset_size
- dataset_indices = (
- np.random.permutation(high - self.dataset_offsets[i])
- + self.dataset_offsets[i]
- )[: num_instances - num_copies * dataset_size]
- if num_copies > 0:
- sampled_indices += list(
- np.concatenate(
- (
- np.repeat(
- np.arange(self.dataset_offsets[i], high), num_copies
- ),
- dataset_indices,
- )
- )
- )
- else:
- sampled_indices += list(dataset_indices)
-
- assert (
- len(sampled_indices) == self.total_num_instances
- ), f"{len(sampled_indices)} vs {self.total_num_instances}"
-
- np.random.shuffle(sampled_indices)
- if self.sort_indices:
- sampled_indices.sort(key=lambda i: self.num_tokens(i))
-
- logger.info(
- "multi_corpus_dataset ordered_indices took {}s".format(
- time.time() - start
- )
- )
- return np.array(sampled_indices, dtype=np.int64)
-
- def _map_index(self, index: int):
- """
- If dataset A has length N and dataset B has length M
- then index 1 maps to index 1 of dataset A, and index N + 1
- maps to index 1 of B.
- """
- counter = 0
- for key, dataset in self.datasets.items():
- if index < counter + len(dataset):
- return index - counter, key
- counter += len(dataset)
- raise ValueError(
- "Invalid index: {}, max: {}".format(index, self.total_num_instances)
- )
-
- def __len__(self):
- """
- Length of this dataset is the sum of individual datasets
- """
- return self.total_num_instances
-
- def __getitem__(self, index):
- new_index, key = self._map_index(index)
- try:
- item = self.datasets[key][new_index]
- item["full_id"] = index
- return item
- except Exception as e:
- e.args = (f"Error from {key} dataset", *e.args)
- raise
-
- def collater(self, samples):
- """
- If we are doing batch sampling, then pick the right collater to use.
-
- Otherwise we assume all collaters are the same.
- """
- if len(samples) == 0:
- return None
- if "full_id" in samples[0]:
- _, key = self._map_index(samples[0]["full_id"])
- try:
- batch = self.datasets[key].collater(samples)
- except Exception:
- print(f"Collating failed for key {key}", flush=True)
- raise
- return batch
- else:
- # Subclasses may override __getitem__ to not specify full_id
- return list(self.datasets.values())[0].collater(samples)
-
- def num_tokens(self, index: int):
- index, key = self._map_index(index)
- return self.datasets[key].num_tokens(index)
-
- def size(self, index: int):
- index, key = self._map_index(index)
- return self.datasets[key].size(index)
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return False
-
- def set_epoch(self, epoch, **unused):
- super().set_epoch(epoch)
- logger.info(f"setting epoch of multi_corpus_dataset to {epoch}")
- self.epoch = epoch
-
- @property
- def supports_prefetch(self):
- return False
-
- @property
- def supports_fetch_outside_dataloader(self):
- return all(
- self.datasets[key].supports_fetch_outside_dataloader
- for key in self.datasets
- )
-
- def batch_by_size(
- self,
- indices,
- max_tokens=None,
- max_sentences=None,
- required_batch_size_multiple=1,
- ):
- if not self.batch_sample:
- return super().batch_by_size(
- indices, max_tokens, max_sentences, required_batch_size_multiple
- )
-
- dataset_indices = {key: [] for key in self.datasets}
- for i in indices:
- _, key = self._map_index(i)
- dataset_indices[key].append(i)
-
- batches = []
- for key in dataset_indices:
- cur_batches = super().batch_by_size(
- np.array(dataset_indices[key], dtype=np.int64),
- max_tokens,
- max_sentences,
- required_batch_size_multiple,
- )
- logger.info(f"Created {len(cur_batches)} batches for dataset {key}")
- batches += cur_batches
-
- # If this dataset is used in a distributed training setup,
- # then shuffle such that the order is seeded by the distributed rank
- # as well
- if self.distributed_rank is not None:
- with data_utils.numpy_seed(self.seed, self.epoch, self.distributed_rank):
- np.random.shuffle(batches)
- return batches
diff --git a/spaces/srush/GPTWorld/README.md b/spaces/srush/GPTWorld/README.md
deleted file mode 100644
index 29a059238fe476a07c6aaf48fc8348f762a14e30..0000000000000000000000000000000000000000
--- a/spaces/srush/GPTWorld/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: GPTWorld
-emoji: 🌎
-colorFrom: blue
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# GPTWorld - Prompt Golf
-
-
-
-GPTWorld is an educational environment to learn about Prompting.
-It consists of a grid-world environment to test the ability of language models to follow instructions in a grounded environment.
-You instruct the model to generate code to play the game.
-
-
-
-
-
-## Goal
-
-The goal of the puzzle is to construct a prompt that can get GPT-4 to solve a complex game. You can watch in real-time as GPT plays the game.
-
-Use this link to get started:
-
-
-
-
-
-## Leaderboard
-
-I was able to solve the big puzzle using roughly 2000 tokens (input/output). Can you do better? I'll post the best ones here.
-
-## Other Puzzles
-
-If you find this fun, also check out:
-
-* https://github.com/srush/GPU-Puzzles
diff --git a/spaces/stomexserde/gpt4-ui/Examples/City Of God I - Prison Empire [ I ] V1.041 SKIDROW.md b/spaces/stomexserde/gpt4-ui/Examples/City Of God I - Prison Empire [ I ] V1.041 SKIDROW.md
deleted file mode 100644
index fe551633f63e16d26ee80ad81454c6dc734a55c3..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/City Of God I - Prison Empire [ I ] V1.041 SKIDROW.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-City of God I - Prison Empire: A Pixel Art Prison Management Sim
-Have you ever dreamed of running your own prison in a dystopian city? If so, you might want to check out City of God I - Prison Empire, a pixel art prison management sim that lets you build, expand and control a private prison in Crist City.
-City of God I - Prison Empire [ I ] v1.041 SKIDROW
DOWNLOAD ::: https://urlgoal.com/2uI5Yi
-City of God I - Prison Empire is a game developed by Pixel Gangsta Studio and released on Steam in 2018. It is inspired by the movie City of God and the TV series Prison Break. The game features a rich story mode, a sandbox mode and a multiplayer mode where you can compete or cooperate with other players.
-In the story mode, you play as a warden who has bought the wardening position of Pompeii Prison from the corrupt deputy mayor of Crist City. You have to deal with various challenges and events, such as riots, gangs, politicians, prisoners' demands and more. You also have to make more money to expand your prison empire and use it to influence the city.
-In the sandbox mode, you can build your prison freely and make prisoners work for you with maximum efficiency by providing all kinds of prison facilities. You can also customize your prison's rules, policies and staff. You can choose from different scenarios and difficulty levels to test your skills.
-In the multiplayer mode, you can join or create a server and play with up to 16 players online. You can either cooperate with other wardens to run a mega prison or compete with them to see who has the best prison. You can also trade prisoners, resources and staff with other players.
-City of God I - Prison Empire is a game that combines pixel art graphics, simulation gameplay and dark humor. It offers a lot of replay value and content for fans of prison management sims. If you are interested in this game, you can buy it on Steam or visit its official website for more information.
-
-The game also has a lot of DLCs that add more content and features to the game. For example, The Apocalypse Project DLC is a derivative DLC that includes a follow-up story of the story mode, a new career called Apocalypse Warrior, and new buildings related to the Apocalypse Project. The Apocalypse Warriors are prisoners who have undergone gene fusion experiments and have unique abilities and skills. You can use them as killers or gang leaders in your prison.[^2^]
-Another DLC is The God's Eyes DLC, which adds a new system called God's Eyes. This system allows you to monitor every prisoner's mood, health, hunger, hygiene and other indicators in real time. You can also use God's Eyes to check the prisoners' skills, talents, crimes, sentences and other information. This DLC helps you to manage your prison more efficiently and effectively.[^3^]
-The game has received mostly positive reviews from players on Steam. Many players praised the game's pixel art graphics, simulation gameplay, dark humor and rich content. Some players also appreciated the game's references to movies and TV shows. However, some players also criticized the game's bugs, glitches, translation errors and lack of updates. Some players also felt that the game was too difficult or too repetitive.[^1^]
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Doctor Strange (English) Movies Hd 720p In Hindi ((FULL)).md b/spaces/stomexserde/gpt4-ui/Examples/Doctor Strange (English) Movies Hd 720p In Hindi ((FULL)).md
deleted file mode 100644
index d462a6eff282c7824dd9f0f5f04ffa088c7d47df..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Doctor Strange (English) Movies Hd 720p In Hindi ((FULL)).md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
-Doctor Strange: A Mind-Bending Superhero Movie in Hindi
-Doctor Strange is a 2016 superhero movie based on the Marvel Comics character of the same name. It stars Benedict Cumberbatch as Dr. Stephen Strange, a brilliant but arrogant surgeon who loses his hands in a car accident and seeks a cure in a mysterious place called Kamar-Taj, where he learns the secrets of a hidden world of magic and alternate dimensions.
-The movie is directed by Scott Derrickson and also features Chiwetel Ejiofor, Rachel McAdams, Benedict Wong, Mads Mikkelsen, and Tilda Swinton in supporting roles. It is the 14th film in the Marvel Cinematic Universe (MCU) and the second film of Phase Three.
-Doctor Strange (English) movies hd 720p in hindi
Download 🗸🗸🗸 https://urlgoal.com/2uI9pL
-Doctor Strange was praised by critics and audiences for its stunning visual effects, compelling story, and Cumberbatch's performance. It was also a commercial success, grossing over $677 million worldwide and becoming the highest-grossing solo debut film in the MCU.
-The movie is available to watch online in Hindi-English dual audio with high-definition quality of 720p on various platforms such as Disney+ Hotstar[^2^], PogoLinks[^1^], and Dailymotion[^4^]. You can also download the movie from these sources or other torrent sites if you prefer offline viewing.
-If you are a fan of Marvel movies or fantasy genres, you should not miss Doctor Strange, as it will take you on a thrilling and mind-bending journey through the multiverse. Watch it now and open your mind to a new reality.
-
-Doctor Strange: The Plot and The Characters
-The movie begins with a prologue in which Kaecilius (Mikkelsen), a former disciple of the Ancient One (Swinton), the sorcerer supreme and leader of the Masters of the Mystic Arts, steals a forbidden ritual from the Kamar-Taj library and escapes with his followers. The Ancient One pursues them but fails to stop them from contacting Dormammu, a dark entity who rules over the Dark Dimension.
-Meanwhile, in New York City, Dr. Stephen Strange (Cumberbatch) is a renowned neurosurgeon who enjoys a lavish lifestyle and a successful career. However, his life changes drastically when he suffers a car crash that severely damages his hands and renders him unable to perform surgery. Desperate to regain his former abilities, he spends all his fortune on experimental treatments but none of them work.
-He then learns about a man named Jonathan Pangborn (Benjamin Bratt), who miraculously recovered from a spinal injury that left him paralyzed. Strange tracks him down and asks him how he did it. Pangborn reveals that he visited Kamar-Taj, a place in Nepal where he learned to use mystical energy to heal himself. Intrigued, Strange decides to follow his lead and travels to Nepal.
-There, he meets Mordo (Ejiofor), a Master of the Mystic Arts who takes him to Kamar-Taj and introduces him to the Ancient One. She shows him the astral plane and the multiverse, and reveals that she and her followers protect the Earth from threats from other dimensions. She also tells him that Kaecilius and his zealots are trying to summon Dormammu, who wants to consume all worlds into his realm.
-
-Strange is skeptical at first but soon becomes fascinated by the mystic arts and agrees to train under the Ancient One and Mordo. He also meets Wong (Wong), the librarian and keeper of the ancient texts, and Christine Palmer (McAdams), his former colleague and lover who tries to help him cope with his new reality.
-As Strange progresses in his studies, he discovers the Eye of Agamotto, a powerful relic that can manipulate time. He also learns about the Sanctums, three buildings in New York, London, and Hong Kong that create a protective shield around the Earth. He is warned by the Ancient One and Mordo not to break the natural laws of the universe or use the Eye recklessly.
-However, Strange's curiosity leads him to experiment with the Eye and inadvertently alert Kaecilius to his location. Kaecilius attacks the London Sanctum and kills its guardian. Strange arrives in time to defend the New York Sanctum from Kaecilius and his minions, with the help of Mordo and Wong. He manages to trap Kaecilius in a time loop but is stabbed by one of his henchmen.
-He escapes to the hospital where Christine works and asks her to operate on him while he fights in his astral form. He defeats his assailant but is confronted by the Ancient One, who has come to help him. She reveals that she has been drawing power from the Dark Dimension to extend her life span and that she knew about Kaecilius' plan all along.
-Kaecilius then arrives and mortally wounds her. Strange and Mordo take her to the New York Sanctum where she dies after giving Strange some final words of advice. Mordo is disillusioned by her betrayal and leaves.
-Strange and Wong then travel to Hong Kong where they find that Kaecilius has destroyed the last Sanctum and allowed Dormammu to enter Earth. Strange uses the Eye of Agamotto to reverse time and restore the Sanctum. He then confronts Dormammu in the Dark Dimension and offers him a bargain: he will release him from an endless time loop if he agrees to leave Earth alone and take Kaecilius and his followers with him.
-Dormammu initially refuses but after killing Strange repeatedly, he grows tired of the cycle and accepts his deal. Strange returns to Earth with Wong and they celebrate their victory. Mordo, however, is disgusted by Strange's use of time manipulation and vows to rid the world of sorcerers who abuse their power.
-In a mid-credits scene, Strange meets Thor (Chris Hemsworth) at the New York Sanctum and agrees to help him find his father Odin (Anthony Hopkins) who has gone missing. In a post-credits scene, Mordo confronts Pangborn and steals his magical energy, leaving him paralyzed once more and declaring that the world has too many sorcerers.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fluidsim 5.0 Full Crack.md b/spaces/stomexserde/gpt4-ui/Examples/Fluidsim 5.0 Full Crack.md
deleted file mode 100644
index b65153555fcc66612f1d5d7931e1dccc3c6c3fd3..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Fluidsim 5.0 Full Crack.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
-Fluidsim 5.0 Full Crack: A Complete Software for Circuit Simulation
-Fluidsim 5.0 Full Crack is a software that allows you to create, simulate and study electropneumatic, electrohydraulic, digital or electronic circuits. It is a comprehensive and user-friendly tool that offers many features and functions to help you design and test your circuits.
-Some of the features of Fluidsim 5.0 Full Crack are:
-Fluidsim 5.0 Full Crack
Download ✫✫✫ https://urlgoal.com/2uIaQH
-
-- An intuitive circuit diagram editor with complete descriptions of all components, component images, sectional view animations and video sequences[^1^].
-- A simulation engine that can handle complex and realistic physical phenomena, such as pressure, flow, temperature, friction, etc.
-- A report generator that can produce detailed and customizable reports of your simulation results in moments.
-- An extensive teaching material library that includes tutorials, exercises, examples and solutions for various topics and levels of difficulty.
-- An import and export function that enables you to exchange data with other programs or devices.
-
-Fluidsim 5.0 Full Crack is compatible with Windows operating systems and requires a minimum of 4 GB RAM and 2 GB free disk space. You can download it from various websites or use the link below[^2^]. To install it, you need to follow these steps:
-
-- Download the Fluidsim 5.0 Full Crack file from the link below or any other source.
-- Extract the file using WinRAR or any other software.
-- Run the setup.exe file and follow the instructions on the screen.
-- Copy the crack file from the crack folder and paste it into the installation directory.
-- Launch the program and enjoy!
-
-Fluidsim 5.0 Full Crack is a powerful and versatile software that can help you learn and practice circuit simulation in an easy and fun way. It is suitable for students, teachers, engineers and hobbyists who want to explore the world of electropneumatic, electrohydraulic, digital or electronic circuits. Try it today and see for yourself!
-
-Fluidsim 5.0 Full Crack also offers a variety of learning materials and resources to help you master the concepts and skills of circuit simulation. You can access tutorials, exercises, examples and solutions for various topics and levels of difficulty. You can also learn from error models that depict typical defects in components and how to diagnose and repair them. Fluidsim 5.0 Full Crack enables you to combine theory and practice in an effective and varied way.
-Another advantage of Fluidsim 5.0 Full Crack is its flexible licensing system that allows you to use the software on different devices and platforms. You can choose between single-user licenses, network licenses or online licenses that enable you to access the software via a web browser. You can also use Fluidsim 5.0 Full Crack on mobile devices such as tablets or smartphones with the FluidSIM app.
-Fluidsim 5.0 Full Crack is not only a software for simulation, but also a software for design and creativity. You can create your own components and libraries, customize the appearance and behavior of existing components, or import and export data from other programs or devices. You can also connect Fluidsim 5.0 Full Crack with real hardware such as PLCs, microcontrollers or sensors via interfaces such as USB, Bluetooth or TCP/IP.
-Fluidsim 5.0 Full Crack is a software that has been developed and improved for more than 25 years by Art Systems Software Ltd and Festo Didactic GmbH & Co. KG, two leading companies in the field of technical education and training. Fluidsim 5.0 Full Crack is used by thousands of schools, universities, companies and hobbyists around the world for teaching and learning circuit simulation in an engaging and interactive way.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Free Software Nokia Service Tool V1 .0.md b/spaces/stomexserde/gpt4-ui/Examples/Free Software Nokia Service Tool V1 .0.md
deleted file mode 100644
index 482287c81556d66a009ba27dc5c18fb0d02adbb8..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Free Software Nokia Service Tool V1 .0.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-How to Use Free Software Nokia Service Tool V1 .0 to Flash Firmware on Nokia Phones
-If you have a Nokia phone and you want to flash a new firmware or update your phone, you may need a tool called Free Software Nokia Service Tool V1 .0. This is a utility software that allows you to flash stock ROM and firmware on all Nokia phones without bricking them. In this article, we will show you how to download and use this tool to flash firmware on your Nokia phone.
-What is Free Software Nokia Service Tool V1 .0?
-Free Software Nokia Service Tool V1 .0 is also known as Nokia X Flash Tool or Nokia XL Flash Tool. It is a small and simple tool that comes with a user-friendly interface. It supports all Nokia phones, including the latest models. It works on all Windows platforms, including Windows 10, Windows 8.1, Windows 8, Windows 7, Windows XP. The tool does not require installation. You can use it to flash Nokia firmware that you download from the internet.
-Free Software Nokia Service Tool V1 .0
DOWNLOAD ✓ https://urlgoal.com/2uI7fv
-How to Download Free Software Nokia Service Tool V1 .0?
-You can download Free Software Nokia Service Tool V1 .0 from the link below. The file size is about 350KB and it comes in a zip format. You need to extract the zip file to get the executable file of the tool.
-Download Free Software Nokia Service Tool V1 .0
-How to Use Free Software Nokia Service Tool V1 .0?
-Before you use Free Software Nokia Service Tool V1 .0, you need to prepare some things:
-
-- Install the ADB & Fastboot drivers on your PC.
-- Charge your phone to at least 50%.
-- Download the firmware for your phone and extract it on your computer.
-- Backup your data as flashing will erase everything on your phone.
-
-Once you have everything ready, follow these steps:
-
-
-- Double click on the Nokia flashing.exe to run the tool.
-- Power off your Nokia phone.
-- Press and hold the Volume Down + Power button together to boot into Bootloader Mode.
-- Connect your phone to your PC with a USB cable.
-- The tool will detect your phone and show its information.
-- In the flashing section, load all the files from the extracted firmware folder.
-- Click on Flash button to start flashing the firmware on your phone.
-- Wait for the process to complete. It may take a few minutes.
-- When it is done, disconnect your phone and reboot it.
-
-Congratulations! You have successfully flashed firmware on your Nokia phone using Free Software Nokia Service Tool V1 .0.
-
-What are the Benefits of Using Free Software Nokia Service Tool V1 .0?
-Using Free Software Nokia Service Tool V1 .0 has some benefits for Nokia users. Here are some of them:
-
-- You can flash any firmware that is compatible with your Nokia phone.
-- You can update your phone to the latest version of Android or Windows Phone.
-- You can fix your phone if it is stuck in a boot loop or has software issues.
-- You can unbrick your phone if it is bricked by a wrong firmware or a failed flash.
-- You can customize your phone by flashing custom ROMs or mods.
-
-What are the Risks of Using Free Software Nokia Service Tool V1 .0?
-Using Free Software Nokia Service Tool V1 .0 also has some risks that you should be aware of. Here are some of them:
-
-- You may void your warranty if you flash an unofficial firmware or a custom ROM.
-- You may lose your data if you do not backup your phone before flashing.
-- You may damage your phone if you flash a wrong firmware or interrupt the flashing process.
-- You may face some bugs or errors if you flash an unstable or incompatible firmware.
-
-Therefore, you should use Free Software Nokia Service Tool V1 .0 at your own risk and responsibility. Make sure you follow the instructions carefully and do not flash anything that you are not sure about.
-Conclusion
-Free Software Nokia Service Tool V1 .0 is a handy tool for Nokia users who want to flash firmware on their phones. It is easy to use and supports all Nokia phones. However, it also has some risks that you should consider before using it. We hope this article has helped you to understand how to use Free Software Nokia Service Tool V1 .0 and what are its benefits and risks. If you have any questions or feedback, feel free to leave a comment below.
-
-
\ No newline at end of file
diff --git a/spaces/stunner007/old-car-price-predictor/README.md b/spaces/stunner007/old-car-price-predictor/README.md
deleted file mode 100644
index 92a68d53e6e5f7ceff1f39add9ff4745651c3536..0000000000000000000000000000000000000000
--- a/spaces/stunner007/old-car-price-predictor/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Old Car Price Predictor
-emoji: 🌍
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sub314xxl/MetaGPT/metagpt/actions/debug_error.py b/spaces/sub314xxl/MetaGPT/metagpt/actions/debug_error.py
deleted file mode 100644
index d69a22dbad038651cfa0b9a525fecb467913027e..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MetaGPT/metagpt/actions/debug_error.py
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/11 17:46
-@Author : alexanderwu
-@File : debug_error.py
-"""
-import re
-
-from metagpt.logs import logger
-from metagpt.actions.action import Action
-from metagpt.utils.common import CodeParser
-
-PROMPT_TEMPLATE = """
-NOTICE
-1. Role: You are a Development Engineer or QA engineer;
-2. Task: You received this message from another Development Engineer or QA engineer who ran or tested your code.
-Based on the message, first, figure out your own role, i.e. Engineer or QaEngineer,
-then rewrite the development code or the test code based on your role, the error, and the summary, such that all bugs are fixed and the code performs well.
-Attention: Use '##' to split sections, not '#', and '## ' SHOULD WRITE BEFORE the test case or script and triple quotes.
-The message is as follows:
-{context}
----
-Now you should start rewriting the code:
-## file name of the code to rewrite: Write code with triple quoto. Do your best to implement THIS IN ONLY ONE FILE.
-"""
-class DebugError(Action):
- def __init__(self, name="DebugError", context=None, llm=None):
- super().__init__(name, context, llm)
-
- # async def run(self, code, error):
- # prompt = f"Here is a piece of Python code:\n\n{code}\n\nThe following error occurred during execution:" \
- # f"\n\n{error}\n\nPlease try to fix the error in this code."
- # fixed_code = await self._aask(prompt)
- # return fixed_code
-
- async def run(self, context):
- if "PASS" in context:
- return "", "the original code works fine, no need to debug"
-
- file_name = re.search("## File To Rewrite:\s*(.+\\.py)", context).group(1)
-
- logger.info(f"Debug and rewrite {file_name}")
-
- prompt = PROMPT_TEMPLATE.format(context=context)
-
- rsp = await self._aask(prompt)
-
- code = CodeParser.parse_code(block="", text=rsp)
-
- return file_name, code
diff --git a/spaces/sukiru/BlueArchiveTTS/monotonic_align/__init__.py b/spaces/sukiru/BlueArchiveTTS/monotonic_align/__init__.py
deleted file mode 100644
index 40b6f64aa116c74cac2f6a33444c9eeea2fdb38c..0000000000000000000000000000000000000000
--- a/spaces/sukiru/BlueArchiveTTS/monotonic_align/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-
-def maximum_path(neg_cent, mask):
- """ numba optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
-
diff --git a/spaces/sunwaee/MT5-Questions-Answers-Generation-Extraction/app.py b/spaces/sunwaee/MT5-Questions-Answers-Generation-Extraction/app.py
deleted file mode 100644
index a37d080d9a0f123ad374989f39fe4f54f17c399d..0000000000000000000000000000000000000000
--- a/spaces/sunwaee/MT5-Questions-Answers-Generation-Extraction/app.py
+++ /dev/null
@@ -1,190 +0,0 @@
-import os
-
-import gdown as gdown
-import nltk
-import streamlit as st
-import torch
-from transformers import AutoTokenizer
-
-from mt5 import MT5
-
-
-def download_models(ids):
- """
- Download all models.
-
- :param ids: name and links of models
- :return:
- """
-
- # Download sentence tokenizer
- nltk.download('punkt')
-
- # Download model from drive if not stored locally
- for key in ids:
- if not os.path.isfile(f"model/{key}.ckpt"):
- url = f"https://drive.google.com/u/0/uc?id={ids[key]}"
- gdown.download(url=url, output=f"model/{key}.ckpt")
-
-
-@st.cache(allow_output_mutation=True)
-def load_model(model_path):
- """
- Load model and cache it.
-
- :param model_path: path to model
- :return:
- """
-
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
- # Loading model and tokenizer
- model = MT5.load_from_checkpoint(model_path).eval().to(device)
- model.tokenizer = AutoTokenizer.from_pretrained('tokenizer')
-
- return model
-
-
-# Page config
-st.set_page_config(layout="centered")
-st.title("Questions/Answers Pairs Gen.")
-st.write("Question Generation, Question Answering and Questions/Answers Generation using Google MT5. ")
-
-# Variables
-ids = {'mt5-small': st.secrets['small'],
- 'mt5-base': st.secrets['base']}
-
-
-# Download all models from drive
-download_models(ids)
-
-# Task selection
-
-left, right = st.columns([4, 2])
-task = left.selectbox('Choose the task: ',
- options=['Questions/Answers Pairs Generation', 'Question Answering', 'Question Generation'],
- help='Choose the task you want to try out')
-
-# Model selection
-model_path = right.selectbox('', options=[k for k in ids], index=1, help='Model to use. ')
-model = load_model(model_path=f"model/{model_path}.ckpt")
-right.write(model.device)
-
-if task == 'Questions/Answers Pairs Generation':
- # Input area
- inputs = st.text_area('Context:', value="A few years after the First Crusade, in 1107, the Normans under "
- "the command of Bohemond, Robert\'s son, landed in Valona and "
- "besieged Dyrrachium using the most sophisticated military "
- "equipment of the time, but to no avail. Meanwhile, they occupied "
- "Petrela, the citadel of Mili at the banks of the river Deabolis, "
- "Gllavenica (Ballsh), Kanina and Jericho. This time, "
- "the Albanians sided with the Normans, dissatisfied by the heavy "
- "taxes the Byzantines had imposed upon them. With their help, "
- "the Normans secured the Arbanon passes and opened their way to "
- "Dibra. The lack of supplies, disease and Byzantine resistance "
- "forced Bohemond to retreat from his campaign and sign a peace "
- "treaty with the Byzantines in the city of Deabolis. ", max_chars=2048,
- height=250)
- split = st.checkbox('Split into sentences', value=True)
-
- if split:
- # Split into sentences
- sent_tokenized = nltk.sent_tokenize(inputs)
- res = {}
-
- with st.spinner('Please wait while the inputs are being processed...'):
- # Iterate over sentences
- for sentence in sent_tokenized:
- predictions = model.multitask([sentence], max_length=512)
- questions, answers, answers_bis = predictions['questions'], predictions['answers'], predictions[
- 'answers_bis']
-
- # Build answer dict
- content = {}
- for question, answer, answer_bis in zip(questions[0], answers[0], answers_bis[0]):
- content[question] = {'answer (extracted)': answer, 'answer (generated)': answer_bis}
- res[sentence] = content
-
- # Answer area
- st.write(res)
-
- else:
- with st.spinner('Please wait while the inputs are being processed...'):
- # Prediction
- predictions = model.multitask([inputs], max_length=512)
- questions, answers, answers_bis = predictions['questions'], predictions['answers'], predictions[
- 'answers_bis']
-
- # Answer area
-            qa_triples = zip(questions[0], answers[0], answers_bis[0])
-            content = {}
-            for question, answer, answer_bis in qa_triples:
- content[question] = {'answer (extracted)': answer, 'answer (generated)': answer_bis}
-
- st.write(content)
-
-elif task == 'Question Answering':
-
- # Input area
- inputs = st.text_area('Context:', value="A few years after the First Crusade, in 1107, the Normans under "
- "the command of Bohemond, Robert\'s son, landed in Valona and "
- "besieged Dyrrachium using the most sophisticated military "
- "equipment of the time, but to no avail. Meanwhile, they occupied "
- "Petrela, the citadel of Mili at the banks of the river Deabolis, "
- "Gllavenica (Ballsh), Kanina and Jericho. This time, "
- "the Albanians sided with the Normans, dissatisfied by the heavy "
- "taxes the Byzantines had imposed upon them. With their help, "
- "the Normans secured the Arbanon passes and opened their way to "
- "Dibra. The lack of supplies, disease and Byzantine resistance "
- "forced Bohemond to retreat from his campaign and sign a peace "
- "treaty with the Byzantines in the city of Deabolis. ", max_chars=2048,
- height=250)
- question = st.text_input('Question:', value="What forced Bohemond to retreat from his campaign? ")
-
- # Prediction
- with st.spinner('Please wait while the inputs are being processed...'):
- predictions = model.qa([{'question': question, 'context': inputs}], max_length=512)
- answer = {question: predictions[0]}
-
- # Answer area
- st.write(answer)
-
-elif task == 'Question Generation':
-
- # Input area
-    inputs = st.text_area('Context (highlight answers with <hl> tokens): ',
-                          value="A few years after the First Crusade, in <hl> 1107 <hl>, the Normans under "
-                          "the command of <hl> Bohemond <hl>, Robert\'s son, landed in Valona and "
- "besieged Dyrrachium using the most sophisticated military "
- "equipment of the time, but to no avail. Meanwhile, they occupied "
- "Petrela, the citadel of Mili at the banks of the river Deabolis, "
- "Gllavenica (Ballsh), Kanina and Jericho. This time, "
- "the Albanians sided with the Normans, dissatisfied by the heavy "
- "taxes the Byzantines had imposed upon them. With their help, "
- "the Normans secured the Arbanon passes and opened their way to "
- "Dibra. The lack of supplies, disease and Byzantine resistance "
- "forced Bohemond to retreat from his campaign and sign a peace "
- "treaty with the Byzantines in the city of Deabolis. ", max_chars=2048,
- height=250)
-
- # Split by highlights
-    hl_index = [i for i in range(len(inputs)) if inputs.startswith('<hl>', i)]
- contexts = []
- answers = []
-
- # Build a context for each highlight pair
- for i in range(0, len(hl_index), 2):
-        # keep the pair of 4-character '<hl>' tags around the current answer and
-        # strip any other highlight tags from the rest of the context
-        contexts.append(inputs[:hl_index[i]].replace('<hl>', '') +
-                        inputs[hl_index[i]: hl_index[i + 1] + 4] +
-                        inputs[hl_index[i + 1] + 4:].replace('<hl>', ''))
-        answers.append(inputs[hl_index[i]: hl_index[i + 1] + 4].replace('<hl>', '').strip())
-
- # Prediction
- with st.spinner('Please wait while the inputs are being processed...'):
- predictions = model.qg(contexts, max_length=512)
-
- # Answer area
- content = {}
- for pred, ans in zip(predictions, answers):
- content[pred] = ans
- st.write(content)
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/!EXCLUSIVE! Crack Serial Parallels Workstation 6 Windows.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/!EXCLUSIVE! Crack Serial Parallels Workstation 6 Windows.md
deleted file mode 100644
index 799afae1a75d2bc515057fb2471a6f07d58fcae2..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/!EXCLUSIVE! Crack Serial Parallels Workstation 6 Windows.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Crack Serial Parallels Workstation 6 Windows
Download >> https://cinurl.com/2uEXUx
-
-Features: • Run your preferred Mac OS and Windows applications at the same time, without rebooting. parallels desktop; parallels desktop 14; parallels desktop ... 1fdad05405
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force.md
deleted file mode 100644
index a7b12c38f47fcb86eb8086fa577d4bc5de00e477..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force: A Powerful Drawing Tool
-
-If you are looking for a professional and versatile drawing software, you might want to check out SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force. This is a cracked version of the original SketchBook For Enterprise software, which allows you to use all the features and tools without paying any subscription fees. In this article, we will show you how to download and install SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force, and what are some of the benefits of using this software.
-
-How to Download and Install SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force
-
-Downloading and installing SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force is not very difficult, but you need to follow some steps carefully. Here are the steps you need to take:
-SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force
Download https://cinurl.com/2uEYq1
-
-
-- Go to the website https://www.bahamasalzheimersassociation.org/forum/welcome-to-the-forum/sketchbook-for-enterprise-2015-en-32bit-with-crack-x-force-exclusive and click on the download link. This will take you to a file hosting site where you can download the software.
-- Extract the zip file using a program like WinRAR or 7-Zip. You will find two folders inside: one for the software and one for the crack.
-- Run the setup.exe file from the software folder and follow the installation instructions. Choose the language as English and select the 32-bit option.
-- After the installation is complete, do not launch the software yet. Go to the crack folder and copy the file named xf-adsk2015_x86.exe.
-- Paste the file into the installation directory of SketchBook For Enterprise, which is usually C:\Program Files\Autodesk\SketchBook For Enterprise 2015.
-- Run the file as administrator and click on Patch. You will see a message saying "Successfully patched".
-- Now you can launch SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force and enjoy all the features and tools for free.
-
-
-What are the Benefits of Using SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force
-
-SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force is a powerful drawing tool that can help you create stunning artworks, sketches, illustrations, animations, and more. Here are some of the benefits of using this software:
-
-
-- You can use a variety of brushes, pens, pencils, markers, erasers, and other tools to create different effects and textures. You can also customize your own brushes and save them for later use.
-- You can work with layers, masks, blending modes, gradients, and other advanced features to enhance your drawings and add depth and realism.
-- You can use the perspective tools, rulers, guides, grids, and snap options to draw accurate shapes and lines. You can also use the symmetry tools, distort tools, transform tools, and selection tools to modify your drawings.
-- You can use the animation tools to create simple animations and export them as GIFs or videos. You can also use the flipbook tools to create traditional frame-by-frame animations.
-- You can use the scan tools to import your sketches from paper or other sources and edit them in SketchBook For Enterprise. You can also use the camera tools to capture images and use them as references or backgrounds.
-- You can use the Copic Color Library to access hundreds of colors that match the Copic markers. You can also use the color wheel, color picker, color editor, and color history to choose and adjust colors.
-- You can export your drawings in various formats such as JPG, PNG, TIFF, BMP, PSD, PDF, SVG, etc. You can also share your drawings online via email or social media.
-
-
-SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force is a great software for anyone who loves drawing and wants to unleash their creativity. It has a user-friendly interface and a lot of features and tools that can help you create amazing artworks. However, please note that this is an illegal version of the software and we do not condone piracy or copyright infringement. If you like SketchBook For Enterprise, please support the developers by buying the original version from their website.
-How to Use SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force
-
-Using SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force is easy and fun. You can start drawing right away or explore the different features and tools that the software offers. Here are some tips on how to use SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force:
-
-
-- To start a new drawing, go to File > New and choose the size and resolution of your canvas. You can also open an existing drawing from your computer or from the gallery.
-- To select a tool, click on the icon on the toolbar or press the shortcut key. You can also access more tools by clicking on the arrow next to the icon or by right-clicking on the canvas.
-- To change the settings of a tool, such as size, opacity, flow, etc., use the sliders on the tool properties panel or press the P key to open the pop-up palette.
-- To draw on the canvas, use your mouse or a pen tablet. You can also use touch gestures if you have a touch-enabled device.
-- To undo or redo an action, go to Edit > Undo or Edit > Redo or press Ctrl+Z or Ctrl+Y. You can also use the history panel to see and revert to previous states of your drawing.
-- To zoom in or out, use the scroll wheel on your mouse or press Ctrl+Plus or Ctrl+Minus. You can also use the magnifying glass tool or the navigator panel to adjust the view of your canvas.
-- To rotate or flip the canvas, use the rotate tool or press R. You can also use the view menu or the view panel to change the orientation of your canvas.
-- To add a new layer, go to Layer > New Layer or press Ctrl+Shift+N. You can also use the layer panel to manage your layers, such as renaming, reordering, merging, duplicating, deleting, etc.
-- To change the blending mode of a layer, use the drop-down menu on the layer panel or press Shift+B. You can also adjust the opacity and visibility of a layer using the sliders and checkboxes on the layer panel.
-- To save your drawing, go to File > Save or File > Save As or press Ctrl+S or Ctrl+Shift+S. You can choose from various formats such as JPG, PNG, TIFF, BMP, PSD, PDF, SVG, etc.
-
-
-These are just some of the basic tips on how to use SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force. You can learn more about the software by reading the help menu or watching tutorials online. You can also listen to some podcasts and audiobooks about SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force on SoundCloud .
-
-Conclusion
-
-SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force is a powerful drawing tool that can help you create stunning artworks, sketches, illustrations, animations, and more. It has a user-friendly interface and a lot of features and tools that can help you unleash your creativity. However, please note that this is an illegal version of the software and we do not condone piracy or copyright infringement. If you like SketchBook For Enterprise, please support the developers by buying the original version from their website.
-Some Examples of SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force Drawings
-
-To give you some inspiration and ideas on what you can create with SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force, here are some examples of drawings made by other users. You can see how they used different tools, techniques, and styles to create amazing artworks.
-
-
-- A realistic portrait of a woman using the pencil, airbrush, and smudge tools. The artist also used layers, masks, and blending modes to add details and shadows.
-- A colorful landscape of a forest using the paintbrush, marker, and gradient tools. The artist also used the perspective and symmetry tools to draw accurate shapes and lines.
-- A cute cartoon character of a cat using the pen, fill, and eraser tools. The artist also used the animation and flipbook tools to make the cat move and blink.
-- A detailed sketch of a car using the ruler, guide, and snap tools. The artist also used the Copic Color Library to choose realistic colors for the car.
-- A creative illustration of a dragon using the distort, transform, and selection tools. The artist also used the scan and camera tools to import a sketch from paper and use it as a reference.
-
-
-These are just some of the examples of what you can create with SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force. You can find more examples on the gallery or on social media. You can also share your own drawings online and get feedback from other users.
-
-
-Some Alternatives to SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force
-
-If you are looking for some alternatives to SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force, you might want to check out some other drawing software that are available online. Here are some of the alternatives that you can try:
-
-
-- Adobe Photoshop: This is one of the most popular and powerful drawing software that can help you create professional and realistic artworks. It has a lot of features and tools that can help you edit, enhance, and manipulate your drawings. However, it is also very expensive and complex to use.
-- GIMP: This is a free and open-source drawing software that can help you create simple and basic artworks. It has some features and tools that can help you draw, paint, and edit your drawings. However, it is also very limited and outdated compared to other software.
-- Krita: This is a free and open-source drawing software that can help you create stunning and artistic artworks. It has a lot of features and tools that can help you sketch, paint, and animate your drawings. However, it is also very buggy and unstable compared to other software.
-- MediBang Paint: This is a free and lightweight drawing software that can help you create cute and colorful artworks. It has some features and tools that can help you draw, fill, and decorate your drawings. However, it is also very simple and basic compared to other software.
-- Paint Tool SAI: This is a paid but affordable drawing software that can help you create smooth and beautiful artworks. It has some features and tools that can help you draw, blend, and shade your drawings. However, it is also very old-fashioned and outdated compared to other software.
-
-
-These are just some of the alternatives to SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force. You can find more alternatives online or by reading reviews or comparisons. You can also try them out yourself and see which one suits your needs and preferences.
-Conclusion
-
-In this article, we have shown you how to download and install SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force, and what are some of the benefits of using this software. We have also given you some tips on how to use SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force, and some examples of drawings made by other users. Finally, we have suggested some alternatives to SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force that you can try out.
-
-SketchBook For Enterprise-2015-EN-32bit-with-Crack-X-Force is a powerful drawing tool that can help you create stunning artworks, sketches, illustrations, animations, and more. It has a user-friendly interface and a lot of features and tools that can help you unleash your creativity. However, please note that this is an illegal version of the software and we do not condone piracy or copyright infringement. If you like SketchBook For Enterprise, please support the developers by buying the original version from their website.
-
-We hope you enjoyed this article and learned something new. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy drawing!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/surya12003/suryabot/app.py b/spaces/surya12003/suryabot/app.py
deleted file mode 100644
index 2dbf3ae89c2e3fdab7134107dd346f984dca8eb1..0000000000000000000000000000000000000000
--- a/spaces/surya12003/suryabot/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
-    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/eval/benchmark/readme.md b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/eval/benchmark/readme.md
deleted file mode 100644
index d4cb7c4d266816ef48e7ecec2e1da07f4d6f4c68..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/eval/benchmark/readme.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# SPIGA: Benchmark
-The benchmark evaluator can be found at ```./eval/benchmark/evaluator.py``` and it allows
-you to extract an extended report of metrics for each dataset. For further details,
-check the parser and complete the interactive terminal procedure to specify the evaluation
-characteristics.
-
-In order to use the benchmark evaluation, the prediction file must follow the same data structure
-and file extension as the ground-truth annotations available in ```./data/annotations/```.
-The data structure consists of a list of dictionaries, where each one represents an image sample,
-similar to the previous dataloader configuration:
-
-```
-sample = {"imgpath": Relative image path,
- "bbox": Bounding box [x,y,w,h] (ref image),
- "headpose": Euler angles [yaw, pitch, roll],
- "ids": Landmarks database ids,
- "landmarks": Landmarks (ref image),
- "visible": Visibilities [0,1, ...] (1 == Visible)
- }
-```
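-
-A minimal sketch of writing a predictions file in that layout, assuming a single
-hypothetical sample with 98 landmarks and a serializer that matches whatever file
-extension the ground-truth annotations in ```./data/annotations/``` use (pickle is
-shown only as an example):
-
-```
-import pickle
-
-predictions = [{"imgpath": "images/0001.jpg",    # hypothetical relative path
-                "bbox": [64, 48, 128, 128],      # [x, y, w, h]
-                "headpose": [0.0, 0.0, 0.0],     # [yaw, pitch, roll]
-                "ids": list(range(98)),          # landmarks database ids
-                "landmarks": [[0.0, 0.0]] * 98,  # predicted (x, y) points
-                "visible": [1] * 98}]            # 1 == visible
-
-with open("predictions.pkl", "wb") as f:
-    pickle.dump(predictions, f)
-```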
-
-Finally, it is worth mentioning that the benchmark can easily be extended to other tasks by
-inheriting the class structure available in ```./eval/benchmark/metrics/metrics.py``` and
-developing a new task file like the available ones: landmarks and headpose.
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py
deleted file mode 100644
index 3d2ad69f5c22adfe79d5fdabf920217628987166..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/gcnet_r50-d8.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='GCHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- ratio=1 / 4.,
- pooling_type='att',
- fusion_types=('channel_add', ),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
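-
-# Minimal loading sketch (kept as comments so this stays a pure config fragment),
-# assuming mmcv is installed and a relative path to this file; downstream configs
-# normally pull it in via `_base_` instead:
-#
-#   from mmcv import Config
-#   cfg = Config.fromfile('configs/_base_/models/gcnet_r50-d8.py')
-#   cfg.model.decode_head.num_classes = 21  # hypothetical override
-#   print(cfg.model.decode_head.type)       # 'GCHead'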
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/utils/flops_counter.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/utils/flops_counter.py
deleted file mode 100644
index d10af5feca7f4b8c0ba359b7b1c826f754e048be..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/utils/flops_counter.py
+++ /dev/null
@@ -1,599 +0,0 @@
-# Modified from flops-counter.pytorch by Vladislav Sovrasov
-# original repo: https://github.com/sovrasov/flops-counter.pytorch
-
-# MIT License
-
-# Copyright (c) 2018 Vladislav Sovrasov
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-import sys
-from functools import partial
-
-import numpy as np
-import torch
-import torch.nn as nn
-
-import annotator.uniformer.mmcv as mmcv
-
-
-def get_model_complexity_info(model,
- input_shape,
- print_per_layer_stat=True,
- as_strings=True,
- input_constructor=None,
- flush=False,
- ost=sys.stdout):
- """Get complexity information of a model.
-
- This method can calculate FLOPs and parameter counts of a model with
- corresponding input shape. It can also print complexity information for
- each layer in a model.
-
- Supported layers are listed as below:
- - Convolutions: ``nn.Conv1d``, ``nn.Conv2d``, ``nn.Conv3d``.
- - Activations: ``nn.ReLU``, ``nn.PReLU``, ``nn.ELU``, ``nn.LeakyReLU``,
- ``nn.ReLU6``.
- - Poolings: ``nn.MaxPool1d``, ``nn.MaxPool2d``, ``nn.MaxPool3d``,
- ``nn.AvgPool1d``, ``nn.AvgPool2d``, ``nn.AvgPool3d``,
- ``nn.AdaptiveMaxPool1d``, ``nn.AdaptiveMaxPool2d``,
- ``nn.AdaptiveMaxPool3d``, ``nn.AdaptiveAvgPool1d``,
- ``nn.AdaptiveAvgPool2d``, ``nn.AdaptiveAvgPool3d``.
- - BatchNorms: ``nn.BatchNorm1d``, ``nn.BatchNorm2d``,
- ``nn.BatchNorm3d``, ``nn.GroupNorm``, ``nn.InstanceNorm1d``,
- ``InstanceNorm2d``, ``InstanceNorm3d``, ``nn.LayerNorm``.
- - Linear: ``nn.Linear``.
- - Deconvolution: ``nn.ConvTranspose2d``.
- - Upsample: ``nn.Upsample``.
-
- Args:
- model (nn.Module): The model for complexity calculation.
- input_shape (tuple): Input shape used for calculation.
- print_per_layer_stat (bool): Whether to print complexity information
- for each layer in a model. Default: True.
- as_strings (bool): Output FLOPs and params counts in a string form.
- Default: True.
- input_constructor (None | callable): If specified, it takes a callable
- method that generates input. otherwise, it will generate a random
- tensor with input shape to calculate FLOPs. Default: None.
- flush (bool): same as that in :func:`print`. Default: False.
- ost (stream): same as ``file`` param in :func:`print`.
- Default: sys.stdout.
-
- Returns:
- tuple[float | str]: If ``as_strings`` is set to True, it will return
- FLOPs and parameter counts in a string format. otherwise, it will
- return those in a float number format.
- """
- assert type(input_shape) is tuple
- assert len(input_shape) >= 1
- assert isinstance(model, nn.Module)
- flops_model = add_flops_counting_methods(model)
- flops_model.eval()
- flops_model.start_flops_count()
- if input_constructor:
- input = input_constructor(input_shape)
- _ = flops_model(**input)
- else:
- try:
- batch = torch.ones(()).new_empty(
- (1, *input_shape),
- dtype=next(flops_model.parameters()).dtype,
- device=next(flops_model.parameters()).device)
- except StopIteration:
- # Avoid StopIteration for models which have no parameters,
- # like `nn.Relu()`, `nn.AvgPool2d`, etc.
- batch = torch.ones(()).new_empty((1, *input_shape))
-
- _ = flops_model(batch)
-
- flops_count, params_count = flops_model.compute_average_flops_cost()
- if print_per_layer_stat:
- print_model_with_flops(
- flops_model, flops_count, params_count, ost=ost, flush=flush)
- flops_model.stop_flops_count()
-
- if as_strings:
- return flops_to_string(flops_count), params_to_string(params_count)
-
- return flops_count, params_count
-
-
-def flops_to_string(flops, units='GFLOPs', precision=2):
- """Convert FLOPs number into a string.
-
- Note that Here we take a multiply-add counts as one FLOP.
-
- Args:
- flops (float): FLOPs number to be converted.
- units (str | None): Converted FLOPs units. Options are None, 'GFLOPs',
- 'MFLOPs', 'KFLOPs', 'FLOPs'. If set to None, it will automatically
- choose the most suitable unit for FLOPs. Default: 'GFLOPs'.
- precision (int): Digit number after the decimal point. Default: 2.
-
- Returns:
- str: The converted FLOPs number with units.
-
- Examples:
- >>> flops_to_string(1e9)
- '1.0 GFLOPs'
- >>> flops_to_string(2e5, 'MFLOPs')
- '0.2 MFLOPs'
- >>> flops_to_string(3e-9, None)
- '3e-09 FLOPs'
- """
- if units is None:
- if flops // 10**9 > 0:
- return str(round(flops / 10.**9, precision)) + ' GFLOPs'
- elif flops // 10**6 > 0:
- return str(round(flops / 10.**6, precision)) + ' MFLOPs'
- elif flops // 10**3 > 0:
- return str(round(flops / 10.**3, precision)) + ' KFLOPs'
- else:
- return str(flops) + ' FLOPs'
- else:
- if units == 'GFLOPs':
- return str(round(flops / 10.**9, precision)) + ' ' + units
- elif units == 'MFLOPs':
- return str(round(flops / 10.**6, precision)) + ' ' + units
- elif units == 'KFLOPs':
- return str(round(flops / 10.**3, precision)) + ' ' + units
- else:
- return str(flops) + ' FLOPs'
-
-
-def params_to_string(num_params, units=None, precision=2):
- """Convert parameter number into a string.
-
- Args:
- num_params (float): Parameter number to be converted.
- units (str | None): Converted FLOPs units. Options are None, 'M',
- 'K' and ''. If set to None, it will automatically choose the most
- suitable unit for Parameter number. Default: None.
- precision (int): Digit number after the decimal point. Default: 2.
-
- Returns:
- str: The converted parameter number with units.
-
- Examples:
- >>> params_to_string(1e9)
- '1000.0 M'
- >>> params_to_string(2e5)
- '200.0 k'
- >>> params_to_string(3e-9)
- '3e-09'
- """
- if units is None:
- if num_params // 10**6 > 0:
- return str(round(num_params / 10**6, precision)) + ' M'
- elif num_params // 10**3:
- return str(round(num_params / 10**3, precision)) + ' k'
- else:
- return str(num_params)
- else:
- if units == 'M':
- return str(round(num_params / 10.**6, precision)) + ' ' + units
- elif units == 'K':
- return str(round(num_params / 10.**3, precision)) + ' ' + units
- else:
- return str(num_params)
-
-
-def print_model_with_flops(model,
- total_flops,
- total_params,
- units='GFLOPs',
- precision=3,
- ost=sys.stdout,
- flush=False):
- """Print a model with FLOPs for each layer.
-
- Args:
- model (nn.Module): The model to be printed.
- total_flops (float): Total FLOPs of the model.
- total_params (float): Total parameter counts of the model.
- units (str | None): Converted FLOPs units. Default: 'GFLOPs'.
- precision (int): Digit number after the decimal point. Default: 3.
- ost (stream): same as `file` param in :func:`print`.
- Default: sys.stdout.
- flush (bool): same as that in :func:`print`. Default: False.
-
- Example:
- >>> class ExampleModel(nn.Module):
-
- >>> def __init__(self):
- >>> super().__init__()
- >>> self.conv1 = nn.Conv2d(3, 8, 3)
- >>> self.conv2 = nn.Conv2d(8, 256, 3)
- >>> self.conv3 = nn.Conv2d(256, 8, 3)
- >>> self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
- >>> self.flatten = nn.Flatten()
- >>> self.fc = nn.Linear(8, 1)
-
- >>> def forward(self, x):
- >>> x = self.conv1(x)
- >>> x = self.conv2(x)
- >>> x = self.conv3(x)
- >>> x = self.avg_pool(x)
- >>> x = self.flatten(x)
- >>> x = self.fc(x)
- >>> return x
-
- >>> model = ExampleModel()
- >>> x = (3, 16, 16)
- to print the complexity information state for each layer, you can use
- >>> get_model_complexity_info(model, x)
- or directly use
- >>> print_model_with_flops(model, 4579784.0, 37361)
- ExampleModel(
- 0.037 M, 100.000% Params, 0.005 GFLOPs, 100.000% FLOPs,
- (conv1): Conv2d(0.0 M, 0.600% Params, 0.0 GFLOPs, 0.959% FLOPs, 3, 8, kernel_size=(3, 3), stride=(1, 1)) # noqa: E501
- (conv2): Conv2d(0.019 M, 50.020% Params, 0.003 GFLOPs, 58.760% FLOPs, 8, 256, kernel_size=(3, 3), stride=(1, 1))
- (conv3): Conv2d(0.018 M, 49.356% Params, 0.002 GFLOPs, 40.264% FLOPs, 256, 8, kernel_size=(3, 3), stride=(1, 1))
- (avg_pool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.017% FLOPs, output_size=(1, 1))
- (flatten): Flatten(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.000% FLOPs, )
- (fc): Linear(0.0 M, 0.024% Params, 0.0 GFLOPs, 0.000% FLOPs, in_features=8, out_features=1, bias=True)
- )
- """
-
- def accumulate_params(self):
- if is_supported_instance(self):
- return self.__params__
- else:
- sum = 0
- for m in self.children():
- sum += m.accumulate_params()
- return sum
-
- def accumulate_flops(self):
- if is_supported_instance(self):
- return self.__flops__ / model.__batch_counter__
- else:
- sum = 0
- for m in self.children():
- sum += m.accumulate_flops()
- return sum
-
- def flops_repr(self):
- accumulated_num_params = self.accumulate_params()
- accumulated_flops_cost = self.accumulate_flops()
- return ', '.join([
- params_to_string(
- accumulated_num_params, units='M', precision=precision),
- '{:.3%} Params'.format(accumulated_num_params / total_params),
- flops_to_string(
- accumulated_flops_cost, units=units, precision=precision),
- '{:.3%} FLOPs'.format(accumulated_flops_cost / total_flops),
- self.original_extra_repr()
- ])
-
- def add_extra_repr(m):
- m.accumulate_flops = accumulate_flops.__get__(m)
- m.accumulate_params = accumulate_params.__get__(m)
- flops_extra_repr = flops_repr.__get__(m)
- if m.extra_repr != flops_extra_repr:
- m.original_extra_repr = m.extra_repr
- m.extra_repr = flops_extra_repr
- assert m.extra_repr != m.original_extra_repr
-
- def del_extra_repr(m):
- if hasattr(m, 'original_extra_repr'):
- m.extra_repr = m.original_extra_repr
- del m.original_extra_repr
- if hasattr(m, 'accumulate_flops'):
- del m.accumulate_flops
-
- model.apply(add_extra_repr)
- print(model, file=ost, flush=flush)
- model.apply(del_extra_repr)
-
-
-def get_model_parameters_number(model):
- """Calculate parameter number of a model.
-
- Args:
- model (nn.module): The model for parameter number calculation.
-
- Returns:
- float: Parameter number of the model.
- """
- num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
- return num_params
-
-
-def add_flops_counting_methods(net_main_module):
- # adding additional methods to the existing module object,
- # this is done this way so that each function has access to self object
- net_main_module.start_flops_count = start_flops_count.__get__(
- net_main_module)
- net_main_module.stop_flops_count = stop_flops_count.__get__(
- net_main_module)
- net_main_module.reset_flops_count = reset_flops_count.__get__(
- net_main_module)
- net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__( # noqa: E501
- net_main_module)
-
- net_main_module.reset_flops_count()
-
- return net_main_module
-
-
-def compute_average_flops_cost(self):
- """Compute average FLOPs cost.
-
- A method to compute average FLOPs cost, which will be available after
- `add_flops_counting_methods()` is called on a desired net object.
-
- Returns:
- float: Current mean flops consumption per image.
- """
- batches_count = self.__batch_counter__
- flops_sum = 0
- for module in self.modules():
- if is_supported_instance(module):
- flops_sum += module.__flops__
- params_sum = get_model_parameters_number(self)
- return flops_sum / batches_count, params_sum
-
-
-def start_flops_count(self):
- """Activate the computation of mean flops consumption per image.
-
- A method to activate the computation of mean flops consumption per image.
- which will be available after ``add_flops_counting_methods()`` is called on
- a desired net object. It should be called before running the network.
- """
- add_batch_counter_hook_function(self)
-
- def add_flops_counter_hook_function(module):
- if is_supported_instance(module):
- if hasattr(module, '__flops_handle__'):
- return
-
- else:
- handle = module.register_forward_hook(
- get_modules_mapping()[type(module)])
-
- module.__flops_handle__ = handle
-
- self.apply(partial(add_flops_counter_hook_function))
-
-
-def stop_flops_count(self):
- """Stop computing the mean flops consumption per image.
-
- A method to stop computing the mean flops consumption per image, which will
- be available after ``add_flops_counting_methods()`` is called on a desired
- net object. It can be called to pause the computation whenever.
- """
- remove_batch_counter_hook_function(self)
- self.apply(remove_flops_counter_hook_function)
-
-
-def reset_flops_count(self):
- """Reset statistics computed so far.
-
- A method to Reset computed statistics, which will be available after
- `add_flops_counting_methods()` is called on a desired net object.
- """
- add_batch_counter_variables_or_reset(self)
- self.apply(add_flops_counter_variable_or_reset)
-
-
-# ---- Internal functions
-def empty_flops_counter_hook(module, input, output):
- module.__flops__ += 0
-
-
-def upsample_flops_counter_hook(module, input, output):
- output_size = output[0]
- batch_size = output_size.shape[0]
- output_elements_count = batch_size
- for val in output_size.shape[1:]:
- output_elements_count *= val
- module.__flops__ += int(output_elements_count)
-
-
-def relu_flops_counter_hook(module, input, output):
- active_elements_count = output.numel()
- module.__flops__ += int(active_elements_count)
-
-
-def linear_flops_counter_hook(module, input, output):
- input = input[0]
- output_last_dim = output.shape[
- -1] # pytorch checks dimensions, so here we don't care much
- module.__flops__ += int(np.prod(input.shape) * output_last_dim)
-
-
-def pool_flops_counter_hook(module, input, output):
- input = input[0]
- module.__flops__ += int(np.prod(input.shape))
-
-
-def norm_flops_counter_hook(module, input, output):
- input = input[0]
-
- batch_flops = np.prod(input.shape)
- if (getattr(module, 'affine', False)
- or getattr(module, 'elementwise_affine', False)):
- batch_flops *= 2
- module.__flops__ += int(batch_flops)
-
-
-def deconv_flops_counter_hook(conv_module, input, output):
- # Can have multiple inputs, getting the first one
- input = input[0]
-
- batch_size = input.shape[0]
- input_height, input_width = input.shape[2:]
-
- kernel_height, kernel_width = conv_module.kernel_size
- in_channels = conv_module.in_channels
- out_channels = conv_module.out_channels
- groups = conv_module.groups
-
- filters_per_channel = out_channels // groups
- conv_per_position_flops = (
- kernel_height * kernel_width * in_channels * filters_per_channel)
-
- active_elements_count = batch_size * input_height * input_width
- overall_conv_flops = conv_per_position_flops * active_elements_count
- bias_flops = 0
- if conv_module.bias is not None:
- output_height, output_width = output.shape[2:]
-        bias_flops = out_channels * batch_size * output_height * output_width
- overall_flops = overall_conv_flops + bias_flops
-
- conv_module.__flops__ += int(overall_flops)
-
-
-def conv_flops_counter_hook(conv_module, input, output):
- # Can have multiple inputs, getting the first one
- input = input[0]
-
- batch_size = input.shape[0]
- output_dims = list(output.shape[2:])
-
- kernel_dims = list(conv_module.kernel_size)
- in_channels = conv_module.in_channels
- out_channels = conv_module.out_channels
- groups = conv_module.groups
-
- filters_per_channel = out_channels // groups
- conv_per_position_flops = int(
- np.prod(kernel_dims)) * in_channels * filters_per_channel
-
- active_elements_count = batch_size * int(np.prod(output_dims))
-
- overall_conv_flops = conv_per_position_flops * active_elements_count
-
- bias_flops = 0
-
- if conv_module.bias is not None:
-
- bias_flops = out_channels * active_elements_count
-
- overall_flops = overall_conv_flops + bias_flops
-
- conv_module.__flops__ += int(overall_flops)
-
-
-def batch_counter_hook(module, input, output):
- batch_size = 1
- if len(input) > 0:
- # Can have multiple inputs, getting the first one
- input = input[0]
- batch_size = len(input)
- else:
- pass
- print('Warning! No positional inputs found for a module, '
- 'assuming batch size is 1.')
- module.__batch_counter__ += batch_size
-
-
-def add_batch_counter_variables_or_reset(module):
-
- module.__batch_counter__ = 0
-
-
-def add_batch_counter_hook_function(module):
- if hasattr(module, '__batch_counter_handle__'):
- return
-
- handle = module.register_forward_hook(batch_counter_hook)
- module.__batch_counter_handle__ = handle
-
-
-def remove_batch_counter_hook_function(module):
- if hasattr(module, '__batch_counter_handle__'):
- module.__batch_counter_handle__.remove()
- del module.__batch_counter_handle__
-
-
-def add_flops_counter_variable_or_reset(module):
- if is_supported_instance(module):
- if hasattr(module, '__flops__') or hasattr(module, '__params__'):
- print('Warning: variables __flops__ or __params__ are already '
- 'defined for the module' + type(module).__name__ +
- ' ptflops can affect your code!')
- module.__flops__ = 0
- module.__params__ = get_model_parameters_number(module)
-
-
-def is_supported_instance(module):
- if type(module) in get_modules_mapping():
- return True
- return False
-
-
-def remove_flops_counter_hook_function(module):
- if is_supported_instance(module):
- if hasattr(module, '__flops_handle__'):
- module.__flops_handle__.remove()
- del module.__flops_handle__
-
-
-def get_modules_mapping():
- return {
- # convolutions
- nn.Conv1d: conv_flops_counter_hook,
- nn.Conv2d: conv_flops_counter_hook,
- mmcv.cnn.bricks.Conv2d: conv_flops_counter_hook,
- nn.Conv3d: conv_flops_counter_hook,
- mmcv.cnn.bricks.Conv3d: conv_flops_counter_hook,
- # activations
- nn.ReLU: relu_flops_counter_hook,
- nn.PReLU: relu_flops_counter_hook,
- nn.ELU: relu_flops_counter_hook,
- nn.LeakyReLU: relu_flops_counter_hook,
- nn.ReLU6: relu_flops_counter_hook,
- # poolings
- nn.MaxPool1d: pool_flops_counter_hook,
- nn.AvgPool1d: pool_flops_counter_hook,
- nn.AvgPool2d: pool_flops_counter_hook,
- nn.MaxPool2d: pool_flops_counter_hook,
- mmcv.cnn.bricks.MaxPool2d: pool_flops_counter_hook,
- nn.MaxPool3d: pool_flops_counter_hook,
- mmcv.cnn.bricks.MaxPool3d: pool_flops_counter_hook,
- nn.AvgPool3d: pool_flops_counter_hook,
- nn.AdaptiveMaxPool1d: pool_flops_counter_hook,
- nn.AdaptiveAvgPool1d: pool_flops_counter_hook,
- nn.AdaptiveMaxPool2d: pool_flops_counter_hook,
- nn.AdaptiveAvgPool2d: pool_flops_counter_hook,
- nn.AdaptiveMaxPool3d: pool_flops_counter_hook,
- nn.AdaptiveAvgPool3d: pool_flops_counter_hook,
- # normalizations
- nn.BatchNorm1d: norm_flops_counter_hook,
- nn.BatchNorm2d: norm_flops_counter_hook,
- nn.BatchNorm3d: norm_flops_counter_hook,
- nn.GroupNorm: norm_flops_counter_hook,
- nn.InstanceNorm1d: norm_flops_counter_hook,
- nn.InstanceNorm2d: norm_flops_counter_hook,
- nn.InstanceNorm3d: norm_flops_counter_hook,
- nn.LayerNorm: norm_flops_counter_hook,
- # FC
- nn.Linear: linear_flops_counter_hook,
- mmcv.cnn.bricks.Linear: linear_flops_counter_hook,
- # Upscale
- nn.Upsample: upsample_flops_counter_hook,
- # Deconvolution
- nn.ConvTranspose2d: deconv_flops_counter_hook,
- mmcv.cnn.bricks.ConvTranspose2d: deconv_flops_counter_hook,
- }
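-
-
-# Minimal usage sketch with a tiny throwaway model, assuming the module's own
-# imports resolve (e.g. run it as a module from the Space root); note that this
-# counter treats one multiply-add as a single FLOP (see flops_to_string above).
-if __name__ == '__main__':
-    demo_model = nn.Sequential(
-        nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 30 * 30, 10))
-    flops, params = get_model_complexity_info(
-        demo_model, (3, 32, 32), print_per_layer_stat=False)
-    print(flops, params)  # e.g. '0.0 GFLOPs' '72.23 k' for this tiny model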
diff --git a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/pfpascal.py b/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/pfpascal.py
deleted file mode 100644
index 90255b122f7881dbdd046dd4508169680ffe3556..0000000000000000000000000000000000000000
--- a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/data/pfpascal.py
+++ /dev/null
@@ -1,108 +0,0 @@
-r""" PF-PASCAL dataset """
-
-import os
-
-import scipy.io as sio
-import pandas as pd
-import numpy as np
-import torch
-
-from .dataset import CorrespondenceDataset
-
-
-class PFPascalDataset(CorrespondenceDataset):
-
- def __init__(self, benchmark, datapath, thres, split):
- r""" PF-PASCAL dataset constructor """
- super(PFPascalDataset, self).__init__(benchmark, datapath, thres, split)
-
- self.train_data = pd.read_csv(self.spt_path)
- self.src_imnames = np.array(self.train_data.iloc[:, 0])
- self.trg_imnames = np.array(self.train_data.iloc[:, 1])
- self.cls = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
- 'bus', 'car', 'cat', 'chair', 'cow',
- 'diningtable', 'dog', 'horse', 'motorbike', 'person',
- 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
- self.cls_ids = self.train_data.iloc[:, 2].values.astype('int') - 1
-
- if split == 'trn':
- self.flip = self.train_data.iloc[:, 3].values.astype('int')
- self.src_kps = []
- self.trg_kps = []
- self.src_bbox = []
- self.trg_bbox = []
- for src_imname, trg_imname, cls in zip(self.src_imnames, self.trg_imnames, self.cls_ids):
- src_anns = os.path.join(self.ann_path, self.cls[cls],
- os.path.basename(src_imname))[:-4] + '.mat'
- trg_anns = os.path.join(self.ann_path, self.cls[cls],
- os.path.basename(trg_imname))[:-4] + '.mat'
-
- src_kp = torch.tensor(read_mat(src_anns, 'kps')).float()
- trg_kp = torch.tensor(read_mat(trg_anns, 'kps')).float()
- src_box = torch.tensor(read_mat(src_anns, 'bbox')[0].astype(float))
- trg_box = torch.tensor(read_mat(trg_anns, 'bbox')[0].astype(float))
-
- src_kps = []
- trg_kps = []
- for src_kk, trg_kk in zip(src_kp, trg_kp):
- if len(torch.isnan(src_kk).nonzero()) != 0 or \
- len(torch.isnan(trg_kk).nonzero()) != 0:
- continue
- else:
- src_kps.append(src_kk)
- trg_kps.append(trg_kk)
- self.src_kps.append(torch.stack(src_kps).t())
- self.trg_kps.append(torch.stack(trg_kps).t())
- self.src_bbox.append(src_box)
- self.trg_bbox.append(trg_box)
-
- self.src_imnames = list(map(lambda x: os.path.basename(x), self.src_imnames))
- self.trg_imnames = list(map(lambda x: os.path.basename(x), self.trg_imnames))
-
- def __getitem__(self, idx):
- r""" Constructs and returns a batch for PF-PASCAL dataset """
- batch = super(PFPascalDataset, self).__getitem__(idx)
-
- # Object bounding-box (resized following self.img_size)
- batch['src_bbox'] = self.get_bbox(self.src_bbox, idx, batch['src_imsize'])
- batch['trg_bbox'] = self.get_bbox(self.trg_bbox, idx, batch['trg_imsize'])
- batch['pckthres'] = self.get_pckthres(batch, batch['trg_imsize'])
-
- # Horizontal flipping key-points during training
- if self.split == 'trn' and self.flip[idx]:
- self.horizontal_flip(batch)
- batch['flip'] = 1
- else:
- batch['flip'] = 0
-
- return batch
-
- def get_bbox(self, bbox_list, idx, imsize):
- r""" Returns object bounding-box """
- bbox = bbox_list[idx].clone()
- bbox[0::2] *= (self.img_size / imsize[0])
- bbox[1::2] *= (self.img_size / imsize[1])
- return bbox
-
- def horizontal_flip(self, batch):
- tmp = batch['src_bbox'][0].clone()
- batch['src_bbox'][0] = batch['src_img'].size(2) - batch['src_bbox'][2]
- batch['src_bbox'][2] = batch['src_img'].size(2) - tmp
-
- tmp = batch['trg_bbox'][0].clone()
- batch['trg_bbox'][0] = batch['trg_img'].size(2) - batch['trg_bbox'][2]
- batch['trg_bbox'][2] = batch['trg_img'].size(2) - tmp
-
- batch['src_kps'][0][:batch['n_pts']] = batch['src_img'].size(2) - batch['src_kps'][0][:batch['n_pts']]
- batch['trg_kps'][0][:batch['n_pts']] = batch['trg_img'].size(2) - batch['trg_kps'][0][:batch['n_pts']]
-
- batch['src_img'] = torch.flip(batch['src_img'], dims=(2,))
- batch['trg_img'] = torch.flip(batch['trg_img'], dims=(2,))
-
-
-def read_mat(path, obj_name):
- r""" Reads specified objects from Matlab data file. (.mat) """
- mat_contents = sio.loadmat(path)
- mat_obj = mat_contents[obj_name]
-
- return mat_obj
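-
-
-# Minimal usage sketch, assuming the PF-PASCAL images and annotations sit under a
-# hypothetical `datapath` and that CorrespondenceDataset accepts these arguments
-# ('img' threshold, 'test' split) as elsewhere in this repository; run it as a
-# module (e.g. `python -m data.pfpascal`) so the relative import above resolves.
-if __name__ == '__main__':
-    from torch.utils.data import DataLoader
-
-    dataset = PFPascalDataset(benchmark='pfpascal', datapath='Datasets_CHM',
-                              thres='img', split='test')
-    loader = DataLoader(dataset, batch_size=1, shuffle=False)
-    batch = next(iter(loader))
-    print(batch['src_kps'].shape, batch['pckthres'])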
diff --git a/spaces/tamirshlomi/pets/app.py b/spaces/tamirshlomi/pets/app.py
deleted file mode 100644
index a295cd6df9293632d2a50a3406af1f31cfe0e8c0..0000000000000000000000000000000000000000
--- a/spaces/tamirshlomi/pets/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from fastai.vision.all import *
-import gradio as gr
-
-def is_cat(x): return x[0].isupper()
-
-learn = load_learner('model.pkl')
-
-categories = ('Dog', 'Cat')
-
-def classify_image(img):
- pred, pred_idx, probs = learn.predict(img)
- return dict(zip(categories, map(float, probs)))
-
-image = gr.inputs.Image(shape=(192, 192))
-label = gr.outputs.Label()
-examples = [ 'cat.jpg', 'dog.jpg', 'dunno.jpg' ]
-
-iface = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
-iface.launch()
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Assetto Corsa Pc Crack 17 BETTER.md b/spaces/terfces0erbo/CollegeProjectV2/Assetto Corsa Pc Crack 17 BETTER.md
deleted file mode 100644
index fdde39ebef9237eb2d4b8ab984e68d3ba5762e58..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Assetto Corsa Pc Crack 17 BETTER.md
+++ /dev/null
@@ -1,6 +0,0 @@
-assetto corsa pc crack 17
Download ❤ https://bytlly.com/2uGjRN
-
-assetto corsa hud gone, In assetto_corsa.ini there is this section. ... force feddback of the PS4/G29 combination (i've played every racing game or so: F12015-16-17, Dirt Rally/4, PC1, AC. ... Assetto Corsa Competizione's September console update patch . ... Aug 23, 2020 · PC Assetto Corsa PC Mods General Discussion. 1fdad05405
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Express Burn Plus 9.11Beta Crack Serial Key Keygen.md b/spaces/terfces0erbo/CollegeProjectV2/Express Burn Plus 9.11Beta Crack Serial Key Keygen.md
deleted file mode 100644
index af674fd21d562871d7bb9e4bccab25e4c4c70339..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Express Burn Plus 9.11Beta Crack Serial Key Keygen.md
+++ /dev/null
@@ -1,350 +0,0 @@
-
-Express Burn Plus 9.11Beta Crack Serial Key Keygen: Everything You Need to Know
-If you are looking for a simple and effective software to burn data, audio, and video discs for your personal or professional needs, you may have come across Express Burn Plus 9.11Beta Crack Serial Key Keygen. This is a popular program that can help you create CDs, DVDs, and Blu-Rays with just a few clicks. You can also use Express Burn Plus 9.11Beta Crack Serial Key Keygen to create and copy disc images, as well as to customize your disc labels and covers.
-Express Burn Plus 9.11Beta Crack Serial Key keygen
Download Zip ►►► https://bytlly.com/2uGkv1
-However, before you download and use Express Burn Plus 9.11Beta Crack Serial Key Keygen, there are some things that you should know. First of all, Express Burn Plus 9.11Beta Crack Serial Key Keygen is not an official or authorized software from the developer, NCH Software. It is a modified version of the original Express Burn Plus software that has been cracked by hackers to bypass the activation process and unlock all the features for free. Therefore, using Express Burn Plus 9.11Beta Crack Serial Key Keygen may be illegal or unethical in some countries or regions, and may result in legal actions or penalties.
-Secondly, Express Burn Plus 9.11Beta Crack Serial Key Keygen may not be safe or reliable to use. Since it is not a genuine software from the developer, it may contain viruses or malware that can harm your computer or compromise your data. Moreover, Express Burn Plus 9.11Beta Crack Serial Key Keygen may not work properly with newer versions or updates of your operating system or other software products. You may also experience errors or problems with your disc burning process or your disc quality.
-Therefore, if you want to use Express Burn Plus 9.11Beta Crack Serial Key Keygen, you should do so at your own risk and responsibility. You should also make sure that you download Express Burn Plus 9.11Beta Crack Serial Key Keygen from a trusted and reputable source on the internet, and scan any file you download with an antivirus program before opening it.
-In this article, we will give you everything you need to know about Express Burn Plus 9.11Beta Crack Serial Key Keygen, including its features, benefits, drawbacks, alternatives, and how to download and use it.
-
-What are the Features of Express Burn Plus 9.11Beta Crack Serial Key Keygen?
-
-Express Burn Plus 9.11Beta Crack Serial Key Keygen has many features that make it a simple and effective disc burning software. Some of these features are:
-
-- Ultra fast burning: Express Burn Plus 9.11Beta Crack Serial Key Keygen can burn data, audio, and video discs in a matter of minutes, saving you time and resources.
-- Drag and drop functionality: Express Burn Plus 9.11Beta Crack Serial Key Keygen allows you to add files or folders to your disc by dragging and dropping them into the main window or by clicking on the add button at the bottom of the main window.
-- Data disc support: Express Burn Plus 9.11Beta Crack Serial Key Keygen can burn various types of data discs, such as ISO/Joliet/UDF, bootable discs, spanned discs, and multisession discs.
-- Audio CD support: Express Burn Plus 9.11Beta Crack Serial Key Keygen can burn audio CDs that can be played on any CD player or computer. It can also record audio with direct digital recording, ensuring perfect audio quality.
-- MP3 CD/DVD/Blu-Ray support: Express Burn Plus 9.11Beta Crack Serial Key Keygen can burn MP3 CDs, DVDs, and Blu-Rays that can store hundreds of songs in one disc. It can also normalize the volume of the tracks and add pauses between them.
-- Video DVD/Blu-Ray support: Express Burn Plus 9.11Beta Crack Serial Key Keygen can burn video DVDs and Blu-Rays that can be played on any DVD or Blu-Ray player or computer. It can also convert video files to PAL or NTSC formats, and create interactive menus for your discs.
-- Disc image support: Express Burn Plus 9.11Beta Crack Serial Key Keygen can create and copy disc images, such as ISO, BIN, and CUE files. It can also burn disc images to discs, or mount them as virtual drives.
-- Disc label and cover support: Express Burn Plus 9.11Beta Crack Serial Key Keygen can help you design and print your own disc labels and covers, using various templates and themes. You can also customize your disc labels and covers with your own images and text.
-
-What are the Benefits of Using Express Burn Plus 9.11Beta Crack Serial Key Keygen?
-
-Despite its drawbacks and risks, Express Burn Plus 9.11Beta Crack Serial Key Keygen has some benefits that may appeal to some users who need a simple and fast disc burning software for their projects. Some of these benefits are:
-
-- Free and unlimited: Express Burn Plus 9.11Beta Crack Serial Key Keygen allows you to use all the features and functions of Express Burn Plus without paying any fees or licenses.
-- Easy and quick: Express Burn Plus 9.11Beta Crack Serial Key Keygen has a user-friendly interface that lets you drag and drop files into the main window and burn them with just a few clicks.
-- Versatile and compatible: Express Burn Plus 9.11Beta Crack Serial Key Keygen can burn various types of data, audio, and video discs for different purposes and devices. It can also work with both 32-bit and 64-bit systems.
-
-What are the Drawbacks of Using Express Burn Plus 9.11Beta Crack Serial Key Keygen?
-
-As mentioned above, the main drawbacks are legal and security risks: using software without a valid license is illegal and unethical, and files offered as cracks or keygens on the internet may contain viruses or malware. You use it at your own risk and should always scan any download with an antivirus program before opening it.
-
-What are the Alternatives to Using Express Burn Plus 9.11Beta Crack Serial Key Keygen?
-
-
If you are not comfortable with using Express Burn Plus 9.11Beta Crack Serial Key Keygen or if you want to try other options for disc burning software, you can consider some alternatives that are available on the market. Some of these alternatives are:
-
-
-
-- BurnAware Free: This is a free disc burning software that can burn data, audio, and video discs, as well as create bootable discs, span discs, and erase discs. It has a simple interface that supports drag-and-drop functionality. It can also verify discs after burning them. You can download it from [https://www.burnaware.com/download.html](https://www.burnaware.com/download.html).
-
-- Ashampoo Burning Studio Free: This is another free disc burning software that can burn data, audio, and video discs, as well as create backups, rip music, and copy discs. It has a sleek interface that offers various templates and themes for disc labels and covers. It can also adjust the burning speed according to your system performance. You can download it from [https://www.ashampoo.com/en/usd/pin/7110/burning-software/burning-studio-free](https://www.ashampoo.com/en/usd/pin/7110/burning-software/burning-studio-free).
-
-- Nero Burning ROM: This is a premium disc burning software that can burn data, audio, and video discs, as well as create ISO images, secure discs, and split large files. It has a professional interface that offers advanced options and features for disc burning. It can also integrate with other Nero products for multimedia editing and management. You can buy it from [https://www.nero.com/enu/products/nero-burning-rom/index.php](https://www.nero.com/enu/products/nero-burning-rom/index.php).
-
-
-
-How to Download and Install Express Burn Plus 9.11Beta Crack Serial Key Keygen
-If you still want to try Express Burn Plus 9.11Beta Crack Serial Key Keygen, you will need to follow these steps:
-
-- Find a reliable website that offers Express Burn Plus 9.11Beta Crack Serial Key Keygen for free download. You can search for it on Google or other search engines, but be careful of fake or malicious websites that may trick you into downloading unwanted or harmful files.
-- Download Express Burn Plus 9.11Beta Crack Serial Key Keygen from the website of your choice. The file size should be around 5 MB.
-- Run Express Burn Plus 9.11Beta Crack Serial Key Keygen and select your language from the drop-down menu.
-- Click on the generate button to create a random serial number for Express Burn Plus.
-- Copy the serial number and paste it into the activation window of Express Burn Plus.
-- Click on the activate button to complete the activation process.
-- Enjoy using Express Burn Plus 9.11Beta Crack Serial Key Keygen for free!
-
-
-How to Use Express Burn Plus 9.11Beta Crack Serial Key Keygen
-Once you have installed and activated Express Burn Plus 9.11Beta Crack Serial Key Keygen, you can start using it to burn data, audio, and video discs for your projects. Here are some basic steps to use Express Burn Plus 9.11Beta Crack Serial Key Keygen:
-
-- Launch Express Burn Plus 9.11Beta Crack Serial Key Keygen from your desktop or start menu.
-- Select the type of disc you want to burn from the tabs at the top of the main window: Data Disc, Audio CD, MP3 CD/DVD/Blu-Ray, Video DVD/Blu-Ray.
-- Add files or folders to your disc by dragging and dropping them into the main window or by clicking on the add button at the bottom of the main window.
-- Adjust the settings for your disc according to your preferences: disc label, burn speed, file system format (ISO/Joliet/UDF), audio format (WAV/MP3/WMA), video format (PAL/NTSC), etc.
-- Insert a blank disc into your disc drive and click on the burn button at the bottom of the main window.
-- Wait for the burning process to complete and check your disc for errors or quality issues.
-- Eject your disc and label it with a marker or a sticker.
-
-
-Conclusion
-
-In conclusion, Express Burn Plus 9.11Beta Crack Serial Key Keygen is a program that can help you burn data, audio, and video discs for free with one click. However, you should be aware of the risks and consequences of using it illegally or unethically. You should also download it from a trusted source and scan it with an antivirus program before using it. If you are looking for other options, you can try some of the alternatives that we have mentioned in this article. We hope this article has given you everything you need to know about Express Burn Plus 9.11Beta Crack Serial Key Keygen.
-
-
\ No newline at end of file
diff --git a/spaces/thewise/Chat-W-Git/src/main.py b/spaces/thewise/Chat-W-Git/src/main.py
deleted file mode 100644
index e68441901ab07a39e24550d4a83ad0e9da3efc2f..0000000000000000000000000000000000000000
--- a/spaces/thewise/Chat-W-Git/src/main.py
+++ /dev/null
@@ -1,154 +0,0 @@
-
-import os
-import openai
-import sys
-sys.path.append('../..')
-from langchain.embeddings.openai import OpenAIEmbeddings
-from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
-from langchain.vectorstores import DocArrayInMemorySearch, Chroma
-from langchain.document_loaders import TextLoader, GitLoader
-from langchain.chains import RetrievalQA, ConversationalRetrievalChain
-from langchain.memory import ConversationBufferMemory, ConversationBufferWindowMemory
-from langchain.chat_models import ChatOpenAI
-from langchain.llms import OpenAI
-from langchain.prompts import PromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate, AIMessagePromptTemplate, ChatPromptTemplate
-import datetime
-import shutil
-
-
-# Setting up environment variables.
-# LANGCHAIN_TRACING_V2 is set explicitly; the remaining keys are expected to be
-# provided by the environment. The bare lookups below fail fast with a KeyError
-# if any of them is missing, instead of failing later inside the LangChain/OpenAI calls.
-os.environ['LANGCHAIN_TRACING_V2'] = "True"
-os.environ['LANGCHAIN_ENDPOINT']
-os.environ['LANGCHAIN_API_KEY']
-os.environ['LANGCHAIN_PROJECT']
-os.environ["OPENAI_API_KEY"]
-
-
-# Function to load the data from github using langchain with string type url, string type branch, string type file_filter
-def loader(url: str, branch: str, file_filter: str):
- repo_path = "./github_repo"
- if os.path.exists(repo_path):
- shutil.rmtree(repo_path)
-
- loader = GitLoader(
- clone_url= url,
- repo_path="./github_repo/",
- branch=branch,
-        file_filter=lambda file_path: file_path.endswith(tuple(file_filter.split(','))) # Keep only files whose extension appears in the comma-separated file_filter; the whole repo is still cloned
- )
-
- data = loader.load()
- return data
-
-
-#Function to split the data into chunks using recursive character text splitter
-def split_data(data):
- splitter = RecursiveCharacterTextSplitter(
- chunk_size=1000,
- chunk_overlap=150,
- length_function=len, # Function to measure the length of chunks while splitting
- add_start_index=True # Include the starting position of each chunk in metadata
- )
- chunks = splitter.split_documents(data)
- return chunks
-
-#Function to ingest the chunks into a vectorstore of doc
-def ingest_chunks(chunks):
- embedding = OpenAIEmbeddings()
- vector_store = DocArrayInMemorySearch.from_documents(chunks, embedding)
-
- repo_path = "./github_repo"
- if os.path.exists(repo_path):
- shutil.rmtree(repo_path)
-
- return vector_store
-
-#Retrieval function to get the data from the database and reply to the user
-def retreival(vector_store, k):
- # Selecting the right model
- current_date = datetime.datetime.now().date()
- if current_date < datetime.date(2023, 9, 2):
- llm_name = "gpt-3.5-turbo-0301"
- else:
- llm_name = "gpt-3.5-turbo"
-
- #Creating LLM
- llm = ChatOpenAI(model=llm_name, temperature=0)
-
- # Define the system message template
-    #Adding CHAT HISTORY to the system template explicitly: chat history normally only goes to condense the human question with background (not into the template), while the system template goes straight to the LLM chain
-    #Explicitly adding chat history lets the chain access previous turns and answer questions like "what was my previous question?"
-    #This also sends the chat history to the LLM along with the context and question
- system_template = """You're a code summarisation assistant. Given the following extracted parts of a long document as "CONTEXT" create a final answer.
- If you don't know the answer, just say that you don't know. Don't try to make up an answer.
- Only If asked to create a "DIAGRAM" for code use "MERMAID SYNTAX LANGUAGE" in your answer from "CONTEXT" and "CHAT HISTORY" with a short explanation of diagram.
-
- CONTEXT: {context}
- =======
- CHAT HISTORY: {chat_history}
- =======
- FINAL ANSWER:"""
-
- human_template = """{question}"""
-
- # ai_template = """
- # FINAL ANSWER:"""
-
- # Create the chat prompt templates
- messages = [
- SystemMessagePromptTemplate.from_template(system_template),
- HumanMessagePromptTemplate.from_template(human_template)
- # AIMessagePromptTemplate.from_template(ai_template)
- ]
-
- PROMPT = ChatPromptTemplate.from_messages(messages)
-
- #Creating memory
- # memory = ConversationBufferMemory(
- # memory_key="chat_history",
- # input_key="question",
- # output_key="answer",
- # return_messages=True)
-
- memory = ConversationBufferWindowMemory(
- memory_key="chat_history",
- input_key="question",
- output_key="answer",
- return_messages=True,
- k=5)
-
- #Creating the retriever, this can also be a contextual compressed retriever
- retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": k}) #search_type can be "similarity" or "mmr"
-
- chain = ConversationalRetrievalChain.from_llm(
- llm=llm,
- chain_type="stuff", #chain type can be refine, stuff, map_reduce
- retriever=retriever,
- memory=memory,
-        return_source_documents=True, #When this is enabled, the output carries extra keys (answer, source_documents), so the input and output keys have to be specified in the memory for it to work
- combine_docs_chain_kwargs=dict({"prompt": PROMPT})
- )
-
- return chain
-
-#Class using all above components to create QA system
-class ConversationalResponse:
- def __init__(self, url, branch, file_filter):
- self.url = url
- self.branch = branch
- self.file_filter = file_filter
- self.data = loader(self.url, self.branch, self.file_filter)
- self.chunks = split_data(self.data)
- self.vector_store = ingest_chunks(self.chunks)
- self.chain_type = "stuff"
- self.k = 10
- self.chain = retreival(self.vector_store, self.k)
-
- def __call__(self, question):
- agent = self.chain(question)
- return agent['answer']
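-
-# Example usage (a minimal sketch, not part of the original Space): the repository
-# URL, branch, and file filter below are hypothetical placeholders, and the
-# OPENAI_API_KEY / LANGCHAIN_* environment variables must be set beforehand.
-#
-# chat = ConversationalResponse(
-#     url="https://github.com/some-user/some-repo",  # hypothetical repository
-#     branch="main",
-#     file_filter=".py,.md",
-# )
-# print(chat("What does this repository do?"))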
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Haunted - 3D Full Movie Download In Hd Mp4l.md b/spaces/tialenAdioni/chat-gpt-api/logs/Haunted - 3D Full Movie Download In Hd Mp4l.md
deleted file mode 100644
index 3008065cb499a7ad7e0af1981d1561ef1a0ffbc9..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Haunted - 3D Full Movie Download In Hd Mp4l.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-How to Watch Haunted - 3D Full Movie Online in HD Quality
-Haunted - 3D is a 2011 Hindi horror movie directed by Vikram Bhatt and starring Mahaakshay Chakraborty and Tia Bajpai. The movie tells the story of a realtor who finds himself in a haunted mansion and travels back in time to save a girl from an evil spirit. The movie was India's first stereoscopic 3D horror film and received positive reviews from critics and audiences.
-Haunted - 3D Full Movie Download In Hd Mp4l
DOWNLOAD ✪ https://urlcod.com/2uK6VM
-If you are looking for a thrilling and spooky movie to watch online, then Haunted - 3D is a good choice. But how can you watch Haunted - 3D full movie online in HD quality? Here are some ways to do it:
-
-- Download Haunted - 3D full movie from Filmywap: Filmywap is a popular website that offers free downloads of Bollywood movies in various formats, including HD Mp4. You can find Haunted - 3D full movie on Filmywap and download it to your device. However, be aware that Filmywap is an illegal website that may contain viruses and malware. Downloading movies from Filmywap may also violate the copyright laws and get you in trouble.
-- Stream Haunted - 3D full movie on Dailymotion: Dailymotion is a video-sharing platform that hosts a variety of content, including movies and TV shows. You can watch Haunted - 3D full movie on Dailymotion by searching for the movie title and selecting the playlist that contains all the parts of the movie. However, be aware that Dailymotion may not have the best quality and may have ads and pop-ups. Streaming movies on Dailymotion may also violate the copyright laws and get you in trouble.
-- Watch Haunted - 3D full movie on OTT platforms: OTT platforms are online streaming services that offer legal and high-quality content for a subscription fee or a pay-per-view basis. You can watch Haunted - 3D full movie on OTT platforms like Amazon Prime Video, Netflix, Hotstar, Zee5, etc. by signing up for an account and paying for the movie or the subscription plan. This is the safest and most convenient way to watch Haunted - 3D full movie online in HD quality.
-
-Haunted - 3D is a movie that will keep you on the edge of your seat with its gripping story and stunning visuals. If you want to watch Haunted - 3D full movie online in HD quality, then choose one of the methods above and enjoy the movie.
-
-Haunted - 3D is not just a horror movie, but also a love story that spans across two different eras. The movie explores the themes of reincarnation, destiny, and sacrifice. The movie also showcases the impressive 3D effects that enhance the horror and the romance. The movie has some memorable songs composed by Chirantan Bhatt, such as "Tum Ho Mera Pyaar", "Jaaniya", and "Mujhe De De Har Gham Tera". The movie was a box office success and received several awards and nominations, such as the Stardust Award for Best Horror Film, the Zee Cine Award for Best Sound Design, and the IIFA Award for Best Special Effects.
-If you are a fan of horror movies and love stories, then Haunted - 3D is a must-watch for you. You can watch Haunted - 3D full movie online in HD quality by following one of the methods mentioned above. Don't miss this chance to experience the thrill and the romance of Haunted - 3D.
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Live OS How to Use Tails the Privacy-Focused Linux Live System.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Live OS How to Use Tails the Privacy-Focused Linux Live System.md
deleted file mode 100644
index 70cb9b3fa3bb1b4b58496185f072eb938de86d4f..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Live OS How to Use Tails the Privacy-Focused Linux Live System.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-How to Download and Use a Live OS
-A live OS is an operating system that runs directly from a removable storage device such as a USB flash drive or a DVD, without installing it on the computer's hard drive. A live OS can be used for various purposes, such as testing, troubleshooting, data recovery, privacy, security, or simply trying out a different operating system without affecting your existing one.
-In this article, we will show you how to download and use a live OS. We will also discuss some of the advantages and disadvantages of using a live OS.
-download live os
Download File ✫ https://bltlly.com/2uOl6r
- What is a Live OS and Why Use It?
-A live OS is a complete bootable computer installation that includes an operating system and some applications. It runs entirely from RAM, which means it does not write any data to the storage device it is loaded from. This makes it faster and more secure than a regular operating system that runs from a hard drive.
-There are many reasons why you might want to use a live OS. Some of them are:
-
-- Portability: You can carry your preferred operating system, applications, configuration, and personal files with you on a small device and use it on any compatible computer.
-- Privacy: You can work with sensitive documents or browse the internet without leaving any traces on the computer's hard drive or online. You can also encrypt your data and settings on the storage device for extra protection.
-- Security: You can avoid malware, viruses, spyware, and other threats that might infect your regular operating system. You can also update your live OS regularly with security patches and fixes.
-- Testing: You can try out different operating systems or software without installing them or making any changes to your computer's configuration. You can also test how your computer performs under different conditions.
-- Recovery: You can use a live OS to access your files or repair your system if your regular operating system fails to boot or gets corrupted.
-
-Some examples of live OS are:
-
-- Ubuntu: One of the most popular Linux distributions that offers a user-friendly interface and a large collection of software.
-- Tails: A Linux distribution that focuses on preserving privacy and anonymity by routing all internet traffic through Tor.
-- Windows To Go: A feature of Windows 8 Enterprise and Windows 10 Enterprise that allows you to create a portable version of Windows on a USB drive.
-- PrimeOS: An Android-based operating system that lets you run Android apps and games on your PC.
-
- How to Download a Live OS
-The first step to use a live OS is to download it from its official website or a trusted source.
The next step is to create a bootable USB or DVD with the live OS that you downloaded. This will allow you to start your computer from the live OS without installing it on your hard drive. To create a bootable USB or DVD, you will need a tool that can write the ISO file to the USB or DVD in a way that makes it bootable. There are many tools available for this purpose, but some of the most popular ones are:
-
-- Etcher: A cross-platform tool that can create bootable USB drives for Windows, Linux, and Mac OS X.
-- Rufus: A Windows-only tool that can create bootable USB drives for Windows and Linux.
-- UNetbootin: A cross-platform tool that can create bootable USB drives for Linux and other operating systems.
-
-To use any of these tools, you will need to follow these general steps:
-
-- Download and install the tool of your choice on your computer.
-- Insert a USB flash drive or a DVD into your computer. Make sure it has enough space and is formatted correctly. Note that all the data on the USB or DVD will be erased during the process.
-- Launch the tool and select the ISO file of the live OS that you downloaded.
-- Select the USB or DVD as the target device to write the ISO file to.
-- Click the button to start the process and wait for it to finish.
-
-Once the process is done, you will have a bootable USB or DVD with the live OS on it. You can now use it to boot your computer from the live OS.
- How to Boot from the Live OS
-To boot from the live OS, you will need to insert the USB or DVD into the computer and restart it. However, simply restarting your computer may not be enough, as it may still try to boot from your hard drive by default. To change this, you will need to access the boot menu or the UEFI/BIOS settings of your computer and select the USB or DVD as the first boot device. The exact steps to do this may vary depending on your computer model and manufacturer, but here are some general guidelines:
-
-- To access the boot menu, you will need to press a specific key right after you turn on your computer, before Windows starts loading. The key may be different for different computers, but some common ones are F2, F9, F10, F12, Esc, or Del. You may see a message on your screen indicating which key to press.
-- To access the UEFI/BIOS settings, you will need to press another specific key right after you turn on your computer, before Windows starts loading. The key may be different for different computers, but some common ones are F1, F2, F10, Del, or Esc. You may see a message on your screen indicating which key to press.
-- Once you access the boot menu or the UEFI/BIOS settings, look for an option that lets you choose the boot device or change the boot order. Select the USB or DVD as the first boot device and save your changes.
-
-After you have changed the boot device or order, restart your computer and it should boot from the live OS. You will see a welcome screen where you can choose your language and other options. You can then click on Try Ubuntu (or Try [name of live OS]) to start using it without installing it on your hard drive.
- Advantages and Disadvantages of Using a Live OS
-Using a live OS can have many benefits, but also some drawbacks. Here are some of them:
-
-| Advantages | Disadvantages |
-| --- | --- |
-| You can use a different operating system without installing it on your hard drive or affecting your existing one. | You may experience slower performance than using an installed operating system due to lower read/write speed of USB or DVD. |
-| You can carry your operating system, applications, configuration, and personal files with you on a small device and use it on any compatible computer. | You may encounter compatibility issues with some hardware or software that are not supported by the live OS. |
-| You can work with sensitive documents or browse the internet without leaving any traces on the computer's hard drive or online. You can also encrypt your data and settings on the storage device for extra protection. | You may lose your data and settings if you forget to save them on the storage device or if the device gets damaged or lost. |
-| You can avoid malware, viruses, spyware, and other threats that might infect your regular operating system. You can also update your live OS regularly with security patches and fixes. | You may still be vulnerable to some attacks that target the firmware or the hardware of your computer. |
-| You can try out different operating systems or software without installing them or making any changes to your computer's configuration. You can also test how your computer performs under different conditions. | You may not get the full functionality or features of the operating system or software that you are trying out. |
-| You can use a live OS to access your files or repair your system if your regular operating system fails to boot or gets corrupted. | You may not be able to access some files or partitions that are encrypted or formatted differently by your regular operating system. |
-
- Conclusion
-A live OS is a useful tool that can help you with various tasks and scenarios. It can give you more flexibility, privacy, security, and convenience than using a regular operating system. However, it also has some limitations and challenges that you should be aware of before using it.
-Here are some tips and recommendations for using a live OS:
-
-- Always backup your data and settings on the storage device or another location before using a live OS.
-- Always verify the integrity of the downloaded ISO file and the written USB or DVD before using a live OS.
-- Always use a trusted source and a reputable tool to download and create a live OS.
-- Always check the compatibility and requirements of the live OS with your computer and hardware before using it.
-- Always use a secure and reliable storage device to store and run your live OS.
-
- FAQs
-What are some popular live OS?
-Some of the most popular live OS are Ubuntu, Tails, Windows To Go, PrimeOS, Knoppix, Puppy Linux, Kali Linux, and Fedora. You can find more live OS on websites like DistroWatch or LiveCD List.
- How can I save my data and settings on a live OS?
-Some live OS offer the option to create a persistent storage space on the same USB or DVD that you use to run the live OS. This will allow you to save your data and settings on the storage device and access them the next time you use the live OS. However, this option may reduce the performance and lifespan of your storage device. Alternatively, you can use another storage device or an online service to save your data and settings.
- How can I install a live OS on my hard drive?
-Some live OS offer the option to install them on your hard drive alongside or instead of your existing operating system. This will allow you to use the live OS as a regular operating system without needing a USB or DVD. However, this option may require you to partition your hard drive and may affect your existing operating system. You should always backup your data and settings before installing a live OS on your hard drive.
- What are some alternatives to using a live OS?
-If you don't want to use a live OS, you can use other methods to run a different operating system on your computer. Some of them are:
-
-- Dual boot: This means installing two or more operating systems on separate partitions of your hard drive and choosing which one to boot from when you start your computer. This will give you better performance and functionality than using a live OS, but it will also require more disk space and maintenance.
-- Virtual machine: This means running an operating system inside another operating system using a software like VirtualBox, VMware, or Hyper-V. This will allow you to run multiple operating systems simultaneously without affecting each other, but it will also require more RAM and CPU resources.
-- Emulator: This means running an operating system that mimics another operating system using a software like Wine, DOSBox, or BlueStacks. This will allow you to run applications or games that are designed for another operating system, but it may not support all features or functions.
-
- How can I troubleshoot problems with a live OS?
-If you encounter any problems with using a live OS, you can try some of these steps:
-
-- Check if the problem is caused by the USB or DVD that you use to run the live OS. Try using another USB or DVD or try writing the ISO file again using a different tool.
-- Check if the problem is caused by the live OS itself. Try using another live OS or try updating the live OS with the latest version or patches.
-- Check if the problem is caused by the computer or hardware that you use to run the live OS. Try using another computer or hardware or try adjusting the settings or drivers of your computer or hardware.
-- Check if the problem is caused by the software or application that you use on the live OS. Try using another software or application or try installing or uninstalling the software or application.
-- Check if the problem is caused by the internet or network connection that you use on the live OS. Try using another internet or network connection or try changing the settings or configuration of your internet or network connection.
-- Check if the problem is caused by the user error or misunderstanding. Try reading the documentation or manual of the live OS or try searching for online tutorials or forums for help.
-
- I hope this article has helped you understand how to download and use a live OS. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/Sonic 3 And Knuckles Hacks Download.md b/spaces/tioseFevbu/cartoon-converter/Sonic 3 And Knuckles Hacks Download.md
deleted file mode 100644
index 04d122f3384d73e2a4ae58c27c6a25f723917553..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/Sonic 3 And Knuckles Hacks Download.md
+++ /dev/null
@@ -1,53 +0,0 @@
-## Sonic 3 And Knuckles Hacks Download
-
-
-
-**Download ::: [https://vercupalo.blogspot.com/?d=2tvYlT](https://vercupalo.blogspot.com/?d=2tvYlT)**
-
-
-
-# How to Download and Play Sonic 3 and Knuckles Hacks
-
-
-
-Sonic 3 and Knuckles is one of the most beloved games in the Sonic the Hedgehog series, featuring the iconic duo of Sonic and Tails, as well as the debut of Knuckles the Echidna. The game combines the levels and gameplay of Sonic 3 and Sonic & Knuckles into one epic adventure, with multiple endings and secrets to discover.
-
-
-
-But did you know that you can also play various hacks of Sonic 3 and Knuckles, created by fans who modified the original game to add new features, levels, characters, graphics, music, and more? These hacks can offer a fresh and exciting experience for Sonic fans who want to revisit or challenge themselves with this classic game.
-
-
-
-In this article, we will show you how to download and play some of the best Sonic 3 and Knuckles hacks available online. You will need a Sega Genesis emulator, such as Kega Fusion or Gens, and a ROM file of Sonic 3 and Knuckles. You can find these online with a simple web search, but make sure you own a copy of the original game before downloading any ROMs.
-
-
-
-Once you have your emulator and ROM ready, you can browse through the various hacks of Sonic 3 and Knuckles on websites such as Romhacking.net[^1^] or GameBanana[^2^]. These websites offer a wide range of hacks, from simple improvements and bug fixes to complete overhauls and conversions. You can read the descriptions, reviews, screenshots, and videos of each hack to see if it suits your preferences.
-
-
-
-To download a hack, you will need to download a patch file that contains the modifications made by the hacker. The patch file is usually in IPS or xdelta format. You will also need a patching program, such as Lunar IPS or xdelta UI, to apply the patch to your ROM file. Follow the instructions provided by the hacker or the patching program to make sure you patch the correct ROM file with the correct patch file.
-
-
-
-After patching your ROM file, you can load it on your emulator and enjoy playing your hacked version of Sonic 3 and Knuckles. Some hacks may have cheat codes or options that you can access from the title screen or the pause menu. You can also use save states or rewind features on your emulator to enhance your gameplay experience.
-
-
-
-Some examples of popular Sonic 3 and Knuckles hacks are:
-
-
-
-- Sonic 3 Complete[^1^]: This hack restores changes made to Sonic 3 levels so they play and sound like they did in Sonic 3. It also adds a game selection screen that allows you to play Sonic 3 Alone, Sonic & Knuckles, Sonic 3 & Knuckles, or Blue Sphere. It also fixes several bugs and offers many options to customize your game.
-
-- Cooler Sonic 3 and Knuckles[^3^]: This hack adds new graphics, music, sounds, animations, moves, abilities, bosses, enemies, zones, special stages, and more to Sonic 3 and Knuckles. It also features new playable characters such as Shadow, Silver, Blaze, Metal Sonic, Amy Rose, Mighty, Ray, Espio, Vector, Charmy, Nack/Fang, Bean, Bark, Jet, Wave, Storm, Rouge, Omega, Chaos Zero/Perfect Chaos.
-
-- Sonic 3 & Knuckles Battle Race[^4^]: This hack is a 2-player only hack where the goal is to outrun your opponent and get the most points before the level ends. You will die if you go off-screen. You can also play some minigames that have their own rules.
-
-- Sonic & Knuckles & Sonic 3 (Knuckles+Tails Hack)[^5^]: This hack allows you to play as Knuckles and Tails together in Sonic 3 and Knuckles. You can switch between them at any time by pressing A+B+C on controller 2. You can also use their abilities cooperatively or competitively.
-
-
-
-We hope this article has helped you learn how to download and play Sonic 3 and Knuckles hacks. Have fun exploring the different ways fans have reimagined this classic game!
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/!!TOP!! Download Shadow The Movie.md b/spaces/tioseFevbu/cartoon-converter/scripts/!!TOP!! Download Shadow The Movie.md
deleted file mode 100644
index 7fcaba1f2025431be40e72b3a96a3656338547e2..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/!!TOP!! Download Shadow The Movie.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-Download Shadow The Movie: A Stunning Visual Feast of Wuxia Action
-If you are a fan of wuxia movies, you might want to download Shadow The Movie, a 2019 film by acclaimed Chinese director Zhang Yimou. Shadow is a tale of court intrigue and martial arts set in ancient China, where a king's commander has a secret double who challenges a rival general to a duel. The film is filled with stunning images of black-and-white contrast, intricate fight scenes, and metaphorical violence.
-Shadow The Movie is based on the legend of the Three Kingdoms period, when China was divided into three warring states. The film focuses on the kingdom of Pei, ruled by a snippy and cowardly king (Zhang Kai) who is obsessed with reclaiming a city that was lost to the general Yang (Hu Jun) of another kingdom. The king's commander (Deng Chao) is actually a double named Jing, who was trained from childhood to impersonate the real commander, who is hiding in a secret chamber recovering from a wound. Jing's only ally is the commander's wife Madam (Sun Li), who develops feelings for him as they plot to overthrow the king and defeat Yang.
-Download Shadow The Movie
DOWNLOAD ►►► https://urlcod.com/2uHys4
-The film's title refers to the shadowy nature of Jing's identity, as well as the visual style of the film, which uses shades of gray and black to create a striking aesthetic. The film also uses elements of Chinese ink painting and calligraphy to enhance the mood and symbolism. The film's action scenes are choreographed with precision and grace, using weapons such as umbrellas, blades, and crossbows. The film also explores themes of loyalty, honor, deception, and love.
-Shadow The Movie was praised by critics for its cinematography, direction, and performances. The film won several awards, including four Golden Horse Awards for Best Director, Best Cinematography, Best Visual Effects, and Best Art Direction. The film also received nominations for Best Foreign Language Film at the Golden Globes and the BAFTAs.
-If you want to download Shadow The Movie, you can find it on various streaming platforms such as Prime Video, YouTube, or iTunes. You can also buy or rent it on DVD or Blu-ray. However you choose to watch it, Shadow The Movie is a must-see for wuxia fans and anyone who appreciates stunning visual storytelling.
-
-One of the most praised aspects of Shadow The Movie is its visual style, which uses a monochrome palette of grays and blacks to create a stunning contrast with the splashes of red blood. The film draws inspiration from Chinese ink painting and calligraphy, using light and shadow to create depth and texture. The film also uses water as a recurring motif, symbolizing both life and death, as well as reflecting the characters' emotions and intentions.
-The film's action scenes are also inventive and thrilling, using weapons such as umbrellas, blades, and crossbows that can launch metal feathers. The film's climax features a spectacular battle between two armies on a rainy day, with umbrellas flying in the air and water splashing on the ground. The film also showcases Zhang Yimou's mastery of choreography and editing, creating fluid and graceful movements that enhance the drama and tension.
-
-Shadow The Movie is not only a feast for the eyes, but also a compelling story of human nature, power, and deception. The film explores the themes of loyalty, honor, identity, and love, as well as the moral ambiguity of war and politics. The film's characters are complex and flawed, each with their own motivations and secrets. The film also raises questions about the nature of reality and illusion, as well as the role of fate and choice.
-If you are looking for a wuxia film that combines stunning visuals, exciting action, and engaging storytelling, you should download Shadow The Movie. It is a film that will keep you on the edge of your seat and leave you breathless.
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py
deleted file mode 100644
index 3db248baac44bcfc3e801f4b67a6b368eb1b4af4..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/colorama/ansitowin32.py
+++ /dev/null
@@ -1,266 +0,0 @@
-# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
-import re
-import sys
-import os
-
-from .ansi import AnsiFore, AnsiBack, AnsiStyle, Style, BEL
-from .winterm import WinTerm, WinColor, WinStyle
-from .win32 import windll, winapi_test
-
-
-winterm = None
-if windll is not None:
- winterm = WinTerm()
-
-
-class StreamWrapper(object):
- '''
- Wraps a stream (such as stdout), acting as a transparent proxy for all
- attribute access apart from method 'write()', which is delegated to our
- Converter instance.
- '''
- def __init__(self, wrapped, converter):
- # double-underscore everything to prevent clashes with names of
- # attributes on the wrapped stream object.
- self.__wrapped = wrapped
- self.__convertor = converter
-
- def __getattr__(self, name):
- return getattr(self.__wrapped, name)
-
- def __enter__(self, *args, **kwargs):
- # special method lookup bypasses __getattr__/__getattribute__, see
- # https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit
- # thus, contextlib magic methods are not proxied via __getattr__
- return self.__wrapped.__enter__(*args, **kwargs)
-
- def __exit__(self, *args, **kwargs):
- return self.__wrapped.__exit__(*args, **kwargs)
-
- def __setstate__(self, state):
- self.__dict__ = state
-
- def __getstate__(self):
- return self.__dict__
-
- def write(self, text):
- self.__convertor.write(text)
-
- def isatty(self):
- stream = self.__wrapped
- if 'PYCHARM_HOSTED' in os.environ:
- if stream is not None and (stream is sys.__stdout__ or stream is sys.__stderr__):
- return True
- try:
- stream_isatty = stream.isatty
- except AttributeError:
- return False
- else:
- return stream_isatty()
-
- @property
- def closed(self):
- stream = self.__wrapped
- try:
- return stream.closed
- # AttributeError in the case that the stream doesn't support being closed
- # ValueError for the case that the stream has already been detached when atexit runs
- except (AttributeError, ValueError):
- return True
-
-
-class AnsiToWin32(object):
- '''
- Implements a 'write()' method which, on Windows, will strip ANSI character
- sequences from the text, and if outputting to a tty, will convert them into
- win32 function calls.
- '''
- ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?') # Control Sequence Introducer
- ANSI_OSC_RE = re.compile('\001?\033\\]([^\a]*)(\a)\002?') # Operating System Command
-
- def __init__(self, wrapped, convert=None, strip=None, autoreset=False):
- # The wrapped stream (normally sys.stdout or sys.stderr)
- self.wrapped = wrapped
-
- # should we reset colors to defaults after every .write()
- self.autoreset = autoreset
-
- # create the proxy wrapping our output stream
- self.stream = StreamWrapper(wrapped, self)
-
- on_windows = os.name == 'nt'
- # We test if the WinAPI works, because even if we are on Windows
- # we may be using a terminal that doesn't support the WinAPI
- # (e.g. Cygwin Terminal). In this case it's up to the terminal
- # to support the ANSI codes.
- conversion_supported = on_windows and winapi_test()
-
- # should we strip ANSI sequences from our output?
- if strip is None:
- strip = conversion_supported or (not self.stream.closed and not self.stream.isatty())
- self.strip = strip
-
-        # should we convert ANSI sequences into win32 calls?
- if convert is None:
- convert = conversion_supported and not self.stream.closed and self.stream.isatty()
- self.convert = convert
-
- # dict of ansi codes to win32 functions and parameters
- self.win32_calls = self.get_win32_calls()
-
- # are we wrapping stderr?
- self.on_stderr = self.wrapped is sys.stderr
-
- def should_wrap(self):
- '''
- True if this class is actually needed. If false, then the output
- stream will not be affected, nor will win32 calls be issued, so
- wrapping stdout is not actually required. This will generally be
- False on non-Windows platforms, unless optional functionality like
- autoreset has been requested using kwargs to init()
- '''
- return self.convert or self.strip or self.autoreset
-
- def get_win32_calls(self):
- if self.convert and winterm:
- return {
- AnsiStyle.RESET_ALL: (winterm.reset_all, ),
- AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT),
- AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL),
- AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL),
- AnsiFore.BLACK: (winterm.fore, WinColor.BLACK),
- AnsiFore.RED: (winterm.fore, WinColor.RED),
- AnsiFore.GREEN: (winterm.fore, WinColor.GREEN),
- AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW),
- AnsiFore.BLUE: (winterm.fore, WinColor.BLUE),
- AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA),
- AnsiFore.CYAN: (winterm.fore, WinColor.CYAN),
- AnsiFore.WHITE: (winterm.fore, WinColor.GREY),
- AnsiFore.RESET: (winterm.fore, ),
- AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True),
- AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True),
- AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True),
- AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True),
- AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True),
- AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True),
- AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True),
- AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True),
- AnsiBack.BLACK: (winterm.back, WinColor.BLACK),
- AnsiBack.RED: (winterm.back, WinColor.RED),
- AnsiBack.GREEN: (winterm.back, WinColor.GREEN),
- AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW),
- AnsiBack.BLUE: (winterm.back, WinColor.BLUE),
- AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA),
- AnsiBack.CYAN: (winterm.back, WinColor.CYAN),
- AnsiBack.WHITE: (winterm.back, WinColor.GREY),
- AnsiBack.RESET: (winterm.back, ),
- AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True),
- AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True),
- AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True),
- AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True),
- AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True),
- AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True),
- AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True),
- AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True),
- }
- return dict()
-
- def write(self, text):
- if self.strip or self.convert:
- self.write_and_convert(text)
- else:
- self.wrapped.write(text)
- self.wrapped.flush()
- if self.autoreset:
- self.reset_all()
-
-
- def reset_all(self):
- if self.convert:
- self.call_win32('m', (0,))
- elif not self.strip and not self.stream.closed:
- self.wrapped.write(Style.RESET_ALL)
-
-
- def write_and_convert(self, text):
- '''
- Write the given text to our wrapped stream, stripping any ANSI
- sequences from the text, and optionally converting them into win32
- calls.
- '''
- cursor = 0
- text = self.convert_osc(text)
- for match in self.ANSI_CSI_RE.finditer(text):
- start, end = match.span()
- self.write_plain_text(text, cursor, start)
- self.convert_ansi(*match.groups())
- cursor = end
- self.write_plain_text(text, cursor, len(text))
-
-
- def write_plain_text(self, text, start, end):
- if start < end:
- self.wrapped.write(text[start:end])
- self.wrapped.flush()
-
-
- def convert_ansi(self, paramstring, command):
- if self.convert:
- params = self.extract_params(command, paramstring)
- self.call_win32(command, params)
-
-
- def extract_params(self, command, paramstring):
- if command in 'Hf':
- params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';'))
- while len(params) < 2:
- # defaults:
- params = params + (1,)
- else:
- params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0)
- if len(params) == 0:
- # defaults:
- if command in 'JKm':
- params = (0,)
- elif command in 'ABCD':
- params = (1,)
-
- return params
-
-
- def call_win32(self, command, params):
- if command == 'm':
- for param in params:
- if param in self.win32_calls:
- func_args = self.win32_calls[param]
- func = func_args[0]
- args = func_args[1:]
- kwargs = dict(on_stderr=self.on_stderr)
- func(*args, **kwargs)
- elif command in 'J':
- winterm.erase_screen(params[0], on_stderr=self.on_stderr)
- elif command in 'K':
- winterm.erase_line(params[0], on_stderr=self.on_stderr)
- elif command in 'Hf': # cursor position - absolute
- winterm.set_cursor_position(params, on_stderr=self.on_stderr)
- elif command in 'ABCD': # cursor position - relative
- n = params[0]
- # A - up, B - down, C - forward, D - back
- x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command]
- winterm.cursor_adjust(x, y, on_stderr=self.on_stderr)
-
-
- def convert_osc(self, text):
- for match in self.ANSI_OSC_RE.finditer(text):
- start, end = match.span()
- text = text[:start] + text[end:]
- paramstring, command = match.groups()
- if command == BEL:
- if paramstring.count(";") == 1:
- params = paramstring.split(";")
- # 0 - change title and icon (we will only change title)
- # 1 - change icon (we don't support this)
- # 2 - change title
- if params[0] in '02':
- winterm.set_title(params[1])
- return text
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/__main__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/__main__.py
deleted file mode 100644
index 90cafd93426f6fb2e8ad57b140b7ae163a67a4a4..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/__main__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-"""
- pygments.__main__
- ~~~~~~~~~~~~~~~~~
-
- Main entry point for ``python -m pygments``.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import sys
-from pip._vendor.pygments.cmdline import main
-
-try:
- sys.exit(main(sys.argv))
-except KeyboardInterrupt:
- sys.exit(1)
diff --git a/spaces/tomandandy/MusicGen3/CONTRIBUTING.md b/spaces/tomandandy/MusicGen3/CONTRIBUTING.md
deleted file mode 100644
index 55b99140204d785d572ada9761dd77f302ae31c6..0000000000000000000000000000000000000000
--- a/spaces/tomandandy/MusicGen3/CONTRIBUTING.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Contributing to Audiocraft
-
-We want to make contributing to this project as easy and transparent as
-possible.
-
-## Pull Requests
-
-Audiocraft is the implementation of a research paper.
-Therefore, we do not plan on accepting many pull requests for new features.
-We certainly welcome them for bug fixes.
-
-1. Fork the repo and create your branch from `main`.
-2. If you've added code that should be tested, add tests.
-3. If you've changed APIs, update the documentation.
-4. Ensure the test suite passes.
-5. Make sure your code lints.
-6. If you haven't already, complete the Contributor License Agreement ("CLA").
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Meta's open source projects.
-
-Complete your CLA here:
-
-## Issues
-We use GitHub issues to track public bugs. Please ensure your description is
-clear and has sufficient instructions to be able to reproduce the issue.
-
-Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## License
-By contributing to Audiocraft, you agree that your contributions will be licensed
-under the LICENSE file in the root directory of this source tree.
diff --git a/spaces/tomdeng/textgenerator/app.py b/spaces/tomdeng/textgenerator/app.py
deleted file mode 100644
index 6d1d7016e5fb825924ebfceddae589370c233901..0000000000000000000000000000000000000000
--- a/spaces/tomdeng/textgenerator/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import gradio as gr
-
-api = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-
-def complete_with_gpt(text):
-    # Send the last 50 characters to the model and splice its output
-    # back onto the rest of the text
-    return text[:-50] + api(text[-50:])
-
-with gr.Blocks() as demo:
-    with gr.Row():
-        textbox = gr.Textbox(placeholder="Type here and press enter...", lines=8)
-        with gr.Column():
-            btn = gr.Button("Generate")
-
-    btn.click(complete_with_gpt, textbox, textbox)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/reppoints_moment_r50_fpn_gn-neck+head_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/reppoints_moment_r50_fpn_gn-neck+head_1x_coco.py
deleted file mode 100644
index 337f167c820979f345eef120a936195d8f5975c2..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/reppoints_moment_r50_fpn_gn-neck+head_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './reppoints_moment_r50_fpn_1x_coco.py'
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(neck=dict(norm_cfg=norm_cfg), bbox_head=dict(norm_cfg=norm_cfg))
-optimizer = dict(lr=0.01)
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/res2net/faster_rcnn_r2_101_fpn_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/res2net/faster_rcnn_r2_101_fpn_2x_coco.py
deleted file mode 100644
index 85004e02c31edeb487f765835815c6f80c18fb6f..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/res2net/faster_rcnn_r2_101_fpn_2x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://res2net101_v1d_26w_4s',
- backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26))
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/docs/compatibility.md b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/docs/compatibility.md
deleted file mode 100644
index f9dfb2e3a05fa272a17b241d81b8ace7e2debfcd..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/docs/compatibility.md
+++ /dev/null
@@ -1,90 +0,0 @@
-# Compatibility with MMDetection 1.x
-
-MMDetection 2.0 goes through a big refactoring and addresses many legacy issues. It is not compatible with the 1.x version, i.e., running inference with the same model weights in these two versions will produce different results. Thus, MMDetection 2.0 re-benchmarks all the models and provides their links and logs in the model zoo.
-
-The major differences are in four folds: coordinate system, codebase conventions, training hyperparameters, and modular design.
-
-## Coordinate System
-
-The new coordinate system is consistent with [Detectron2](https://github.com/facebookresearch/detectron2/) and treats the center of the most left-top pixel as (0, 0) rather than the left-top corner of that pixel.
-Accordingly, the system interprets the coordinates in COCO bounding box and segmentation annotations as coordinates in range `[0, width]` or `[0, height]`.
-This modification affects all the computation related to the bbox and pixel selection,
-which is more natural and accurate.
-
-- The height and width of a box with corners (x1, y1) and (x2, y2) in the new coordinate system is computed as `width = x2 - x1` and `height = y2 - y1`.
-  In MMDetection 1.x and previous versions, a "+ 1" was added to both the height and width (see the short sketch at the end of this section).
-  This modification is in three folds:
-
-  1. Box transformation and encoding/decoding in regression.
-  2. IoU calculation. This affects the matching process between ground truth and bounding box and the NMS process. The effect on compatibility is very negligible, though.
-  3. The corners of bounding boxes are in float type and no longer quantized. This should provide more accurate bounding box results. It also means the bounding boxes and RoIs are no longer required to have a minimum size of 1, whose effect is small, though.
-
-- The anchors are center-aligned to feature grid points and in float type.
- In MMDetection 1.x and previous version, the anchors are in `int` type and not center-aligned.
- This affects the anchor generation in RPN and all the anchor-based methods.
-
-- ROIAlign is better aligned with the image coordinate system. The new implementation is adopted from [Detectron2](https://github.com/facebookresearch/detectron2/tree/master/detectron2/layers/csrc/ROIAlign).
-  The RoIs are shifted by half a pixel by default when they are used to crop RoI features, compared to MMDetection 1.x.
- The old behavior is still available by setting `aligned=False` instead of `aligned=True`.
-
-- Mask cropping and pasting are more accurate.
-
- 1. We use the new RoIAlign to crop mask targets. In MMDetection 1.x, the bounding box is quantized before it is used to crop mask target, and the crop process is implemented by numpy. In new implementation, the bounding box for crop is not quantized and sent to RoIAlign. This implementation accelerates the training speed by a large margin (~0.1s per iter, ~2 hour when training Mask R50 for 1x schedule) and should be more accurate.
-
- 2. In MMDetection 2.0, the "`paste_mask()`" function is different and should be more accurate than those in previous versions. This change follows the modification in [Detectron2](https://github.com/facebookresearch/detectron2/blob/master/detectron2/structures/masks.py) and can improve mask AP on COCO by ~0.5% absolute.
-
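-For illustration only, here is a small sketch (not code from MMDetection itself) contrasting the old and new width/height conventions, using made-up corner coordinates:
-
-```python
-# Box corners (x1, y1) = (10, 20) and (x2, y2) = (30, 50); the values are arbitrary.
-x1, y1, x2, y2 = 10.0, 20.0, 30.0, 50.0
-
-# MMDetection 1.x and earlier: the "+ 1" convention
-w_old, h_old = x2 - x1 + 1, y2 - y1 + 1   # 21.0, 31.0
-
-# MMDetection 2.0: consistent with Detectron2
-w_new, h_new = x2 - x1, y2 - y1           # 20.0, 30.0
-```
-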
-## Codebase Conventions
-
-- MMDetection 2.0 changes the order of class labels to reduce unused parameters in regression and mask branch more naturally (without +1 and -1).
-  This affects all the classification layers of the model, which now have a different ordering of class labels (see the sketch after this list). The final layers of the regression branch and mask head no longer keep K+1 channels for K categories, and their class orders are consistent with the classification branch.
-
- - In MMDetection 2.0, label "K" means background, and labels [0, K-1] correspond to the K = num_categories object categories.
-
- - In MMDetection 1.x and previous version, label "0" means background, and labels [1, K] correspond to the K categories.
-
- - **Note**: The class order of softmax RPN is still the same as that in 1.x in versions<=2.4.0 while sigmoid RPN is not affected. The class orders in all heads are unified since MMDetection v2.5.0.
-
-- Low quality matching in R-CNN is not used. In MMDetection 1.x and previous versions, the `max_iou_assigner` matches low-quality boxes for each ground-truth box in both RPN and R-CNN training. We observed that this sometimes assigns a suboptimal GT box to some bounding boxes,
-  so MMDetection 2.0 does not allow low-quality matching by default in R-CNN training. This may slightly improve the box AP (~0.1% absolute).
-
-- Separate scale factors for width and height. In MMDetection 1.x and previous versions, the scale factor is a single float in `keep_ratio=True` mode. This is slightly inaccurate because the scale factors for width and height differ slightly. MMDetection 2.0 adopts separate scale factors for width and height, which improves AP by ~0.1% absolute.
-
-- Config name conventions are changed. MMDetection V2.0 adopts the following new naming convention to maintain the gradually growing model zoo:
-
- ```shell
- [model]_(model setting)_[backbone]_[neck]_(norm setting)_(misc)_(gpu x batch)_[schedule]_[dataset].py,
- ```
-
-  where (`misc`) includes DCN, GCBlock, etc. More details are given in the [documentation for config](config.md); a few example names are shown below.
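-
-  For illustration, a few names that follow this pattern (listed here only as examples of the scheme, not as an exhaustive or authoritative list):
-
-  ```shell
-  faster_rcnn_r50_fpn_1x_coco.py
-  mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py
-  cascade_rcnn_r101_fpn_20e_coco.py
-  ```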
-
-- MMDetection V2.0 uses new ResNet Caffe backbones to reduce warnings when loading pre-trained models. Most of the new backbones' weights are the same as the former ones but do not have `conv.bias`; they also use a different `img_norm_cfg`. Thus, the new backbones will not cause warnings about unexpected keys.
-
-## Training Hyperparameters
-
-The changes in training hyperparameters do not affect
-model-level compatibility but slightly improve the performance. The major ones are listed below; a config sketch follows the list:
-
-- The number of proposals kept after NMS is changed from 2000 to 1000 by setting `nms_post=1000` and `max_num=1000`.
- This slightly improves both mask AP and bbox AP by ~0.2% absolute.
-
-- The default box regression losses for Mask R-CNN, Faster R-CNN and RetinaNet are changed from smooth L1 Loss to L1 loss. This leads to an overall improvement in box AP (~0.6% absolute). However, using L1-loss for other methods such as Cascade R-CNN and HTC does not improve the performance, so we keep the original settings for these methods.
-
-- The sampling ratio of the RoIAlign layer is set to 0 for simplicity. This leads to a slight improvement in mask AP (~0.2% absolute).
-
-- The default setting does not use gradient clipping anymore during training, for faster training speed. This does not degrade the performance of most models. For some models such as RepPoints, we keep gradient clipping to stabilize the training process and obtain better performance.
-
-- The default warmup ratio is changed from 1/3 to 0.001 for a smoother warmup, since gradient clipping is usually not used. The effect was found to be negligible during our re-benchmarking, though.
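-
-A sketch of how these defaults appear as MMDetection 2.0 config fragments (surrounding fields are omitted and the exact keys may vary slightly between versions):
-
-```python
-# RPN proposals kept after NMS: 2000 -> 1000
-train_cfg = dict(
-    rpn_proposal=dict(nms_pre=2000, nms_post=1000, max_num=1000, nms_thr=0.7))
-
-# Box regression loss: smooth L1 -> L1
-loss_bbox = dict(type='L1Loss', loss_weight=1.0)
-
-# RoIAlign sampling ratio set to 0
-roi_layer = dict(type='RoIAlign', output_size=7, sampling_ratio=0)
-
-# No gradient clipping, smaller warmup ratio
-optimizer_config = dict(grad_clip=None)
-lr_config = dict(policy='step', warmup='linear', warmup_iters=500,
-                 warmup_ratio=0.001, step=[8, 11])
-```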
-
-## Upgrade Models from 1.x to 2.0
-
-To convert models trained with MMDetection V1.x to MMDetection V2.0, users can use the script `tools/model_converters/upgrade_model_version.py`.
-The converted models can be run in MMDetection V2.0 with slightly lower performance (less than 1% AP absolute).
-Details can be found in `configs/legacy`.
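-
-A typical invocation looks roughly like the following (the checkpoint names are placeholders and the exact arguments are an assumption here; check the script's `--help` for the authoritative interface):
-
-```shell
-python tools/model_converters/upgrade_model_version.py old_v1x_checkpoint.pth upgraded_v2_checkpoint.pth --num-classes 81
-```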
-
-## pycocotools compatibility
-
-`mmpycocotools` is OpenMMLab's fork of the official `pycocotools`, which works for both MMDetection and Detectron2.
-Before [PR 4939](https://github.com/open-mmlab/mmdetection/pull/4939), since `pycocotools` and `mmpycocotools` have the same package name, if users had already installed `pycocotools` (for example, by installing Detectron2 first in the same environment), the setup of MMDetection would skip installing `mmpycocotools`, and MMDetection would then fail due to the missing `mmpycocotools`.
-If MMDetection was installed before Detectron2, they could work in the same environment.
-[PR 4939](https://github.com/open-mmlab/mmdetection/pull/4939) deprecates `mmpycocotools` in favor of the official `pycocotools`.
-After [PR 4939](https://github.com/open-mmlab/mmdetection/pull/4939), users may install MMDetection and Detectron2 in the same environment regardless of the installation order; one possible cleanup for existing environments is sketched below.
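-
-For an environment that still has the old fork installed, one possible way to migrate (a suggestion, not an official requirement) is:
-
-```shell
-pip uninstall -y mmpycocotools
-pip install pycocotools
-```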
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/region_assigner.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/region_assigner.py
deleted file mode 100644
index 2e8464b97c8d8f44488d7bb781ca2e733a258e55..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/region_assigner.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import torch
-
-from mmdet.core import anchor_inside_flags
-from ..builder import BBOX_ASSIGNERS
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-
-def calc_region(bbox, ratio, stride, featmap_size=None):
- """Calculate region of the box defined by the ratio, the ratio is from the
- center of the box to every edge."""
- # project bbox on the feature
- f_bbox = bbox / stride
- x1 = torch.round((1 - ratio) * f_bbox[0] + ratio * f_bbox[2])
- y1 = torch.round((1 - ratio) * f_bbox[1] + ratio * f_bbox[3])
- x2 = torch.round(ratio * f_bbox[0] + (1 - ratio) * f_bbox[2])
- y2 = torch.round(ratio * f_bbox[1] + (1 - ratio) * f_bbox[3])
- if featmap_size is not None:
- x1 = x1.clamp(min=0, max=featmap_size[1])
- y1 = y1.clamp(min=0, max=featmap_size[0])
- x2 = x2.clamp(min=0, max=featmap_size[1])
- y2 = y2.clamp(min=0, max=featmap_size[0])
- return (x1, y1, x2, y2)
-
-
-def anchor_ctr_inside_region_flags(anchors, stride, region):
- """Get the flag indicate whether anchor centers are inside regions."""
- x1, y1, x2, y2 = region
- f_anchors = anchors / stride
- x = (f_anchors[:, 0] + f_anchors[:, 2]) * 0.5
- y = (f_anchors[:, 1] + f_anchors[:, 3]) * 0.5
- flags = (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2)
- return flags
-
-
-@BBOX_ASSIGNERS.register_module()
-class RegionAssigner(BaseAssigner):
- """Assign a corresponding gt bbox or background to each bbox.
-
- Each proposals will be assigned with `-1`, `0`, or a positive integer
- indicating the ground truth index.
-
- - -1: don't care
- - 0: negative sample, no assigned gt
- - positive integer: positive sample, index (1-based) of assigned gt
-
- Args:
- center_ratio: ratio of the region in the center of the bbox to
- define positive sample.
- ignore_ratio: ratio of the region to define ignore samples.
- """
-
- def __init__(self, center_ratio=0.2, ignore_ratio=0.5):
- self.center_ratio = center_ratio
- self.ignore_ratio = ignore_ratio
-
- def assign(self,
- mlvl_anchors,
- mlvl_valid_flags,
- gt_bboxes,
- img_meta,
- featmap_sizes,
- anchor_scale,
- anchor_strides,
- gt_bboxes_ignore=None,
- gt_labels=None,
- allowed_border=0):
- """Assign gt to anchors.
-
- This method assign a gt bbox to every bbox (proposal/anchor), each bbox
- will be assigned with -1, 0, or a positive number. -1 means don't care,
- 0 means negative sample, positive number is the index (1-based) of
- assigned gt.
- The assignment is done in following steps, the order matters.
-
- 1. Assign every anchor to 0 (negative)
- For each gt_bboxes:
- 2. Compute ignore flags based on ignore_region then
- assign -1 to anchors w.r.t. ignore flags
- 3. Compute pos flags based on center_region then
- assign gt_bboxes to anchors w.r.t. pos flags
- 4. Compute ignore flags based on adjacent anchor lvl then
- assign -1 to anchors w.r.t. ignore flags
- 5. Assign anchor outside of image to -1
-
- Args:
- mlvl_anchors (list[Tensor]): Multi level anchors.
- mlvl_valid_flags (list[Tensor]): Multi level valid flags.
-            gt_bboxes (Tensor): Ground truth bboxes of the image, shape (k, 4).
-            img_meta (dict): Meta info of the image.
-            featmap_sizes (list[Tensor]): Feature map size of each level.
-            anchor_scale (int): Scale of the anchor.
-            anchor_strides (list[int]): Stride of the anchor.
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
- labelled as `ignored`, e.g., crowd boxes in COCO.
- gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ).
- allowed_border (int, optional): The border to allow the valid
- anchor. Defaults to 0.
-
- Returns:
- :obj:`AssignResult`: The assign result.
- """
- if gt_bboxes_ignore is not None:
- raise NotImplementedError
-
- num_gts = gt_bboxes.shape[0]
- num_bboxes = sum(x.shape[0] for x in mlvl_anchors)
-
- if num_gts == 0 or num_bboxes == 0:
- # No ground truth or boxes, return empty assignment
- max_overlaps = gt_bboxes.new_zeros((num_bboxes, ))
- assigned_gt_inds = gt_bboxes.new_zeros((num_bboxes, ),
- dtype=torch.long)
- if gt_labels is None:
- assigned_labels = None
- else:
- assigned_labels = gt_bboxes.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
- return AssignResult(
- num_gts,
- assigned_gt_inds,
- max_overlaps,
- labels=assigned_labels)
-
- num_lvls = len(mlvl_anchors)
- r1 = (1 - self.center_ratio) / 2
- r2 = (1 - self.ignore_ratio) / 2
-
- scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) *
- (gt_bboxes[:, 3] - gt_bboxes[:, 1]))
- min_anchor_size = scale.new_full(
- (1, ), float(anchor_scale * anchor_strides[0]))
- target_lvls = torch.floor(
- torch.log2(scale) - torch.log2(min_anchor_size) + 0.5)
- target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long()
-
- # 1. assign 0 (negative) by default
- mlvl_assigned_gt_inds = []
- mlvl_ignore_flags = []
- for lvl in range(num_lvls):
- h, w = featmap_sizes[lvl]
- assert h * w == mlvl_anchors[lvl].shape[0]
- assigned_gt_inds = gt_bboxes.new_full((h * w, ),
- 0,
- dtype=torch.long)
- ignore_flags = torch.zeros_like(assigned_gt_inds)
- mlvl_assigned_gt_inds.append(assigned_gt_inds)
- mlvl_ignore_flags.append(ignore_flags)
-
- for gt_id in range(num_gts):
- lvl = target_lvls[gt_id].item()
- featmap_size = featmap_sizes[lvl]
- stride = anchor_strides[lvl]
- anchors = mlvl_anchors[lvl]
- gt_bbox = gt_bboxes[gt_id, :4]
-
- # Compute regions
- ignore_region = calc_region(gt_bbox, r2, stride, featmap_size)
- ctr_region = calc_region(gt_bbox, r1, stride, featmap_size)
-
- # 2. Assign -1 to ignore flags
- ignore_flags = anchor_ctr_inside_region_flags(
- anchors, stride, ignore_region)
- mlvl_assigned_gt_inds[lvl][ignore_flags] = -1
-
- # 3. Assign gt_bboxes to pos flags
- pos_flags = anchor_ctr_inside_region_flags(anchors, stride,
- ctr_region)
- mlvl_assigned_gt_inds[lvl][pos_flags] = gt_id + 1
-
- # 4. Assign -1 to ignore adjacent lvl
- if lvl > 0:
- d_lvl = lvl - 1
- d_anchors = mlvl_anchors[d_lvl]
- d_featmap_size = featmap_sizes[d_lvl]
- d_stride = anchor_strides[d_lvl]
- d_ignore_region = calc_region(gt_bbox, r2, d_stride,
- d_featmap_size)
- ignore_flags = anchor_ctr_inside_region_flags(
- d_anchors, d_stride, d_ignore_region)
- mlvl_ignore_flags[d_lvl][ignore_flags] = 1
- if lvl < num_lvls - 1:
- u_lvl = lvl + 1
- u_anchors = mlvl_anchors[u_lvl]
- u_featmap_size = featmap_sizes[u_lvl]
- u_stride = anchor_strides[u_lvl]
- u_ignore_region = calc_region(gt_bbox, r2, u_stride,
- u_featmap_size)
- ignore_flags = anchor_ctr_inside_region_flags(
- u_anchors, u_stride, u_ignore_region)
- mlvl_ignore_flags[u_lvl][ignore_flags] = 1
-
- # 4. (cont.) Assign -1 to ignore adjacent lvl
- for lvl in range(num_lvls):
- ignore_flags = mlvl_ignore_flags[lvl]
- mlvl_assigned_gt_inds[lvl][ignore_flags] = -1
-
- # 5. Assign -1 to anchor outside of image
- flat_assigned_gt_inds = torch.cat(mlvl_assigned_gt_inds)
- flat_anchors = torch.cat(mlvl_anchors)
- flat_valid_flags = torch.cat(mlvl_valid_flags)
- assert (flat_assigned_gt_inds.shape[0] == flat_anchors.shape[0] ==
- flat_valid_flags.shape[0])
- inside_flags = anchor_inside_flags(flat_anchors, flat_valid_flags,
- img_meta['img_shape'],
- allowed_border)
- outside_flags = ~inside_flags
- flat_assigned_gt_inds[outside_flags] = -1
-
- if gt_labels is not None:
- assigned_labels = torch.zeros_like(flat_assigned_gt_inds)
-            pos_flags = flat_assigned_gt_inds > 0
- assigned_labels[pos_flags] = gt_labels[
- flat_assigned_gt_inds[pos_flags] - 1]
- else:
- assigned_labels = None
-
- return AssignResult(
- num_gts, flat_assigned_gt_inds, None, labels=assigned_labels)
diff --git a/spaces/tomofi/trocr-captcha/app.py b/spaces/tomofi/trocr-captcha/app.py
deleted file mode 100644
index 00e7b486d6ebce94b1912fe8ef39edd5a5510d57..0000000000000000000000000000000000000000
--- a/spaces/tomofi/trocr-captcha/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import gradio as gr
-from transformers import TrOCRProcessor, VisionEncoderDecoderModel
-import requests
-from PIL import Image
-
-processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-printed")
-model = VisionEncoderDecoderModel.from_pretrained("tomofi/trocr-captcha")
-
-# load image examples
-urls = [
- 'https://storage.googleapis.com/trocr-captcha.appspot.com/captcha_images_v2/nfcb5.png',
- 'https://storage.googleapis.com/trocr-captcha.appspot.com/captcha_images_v2/p57fn.png',
- 'https://storage.googleapis.com/trocr-captcha.appspot.com/captcha_images_v2/w2yp7.png',
- 'https://storage.googleapis.com/trocr-captcha.appspot.com/captcha_images_v2/pme86.png',
- 'https://storage.googleapis.com/trocr-captcha.appspot.com/captcha_images_v2/w4nfx.png',
- 'https://storage.googleapis.com/trocr-captcha.appspot.com/captcha_images_v2/nf8b8.png'
-]
-for idx, url in enumerate(urls):
- image = Image.open(requests.get(url, stream=True).raw)
- image.save(f"image_{idx}.png")
-
-def process_image(image):
- # prepare image
- pixel_values = processor(image, return_tensors="pt").pixel_values
-
- # generate (no beam search)
- generated_ids = model.generate(pixel_values)
-
- # decode
- generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
-
- return generated_text
-
-title = "TrOCR for Captcha"
-description = "Demo for Microsoft's TrOCR, an encoder-decoder model consisting of an image Transformer encoder and a text Transformer decoder for state-of-the-art optical character recognition (OCR) on single-text line images. This particular checkpoint is fine-tuned for captcha recognition. To use it, simply upload a (single-text line) captcha image or use one of the example images below and click 'submit'. Results will show up in a few seconds."
-article = "TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models | Github Repo"
-examples =[["image_0.png"], ["image_1.png"], ["image_2.png"], ["image_3.png"], ["image_4.png"], ["image_5.png"]]
-
-#css = """.output_image, .input_image {height: 600px !important}"""
-
-iface = gr.Interface(fn=process_image,
- inputs=gr.inputs.Image(type="pil"),
- outputs=gr.outputs.Textbox(),
- title=title,
- description=description,
- article=article,
- examples=examples)
-iface.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/trysem/bukGPT/app.py b/spaces/trysem/bukGPT/app.py
deleted file mode 100644
index 5d1c9e3b54d99915be7a6cf769cfe45cb95b1c7d..0000000000000000000000000000000000000000
--- a/spaces/trysem/bukGPT/app.py
+++ /dev/null
@@ -1,187 +0,0 @@
-import urllib.request
-import fitz
-import re
-import numpy as np
-import tensorflow_hub as hub
-import openai
-import gradio as gr
-import os
-from sklearn.neighbors import NearestNeighbors
-
-
-def download_pdf(url, output_path):
- urllib.request.urlretrieve(url, output_path)
-
-
-def preprocess(text):
- text = text.replace('\n', ' ')
- text = re.sub('\s+', ' ', text)
- return text
-
-
-def pdf_to_text(path, start_page=1, end_page=None):
- doc = fitz.open(path)
- total_pages = doc.page_count
-
- if end_page is None:
- end_page = total_pages
-
- text_list = []
-
- for i in range(start_page-1, end_page):
- text = doc.load_page(i).get_text("text")
- text = preprocess(text)
- text_list.append(text)
-
- doc.close()
- return text_list
-
-
-def text_to_chunks(texts, word_length=150, start_page=1):
- text_toks = [t.split(' ') for t in texts]
- page_nums = []
- chunks = []
-
- for idx, words in enumerate(text_toks):
- for i in range(0, len(words), word_length):
- chunk = words[i:i+word_length]
- if (i+word_length) > len(words) and (len(chunk) < word_length) and (
- len(text_toks) != (idx+1)):
- text_toks[idx+1] = chunk + text_toks[idx+1]
- continue
- chunk = ' '.join(chunk).strip()
- chunk = f'[{idx+start_page}]' + ' ' + '"' + chunk + '"'
- chunks.append(chunk)
- return chunks
-
-
-class SemanticSearch:
-
- def __init__(self):
- self.use = hub.load('https://tfhub.dev/google/universal-sentence-encoder/4')
- self.fitted = False
-
-
- def fit(self, data, batch=1000, n_neighbors=5):
- self.data = data
- self.embeddings = self.get_text_embedding(data, batch=batch)
- n_neighbors = min(n_neighbors, len(self.embeddings))
- self.nn = NearestNeighbors(n_neighbors=n_neighbors)
- self.nn.fit(self.embeddings)
- self.fitted = True
-
-
- def __call__(self, text, return_data=True):
- inp_emb = self.use([text])
- neighbors = self.nn.kneighbors(inp_emb, return_distance=False)[0]
-
- if return_data:
- return [self.data[i] for i in neighbors]
- else:
- return neighbors
-
-
- def get_text_embedding(self, texts, batch=1000):
- embeddings = []
- for i in range(0, len(texts), batch):
- text_batch = texts[i:(i+batch)]
- emb_batch = self.use(text_batch)
- embeddings.append(emb_batch)
- embeddings = np.vstack(embeddings)
- return embeddings
-
-
-openai.api_key = os.environ.get("OPENAI_API_KEY", "")  # read the key from the environment instead of hard-coding it
-recommender = SemanticSearch()
-
-def load_recommender(path, start_page=1):
- global recommender
- texts = pdf_to_text(path, start_page=start_page)
- chunks = text_to_chunks(texts, start_page=start_page)
- recommender.fit(chunks)
- return 'Corpus Loaded.'
-
-
-def generate_text(prompt, engine="text-davinci-003"):
- completions = openai.Completion.create(
- engine=engine,
- prompt=prompt,
- max_tokens=512,
- n=1,
- stop=None,
- temperature=0.7,
- )
- message = completions.choices[0].text
- return message
-
-
-def generate_answer(question):
- topn_chunks = recommender(question)
- prompt = ""
- prompt += 'search results:\n\n'
- for c in topn_chunks:
- prompt += c + '\n\n'
-
- prompt += "Instructions: Compose a comprehensive reply to the query using the search results given. "\
- "Cite each reference using [number] notation (every result has this number at the beginning). "\
- "Citation should be done at the end of each sentence. If the search results mention multiple subjects "\
- "with the same name, create separate answers for each. Only include information found in the results and "\
- "don't add any additional information. Make sure the answer is correct and don't output false content. "\
- "If the text does not relate to the query, simply state 'Found Nothing'. Ignore outlier "\
- "search results which has nothing to do with the question. Only answer what is asked. The "\
- "answer should be short and concise.\n\nQuery: {question}\nAnswer: "
-
- prompt += f"Query: {question}\nAnswer:"
- answer = generate_text(prompt)
- return answer
-
-
-def question_answer(url, file, question):
-    if url.strip() == '' and file is None:
-        return '[ERROR]: Both URL and PDF are empty. Provide at least one.'
-
-    if url.strip() != '' and file is not None:
-        return '[ERROR]: Both URL and PDF are provided. Please provide only one (either URL or PDF).'
-
- if url.strip() != '':
- glob_url = url
- download_pdf(glob_url, 'corpus.pdf')
- load_recommender('corpus.pdf')
-
- else:
- old_file_name = file.name
- file_name = file.name
- file_name = file_name[:-12] + file_name[-4:]
- os.rename(old_file_name, file_name)
- load_recommender(file_name)
-
- if question.strip() == '':
- return '[ERROR]: Question field is empty'
-
- return generate_answer(question)
-
-
-title = 'BookGPT'
-description = ""
-
-with gr.Blocks(css="footer {visibility: hidden}") as demo:
-
-    gr.Markdown(f'{title}')
- gr.Markdown(description)
-
- with gr.Row():
-
- with gr.Group():
- url = gr.Textbox(label='URL')
- gr.Markdown("or
")
- file = gr.File(label='PDF', file_types=['.pdf'])
- question = gr.Textbox(label='question')
- btn = gr.Button(value='Submit')
- btn.style(full_width=True)
-
- with gr.Group():
- answer = gr.Textbox(label='answer')
-
- btn.click(question_answer, inputs=[url, file, question], outputs=[answer])
-
-demo.launch()
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Adobe Media Encoder Cs6 Amtlib.dll.rar.md b/spaces/usbethFlerru/sovits-modelsV2/example/Adobe Media Encoder Cs6 Amtlib.dll.rar.md
deleted file mode 100644
index 59e58098fe0e64428dfe03cc78c16f87e9f53687..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Adobe Media Encoder Cs6 Amtlib.dll.rar.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-dynamicscrack.com. adobe media encoder cc 2019 crack amtlib dll injector - trueffile. adobe media encoder cc 2019 dynamicscrack.com/crack-adobe-media-encoder-cc-2019-1531549326.html. adobe media encoder cc 2019 version was released on 19 october 2019. adobe media encoder cc 2019 which found the new adobe. amtlib.dll adobe media encoder cs6. adobe media encoder cc 2019 2 adobe photoshop cc
-how to activate. amtlib.dll adobe media encoder cs6. adobe media encoder cc 2019. the adobe flash pro cc 2019 program is a must-have tool for web designers and animators. dynaset adobe captivate dynamic. adobe captivate dynamic. download the adobe photoshop cc 2019 version released on 19 october 2019 and can be verified by clicking check for update online.
-adobe media encoder cs6 amtlib.dll.rar
Download File ✒ ✒ ✒ https://urlcod.com/2uyWXz
-adobe media encoder cc 2019 crack amtlib dll injector trueffile. if you are in need of the full offline version of adobe media encoder cc 2019, remember this is the one that is not ready for. adobe media encoder cc 2019 crack amtlib dll injector. adobe media encoder cc 2019 crack amtlib dll injector the adobe flash cc 2019 program is a must-have tool for web designers and animators. dynamicscrack.com/crack-adobe-media-encoder-cc-2019-1531549326.html. 2x fast 50mbps download. adobe flash pro cc 2019 crack amtlib dll injector adobe media encoder cc 2019 crack amtlib dll injector.
-and the adobe media encoder cc 2019 application is not only required for web design and animation but also for desktop video editing. amtlib.dll adobe media encoder cs6. adobe adobe media encoder cc 2019. adobe captivate dynamic. adobe media encoder cc 2019. if you are in need of the full offline version of adobe media encoder cc 2019, remember this is the one that is not ready for.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Descargar Tekken 6 Para Ps2 Iso Torrent.md b/spaces/usbethFlerru/sovits-modelsV2/example/Descargar Tekken 6 Para Ps2 Iso Torrent.md
deleted file mode 100644
index 6c1ee5c3c31d75ee80b01dfd6b3e95d85a048e10..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Descargar Tekken 6 Para Ps2 Iso Torrent.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-Descargar Tekken 6 Para Ps2 Iso Torrent: Cómo disfrutar del juego de lucha más popular en tu consola
-
-Tekken 6 es un juego de lucha desarrollado y publicado por Bandai Namco Games. Es la sexta entrega principal y la séptima en la franquicia Tekken. Se lanzó en las salas recreativas el 26 de noviembre de 2007, como el primer juego que se ejecuta en la placa arcade basada en PlayStation 3 llamada System 357. Un año después, el juego recibió una actualización, subtitulada Bloodline Rebellion.
-
-Tekken 6 cuenta con el mayor elenco de personajes de la serie, con más de 40 luchadores diferentes para elegir. Además, ofrece una mayor personalización de los personajes, un modo historia más profundo y un modo multijugador en lÃnea. El juego también presenta un nuevo sistema de fÃsica que permite una interacción más realista con el entorno y los objetos.
-Descargar Tekken 6 Para Ps2 Iso Torrent
Download ⚹ https://urlcod.com/2uyVgm
-
-Si quieres disfrutar de Tekken 6 en tu consola PlayStation 2, necesitas descargar un archivo ISO del juego y un programa que te permita convertirlo en un formato compatible con tu PS2. Una opción es usar un programa llamado PS2 ISO Maker, que puedes descargar gratis desde este enlace: https://ps2isomaker.com/.
-
-Para descargar el archivo ISO de Tekken 6, puedes usar un sitio web que ofrezca torrents de juegos, como este: https://gametorrents.com/tekken-6-iso. Recuerda que necesitas tener instalado un programa de descarga de torrents, como uTorrent o BitTorrent, para poder bajar el archivo.
-
-Una vez que tengas el archivo ISO y el programa PS2 ISO Maker, sigue estos pasos para convertirlo y grabarlo en un DVD:
-
-
-- Abre el programa PS2 ISO Maker y selecciona la opción "Create ISO from CD/DVD".
-- Inserta el DVD en blanco en tu unidad lectora y haz clic en "Next".
-- Selecciona la unidad donde está el DVD y haz clic en "Next".
-- Escribe el nombre que quieras para el archivo ISO y elige la carpeta donde quieres guardarlo. Haz clic en "Next".
-- Espera a que el programa cree el archivo ISO a partir del DVD. Puede tardar unos minutos.
-- Cuando termine, haz clic en "Finish". Ya tienes tu archivo ISO listo para usar.
-- Ahora, abre el programa PS2 ISO Maker de nuevo y selecciona la opción "Patch ISO".
-- Busca el archivo ISO de Tekken 6 que descargaste y haz clic en "Open".
-- Selecciona la opción "Apply ESR Patch" y haz clic en "Patch". Esto hará que el juego sea compatible con tu PS2.
-- Cuando termine, haz clic en "OK". Ya tienes tu archivo ISO parcheado listo para grabar.
-- Abre el programa PS2 ISO Maker una vez más y selecciona la opción "Burn ISO to CD/DVD".
-- Inserta otro DVD en blanco en tu unidad lectora y haz clic en "Next".
-- Selecciona la unidad donde está el DVD y haz clic en "Next".
-- Busca el archivo ISO parcheado de Tekken 6 y haz clic en "Open".
-- Haz clic en "Burn" para empezar a grabar el juego en el DVD.
-- Espera a que termine el proceso de grabación. Puede tardar unos minutos.
-- Cuando termine, haz clic en "Finish". Ya tienes tu DVD con Tekken 6 listo para jugar.
-
-
-Para jugar a Tekken 6 en tu PS2, necesitas tener instalado un modchip o un disco de arranque como Free McBoot o
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/tests/test_engine.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/tests/test_engine.py
deleted file mode 100644
index b0110442dc7e4488765a695d3febb649db08e54f..0000000000000000000000000000000000000000
--- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/tests/test_engine.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Ultralytics YOLO 🚀, AGPL-3.0 license
-
-from pathlib import Path
-
-from ultralytics import YOLO
-from ultralytics.yolo.cfg import get_cfg
-from ultralytics.yolo.engine.exporter import Exporter
-from ultralytics.yolo.utils import DEFAULT_CFG, ROOT, SETTINGS
-from ultralytics.yolo.v8 import classify, detect, segment
-
-CFG_DET = 'yolov8n.yaml'
-CFG_SEG = 'yolov8n-seg.yaml'
-CFG_CLS = 'squeezenet1_0'
-CFG = get_cfg(DEFAULT_CFG)
-MODEL = Path(SETTINGS['weights_dir']) / 'yolov8n'
-SOURCE = ROOT / 'assets'
-
-
-def test_func(model=None):
- print('callback test passed')
-
-
-def test_export():
- exporter = Exporter()
- exporter.add_callback('on_export_start', test_func)
- assert test_func in exporter.callbacks['on_export_start'], 'callback test failed'
- f = exporter(model=YOLO(CFG_DET).model)
- YOLO(f)(SOURCE) # exported model inference
-
-
-def test_detect():
- overrides = {'data': 'coco8.yaml', 'model': CFG_DET, 'imgsz': 32, 'epochs': 1, 'save': False}
- CFG.data = 'coco8.yaml'
-
- # Trainer
- trainer = detect.DetectionTrainer(overrides=overrides)
- trainer.add_callback('on_train_start', test_func)
- assert test_func in trainer.callbacks['on_train_start'], 'callback test failed'
- trainer.train()
-
- # Validator
- val = detect.DetectionValidator(args=CFG)
- val.add_callback('on_val_start', test_func)
- assert test_func in val.callbacks['on_val_start'], 'callback test failed'
- val(model=trainer.best) # validate best.pt
-
- # Predictor
- pred = detect.DetectionPredictor(overrides={'imgsz': [64, 64]})
- pred.add_callback('on_predict_start', test_func)
- assert test_func in pred.callbacks['on_predict_start'], 'callback test failed'
- result = pred(source=SOURCE, model=f'{MODEL}.pt')
- assert len(result), 'predictor test failed'
-
- overrides['resume'] = trainer.last
- trainer = detect.DetectionTrainer(overrides=overrides)
- try:
- trainer.train()
- except Exception as e:
- print(f'Expected exception caught: {e}')
- return
-
-    raise Exception('Resume test failed!')
-
-
-def test_segment():
- overrides = {'data': 'coco8-seg.yaml', 'model': CFG_SEG, 'imgsz': 32, 'epochs': 1, 'save': False}
- CFG.data = 'coco8-seg.yaml'
- CFG.v5loader = False
- # YOLO(CFG_SEG).train(**overrides) # works
-
- # trainer
- trainer = segment.SegmentationTrainer(overrides=overrides)
- trainer.add_callback('on_train_start', test_func)
- assert test_func in trainer.callbacks['on_train_start'], 'callback test failed'
- trainer.train()
-
- # Validator
- val = segment.SegmentationValidator(args=CFG)
- val.add_callback('on_val_start', test_func)
- assert test_func in val.callbacks['on_val_start'], 'callback test failed'
- val(model=trainer.best) # validate best.pt
-
- # Predictor
- pred = segment.SegmentationPredictor(overrides={'imgsz': [64, 64]})
- pred.add_callback('on_predict_start', test_func)
- assert test_func in pred.callbacks['on_predict_start'], 'callback test failed'
- result = pred(source=SOURCE, model=f'{MODEL}-seg.pt')
- assert len(result), 'predictor test failed'
-
- # Test resume
- overrides['resume'] = trainer.last
- trainer = segment.SegmentationTrainer(overrides=overrides)
- try:
- trainer.train()
- except Exception as e:
- print(f'Expected exception caught: {e}')
- return
-
-    raise Exception('Resume test failed!')
-
-
-def test_classify():
- overrides = {'data': 'imagenet10', 'model': 'yolov8n-cls.yaml', 'imgsz': 32, 'epochs': 1, 'save': False}
- CFG.data = 'imagenet10'
- CFG.imgsz = 32
- # YOLO(CFG_SEG).train(**overrides) # works
-
- # Trainer
- trainer = classify.ClassificationTrainer(overrides=overrides)
- trainer.add_callback('on_train_start', test_func)
- assert test_func in trainer.callbacks['on_train_start'], 'callback test failed'
- trainer.train()
-
- # Validator
- val = classify.ClassificationValidator(args=CFG)
- val.add_callback('on_val_start', test_func)
- assert test_func in val.callbacks['on_val_start'], 'callback test failed'
- val(model=trainer.best)
-
- # Predictor
- pred = classify.ClassificationPredictor(overrides={'imgsz': [64, 64]})
- pred.add_callback('on_predict_start', test_func)
- assert test_func in pred.callbacks['on_predict_start'], 'callback test failed'
- result = pred(source=SOURCE, model=trainer.best)
- assert len(result), 'predictor test failed'
diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/fastsam/val.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/fastsam/val.py
deleted file mode 100644
index 250bd5e41a89af4e72a7b1d4f38c4eda7b5ee649..0000000000000000000000000000000000000000
--- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/fastsam/val.py
+++ /dev/null
@@ -1,244 +0,0 @@
-# Ultralytics YOLO 🚀, AGPL-3.0 license
-
-from multiprocessing.pool import ThreadPool
-from pathlib import Path
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-from ultralytics.yolo.utils import LOGGER, NUM_THREADS, ops
-from ultralytics.yolo.utils.checks import check_requirements
-from ultralytics.yolo.utils.metrics import SegmentMetrics, box_iou, mask_iou
-from ultralytics.yolo.utils.plotting import output_to_target, plot_images
-from ultralytics.yolo.v8.detect import DetectionValidator
-
-
-class FastSAMValidator(DetectionValidator):
-
- def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None, _callbacks=None):
- """Initialize SegmentationValidator and set task to 'segment', metrics to SegmentMetrics."""
- super().__init__(dataloader, save_dir, pbar, args, _callbacks)
- self.args.task = 'segment'
- self.metrics = SegmentMetrics(save_dir=self.save_dir, on_plot=self.on_plot)
-
- def preprocess(self, batch):
- """Preprocesses batch by converting masks to float and sending to device."""
- batch = super().preprocess(batch)
- batch['masks'] = batch['masks'].to(self.device).float()
- return batch
-
- def init_metrics(self, model):
- """Initialize metrics and select mask processing function based on save_json flag."""
- super().init_metrics(model)
- self.plot_masks = []
- if self.args.save_json:
- check_requirements('pycocotools>=2.0.6')
- self.process = ops.process_mask_upsample # more accurate
- else:
- self.process = ops.process_mask # faster
-
- def get_desc(self):
- """Return a formatted description of evaluation metrics."""
- return ('%22s' + '%11s' * 10) % ('Class', 'Images', 'Instances', 'Box(P', 'R', 'mAP50', 'mAP50-95)', 'Mask(P',
- 'R', 'mAP50', 'mAP50-95)')
-
- def postprocess(self, preds):
- """Postprocesses YOLO predictions and returns output detections with proto."""
- p = ops.non_max_suppression(preds[0],
- self.args.conf,
- self.args.iou,
- labels=self.lb,
- multi_label=True,
- agnostic=self.args.single_cls,
- max_det=self.args.max_det,
- nc=self.nc)
- proto = preds[1][-1] if len(preds[1]) == 3 else preds[1] # second output is len 3 if pt, but only 1 if exported
- return p, proto
-
- def update_metrics(self, preds, batch):
- """Metrics."""
- for si, (pred, proto) in enumerate(zip(preds[0], preds[1])):
- idx = batch['batch_idx'] == si
- cls = batch['cls'][idx]
- bbox = batch['bboxes'][idx]
- nl, npr = cls.shape[0], pred.shape[0] # number of labels, predictions
- shape = batch['ori_shape'][si]
- correct_masks = torch.zeros(npr, self.niou, dtype=torch.bool, device=self.device) # init
- correct_bboxes = torch.zeros(npr, self.niou, dtype=torch.bool, device=self.device) # init
- self.seen += 1
-
- if npr == 0:
- if nl:
- self.stats.append((correct_bboxes, correct_masks, *torch.zeros(
- (2, 0), device=self.device), cls.squeeze(-1)))
- if self.args.plots:
- self.confusion_matrix.process_batch(detections=None, labels=cls.squeeze(-1))
- continue
-
- # Masks
- midx = [si] if self.args.overlap_mask else idx
- gt_masks = batch['masks'][midx]
- pred_masks = self.process(proto, pred[:, 6:], pred[:, :4], shape=batch['img'][si].shape[1:])
-
- # Predictions
- if self.args.single_cls:
- pred[:, 5] = 0
- predn = pred.clone()
- ops.scale_boxes(batch['img'][si].shape[1:], predn[:, :4], shape,
- ratio_pad=batch['ratio_pad'][si]) # native-space pred
-
- # Evaluate
- if nl:
- height, width = batch['img'].shape[2:]
- tbox = ops.xywh2xyxy(bbox) * torch.tensor(
- (width, height, width, height), device=self.device) # target boxes
- ops.scale_boxes(batch['img'][si].shape[1:], tbox, shape,
- ratio_pad=batch['ratio_pad'][si]) # native-space labels
- labelsn = torch.cat((cls, tbox), 1) # native-space labels
- correct_bboxes = self._process_batch(predn, labelsn)
- # TODO: maybe remove these `self.` arguments as they already are member variable
- correct_masks = self._process_batch(predn,
- labelsn,
- pred_masks,
- gt_masks,
- overlap=self.args.overlap_mask,
- masks=True)
- if self.args.plots:
- self.confusion_matrix.process_batch(predn, labelsn)
-
- # Append correct_masks, correct_boxes, pconf, pcls, tcls
- self.stats.append((correct_bboxes, correct_masks, pred[:, 4], pred[:, 5], cls.squeeze(-1)))
-
- pred_masks = torch.as_tensor(pred_masks, dtype=torch.uint8)
- if self.args.plots and self.batch_i < 3:
- self.plot_masks.append(pred_masks[:15].cpu()) # filter top 15 to plot
-
- # Save
- if self.args.save_json:
- pred_masks = ops.scale_image(pred_masks.permute(1, 2, 0).contiguous().cpu().numpy(),
- shape,
- ratio_pad=batch['ratio_pad'][si])
- self.pred_to_json(predn, batch['im_file'][si], pred_masks)
- # if self.args.save_txt:
- # save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt')
-
- def finalize_metrics(self, *args, **kwargs):
- """Sets speed and confusion matrix for evaluation metrics."""
- self.metrics.speed = self.speed
- self.metrics.confusion_matrix = self.confusion_matrix
-
- def _process_batch(self, detections, labels, pred_masks=None, gt_masks=None, overlap=False, masks=False):
- """
- Return correct prediction matrix
- Arguments:
- detections (array[N, 6]), x1, y1, x2, y2, conf, class
- labels (array[M, 5]), class, x1, y1, x2, y2
- Returns:
- correct (array[N, 10]), for 10 IoU levels
- """
- if masks:
- if overlap:
- nl = len(labels)
- index = torch.arange(nl, device=gt_masks.device).view(nl, 1, 1) + 1
- gt_masks = gt_masks.repeat(nl, 1, 1) # shape(1,640,640) -> (n,640,640)
- gt_masks = torch.where(gt_masks == index, 1.0, 0.0)
- if gt_masks.shape[1:] != pred_masks.shape[1:]:
- gt_masks = F.interpolate(gt_masks[None], pred_masks.shape[1:], mode='bilinear', align_corners=False)[0]
- gt_masks = gt_masks.gt_(0.5)
- iou = mask_iou(gt_masks.view(gt_masks.shape[0], -1), pred_masks.view(pred_masks.shape[0], -1))
- else: # boxes
- iou = box_iou(labels[:, 1:], detections[:, :4])
-
- correct = np.zeros((detections.shape[0], self.iouv.shape[0])).astype(bool)
- correct_class = labels[:, 0:1] == detections[:, 5]
- for i in range(len(self.iouv)):
- x = torch.where((iou >= self.iouv[i]) & correct_class) # IoU > threshold and classes match
- if x[0].shape[0]:
- matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]),
- 1).cpu().numpy() # [label, detect, iou]
- if x[0].shape[0] > 1:
- matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
- # matches = matches[matches[:, 2].argsort()[::-1]]
- matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
- correct[matches[:, 1].astype(int), i] = True
- return torch.tensor(correct, dtype=torch.bool, device=detections.device)
-
- def plot_val_samples(self, batch, ni):
- """Plots validation samples with bounding box labels."""
- plot_images(batch['img'],
- batch['batch_idx'],
- batch['cls'].squeeze(-1),
- batch['bboxes'],
- batch['masks'],
- paths=batch['im_file'],
- fname=self.save_dir / f'val_batch{ni}_labels.jpg',
- names=self.names,
- on_plot=self.on_plot)
-
- def plot_predictions(self, batch, preds, ni):
- """Plots batch predictions with masks and bounding boxes."""
- plot_images(
- batch['img'],
- *output_to_target(preds[0], max_det=15), # not set to self.args.max_det due to slow plotting speed
- torch.cat(self.plot_masks, dim=0) if len(self.plot_masks) else self.plot_masks,
- paths=batch['im_file'],
- fname=self.save_dir / f'val_batch{ni}_pred.jpg',
- names=self.names,
- on_plot=self.on_plot) # pred
- self.plot_masks.clear()
-
- def pred_to_json(self, predn, filename, pred_masks):
- """Save one JSON result."""
- # Example result = {"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}
- from pycocotools.mask import encode # noqa
-
- def single_encode(x):
- """Encode predicted masks as RLE and append results to jdict."""
- rle = encode(np.asarray(x[:, :, None], order='F', dtype='uint8'))[0]
- rle['counts'] = rle['counts'].decode('utf-8')
- return rle
-
- stem = Path(filename).stem
- image_id = int(stem) if stem.isnumeric() else stem
- box = ops.xyxy2xywh(predn[:, :4]) # xywh
- box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner
- pred_masks = np.transpose(pred_masks, (2, 0, 1))
- with ThreadPool(NUM_THREADS) as pool:
- rles = pool.map(single_encode, pred_masks)
- for i, (p, b) in enumerate(zip(predn.tolist(), box.tolist())):
- self.jdict.append({
- 'image_id': image_id,
- 'category_id': self.class_map[int(p[5])],
- 'bbox': [round(x, 3) for x in b],
- 'score': round(p[4], 5),
- 'segmentation': rles[i]})
-
- def eval_json(self, stats):
- """Return COCO-style object detection evaluation metrics."""
- if self.args.save_json and self.is_coco and len(self.jdict):
- anno_json = self.data['path'] / 'annotations/instances_val2017.json' # annotations
- pred_json = self.save_dir / 'predictions.json' # predictions
- LOGGER.info(f'\nEvaluating pycocotools mAP using {pred_json} and {anno_json}...')
- try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
- check_requirements('pycocotools>=2.0.6')
- from pycocotools.coco import COCO # noqa
- from pycocotools.cocoeval import COCOeval # noqa
-
- for x in anno_json, pred_json:
- assert x.is_file(), f'{x} file not found'
- anno = COCO(str(anno_json)) # init annotations api
- pred = anno.loadRes(str(pred_json)) # init predictions api (must pass string, not Path)
- for i, eval in enumerate([COCOeval(anno, pred, 'bbox'), COCOeval(anno, pred, 'segm')]):
- if self.is_coco:
- eval.params.imgIds = [int(Path(x).stem) for x in self.dataloader.dataset.im_files] # im to eval
- eval.evaluate()
- eval.accumulate()
- eval.summarize()
- idx = i * 4 + 2
- stats[self.metrics.keys[idx + 1]], stats[
- self.metrics.keys[idx]] = eval.stats[:2] # update mAP50-95 and mAP50
- except Exception as e:
- LOGGER.warning(f'pycocotools unable to run: {e}')
- return stats
diff --git a/spaces/verkaDerkaDerk/face-mesh-workflow/README.md b/spaces/verkaDerkaDerk/face-mesh-workflow/README.md
deleted file mode 100644
index df21e0bd1e82254bad7aed24a6237b578699c366..0000000000000000000000000000000000000000
--- a/spaces/verkaDerkaDerk/face-mesh-workflow/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Face Mesh Workflow
-emoji: 🐢
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-duplicated_from: verkaDerkaDerk/face-image-to-face-obj
----
-
-Uses MediaPipe to detect a face in an image, then lets you combine its depth estimation with those from Zoe and Midas.
-The 3d viewer has Y pointing the opposite direction from Blender, so ya hafta spin it.
-
-See https://huggingface.co/spaces/shariqfarooq/ZoeDepth and https://huggingface.co/spaces/drafff/dpt-depth-estimation
-
-Caveat: I may be conflating "Intel/dpt-large" and "Midas"...
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/vishnu0001/text2mesh/shap_e/util/__init__.py b/spaces/vishnu0001/text2mesh/shap_e/util/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/models/upernet_uniformer.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/models/upernet_uniformer.py
deleted file mode 100644
index 41aa4db809dc6e2c508e98051f61807d07477903..0000000000000000000000000000000000000000
--- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/configs/_base_/models/upernet_uniformer.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# model settings
-norm_cfg = dict(type='BN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained=None,
- backbone=dict(
- type='UniFormer',
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- mlp_ratio=4.,
- qkv_bias=True,
- drop_rate=0.,
- attn_drop_rate=0.,
- drop_path_rate=0.1),
- decode_head=dict(
- type='UPerHead',
- in_channels=[64, 128, 320, 512],
- in_index=[0, 1, 2, 3],
- pool_scales=(1, 2, 3, 6),
- channels=512,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=320,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
\ No newline at end of file
diff --git a/spaces/wldmr/punct-tube-gr/README.md b/spaces/wldmr/punct-tube-gr/README.md
deleted file mode 100644
index 388a465a36f905c90d7cb5ba97d9876625d0e007..0000000000000000000000000000000000000000
--- a/spaces/wldmr/punct-tube-gr/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: rPunct Gr
-emoji: 🐨
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/wwwwwwww2/bingo/src/lib/storage.ts b/spaces/wwwwwwww2/bingo/src/lib/storage.ts
deleted file mode 100644
index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000
--- a/spaces/wwwwwwww2/bingo/src/lib/storage.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import { getMany, set, del, clear } from 'idb-keyval';
-
-export const Storage = {
-  async get(key: string | string[] | null): Promise<Record<string, any> | null> {
- if (key === null) return null;
- if (typeof key === 'string') {
- key = [key]
- }
-    const returnData: Record<string, any> = {}
- const values = await getMany(key)
- key.forEach((k, idx)=> {
- returnData[k] = values[idx]
- })
- return returnData;
- },
- async set(object: any) {
- for (let key of Object.keys(object)) {
- await set(key, object[key])
- }
- },
- async remove(key: string) {
- return del(key);
- },
- async clear() {
- return clear();
- }
-}
diff --git a/spaces/wxiaofei/vits-uma-genshin-honkai/app.py b/spaces/wxiaofei/vits-uma-genshin-honkai/app.py
deleted file mode 100644
index 92ddafdcd240434f58569b0e6964ef331a971dcf..0000000000000000000000000000000000000000
--- a/spaces/wxiaofei/vits-uma-genshin-honkai/app.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import time
-import gradio as gr
-import utils
-import commons
-from models import SynthesizerTrn
-from text import text_to_sequence
-from torch import no_grad, LongTensor
-import torch
-
-hps_ms = utils.get_hparams_from_file(r'./model/config.json')
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-net_g_ms = SynthesizerTrn(
- len(hps_ms.symbols),
- hps_ms.data.filter_length // 2 + 1,
- hps_ms.train.segment_size // hps_ms.data.hop_length,
- n_speakers=hps_ms.data.n_speakers,
- **hps_ms.model).to(device)
-_ = net_g_ms.eval()
-speakers = hps_ms.speakers
-model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None)
-
-def get_text(text, hps):
- text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm, clean_text
-
-def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale):
- start = time.perf_counter()
- if not len(text):
- return "输入文本不能为空!", None, None
- text = text.replace('\n', ' ').replace('\r', '').replace(" ", "")
- if len(text) > 500:
- return f"输入文字过长!{len(text)}>100", None, None
- if language == 0:
- text = f"[ZH]{text}[ZH]"
- elif language == 1:
- text = f"[JA]{text}[JA]"
- else:
- text = f"{text}"
- stn_tst, clean_text = get_text(text, hps_ms)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = LongTensor([stn_tst.size(0)])
- speaker_id = LongTensor([speaker_id])
- audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w,
- length_scale=length_scale)[0][0, 0].data.cpu().float().numpy()
-
- return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s"
-
-def search_speaker(search_value):
- for s in speakers:
- if search_value == s:
- return s
- for s in speakers:
- if search_value in s:
- return s
-
-def change_lang(language):
- if language == 0:
- return 0.6, 0.668, 1.2
- else:
- return 0.6, 0.668, 1.1
-
-download_audio_js = """
-() =>{{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let audio = root.querySelector("#tts-audio").querySelector("audio");
- let text = root.querySelector("#input-text").querySelector("textarea");
- if (audio == undefined)
- return;
- text = text.value;
- if (text == undefined)
- text = Math.floor(Math.random()*100000000);
- audio = audio.src;
- let oA = document.createElement("a");
- oA.download = text.substr(0, 20)+'.wav';
- oA.href = audio;
- document.body.appendChild(oA);
- oA.click();
- oA.remove();
-}}
-"""
-
-if __name__ == '__main__':
- with gr.Blocks() as app:
- gr.Markdown(
- "# VITS语音在线合成demo\n"
- "主要有赛马娘,原神中文,原神日语,崩坏3的音色"
- ''
- ''
- )
-
- with gr.Tabs():
- with gr.TabItem("vits"):
- with gr.Row():
- with gr.Column():
- input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text")
- lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"],
- type="index", value="中文")
- btn = gr.Button(value="Submit")
- with gr.Row():
- search = gr.Textbox(label="Search Speaker", lines=1)
- btn2 = gr.Button(value="Search")
- sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228])
- with gr.Row():
- ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True)
- nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True)
- ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True)
- with gr.Column():
- o1 = gr.Textbox(label="Output Message")
- o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio")
- o3 = gr.Textbox(label="Extra Info")
- download = gr.Button("Download Audio")
- btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate")
- download.click(None, [], [], _js=download_audio_js.format())
- btn2.click(search_speaker, inputs=[search], outputs=[sid])
- lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls])
- with gr.TabItem("可用人物一览"):
- gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index")
- app.queue(concurrency_count=1).launch()
\ No newline at end of file
diff --git a/spaces/wy213/213a/src/components/voice.tsx b/spaces/wy213/213a/src/components/voice.tsx
deleted file mode 100644
index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000
--- a/spaces/wy213/213a/src/components/voice.tsx
+++ /dev/null
@@ -1,52 +0,0 @@
-import React, { useEffect } from 'react'
-import { useSetAtom } from 'jotai'
-import { useBing } from '@/lib/hooks/use-bing'
-import Image from 'next/image'
-import VoiceIcon from '@/assets/images/voice.svg'
-import VoiceButton from './ui/voice'
-import { SR } from '@/lib/bots/bing/sr'
-import { voiceListenAtom } from '@/state'
-
-const sr = new SR(['发送', '清空', '退出'])
-
-const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick<ReturnType<typeof useBing>, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => {
- const setListen = useSetAtom(voiceListenAtom)
- useEffect(() => {
- if (sr.listening) return
- sr.transcript = !isSpeaking
- }, [isSpeaking])
-
- useEffect(() => {
- sr.onchange = (msg: string, command?: string) => {
- switch (command) {
- case '退出':
- sr.stop()
- break;
- case '发送':
- sendMessage(input)
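-          // note: no `break` here, so after sending we fall through to the next case and clear the input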
- case '清空':
- setInput('')
- break;
- default:
- setInput(input + msg)
- }
- }
- }, [input])
-
- const switchSR = (enable: boolean = false) => {
- setListen(enable)
- if (enable) {
- sr.start()
- } else {
- sr.stop()
- }
- }
-
- return sr.listening ? (
- switchSR(false)} />
- ) : (
- switchSR(true)} />
- )
-};
-
-export default Voice;
diff --git a/spaces/xiang2811/ChatGPT/modules/config.py b/spaces/xiang2811/ChatGPT/modules/config.py
deleted file mode 100644
index 4e816ddd6cf4499f21cbbd2aee3ae0a6eeb7c5af..0000000000000000000000000000000000000000
--- a/spaces/xiang2811/ChatGPT/modules/config.py
+++ /dev/null
@@ -1,170 +0,0 @@
-from collections import defaultdict
-from contextlib import contextmanager
-import os
-import logging
-import sys
-import commentjson as json
-
-from . import shared
-from . import presets
-
-
-__all__ = [
- "my_api_key",
- "authflag",
- "auth_list",
- "dockerflag",
- "retrieve_proxy",
- "log_level",
- "advance_docs",
- "update_doc_config",
- "multi_api_key",
- "server_name",
- "server_port",
- "share",
-]
-
-# Use a unified config file to avoid the confusion caused by too many separate files (lowest priority)
-# It also provides a config entry point for future user-defined features
-if os.path.exists("config.json"):
- with open("config.json", "r", encoding='utf-8') as f:
- config = json.load(f)
-else:
- config = {}
-
-language = config.get("language", "auto") # 在这里输入你的 API 密钥
-language = os.environ.get("LANGUAGE", language)
-
-
-if os.path.exists("api_key.txt"):
- logging.info("检测到api_key.txt文件,正在进行迁移...")
- with open("api_key.txt", "r") as f:
- config["openai_api_key"] = f.read().strip()
- os.rename("api_key.txt", "api_key(deprecated).txt")
- with open("config.json", "w", encoding='utf-8') as f:
- json.dump(config, f, indent=4)
-
-if os.path.exists("auth.json"):
- logging.info("检测到auth.json文件,正在进行迁移...")
- auth_list = []
- with open("auth.json", "r", encoding='utf-8') as f:
- auth = json.load(f)
- for _ in auth:
- if auth[_]["username"] and auth[_]["password"]:
- auth_list.append((auth[_]["username"], auth[_]["password"]))
- else:
- logging.error("请检查auth.json文件中的用户名和密码!")
- sys.exit(1)
- config["users"] = auth_list
- os.rename("auth.json", "auth(deprecated).json")
- with open("config.json", "w", encoding='utf-8') as f:
- json.dump(config, f, indent=4)
-
-## Handle Docker if we are running in Docker
-dockerflag = config.get("dockerflag", False)
-if os.environ.get("dockerrun") == "yes":
- dockerflag = True
-
-## Handle the API key and the list of allowed users
-my_api_key = config.get("openai_api_key", "")  # put your API key here
-my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key)
-
-## Multi-account mechanism
-multi_api_key = config.get("multi_api_key", False)  # whether to enable the multi-account mechanism
-if multi_api_key:
- api_key_list = config.get("api_key_list", [])
- if len(api_key_list) == 0:
- logging.error("多账号模式已开启,但api_key_list为空,请检查config.json")
- sys.exit(1)
- shared.state.set_api_key_queue(api_key_list)
-
-auth_list = config.get("users", []) # 实际上是使用者的列表
-authflag = len(auth_list) > 0 # 是否开启认证的状态值,改为判断auth_list长度
-
-# Handle a custom api_host; the environment variable takes precedence and is applied automatically if present
-api_host = os.environ.get("api_host", config.get("api_host", ""))
-if api_host:
- shared.state.set_api_host(api_host)
-
-@contextmanager
-def retrieve_openai_api(api_key = None):
- old_api_key = os.environ.get("OPENAI_API_KEY", "")
- if api_key is None:
- os.environ["OPENAI_API_KEY"] = my_api_key
- yield my_api_key
- else:
- os.environ["OPENAI_API_KEY"] = api_key
- yield api_key
- os.environ["OPENAI_API_KEY"] = old_api_key
-
-## Logging setup
-log_level = config.get("log_level", "INFO")
-logging.basicConfig(
- level=log_level,
- format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
-)
-
-## Proxy handling:
-http_proxy = config.get("http_proxy", "")
-https_proxy = config.get("https_proxy", "")
-http_proxy = os.environ.get("HTTP_PROXY", http_proxy)
-https_proxy = os.environ.get("HTTPS_PROXY", https_proxy)
-
-# Reset the environment variables so that a global proxy is not set when it is not needed, avoiding proxy errors
-os.environ["HTTP_PROXY"] = ""
-os.environ["HTTPS_PROXY"] = ""
-
-local_embedding = config.get("local_embedding", False)  # whether to use local embeddings
-
-@contextmanager
-def retrieve_proxy(proxy=None):
- """
-    1. If proxy is None, set the environment variables and return the most recently configured proxy
-    2. If proxy is not None, update the current proxy configuration without updating the environment variables
- """
- global http_proxy, https_proxy
- if proxy is not None:
- http_proxy = proxy
- https_proxy = proxy
- yield http_proxy, https_proxy
- else:
- old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"]
- os.environ["HTTP_PROXY"] = http_proxy
- os.environ["HTTPS_PROXY"] = https_proxy
- yield http_proxy, https_proxy # return new proxy
-
- # return old proxy
- os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var
-
-
-## Handle advanced document settings
-advance_docs = defaultdict(lambda: defaultdict(dict))
-advance_docs.update(config.get("advance_docs", {}))
-def update_doc_config(two_column_pdf):
- global advance_docs
- advance_docs["pdf"]["two_column"] = two_column_pdf
-
- logging.info(f"更新后的文件参数为:{advance_docs}")
-
-## Handle gradio.launch parameters
-server_name = config.get("server_name", None)
-server_port = config.get("server_port", None)
-if server_name is None:
- if dockerflag:
- server_name = "0.0.0.0"
- else:
- server_name = "127.0.0.1"
-if server_port is None:
- if dockerflag:
- server_port = 7860
-
-assert server_port is None or type(server_port) == int, "要求port设置为int类型"
-
-# Set the default model
-default_model = config.get("default_model", "")
-try:
- presets.DEFAULT_MODEL = presets.MODELS.index(default_model)
-except ValueError:
- pass
-
-share = config.get("share", False)
diff --git a/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/test_face.py b/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/test_face.py
deleted file mode 100644
index 4e79e1fbf590ae863eb34d6ee432d4ef2e5a54cf..0000000000000000000000000000000000000000
--- a/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/test_face.py
+++ /dev/null
@@ -1,45 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import os
-from collections import OrderedDict
-
-import data
-from options.test_options import TestOptions
-from models.pix2pix_model import Pix2PixModel
-from util.visualizer import Visualizer
-import torchvision.utils as vutils
-import warnings
-warnings.filterwarnings("ignore", category=UserWarning)
-
-opt = TestOptions().parse()
-
-dataloader = data.create_dataloader(opt)
-
-model = Pix2PixModel(opt)
-model.eval()
-
-visualizer = Visualizer(opt)
-
-
-single_save_url = os.path.join(opt.checkpoints_dir, opt.name, opt.results_dir, "each_img")
-
-
-if not os.path.exists(single_save_url):
- os.makedirs(single_save_url)
-
-
-for i, data_i in enumerate(dataloader):
- if i * opt.batchSize >= opt.how_many:
- break
-
- generated = model(data_i, mode="inference")
-
- img_path = data_i["path"]
-
- for b in range(generated.shape[0]):
- img_name = os.path.split(img_path[b])[-1]
- save_img_url = os.path.join(single_save_url, img_name)
-
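-        # The generator outputs values in [-1, 1]; rescale to [0, 1] before saving.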
- vutils.save_image((generated[b] + 1) / 2, save_img_url)
-
diff --git a/spaces/xxie92/antibody_visulization/diffab/models/_base.py b/spaces/xxie92/antibody_visulization/diffab/models/_base.py
deleted file mode 100644
index 90c96dedac77955005f32239c78c3e5ce67c94ee..0000000000000000000000000000000000000000
--- a/spaces/xxie92/antibody_visulization/diffab/models/_base.py
+++ /dev/null
@@ -1,13 +0,0 @@
-
-_MODEL_DICT = {}
-
-
-def register_model(name):
- def decorator(cls):
- _MODEL_DICT[name] = cls
- return cls
- return decorator
-
-
-def get_model(cfg):
- return _MODEL_DICT[cfg.type](cfg)
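-
-
-# A minimal sketch of how the registry above is meant to be used (the names here
-# are hypothetical and only illustrate the pattern):
-@register_model('_example')
-class _ExampleModel:
-    def __init__(self, cfg):
-        self.cfg = cfg
-
-
-class _ExampleConfig:
-    type = '_example'
-
-
-# get_model(_ExampleConfig()) looks up '_example' in _MODEL_DICT and returns _ExampleModel(cfg).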
diff --git a/spaces/yaoshining/text-generation-webui/modules/RWKV.py b/spaces/yaoshining/text-generation-webui/modules/RWKV.py
deleted file mode 100644
index 35d69986820ec93d7e5dbcf2abc2f19a62dc9c33..0000000000000000000000000000000000000000
--- a/spaces/yaoshining/text-generation-webui/modules/RWKV.py
+++ /dev/null
@@ -1,148 +0,0 @@
-import copy
-import os
-from pathlib import Path
-
-import numpy as np
-from tokenizers import Tokenizer
-
-import modules.shared as shared
-from modules.callbacks import Iteratorize
-
-np.set_printoptions(precision=4, suppress=True, linewidth=200)
-
-os.environ['RWKV_JIT_ON'] = '1'
-os.environ["RWKV_CUDA_ON"] = '1' if shared.args.rwkv_cuda_on else '0' # use CUDA kernel for seq mode (much faster)
-
-from rwkv.model import RWKV
-from rwkv.utils import PIPELINE, PIPELINE_ARGS
-
-
-class RWKVModel:
- def __init__(self):
- pass
-
- @classmethod
- def from_pretrained(self, path, dtype="fp16", device="cuda"):
- tokenizer_path = Path(f"{path.parent}/20B_tokenizer.json")
- if shared.args.rwkv_strategy is None:
- model = RWKV(model=str(path), strategy=f'{device} {dtype}')
- else:
- model = RWKV(model=str(path), strategy=shared.args.rwkv_strategy)
-
- pipeline = PIPELINE(model, str(tokenizer_path))
- result = self()
- result.pipeline = pipeline
- result.model = model
- result.cached_context = ""
- result.cached_model_state = None
- result.cached_output_logits = None
- return result
-
- def generate(self, prompt, state, callback=None):
- args = PIPELINE_ARGS(
- temperature=state['temperature'],
- top_p=state['top_p'],
- top_k=state['top_k'],
- alpha_frequency=0.1, # Frequency Penalty (as in GPT-3)
- alpha_presence=0.1, # Presence Penalty (as in GPT-3)
- token_ban=[0], # ban the generation of some tokens
- token_stop=[]
- )
-
- if self.cached_context != "":
- if prompt.startswith(self.cached_context):
- prompt = prompt[len(self.cached_context):]
- else:
- self.cached_context = ""
- self.cached_model_state = None
- self.cached_output_logits = None
-
- # out = self.pipeline.generate(prompt, token_count=state['max_new_tokens'], args=args, callback=callback)
- out = self.generate_from_cached_state(prompt, token_count=state['max_new_tokens'], args=args, callback=callback)
- return out
-
- def generate_with_streaming(self, *args, **kwargs):
- with Iteratorize(self.generate, args, kwargs, callback=None) as generator:
- reply = ''
- for token in generator:
- reply += token
- yield reply
-
- # Similar to the PIPELINE.generate, but lets us maintain the cached_model_state
- def generate_from_cached_state(self, ctx="", token_count=20, args=None, callback=None):
- all_tokens = []
- out_str = ''
- occurrence = {}
- state = copy.deepcopy(self.cached_model_state) if self.cached_model_state is not None else None
-
- # if we ended up with an empty context, just reuse the cached logits
- # this can happen if a user undoes a message and then sends the exact message again
- # in that case the full context ends up being the same as the cached_context, so the remaining context is empty.
- if ctx == "":
- out = self.cached_output_logits
-
- token = None
- for i in range(token_count):
- # forward
- tokens = self.pipeline.encode(ctx) if i == 0 else [token]
- while len(tokens) > 0:
- out, state = self.model.forward(tokens[:args.chunk_len], state)
- tokens = tokens[args.chunk_len:]
- if i == 0:
- begin_token = len(all_tokens)
- last_token_posi = begin_token
- # cache the model state after scanning the context
- # we don't cache the state after processing our own generated tokens because
- # the output string might be post-processed arbitrarily. Therefore, what's fed into the model
- # on the next round of chat might be slightly different from what it output on the previous round
- if i == 0:
- self.cached_context += ctx
- self.cached_model_state = copy.deepcopy(state)
- self.cached_output_logits = copy.deepcopy(out)
-
- # adjust probabilities
- for n in args.token_ban:
- out[n] = -float('inf')
-
- for n in occurrence:
- out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency)
-
- # sampler
- token = self.pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p, top_k=args.top_k)
- if token in args.token_stop:
- break
-
- all_tokens += [token]
- if token not in occurrence:
- occurrence[token] = 1
- else:
- occurrence[token] += 1
-
- # output
- tmp = self.pipeline.decode(all_tokens[last_token_posi:])
- if '\ufffd' not in tmp: # is valid utf-8 string?
- if callback:
- callback(tmp)
-
- out_str += tmp
- last_token_posi = begin_token + i + 1
- return out_str
-
-
-class RWKVTokenizer:
- def __init__(self):
- pass
-
- @classmethod
- def from_pretrained(self, path):
- tokenizer_path = path / "20B_tokenizer.json"
- tokenizer = Tokenizer.from_file(str(tokenizer_path))
- result = self()
- result.tokenizer = tokenizer
- return result
-
- def encode(self, prompt):
- return self.tokenizer.encode(prompt).ids
-
- def decode(self, ids):
- return self.tokenizer.decode(ids)
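-
-
-# A minimal usage sketch (illustrative only; the model path is hypothetical, and
-# the `state` keys are the ones read by RWKVModel.generate above):
-def _rwkv_example():
-    model = RWKVModel.from_pretrained(Path("models/rwkv-model.pth"), dtype="fp16", device="cuda")
-    state = {'temperature': 0.7, 'top_p': 0.9, 'top_k': 40, 'max_new_tokens': 64}
-    for partial_reply in model.generate_with_streaming("Hello, my name is", state):
-        print(partial_reply)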
diff --git a/spaces/yaoshining/text-generation-webui/modules/training.py b/spaces/yaoshining/text-generation-webui/modules/training.py
deleted file mode 100644
index 855ed914a4e21f3a384e811fc3ef7f5529f5f2b9..0000000000000000000000000000000000000000
--- a/spaces/yaoshining/text-generation-webui/modules/training.py
+++ /dev/null
@@ -1,636 +0,0 @@
-import json
-import math
-import random
-import sys
-import threading
-import time
-import traceback
-from pathlib import Path
-
-import gradio as gr
-import torch
-import transformers
-
-import shutil
-from datetime import datetime
-
-from datasets import Dataset, load_dataset
-from peft import (
- LoraConfig,
- get_peft_model,
- prepare_model_for_int8_training,
- set_peft_model_state_dict
-)
-
-from modules import shared, ui, utils
-from modules.evaluate import (
- calculate_perplexity,
- generate_markdown_table,
- save_past_evaluations
-)
-from modules.logging_colors import logger
-
-# This mapping is from a very recent commit, not yet released.
-# If not available, default to a backup map for some common model types.
-try:
- from peft.utils.other import \
- TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING as \
- model_to_lora_modules
- from transformers.models.auto.modeling_auto import (
- MODEL_FOR_CAUSAL_LM_MAPPING_NAMES
- )
- MODEL_CLASSES = {v: k for k, v in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.items()}
-except:
- standard_modules = ["q_proj", "v_proj"]
- model_to_lora_modules = {"llama": standard_modules, "opt": standard_modules, "gptj": standard_modules, "gpt_neox": ["query_key_value"], "rw": ["query_key_value"]}
- MODEL_CLASSES = {
- "LlamaForCausalLM": "llama",
- "OPTForCausalLM": "opt",
- "GPTJForCausalLM": "gptj",
- "GPTNeoXForCausalLM": "gpt_neox",
- "RWForCausalLM": "rw"
- }
-
-train_log = {}
-train_template = {}
-
-WANT_INTERRUPT = False
-PARAMETERS = ["lora_name", "always_override", "save_steps", "micro_batch_size", "batch_size", "epochs", "learning_rate", "lr_scheduler_type", "lora_rank", "lora_alpha", "lora_dropout", "cutoff_len", "dataset", "eval_dataset", "format", "eval_steps", "raw_text_file", "overlap_len", "newline_favor_len", "higher_rank_limit", "warmup_steps", "optimizer", "hard_cut_string", "train_only_after", "stop_at_loss"]
-
-
-def create_train_interface():
- with gr.Tab('Train LoRA', elem_id='lora-train-tab'):
- gr.Markdown("Confused? [[Click here for a guide]](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Training-LoRAs.md)")
-
- with gr.Row():
- lora_name = gr.Textbox(label='Name', info='The name of your new LoRA file')
- always_override = gr.Checkbox(label='Override Existing Files', value=False, info='If the name given is the same as an existing file, checking this will replace that file. Leaving unchecked will load that file and continue from it (must use the same rank value as the original had).')
- save_steps = gr.Number(label='Save every n steps', value=0, info='If above 0, a checkpoint of the LoRA will be saved every time this many steps pass.')
-
- with gr.Row():
- copy_from = gr.Dropdown(label='Copy parameters from', value='None', choices=utils.get_available_loras())
- ui.create_refresh_button(copy_from, lambda: None, lambda: {'choices': utils.get_available_loras()}, 'refresh-button')
-
- with gr.Row():
- # TODO: Implement multi-device support.
- micro_batch_size = gr.Slider(label='Micro Batch Size', value=4, minimum=1, maximum=128, step=1, info='Per-device batch size (NOTE: multiple devices not yet implemented). Increasing this will increase VRAM usage.')
- batch_size = gr.Slider(label='Batch Size', value=128, minimum=0, maximum=1024, step=4, info='Global batch size. The two batch sizes together determine gradient accumulation (gradientAccum = batch / microBatch). Higher gradient accum values lead to better quality training.')
-
- with gr.Row():
- epochs = gr.Number(label='Epochs', value=3, info='Number of times every entry in the dataset should be fed into training. So 1 means feed each item in once, 5 means feed it in five times, etc.')
- learning_rate = gr.Textbox(label='Learning Rate', value='3e-4', info='Learning rate, in scientific notation. 3e-4 is a good starting base point. 1e-2 is extremely high, 1e-6 is extremely low.')
- lr_scheduler_type = gr.Dropdown(label='LR Scheduler', value='linear', choices=['linear', 'constant', 'constant_with_warmup', 'cosine', 'cosine_with_restarts', 'polynomial', 'inverse_sqrt'], info='Learning rate scheduler - defines how the learning rate changes over time. "Constant" means never change, "linear" means to go in a straight line from the learning rate down to 0, cosine follows a curve, etc.')
-
- # TODO: What is the actual maximum rank? Likely distinct per model. This might be better to somehow be on a log scale.
- lora_rank = gr.Slider(label='LoRA Rank', value=32, minimum=0, maximum=1024, step=4, info='LoRA Rank, or dimension count. Higher values produce a larger file with better control over the model\'s content. Smaller values produce a smaller file with less overall control. Small values like 4 or 8 are great for stylistic guidance, higher values like 128 or 256 are good for teaching content upgrades, extremely high values (1024+) are difficult to train but may improve fine-detail learning for large datasets. Higher ranks also require higher VRAM.')
- lora_alpha = gr.Slider(label='LoRA Alpha', value=64, minimum=0, maximum=2048, step=4, info='LoRA Alpha. This divided by the rank becomes the scaling of the LoRA. Higher means stronger. A good standard value is twice your Rank.')
-
- cutoff_len = gr.Slider(label='Cutoff Length', minimum=0, maximum=2048, value=256, step=32, info='Cutoff length for text input. Essentially, how long of a line of text to feed in at a time. Higher values require drastically more VRAM.')
-
- with gr.Tab(label='Formatted Dataset'):
- with gr.Row():
- dataset = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'json'), value='None', label='Dataset', info='The dataset file to use for training.')
- ui.create_refresh_button(dataset, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'json')}, 'refresh-button')
- eval_dataset = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'json'), value='None', label='Evaluation Dataset', info='The (optional) dataset file used to evaluate the model after training.')
- ui.create_refresh_button(eval_dataset, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'json')}, 'refresh-button')
- format = gr.Dropdown(choices=utils.get_datasets('training/formats', 'json'), value='None', label='Data Format', info='The format file used to decide how to format the dataset input.')
- ui.create_refresh_button(format, lambda: None, lambda: {'choices': utils.get_datasets('training/formats', 'json')}, 'refresh-button')
-
- eval_steps = gr.Number(label='Evaluate every n steps', value=100, info='If an evaluation dataset is given, test it every time this many steps pass.')
-
- with gr.Tab(label="Raw text file"):
- with gr.Row():
- raw_text_file = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'txt'), value='None', label='Text file', info='The raw text file to use for training.')
- ui.create_refresh_button(raw_text_file, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'txt')}, 'refresh-button')
- hard_cut_string = gr.Textbox(label='Hard Cut String', value='\\n\\n\\n', info='String that indicates a hard cut between text parts. Helps prevent unwanted overlap.')
-
- with gr.Row():
- overlap_len = gr.Slider(label='Overlap Length', minimum=0, maximum=512, value=128, step=16, info='Overlap length - ie how many tokens from the prior chunk of text to include into the next chunk. (The chunks themselves will be of a size determined by Cutoff Length below). Setting overlap to exactly half the cutoff length may be ideal.')
- newline_favor_len = gr.Slider(label='Prefer Newline Cut Length', minimum=0, maximum=512, value=128, step=16, info='Length (in characters, not tokens) of the maximum distance to shift an overlap cut by to ensure chunks cut at newlines. If too low, cuts may occur in the middle of lines.')
-
- with gr.Accordion(label='Advanced Options', open=False):
- lora_dropout = gr.Slider(label='LoRA Dropout', minimum=0.0, maximum=1.0, step=0.025, value=0.05, info='Percentage probability for dropout of LoRA layers. This can help reduce overfitting. Most users should leave at default.')
- warmup_steps = gr.Number(label='Warmup Steps', value=100, info='For this many steps at the start, the learning rate will be lower than normal. This helps the trainer prepare the model and precompute statistics to improve the quality of training after the start.')
- optimizer = gr.Dropdown(label='Optimizer', value='adamw_torch', choices=['adamw_hf', 'adamw_torch', 'adamw_torch_fused', 'adamw_torch_xla', 'adamw_apex_fused', 'adafactor', 'adamw_bnb_8bit', 'adamw_anyprecision', 'sgd', 'adagrad'], info='Different optimizer implementation options, for advanced users. Effects of different options are not well documented yet.')
- train_only_after = gr.Textbox(label='Train Only After', value='', info='Only consider text *after* this string in any given chunk for training. For Alpaca datasets, use "### Response:" to only train the response and ignore the input.')
- stop_at_loss = gr.Slider(label='Stop at loss', minimum=0.0, maximum=3.0, step=0.1, value=0.00, info='The process will automatically stop once the desired loss value is reached. (reasonable numbers are 1.5-1.8)')
-
- with gr.Row():
- higher_rank_limit = gr.Checkbox(label='Enable higher ranks', value=False, info='If checked, changes Rank/Alpha slider above to go much higher. This will not work without a datacenter-class GPU.')
-
- with gr.Row():
- start_button = gr.Button("Start LoRA Training")
- stop_button = gr.Button("Interrupt")
-
- output = gr.Markdown(value="Ready")
-
- with gr.Tab('Perplexity evaluation', elem_id='evaluate-tab'):
- with gr.Row():
- with gr.Column():
- models = gr.Dropdown(utils.get_available_models(), label='Models', multiselect=True)
- evaluate_text_file = gr.Dropdown(choices=['wikitext', 'ptb', 'ptb_new'] + utils.get_datasets('training/datasets', 'txt')[1:], value='wikitext', label='Input dataset', info='The raw text file on which the model will be evaluated. The first options are automatically downloaded: wikitext, ptb, and ptb_new. The next options are your local text files under training/datasets.')
- with gr.Row():
- stride_length = gr.Slider(label='Stride', minimum=1, maximum=2048, value=512, step=1, info='Used to make the evaluation faster at the cost of accuracy. 1 = slowest but most accurate. 512 is a common value.')
- max_length = gr.Slider(label='max_length', minimum=0, maximum=8096, value=0, step=1, info='The context for each evaluation. If set to 0, the maximum context length for the model will be used.')
-
- with gr.Row():
- start_current_evaluation = gr.Button("Evaluate loaded model")
- start_evaluation = gr.Button("Evaluate selected models")
- stop_evaluation = gr.Button("Interrupt")
-
- with gr.Column():
- evaluation_log = gr.Markdown(value='')
-
- evaluation_table = gr.Dataframe(value=generate_markdown_table(), interactive=True)
- with gr.Row():
- save_comments = gr.Button('Save comments', elem_classes="small-button")
- refresh_table = gr.Button('Refresh the table', elem_classes="small-button")
-
- # Training events
- all_params = [lora_name, always_override, save_steps, micro_batch_size, batch_size, epochs, learning_rate, lr_scheduler_type, lora_rank, lora_alpha, lora_dropout, cutoff_len, dataset, eval_dataset, format, eval_steps, raw_text_file, overlap_len, newline_favor_len, higher_rank_limit, warmup_steps, optimizer, hard_cut_string, train_only_after, stop_at_loss]
- copy_from.change(do_copy_params, [copy_from] + all_params, all_params)
- start_button.click(do_train, all_params, output)
- stop_button.click(do_interrupt, None, None, queue=False)
- higher_rank_limit.change(change_rank_limit, [higher_rank_limit], [lora_rank, lora_alpha])
-
- # Evaluation events. For some reason, the interrupt event
- # doesn't work with the .then() syntax, so I write them one
- # by one in this ugly but functional way.
- ev = start_evaluation.click(calculate_perplexity, [models, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
- start_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
-
- tmp = gr.State('')
- start_current_evaluation.click(lambda: ['current model'], None, tmp)
- ev_cur = start_current_evaluation.click(calculate_perplexity, [tmp, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False)
- start_current_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False)
-
- stop_evaluation.click(None, None, None, cancels=[ev, ev_cur], queue=False)
- refresh_table.click(generate_markdown_table, None, evaluation_table, show_progress=True)
- save_comments.click(
- save_past_evaluations, evaluation_table, None).then(
- lambda: "Comments saved.", None, evaluation_log, show_progress=False)
-
-
-def do_interrupt():
- global WANT_INTERRUPT
- WANT_INTERRUPT = True
-
-
-def do_copy_params(lora_name: str, *args):
- f_name = f"{shared.args.lora_dir}/{clean_path(None, lora_name)}/training_parameters.json"
- if Path(f_name).is_file():
- with open(f_name, 'r', encoding='utf-8') as format_file:
- params: dict[str, str] = json.load(format_file)
- else:
- params = {}
-
- result = list()
- for i in range(0, len(PARAMETERS)):
- key = PARAMETERS[i]
- if key in params:
- result.append(params[key])
- else:
- result.append(args[i])
-
- return result
-
-
-def change_rank_limit(use_higher_ranks: bool):
- mult = 2 if use_higher_ranks else 1
- return {"maximum": 1024 * mult, "__type__": "update"}, {"maximum": 2048 * mult, "__type__": "update"}
-
-
-def clean_path(base_path: str, path: str):
- """Strips unusual symbols and forcibly builds a path as relative to the intended directory."""
- # TODO: Probably could do with a security audit to guarantee there's no ways this can be bypassed to target an unwanted path.
- # Or swap it to a strict whitelist of [a-zA-Z_0-9]
- path = path.replace('\\', '/').replace('..', '_')
- if base_path is None:
- return path
-
- return f'{Path(base_path).absolute()}/{path}'
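-
-
-def _clean_path_examples():
-    # Illustrative behaviour of the sanitiser above (the paths are hypothetical):
-    assert clean_path(None, "..\\my_lora") == "_/my_lora"
-    assert clean_path("loras", "nested/../name") == f"{Path('loras').absolute()}/nested/_/name"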
-
-
-def backup_adapter(input_folder):
- # Get the creation date of the file adapter_model.bin
- try:
- adapter_file = Path(f"{input_folder}/adapter_model.bin")
- if adapter_file.is_file():
-
- logger.info("Backing up existing LoRA adapter...")
- creation_date = datetime.fromtimestamp(adapter_file.stat().st_ctime)
- creation_date_str = creation_date.strftime("Backup-%Y-%m-%d")
-
- # Create the new subfolder
- subfolder_path = Path(f"{input_folder}/{creation_date_str}")
- subfolder_path.mkdir(parents=True, exist_ok=True)
-
- # Check if the file already exists in the subfolder
- backup_adapter_file = Path(f"{input_folder}/{creation_date_str}/adapter_model.bin")
- if backup_adapter_file.is_file():
- print(" - Backup already exists. Skipping backup process.")
- return
-
- # Copy existing files to the new subfolder
- existing_files = Path(input_folder).iterdir()
- for file in existing_files:
- if file.is_file():
- shutil.copy2(file, subfolder_path)
- except Exception as e:
- print("An error occurred in backup_adapter:", str(e))
-
-
-def do_train(lora_name: str, always_override: bool, save_steps: int, micro_batch_size: int, batch_size: int, epochs: int, learning_rate: str, lr_scheduler_type: str, lora_rank: int, lora_alpha: int, lora_dropout: float, cutoff_len: int, dataset: str, eval_dataset: str, format: str, eval_steps: int, raw_text_file: str, overlap_len: int, newline_favor_len: int, higher_rank_limit: bool, warmup_steps: int, optimizer: str, hard_cut_string: str, train_only_after: str, stop_at_loss: float):
-
- if shared.args.monkey_patch:
- from monkeypatch.peft_tuners_lora_monkey_patch import (
- replace_peft_model_with_gptq_lora_model
- )
- replace_peft_model_with_gptq_lora_model()
-
- global WANT_INTERRUPT
- WANT_INTERRUPT = False
-
- # == Input validation / processing ==
- yield "Prepping..."
- lora_file_path = clean_path(None, lora_name)
- if lora_file_path.strip() == '':
- yield "Missing or invalid LoRA file name input."
- return
-
- lora_file_path = f"{shared.args.lora_dir}/{lora_file_path}"
- actual_lr = float(learning_rate)
- model_type = type(shared.model).__name__
-
- if model_type in MODEL_CLASSES:
- model_id = MODEL_CLASSES[model_type]
- else:
- model_id = "llama"
- if model_type == "PeftModelForCausalLM":
- if len(shared.args.lora_names) > 0:
- yield "You are trying to train a LoRA while you already have another LoRA loaded. This will work, but may have unexpected effects. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logger.warning("Training LoRA over top of another LoRA. May have unexpected effects.")
- else:
- yield "Model ID not matched due to LoRA loading. Consider reloading base model. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logger.warning("Model ID not matched due to LoRA loading. Consider reloading base model.")
- else:
- yield "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. Unexpected errors may follow. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*"
- logger.warning(f"LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: {model_type})")
-
- time.sleep(5)
-
- if shared.args.wbits > 0 and not shared.args.monkey_patch:
- yield "LoRA training with GPTQ models requires loading with `--monkey-patch`"
- return
-
- elif not (shared.args.load_in_8bit or shared.args.load_in_4bit) and shared.args.wbits <= 0:
- yield "It is highly recommended you use `--load-in-8bit` for LoRA training. *(Will continue anyway in 2 seconds, press `Interrupt` to stop.)*"
- logger.warning("It is highly recommended you use `--load-in-8bit` for LoRA training.")
- time.sleep(2) # Give it a moment for the message to show in UI before continuing
-
- if cutoff_len <= 0 or micro_batch_size <= 0 or batch_size <= 0 or actual_lr <= 0 or lora_rank <= 0 or lora_alpha <= 0:
- yield "Cannot input zeroes."
- return
-
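-    # Global batch size divided by per-device batch size gives the number of
-    # gradient accumulation steps (e.g. batch_size=128, micro_batch_size=4 -> 32).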
- gradient_accumulation_steps = batch_size // micro_batch_size
- shared.tokenizer.pad_token_id = 0
- shared.tokenizer.padding_side = "left"
-
- def encode(text, add_bos_token):
- result = shared.tokenizer.encode(text, truncation=True, max_length=cutoff_len)
- if not add_bos_token and result[0] == shared.tokenizer.bos_token_id:
- result = result[1:]
- return result
-
- def tokenize(prompt):
-
- if train_only_after == '' or train_only_after not in prompt:
- input_ids = encode(prompt, True)
- input_ids = [shared.tokenizer.pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
- labels = [1] * len(input_ids)
-
- else:
- ind = prompt.index(train_only_after) + len(train_only_after)
- before_tokens = encode(prompt[:ind], True)
- after_tokens = encode(prompt[ind:], False)
-
- full_length = len(after_tokens) + len(before_tokens)
- if full_length > cutoff_len:
- after_tokens = after_tokens[:cutoff_len - len(before_tokens)]
- else:
- before_tokens = [shared.tokenizer.pad_token_id] * (cutoff_len - full_length) + before_tokens
-
- input_ids = before_tokens + after_tokens
- labels = [-100] * len(before_tokens) + [1] * len(after_tokens)
-
- input_ids = torch.tensor(input_ids)
- return {
- "input_ids": input_ids,
- "labels": labels,
- "attention_mask": input_ids.ne(shared.tokenizer.pad_token_id),
- }
-
- train_template.clear()
-
- # == Prep the dataset, format, etc ==
- if raw_text_file not in ['None', '']:
- logger.info("Loading raw text file dataset...")
-
- train_template["template_type"] = "raw_text"
-
- with open(clean_path('training/datasets', f'{raw_text_file}.txt'), 'r', encoding='utf-8') as file:
- raw_text = file.read().replace('\r', '')
-
- cut_string = hard_cut_string.replace('\\n', '\n')
- out_tokens = []
- for text_part in raw_text.split(cut_string):
- if text_part.strip() == '':
- continue
-
- tokens = shared.tokenizer.encode(text_part)
- step = cutoff_len - overlap_len
- if step <= 0:
- yield f"Error: overlap_len ({overlap_len}) cannot be greater than or equal to cutoff_len ({cutoff_len})"
- return
-
- tokens = list(split_chunks(tokens, step))
- for i in range(1, len(tokens)):
- tokens[i] = tokens[i - 1][-overlap_len:] + tokens[i]
-
- out_tokens.extend(tokens)
- del tokens
-
- del raw_text # Note: could be a gig for a large dataset, so delete redundant data as we go to be safe on RAM
- text_chunks = [shared.tokenizer.decode(x) for x in out_tokens]
- del out_tokens
- if newline_favor_len > 0:
- text_chunks = [cut_chunk_for_newline(x, newline_favor_len) for x in text_chunks]
-
- train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
- del text_chunks
- eval_data = None
- else:
- if dataset in ['None', '']:
- yield "**Missing dataset choice input, cannot continue.**"
- return
-
- if format in ['None', '']:
- yield "**Missing format choice input, cannot continue.**"
- return
-
- train_template["template_type"] = "dataset"
-
- with open(clean_path('training/formats', f'{format}.json'), 'r', encoding='utf-8-sig') as formatFile:
- format_data: dict[str, str] = json.load(formatFile)
-
- # == store training prompt ==
- for _, value in format_data.items():
- prompt_key = f"template_{len(train_template)}"
- train_template[prompt_key] = value
-
- def generate_prompt(data_point: dict[str, str]):
- for options, data in format_data.items():
- if set(options.split(',')) == set(x[0] for x in data_point.items() if (x[1] is not None and len(x[1].strip()) > 0)):
- for key, val in data_point.items():
- if val is not None:
- data = data.replace(f'%{key}%', val)
- return data
- raise RuntimeError(f'Data-point "{data_point}" has no keyset match within format "{list(format_data.keys())}"')
-
- def generate_and_tokenize_prompt(data_point):
- prompt = generate_prompt(data_point)
- return tokenize(prompt)
-
- logger.info("Loading JSON datasets...")
- data = load_dataset("json", data_files=clean_path('training/datasets', f'{dataset}.json'))
- train_data = data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
-
- if eval_dataset == 'None':
- eval_data = None
- else:
- eval_data = load_dataset("json", data_files=clean_path('training/datasets', f'{eval_dataset}.json'))
- eval_data = eval_data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
-
- # == Start prepping the model itself ==
- if not hasattr(shared.model, 'lm_head') or hasattr(shared.model.lm_head, 'weight'):
- logger.info("Getting model ready...")
- prepare_model_for_int8_training(shared.model)
-
- logger.info("Prepping for training...")
- config = LoraConfig(
- r=lora_rank,
- lora_alpha=lora_alpha,
- target_modules=model_to_lora_modules[model_id],
- lora_dropout=lora_dropout,
- bias="none",
- task_type="CAUSAL_LM"
- )
-
- # == Backup the existing adapter ==
- if not always_override:
- backup_adapter(lora_file_path)
-
- try:
- logger.info("Creating LoRA model...")
- lora_model = get_peft_model(shared.model, config)
- if not always_override and Path(f"{lora_file_path}/adapter_model.bin").is_file():
- logger.info("Loading existing LoRA data...")
- state_dict_peft = torch.load(f"{lora_file_path}/adapter_model.bin")
- set_peft_model_state_dict(lora_model, state_dict_peft)
- except:
- yield traceback.format_exc()
- return
-
- if shared.args.monkey_patch:
- for n, m in lora_model.named_modules():
- if '4bit' in str(type(m)):
- if m.is_v1_model:
- m.zeros = m.zeros.half()
-
- m.scales = m.scales.half()
-
- class Tracked():
- def __init__(self):
- self.current_steps = 0
- self.max_steps = 0
- self.did_save = False
-
- tracked = Tracked()
- actual_save_steps = math.ceil(save_steps / gradient_accumulation_steps)
-
- class Callbacks(transformers.TrainerCallback):
- def on_step_begin(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
- tracked.current_steps = state.global_step * gradient_accumulation_steps
- tracked.max_steps = state.max_steps * gradient_accumulation_steps
- if WANT_INTERRUPT:
- control.should_epoch_stop = True
- control.should_training_stop = True
- elif state.global_step > 0 and actual_save_steps > 0 and state.global_step % actual_save_steps == 0:
- lora_model.save_pretrained(f"{lora_file_path}/checkpoint-{tracked.current_steps}/")
- # Save log
- with open(f"{lora_file_path}/checkpoint-{tracked.current_steps}/training_log.json", 'w', encoding='utf-8') as file:
- json.dump(train_log, file, indent=2)
- # == Save training prompt ==
- with open(f"{lora_file_path}/checkpoint-{tracked.current_steps}/training_prompt.json", 'w', encoding='utf-8') as file:
- json.dump(train_template, file, indent=2)
-
- def on_substep_end(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs):
- tracked.current_steps += 1
- if WANT_INTERRUPT:
- control.should_epoch_stop = True
- control.should_training_stop = True
-
- def on_log(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, logs, **kwargs):
- train_log.update(logs)
- train_log.update({"current_steps": tracked.current_steps})
- if WANT_INTERRUPT:
- print("\033[1;31;1mInterrupted by user\033[0;37;0m")
-
- print(f"\033[1;30;40mStep: {tracked.current_steps} \033[0;37;0m", end='')
- if 'loss' in logs:
- loss = float(logs['loss'])
- if loss <= stop_at_loss:
- control.should_epoch_stop = True
- control.should_training_stop = True
- print(f"\033[1;31;1mStop Loss {stop_at_loss} reached.\033[0;37;0m")
-
- trainer = transformers.Trainer(
- model=lora_model,
- train_dataset=train_data,
- eval_dataset=eval_data,
- args=transformers.TrainingArguments(
- per_device_train_batch_size=micro_batch_size,
- gradient_accumulation_steps=gradient_accumulation_steps,
- warmup_steps=math.ceil(warmup_steps / gradient_accumulation_steps),
- num_train_epochs=epochs,
- learning_rate=actual_lr,
- fp16=False if shared.args.cpu else True,
- optim=optimizer,
- logging_steps=2 if stop_at_loss > 0 else 5,
- evaluation_strategy="steps" if eval_data is not None else "no",
- eval_steps=math.ceil(eval_steps / gradient_accumulation_steps) if eval_data is not None else None,
- save_strategy="steps" if eval_data is not None else "no",
- output_dir=lora_file_path,
- lr_scheduler_type=lr_scheduler_type,
- load_best_model_at_end=eval_data is not None,
- # TODO: Enable multi-device support
- ddp_find_unused_parameters=None,
- no_cuda=shared.args.cpu
- ),
- data_collator=transformers.DataCollatorForLanguageModeling(shared.tokenizer, mlm=False),
- callbacks=list([Callbacks()])
- )
-
- lora_model.config.use_cache = False
-
- if torch.__version__ >= "2" and sys.platform != "win32":
- lora_model = torch.compile(lora_model)
-
- # == Save parameters for reuse ==
- with open(f"{lora_file_path}/training_parameters.json", 'w', encoding='utf-8') as file:
- vars = locals()
- json.dump({x: vars[x] for x in PARAMETERS}, file, indent=2)
-
- # == Save training prompt ==
- with open(f"{lora_file_path}/training_prompt.json", 'w', encoding='utf-8') as file:
- json.dump(train_template, file, indent=2)
-
- # == Main run and monitor loop ==
- logger.info("Starting training...")
- yield "Starting..."
-
- train_log.update({"base_model_name": shared.model_name})
- train_log.update({"base_model_class": shared.model.__class__.__name__})
- train_log.update({"base_loaded_in_4bit": getattr(lora_model, "is_loaded_in_4bit", False)})
- train_log.update({"base_loaded_in_8bit": getattr(lora_model, "is_loaded_in_8bit", False)})
-
- if stop_at_loss > 0:
- print(f"Monitoring loss \033[1;31;1m(Auto-Stop at: {stop_at_loss})\033[0;37;0m")
-
- if WANT_INTERRUPT:
- yield "Interrupted before start."
- return
-
- def threaded_run():
- trainer.train()
- # Note: save in the thread in case the gradio thread breaks (eg browser closed)
- lora_model.save_pretrained(lora_file_path)
- logger.info("LoRA training run is completed and saved.")
- # Save log
- with open(f"{lora_file_path}/training_log.json", 'w', encoding='utf-8') as file:
- json.dump(train_log, file, indent=2)
-
- thread = threading.Thread(target=threaded_run)
- thread.start()
- last_step = 0
- start_time = time.perf_counter()
-
- while thread.is_alive():
- time.sleep(0.5)
- if WANT_INTERRUPT:
- yield "Interrupting, please wait... *(Run will stop after the current training step completes.)*"
-
- elif tracked.current_steps != last_step:
- last_step = tracked.current_steps
- time_elapsed = time.perf_counter() - start_time
- if time_elapsed <= 0:
- timer_info = ""
- total_time_estimate = 999
- else:
- its = tracked.current_steps / time_elapsed
- if its > 1:
- timer_info = f"`{its:.2f}` it/s"
- else:
- timer_info = f"`{1.0/its:.2f}` s/it"
-
- total_time_estimate = (1.0 / its) * (tracked.max_steps)
-
- yield f"Running... **{tracked.current_steps}** / **{tracked.max_steps}** ... {timer_info}, {format_time(time_elapsed)} / {format_time(total_time_estimate)} ... {format_time(total_time_estimate - time_elapsed)} remaining"
-
- # Saving in the train thread might fail if an error occurs, so save here if so.
- if not tracked.did_save:
- logger.info("Training complete, saving...")
- lora_model.save_pretrained(lora_file_path)
-
- if WANT_INTERRUPT:
- logger.info("Training interrupted.")
- yield f"Interrupted. Incomplete LoRA saved to `{lora_file_path}`"
- else:
- logger.info("Training complete!")
- yield f"Done! LoRA saved to `{lora_file_path}`"
-
-
-def split_chunks(arr, step):
- for i in range(0, len(arr), step):
- yield arr[i:i + step]
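-
-
-def _chunking_example():
-    # Illustrative sketch of the raw-text chunking performed in do_train above:
-    # with cutoff_len=8 and overlap_len=2 the step is 6, and every chunk after
-    # the first is prefixed with the last `overlap_len` tokens of its predecessor.
-    tokens = list(range(20))
-    step = 8 - 2
-    chunks = list(split_chunks(tokens, step))
-    for i in range(1, len(chunks)):
-        chunks[i] = chunks[i - 1][-2:] + chunks[i]
-    return chunks  # [[0..5], [4..11], [10..17], [16, 17, 18, 19]]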
-
-
-def cut_chunk_for_newline(chunk: str, max_length: int):
- if '\n' not in chunk:
- return chunk
-
- first_newline = chunk.index('\n')
- if first_newline < max_length:
- chunk = chunk[first_newline + 1:]
-
- if '\n' not in chunk:
- return chunk
-
- last_newline = chunk.rindex('\n')
- if len(chunk) - last_newline < max_length:
- chunk = chunk[:last_newline]
-
- return chunk
-
-
-def format_time(seconds: float):
- if seconds < 120:
- return f"`{seconds:.0f}` seconds"
-
- minutes = seconds / 60
- if minutes < 120:
- return f"`{minutes:.0f}` minutes"
-
- hours = minutes / 60
- return f"`{hours:.0f}` hours"
diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/train_retrieval.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/train_retrieval.py
deleted file mode 100644
index 574f03382cc8197b97971a11ae54b632bcfe6655..0000000000000000000000000000000000000000
--- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/train_retrieval.py
+++ /dev/null
@@ -1,345 +0,0 @@
-'''
- * Copyright (c) 2022, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
-'''
-import argparse
-import os
-import ruamel_yaml as yaml
-import numpy as np
-import random
-import time
-import datetime
-import json
-from pathlib import Path
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-from torch.utils.data import DataLoader
-
-from models.blip_retrieval import blip_retrieval
-import utils
-from utils import cosine_lr_schedule
-from data import create_dataset, create_sampler, create_loader
-
-
-def train(model, data_loader, optimizer, epoch, device, config):
- # train
- model.train()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
- metric_logger.add_meter('loss_itm', utils.SmoothedValue(window_size=1, fmt='{value:.4f}'))
- metric_logger.add_meter('loss_ita', utils.SmoothedValue(window_size=1, fmt='{value:.4f}'))
- header = 'Train Epoch: [{}]'.format(epoch)
- print_freq = 50
-
- for i,(image, caption, idx) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
- image = image.to(device,non_blocking=True)
- idx = idx.to(device,non_blocking=True)
-
- if epoch>0:
- alpha = config['alpha']
- else:
- alpha = config['alpha']*min(1,i/len(data_loader))
-
- loss_ita, loss_itm = model(image, caption, alpha=alpha, idx=idx)
- loss = loss_ita + loss_itm
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- metric_logger.update(loss_itm=loss_itm.item())
- metric_logger.update(loss_ita=loss_ita.item())
- metric_logger.update(lr=optimizer.param_groups[0]["lr"])
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- print("Averaged stats:", metric_logger.global_avg())
- return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()}
-
-
-@torch.no_grad()
-def evaluation(model, data_loader, device, config):
- # test
- model.eval()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- header = 'Evaluation:'
-
- print('Computing features for evaluation...')
- start_time = time.time()
-
- texts = data_loader.dataset.text
- num_text = len(texts)
- text_bs = 256
- text_ids = []
- text_embeds = []
- text_atts = []
- for i in range(0, num_text, text_bs):
- text = texts[i: min(num_text, i+text_bs)]
- text_input = model.tokenizer(text, padding='max_length', truncation=True, max_length=35, return_tensors="pt").to(device)
- text_output = model.text_encoder(text_input.input_ids, attention_mask = text_input.attention_mask, mode='text')
- text_embed = F.normalize(model.text_proj(text_output.last_hidden_state[:,0,:]))
- text_embeds.append(text_embed)
- text_ids.append(text_input.input_ids)
- text_atts.append(text_input.attention_mask)
-
- text_embeds = torch.cat(text_embeds,dim=0)
- text_ids = torch.cat(text_ids,dim=0)
- text_atts = torch.cat(text_atts,dim=0)
- text_ids[:,0] = model.tokenizer.enc_token_id
-
- image_feats = []
- image_embeds = []
- for image, img_id in data_loader:
- image = image.to(device)
- image_feat = model.visual_encoder(image)
- image_embed = model.vision_proj(image_feat[:,0,:])
- image_embed = F.normalize(image_embed,dim=-1)
-
- image_feats.append(image_feat.cpu())
- image_embeds.append(image_embed)
-
- image_feats = torch.cat(image_feats,dim=0)
- image_embeds = torch.cat(image_embeds,dim=0)
-
- sims_matrix = image_embeds @ text_embeds.t()
- score_matrix_i2t = torch.full((len(data_loader.dataset.image),len(texts)),-100.0).to(device)
-
- num_tasks = utils.get_world_size()
- rank = utils.get_rank()
- step = sims_matrix.size(0)//num_tasks + 1
- start = rank*step
- end = min(sims_matrix.size(0),start+step)
-
- for i,sims in enumerate(metric_logger.log_every(sims_matrix[start:end], 50, header)):
- topk_sim, topk_idx = sims.topk(k=config['k_test'], dim=0)
-
- encoder_output = image_feats[start+i].repeat(config['k_test'],1,1).to(device)
- encoder_att = torch.ones(encoder_output.size()[:-1],dtype=torch.long).to(device)
- output = model.text_encoder(text_ids[topk_idx],
- attention_mask = text_atts[topk_idx],
- encoder_hidden_states = encoder_output,
- encoder_attention_mask = encoder_att,
- return_dict = True,
- )
- score = model.itm_head(output.last_hidden_state[:,0,:])[:,1]
- score_matrix_i2t[start+i,topk_idx] = score + topk_sim
-
- sims_matrix = sims_matrix.t()
- score_matrix_t2i = torch.full((len(texts),len(data_loader.dataset.image)),-100.0).to(device)
-
- step = sims_matrix.size(0)//num_tasks + 1
- start = rank*step
- end = min(sims_matrix.size(0),start+step)
-
- for i,sims in enumerate(metric_logger.log_every(sims_matrix[start:end], 50, header)):
-
- topk_sim, topk_idx = sims.topk(k=config['k_test'], dim=0)
- encoder_output = image_feats[topk_idx].to(device)
- encoder_att = torch.ones(encoder_output.size()[:-1],dtype=torch.long).to(device)
- output = model.text_encoder(text_ids[start+i].repeat(config['k_test'],1),
- attention_mask = text_atts[start+i].repeat(config['k_test'],1),
- encoder_hidden_states = encoder_output,
- encoder_attention_mask = encoder_att,
- return_dict = True,
- )
- score = model.itm_head(output.last_hidden_state[:,0,:])[:,1]
- score_matrix_t2i[start+i,topk_idx] = score + topk_sim
-
- if args.distributed:
- dist.barrier()
- torch.distributed.all_reduce(score_matrix_i2t, op=torch.distributed.ReduceOp.SUM)
- torch.distributed.all_reduce(score_matrix_t2i, op=torch.distributed.ReduceOp.SUM)
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Evaluation time {}'.format(total_time_str))
-
- return score_matrix_i2t.cpu().numpy(), score_matrix_t2i.cpu().numpy()
-
-
-
-@torch.no_grad()
-def itm_eval(scores_i2t, scores_t2i, txt2img, img2txt):
-
- #Images->Text
- ranks = np.zeros(scores_i2t.shape[0])
- for index,score in enumerate(scores_i2t):
- inds = np.argsort(score)[::-1]
- # Score
- rank = 1e20
- for i in img2txt[index]:
- tmp = np.where(inds == i)[0][0]
- if tmp < rank:
- rank = tmp
- ranks[index] = rank
-
- # Compute metrics
- tr1 = 100.0 * len(np.where(ranks < 1)[0]) / len(ranks)
- tr5 = 100.0 * len(np.where(ranks < 5)[0]) / len(ranks)
- tr10 = 100.0 * len(np.where(ranks < 10)[0]) / len(ranks)
-
- #Text->Images
- ranks = np.zeros(scores_t2i.shape[0])
-
- for index,score in enumerate(scores_t2i):
- inds = np.argsort(score)[::-1]
- ranks[index] = np.where(inds == txt2img[index])[0][0]
-
- # Compute metrics
- ir1 = 100.0 * len(np.where(ranks < 1)[0]) / len(ranks)
- ir5 = 100.0 * len(np.where(ranks < 5)[0]) / len(ranks)
- ir10 = 100.0 * len(np.where(ranks < 10)[0]) / len(ranks)
-
- tr_mean = (tr1 + tr5 + tr10) / 3
- ir_mean = (ir1 + ir5 + ir10) / 3
- r_mean = (tr_mean + ir_mean) / 2
-
- eval_result = {'txt_r1': tr1,
- 'txt_r5': tr5,
- 'txt_r10': tr10,
- 'txt_r_mean': tr_mean,
- 'img_r1': ir1,
- 'img_r5': ir5,
- 'img_r10': ir10,
- 'img_r_mean': ir_mean,
- 'r_mean': r_mean}
- return eval_result
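-
-
-def _itm_eval_example():
-    # Tiny illustrative sketch of the recall computation above: two images and two
-    # captions with an identity ground truth. Both directions rank the correct
-    # item first, so every recall metric comes out at 100.0.
-    scores = np.array([[0.9, 0.1],
-                       [0.2, 0.8]])
-    return itm_eval(scores, scores.T, txt2img={0: 0, 1: 1}, img2txt={0: [0], 1: [1]})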
-
-
-def main(args, config):
- utils.init_distributed_mode(args)
-
- device = torch.device(args.device)
-
- # fix the seed for reproducibility
- seed = args.seed + utils.get_rank()
- torch.manual_seed(seed)
- np.random.seed(seed)
- random.seed(seed)
- cudnn.benchmark = True
-
- #### Dataset ####
- print("Creating retrieval dataset")
- train_dataset, val_dataset, test_dataset = create_dataset('retrieval_%s'%config['dataset'], config)
-
- if args.distributed:
- num_tasks = utils.get_world_size()
- global_rank = utils.get_rank()
- samplers = create_sampler([train_dataset], [True], num_tasks, global_rank) + [None, None]
- else:
- samplers = [None, None, None]
-
- train_loader, val_loader, test_loader = create_loader([train_dataset, val_dataset, test_dataset],samplers,
- batch_size=[config['batch_size_train']]+[config['batch_size_test']]*2,
- num_workers=[4,4,4],
- is_trains=[True, False, False],
- collate_fns=[None,None,None])
-
-
- #### Model ####
- print("Creating model")
- model = blip_retrieval(pretrained=config['pretrained'], image_size=config['image_size'], vit=config['vit'],
- vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'],
- queue_size=config['queue_size'], negative_all_rank=config['negative_all_rank'])
-
- model = model.to(device)
-
- model_without_ddp = model
- if args.distributed:
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- model_without_ddp = model.module
-
- optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay'])
-
- best = 0
- best_epoch = 0
-
- print("Start training")
- start_time = time.time()
-
- for epoch in range(0, config['max_epoch']):
- if not args.evaluate:
- if args.distributed:
- train_loader.sampler.set_epoch(epoch)
-
- cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr'])
-
- train_stats = train(model, train_loader, optimizer, epoch, device, config)
-
- score_val_i2t, score_val_t2i, = evaluation(model_without_ddp, val_loader, device, config)
- score_test_i2t, score_test_t2i = evaluation(model_without_ddp, test_loader, device, config)
-
- if utils.is_main_process():
-
- val_result = itm_eval(score_val_i2t, score_val_t2i, val_loader.dataset.txt2img, val_loader.dataset.img2txt)
- print(val_result)
-
- if val_result['r_mean']>best:
- save_obj = {
- 'model': model_without_ddp.state_dict(),
- 'optimizer': optimizer.state_dict(),
- 'config': config,
- 'epoch': epoch,
- }
- torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth'))
- best = val_result['r_mean']
- best_epoch = epoch
-
- test_result = itm_eval(score_test_i2t, score_test_t2i, test_loader.dataset.txt2img, test_loader.dataset.img2txt)
- print(test_result)
-
- if args.evaluate:
- log_stats = {**{f'val_{k}': v for k, v in val_result.items()},
- **{f'test_{k}': v for k, v in test_result.items()},
- }
- with open(os.path.join(args.output_dir, "evaluate.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
- else:
- log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},
- **{f'val_{k}': v for k, v in val_result.items()},
- **{f'test_{k}': v for k, v in test_result.items()},
- 'epoch': epoch,
- 'best_epoch': best_epoch,
- }
- with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
-
- if args.evaluate:
- break
-
- dist.barrier()
- torch.cuda.empty_cache()
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Training time {}'.format(total_time_str))
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--config', default='./configs/retrieval_flickr.yaml')
- parser.add_argument('--output_dir', default='output/Retrieval_flickr')
- parser.add_argument('--evaluate', action='store_true')
- parser.add_argument('--device', default='cuda')
- parser.add_argument('--seed', default=42, type=int)
- parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes')
- parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training')
- parser.add_argument('--distributed', default=True, type=bool)
- args = parser.parse_args()
-
- config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
-
- Path(args.output_dir).mkdir(parents=True, exist_ok=True)
-
- yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w'))
-
- main(args, config)
\ No newline at end of file
diff --git a/spaces/yhshin/kr-article-summarizer/app.py b/spaces/yhshin/kr-article-summarizer/app.py
deleted file mode 100644
index 8c1098993a817a7aeb2e3420aa3c3dec0511245f..0000000000000000000000000000000000000000
--- a/spaces/yhshin/kr-article-summarizer/app.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import json
-import numpy as np
-import os
-os.environ["TOKENIZERS_PARALLELISM"] = "false"
-
-# Load all test data into list of dictionaries
-#summary_data_path = 'sci-news-sum-kr-50/data/'
-#summary_objects = []
-#for root, dirs, files in os.walk(summary_data_path):
-# files.sort() # Sort file names
-# for ifile, file_name in enumerate(files):
-# with open(os.path.join(root, file_name)) as f:
-# s = json.load(f)
-# s['index'] = file_name.replace('.json','') # index = 'XY' for file 'XY.json'
-# s['sentences'] = [sen + '.' for sen in s['sentences']] # Add punctuation to all sentences
-# s['body'] = ' '.join(s['sentences']) # body is all sentenecs concantenatd with spaces in between
-# summary_objects.append(s)
-
-# Load spacy to split text into sentences
-import spacy
-
-# Cache language model
-nlp = spacy.load("ko_core_news_sm")
-nlp.select_pipes(disable=
- ['tok2vec','tagger','morphologizer','parser','lemmatizer','attribute_ruler','ner']
- )
-nlp.enable_pipe('senter')
-
-def text_to_sentences(nlp, text):
- """Split Korean text into sentences."""
- doc = nlp(text)
- sentences = [sen for sen in doc.sents]
- return sentences
-
-from transformers import AutoConfig, AutoTokenizer, AutoModel
-from summarizer import Summarizer
-
-model_path = 'skt/kobert-base-v1'
-
-# Load model, model config and tokenizer via Transformers
-custom_config = AutoConfig.from_pretrained(model_path)
-custom_config.output_hidden_states=True
-custom_tokenizer = AutoTokenizer.from_pretrained(model_path, do_lower_case=False)
-custom_model = AutoModel.from_pretrained(model_path, config=custom_config)
-model = Summarizer(custom_model=custom_model, custom_tokenizer=custom_tokenizer)
-
-def create_summary(nlp, model, text):
- """Create summary from text of an article using given model"""
-
- # print(model(s['body']))
- k = model.calculate_optimal_k(text, k_max=10)
- return text_to_sentences(nlp, model(text, num_sentences=k))
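-
-# A minimal sketch of direct (non-Gradio) use of the pipeline above; the helper
-# name is hypothetical:
-def summarize_text_example(text):
-    return '\n'.join(str(sen) for sen in create_summary(nlp, model, text))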
-
-from urllib.request import urlopen
-from bs4 import BeautifulSoup
-
-def extract_naver_news(url):
- """Get title, subtitle, and article body from Naver news"""
- html = urlopen(url).read()
- soup = BeautifulSoup(html, features="html.parser")
-
- title = soup.find(class_="media_end_head_headline").get_text()
-
- area = soup.find(id="dic_area")
-
- subtitle_tag = area.find('strong')
- if subtitle_tag: subtitle = area.strong.get_text('\n')
- else: subtitle = ''
-
- for tag in area.find_all(class_="img_desc"):
- tag.extract()
-
- # Add punctuation and spaces between sentences
- article = ' '.join(text for text in area.stripped_strings if text[-1] == '.')
- result = {
- 'title': title,
- 'subtitle': subtitle,
- 'article': article,
- }
- return result
-
-import gradio as gr
-def interface_handler(custom_text, naver_url, choice):
- if choice == 1:
- content = extract_naver_news(naver_url)
- summary_sentences = create_summary(nlp, model, content['article'])
- output_text = ""
- # output_text += f'제목:\n{content["title"]}\n'
- # output_text += f'부제:\n{content["subtitle"]}\n'
- # output_text += '\n개요:\n'
- for sen in summary_sentences:
- output_text += f'{sen}\n\n'
- return output_text
- else:
- output_text = ""
- summary_sentences = create_summary(nlp, model, custom_text)
- for sen in summary_sentences:
- output_text += f'{sen}\n\n'
- return output_text
-
-default_url = "https://n.news.naver.com/article/015/0004692703?sid=102"
-default_text = """
-'나선형 신경망' 학습 기술. 카메라로 찍은 이미지에서 특정한 사물 찾는 기술 활용. 숲속에서 등산로 척척 찾아 드론이 산악구조대 역할도. 미국 국방부는 지난달 말 인공지능(AI)을 이용해 인간 도움 없이 적을 식별해 타격하는 드론(무인 항공기)을 시연했다. 이 드론은 카메라 화면에서 총으로 무장한 사람과 무기가 없는 사람을 구분할 수 있다. 표적으로 정한 사람을 찾아 그가 탄 자동차를 추적하는 기능도 있다. 조만간 원격 조종 없이도 전장에서 특수부대 군인들처럼 임무를 수행하는 드론이 등장할 전망이다. 이 드론이 사람 도움 없이 카메라 영상에서 목표물을 인식하고 추적할 수 있는 것은 바로 ‘머신러닝’ 덕분이다. 머신러닝은 AI의 한 분야로 컴퓨터가 인간처럼 스스로 학습할 수 있는 능력을 부여하는 작업을 말한다. 머신러닝의 원리는 인간을 포함한 영장류 두뇌의 정보 처리 구조인 ‘신경망’을 모사하는 방식이다. 바둑 대결에서 이세돌 9단을 이긴 구글의 ‘알파고’ 등 지금까지 소개된 AI 대부분은 심층신경망을 기반으로 한 머신러닝 알고리즘을 이용한다. 이미지에서 특정 사물을 찾는 기술은 인간이 아니라 고양이 뇌에서 유래했다. 고양이 뇌의 시신경에서 발견되는 ‘나선형 신경망’ 구조는 시각세포들이 보내오는 반응을 모아 여러 개의 층(層)으로 나눈다. 이를 3단계에 걸쳐 점차적으로 단순화하면서 물체의 색깔이나 모양을 파악한다. 이를 처음으로 연구한 데이비드 휴벨과 토어스텐 비젤은 1981년 노벨 생리의학상을 받았다. AI 과학자들은 나선형 신경망에서 아이디어를 얻어 이미지에서 사물을 판별하는 알고리즘을 설계했다. 우선 이미지에서 큰 특징을 추출한 다음 점차 작고 복잡한 특징을 발견해 나가는 방식이다. 예컨대 사진 속에 자동차가 있다고 해 보자. 알고리즘은 우선 사물의 전체적인 윤곽을 먼저 확인한 뒤 기존에 입력된 사진 데이터와 비교해 ‘탈 것’으로 범위를 좁힌다. 이후 타이어나 제조사 엠블럼처럼 세부적인 특징을 파악하고 ‘사진 속에 있는 물체는 자동차’라는 결론을 내리게 된다. 제프 딘 구글 수석연구원은 “나선형 신경망은 다른 머신러닝 구조들과 비교할 때 영상, 음성 분야에서 좋은 성능을 보인다”며 “이를 이용하면 컴퓨터가 처음 본 사물도 무엇인지 파악할 수 있다”고 설명했다. 주변에서 볼 수 있는 영상촬영용 드론에도 이보다는 간단하지만 비슷한 기술이 이용된다. 세계 1위 드론업체인 중국 DJI의 ‘팬텀4’는 사람 눈처럼 두 개의 카메라 센서를 장착했다. 이를 통해 대상 물체를 확인하고 일정 거리를 유지하면서 따라다닌다. 이른바 ‘액티브 트랙’ 기능이다. 액티브 트랙 기능을 켜면 이용자가 지정한 사물이나 사람의 윤곽선을 인식하고 픽셀(이미지를 구성하는 가장 작은 단위인 네모 모양의 점) 단위로 인식한다. 그 픽셀을 계속적으로 같은 크기로 유지하기 위해 기체가 이동한다. 예컨대 주변에 있는 사람을 지정했을 때 픽셀 크기가 상하좌우 100×100 픽셀이었다고 해 보자. 그 사람이 앞으로 움직여서 80×80 픽셀 크기로 줄어들면 원래 수치인 100×100 픽셀을 되찾기 위해 드론도 따라서 앞으로 움직이는 방식이다. 과학자들은 나선형 신경망을 본뜬 머신러닝 기술을 응용해 인간 삶을 윤택하게 할 수 있는 기술을 개발하고 있다. 스위스 취리히대 연구팀은 드론을 이용해 알프스 산맥에서 조난자를 찾는 기술을 연구 중이다. 연구팀이 개발한 AI 드론은 카메라가 촬영한 이미지를 이용해 숲이 우거진 곳과 등산로를 구분한다. 이를 드론의 비행 제어기로 전달해 이동 방향을 결정한다. 올해 초 취리히대가 완료한 첫 실험에서는 ‘드론이 인간보다 등산로를 잘 찾는다’는 결과가 나왔다. 연구팀은 약 2만장의 알프스 산 등산로 사진을 바탕으로 3일간 드론에 탑재된 인공지능의 심층신경망을 학습시켰다. 이후 드론이 전혀 가보지 못한 등산로를 오르도록 했다. 실험 결과 사람 눈으로 새로운 등산로를 식별할 확률은 82%였으나 AI 드론은 85%의 성공률을 보여줬다. 취리히대 연구팀은 “AI 드론은 조만간 실전에 투입돼 산악구조대가 조난자를 찾는 일을 도울 수 있을 것”이라고 말했다. 신경망 학습 기술은 다양한 용도로 활용할 수 있다. 문태현 DJI코리아 대표는 “AI를 탑재한 드론은 송전선이나 송유관 등 산업시설물의 결함 발견, 산불 감지, 장애물이나 군사용 목표물 탐지 등 이용 가능 범위가 무궁무진하다”고 말했다."),
-"""
-
-title = "AI 문서 요약\nKorean text summarization"
-with open('description.md',mode='r') as file:
- description = file.read()
-with open('article.md',mode='r') as file:
- article = file.read()
-
-
-demo = gr.Interface(
- fn=interface_handler,
- inputs=[
- gr.inputs.Textbox(lines=5, placeholder=None, default=default_text, label="임의 문서 (Custom text)", optional=False),
- gr.inputs.Textbox(lines=1, placeholder=None, default=default_url, label="네이버 뉴스 기사 링크주소 (Naver News article URL)", optional=False),
- gr.inputs.Radio(["입력 문서 요약", "네이버 뉴스 기사 요약"], type="index", default=None, label="옵션", optional=False)
- ],
- outputs=[
- gr.outputs.Textbox(label="개요"),
- ],
- title=title,
- description=description,
- article=article,
-)
-
-if __name__ == "__main__":
- demo.launch(debug=True)
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/image_processing_glpn.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/image_processing_glpn.py
deleted file mode 100644
index afed9188f7abac1c535e85cfa3634fbf5d57a4e1..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/image_processing_glpn.py
+++ /dev/null
@@ -1,211 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Image processor class for GLPN."""
-
-from typing import List, Optional, Union
-
-import numpy as np
-import PIL.Image
-
-from ...image_processing_utils import BaseImageProcessor, BatchFeature
-from ...image_transforms import resize, to_channel_dimension_format
-from ...image_utils import (
- ChannelDimension,
- PILImageResampling,
- get_image_size,
- infer_channel_dimension_format,
- is_scaled_image,
- make_list_of_images,
- to_numpy_array,
- valid_images,
-)
-from ...utils import TensorType, logging
-
-
-logger = logging.get_logger(__name__)
-
-
-class GLPNImageProcessor(BaseImageProcessor):
- r"""
- Constructs a GLPN image processor.
-
- Args:
- do_resize (`bool`, *optional*, defaults to `True`):
- Whether to resize the image's (height, width) dimensions, rounding them down to the closest multiple of
- `size_divisor`. Can be overridden by `do_resize` in `preprocess`.
- size_divisor (`int`, *optional*, defaults to 32):
- When `do_resize` is `True`, images are resized so their height and width are rounded down to the closest
- multiple of `size_divisor`. Can be overridden by `size_divisor` in `preprocess`.
- resample (`PIL.Image` resampling filter, *optional*, defaults to `Resampling.BILINEAR`):
- Resampling filter to use if resizing the image. Can be overridden by `resample` in `preprocess`.
- do_rescale (`bool`, *optional*, defaults to `True`):
- Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.). Can be
- overridden by `do_rescale` in `preprocess`.
- """
-
- model_input_names = ["pixel_values"]
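-    # Example usage (a minimal sketch; "example.jpg" is a placeholder for any RGB image):
-    #   processor = GLPNImageProcessor(size_divisor=32)
-    #   inputs = processor(images=PIL.Image.open("example.jpg"), return_tensors="pt")
-    #   # inputs["pixel_values"] has shape (1, 3, H', W'), with H' and W' rounded down to
-    #   # multiples of 32 and pixel values rescaled to the [0, 1] range.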
-
- def __init__(
- self,
- do_resize: bool = True,
- size_divisor: int = 32,
- resample=PILImageResampling.BILINEAR,
- do_rescale: bool = True,
- **kwargs,
- ) -> None:
- self.do_resize = do_resize
- self.do_rescale = do_rescale
- self.size_divisor = size_divisor
- self.resample = resample
- super().__init__(**kwargs)
-
- def resize(
- self,
- image: np.ndarray,
- size_divisor: int,
- resample: PILImageResampling = PILImageResampling.BILINEAR,
- data_format: Optional[ChannelDimension] = None,
- input_data_format: Optional[Union[str, ChannelDimension]] = None,
- **kwargs,
- ) -> np.ndarray:
- """
- Resize the image, rounding the (height, width) dimensions down to the closest multiple of size_divisor.
-
- If the image is of dimension (3, 260, 170) and size_divisor is 32, the image will be resized to (3, 256, 160).
-
- Args:
- image (`np.ndarray`):
- The image to resize.
- size_divisor (`int`):
- The image is resized so its height and width are rounded down to the closest multiple of
- `size_divisor`.
- resample:
- `PIL.Image` resampling filter to use when resizing the image e.g. `PILImageResampling.BILINEAR`.
- data_format (`ChannelDimension` or `str`, *optional*):
- The channel dimension format for the output image. If `None`, the channel dimension format of the input
- image is used. Can be one of:
- - `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
- - `ChannelDimension.LAST`: image in (height, width, num_channels) format.
- input_data_format (`ChannelDimension` or `str`, *optional*):
- The channel dimension format of the input image. If not set, the channel dimension format is inferred
- from the input image. Can be one of:
- - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
- - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
-
- Returns:
- `np.ndarray`: The resized image.
- """
- height, width = get_image_size(image, channel_dim=input_data_format)
- # Rounds the height and width down to the closest multiple of size_divisor
- new_h = height // size_divisor * size_divisor
- new_w = width // size_divisor * size_divisor
- image = resize(
- image,
- (new_h, new_w),
- resample=resample,
- data_format=data_format,
- input_data_format=input_data_format,
- **kwargs,
- )
- return image
-
- def preprocess(
- self,
- images: Union["PIL.Image.Image", TensorType, List["PIL.Image.Image"], List[TensorType]],
- do_resize: Optional[bool] = None,
- size_divisor: Optional[int] = None,
- resample=None,
- do_rescale: Optional[bool] = None,
- return_tensors: Optional[Union[TensorType, str]] = None,
- data_format: ChannelDimension = ChannelDimension.FIRST,
- input_data_format: Optional[Union[str, ChannelDimension]] = None,
- **kwargs,
- ) -> BatchFeature:
- """
- Preprocess the given images.
-
- Args:
- images (`PIL.Image.Image` or `TensorType` or `List[np.ndarray]` or `List[TensorType]`):
- Images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
- passing in images with pixel values between 0 and 1, set `do_normalize=False`.
- do_resize (`bool`, *optional*, defaults to `self.do_resize`):
- Whether to resize the input such that the (height, width) dimensions are a multiple of `size_divisor`.
- size_divisor (`int`, *optional*, defaults to `self.size_divisor`):
- When `do_resize` is `True`, images are resized so their height and width are rounded down to the
- closest multiple of `size_divisor`.
- resample (`PIL.Image` resampling filter, *optional*, defaults to `self.resample`):
- `PIL.Image` resampling filter to use if resizing the image e.g. `PILImageResampling.BILINEAR`. Only has
- an effect if `do_resize` is set to `True`.
- do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
- Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.).
- return_tensors (`str` or `TensorType`, *optional*):
- The type of tensors to return. Can be one of:
- - `None`: Return a list of `np.ndarray`.
- - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
- - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
- - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
- - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
- data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
- The channel dimension format for the output image. Can be one of:
- - `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
- - `ChannelDimension.LAST`: image in (height, width, num_channels) format.
- input_data_format (`ChannelDimension` or `str`, *optional*):
- The channel dimension format for the input image. If unset, the channel dimension format is inferred
- from the input image. Can be one of:
- - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
- - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
- - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
- """
- do_resize = do_resize if do_resize is not None else self.do_resize
- do_rescale = do_rescale if do_rescale is not None else self.do_rescale
- size_divisor = size_divisor if size_divisor is not None else self.size_divisor
- resample = resample if resample is not None else self.resample
-
- if do_resize and size_divisor is None:
- raise ValueError("size_divisor is required for resizing")
-
- images = make_list_of_images(images)
-
- if not valid_images(images):
- raise ValueError("Invalid image(s)")
-
- # All transformations expect numpy arrays.
- images = [to_numpy_array(img) for img in images]
-
- if is_scaled_image(images[0]) and do_rescale:
- logger.warning_once(
- "It looks like you are trying to rescale already rescaled images. If the input"
- " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again."
- )
-
- if input_data_format is None:
- # We assume that all images have the same channel dimension format.
- input_data_format = infer_channel_dimension_format(images[0])
-
- if do_resize:
- images = [
- self.resize(image, size_divisor=size_divisor, resample=resample, input_data_format=input_data_format)
- for image in images
- ]
-
- if do_rescale:
- images = [self.rescale(image, scale=1 / 255, input_data_format=input_data_format) for image in images]
-
- images = [
- to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images
- ]
-
- data = {"pixel_values": images}
- return BatchFeature(data=data, tensor_type=return_tensors)
diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/ContentVec768L9_Onnx.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/ContentVec768L9_Onnx.py
deleted file mode 100644
index 7cdac4cd93478d3ddddb4b76dd9d9ccc5d1af2d4..0000000000000000000000000000000000000000
--- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/ContentVec768L9_Onnx.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from vencoder.encoder import SpeechEncoder
-import onnxruntime
-import torch
-
-class ContentVec768L9_Onnx(SpeechEncoder):
- def __init__(self,vec_path = "pretrain/vec-768-layer-9.onnx",device=None):
- print("load model(s) from {}".format(vec_path))
- self.hidden_dim = 768
- if device is None:
- self.dev = torch.device("cpu")
- else:
- self.dev = torch.device(device)
- if device == 'cpu' or device == torch.device("cpu") or device is None:
- providers = ['CPUExecutionProvider']
- elif device == 'cuda' or device == torch.device("cuda"):
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def encoder(self, wav):
- feats = wav
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- feats = feats.unsqueeze(0).cpu().detach().numpy()
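-        # feats is now a numpy array of shape [1, 1, T], matching the single waveform input
-        # that the ONNX session is fed below.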
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)
- return torch.tensor(logits[0]).transpose(1, 2).to(self.dev)
\ No newline at end of file
diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_a.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_a.py
deleted file mode 100644
index a939955124556355524f48c0f0c16abb07cfc4c4..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/config/dir1/dir1_a.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-dir1a_str = "base_a_1"
-dir1a_dict = {"a": 1, "b": 2}
diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/modeling/test_matcher.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/modeling/test_matcher.py
deleted file mode 100644
index 6eb2db0c24b117337c431e9ef00a85a3bced71b9..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/modeling/test_matcher.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import unittest
-from typing import List
-import torch
-
-from detectron2.config import get_cfg
-from detectron2.modeling.matcher import Matcher
-
-
-class TestMatcher(unittest.TestCase):
- def test_scriptability(self):
- cfg = get_cfg()
- anchor_matcher = Matcher(
- cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True
- )
- match_quality_matrix = torch.tensor(
- [[0.15, 0.45, 0.2, 0.6], [0.3, 0.65, 0.05, 0.1], [0.05, 0.4, 0.25, 0.4]]
- )
- expected_matches = torch.tensor([1, 1, 2, 0])
- expected_match_labels = torch.tensor([-1, 1, 0, 1], dtype=torch.int8)
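-        # match_quality_matrix holds pairwise match quality (e.g. IoU) with one row per
-        # ground-truth box and one column per proposal; the Matcher assigns each proposal the
-        # ground-truth with the highest quality and labels it via cfg.MODEL.RPN.IOU_THRESHOLDS /
-        # IOU_LABELS, while allow_low_quality_matches also keeps the best proposal per
-        # ground-truth as a positive.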
-
- matches, match_labels = anchor_matcher(match_quality_matrix)
- self.assertTrue(torch.allclose(matches, expected_matches))
- self.assertTrue(torch.allclose(match_labels, expected_match_labels))
-
-        # nonzero_tuple must be imported explicitly to let jit know what it is.
- # https://github.com/pytorch/pytorch/issues/38964
- from detectron2.layers import nonzero_tuple # noqa F401
-
- def f(thresholds: List[float], labels: List[int]):
- return Matcher(thresholds, labels, allow_low_quality_matches=True)
-
- scripted_anchor_matcher = torch.jit.script(f)(
- cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS
- )
- matches, match_labels = scripted_anchor_matcher(match_quality_matrix)
- self.assertTrue(torch.allclose(matches, expected_matches))
- self.assertTrue(torch.allclose(match_labels, expected_match_labels))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/ysharma/GPT-JT-copy/app.py b/spaces/ysharma/GPT-JT-copy/app.py
deleted file mode 100644
index 2a607994124b91686ab769c97b1b9ec03ae12ece..0000000000000000000000000000000000000000
--- a/spaces/ysharma/GPT-JT-copy/app.py
+++ /dev/null
@@ -1,278 +0,0 @@
-import streamlit as st
-import requests
-import time
-from ast import literal_eval
-from datetime import datetime
-
-def to_md(text):
-    # return text.replace("\n", "<br>")
    return text.replace("\n", "<br>")
-
-@st.cache
-def infer(
- prompt,
- model_name,
- max_new_tokens=10,
- temperature=0.1,
- top_p=1.0,
- top_k=40,
- num_completions=1,
- seed=42,
- stop="\n"
-):
- model_name_map = {
- "GPT-JT-6B-v1": "Together-gpt-JT-6B-v1",
- }
- max_new_tokens = int(max_new_tokens)
- num_completions = int(num_completions)
- temperature = float(temperature)
- top_p = float(top_p)
- stop = stop.split(";")
- seed = seed
-
- assert 1 <= max_new_tokens <= 256
- assert 1 <= num_completions <= 5
- assert 0.0 <= temperature <= 10.0
- assert 0.0 <= top_p <= 1.0
-
- if temperature == 0.0:
- temperature = 0.01
- if prompt=="":
- prompt = " "
- my_post_dict = {
- "model": "Together-gpt-JT-6B-v1",
- "prompt": prompt,
- "top_p": top_p,
- "top_k": top_k,
- "temperature": temperature,
- "max_tokens": max_new_tokens,
- "stop": stop,
- }
- print(f"send: {datetime.now()}")
- response = requests.get("https://staging.together.xyz/api/inference", params=my_post_dict).json()
- generated_text = response['output']['choices'][0]['text']
- print(f"recv: {datetime.now()}")
-
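-    # Truncate the completion at the first occurrence of any user-provided stop sequence.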
- for stop_word in stop:
- if stop_word != '' and stop_word in generated_text:
- generated_text = generated_text[:generated_text.find(stop_word)]
-
- return generated_text
-
-
-def set_preset():
- if st.session_state.preset == "Question Answering":
-
- st.session_state.prompt = '''
-Please answer the following question:
-
-Question: What is the capital of Canada?
-Answer: Ottawa
-
-Question: What is the currency of Switzerland?
-Answer: Swiss franc
-
-Question: In which country is Wisconsin located?
-Answer:
-'''.strip()
- st.session_state.temperature = "0.0"
- st.session_state.top_p = "1.0"
- st.session_state.max_new_tokens = "5"
- st.session_state.stop = r'\n'
-
- elif st.session_state.preset == "Sentiment Analysis":
-
- st.session_state.prompt = '''
-Label the tweets as either "positive", "negative", "mixed", or "neutral":
-
-Tweet: I can say that there isn't anything I would change.
-Label: positive
-
-Tweet: I'm not sure about this.
-Label: neutral
-
-Tweet: I liked some parts but I didn't like other parts.
-Label: mixed
-
-Tweet: I think the background image could have been better.
-Label: negative
-
-Tweet: I really like it.
-Label:
-'''.strip()
- st.session_state.temperature = "0.0"
- st.session_state.top_p = "1.0"
- st.session_state.max_new_tokens = "2"
- st.session_state.stop = r'\n'
-
- elif st.session_state.preset == "Topic Classification":
-
- st.session_state.prompt = '''
-Given a news article, classify its topic.
-Possible labels: 1. World 2. Sports 3. Business 4. Sci/Tech
-
-Article: A nearby star thought to harbor comets and asteroids now appears to be home to planets, too.
-Label: Sci/Tech
-
-Article: Soaring crude prices plus worries about the economy and the outlook for earnings are expected to hang over the stock market next week during the depth of the summer doldrums.
-Label: Business
-
-Article: Murtagh a stickler for success Northeastern field hockey coach Cheryl Murtagh doesn't want the glare of the spotlight that shines on her to detract from a team that has been the America East champion for the past three years and has been to the NCAA tournament 13 times.
-Label:
-'''.strip()
- st.session_state.temperature = "0.0"
- st.session_state.top_p = "1.0"
- st.session_state.max_new_tokens = "5"
- st.session_state.stop = r'\n'
-
- elif st.session_state.preset == "Paraphrasing":
-
- st.session_state.prompt = '''
-Paraphrase the given sentence into a different sentence.
-
-Input: Can you recommend some upscale restaurants in New York?
-Output: What upscale restaurants do you recommend in New York?
-
-Input: What are the famous places we should not miss in Paris?
-Output: Recommend some of the best places to visit in Paris?
-
-Input: Could you recommend some hotels that have cheap price in Zurich?
-Output:
-'''.strip()
- st.session_state.temperature = "0.8"
- st.session_state.top_p = "1.0"
- st.session_state.max_new_tokens = "20"
- st.session_state.stop = r'\n'
-
- elif st.session_state.preset == "Text Summarization":
-
- st.session_state.prompt = '''
-Given a review from Amazon's food products, the task is to generate a short summary of the given review in the input.
-
-Input: I have bought several of the Vitality canned dog food products and have found them all to be of good quality. The product looks more like a stew than a processed meat and it smells better. My Labrador is finicky and she appreciates this product better than most.
-Output: Good Quality Dog Food
-
-Input: Product arrived labeled as Jumbo Salted Peanuts...the peanuts were actually small sized unsalted. Not sure if this was an error or if the vendor intended to represent the product as 'Jumbo'.
-Output: Not as Advertised
-
-Input: My toddler loves this game to a point where he asks for it. That's a big thing for me. Secondly, no glitching unlike one of their competitors (PlayShifu). Any tech I don’t have to reach out to support for help is a good tech for me. I even enjoy some of the games and activities in this. Overall, this is a product that shows that the developers took their time and made sure people would not be asking for refund. I’ve become bias regarding this product and honestly I look forward to buying more of this company’s stuff. Please keep up the great work.
-Output:
-'''.strip()
- st.session_state.temperature = "0.0"
- st.session_state.top_p = "1.0"
- st.session_state.max_new_tokens = "10"
- st.session_state.stop = r'\n'
-
- elif st.session_state.preset == "Word Sense Disambiguation":
-
- st.session_state.prompt = '''
-Identify which sense of a word is meant in a given context.
-
-Context: The river overflowed the bank.
-Word: bank
-Sense: river bank
-
-Context: A mouse takes much more room than a trackball.
-Word: mouse
-Sense: computer mouse
-
-Context: The bank will not be accepting cash on Saturdays.
-Word: bank
-Sense: commercial (finance) banks
-
-Context: Bill killed the project
-Word: kill
-Sense:
-'''.strip()
- st.session_state.temperature = "0.0"
- st.session_state.top_p = "1.0"
- st.session_state.max_new_tokens = "10"
- st.session_state.stop = r'\n'
-
- elif st.session_state.preset == "Natural Language Inference":
-
- st.session_state.prompt = '''
-Given a pair of sentences, choose whether the two sentences agree (entailment)/disagree (contradiction) with each other.
-Possible labels: 1. entailment 2. contradiction
-
-Sentence 1: The skier was on the edge of the ramp. Sentence 2: The skier was dressed in winter clothes.
-Label: entailment
-
-Sentence 1: The boy skated down the staircase railing. Sentence 2: The boy is a newbie skater.
-Label: contradiction
-
-Sentence 1: Two middle-aged people stand by a golf hole. Sentence 2: A couple riding in a golf cart.
-Label:
-'''.strip()
- st.session_state.temperature = "0.0"
- st.session_state.top_p = "1.0"
- st.session_state.max_new_tokens = "2"
- st.session_state.stop = r'\n'
-
- else:
- pass
-
-
-def main():
-
- if 'preset' not in st.session_state:
- st.session_state.preset = "Sentiment Analysis"
- st.session_state.top_k = "40"
- st.session_state.stop = r'\n'
- set_preset()
-
- st.title("GPT-JT")
-
- col1, col2 = st.columns([1, 2])
-
- with col1:
- model_name = st.selectbox("Model", ["GPT-JT-6B-v1"])
-
- with col2:
- preset = st.selectbox(
- label="Examples",
- options=('Question Answering', 'Sentiment Analysis',
- "Topic Classification", "Paraphrasing", "Text Summarization",
- "Word Sense Disambiguation", "Natural Language Inference"),
- on_change=set_preset,
- key="preset",
- )
-
- col3, col4 = st.columns([1, 5])
-
- with col3:
- max_new_tokens = st.text_input('Max new tokens', st.session_state.max_new_tokens)
- temperature = st.text_input('temperature', st.session_state.temperature)
- top_k = st.text_input('top_k', st.session_state.top_k)
- top_p = st.text_input('top_p', st.session_state.top_p)
-        # num_completions = st.text_input('num_completions (only the best one will be returned)', "1")
- num_completions = "1"
- stop = st.text_input('stop, split by;', st.session_state.stop)
- # seed = st.text_input('seed', "42")
- seed = "42"
-
- with col4:
-
- prompt_area = st.empty()
- prompt = prompt_area.text_area(
- "Prompt",
- value=st.session_state.prompt,
- max_chars=4096,
- height=500,
- )
-
- generated_area = st.empty()
- generated_area.markdown("(Generate here)")
-
- button_submit = st.button("Submit")
-
- if button_submit:
- generated_area.markdown("" + to_md(prompt) + "", unsafe_allow_html=True)
- report_text = infer(
- prompt, model_name=model_name, max_new_tokens=max_new_tokens, temperature=temperature, top_p=top_p, top_k=top_k,
- num_completions=num_completions, seed=seed, stop=literal_eval("'''"+stop+"'''"),
- )
- generated_area.markdown("" + to_md(prompt) + "" + to_md(report_text)+"", unsafe_allow_html=True)
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/ysharma/llamas/theme_dropdown.py b/spaces/ysharma/llamas/theme_dropdown.py
deleted file mode 100644
index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000
--- a/spaces/ysharma/llamas/theme_dropdown.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import pathlib
-
-from gradio.themes.utils import ThemeAsset
-
-
-def create_theme_dropdown():
- import gradio as gr
-
- asset_path = pathlib.Path(__file__).parent / "themes"
- themes = []
- for theme_asset in os.listdir(str(asset_path)):
- themes.append(
- (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset)))
- )
-
- def make_else_if(theme_asset):
- return f"""
- else if (theme == '{str(theme_asset[0].version)}') {{
- var theme_css = `{theme_asset[1]._get_theme_css()}`
- }}"""
-
- head, tail = themes[0], themes[1:]
- if_statement = f"""
- if (theme == "{str(head[0].version)}") {{
- var theme_css = `{head[1]._get_theme_css()}`
- }} {" ".join(make_else_if(t) for t in tail)}
- """
-
- latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[
- ::-1
- ]
- latest_to_oldest = [str(t.version) for t in latest_to_oldest]
-
- component = gr.Dropdown(
- choices=latest_to_oldest,
- value=latest_to_oldest[0],
- render=False,
- label="Select Version",
- ).style(container=False)
-
- return (
- component,
- f"""
- (theme) => {{
- if (!document.querySelector('.theme-css')) {{
- var theme_elem = document.createElement('style');
- theme_elem.classList.add('theme-css');
- document.head.appendChild(theme_elem);
- }} else {{
- var theme_elem = document.querySelector('.theme-css');
- }}
- {if_statement}
- theme_elem.innerHTML = theme_css;
- }}
- """,
- )
diff --git a/spaces/yuan2023/Stable-Diffusion-ControlNet-WebUI/README.md b/spaces/yuan2023/Stable-Diffusion-ControlNet-WebUI/README.md
deleted file mode 100644
index 52bbd00f5d8da6c296dc9d0f40b78eb8b729e7fa..0000000000000000000000000000000000000000
--- a/spaces/yuan2023/Stable-Diffusion-ControlNet-WebUI/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Stable Diffusion ControlNet WebUI
-emoji: 🔥
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.19
-app_file: app.py
-pinned: true
-license: openrail
-tags:
-- making-demos
-duplicated_from: ArtGAN/Stable-Diffusion-ControlNet-WebUI
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/yuezih/BLIP-SMILE/SMILE/BLIP/__init__.py b/spaces/yuezih/BLIP-SMILE/SMILE/BLIP/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/zhang-wei-jian/docker/node_modules/nopt/examples/my-program.js b/spaces/zhang-wei-jian/docker/node_modules/nopt/examples/my-program.js
deleted file mode 100644
index 142447e18e756c77f39d98fc577b7a24c73f11ce..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/nopt/examples/my-program.js
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/usr/bin/env node
-
-//process.env.DEBUG_NOPT = 1
-
-// my-program.js
-var nopt = require("../lib/nopt")
- , Stream = require("stream").Stream
- , path = require("path")
- , knownOpts = { "foo" : [String, null]
- , "bar" : [Stream, Number]
- , "baz" : path
- , "bloo" : [ "big", "medium", "small" ]
- , "flag" : Boolean
- , "pick" : Boolean
- }
- , shortHands = { "foofoo" : ["--foo", "Mr. Foo"]
- , "b7" : ["--bar", "7"]
- , "m" : ["--bloo", "medium"]
- , "p" : ["--pick"]
- , "f" : ["--flag", "true"]
- , "g" : ["--flag"]
- , "s" : "--flag"
- }
- // everything is optional.
- // knownOpts and shorthands default to {}
- // arg list defaults to process.argv
- // slice defaults to 2
- , parsed = nopt(knownOpts, shortHands, process.argv, 2)
-
-console.log("parsed =\n"+ require("util").inspect(parsed))
diff --git a/spaces/zhang-wei-jian/docker/node_modules/only/Makefile b/spaces/zhang-wei-jian/docker/node_modules/only/Makefile
deleted file mode 100644
index 4e9c8d36ebcd2f63843cf5a534e4db3459b2d637..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/only/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-
-test:
- @./node_modules/.bin/mocha \
- --require should \
- --reporter spec
-
-.PHONY: test
\ No newline at end of file
diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/internal/debug.js b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/internal/debug.js
deleted file mode 100644
index 1c00e1369aa2a00122407dc3887a45a29be3210e..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/internal/debug.js
+++ /dev/null
@@ -1,9 +0,0 @@
-const debug = (
- typeof process === 'object' &&
- process.env &&
- process.env.NODE_DEBUG &&
- /\bsemver\b/i.test(process.env.NODE_DEBUG)
-) ? (...args) => console.error('SEMVER', ...args)
- : () => {}
-
-module.exports = debug
diff --git a/spaces/zhangyd/bingo/src/lib/bots/bing/types.ts b/spaces/zhangyd/bingo/src/lib/bots/bing/types.ts
deleted file mode 100644
index a3697d12b95a43b3f819c7b4ad743bc08a9eb2c2..0000000000000000000000000000000000000000
--- a/spaces/zhangyd/bingo/src/lib/bots/bing/types.ts
+++ /dev/null
@@ -1,257 +0,0 @@
-export type Author = 'user' | 'system' | 'bot'
-
-export type BotId = 'bing'
-
-export enum BingConversationStyle {
- Creative = 'Creative',
- Balanced = 'Balanced',
- Precise = 'Precise'
-}
-
-export enum ErrorCode {
- CONVERSATION_LIMIT = 'CONVERSATION_LIMIT',
- BING_UNAUTHORIZED = 'BING_UNAUTHORIZED',
- BING_FORBIDDEN = 'BING_FORBIDDEN',
- BING_CAPTCHA = 'BING_CAPTCHA',
- THROTTLE_LIMIT = 'THROTTLE_LIMIT',
- NOTFOUND_ERROR = 'NOT_FOUND_ERROR',
- UNKOWN_ERROR = 'UNKOWN_ERROR',
- NETWORK_ERROR = 'NETWORK_ERROR',
-}
-
-export class ChatError extends Error {
- code: ErrorCode
- constructor(message: string, code: ErrorCode) {
- super(message)
- this.code = code
- }
-}
-
-export type ChatMessageModel = {
- id: string
- author: Author
- text: string
- error?: ChatError
- throttling?: Throttling
- sourceAttributions?: SourceAttribution[]
- suggestedResponses?: SuggestedResponse[]
-}
-
-export interface ConversationModel {
- messages: ChatMessageModel[]
-}
-
-export type Event =
- | {
- type: 'UPDATE_ANSWER'
- data: {
- text: string
- spokenText?: string
- sourceAttributions?: SourceAttribution[]
- suggestedResponses?: SuggestedResponse[]
- throttling?: Throttling
- }
- }
- | {
- type: 'DONE'
- }
- | {
- type: 'ERROR'
- error: ChatError
- }
-
-export interface SendMessageParams<T> {
- prompt: string
- imageUrl?: string
- options: T
- onEvent: (event: Event) => void
- signal?: AbortSignal
-}
-
-export interface ConversationResponse {
- conversationId: string
- clientId: string
- conversationSignature: string
- result: {
- value: string
- message?: string
- }
-}
-
-export interface Telemetry {
- metrics?: null
- startTime: string
-}
-
-export interface ChatUpdateArgument {
- messages?: ChatResponseMessage[]
- throttling?: Throttling
- requestId: string
- result: null
-}
-
-export type ChatUpdateCompleteResponse = {
- type: 2
- invocationId: string
- item: ChatResponseItem
-} | {
- type: 1
- target: string
- arguments: ChatUpdateArgument[]
-} | {
- type: 3
- invocationId: string
-}
-
-export interface ChatRequestResult {
- value: string
- serviceVersion: string
- error?: string
-}
-
-export interface ChatResponseItem {
- messages: ChatResponseMessage[]
- firstNewMessageIndex: number
- suggestedResponses: null
- conversationId: string
- requestId: string
- conversationExpiryTime: string
- telemetry: Telemetry
- result: ChatRequestResult
- throttling: Throttling
-}
-export enum InvocationEventType {
- Invocation = 1,
- StreamItem = 2,
- Completion = 3,
- StreamInvocation = 4,
- CancelInvocation = 5,
- Ping = 6,
- Close = 7,
-}
-
-// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts
-
-export interface ConversationInfo {
- conversationId: string
- clientId: string
- conversationSignature: string
- invocationId: number
- conversationStyle: BingConversationStyle
- prompt: string
- imageUrl?: string
-}
-
-export interface BingChatResponse {
- conversationSignature: string
- conversationId: string
- clientId: string
- invocationId: number
- conversationExpiryTime: Date
- response: string
- details: ChatResponseMessage
-}
-
-export interface Throttling {
- maxNumLongDocSummaryUserMessagesInConversation: number
- maxNumUserMessagesInConversation: number
- numLongDocSummaryUserMessagesInConversation: number
- numUserMessagesInConversation: number
-}
-
-export interface ChatResponseMessage {
- text: string
- spokenText?: string
- author: string
- createdAt: Date
- timestamp: Date
- messageId: string
- requestId: string
- offense: string
- adaptiveCards: AdaptiveCard[]
- sourceAttributions: SourceAttribution[]
- feedback: Feedback
- contentOrigin: string
- messageType?: string
- contentType?: string
- privacy: null
- suggestedResponses: SuggestedResponse[]
-}
-
-export interface AdaptiveCard {
- type: string
- version: string
- body: Body[]
-}
-
-export interface Body {
- type: string
- text: string
- wrap: boolean
- size?: string
-}
-
-export interface Feedback {
- tag: null
- updatedOn: null
- type: string
-}
-
-export interface SourceAttribution {
- providerDisplayName: string
- seeMoreUrl: string
- searchQuery: string
-}
-
-export interface SuggestedResponse {
- text: string
- author?: Author
- createdAt?: Date
- timestamp?: Date
- messageId?: string
- messageType?: string
- offense?: string
- feedback?: Feedback
- contentOrigin?: string
- privacy?: null
-}
-
-export interface KBlobRequest {
- knowledgeRequest: KnowledgeRequestContext
- imageBase64?: string
-}
-
-export interface KBlobResponse {
- blobId: string
- processedBlobId?: string
-}
-
-export interface KnowledgeRequestContext {
- imageInfo: ImageInfo;
- knowledgeRequest: KnowledgeRequest;
-}
-
-export interface ImageInfo {
- url?: string;
-}
-
-export interface KnowledgeRequest {
- invokedSkills: string[];
- subscriptionId: string;
- invokedSkillsRequestData: InvokedSkillsRequestData;
- convoData: ConvoData;
-}
-
-export interface ConvoData {
- convoid: string;
- convotone: BingConversationStyle;
-}
-
-export interface InvokedSkillsRequestData {
- enableFaceBlur: boolean;
-}
-
-export interface FileItem {
- url: string;
- status?: 'loading' | 'error' | 'loaded'
-}
diff --git a/spaces/zhangyd/bingo/tailwind.config.js b/spaces/zhangyd/bingo/tailwind.config.js
deleted file mode 100644
index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000
--- a/spaces/zhangyd/bingo/tailwind.config.js
+++ /dev/null
@@ -1,48 +0,0 @@
-/** @type {import('tailwindcss').Config} */
-module.exports = {
- content: [
- './src/pages/**/*.{js,ts,jsx,tsx,mdx}',
- './src/components/**/*.{js,ts,jsx,tsx,mdx}',
- './src/app/**/*.{js,ts,jsx,tsx,mdx}',
- './src/ui/**/*.{js,ts,jsx,tsx,mdx}',
- ],
- "darkMode": "class",
- theme: {
- extend: {
- colors: {
-        'primary-blue': 'rgb(var(--color-primary-blue) / <alpha-value>)',
        secondary: 'rgb(var(--color-secondary) / <alpha-value>)',
        'primary-background': 'rgb(var(--primary-background) / <alpha-value>)',
        'primary-text': 'rgb(var(--primary-text) / <alpha-value>)',
        'secondary-text': 'rgb(var(--secondary-text) / <alpha-value>)',
        'light-text': 'rgb(var(--light-text) / <alpha-value>)',
        'primary-border': 'rgb(var(--primary-border) / <alpha-value>)',
- },
- keyframes: {
- slideDownAndFade: {
- from: { opacity: 0, transform: 'translateY(-2px)' },
- to: { opacity: 1, transform: 'translateY(0)' },
- },
- slideLeftAndFade: {
- from: { opacity: 0, transform: 'translateX(2px)' },
- to: { opacity: 1, transform: 'translateX(0)' },
- },
- slideUpAndFade: {
- from: { opacity: 0, transform: 'translateY(2px)' },
- to: { opacity: 1, transform: 'translateY(0)' },
- },
- slideRightAndFade: {
- from: { opacity: 0, transform: 'translateX(2px)' },
- to: { opacity: 1, transform: 'translateX(0)' },
- },
- },
- animation: {
- slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)',
- slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)',
- slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)',
- slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)',
- },
- },
- },
- plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')],
-}
diff --git a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/lib/hooks/use-copy-to-clipboard.tsx
deleted file mode 100644
index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000
--- a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/lib/hooks/use-copy-to-clipboard.tsx
+++ /dev/null
@@ -1,33 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-export interface useCopyToClipboardProps {
- timeout?: number
-}
-
-export function useCopyToClipboard({
- timeout = 2000
-}: useCopyToClipboardProps) {
- const [isCopied, setIsCopied] = React.useState(false)
-
- const copyToClipboard = (value: string) => {
- if (typeof window === 'undefined' || !navigator.clipboard?.writeText) {
- return
- }
-
- if (!value) {
- return
- }
-
- navigator.clipboard.writeText(value).then(() => {
- setIsCopied(true)
-
- setTimeout(() => {
- setIsCopied(false)
- }, timeout)
- })
- }
-
- return { isCopied, copyToClipboard }
-}
diff --git a/spaces/znskiss/Qwen-VL/eval_mm/vqa_eval.py b/spaces/znskiss/Qwen-VL/eval_mm/vqa_eval.py
deleted file mode 100644
index 1329ae13cd7f3857a839c95462118738e61b0d6d..0000000000000000000000000000000000000000
--- a/spaces/znskiss/Qwen-VL/eval_mm/vqa_eval.py
+++ /dev/null
@@ -1,330 +0,0 @@
-"""Copyright (c) 2022, salesforce.com, inc.
-
-All rights reserved.
-SPDX-License-Identifier: BSD-3-Clause
-For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-# coding=utf-8
-
-__author__ = 'aagrawal'
-
-import re
-# This code is based on the code written by Tsung-Yi Lin for MSCOCO Python API available at the following link:
-# (https://github.com/tylin/coco-caption/blob/master/pycocoevalcap/eval.py).
-import sys
-
-
-class VQAEval:
-
- def __init__(self, vqa=None, vqaRes=None, n=2):
- self.n = n
- self.accuracy = {}
- self.evalQA = {}
- self.evalQuesType = {}
- self.evalAnsType = {}
- self.vqa = vqa
- self.vqaRes = vqaRes
- if vqa is not None:
- self.params = {'question_id': vqa.getQuesIds()}
- self.contractions = {
- 'aint': "ain't",
- 'arent': "aren't",
- 'cant': "can't",
- 'couldve': "could've",
- 'couldnt': "couldn't",
- "couldn'tve": "couldn't've",
- "couldnt've": "couldn't've",
- 'didnt': "didn't",
- 'doesnt': "doesn't",
- 'dont': "don't",
- 'hadnt': "hadn't",
- "hadnt've": "hadn't've",
- "hadn'tve": "hadn't've",
- 'hasnt': "hasn't",
- 'havent': "haven't",
- 'hed': "he'd",
- "hed've": "he'd've",
- "he'dve": "he'd've",
- 'hes': "he's",
- 'howd': "how'd",
- 'howll': "how'll",
- 'hows': "how's",
- "Id've": "I'd've",
- "I'dve": "I'd've",
- 'Im': "I'm",
- 'Ive': "I've",
- 'isnt': "isn't",
- 'itd': "it'd",
- "itd've": "it'd've",
- "it'dve": "it'd've",
- 'itll': "it'll",
- "let's": "let's",
- 'maam': "ma'am",
- 'mightnt': "mightn't",
- "mightnt've": "mightn't've",
- "mightn'tve": "mightn't've",
- 'mightve': "might've",
- 'mustnt': "mustn't",
- 'mustve': "must've",
- 'neednt': "needn't",
- 'notve': "not've",
- 'oclock': "o'clock",
- 'oughtnt': "oughtn't",
- "ow's'at": "'ow's'at",
- "'ows'at": "'ow's'at",
- "'ow'sat": "'ow's'at",
- 'shant': "shan't",
- "shed've": "she'd've",
- "she'dve": "she'd've",
- "she's": "she's",
- 'shouldve': "should've",
- 'shouldnt': "shouldn't",
- "shouldnt've": "shouldn't've",
- "shouldn'tve": "shouldn't've",
- "somebody'd": 'somebodyd',
- "somebodyd've": "somebody'd've",
- "somebody'dve": "somebody'd've",
- 'somebodyll': "somebody'll",
- 'somebodys': "somebody's",
- 'someoned': "someone'd",
- "someoned've": "someone'd've",
- "someone'dve": "someone'd've",
- 'someonell': "someone'll",
- 'someones': "someone's",
- 'somethingd': "something'd",
- "somethingd've": "something'd've",
- "something'dve": "something'd've",
- 'somethingll': "something'll",
- 'thats': "that's",
- 'thered': "there'd",
- "thered've": "there'd've",
- "there'dve": "there'd've",
- 'therere': "there're",
- 'theres': "there's",
- 'theyd': "they'd",
- "theyd've": "they'd've",
- "they'dve": "they'd've",
- 'theyll': "they'll",
- 'theyre': "they're",
- 'theyve': "they've",
- 'twas': "'twas",
- 'wasnt': "wasn't",
- "wed've": "we'd've",
- "we'dve": "we'd've",
- 'weve': "we've",
- 'werent': "weren't",
- 'whatll': "what'll",
- 'whatre': "what're",
- 'whats': "what's",
- 'whatve': "what've",
- 'whens': "when's",
- 'whered': "where'd",
- 'wheres': "where's",
- 'whereve': "where've",
- 'whod': "who'd",
- "whod've": "who'd've",
- "who'dve": "who'd've",
- 'wholl': "who'll",
- 'whos': "who's",
- 'whove': "who've",
- 'whyll': "why'll",
- 'whyre': "why're",
- 'whys': "why's",
- 'wont': "won't",
- 'wouldve': "would've",
- 'wouldnt': "wouldn't",
- "wouldnt've": "wouldn't've",
- "wouldn'tve": "wouldn't've",
- 'yall': "y'all",
- "yall'll": "y'all'll",
- "y'allll": "y'all'll",
- "yall'd've": "y'all'd've",
- "y'alld've": "y'all'd've",
- "y'all'dve": "y'all'd've",
- 'youd': "you'd",
- "youd've": "you'd've",
- "you'dve": "you'd've",
- 'youll': "you'll",
- 'youre': "you're",
- 'youve': "you've",
- }
- self.manualMap = {
- 'none': '0',
- 'zero': '0',
- 'one': '1',
- 'two': '2',
- 'three': '3',
- 'four': '4',
- 'five': '5',
- 'six': '6',
- 'seven': '7',
- 'eight': '8',
- 'nine': '9',
- 'ten': '10',
- }
- self.articles = ['a', 'an', 'the']
-
- self.periodStrip = re.compile('(?!<=\d)(\.)(?!\d)')
- self.commaStrip = re.compile('(\d)(,)(\d)')
- self.punct = [
- ';',
- r'/',
- '[',
- ']',
- '"',
- '{',
- '}',
- '(',
- ')',
- '=',
- '+',
- '\\',
- '_',
- '-',
- '>',
- '<',
- '@',
- '`',
- ',',
- '?',
- '!',
- ]
-
- def evaluate(self, quesIds=None):
- if quesIds == None:
- quesIds = [quesId for quesId in self.params['question_id']]
- gts = {}
- res = {}
- for quesId in quesIds:
- gts[quesId] = self.vqa.qa[quesId]
- res[quesId] = self.vqaRes.qa[quesId]
-
- # =================================================
- # Compute accuracy
- # =================================================
- accQA = []
- accQuesType = {}
- accAnsType = {}
- print('computing accuracy')
- step = 0
- for quesId in quesIds:
- resAns = res[quesId]['answer']
- resAns = resAns.replace('\n', ' ')
- resAns = resAns.replace('\t', ' ')
- resAns = resAns.strip()
- resAns = self.processPunctuation(resAns)
- resAns = self.processDigitArticle(resAns)
- gtAcc = []
- gtAnswers = [ans['answer'] for ans in gts[quesId]['answers']]
- if len(set(gtAnswers)) > 1:
- for ansDic in gts[quesId]['answers']:
- ansDic['answer'] = self.processPunctuation(
- ansDic['answer'])
- for gtAnsDatum in gts[quesId]['answers']:
- otherGTAns = [
- item for item in gts[quesId]['answers']
- if item != gtAnsDatum
- ]
- matchingAns = [
- item for item in otherGTAns if item['answer'] == resAns
- ]
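-                # Standard VQA accuracy: an answer is credited in proportion to how many of the
-                # other human annotators gave it, capped at 1 (min(#matching answers / 3, 1)).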
- acc = min(1, float(len(matchingAns)) / 3)
- gtAcc.append(acc)
- quesType = gts[quesId]['question_type']
- ansType = gts[quesId]['answer_type']
- avgGTAcc = float(sum(gtAcc)) / len(gtAcc)
- accQA.append(avgGTAcc)
- if quesType not in accQuesType:
- accQuesType[quesType] = []
- accQuesType[quesType].append(avgGTAcc)
- if ansType not in accAnsType:
- accAnsType[ansType] = []
- accAnsType[ansType].append(avgGTAcc)
- self.setEvalQA(quesId, avgGTAcc)
- self.setEvalQuesType(quesId, quesType, avgGTAcc)
- self.setEvalAnsType(quesId, ansType, avgGTAcc)
- if step % 100 == 0:
- self.updateProgress(step / float(len(quesIds)))
- step = step + 1
-
- self.setAccuracy(accQA, accQuesType, accAnsType)
- print('Done computing accuracy')
-
- def processPunctuation(self, inText):
- outText = inText
- for p in self.punct:
- if (p + ' ' in inText or ' ' + p
- in inText) or (re.search(self.commaStrip, inText) != None):
- outText = outText.replace(p, '')
- else:
- outText = outText.replace(p, ' ')
- outText = self.periodStrip.sub('', outText, re.UNICODE)
- return outText
-
- def processDigitArticle(self, inText):
- outText = []
- tempText = inText.lower().split()
- for word in tempText:
- word = self.manualMap.setdefault(word, word)
- if word not in self.articles:
- outText.append(word)
- else:
- pass
- for wordId, word in enumerate(outText):
- if word in self.contractions:
- outText[wordId] = self.contractions[word]
- outText = ' '.join(outText)
- return outText
-
- def setAccuracy(self, accQA, accQuesType, accAnsType):
- self.accuracy['overall'] = round(100 * float(sum(accQA)) / len(accQA),
- self.n)
- self.accuracy['perQuestionType'] = {
- quesType: round(
- 100 * float(sum(accQuesType[quesType])) /
- len(accQuesType[quesType]),
- self.n,
- )
- for quesType in accQuesType
- }
- self.accuracy['perAnswerType'] = {
- ansType: round(
- 100 * float(sum(accAnsType[ansType])) /
- len(accAnsType[ansType]), self.n)
- for ansType in accAnsType
- }
-
- def setEvalQA(self, quesId, acc):
- self.evalQA[quesId] = round(100 * acc, self.n)
-
- def setEvalQuesType(self, quesId, quesType, acc):
- if quesType not in self.evalQuesType:
- self.evalQuesType[quesType] = {}
- self.evalQuesType[quesType][quesId] = round(100 * acc, self.n)
-
- def setEvalAnsType(self, quesId, ansType, acc):
- if ansType not in self.evalAnsType:
- self.evalAnsType[ansType] = {}
- self.evalAnsType[ansType][quesId] = round(100 * acc, self.n)
-
- def updateProgress(self, progress):
- barLength = 20
- status = ''
- if isinstance(progress, int):
- progress = float(progress)
- if not isinstance(progress, float):
- progress = 0
- status = 'error: progress var must be float\r\n'
- if progress < 0:
- progress = 0
- status = 'Halt...\r\n'
- if progress >= 1:
- progress = 1
- status = 'Done...\r\n'
- block = int(round(barLength * progress))
-        text = '\rFinished Percent: [{0}] {1}% {2}'.format(
- '#' * block + '-' * (barLength - block), int(progress * 100),
- status)
- sys.stdout.write(text)
- sys.stdout.flush()
diff --git a/spaces/zomehwh/sovits-rudolf/onnxexport/model_onnx.py b/spaces/zomehwh/sovits-rudolf/onnxexport/model_onnx.py
deleted file mode 100644
index e28bae95ec1e53aa05d06fc784ff86d55f228d60..0000000000000000000000000000000000000000
--- a/spaces/zomehwh/sovits-rudolf/onnxexport/model_onnx.py
+++ /dev/null
@@ -1,335 +0,0 @@
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import modules.attentions as attentions
-import modules.commons as commons
-import modules.modules as modules
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-
-import utils
-from modules.commons import init_weights, get_padding
-from vdecoder.hifigan.models import Generator
-from utils import f0_to_coarse
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
- gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class Encoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- # print(x.shape,x_lengths.shape)
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- out_channels,
- hidden_channels,
- kernel_size,
- n_layers,
- gin_channels=0,
- filter_channels=None,
- n_heads=None,
- p_dropout=None):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
- self.f0_emb = nn.Embedding(256, hidden_channels)
-
- self.enc_ = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
-
- def forward(self, x, x_mask, f0=None, z=None):
- x = x + self.f0_emb(f0).transpose(1, 2)
- x = self.enc_(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + z * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class F0Decoder(nn.Module):
- def __init__(self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- spk_channels=0):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.spk_channels = spk_channels
-
- self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1)
- self.decoder = attentions.FFT(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.f0_prenet = nn.Conv1d(1, hidden_channels, 3, padding=1)
- self.cond = nn.Conv1d(spk_channels, hidden_channels, 1)
-
- def forward(self, x, norm_f0, x_mask, spk_emb=None):
- x = torch.detach(x)
- if spk_emb is not None:
- x = x + self.cond(spk_emb)
- x += self.f0_prenet(norm_f0)
- x = self.prenet(x) * x_mask
- x = self.decoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- ssl_dim,
- n_speakers,
- sampling_rate=44100,
- **kwargs):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- self.ssl_dim = ssl_dim
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2)
-
- self.enc_p = TextEncoder(
- inter_channels,
- hidden_channels,
- filter_channels=filter_channels,
- n_heads=n_heads,
- n_layers=n_layers,
- kernel_size=kernel_size,
- p_dropout=p_dropout
- )
- hps = {
- "sampling_rate": sampling_rate,
- "inter_channels": inter_channels,
- "resblock": resblock,
- "resblock_kernel_sizes": resblock_kernel_sizes,
- "resblock_dilation_sizes": resblock_dilation_sizes,
- "upsample_rates": upsample_rates,
- "upsample_initial_channel": upsample_initial_channel,
- "upsample_kernel_sizes": upsample_kernel_sizes,
- "gin_channels": gin_channels,
- }
- self.dec = Generator(h=hps)
- self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
- self.f0_decoder = F0Decoder(
- 1,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- spk_channels=gin_channels
- )
- self.emb_uv = nn.Embedding(2, hidden_channels)
- self.predict_f0 = False
-
- def forward(self, c, f0, mel2ph, uv, noise=None, g=None):
-
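-        # F.pad prepends a zero feature row, so a mel2ph index of 0 gathers an all-zero
-        # (padding) vector; the gather then expands the content features c to one vector
-        # per output frame, as indexed by mel2ph.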
- decoder_inp = F.pad(c, [0, 0, 1, 0])
- mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, c.shape[-1]])
- c = torch.gather(decoder_inp, 1, mel2ph_).transpose(1, 2) # [B, T, H]
-
- c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device)
- g = g.unsqueeze(0)
- g = self.emb_g(g).transpose(1, 2)
- x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype)
- x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2)
-
- if self.predict_f0:
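-            # Mel-scale log-F0: mel(f) = 2595 * log10(1 + f / 700); dividing by 500 keeps the
-            # normalized value small, and the inverse transform below maps the prediction back to Hz.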
- lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500
- norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False)
- pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g)
- f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1)
-
- z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), z=noise)
- z = self.flow(z_p, c_mask, g=g, reverse=True)
- o = self.dec(z * c_mask, g=g, f0=f0)
- return o