diff --git a/spaces/1gistliPinn/ChatGPT4/Crack-VERIFIED-DriverEasy-432-No-Speed-Limit-BETTER.md b/spaces/1gistliPinn/ChatGPT4/Crack-VERIFIED-DriverEasy-432-No-Speed-Limit-BETTER.md
deleted file mode 100644
index 24131a11020b3b610800f02734c28e3784c0cd89..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Crack-VERIFIED-DriverEasy-432-No-Speed-Limit-BETTER.md
+++ /dev/null
@@ -1,113 +0,0 @@
-## Crack DriverEasy 432 No Speed Limit !!BETTER!!
-
-**Click Here ===> [https://www.google.com/url?q=https%3A%2F%2Ffancli.com%2F2twsJL&sa=D&sntz=1&usg=AOvVaw0EjWpAaO53PNuu7wLr00Fn](https://www.google.com/url?q=https%3A%2F%2Ffancli.com%2F2twsJL&sa=D&sntz=1&usg=AOvVaw0EjWpAaO53PNuu7wLr00Fn)**
-
-# How to Crack DriverEasy 432 and Remove the Speed Limit
-
-DriverEasy is a popular program that helps you find and update drivers for your computer. However, the free version of DriverEasy limits downloads to 30 KB/s, which can be very frustrating if you have many drivers to download. In this article, I will show you how to crack DriverEasy 432 and remove the speed limit so you can enjoy faster, smoother driver downloads.
-
-Disclaimer: This article is for educational purposes only. I do not condone or encourage any illegal or unethical use of DriverEasy or any other software. You are solely responsible for any consequences of following this tutorial.
-
-## Step 1: Download DriverEasy 432 and the Crack File
-
-The first step is to download DriverEasy 432 from the official website. You can choose either the free version or the trial version; it doesn't matter. After downloading, install DriverEasy on your computer.
-
-Next, you need to download the crack file for DriverEasy 432. You can find it on various websites that offer cracked software, such as HaxPC or MediaLabs. Be careful when downloading from these sites, as they may contain malware or viruses. Scan the crack file with your antivirus before using it.
-
-## Step 2: Copy and Paste the Crack File
-
-The second step is to copy the crack file into the installation folder of DriverEasy, usually located at C:\Program Files\Easeware\DriverEasy. If you installed DriverEasy in a different location, you need to find it yourself.
-
-After locating the installation folder, open it and look for a file named DriverEasy.exe. This is the main executable of DriverEasy. Right-click it, select Rename, and change its name to something else, such as DriverEasy.bak. This prevents DriverEasy from running normally.
-
-Then copy the crack file you downloaded earlier into the installation folder and rename it to DriverEasy.exe. This replaces the original executable with the cracked one.
-
-## Step 3: Run DriverEasy and Enjoy
-
-The final step is to run DriverEasy and enjoy its full features without any speed limit. Double-click the crack file that you renamed to DriverEasy.exe. You should see a message saying "Driver Easy Pro Activated" at the bottom right corner of the window.
-
-Now you can scan your computer for missing or outdated drivers and download them at full speed. You can also access other advanced features of DriverEasy Pro, such as driver backup and restore, offline scan, and driver uninstall.
-
-Congratulations! You have successfully cracked DriverEasy 432 and removed the speed limit. Keep in mind that this method may not work for future versions of DriverEasy, and it may also violate DriverEasy's terms of service. Use it at your own risk.
-
-## Why Use DriverEasy?
-
-DriverEasy is a useful program that can help you keep your drivers up to date and improve your computer's performance. Drivers are essential components that let your hardware devices communicate with your operating system. Without proper drivers, your devices may not work correctly or may cause errors and crashes.
-
-However, finding and installing drivers manually is a tedious and time-consuming task. You need to know the exact model and version of each device, search for compatible drivers on the manufacturer's website, download them one by one, and install them on your computer. Moreover, you need to check for driver updates regularly to keep your drivers current and stable.
-
-That's where DriverEasy comes in handy. DriverEasy scans your computer and detects every device that needs a driver, then downloads and installs the correct drivers for you with one click. You don't need to worry about compatibility issues or downloading the wrong drivers. DriverEasy also has a database of over 8 million drivers, so it can find almost any driver you need.
-
-## What are the Benefits of DriverEasy Pro?
-
-DriverEasy comes in two versions: Free and Pro. The free version lets you scan and download drivers at a limited speed of 30 KB/s. The Pro version unlocks all features and removes the speed limit. You can get the Pro version by purchasing a license key or by cracking it as shown in this article.
-
-Some of the benefits of DriverEasy Pro are:
-
-- Faster and unlimited driver downloads: you can download drivers at full speed without any restrictions.
-- One-click update: you can update all your drivers with a single click, saving time and hassle.
-- Backup and restore drivers: you can back up your drivers before updating them, so you can restore them if anything goes wrong.
-- Offline scan: you can scan your computer for drivers without an internet connection, which is useful if you have network problems.
-- Uninstall drivers: you can remove drivers that you no longer need or that cause issues on your computer.
-- Technical support: you can get professional, friendly support from the DriverEasy team if you have any questions or problems.
-
-These are some of the reasons why you may want DriverEasy Pro instead of the free version. However, remember that cracking DriverEasy Pro is illegal and unethical, and it may also expose you to security risks. If you like DriverEasy and want to support its development, you should buy a license key from the official website instead of cracking it.
-
- 1b8d091108
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Eagle CAD 6.4.0 Torrent The Best Choice for Professional and Hobbyist PCB Designers.md b/spaces/1gistliPinn/ChatGPT4/Examples/Eagle CAD 6.4.0 Torrent The Best Choice for Professional and Hobbyist PCB Designers.md
deleted file mode 100644
index c053d540d448ac1702464126f7e686f9cc59a5da..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Eagle CAD 6.4.0 Torrent The Best Choice for Professional and Hobbyist PCB Designers.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
Extreme ghostbusters complete series download
DRD Systems VideoReDo TVSuite H 286 v5 9 4 719b full version
libro administracion profesional de proyectos yamal chamoun pdf
photoboof keygenerator full torrent
sure cuts a lot 4 crack
alerene zte free
devon.ke.dev.mahadev.dvdrip.xvid.ddr
Error Repair Professional v4.0.3 full version
koon krishi malayalam pdf download
crack family discografia completa descargar minecraft
AnyDVD HD v7.4.8.0 Final-BRD utorrent
font psl kanda modern extra.rar
bijbel in gewone taal ebook 18
EZ Green Screen Photoshop keygen
kitab hakikat insan pdf free downloadgolkes
Oxford English for Careers Nursing 2 pdf.rar
genetica medica jorde pdf download
menucool slider license crack 12
Frozen 2 movie full version free download
CommView for WiFi 5.2.484 Including WEP Hack
Download Zip > https://imgfil.com/2uy1tD
Mksensation Digital Piano Library For Kontakt Torrent
every child is special english subtitle 192
archicad 15 object library free download
il re leone film completo italiano torrent
rambo 4 full movie in hindi mp4 free download
AutoCAD 2014 XFORCE torrent
js0group dll catia v6r2009 crack
shifrin multivariable mathematics djvu download
Thor The Dark World 2013 1080p BrRip x264 YIFY 31
Short Kut - The Con is On hindi dubbed download
hotel courbet 2009 tinto brass download 48
izotope t pain effect serial number
Send Blaster Pro Serial Key
dispensing pharmacy by rm mehta ebook download
simlab 3d pdf exporter for 3ds max crack torrent
call of duty modern warfare 2 highly compressed only 37 mb mega
UFS Explorer Professional Recovery v7.19.6 Portable Serial Key keygen
Mohabbatein 1 full movie in hindi free download 720p
Billu Ustaad download 720p movies
Rig N Roll 3 Crack Key Serial
tp-link tl-wr340gd v5 firmware download
arduino compatible compiler for labview crack
mkvmerge gui v4.4.0 download
sagem f st 2804 original firmware
testmaker 9.3 crack
facebook password revealer online
f-secure freedome vpn cracked apk market
All AutoCAD LT 2009 Products Crack Keygen (x86x64) !Latest utorrent
fallrain 19191a764c
-europe-microcat-2013-torrent
phipan 19191a764c
-torrents-yves-pflieger
nantcor 19191a764c
-mera-dil-lutiya-punjabi-movie-torrent-download
raemala 19191a764c
-saab-the-great-movie-download-utorrent-kickass
laqukei 19191a764c
-booth-software-torrent
finkalm 19191a764c
-flaming-cliffs-3-keygen-torrent
edwivien 19191a764c
-version-14-2-torrent
If you are a fan of ships and sailing, you might be interested in trying out Ship Simulator Extremes, a realistic and immersive simulation game that lets you experience the most extreme conditions on earth as a ship captain. In this guide, we will show you how to download the demo version of the game and what to expect from it.
-Ship Simulator Extremes is the latest installment of the acclaimed Ship Simulator series, developed by VSTEP and published by Paradox Interactive. The game was released in 2010 and has sold over 550,000 copies worldwide. The game features a wide range of vessels to captain, from hovercrafts and coast guard interceptors to mammoth tankers and luxury cruise liners. The game also includes exciting storylines and missions based on actual events in realistic environments at locations all over the world, such as the Antarctic, Bora Bora, Rotterdam, and Sydney. The game also has a save the environment campaign, where you can sail famous Greenpeace ships and take on ecological missions based on real events.
-Download Zip ✶ https://jinyurl.com/2uNRI6
Some of the main features of Ship Simulator Extremes are:
-Before you download the demo, make sure your PC meets the minimum system requirements for the game. Here are the specifications you need:
| Operating system | Processor | Memory | Video card | Hard disc space | Other |
| --- | --- | --- | --- | --- | --- |
| Windows XP (min. Service Pack 2), Windows Vista, or Windows 7; 32- and 64-bit OS supported | 3 GHz P4 Intel or AMD equivalent processor | 2 GB (Windows XP) or 3 GB (Vista or Windows 7) | GeForce 8800 GT or ATI Radeon HD 4850 with 256 MB RAM (Shader Model 3.0) | 3.5 GB | 4x PC DVD-ROM, mouse with scroll wheel, DirectX 9.0c compatible sound card |
Ship Simulator Extremes has received mixed reviews from critics and players. Some praised the game for its realism, variety, and graphics, while others criticized it for its bugs, glitches, and lack of polish. The game has a score of 63/100 on Metacritic and a user rating of 6.8/10 on IGN. Here are some of the pros and cons of the game according to the reviews:
| Pros | Cons |
| --- | --- |
| Realistic and immersive simulation of ship handling and navigation | Buggy and unstable performance, especially in multiplayer mode |
| Wide range of vessels and missions to choose from | Repetitive and boring gameplay, lack of challenge and feedback |
| Beautiful graphics and sound effects, especially the water and weather system | Poor user interface and controls, lack of customization and options |
| Interesting and relevant save-the-environment campaign | Unrealistic and exaggerated scenarios, lack of realism and authenticity |
If you want to try out Ship Simulator Extremes for yourself, you can download the demo version of the game for free from the official website or the Steam store page. Here are the steps you need to follow:
-The first thing you need to do is to visit the official website of Ship Simulator Extremes at (1) or the Steam store page at (2). You can find more information about the game, such as screenshots, videos, news, and forums on these pages.
-On the official website, you will see a download button on the top right corner of the page. Click on it and you will be redirected to a page where you can choose your preferred download platform, such as GamersGate or Direct2Drive. You will need to create an account and pay a small fee to download the full version of the game. However, if you scroll down, you will see a link that says "Download Demo". Click on it and you will be able to download the demo version for free.[6] On the Steam store page, you will see an add to cart button on the right side of the page. Click on it and you will be able to purchase the full version of the game for $19.99. However, if you scroll down, you will see a link that says "Download Demo". Click on it and you will be able to download the demo version for free.[7]
-Once you have downloaded the demo file, you will need to follow the instructions to install and launch it on your PC. The file size is about 600 MB, so it might take some time depending on your internet speed. The installation process is simple and straightforward. Just follow the prompts and agree to the terms and conditions. After that, you can launch the demo from your desktop or start menu.[6][7]
The demo version of Ship Simulator Extremes gives you a taste of what the full game has to offer. Here are some of the things you can expect from it:
-The demo includes two playable singleplayer missions that are part of the save the environment campaign. The first one is called "Greenpeace - Save The Whale", where you have to sail a Greenpeace ship called Esperanza and stop a whaling vessel from hunting whales in Antarctica. The second one is called "Greenpeace - Mediterranean", where you have to sail another Greenpeace ship called Rainbow Warrior III and stop illegal fishing activities in the Mediterranean Sea. These missions are challenging and require you to use your skills and tactics to achieve your objectives.[6][7]
-The demo also lets you captain three different vessels that are featured in the full game. These are the Greenpeace ships Esperanza and Rainbow Warrior III, and a coast guard interceptor. Each vessel has its own characteristics, such as speed, maneuverability, and equipment. You can switch between different views, such as bridge, deck, or free camera, to get a better perspective of your surroundings. You can also use the radio and the horn to communicate with other ships or the port.[6][7]
-One of the most impressive aspects of Ship Simulator Extremes is the realistic water and weather system. The game uses a dynamic ocean simulation that creates waves, currents, and tides based on the wind and the moon. The game also features a day and night cycle and a weather system that can change from sunny to stormy in a matter of minutes. The water and weather effects have a direct impact on your ship's performance and handling, so you have to be prepared for any situation.[6][7]
-The game also boasts stunning graphics and sound effects that create an immersive and realistic experience. The game uses advanced shaders and lighting techniques to render the water, the sky, and the landscapes in high detail. The game also features realistic sound effects, such as the engine noise, the waves crashing, and the wind howling. The game also has a soundtrack that matches the mood and atmosphere of each mission.[6][7]
-Ship Simulator Extremes is a simulation game that lets you experience the most extreme conditions on earth as a ship captain. The game features a wide range of vessels, missions, and locations to explore. The game also has a realistic water and weather system that affects your ship's performance and handling. The game also has stunning graphics and sound effects that create an immersive and realistic experience.
-If you want to try out Ship Simulator Extremes for yourself, you can download the demo version of the game for free from the official website or the Steam store page. The demo includes two playable singleplayer missions, three different vessels to captain, and a glimpse of the realistic water and weather system. The demo is a great way to get a taste of what the full game has to offer.
-We hope this guide has helped you learn more about Ship Simulator Extremes and how to download the demo version of the game. If you have any questions or feedback, feel free to leave a comment below. Happy sailing!
-Here are some of the frequently asked questions about Ship Simulator Extremes:
-Are you looking for a new and exciting game to play on your Android device? Do you love monster hunting games with stunning graphics, immersive gameplay, and diverse challenges? If so, you might want to check out Yeager: Hunter Legend, a 3D action role-playing game that takes you to an alien world full of deadly creatures and dark secrets. In this article, we will tell you what Yeager: Hunter Legend is, how to download it on your Android device, and how to play it like a pro.
-Download File --->>> https://jinyurl.com/2uNTDQ
Yeager: Hunter Legend is a game developed by IGG.COM, the same company behind popular titles like Lords Mobile, Castle Clash, and Mobile Royale. It is a game that combines elements of action, role-playing, and monster hunting genres, set in a sci-fi fantasy world called Planet Ekors. You play as Yeager, an elite Vyderan hunter who is sent to retrieve a priceless stolen relic from the Empire. Along the way, you will encounter ferocious beasts, alien civilizations, and hidden secrets that will test your skills and courage.
-One of the main features of Yeager: Hunter Legend is its stunning graphics and realistic animations that are powered by cutting-edge motion capture technology. The game boasts a vast and diverse open world that you can explore freely, with different biomes, weather effects, day-night cycles, and dynamic lighting. The game also has a rich story and lore that will immerse you in the mysterious Planet Ekors and its history.
-Another feature of Yeager: Hunter Legend is its intuitive and action-oriented combat system that allows you to choose from five powerful weapon classes: Hunting Sword, Force Hammer, Fury Blades, Flux Blaster, and Eidolon Spear. Each weapon class has its own signature moves, combos, and abilities that you can master and customize according to your playstyle. You can also switch between two weapons during combat for more versatility and strategy.
The game also has a unique team hunting system that lets you hunt with up to three other players online. You can cooperate with your teammates to take down massive beasts using different tactics and skills. You can also chat with your teammates using voice or text messages, or use emojis and stickers to express yourself.
-Another feature of Yeager: Hunter Legend is its extensive customization options that let you create your own hunter style. You can hunt beasts for materials rich in Kallar, the powerful essence of your ancestors, to forge and upgrade your equipment. Equipment forged with Kallar-infused beast parts will even gain the appearance and traits of the beasts themselves. You can also equip ancient seals, mysterious artifacts that grant you legendary hunting prowess; install sigils on your Kallar arm to boost your physical aptitude and unlock new hunting skills; and choose your weapon school that fits your playstyle.
-The game also has a diverse range of monsters that you can hunt, each with their own unique combat abilities, behaviors, weaknesses, and rewards. You will need to study and strategize for each monster to defeat them effectively. Some of the monsters include:
| Name | Type | Description |
| --- | --- | --- |
| Blazeclaw | Fire | A fiery feline beast that can unleash explosive fireballs and scorching claws. |
| Glacierhorn | Ice | A colossal rhino-like beast that can create icy spikes and charge with devastating force. |
| Thunderwing | Electric | A majestic bird-like beast that can soar in the sky and unleash lightning bolts and storms. |
| Venomtail | Poison | A venomous lizard-like beast that can spit toxic projectiles and whip its tail with deadly accuracy. |
| Shadowfang | Dark | A stealthy wolf-like beast that can blend in the shadows and strike with swift and powerful bites. |
If you are interested in playing Yeager: Hunter Legend on your Android device, you have three options to download it:
-The easiest and safest way to download Yeager: Hunter Legend on your Android device is to use the official Google Play Store. You can simply search for the game on the store or use this link to access it. Then, you can tap on the Install button and wait for the game to download and install on your device. You will need at least 2.5 GB of free storage space and Android 5.0 or higher to run the game smoothly.
-If you cannot access the Google Play Store or prefer to use a different source, you can also download Yeager: Hunter Legend from APKPure or other third-party websites that offer APK files. APK files are the installation packages for Android applications that you can manually install on your device. However, you should be careful when downloading APK files from unknown sources, as they may contain malware or viruses that can harm your device. To download Yeager: Hunter Legend from APKPure, you can use this link or search for the game on the website. Then, you can tap on the Download APK button and wait for the file to download on your device. You will need to enable the Unknown Sources option in your device settings to allow the installation of APK files from outside the Google Play Store. After that, you can open the downloaded file and follow the instructions to install the game on your device.
-If you want to play Yeager: Hunter Legend on your PC or laptop, you can also use an Android emulator to run the game on your computer. An Android emulator is a software that simulates an Android device on your computer, allowing you to access Android applications and games. One of the best Android emulators for gaming is LDPlayer, which offers high performance, compatibility, and customization features. To download Yeager: Hunter Legend from LDPlayer, you can use this link or search for the game on the LDPlayer website. Then, you can tap on the Download button and wait for the LDPlayer installer to download on your computer. You will need to run the installer and follow the instructions to install LDPlayer on your computer. After that, you can launch LDPlayer and search for Yeager: Hunter Legend on the built-in Google Play Store or use an APK file to install the game on LDPlayer. You will be able to play the game using your keyboard and mouse, or customize your controls according to your preference.
-Now that you have downloaded Yeager: Hunter Legend on your Android device or emulator, you are ready to start playing it. Here are some tips and tricks to help you play the game like a pro:
-The first thing you need to do is to familiarize yourself with the combat mechanics and controls of Yeager: Hunter Legend. The game uses a virtual joystick on the left side of the screen to move your character, and several buttons on the right side of the screen to perform different actions, such as attacking, dodging, switching weapons, using skills, and using items. You can also tap on the screen to interact with objects, NPCs, and menus.
-The combat system of Yeager: Hunter Legend is based on timing, positioning, and strategy. You will need to observe your enemies' movements and patterns, dodge their attacks, exploit their weaknesses, and unleash your own combos and skills. You will also need to manage your stamina, which is consumed by attacking and dodging, and replenish it by resting or using items. You will also need to pay attention to your health, which is reduced by taking damage, and restore it by using items or healing skills. You can also use the Kallar arm to activate special hunting skills that can give you an edge in combat.
-The next thing you need to do is to choose your weapon class and weapon school that suit your playstyle and preference. Yeager: Hunter Legend offers five weapon classes, each with its own strengths, weaknesses, and skills. They are:
-You can also choose your weapon school, which is a set of skills and abilities that you can unlock and upgrade for your weapon class. There are three weapon schools for each weapon class, each with its own focus and style. For example, the Hunting Sword has the following weapon schools:
-You can switch between different weapon classes and weapon schools at any time, so feel free to experiment and find your favorite combination.
-The main activity of Yeager: Hunter Legend is hunting beasts for materials and upgrading your equipment. You can accept hunting quests from NPCs or other players, or explore the world and encounter beasts in the wild. You can hunt beasts solo or with a team of up to four players online. You will need to prepare for each hunt by choosing your equipment, items, skills, and strategy. You will also need to track down the beast, lure it out, fight it, weaken it, capture it or kill it, and harvest its parts.
-You can use the materials you obtain from hunting beasts to forge and upgrade your equipment at the Forge Station. Equipment forged with Kallar-infused beast parts will gain the appearance and traits of the beasts themselves, giving you unique bonuses and effects. You can also customize your equipment by changing its color, adding decals, or applying seals. Seals are ancient artifacts that grant you legendary hunting prowess, such as increasing your damage, speed, defense, or Kallar power.
-The last thing you need to do is to explore the mysterious Planet Ekors and uncover its secrets. Yeager: Hunter Legend has a vast and diverse open world that you can explore freely, with different biomes, weather effects, day-night cycles, and dynamic lighting. You can travel across the world using various vehicles, such as hoverboards, motorcycles, airships, or mechs. You can also interact with various objects, NPCs, and events in the world, such as collecting resources, solving puzzles, discovering lore, or triggering side quests.
-The world of Yeager: Hunter Legend is full of secrets and mysteries that will challenge your curiosity and courage. You will encounter ancient ruins, alien civilizations, hidden dungeons, and legendary beasts that will reveal more about the history and secrets of Planet Ekors. You will also face the Empire, a ruthless faction that seeks to conquer the planet and its resources. You will need to fight against their soldiers, machines, and experiments as you uncover their sinister plans.
-Yeager: Hunter Legend is a 3D action role-playing monster-hunting game that takes you to an alien world full of deadly creatures and dark secrets. You can download it on your Android device from the Google Play Store or third-party sources such as APKPure, or play it on an Android emulator such as LDPlayer. Gameplay revolves around choosing your weapon class and weapon school, hunting beasts for materials to upgrade your equipment, and exploring the mysterious Planet Ekors to uncover its secrets. With its stunning graphics, immersive gameplay, and diverse challenges, Yeager: Hunter Legend will keep you entertained for hours.
-Here are some of the frequently asked questions about Yeager: Hunter Legend:
-Running on CPU 🥶 This demo does not work on CPU.
" - -if torch.cuda.is_available(): - device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - model_id = "CompVis/stable-diffusion-v1-4" - ax_pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(model_id) - ax_pipe.to(device) - sd_pipe = StableDiffusionPipeline.from_pretrained(model_id) - sd_pipe.to(device) - - -MAX_INFERENCE_STEPS = 100 -MAX_SEED = np.iinfo(np.int32).max - - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - - -def get_token_table(prompt: str) -> list[tuple[int, str]]: - tokens = [ax_pipe.tokenizer.decode(t) for t in ax_pipe.tokenizer(prompt)["input_ids"]] - tokens = tokens[1:-1] - return list(enumerate(tokens, start=1)) - - -@spaces.GPU -def run( - prompt: str, - indices_to_alter_str: str, - seed: int = 0, - apply_attend_and_excite: bool = True, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - scale_factor: int = 20, - thresholds: dict[int, float] = { - 10: 0.5, - 20: 0.8, - }, - max_iter_to_alter: int = 25, -) -> PIL.Image.Image: - if num_inference_steps > MAX_INFERENCE_STEPS: - raise gr.Error(f"Number of steps cannot exceed {MAX_INFERENCE_STEPS}.") - - generator = torch.Generator(device=device).manual_seed(seed) - if apply_attend_and_excite: - try: - token_indices = list(map(int, indices_to_alter_str.split(","))) - except Exception: - raise ValueError("Invalid token indices.") - out = ax_pipe( - prompt=prompt, - token_indices=token_indices, - guidance_scale=guidance_scale, - generator=generator, - num_inference_steps=num_inference_steps, - max_iter_to_alter=max_iter_to_alter, - thresholds=thresholds, - scale_factor=scale_factor, - ) - else: - out = sd_pipe( - prompt=prompt, - guidance_scale=guidance_scale, - generator=generator, - num_inference_steps=num_inference_steps, - ) - return out.images[0] - - -def process_example( - prompt: str, - indices_to_alter_str: str, - seed: int, - apply_attend_and_excite: bool, -) -> 
tuple[list[tuple[int, str]], PIL.Image.Image]: - token_table = get_token_table(prompt) - result = run( - prompt=prompt, - indices_to_alter_str=indices_to_alter_str, - seed=seed, - apply_attend_and_excite=apply_attend_and_excite, - ) - return token_table, result - - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - - with gr.Row(): - with gr.Column(): - prompt = gr.Text( - label="Prompt", - max_lines=1, - placeholder="A pod of dolphins leaping out of the water in an ocean with a ship on the background", - ) - with gr.Accordion(label="Check token indices", open=False): - show_token_indices_button = gr.Button("Show token indices") - token_indices_table = gr.Dataframe(label="Token indices", headers=["Index", "Token"], col_count=2) - token_indices_str = gr.Text( - label="Token indices (a comma-separated list indices of the tokens you wish to alter)", - max_lines=1, - placeholder="4,16", - ) - apply_attend_and_excite = gr.Checkbox(label="Apply Attend-and-Excite", value=True) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=0, - ) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - num_inference_steps = gr.Slider( - label="Number of inference steps", - minimum=1, - maximum=MAX_INFERENCE_STEPS, - step=1, - value=50, - ) - guidance_scale = gr.Slider( - label="Guidance scale", - minimum=0, - maximum=50, - step=0.1, - value=7.5, - ) - run_button = gr.Button("Generate") - with gr.Column(): - result = gr.Image(label="Result") - - with gr.Row(): - examples = [ - [ - "A mouse and a red car", - "2,6", - 2098, - True, - ], - [ - "A mouse and a red car", - "2,6", - 2098, - False, - ], - [ - "A horse and a dog", - "2,5", - 123, - True, - ], - [ - "A horse and a dog", - "2,5", - 123, - False, - ], - [ - "A painting of an elephant with glasses", - "5,7", - 123, 
- True, - ], - [ - "A painting of an elephant with glasses", - "5,7", - 123, - False, - ], - [ - "A playful kitten chasing a butterfly in a wildflower meadow", - "3,6,10", - 123, - True, - ], - [ - "A playful kitten chasing a butterfly in a wildflower meadow", - "3,6,10", - 123, - False, - ], - [ - "A grizzly bear catching a salmon in a crystal clear river surrounded by a forest", - "2,6,15", - 123, - True, - ], - [ - "A grizzly bear catching a salmon in a crystal clear river surrounded by a forest", - "2,6,15", - 123, - False, - ], - [ - "A pod of dolphins leaping out of the water in an ocean with a ship on the background", - "4,16", - 123, - True, - ], - [ - "A pod of dolphins leaping out of the water in an ocean with a ship on the background", - "4,16", - 123, - False, - ], - ] - gr.Examples( - examples=examples, - inputs=[ - prompt, - token_indices_str, - seed, - apply_attend_and_excite, - ], - outputs=[ - token_indices_table, - result, - ], - fn=process_example, - cache_examples=os.getenv("CACHE_EXAMPLES") == "1", - examples_per_page=20, - ) - - show_token_indices_button.click( - fn=get_token_table, - inputs=prompt, - outputs=token_indices_table, - queue=False, - api_name="get-token-table", - ) - - gr.on( - triggers=[prompt.submit, token_indices_str.submit, run_button.click], - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=get_token_table, - inputs=prompt, - outputs=token_indices_table, - queue=False, - api_name=False, - ).then( - fn=run, - inputs=[ - prompt, - token_indices_str, - seed, - apply_attend_and_excite, - num_inference_steps, - guidance_scale, - ], - outputs=result, - api_name="run", - ) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/AvinashRamesh23/AIEditor/app.py b/spaces/AvinashRamesh23/AIEditor/app.py deleted file mode 100644 index 27775f6315de44aaafe185222f053815d2e5747d..0000000000000000000000000000000000000000 --- 
a/spaces/AvinashRamesh23/AIEditor/app.py +++ /dev/null @@ -1,435 +0,0 @@ -import streamlit as st -import whisper -import re -from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip -from moviepy.editor import * -import math -from stable_whisper import modify_model,results_to_word_srt -import asyncio -from deepgram import Deepgram -from typing import Dict -import os -import moviepy.editor as mp -from pytube import YouTube -from time import sleep -import pandas as pd - -import calendar -import time - -current_GMT = time.gmtime() - -time_stamp = calendar.timegm(current_GMT) - -st.title('AI Editor for Content Creators!') - -@st.cache(suppress_st_warning=True) -#load whisper model -def load_model(model_selected): - #load medium model - model = whisper.load_model(model_selected) - # modify model to get word timestamp - modify_model(model) - return model - -#transcribe -@st.cache(suppress_st_warning=True) -def transcribe_video(vid,model_selected): - model = load_model(model_selected) - options = whisper.DecodingOptions(fp16=False,language="English") - result = model.transcribe(vid, **options.__dict__) - result['srt'] = whisper_result_to_srt(result) - return result - -#srt generation -def whisper_result_to_srt(result): - text = [] - for i,s in enumerate(result['segments']): - text.append(str(i+1)) - time_start = s['start'] - hours, minutes, seconds = int(time_start/3600), (time_start/60) % 60, (time_start) % 60 - timestamp_start = "%02d:%02d:%06.3f" % (hours, minutes, seconds) - timestamp_start = timestamp_start.replace('.',',') - time_end = s['end'] - hours, minutes, seconds = int(time_end/3600), (time_end/60) % 60, (time_end) % 60 - timestamp_end = "%02d:%02d:%06.3f" % (hours, minutes, seconds) - timestamp_end = timestamp_end.replace('.',',') - text.append(timestamp_start + " --> " + timestamp_end) - text.append(s['text'].strip() + "\n") - return "\n".join(text) - -#compute speaking_time -async def compute_speaking_time(transcript_data: Dict,data:str) -> None: - 
if 'results' in transcript_data: - transcript = transcript_data['results']['channels'][0]['alternatives'][0]['words'] - total_speaker_time = {} - speaker_words = [] - current_speaker = -1 - - for speaker in transcript: - speaker_number = speaker["speaker"] - - if speaker_number != current_speaker: - current_speaker = speaker_number - speaker_words.append([speaker_number, [], 0]) - - try: - total_speaker_time[speaker_number][1] += 1 - except KeyError: - total_speaker_time[speaker_number] = [0,1] - - get_word = speaker["word"] - speaker_words[-1][1].append(get_word) - - total_speaker_time[speaker_number][0] += speaker["end"] - speaker["start"] - speaker_words[-1][2] += speaker["end"] - speaker["start"] - - for speaker, words, time_amount in speaker_words: - print(f"Speaker {speaker}: {' '.join(words)}") - data+=f"\nSpeaker {speaker}: {' '.join(words)}" - print(f"Speaker {speaker}: {time_amount}") - data+=f"\nSpeaker {speaker}: {time_amount}" - - - for speaker, (total_time, amount) in total_speaker_time.items(): - print(f"Speaker {speaker} avg time per phrase: {total_time/amount} ") - data+=f"\nSpeaker {speaker} avg time per phrase: {total_time/amount} " - print(f"Total time of conversation: {total_time}") - data+=f"\nTotal time of conversation: {total_time}" - return transcript,data - -#extract audio from video -def extract_write_audio(vd): - my_clip = mp.VideoFileClip(f'{vd}') - my_clip.audio.write_audiofile(f"audio.wav") - -#speaker diarization workflow -async def speaker_diarization_flow(PATH_TO_FILE): - audio = extract_write_audio(PATH_TO_FILE) - data = '' - DEEPGRAM_API_KEY = "3dc39bf904babb858390455b1a1399e221bf87f8" - deepgram = Deepgram(DEEPGRAM_API_KEY) - with open(PATH_TO_FILE, 'rb') as audio: - source = {'buffer': audio, 'mimetype': 'audio/wav'} - transcription = await deepgram.transcription.prerecorded(source, {'punctuate': True, 'diarize': True}) - transcript,final_data = await compute_speaking_time(transcription,data) - return final_data - -# 
speaker diarization main function -async def speaker_diarization(PATH_TO_FILE): - data = await speaker_diarization_flow(PATH_TO_FILE) - print("data is", data) - return data - -#find filler words -def filler_words_finder(result_data): - word_map_prior_edit=set() - word_map_after_edit=set() - #my filler words sample - filler_words={'um','ah','you know','mmm','er','uh','hmm','actually','basically','seriously','mhm','uh huh','huh','ooh','aah'} - filler_words_timestamp=set() - for keys in result_data: - if keys == 'segments': - prev=0 - for i in result_data[keys]: - for word in i['whole_word_timestamps']: - lower_case = re.sub(r'\W','',word['word'].lower()) - word_map_prior_edit.add(word['timestamp']) - if lower_case in filler_words or lower_case.startswith(('hm','aa','mm','oo')): - st.write(word['word'].lower(),word['timestamp']) - print(word['word'].lower(),word['timestamp']) - filler_words_timestamp.add(word['timestamp']) - prev=word['timestamp'] - continue - word_map_after_edit.add((prev,word['timestamp'])) - prev=word['timestamp'] - return word_map_after_edit, filler_words_timestamp - -def merge_overlapping_time_intervals(intervals): - result=[intervals[0]] - - for interval in intervals: - interval2=result[-1] - - if overlap(interval,interval2): - result[-1] = [min(interval[0],interval2[0]),max(interval[1],interval2[1])] - else: - result.append(interval) - - return result - -def overlap(interval1,interval2): - return min(interval1[1],interval2[1])-max(interval1[0],interval2[0]) >= 0 - -#assembly ai endpoints -import requests -transcript_endpoint = "https://api.assemblyai.com/v2/transcript" -upload_endpoint = "https://api.assemblyai.com/v2/upload" - -headers = { - "authorization": "05e515bf6b474966bc48bbdd1448b3cf", - "content-type": "application/json" -} - -def upload_to_AssemblyAI(save_location): - CHUNK_SIZE = 5242880 - def read_file(filename): - with open(filename, 'rb') as _file: - while True: - print("chunk uploaded") - data = 
_file.read(CHUNK_SIZE) - if not data: - break - yield data - - upload_response = requests.post( - upload_endpoint, - headers=headers, data=read_file(save_location) - ) - print(upload_response.json()) - audio_url = upload_response.json()['upload_url'] - print('Uploaded to', audio_url) - return audio_url - - -def start_analysis(audio_url,type): - ## Start transcription job of audio file - data = { - 'audio_url': audio_url, - 'iab_categories': True, - 'content_safety': True, - "summarization": True, - "summary_type": "bullets", - "summary_model":type - } - if type=='conversational': - data["speaker_labels"]= True - - transcript_response = requests.post(transcript_endpoint, json=data, headers=headers) - print(transcript_response.json()) - transcript_id = transcript_response.json()['id'] - polling_endpoint = transcript_endpoint + "/" + transcript_id - print("Transcribing at", polling_endpoint) - return polling_endpoint - -def get_analysis_results(polling_endpoint): - status = 'submitted' - - while True: - print(status) - polling_response = requests.get(polling_endpoint, headers=headers) - status = polling_response.json()['status'] - # st.write(polling_response.json()) - # st.write(status) - if status == 'submitted' or status == 'processing' or status == 'queued': - print('not ready yet') - sleep(10) - - elif status == 'completed': - print('creating transcript') - return polling_response - - else: - print('error') - return False - -def pii_redact(audiourl,options): - print(options,audiourl) - endpoint = "https://api.assemblyai.com/v2/transcript" - json = { - "audio_url": audiourl, - "redact_pii": True, - "redact_pii_audio": True, - "redact_pii_policies": options - } - - headers = { - "authorization": "05e515bf6b474966bc48bbdd1448b3cf", - "content-type": "application/json", - } - - response = requests.post(endpoint, json=json, headers=headers) - print(response.json()) - transcript_id = response.json()['id'] - polling_endpoint = endpoint + "/" + 
transcript_id - return polling_endpoint - -def pii_redact_audio(polling_endpoint): - status = 'submitted' - headers = { - "authorization": "05e515bf6b474966bc48bbdd1448b3cf", - "content-type": "application/json", - } - while True: - print(status) - polling_response = requests.get(polling_endpoint, headers=headers) - status = polling_response.json()['status'] - if status == 'submitted' or status == 'processing' or status == 'queued': - print('not ready yet') - sleep(10) - - elif status == 'completed': - print('creating transcript') - return polling_response - - else: - print('error') - return False - -def download_redact_audio(polling_endpoint): - headers = { - "authorization": "05e515bf6b474966bc48bbdd1448b3cf", - "content-type": "application/json", - } - - redacted_audio_response = requests.get(polling_endpoint + "/redacted-audio",headers=headers) - print(redacted_audio_response.json()) - redacted_audio = requests.get(redacted_audio_response.json()['redacted_audio_url']) - with open('redacted_audio.mp3', 'wb') as f: - f.write(redacted_audio.content) - -def redact_audio_video_display(vd,audio): - audioclip = AudioFileClip(audio) - clip = VideoFileClip(vd) - videoclip = clip.set_audio(audioclip) - videoclip.write_videofile("Redacted_video.mp4") - st.video("Redacted_video.mp4") - -async def main(uploaded_video,model_selected): - try: - vid = uploaded_video.name - with open(vid, mode='wb') as f: - f.write(uploaded_video.read()) # save video to disk - except: - with st.spinner('Downloading YouTube Video'): - yt = YouTube(uploaded_video) - title=yt.title - vid = f"{title}.mp4" - yt.streams.filter(file_extension="mp4").get_by_resolution("360p").download(filename=vid) - finally: - name = vid.split('.')[0] - preview = st.video(vid) - #extracting the transcription result - with st.spinner('Transcribing Video, Wait for it...'): - result = transcribe_video(vid,model_selected) - st.text_area("Edit Transcript",result["text"]) - col1, col2, col3, col4, col5, col6 = 
st.columns([1,1,1,1,1,1]) - tab1, tab2, tab3, tab4, tab5, tab6 = st.tabs(["Remove Filler Words","Edit Video" ,"Download SRT", "Perform Speaker Diarization","Content Analyzer","PII redactation"]) - - with tab1: - filler_word = st.button('Edit/Remove Filler Words with a click of a button') - if filler_word: - with st.spinner(text="In progress..."): - word_map_after_edit, filler_words_timestamp = filler_words_finder(result) - final_intervals = merge_overlapping_time_intervals(sorted(list(word_map_after_edit))) - subclips=[] - for start,end in final_intervals: - clip = VideoFileClip(vid) - tmp = clip.subclip(start,(end - end*0.1)) - subclips.append(tmp) - #concatenate subclips without filler words - final_clip = concatenate_videoclips(subclips) - final_clip.write_videofile(f"remove_{vid}") - preview = st.video(f"remove_{vid}") - - with tab2: - save = st.button('Edit') - - with tab3: - download = st.download_button('Download SRT', result['srt'],f'{name}.srt') - if download: - st.write('Thanks for downloading!') - - with tab4: - identify_download_speaker = st.button('Perform Speaker Diarization') - if identify_download_speaker: - with st.spinner(text="In progress..."): - results = await speaker_diarization(vid) - download_speaker = st.download_button("download speaker_diarization",results,'diarization_stats.txt') - if download_speaker: - st.write('Thanks for downloading!') - - with tab5: - type = st.selectbox('Summary Type?',('informative', 'conversational', 'catchy')) - Analyze_content = st.button("Start Content Analysis") - if Analyze_content: - with st.spinner(text="In progress..."): - audio = extract_write_audio(vid) - audio_url = upload_to_AssemblyAI("audio.wav") - # start analysis of the file - polling_endpoint = start_analysis(audio_url,type) - # receive the results - results = get_analysis_results(polling_endpoint) - - # separate analysis results - summary = results.json()['summary'] - content_moderation = results.json()["content_safety_labels"] - topic_labels = 
results.json()["iab_categories_result"] - - my_expander1 = st.expander(label='Summary') - my_expander2 = st.expander(label='Content Moderation') - my_expander3 = st.expander(label='Topic Discussed') - - with my_expander1: - st.header("Video summary") - st.write(summary) - - with my_expander2: - st.header("Sensitive content") - if content_moderation['summary'] != {}: - st.subheader('🚨 Mention of the following sensitive topics detected.') - moderation_df = pd.DataFrame(content_moderation['summary'].items()) - moderation_df.columns = ['topic','confidence'] - st.dataframe(moderation_df, use_container_width=True) - else: - st.subheader('✅ All clear! No sensitive content detected.') - - with my_expander3: - st.header("Topics discussed") - topics_df = pd.DataFrame(topic_labels['summary'].items()) - topics_df.columns = ['topic','confidence'] - topics_df["topic"] = topics_df["topic"].str.split(">") - expanded_topics = topics_df.topic.apply(pd.Series).add_prefix('topic_level_') - topics_df = topics_df.join(expanded_topics).drop('topic', axis=1).sort_values(['confidence'], ascending=False).fillna('') - st.dataframe(topics_df, use_container_width=True) - - with tab6: - options = st.multiselect('Select Policies to redact from video',["medical_process","medical_condition","blood_type","drug","injury","number_sequence","email_address","date_of_birth","phone_number","us_social_security_number","credit_card_number","credit_card_expiration","credit_card_cvv","date","nationality","event","language","location","money_amount","person_name","person_age","organization","political_affiliation","occupation","religion","drivers_license","banking_information"],["person_name", 'credit_card_number']) - Perform_redact = st.button("Start PII Redaction") - if Perform_redact: - with st.spinner(text="In progress..."): - audio = extract_write_audio(vid) - audio_url = upload_to_AssemblyAI("audio.wav") - print(audio_url) - print([ x for x in options ]) - polling_endpoint = 
pii_redact(audio_url,options) - results = pii_redact_audio(polling_endpoint) - download_redact_audio(polling_endpoint) - redact_audio_video_display(vid,"redacted_audio.mp3") - -Model_type = st.sidebar.selectbox("Choose Model",('Tiny - Best for Srt generation', 'Base - Best suited for various AI services', 'Medium - Use this model for filler word removal'),0) -upload_video = st.sidebar.file_uploader("Upload mp4 file",type=["mp4","mpeg"]) -youtube_url = st.sidebar.text_input("Enter a YouTube video URL") -# submit_button = st.sidebar.button("Extract Youtube Video") - -if Model_type.startswith("Tiny"): - model_selected = 'tiny.en' -if Model_type.startswith("Base"): - model_selected = 'base.en' -if Model_type.startswith("Small"): - model_selected = 'small.en' -if Model_type.startswith("Medium"): - model_selected = 'medium.en' - -if youtube_url: - asyncio.run(main(youtube_url,model_selected)) - -if upload_video: - asyncio.run(main(upload_video,model_selected)) - -st.sidebar.write("Kindly upload or provide a YouTube link with less than a minute of video for faster performance and to avoid excess usage of the free tier.") diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet.py deleted file mode 100644 index feb7a8222487756d38482da95183bbbcbbe96ed9..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet.py +++ /dev/null @@ -1,864 +0,0 @@ - -import math -import json -import copy -from typing import List, Dict -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.modeling.proposal_generator.build import PROPOSAL_GENERATOR_REGISTRY -from detectron2.layers import ShapeSpec, cat -from 
detectron2.structures import Instances, Boxes -from detectron2.modeling import detector_postprocess -from detectron2.utils.comm import get_world_size -from detectron2.config import configurable - -from ..layers.heatmap_focal_loss import heatmap_focal_loss_jit -from ..layers.heatmap_focal_loss import binary_heatmap_focal_loss -from ..layers.iou_loss import IOULoss -from ..layers.ml_nms import ml_nms -from ..debug import debug_train, debug_test -from .utils import reduce_sum, _transpose -from .centernet_head import CenterNetHead - -__all__ = ["CenterNet"] - -INF = 100000000 - -@PROPOSAL_GENERATOR_REGISTRY.register() -class CenterNet(nn.Module): - @configurable - def __init__(self, - # input_shape: Dict[str, ShapeSpec], - in_channels=256, - *, - num_classes=80, - in_features=("p3", "p4", "p5", "p6", "p7"), - strides=(8, 16, 32, 64, 128), - score_thresh=0.05, - hm_min_overlap=0.8, - loc_loss_type='giou', - min_radius=4, - hm_focal_alpha=0.25, - hm_focal_beta=4, - loss_gamma=2.0, - reg_weight=2.0, - not_norm_reg=True, - with_agn_hm=False, - only_proposal=False, - as_proposal=False, - not_nms=False, - pos_weight=1., - neg_weight=1., - sigmoid_clamp=1e-4, - ignore_high_fp=-1., - center_nms=False, - sizes_of_interest=[[0,80],[64,160],[128,320],[256,640],[512,10000000]], - more_pos=False, - more_pos_thresh=0.2, - more_pos_topk=9, - pre_nms_topk_train=1000, - pre_nms_topk_test=1000, - post_nms_topk_train=100, - post_nms_topk_test=100, - nms_thresh_train=0.6, - nms_thresh_test=0.6, - no_reduce=False, - debug=False, - vis_thresh=0.5, - pixel_mean=[103.530,116.280,123.675], - pixel_std=[1.0,1.0,1.0], - device='cuda', - centernet_head=None, - ): - super().__init__() - self.num_classes = num_classes - self.in_features = in_features - self.strides = strides - self.score_thresh = score_thresh - self.min_radius = min_radius - self.hm_focal_alpha = hm_focal_alpha - self.hm_focal_beta = hm_focal_beta - self.loss_gamma = loss_gamma - self.reg_weight = reg_weight - self.not_norm_reg = 
not_norm_reg - self.with_agn_hm = with_agn_hm - self.only_proposal = only_proposal - self.as_proposal = as_proposal - self.not_nms = not_nms - self.pos_weight = pos_weight - self.neg_weight = neg_weight - self.sigmoid_clamp = sigmoid_clamp - self.ignore_high_fp = ignore_high_fp - self.center_nms = center_nms - self.sizes_of_interest = sizes_of_interest - self.more_pos = more_pos - self.more_pos_thresh = more_pos_thresh - self.more_pos_topk = more_pos_topk - self.pre_nms_topk_train = pre_nms_topk_train - self.pre_nms_topk_test = pre_nms_topk_test - self.post_nms_topk_train = post_nms_topk_train - self.post_nms_topk_test = post_nms_topk_test - self.nms_thresh_train = nms_thresh_train - self.nms_thresh_test = nms_thresh_test - self.no_reduce = no_reduce - self.debug = debug - self.vis_thresh = vis_thresh - if self.center_nms: - self.not_nms = True - self.iou_loss = IOULoss(loc_loss_type) - assert (not self.only_proposal) or self.with_agn_hm - # delta for rendering heatmap - self.delta = (1 - hm_min_overlap) / (1 + hm_min_overlap) - if centernet_head is None: - self.centernet_head = CenterNetHead( - in_channels=in_channels, - num_levels=len(in_features), - with_agn_hm=with_agn_hm, - only_proposal=only_proposal) - else: - self.centernet_head = centernet_head - if self.debug: - pixel_mean = torch.Tensor(pixel_mean).to( - torch.device(device)).view(3, 1, 1) - pixel_std = torch.Tensor(pixel_std).to( - torch.device(device)).view(3, 1, 1) - self.denormalizer = lambda x: x * pixel_std + pixel_mean - - @classmethod - def from_config(cls, cfg, input_shape): - ret = { - # 'input_shape': input_shape, - 'in_channels': input_shape[ - cfg.MODEL.CENTERNET.IN_FEATURES[0]].channels, - 'num_classes': cfg.MODEL.CENTERNET.NUM_CLASSES, - 'in_features': cfg.MODEL.CENTERNET.IN_FEATURES, - 'strides': cfg.MODEL.CENTERNET.FPN_STRIDES, - 'score_thresh': cfg.MODEL.CENTERNET.INFERENCE_TH, - 'loc_loss_type': cfg.MODEL.CENTERNET.LOC_LOSS_TYPE, - 'hm_min_overlap': cfg.MODEL.CENTERNET.HM_MIN_OVERLAP, 
- 'min_radius': cfg.MODEL.CENTERNET.MIN_RADIUS, - 'hm_focal_alpha': cfg.MODEL.CENTERNET.HM_FOCAL_ALPHA, - 'hm_focal_beta': cfg.MODEL.CENTERNET.HM_FOCAL_BETA, - 'loss_gamma': cfg.MODEL.CENTERNET.LOSS_GAMMA, - 'reg_weight': cfg.MODEL.CENTERNET.REG_WEIGHT, - 'not_norm_reg': cfg.MODEL.CENTERNET.NOT_NORM_REG, - 'with_agn_hm': cfg.MODEL.CENTERNET.WITH_AGN_HM, - 'only_proposal': cfg.MODEL.CENTERNET.ONLY_PROPOSAL, - 'as_proposal': cfg.MODEL.CENTERNET.AS_PROPOSAL, - 'not_nms': cfg.MODEL.CENTERNET.NOT_NMS, - 'pos_weight': cfg.MODEL.CENTERNET.POS_WEIGHT, - 'neg_weight': cfg.MODEL.CENTERNET.NEG_WEIGHT, - 'sigmoid_clamp': cfg.MODEL.CENTERNET.SIGMOID_CLAMP, - 'ignore_high_fp': cfg.MODEL.CENTERNET.IGNORE_HIGH_FP, - 'center_nms': cfg.MODEL.CENTERNET.CENTER_NMS, - 'sizes_of_interest': cfg.MODEL.CENTERNET.SOI, - 'more_pos': cfg.MODEL.CENTERNET.MORE_POS, - 'more_pos_thresh': cfg.MODEL.CENTERNET.MORE_POS_THRESH, - 'more_pos_topk': cfg.MODEL.CENTERNET.MORE_POS_TOPK, - 'pre_nms_topk_train': cfg.MODEL.CENTERNET.PRE_NMS_TOPK_TRAIN, - 'pre_nms_topk_test': cfg.MODEL.CENTERNET.PRE_NMS_TOPK_TEST, - 'post_nms_topk_train': cfg.MODEL.CENTERNET.POST_NMS_TOPK_TRAIN, - 'post_nms_topk_test': cfg.MODEL.CENTERNET.POST_NMS_TOPK_TEST, - 'nms_thresh_train': cfg.MODEL.CENTERNET.NMS_TH_TRAIN, - 'nms_thresh_test': cfg.MODEL.CENTERNET.NMS_TH_TEST, - 'no_reduce': cfg.MODEL.CENTERNET.NO_REDUCE, - 'debug': cfg.DEBUG, - 'vis_thresh': cfg.VIS_THRESH, - 'pixel_mean': cfg.MODEL.PIXEL_MEAN, - 'pixel_std': cfg.MODEL.PIXEL_STD, - 'device': cfg.MODEL.DEVICE, - 'centernet_head': CenterNetHead( - cfg, [input_shape[f] for f in cfg.MODEL.CENTERNET.IN_FEATURES]), - } - return ret - - - def forward(self, images, features_dict, gt_instances): - features = [features_dict[f] for f in self.in_features] - clss_per_level, reg_pred_per_level, agn_hm_pred_per_level = \ - self.centernet_head(features) - grids = self.compute_grids(features) - shapes_per_level = grids[0].new_tensor( - [(x.shape[2], x.shape[3]) for x in 
reg_pred_per_level]) - - if not self.training: - return self.inference( - images, clss_per_level, reg_pred_per_level, - agn_hm_pred_per_level, grids) - else: - pos_inds, labels, reg_targets, flattened_hms = \ - self._get_ground_truth( - grids, shapes_per_level, gt_instances) - # logits_pred: M x F, reg_pred: M x 4, agn_hm_pred: M - logits_pred, reg_pred, agn_hm_pred = self._flatten_outputs( - clss_per_level, reg_pred_per_level, agn_hm_pred_per_level) - - if self.more_pos: - # add more pixels as positive if \ - # 1. they are within the center3x3 region of an object - # 2. their regression losses are small (Have you ever found yourself in a situation where you forgot the password, PIN, pattern, or fingerprint lock on your phone? Or perhaps you bought a second-hand phone that is locked to an iCloud or Google account? Or maybe you want to fix some system problems on your phone, such as a black screen, a boot loop, or a stuck logo? If you are looking for a solution to these problems, then you may want to try Dr Fone Unlock for PC.
-Download ❤❤❤ https://bltlly.com/2v6Kpi
Dr Fone Unlock is a powerful piece of software that can help you unlock your phone, repair its system, recover your data, transfer your files, back up your chats, and change your location with ease. It is compatible with both iOS and Android devices and works across a variety of scenarios. In this article, we will show you how to download Dr Fone Unlock for PC and how to use it effectively.
-Dr Fone Unlock is more than a screen-unlocking tool. It offers a complete mobile solution that can meet all your needs. Here are some of the features of Dr Fone Unlock for PC:
-Dr Fone Unlock for PC is a powerful and versatile piece of software that can help you unlock your phone, fix its system, recover your data, transfer your files, back up your chats, and change your location with ease. It is compatible with both iOS and Android devices and works across a variety of scenarios. It is easy to use, safe, and reliable, and it offers multiple tools in a single program. However, it is not free, it requires an Internet connection, and it may not work for some devices or situations. Therefore, you should always check the software's compatibility and instructions before using it.
- -If you are looking for a solution to your mobile problems, you may want to try Dr Fone Unlock for PC. You can download it from the official website and install it on your PC in minutes. You can then use it to perform various operations on your device in a few simple steps. You can also contact the customer support team if you have any questions or problems with the software.
-So, what are you waiting for? Download Dr Fone Unlock for PC today and enjoy all of its benefits and features!
-Aquí están algunas de las preguntas más frecuentes sobre Dr Fone Unlock para PC:
--| CodeBERT: A Pre-Trained Model for Programming & Natural Languages
-| Microsoft CodeBERT-Base Documentation
-| My Code for this Fine-Tuned Project
-| Dataset Source
-|
-"""
-
-examples = ['94311163nobp', 'mpompo1', 'dK4dWOjM1OAPeisw']
-
-gr.Interface(fn=classify_password,
-             inputs=gr.inputs.Textbox(),
-             outputs=gr.outputs.Textbox(),
-             title=title,
-             article=article,
-             description=description,
-             examples=examples,
-             theme='abidlabs/dracula_revamped'
-             ).launch()
\ No newline at end of file
diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/Training.md b/spaces/EXPOSUREEE/Ai-Image-Enhancer/Training.md
deleted file mode 100644
index 64704e1d2e1f334984232afd12b245235b274a9e..0000000000000000000000000000000000000000
--- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/Training.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# :computer: How to Train Real-ESRGAN
-
-The training codes have been released.
-Action: {scene.action}
-Position: {scene.position}
-- Generate a new Magic card from a text description, - created by YaYaB. -
- return; // if no <code> element is found, don't add the button
- }
- var firstChild = code.firstChild;
- if (!firstChild) {
- return; // if the <code> element has no child nodes, don't add the button
- }
- var button = document.createElement('button');
- button.textContent = '\uD83D\uDCCE'; // use the 📎 symbol as the "copy" button label
- button.style.position = 'relative';
- button.style.float = 'right';
- button.style.fontSize = '1em'; // optional: adjust the button size
- button.style.background = 'none'; // optional: remove the background color
- button.style.border = 'none'; // optional: remove the border
- button.style.cursor = 'pointer'; // optional: show a pointer cursor
- button.addEventListener('click', function () {
- var range = document.createRange();
- range.selectNodeContents(code);
- range.setStartBefore(firstChild); // start the range before the first child node
- var selection = window.getSelection();
- selection.removeAllRanges();
- selection.addRange(range);
-
- try {
- var success = document.execCommand('copy');
- if (success) {
- button.textContent = '\u2714';
- setTimeout(function () {
- button.textContent = '\uD83D\uDCCE'; // restore the "copy" button icon
- }, 2000);
- } else {
- button.textContent = '\u2716';
- }
- } catch (e) {
- console.error(e);
- button.textContent = '\u2716';
- }
-
- selection.removeAllRanges();
- });
- code.insertBefore(button, firstChild); // insert the button before the first child element
- }
-
- function handleNewElements(mutationsList, observer) {
- for (var mutation of mutationsList) {
- if (mutation.type === 'childList') {
- for (var node of mutation.addedNodes) {
- if (node.nodeName === 'PRE') {
- addCopyButton(node);
- }
- }
- }
- }
- }
-
- var observer = new MutationObserver(handleNewElements);
- observer.observe(document.documentElement, { childList: true, subtree: true });
-
- document.querySelectorAll('pre').forEach(addCopyButton);
-})();
diff --git a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Arabic_poem_classifier/app.py b/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Arabic_poem_classifier/app.py
deleted file mode 100644
index bbf72b782320453cd5d9fb4e7e1ebd99fc972af8..0000000000000000000000000000000000000000
--- a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Arabic_poem_classifier/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import gradio as gr
-
-description = "التعرف على خاصيات البيت الشعري"
-title = """هذا البرنامج يقوم بالتعرف على مختلف خاصيات البيت من الشعر.
-يمكنكم إختيار الخاصية من بين:
-- التعرف على البحر
-- التعرف على الروي
-- التعرف على الموضوع"""
-
-examples = [["سَلو قَلبي غَداةَ سَلا وَثابا لَعَلَّ عَلى الجَمالِ لَهُ عِتابا"], ["قفا نبك من ذِكرى حبيب ومنزلِ بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"]]
-
-
-meter = gr.Interface.load("huggingface/Yah216/Arabic_poem_meter_3",
- description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه",
- examples=examples, title = "التعرف على البحر",
- inputs = gr.inputs.Textbox(lines = 3, label = "البيت")
-
-)
-rawiy = gr.Interface.load("huggingface/Yah216/Poem_Qafiyah_Detection",
- title ="التعرف على الروي",
- examples=examples,
- description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه",
- inputs = gr.inputs.Textbox(lines = 3, label = "البيت")
-
-)
-subject = gr.Interface.load(
- "huggingface/zenkri/autotrain-Arabic_Poetry_by_Subject-920730230",
- title="التعرف على الموضوع",
- examples=examples,
- description="من فضلك، أدخل البيت الشعري الذي تود التعرف عليه",
- inputs = gr.inputs.Textbox(lines = 3, label = "البيت")
-
-)
-demo = gr.TabbedInterface([meter, rawiy, subject], ["التعرف على البحر","التعرف على الروي","التعرف على الموضوع"])
-demo.launch()
-
diff --git a/spaces/abhijitguha/chatbot_gpt3/app.py b/spaces/abhijitguha/chatbot_gpt3/app.py
deleted file mode 100644
index ced97e751f804ec57bd65ffebdddf68d8c14711a..0000000000000000000000000000000000000000
--- a/spaces/abhijitguha/chatbot_gpt3/app.py
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-# In[ ]:
-
-
-import os
-import openai
-import gradio as gr
-
-openai.api_key = os.environ.get("OPENAI_API_KEY")  # read the key from the environment; never commit real API keys
-start_sequence = "\nAI:"
-restart_sequence = "\nHuman: "
-
-def predict(input,initial_prompt, history=[]):
-
- s = list(sum(history, ()))
- s.append(input)
-# initial_prompt="The following is a conversation with an AI movie recommendation assistant. The assistant is helpful, creative, clever, and very friendly.Along with movie recommendation it also talks about general topics"
-# \n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: "
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt= initial_prompt + "\n" + str(s),
- temperature=0.9,
- max_tokens=150,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0.6,
- stop=[" Human:", " AI:"])
- # tokenize the new input sentence
- response2 = response["choices"][0]["text"]
- history.append((input, response2))
-
- return history, history
-
-
-gr.Interface(fn=predict,
- inputs=["text","text",'state'],
-
- outputs=["chatbot",'state']).launch()
-
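The `predict` helper above keeps each turn as a `(user, bot)` tuple and flattens the history with `sum(history, ())` before building the prompt. A minimal standalone sketch of that bookkeeping (plain Python, no OpenAI call, with made-up utterances):

```python
# Sketch of the history handling in predict(): each turn is a (user, bot)
# tuple, and sum(history, ()) concatenates the tuples into one flat tuple.
history = [("Hello", "Hi, how can I help?"), ("Recommend a movie", "Try Arrival.")]

s = list(sum(history, ()))    # flatten to alternating user/bot utterances
s.append("Something sci-fi")  # append the new user input

print(s)
# ['Hello', 'Hi, how can I help?', 'Recommend a movie', 'Try Arrival.', 'Something sci-fi']
```

The flattened list is then stringified and appended to the initial prompt; note that `sum(..., ())` is quadratic in the number of turns, so a long chat would be better served by `itertools.chain.from_iterable`.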
diff --git a/spaces/abhishek/sketch-to-image/annotator/hed/__init__.py b/spaces/abhishek/sketch-to-image/annotator/hed/__init__.py
deleted file mode 100644
index edfc2927b50cdfb42f7cbfdc78300238a67bf9df..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/hed/__init__.py
+++ /dev/null
@@ -1,107 +0,0 @@
-'''
- * Copyright (c) 2023 Salesforce, Inc.
- * All rights reserved.
- * SPDX-License-Identifier: Apache License 2.0
- * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/
- * By Can Qin
- * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet
- * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala
-'''
-
-# This is an improved version and model of HED edge detection without GPL contamination
-# Please use this implementation in your products
-# This implementation may produce slightly different results from Saining Xie's official implementations,
-# but it generates smoother edges and is more suitable for ControlNet as well as other image-to-image translations.
-# Different from official models and other implementations, this is an RGB-input model (rather than BGR)
-# and in this way it works better for gradio's RGB protocol
-
-import os
-import cv2
-import torch
-import numpy as np
-
-from einops import rearrange
-from annotator.util import annotator_ckpts_path
-
-
-class DoubleConvBlock(torch.nn.Module):
- def __init__(self, input_channel, output_channel, layer_number):
- super().__init__()
- self.convs = torch.nn.Sequential()
- self.convs.append(torch.nn.Conv2d(in_channels=input_channel, out_channels=output_channel, kernel_size=(3, 3), stride=(1, 1), padding=1))
- for i in range(1, layer_number):
- self.convs.append(torch.nn.Conv2d(in_channels=output_channel, out_channels=output_channel, kernel_size=(3, 3), stride=(1, 1), padding=1))
- self.projection = torch.nn.Conv2d(in_channels=output_channel, out_channels=1, kernel_size=(1, 1), stride=(1, 1), padding=0)
-
- def __call__(self, x, down_sampling=False):
- h = x
- if down_sampling:
- h = torch.nn.functional.max_pool2d(h, kernel_size=(2, 2), stride=(2, 2))
- for conv in self.convs:
- h = conv(h)
- h = torch.nn.functional.relu(h)
- return h, self.projection(h)
-
-
-class ControlNetHED_Apache2(torch.nn.Module):
- def __init__(self):
- super().__init__()
- self.norm = torch.nn.Parameter(torch.zeros(size=(1, 3, 1, 1)))
- self.block1 = DoubleConvBlock(input_channel=3, output_channel=64, layer_number=2)
- self.block2 = DoubleConvBlock(input_channel=64, output_channel=128, layer_number=2)
- self.block3 = DoubleConvBlock(input_channel=128, output_channel=256, layer_number=3)
- self.block4 = DoubleConvBlock(input_channel=256, output_channel=512, layer_number=3)
- self.block5 = DoubleConvBlock(input_channel=512, output_channel=512, layer_number=3)
-
- def __call__(self, x):
- h = x - self.norm
- h, projection1 = self.block1(h)
- h, projection2 = self.block2(h, down_sampling=True)
- h, projection3 = self.block3(h, down_sampling=True)
- h, projection4 = self.block4(h, down_sampling=True)
- h, projection5 = self.block5(h, down_sampling=True)
- return projection1, projection2, projection3, projection4, projection5
-
-
-class HEDdetector:
- def __init__(self):
- remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/ControlNetHED.pth"
- modelpath = os.path.join(annotator_ckpts_path, "ControlNetHED.pth")
- if not os.path.exists(modelpath):
- from basicsr.utils.download_util import load_file_from_url
- load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path)
- self.netNetwork = ControlNetHED_Apache2().float().cuda().eval()
- self.netNetwork.load_state_dict(torch.load(modelpath))
-
- def __call__(self, input_image):
- assert input_image.ndim == 3
- H, W, C = input_image.shape
- with torch.no_grad():
- image_hed = torch.from_numpy(input_image.copy()).float().cuda()
- image_hed = rearrange(image_hed, 'h w c -> 1 c h w')
- edges = self.netNetwork(image_hed)
- edges = [e.detach().cpu().numpy().astype(np.float32)[0, 0] for e in edges]
- edges = [cv2.resize(e, (W, H), interpolation=cv2.INTER_LINEAR) for e in edges]
- edges = np.stack(edges, axis=2)
- edge = 1 / (1 + np.exp(-np.mean(edges, axis=2).astype(np.float64)))
- edge = (edge * 255.0).clip(0, 255).astype(np.uint8)
- return edge
-
-
-def nms(x, t, s):
- x = cv2.GaussianBlur(x.astype(np.float32), (0, 0), s)
-
- f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8)
- f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8)
- f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8)
- f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8)
-
- y = np.zeros_like(x)
-
- for f in [f1, f2, f3, f4]:
- np.putmask(y, cv2.dilate(x, kernel=f) == x, x)
-
- z = np.zeros_like(y, dtype=np.uint8)
- z[y > t] = 255
- return z
diff --git a/spaces/adirik/stylemc-demo/encoder4editing/utils/train_utils.py b/spaces/adirik/stylemc-demo/encoder4editing/utils/train_utils.py
deleted file mode 100644
index 0c55177f7442010bc1fcc64de3d142585c22adc0..0000000000000000000000000000000000000000
--- a/spaces/adirik/stylemc-demo/encoder4editing/utils/train_utils.py
+++ /dev/null
@@ -1,13 +0,0 @@
-
-def aggregate_loss_dict(agg_loss_dict):
- mean_vals = {}
- for output in agg_loss_dict:
- for key in output:
- mean_vals[key] = mean_vals.setdefault(key, []) + [output[key]]
- for key in mean_vals:
- if len(mean_vals[key]) > 0:
- mean_vals[key] = sum(mean_vals[key]) / len(mean_vals[key])
- else:
- print('{} has no value'.format(key))
- mean_vals[key] = 0
- return mean_vals
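To illustrate, `aggregate_loss_dict` collects every value seen for a key across per-step loss dictionaries, then replaces each list with its mean. A standalone copy with a tiny worked example (the metric names are made up):

```python
# Standalone copy of aggregate_loss_dict for illustration.
def aggregate_loss_dict(agg_loss_dict):
    mean_vals = {}
    for output in agg_loss_dict:
        for key in output:
            # accumulate every value observed for this key
            mean_vals[key] = mean_vals.setdefault(key, []) + [output[key]]
    for key in mean_vals:
        if len(mean_vals[key]) > 0:
            mean_vals[key] = sum(mean_vals[key]) / len(mean_vals[key])
        else:
            mean_vals[key] = 0
    return mean_vals

steps = [{'loss': 1.0, 'id_loss': 0.25}, {'loss': 3.0, 'id_loss': 0.75}]
print(aggregate_loss_dict(steps))  # {'loss': 2.0, 'id_loss': 0.5}
```

Keys missing from some steps are simply averaged over the steps where they appear, which is why the helper tracks lists per key rather than a running sum.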
diff --git a/spaces/ahmedghani/Editing-Tools/README.md b/spaces/ahmedghani/Editing-Tools/README.md
deleted file mode 100644
index 6288fff80057bc9bb6addf04040dd1a51f9ab034..0000000000000000000000000000000000000000
--- a/spaces/ahmedghani/Editing-Tools/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Editing Tools
-emoji: 📽️📷🎥📹🎦🖼️🎨🖌️
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.22.1
-app_file: app.py
-pinned: false
----
-
-```bash
-conda create -n editing-tools python=3.9 -y
-conda activate editing-tools
-conda install -c "nvidia/label/cuda-11.7.0" cuda-toolkit cuda
-pip install -r requirements.txt
-python app.py
-```
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/Keypoint_Communities/README.md b/spaces/akhaliq/Keypoint_Communities/README.md
deleted file mode 100644
index 1217eeb57b73fd355e773f5c039b4bcd0fe0164e..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Keypoint_Communities/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Keypoint_Communities
-emoji: 👁
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/akhaliq/stylegan3_clip/avg_spectra.py b/spaces/akhaliq/stylegan3_clip/avg_spectra.py
deleted file mode 100644
index afaef87de54e49df230b432b52fda92667d17667..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/stylegan3_clip/avg_spectra.py
+++ /dev/null
@@ -1,276 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Compare average power spectra between real and generated images,
-or between multiple generators."""
-
-import os
-import numpy as np
-import torch
-import torch.fft
-import scipy.ndimage
-import matplotlib.pyplot as plt
-import click
-import tqdm
-import dnnlib
-
-import legacy
-from training import dataset
-
-#----------------------------------------------------------------------------
-# Setup an iterator for streaming images, in uint8 NCHW format, based on the
-# respective command line options.
-
-def stream_source_images(source, num, seed, device, data_loader_kwargs=None): # => num_images, image_size, image_iter
- ext = source.split('.')[-1].lower()
- if data_loader_kwargs is None:
- data_loader_kwargs = dict(pin_memory=True, num_workers=3, prefetch_factor=2)
-
- if ext == 'pkl':
- if num is None:
- raise click.ClickException('--num is required when --source points to network pickle')
- with dnnlib.util.open_url(source) as f:
- G = legacy.load_network_pkl(f)['G_ema'].to(device)
- def generate_image(seed):
- rnd = np.random.RandomState(seed)
- z = torch.from_numpy(rnd.randn(1, G.z_dim)).to(device)
- c = torch.zeros([1, G.c_dim], device=device)
- if G.c_dim > 0:
- c[:, rnd.randint(G.c_dim)] = 1
- return (G(z=z, c=c) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- _ = generate_image(seed) # warm up
- image_iter = (generate_image(seed + idx) for idx in range(num))
- return num, G.img_resolution, image_iter
-
- elif ext == 'zip' or os.path.isdir(source):
- dataset_obj = dataset.ImageFolderDataset(path=source, max_size=num, random_seed=seed)
- if num is not None and num != len(dataset_obj):
- raise click.ClickException(f'--source contains fewer than {num} images')
- data_loader = torch.utils.data.DataLoader(dataset_obj, batch_size=1, **data_loader_kwargs)
- image_iter = (image.to(device) for image, _label in data_loader)
- return len(dataset_obj), dataset_obj.resolution, image_iter
-
- else:
- raise click.ClickException('--source must point to network pickle, dataset zip, or directory')
-
-#----------------------------------------------------------------------------
-# Load average power spectrum from the specified .npz file and construct
-# the corresponding heatmap for visualization.
-
-def construct_heatmap(npz_file, smooth):
- npz_data = np.load(npz_file)
- spectrum = npz_data['spectrum']
- image_size = npz_data['image_size']
- hmap = np.log10(spectrum) * 10 # dB
- hmap = np.fft.fftshift(hmap)
- hmap = np.concatenate([hmap, hmap[:1, :]], axis=0)
- hmap = np.concatenate([hmap, hmap[:, :1]], axis=1)
- if smooth > 0:
- sigma = spectrum.shape[0] / image_size * smooth
- hmap = scipy.ndimage.gaussian_filter(hmap, sigma=sigma, mode='nearest')
- return hmap, image_size
-
-#----------------------------------------------------------------------------
-
-@click.group()
-def main():
- """Compare average power spectra between real and generated images,
- or between multiple generators.
-
- Example:
-
- \b
- # Calculate dataset mean and std, needed in subsequent steps.
- python avg_spectra.py stats --source=~/datasets/ffhq-1024x1024.zip
-
- \b
- # Calculate average spectrum for the training data.
- python avg_spectra.py calc --source=~/datasets/ffhq-1024x1024.zip \\
- --dest=tmp/training-data.npz --mean=112.684 --std=69.509
-
- \b
- # Calculate average spectrum for a pre-trained generator.
- python avg_spectra.py calc \\
- --source=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhq-1024x1024.pkl \\
- --dest=tmp/stylegan3-r.npz --mean=112.684 --std=69.509 --num=70000
-
- \b
- # Display results.
- python avg_spectra.py heatmap tmp/training-data.npz
- python avg_spectra.py heatmap tmp/stylegan3-r.npz
- python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz
-
- \b
- # Save as PNG.
- python avg_spectra.py heatmap tmp/training-data.npz --save=tmp/training-data.png --dpi=300
- python avg_spectra.py heatmap tmp/stylegan3-r.npz --save=tmp/stylegan3-r.png --dpi=300
- python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz --save=tmp/slices.png --dpi=300
- """
-
-#----------------------------------------------------------------------------
-
-@main.command()
-@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True)
-@click.option('--num', help='Number of images to process [default: all]', metavar='INT', type=click.IntRange(min=1))
-@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True)
-def stats(source, num, seed, device=torch.device('cuda')):
- """Calculate dataset mean and standard deviation needed by 'calc'."""
- torch.multiprocessing.set_start_method('spawn')
- num_images, _image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device)
-
- # Accumulate moments.
- moments = torch.zeros([3], dtype=torch.float64, device=device)
- for image in tqdm.tqdm(image_iter, total=num_images):
- image = image.to(torch.float64)
- moments += torch.stack([torch.ones_like(image).sum(), image.sum(), image.square().sum()])
- moments = moments / moments[0]
-
- # Compute mean and standard deviation.
- mean = moments[1]
- std = (moments[2] - moments[1].square()).sqrt()
- print(f'--mean={mean:g} --std={std:g}')
-
-#----------------------------------------------------------------------------
-
-@main.command()
-@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True)
-@click.option('--dest', help='Where to store the result', metavar='NPZ', required=True)
-@click.option('--mean', help='Dataset mean for whitening', metavar='FLOAT', type=float, required=True)
-@click.option('--std', help='Dataset standard deviation for whitening', metavar='FLOAT', type=click.FloatRange(min=0), required=True)
-@click.option('--num', help='Number of images to process [default: all]', metavar='INT', type=click.IntRange(min=1))
-@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True)
-@click.option('--beta', help='Shape parameter for the Kaiser window', metavar='FLOAT', type=click.FloatRange(min=0), default=8, show_default=True)
-@click.option('--interp', help='Frequency-domain interpolation factor', metavar='INT', type=click.IntRange(min=1), default=4, show_default=True)
-def calc(source, dest, mean, std, num, seed, beta, interp, device=torch.device('cuda')):
- """Calculate average power spectrum and store it in .npz file."""
- torch.multiprocessing.set_start_method('spawn')
- num_images, image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device)
- spectrum_size = image_size * interp
- padding = spectrum_size - image_size
-
- # Setup window function.
- window = torch.kaiser_window(image_size, periodic=False, beta=beta, device=device)
- window *= window.square().sum().rsqrt()
- window = window.ger(window).unsqueeze(0).unsqueeze(1)
-
- # Accumulate power spectrum.
- spectrum = torch.zeros([spectrum_size, spectrum_size], dtype=torch.float64, device=device)
- for image in tqdm.tqdm(image_iter, total=num_images):
- image = (image.to(torch.float64) - mean) / std
- image = torch.nn.functional.pad(image * window, [0, padding, 0, padding])
- spectrum += torch.fft.fftn(image, dim=[2,3]).abs().square().mean(dim=[0,1])
- spectrum /= num_images
-
- # Save result.
- if os.path.dirname(dest):
- os.makedirs(os.path.dirname(dest), exist_ok=True)
- np.savez(dest, spectrum=spectrum.cpu().numpy(), image_size=image_size)
-
-#----------------------------------------------------------------------------
-
-@main.command()
-@click.argument('npz-file', nargs=1)
-@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]')
-@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True)
-@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=1.25, show_default=True)
-def heatmap(npz_file, save, smooth, dpi):
- """Visualize 2D heatmap based on the given .npz file."""
- hmap, image_size = construct_heatmap(npz_file=npz_file, smooth=smooth)
-
- # Setup plot.
- plt.figure(figsize=[6, 4.8], dpi=dpi, tight_layout=True)
- freqs = np.linspace(-0.5, 0.5, num=hmap.shape[0], endpoint=True) * image_size
- ticks = np.linspace(freqs[0], freqs[-1], num=5, endpoint=True)
- levels = np.linspace(-40, 20, num=13, endpoint=True)
-
- # Draw heatmap.
- plt.xlim(ticks[0], ticks[-1])
- plt.ylim(ticks[0], ticks[-1])
- plt.xticks(ticks)
- plt.yticks(ticks)
- plt.contourf(freqs, freqs, hmap, levels=levels, extend='both', cmap='Blues')
- plt.gca().set_aspect('equal')
- plt.colorbar(ticks=levels)
- plt.contour(freqs, freqs, hmap, levels=levels, extend='both', linestyles='solid', linewidths=1, colors='midnightblue', alpha=0.2)
-
- # Display or save.
- if save is None:
- plt.show()
- else:
- if os.path.dirname(save):
- os.makedirs(os.path.dirname(save), exist_ok=True)
- plt.savefig(save)
-
-#----------------------------------------------------------------------------
-
-@main.command()
-@click.argument('npz-files', nargs=-1, required=True)
-@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]')
-@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True)
-@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=0, show_default=True)
-def slices(npz_files, save, dpi, smooth):
- """Visualize 1D slices based on the given .npz files."""
- cases = [dnnlib.EasyDict(npz_file=npz_file) for npz_file in npz_files]
- for c in cases:
- c.hmap, c.image_size = construct_heatmap(npz_file=c.npz_file, smooth=smooth)
- c.label = os.path.splitext(os.path.basename(c.npz_file))[0]
-
- # Check consistency.
- image_size = cases[0].image_size
- hmap_size = cases[0].hmap.shape[0]
- if any(c.image_size != image_size or c.hmap.shape[0] != hmap_size for c in cases):
- raise click.ClickException('All .npz must have the same resolution')
-
- # Setup plot.
- plt.figure(figsize=[12, 4.6], dpi=dpi, tight_layout=True)
- hmap_center = hmap_size // 2
- hmap_range = np.arange(hmap_center, hmap_size)
- freqs0 = np.linspace(0, image_size / 2, num=(hmap_size // 2 + 1), endpoint=True)
- freqs45 = np.linspace(0, image_size / np.sqrt(2), num=(hmap_size // 2 + 1), endpoint=True)
- xticks0 = np.linspace(freqs0[0], freqs0[-1], num=9, endpoint=True)
- xticks45 = np.round(np.linspace(freqs45[0], freqs45[-1], num=9, endpoint=True))
- yticks = np.linspace(-50, 30, num=9, endpoint=True)
-
- # Draw 0 degree slice.
- plt.subplot(1, 2, 1)
- plt.title('0\u00b0 slice')
- plt.xlim(xticks0[0], xticks0[-1])
- plt.ylim(yticks[0], yticks[-1])
- plt.xticks(xticks0)
- plt.yticks(yticks)
- for c in cases:
- plt.plot(freqs0, c.hmap[hmap_center, hmap_range], label=c.label)
- plt.grid()
- plt.legend(loc='upper right')
-
- # Draw 45 degree slice.
- plt.subplot(1, 2, 2)
- plt.title('45\u00b0 slice')
- plt.xlim(xticks45[0], xticks45[-1])
- plt.ylim(yticks[0], yticks[-1])
- plt.xticks(xticks45)
- plt.yticks(yticks)
- for c in cases:
- plt.plot(freqs45, c.hmap[hmap_range, hmap_range], label=c.label)
- plt.grid()
- plt.legend(loc='upper right')
-
- # Display or save.
- if save is None:
- plt.show()
- else:
- if os.path.dirname(save):
- os.makedirs(os.path.dirname(save), exist_ok=True)
- plt.savefig(save)
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- main() # pylint: disable=no-value-for-parameter
-
-#----------------------------------------------------------------------------
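The core loop of the `calc` command above averages the squared FFT magnitude over all images. A toy NumPy sketch of that accumulation (no Kaiser window, padding, or mean/std whitening, which the real script adds, and random arrays standing in for images):

```python
import numpy as np

# Toy version of the spectrum accumulation in `calc`: average |FFT|^2 over images.
rng = np.random.default_rng(0)
images = rng.standard_normal((16, 8, 8))  # 16 fake grayscale 8x8 "images"

spectrum = np.zeros((8, 8))
for img in images:
    spectrum += np.abs(np.fft.fft2(img)) ** 2  # power spectrum of one image
spectrum /= len(images)

print(spectrum.shape)  # (8, 8)
```

By Parseval's theorem the summed power equals `H*W` times the summed squared pixel values, which is a quick sanity check on the accumulation.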
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/unix.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/unix.py
deleted file mode 100644
index 2fbd4d4f367863ff0cf635fddc5f6e44383e7d94..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/platformdirs/unix.py
+++ /dev/null
@@ -1,181 +0,0 @@
-from __future__ import annotations
-
-import os
-import sys
-from configparser import ConfigParser
-from pathlib import Path
-
-from .api import PlatformDirsABC
-
-if sys.platform.startswith("linux"): # pragma: no branch # no op check, only to please the type checker
- from os import getuid
-else:
-
- def getuid() -> int:
- raise RuntimeError("should only be used on Linux")
-
-
-class Unix(PlatformDirsABC):
- """
- On Unix/Linux, we follow the
- `XDG Basedir Spec <https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html>`_. The spec allows
- overriding directories with environment variables. The examples shown are the default values, alongside the name of
- the environment variable that overrides them. Makes use of the
- `appname `,
- `version `,
- `multipath `,
- `opinion `.
- """
-
- @property
- def user_data_dir(self) -> str:
- """
- :return: data directory tied to the user, e.g. ``~/.local/share/$appname/$version`` or
- ``$XDG_DATA_HOME/$appname/$version``
- """
- path = os.environ.get("XDG_DATA_HOME", "")
- if not path.strip():
- path = os.path.expanduser("~/.local/share")
- return self._append_app_name_and_version(path)
-
- @property
- def site_data_dir(self) -> str:
- """
- :return: data directories shared by users (if `multipath ` is
- enabled and ``XDG_DATA_DIRS`` is set and a multi path the response is also a multi path separated by the OS
- path separator), e.g. ``/usr/local/share/$appname/$version`` or ``/usr/share/$appname/$version``
- """
- # XDG default for $XDG_DATA_DIRS; only first, if multipath is False
- path = os.environ.get("XDG_DATA_DIRS", "")
- if not path.strip():
- path = f"/usr/local/share{os.pathsep}/usr/share"
- return self._with_multi_path(path)
-
- def _with_multi_path(self, path: str) -> str:
- path_list = path.split(os.pathsep)
- if not self.multipath:
- path_list = path_list[0:1]
- path_list = [self._append_app_name_and_version(os.path.expanduser(p)) for p in path_list]
- return os.pathsep.join(path_list)
-
- @property
- def user_config_dir(self) -> str:
- """
- :return: config directory tied to the user, e.g. ``~/.config/$appname/$version`` or
- ``$XDG_CONFIG_HOME/$appname/$version``
- """
- path = os.environ.get("XDG_CONFIG_HOME", "")
- if not path.strip():
- path = os.path.expanduser("~/.config")
- return self._append_app_name_and_version(path)
-
- @property
- def site_config_dir(self) -> str:
- """
- :return: config directories shared by users (if `multipath `
- is enabled and ``XDG_CONFIG_DIRS`` is set and a multi path the response is also a multi path separated by the OS
- path separator), e.g. ``/etc/xdg/$appname/$version``
- """
- # XDG default for $XDG_CONFIG_DIRS only first, if multipath is False
- path = os.environ.get("XDG_CONFIG_DIRS", "")
- if not path.strip():
- path = "/etc/xdg"
- return self._with_multi_path(path)
-
- @property
- def user_cache_dir(self) -> str:
- """
- :return: cache directory tied to the user, e.g. ``~/.cache/$appname/$version`` or
- ``$XDG_CACHE_HOME/$appname/$version``
- """
- path = os.environ.get("XDG_CACHE_HOME", "")
- if not path.strip():
- path = os.path.expanduser("~/.cache")
- return self._append_app_name_and_version(path)
-
- @property
- def user_state_dir(self) -> str:
- """
- :return: state directory tied to the user, e.g. ``~/.local/state/$appname/$version`` or
- ``$XDG_STATE_HOME/$appname/$version``
- """
- path = os.environ.get("XDG_STATE_HOME", "")
- if not path.strip():
- path = os.path.expanduser("~/.local/state")
- return self._append_app_name_and_version(path)
-
- @property
- def user_log_dir(self) -> str:
- """
- :return: log directory tied to the user, same as `user_data_dir` if not opinionated else ``log`` in it
- """
- path = self.user_cache_dir
- if self.opinion:
- path = os.path.join(path, "log")
- return path
-
- @property
- def user_documents_dir(self) -> str:
- """
- :return: documents directory tied to the user, e.g. ``~/Documents``
- """
- documents_dir = _get_user_dirs_folder("XDG_DOCUMENTS_DIR")
- if documents_dir is None:
- documents_dir = os.environ.get("XDG_DOCUMENTS_DIR", "").strip()
- if not documents_dir:
- documents_dir = os.path.expanduser("~/Documents")
-
- return documents_dir
-
- @property
- def user_runtime_dir(self) -> str:
- """
- :return: runtime directory tied to the user, e.g. ``/run/user/$(id -u)/$appname/$version`` or
- ``$XDG_RUNTIME_DIR/$appname/$version``
- """
- path = os.environ.get("XDG_RUNTIME_DIR", "")
- if not path.strip():
- path = f"/run/user/{getuid()}"
- return self._append_app_name_and_version(path)
-
- @property
- def site_data_path(self) -> Path:
- """:return: data path shared by users. Only return first item, even if ``multipath`` is set to ``True``"""
- return self._first_item_as_path_if_multipath(self.site_data_dir)
-
- @property
- def site_config_path(self) -> Path:
- """:return: config path shared by the users. Only return first item, even if ``multipath`` is set to ``True``"""
- return self._first_item_as_path_if_multipath(self.site_config_dir)
-
- def _first_item_as_path_if_multipath(self, directory: str) -> Path:
- if self.multipath:
- # If multipath is True, the first path is returned.
- directory = directory.split(os.pathsep)[0]
- return Path(directory)
-
-
-def _get_user_dirs_folder(key: str) -> str | None:
- """Return directory from user-dirs.dirs config file. See https://freedesktop.org/wiki/Software/xdg-user-dirs/"""
- user_dirs_config_path = os.path.join(Unix().user_config_dir, "user-dirs.dirs")
- if os.path.exists(user_dirs_config_path):
- parser = ConfigParser()
-
- with open(user_dirs_config_path) as stream:
- # Add fake section header, so ConfigParser doesn't complain
- parser.read_string(f"[top]\n{stream.read()}")
-
- if key not in parser["top"]:
- return None
-
- path = parser["top"][key].strip('"')
- # Handle relative home paths
- path = path.replace("$HOME", os.path.expanduser("~"))
- return path
-
- return None
-
-
-__all__ = [
- "Unix",
-]
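The `_get_user_dirs_folder` helper above relies on a ConfigParser trick: `user-dirs.dirs` has no section header, so a fake `[top]` section is injected before parsing. A minimal stdlib sketch of the same pattern (the sample file contents here are illustrative):

```python
import os
from configparser import ConfigParser

# Illustrative contents of ~/.config/user-dirs.dirs (keys are real XDG names)
SAMPLE = 'XDG_DOCUMENTS_DIR="$HOME/Documents"\nXDG_DOWNLOAD_DIR="$HOME/Downloads"\n'

def lookup(key, content):
    parser = ConfigParser()
    # Inject a fake [top] section so ConfigParser accepts the bare KEY="VALUE"
    # lines -- the same trick _get_user_dirs_folder uses
    parser.read_string("[top]\n" + content)
    if key not in parser["top"]:
        return None
    path = parser["top"][key].strip('"')
    # Expand the $HOME placeholder into a real home path
    return path.replace("$HOME", os.path.expanduser("~"))

print(lookup("XDG_DOCUMENTS_DIR", SAMPLE))
```

Note that ConfigParser option lookups are case-insensitive by default, so the upper-case XDG keys resolve correctly.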
diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/__init__.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/__init__.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/allknowingroger/Image-Models-Test119/README.md b/spaces/allknowingroger/Image-Models-Test119/README.md
deleted file mode 100644
index 77af92a0c2ef86e2bfc609479cf59bb741dd3132..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test119/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-duplicated_from: allknowingroger/Image-Models-Test118
----
-
-
\ No newline at end of file
diff --git a/spaces/alphahg/academic-paper-translate-summary/app.py b/spaces/alphahg/academic-paper-translate-summary/app.py
deleted file mode 100644
index b1f7444bd5940242664b4a3e34b0fcaaa4522619..0000000000000000000000000000000000000000
--- a/spaces/alphahg/academic-paper-translate-summary/app.py
+++ /dev/null
@@ -1,167 +0,0 @@
-# %%
-import gradio as gr
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
-from nltk.tokenize import sent_tokenize
-import gc
-
-import nltk
-nltk.download('punkt')
-
-# from PyKakao import KoGPT
-# kogpt_api = KoGPT(service_key = "")
-import os
-import openai
-
-# Read the key from the environment; never hardcode API keys in source control
-openai.api_key = os.environ.get('OPENAI_API_KEY', '')
-gpt2_tokenizer = AutoTokenizer.from_pretrained('gpt2')
-
-#en2ko = 'alphahg/m2m100_418M-finetuned-en-to-ko-4770260'#'alphahg/mbart-large-50-finetuned-en-to-ko-8603428-finetuned-en-to-ko-9914408'
-en2ko = 'alphahg/mbart-large-50-finetuned-en-to-ko-8603428-finetuned-en-to-ko-9914408'
-ko2en = 'alphahg/opus-mt-ko-en-finetuned-ko-to-en-2780616'
-ensum = 'allenai/led-large-16384-arxiv'
-kosum = 'alphahg/pko-t5-small-finetuned-paper-4564652' #'lcw99/t5-base-korean-text-summary'
-
-#en_pipe = pipeline('translation', model=en2ko, tokenizer=en2ko, src_lang = "en", tgt_lang = "ko", device_map="auto")
-en2ko_model = AutoModelForSeq2SeqLM.from_pretrained(en2ko)
-
-en_pipe = pipeline('translation', model=en2ko_model, tokenizer=en2ko, src_lang = "en_XX", tgt_lang = "ko_KR")
-ko_pipe = pipeline('translation', model=ko2en, tokenizer=ko2en)
-style_pipe = pipeline('translation', model=en2ko_model, tokenizer=en2ko, src_lang = "ko_KR", tgt_lang = "ko_KR")
-
-en_sum = pipeline('summarization', model=ensum, tokenizer=ensum)
-ko_sum = pipeline('summarization', model=kosum, tokenizer=kosum)
-
-def len_tokens(text, pipe):
- return len(pipe.tokenizer(text)['input_ids'])
-
-def split_sent(sentences, pipe, max_len=256):
- if not sentences:
- return []
-
- paragraphs = []
- example = sentences[0]
- for i in range(1, len(sentences)):
- if len_tokens(example + ' ' + sentences[i], pipe) > max_len:
- paragraphs.append(example)
- example = sentences[i]
- else:
- example += ' ' + sentences[i]
-
- paragraphs.append(example)
-
- return paragraphs
-
-# chatbot = Chatbot({
-# #"session_token": "" # session token redacted; supply via env/config rather than hardcoding credentials
-
-# }, conversation_id=None, parent_id=None) # You can start a custom conversation
-# %%
-def translate(text, lang, gpt_fix=False):
- from_en = False if lang == '한영' else True
- sentences = sent_tokenize(text)
- #print(sentences)
- if not sentences:
- return ''
-
- paragraphs = split_sent(sentences, en_pipe, max_len=180) if from_en else split_sent(sentences, ko_pipe)
- #print(paragraphs)
-
- ret = []
- for text in paragraphs:
- result = en_pipe(text) if from_en else ko_pipe(text)
- ret.append(result[0]['translation_text'])
-
- translated = ' '.join(ret)
- gc.collect()
-
- if gpt_fix:
- if lang == '한영':
- prompt = 'Improve given formal article without adding:'
- elif lang == '영한':
- prompt = "추가적인 내용없이 주어진 글을 개선해:"
-
- def fix_sent(sent):
- number_of_tokens = len(gpt2_tokenizer(sent)['input_ids'])
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=prompt+'\n'+sent,
- temperature=0,
- max_tokens=number_of_tokens+128,
- top_p=1.0,
- frequency_penalty=0.0,
- presence_penalty=0.0
- )
-
- return response.choices[0].text.strip()
-
- # def fix_sent(sent):
- # generated = kogpt_api.generate(prompt+'\n'+sent, max_tokens=256)
- # return generated['generations'][0]['text']
-
- translated = fix_sent(translated)
-
- return translated
-
-#%%
-def translate_with_sum(text, lang, gpt_fix=False):
- from_en = False if lang == '한영' else True
-
- if lang == '영한':
- summary = en_sum(text, max_length=int(len_tokens(text, en_sum)/2)+32)
- text = summary[0]['summary_text']
-
- sentences = sent_tokenize(text)
- #print(sentences)
- if not sentences:
- return ''
-
- paragraphs = split_sent(sentences, en_pipe if from_en else ko_pipe)
- #print(paragraphs)
-
- ret = []
- for text in paragraphs:
- result = en_pipe(text) if from_en else ko_pipe(text)
- ret.append(result[0]['translation_text'])
-
- summarized = ' '.join(ret)
- if lang == '한영':
- summary = en_sum(summarized, max_length=int(len_tokens(summarized, en_sum)/2)+32)
- return summary[0]['summary_text']
-
- gc.collect()
- return summarized
-
-def summarize(text, lang):
- if lang == 'Korean':
- summarizer = ko_sum
- elif lang == 'English':
- summarizer = en_sum
-
- summary = summarizer(text, max_length=int(len_tokens(text, summarizer) * 0.7))[0]['summary_text']
- return summary
-
-def translate_styleonly(text):
- sentences = sent_tokenize(text)
- paragraphs = split_sent(sentences, style_pipe, max_len=180)
- #print(paragraphs)
-
- ret = []
- for text in paragraphs:
- result = style_pipe(text)
- ret.append(result[0]['translation_text'])
-
- translated = ' '.join(ret)
- gc.collect()
-
- return translated
-
-# %%
-interface1 = gr.Interface(fn=translate, inputs=["text", gr.Radio(["영한", "한영"], value='영한'), 'checkbox'], outputs="text")
-interface2 = gr.Interface(fn=translate_with_sum, inputs=["text", gr.Radio(["영한", "한영"], value='영한')], outputs="text")
-parallel_interface = gr.Parallel(interface1, interface2)
-
-summarize_interface = gr.Interface(fn=summarize, inputs=["text", gr.Radio(["Korean", "English"], value='Korean')], outputs="text")
-style_interface = gr.Interface(fn=translate_styleonly, inputs=["text"], outputs="text")
-
-demo = gr.TabbedInterface([parallel_interface, summarize_interface, style_interface], ['번역 및 요약', '요약', '스타일 번역'], css="footer {visibility: hidden}") # '요약'
-demo.launch() # Share the demo
-# %%
\ No newline at end of file
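The `split_sent` helper in the app above packs sentences greedily into chunks that stay under a tokenizer budget. The idea can be sketched without the transformers pipelines, using a whitespace word count as a stand-in tokenizer (an assumption for illustration only):

```python
# Greedy sentence packing under a token budget, mirroring split_sent above.
# len_tokens here is a whitespace stand-in for the real pipeline tokenizer.

def len_tokens(text):
    return len(text.split())

def split_sent(sentences, max_len=8):
    if not sentences:
        return []
    paragraphs = []
    chunk = sentences[0]
    for sent in sentences[1:]:
        # Start a new chunk when appending this sentence would exceed the budget
        if len_tokens(chunk + " " + sent) > max_len:
            paragraphs.append(chunk)
            chunk = sent
        else:
            chunk += " " + sent
    paragraphs.append(chunk)
    return paragraphs

sents = ["one two three", "four five", "six seven eight", "nine"]
print(split_sent(sents, max_len=5))  # → ['one two three four five', 'six seven eight nine']
```

Chunking this way keeps each translation call within the model's sequence limit while preserving sentence boundaries.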
diff --git a/spaces/amasad/sahil2801-replit-code-instruct-glaive/app.py b/spaces/amasad/sahil2801-replit-code-instruct-glaive/app.py
deleted file mode 100644
index 976f6a74229ccc0badfdaca594a78558f0afbab4..0000000000000000000000000000000000000000
--- a/spaces/amasad/sahil2801-replit-code-instruct-glaive/app.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import os
-import gradio as gr
-import torch
-
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-REPO = "sahil2801/replit-code-instruct-glaive"
-
-description = """# Code Generation by Instruction with sahil2801/replit-code-instruct-glaive
- This model is trained on a large amount of code and fine-tuned on code-instruct datasets. Type an instruction in the ### Input: section to receive generated code."""
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-tokenizer = AutoTokenizer.from_pretrained(REPO, trust_remote_code=True)
-model = AutoModelForCausalLM.from_pretrained(REPO, torch_dtype=torch.bfloat16, trust_remote_code=True)
-model.to(device)
-
-model.eval()
-
-custom_css = """
-.gradio-container {
- background-color: #0D1525;
- color:white
-}
-#orange-button {
- background: #F26207 !important;
- color: white;
-}
-.cm-gutters{
- border: none !important;
-}
-"""
-
-def post_processing(prompt, completion):
- return prompt + completion
-
-def code_generation(prompt, max_new_tokens=1024, temperature=0.2, top_p=0.9, eos_token_id=tokenizer.eos_token_id):
- input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
- generated_ids = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=True, use_cache=True, temperature=temperature, top_p=top_p, eos_token_id=eos_token_id)
- completion = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=False)
- return post_processing(prompt, completion)
-
-demo = gr.Blocks(
- css=custom_css
-)
-
-with demo:
- gr.Markdown(value=description)
- with gr.Row():
- input_col, settings_col = gr.Column(scale=6), gr.Column(scale=6)
- with input_col:
- code = gr.Code(lines=28,label='Input', value="Below is an instruction that describes a task, paired with an input that provides further context.\n Write a response that appropriately completes the request.\n\n ### Instruction:\nWrite a program to perform the given task.\n\n###Input: \n\n### Response:")
- with settings_col:
- with gr.Accordion("Generation Settings", open=True):
- max_new_tokens= gr.Slider(
- minimum=8,
- maximum=1024,
- step=1,
- value=48,
- label="Max Tokens",
- )
- temperature = gr.Slider(
- minimum=0.1,
- maximum=2.5,
- step=0.1,
- value=0.2,
- label="Temperature",
- )
-
- with gr.Row():
- run = gr.Button(elem_id="orange-button", value="Generate Response")
-
- event = run.click(code_generation, [code, max_new_tokens, temperature], code, api_name="predict")
-
-demo.queue(max_size=40).launch()
\ No newline at end of file
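`code_generation` above decodes only the newly generated tokens by slicing off the prompt length (`generated_ids[0][input_ids.shape[-1]:]`), since causal LMs echo the prompt ids at the start of the output. The same slicing with plain lists, as a minimal sketch (ids are made-up values):

```python
# Extract only the completion tokens by slicing off the prompt length,
# as code_generation does with generated_ids[0][input_ids.shape[-1]:].

prompt_ids = [101, 7, 8, 9]            # hypothetical tokenized prompt
generated_ids = prompt_ids + [42, 43, 44]  # model output echoes the prompt first

completion_ids = generated_ids[len(prompt_ids):]
print(completion_ids)  # → [42, 43, 44]
```

Decoding only `completion_ids` avoids repeating the prompt in the UI; the app then re-joins prompt and completion in `post_processing`.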
diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/backend.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/backend.py
deleted file mode 100644
index fd45b94d916512059e4d1f7850b63de6f9da6320..0000000000000000000000000000000000000000
--- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/server/backend.py
+++ /dev/null
@@ -1,176 +0,0 @@
-import re
-from datetime import datetime
-from g4f import ChatCompletion
-from flask import request, Response, stream_with_context
-from requests import get
-from server.config import special_instructions
-
-
-class Backend_Api:
- def __init__(self, bp, config: dict) -> None:
- """
- Initialize the Backend_Api class.
- :param app: Flask application instance
- :param config: Configuration dictionary
- """
- self.bp = bp
- self.routes = {
- '/backend-api/v2/conversation': {
- 'function': self._conversation,
- 'methods': ['POST']
- }
- }
-
- def _conversation(self):
- """
- Handles the conversation route.
-
- :return: Response object containing the generated conversation stream
- """
- conversation_id = request.json['conversation_id']
-
- try:
- jailbreak = request.json['jailbreak']
- model = request.json['model']
- messages = build_messages(jailbreak)
-
- # Generate response
- response = ChatCompletion.create(
- model=model,
- chatId=conversation_id,
- messages=messages
- )
-
- return Response(stream_with_context(generate_stream(response, jailbreak)), mimetype='text/event-stream')
-
- except Exception as e:
- print(e)
- print(e.__traceback__.tb_next)
-
- return {
- '_action': '_ask',
- 'success': False,
- "error": f"an error occurred {str(e)}"
- }, 400
-
-
-def build_messages(jailbreak):
- """
- Build the messages for the conversation.
-
- :param jailbreak: Jailbreak instruction string
- :return: List of messages for the conversation
- """
- _conversation = request.json['meta']['content']['conversation']
- internet_access = request.json['meta']['content']['internet_access']
- prompt = request.json['meta']['content']['parts'][0]
-
- # Add the existing conversation
- conversation = _conversation
-
- # Add web results if enabled
- if internet_access:
- current_date = datetime.now().strftime("%Y-%m-%d")
- query = f'Current date: {current_date}. ' + prompt["content"]
- search_results = fetch_search_results(query)
- conversation.extend(search_results)
-
- # Add jailbreak instructions if enabled
- if jailbreak_instructions := getJailbreak(jailbreak):
- conversation.extend(jailbreak_instructions)
-
- # Add the prompt
- conversation.append(prompt)
-
- # Reduce conversation size to avoid API Token quantity error
- if len(conversation) > 3:
- conversation = conversation[-4:]
-
- return conversation
-
-
-def fetch_search_results(query):
- """
- Fetch search results for a given query.
-
- :param query: Search query string
- :return: List of search results
- """
- search = get('https://ddg-api.herokuapp.com/search',
- params={
- 'query': query,
- 'limit': 3,
- })
-
- snippets = ""
- for index, result in enumerate(search.json()):
- snippet = f'[{index + 1}] "{result["snippet"]}" URL:{result["link"]}.'
- snippets += snippet
-
- response = "Here are some updated web searches. Use this to improve user response:"
- response += snippets
-
- return [{'role': 'system', 'content': response}]
-
-
-def generate_stream(response, jailbreak):
- """
- Generate the conversation stream.
-
- :param response: Response object from ChatCompletion.create
- :param jailbreak: Jailbreak instruction string
- :return: Generator object yielding messages in the conversation
- """
- if getJailbreak(jailbreak):
- response_jailbreak = ''
- jailbroken_checked = False
- for message in response:
- response_jailbreak += message
- if jailbroken_checked:
- yield message
- else:
- if response_jailbroken_success(response_jailbreak):
- jailbroken_checked = True
- if response_jailbroken_failed(response_jailbreak):
- yield response_jailbreak
- jailbroken_checked = True
- else:
- yield from response
-
-
-def response_jailbroken_success(response: str) -> bool:
- """Check if the response has been jailbroken.
-
- :param response: Response string
- :return: Boolean indicating if the response has been jailbroken
- """
- act_match = re.search(r'ACT:', response, flags=re.DOTALL)
- return bool(act_match)
-
-
-def response_jailbroken_failed(response):
- """
- Check if the response has not been jailbroken.
-
- :param response: Response string
- :return: Boolean indicating if the response has not been jailbroken
- """
- return False if len(response) < 4 else not (response.startswith("GPT:") or response.startswith("ACT:"))
-
-
-def getJailbreak(jailbreak):
- """
- Check if jailbreak instructions are provided.
-
- :param jailbreak: Jailbreak instruction string
- :return: Jailbreak instructions if provided, otherwise None
- """
- if jailbreak != "default" and jailbreak in special_instructions:
- # Check membership before mutating, so unknown keys cannot raise KeyError
- special_instructions[jailbreak][0]['content'] += special_instructions['two_responses_instruction']
- return special_instructions[jailbreak]
- return None
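The buffering logic in `generate_stream` above holds streamed chunks until a marker decides whether to pass them through. A simplified, pure-Python sketch of that buffer-until-marker pattern (marker and chunking rules assumed for illustration):

```python
# Buffer streamed chunks until a marker appears, then stream everything after
# it -- a simplified sketch of the generate_stream gating logic above.

def stream_after_marker(chunks, marker="ACT:"):
    buffer = ""
    unlocked = False
    for chunk in chunks:
        buffer += chunk
        if unlocked:
            yield chunk
        elif marker in buffer:
            # Marker found: stop buffering and stream subsequent chunks
            unlocked = True

print(list(stream_after_marker(["GPT: no ", "ACT:", " yes ", "more"])))  # → [' yes ', 'more']
```

The real function additionally yields the whole buffer when a failure check trips; this sketch keeps only the success path.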
diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/webui_sd_pipeline.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/webui_sd_pipeline.py
deleted file mode 100644
index 21b46152c3167038954f9f170a65647929c2e929..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/webui_sd_pipeline.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from modules.processing import StableDiffusionProcessingImg2Img
-from modules.shared import opts, sd_model
-import os
-
-def get_webui_sd_pipeline(args, root, frame):
- assert args.prompt is not None
-
- # Setup the pipeline
- p = StableDiffusionProcessingImg2Img(
- sd_model=sd_model,
- outpath_samples = opts.outdir_samples or opts.outdir_img2img_samples,
- #we'll setup the rest later
- )
-
- os.makedirs(args.outdir, exist_ok=True)
- # Duplicate assignments removed; the later raw width/height and the
- # n_samples batch size won before, so those values are kept
- p.width, p.height = args.W, args.H
- p.steps = args.steps
- p.seed = args.seed
- p.sampler_name = args.sampler
- p.batch_size = args.n_samples
- p.tiling = args.tiling
- p.restore_faces = args.restore_faces
- p.subseed = args.subseed
- p.subseed_strength = args.subseed_strength
- p.seed_resize_from_w = args.seed_resize_from_w
- p.seed_resize_from_h = args.seed_resize_from_h
- p.fill = args.fill
- p.ddim_eta = args.ddim_eta
- p.do_not_save_samples = not args.save_sample_per_step
- p.mask_blur = args.mask_overlay_blur
- p.extra_generation_params["Mask blur"] = args.mask_overlay_blur
- p.n_iter = 1
- if opts.img2img_fix_steps:
- p.denoising_strength = 1 / (1 - args.strength + 1.0/args.steps) #see https://github.com/deforum-art/deforum-for-automatic1111-webui/issues/3
- else:
- p.denoising_strength = 1 - args.strength
- p.cfg_scale = args.scale
- p.image_cfg_scale = args.pix2pix_img_cfg_scale
- p.outpath_samples = root.outpath_samples
-
-
- return p
\ No newline at end of file
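The pipeline above maps the deforum `strength` argument to webui's `denoising_strength` through two branches, depending on the `img2img_fix_steps` option (the compensated formula comes from the linked deforum issue #3). The arithmetic of both branches, isolated for a quick check with illustrative values:

```python
# The two strength -> denoising_strength mappings used above
# (values here are illustrative, not defaults).

def denoising_strength(strength, steps, fix_steps):
    if fix_steps:
        # Compensated mapping from deforum-for-automatic1111-webui issue #3
        return 1 / (1 - strength + 1.0 / steps)
    return 1 - strength

print(denoising_strength(0.6, 20, False))            # → 0.4
print(round(denoising_strength(0.6, 20, True), 4))   # 1 / 0.45
```

The uncompensated branch is simply the complement of the init-image strength; the compensated branch accounts for webui re-scaling the step count when "fix steps" is enabled.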
diff --git a/spaces/aodianyun/stable-diffusion-webui/test/basic_features/extras_test.py b/spaces/aodianyun/stable-diffusion-webui/test/basic_features/extras_test.py
deleted file mode 100644
index 0170c511fe54cc6bcf49ec7f75ca7c747de41db5..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/test/basic_features/extras_test.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import unittest
-import requests
-from gradio.processing_utils import encode_pil_to_base64
-from PIL import Image
-
-class TestExtrasWorking(unittest.TestCase):
- def setUp(self):
- self.url_extras_single = "http://localhost:7860/sdapi/v1/extra-single-image"
- self.extras_single = {
- "resize_mode": 0,
- "show_extras_results": True,
- "gfpgan_visibility": 0,
- "codeformer_visibility": 0,
- "codeformer_weight": 0,
- "upscaling_resize": 2,
- "upscaling_resize_w": 128,
- "upscaling_resize_h": 128,
- "upscaling_crop": True,
- "upscaler_1": "None",
- "upscaler_2": "None",
- "extras_upscaler_2_visibility": 0,
- "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png"))
- }
-
- def test_simple_upscaling_performed(self):
- self.extras_single["upscaler_1"] = "Lanczos"
- self.assertEqual(requests.post(self.url_extras_single, json=self.extras_single).status_code, 200)
-
-
-class TestPngInfoWorking(unittest.TestCase):
- def setUp(self):
- self.url_png_info = "http://localhost:7860/sdapi/v1/png-info"
- self.png_info = {
- "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png"))
- }
-
- def test_png_info_performed(self):
- self.assertEqual(requests.post(self.url_png_info, json=self.png_info).status_code, 200)
-
-
-class TestInterrogateWorking(unittest.TestCase):
- def setUp(self):
- self.url_interrogate = "http://localhost:7860/sdapi/v1/interrogate"
- self.interrogate = {
- "image": encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png")),
- "model": "clip"
- }
-
- def test_interrogate_performed(self):
- self.assertEqual(requests.post(self.url_interrogate, json=self.interrogate).status_code, 200)
-
-
-if __name__ == "__main__":
- unittest.main()
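The tests above post base64-encoded images to the sdapi endpoints via `encode_pil_to_base64`. The payload shape can be sketched with the stdlib alone, using placeholder bytes instead of a real PIL image (the exact gradio encoding, e.g. a data-URL prefix, is an assumption left out here):

```python
# Base64-encode raw image bytes into a JSON-serializable payload, similar in
# spirit to what encode_pil_to_base64 produces for the sdapi endpoints.
import base64

fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8  # placeholder bytes, not a real image
payload = {"image": base64.b64encode(fake_png).decode("ascii")}

# The server decodes the field back to the original bytes
assert base64.b64decode(payload["image"]) == fake_png
```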
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/Blowfish.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/Blowfish.py
deleted file mode 100644
index 6005ffe2b90694ae241c87404862f5f66db8f271..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/Blowfish.py
+++ /dev/null
@@ -1,159 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Cipher/Blowfish.py : Blowfish
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-"""
-Module's constants for the modes of operation supported with Blowfish:
-
-:var MODE_ECB: :ref:`Electronic Code Book (ECB) <ecb_mode>`
-:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) <cbc_mode>`
-:var MODE_CFB: :ref:`Cipher FeedBack (CFB) <cfb_mode>`
-:var MODE_OFB: :ref:`Output FeedBack (OFB) <ofb_mode>`
-:var MODE_CTR: :ref:`CounTer Mode (CTR) <ctr_mode>`
-:var MODE_OPENPGP: :ref:`OpenPGP Mode <openpgp_mode>`
-:var MODE_EAX: :ref:`EAX Mode <eax_mode>`
-"""
-
-import sys
-
-from Crypto.Cipher import _create_cipher
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
- VoidPointer, SmartPointer, c_size_t,
- c_uint8_ptr)
-
-_raw_blowfish_lib = load_pycryptodome_raw_lib(
- "Crypto.Cipher._raw_blowfish",
- """
- int Blowfish_start_operation(const uint8_t key[],
- size_t key_len,
- void **pResult);
- int Blowfish_encrypt(const void *state,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int Blowfish_decrypt(const void *state,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int Blowfish_stop_operation(void *state);
- """
- )
-
-
-def _create_base_cipher(dict_parameters):
- """This method instantiates and returns a smart pointer to
- a low-level base cipher. It will absorb named parameters in
- the process."""
-
- try:
- key = dict_parameters.pop("key")
- except KeyError:
- raise TypeError("Missing 'key' parameter")
-
- if len(key) not in key_size:
- raise ValueError("Incorrect Blowfish key length (%d bytes)" % len(key))
-
- start_operation = _raw_blowfish_lib.Blowfish_start_operation
- stop_operation = _raw_blowfish_lib.Blowfish_stop_operation
-
- void_p = VoidPointer()
- result = start_operation(c_uint8_ptr(key),
- c_size_t(len(key)),
- void_p.address_of())
- if result:
- raise ValueError("Error %X while instantiating the Blowfish cipher"
- % result)
- return SmartPointer(void_p.get(), stop_operation)
-
-
-def new(key, mode, *args, **kwargs):
- """Create a new Blowfish cipher
-
- :param key:
- The secret key to use in the symmetric cipher.
- Its length can vary from 5 to 56 bytes.
- :type key: bytes, bytearray, memoryview
-
- :param mode:
- The chaining mode to use for encryption or decryption.
- :type mode: One of the supported ``MODE_*`` constants
-
- :Keyword Arguments:
- * **iv** (*bytes*, *bytearray*, *memoryview*) --
- (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``,
- and ``MODE_OPENPGP`` modes).
-
- The initialization vector to use for encryption or decryption.
-
- For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long.
-
- For ``MODE_OPENPGP`` mode only,
- it must be 8 bytes long for encryption
- and 10 bytes for decryption (in the latter case, it is
- actually the *encrypted* IV which was prefixed to the ciphertext).
-
- If not provided, a random byte string is generated (you must then
- read its value with the :attr:`iv` attribute).
-
- * **nonce** (*bytes*, *bytearray*, *memoryview*) --
- (Only applicable for ``MODE_EAX`` and ``MODE_CTR``).
-
- A value that must never be reused for any other encryption done
- with this key.
-
- For ``MODE_EAX`` there are no
- restrictions on its length (recommended: **16** bytes).
-
- For ``MODE_CTR``, its length must be in the range **[0..7]**.
-
- If not provided for ``MODE_EAX``, a random byte string is generated (you
- can read it back via the ``nonce`` attribute).
-
- * **segment_size** (*integer*) --
- (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext
- are segmented in. It must be a multiple of 8.
- If not specified, it will be assumed to be 8.
-
- * **mac_len** : (*integer*) --
- (Only ``MODE_EAX``)
- Length of the authentication tag, in bytes.
- It must be no longer than 8 (default).
-
- * **initial_value** : (*integer*) --
- (Only ``MODE_CTR``). The initial value for the counter within
- the counter block. By default it is **0**.
-
- :Return: a Blowfish object, of the applicable mode.
- """
-
- return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs)
-
-MODE_ECB = 1
-MODE_CBC = 2
-MODE_CFB = 3
-MODE_OFB = 5
-MODE_CTR = 6
-MODE_OPENPGP = 7
-MODE_EAX = 9
-
-# Size of a data block (in bytes)
-block_size = 8
-# Size of a key (in bytes)
-key_size = range(4, 56 + 1)
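Because Blowfish is a 64-bit block cipher (`block_size = 8` above), plaintext for ECB/CBC modes must be padded to an 8-byte multiple. pycryptodome ships this in `Crypto.Util.Padding`; a minimal stdlib sketch of the PKCS#7 scheme it implements:

```python
# PKCS#7-style padding to Blowfish's 8-byte block, sketched with the stdlib.

BLOCK_SIZE = 8  # bytes, matching Blowfish.block_size

def pad(data: bytes) -> bytes:
    # Pad with n bytes of value n; a full block is added for aligned input
    n = BLOCK_SIZE - len(data) % BLOCK_SIZE
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    # Validate and strip the padding written by pad()
    n = data[-1]
    if not 1 <= n <= BLOCK_SIZE or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]

print(pad(b"hello"))  # → b'hello\x03\x03\x03'
assert unpad(pad(b"hello")) == b"hello"
```

Padding aligned input with a full extra block is what makes unpadding unambiguous.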
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py
deleted file mode 100644
index a710462ed68cf64ee3b5fc76d200e6061d648672..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/test_Salsa20.py
+++ /dev/null
@@ -1,367 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# SelfTest/Cipher/Salsa20.py: Self-test for the Salsa20 stream cipher
-#
-# Written in 2013 by Fabrizio Tarizzo
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Self-test suite for Crypto.Cipher.Salsa20"""
-
-import unittest
-
-from Crypto.Util.py3compat import bchr
-
-from Crypto.SelfTest.st_common import list_test_cases
-
-from Crypto.Cipher import Salsa20
-
-from .common import make_stream_tests
-
-# This is a list of (plaintext, ciphertext, key[, description[, params]])
-# tuples.
-test_data = [
- # Test vectors are taken from
- # http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/verified.test-vectors
- ( '00' * 512,
- '4dfa5e481da23ea09a31022050859936da52fcee218005164f267cb65f5cfd7f'
- + '2b4f97e0ff16924a52df269515110a07f9e460bc65ef95da58f740b7d1dbb0aa'
- + 'd64cec189c7eb8c6bbf3d7376c80a481d43e628701f6a27afb9fe23919f24114'
- + '8db44f70d7063efcc3dd55a0893a613c3c6fe1c127bd6f59910589293bb6ef9e'
- + 'e24819066dee1a64f49b0bbad5988635272b169af861f85df881939f29ada6fd'
- + '0241410e8d332ae4798d929434a2630de451ec4e0169694cbaa7ebb121ea6a2b'
- + 'da9c1581f429e0a00f7d67e23b730676783b262e8eb43a25f55fb90b3e753aef'
- + '8c6713ec66c51881111593ccb3e8cb8f8de124080501eeeb389c4bcb6977cf95'
- + '7d5789631eb4554400e1e025935dfa7b3e9039d61bdc58a8697d36815bf1985c'
- + 'efdf7ae112e5bb81e37ecf0616ce7147fc08a93a367e08631f23c03b00a8da2f'
- + 'aa5024e5c8d30aca43fc2d5082067b21b234bc741d68fb292c6012c3764ccee3'
- + '1e364a5403e00cfee338a21a01e7d3cefd5a770ca0ab48c435ea6116435f7ad8'
- + '30b217b49f978a68e207ed9f462af7fb195b2115fe8f24f152e4ddc32202d6f2'
- + 'b52fafbcfbc202d8a259a611e901d3f62d065eb13f09bbc45cd45119b843efaa'
- + 'b375703739daced4dd4059fd71c3c47fc2f9939670fad4a46066adcc6a564578'
- + '3308b90ffb72be04a6b147cbe38cc0c3b9267c296a92a7c69873f9f263be9703',
- '80000000000000000000000000000000',
- '128 bits key, set 1, vector 0',
- dict (iv='00'*8)),
-
- ( '00' * 512,
- 'e3be8fdd8beca2e3ea8ef9475b29a6e7003951e1097a5c38d23b7a5fad9f6844'
- + 'b22c97559e2723c7cbbd3fe4fc8d9a0744652a83e72a9c461876af4d7ef1a117'
- + '8da2b74eef1b6283e7e20166abcae538e9716e4669e2816b6b20c5c356802001'
- + 'cc1403a9a117d12a2669f456366d6ebb0f1246f1265150f793cdb4b253e348ae'
- + '203d89bc025e802a7e0e00621d70aa36b7e07cb1e7d5b38d5e222b8b0e4b8407'
- + '0142b1e29504767d76824850320b5368129fdd74e861b498e3be8d16f2d7d169'
- + '57be81f47b17d9ae7c4ff15429a73e10acf250ed3a90a93c711308a74c6216a9'
- + 'ed84cd126da7f28e8abf8bb63517e1ca98e712f4fb2e1a6aed9fdc73291faa17'
- + '958211c4ba2ebd5838c635edb81f513a91a294e194f1c039aeec657dce40aa7e'
- + '7c0af57cacefa40c9f14b71a4b3456a63e162ec7d8d10b8ffb1810d71001b618'
- + '2f9f73da53b85405c11f7b2d890fa8ae0c7f2e926d8a98c7ec4e91b65120e988'
- + '349631a700c6facec3471cb0413656e75e309456584084d7e12c5b43a41c43ed'
- + '9a048abd9b880da65f6a665a20fe7b77cd292fe62cae644b7f7df69f32bdb331'
- + '903e6505ce44fdc293920c6a9ec7057e23df7dad298f82ddf4efb7fdc7bfc622'
- + '696afcfd0cddcc83c7e77f11a649d79acdc3354e9635ff137e929933a0bd6f53'
- + '77efa105a3a4266b7c0d089d08f1e855cc32b15b93784a36e56a76cc64bc8477',
- '8000000000000000000000000000000000000000000000000000000000000000',
- '256 bits key, set 1, vector 0',
- dict (iv='00'*8)),
-
- ( '00' * 512,
- '169060ccb42bea7bee4d8012a02f3635eb7bca12859fa159cd559094b3507db8'
- + '01735d1a1300102a9c9415546829cbd2021ba217b39b81d89c55b13d0c603359'
- + '3f84159a3c84f4b4f4a0edcd9d38ff261a737909e0b66d68b5cac496f3a5be99'
- + 'cb12c321ab711afaab36cc0947955e1a9bb952ed54425e7711279fbc81bb83f5'
- + '6e55cea44e6daddb05858a153ea6213b3350c12aa1a83ef2726f09485fa71790'
- + 'f9b9f922c7dda1113b1f9d56658ed3402803f511bc1f122601d5e7f0ff036e23'
- + '23ef24bb24195b9fd574823cd8a40c29d86bd35c191e2038779ff696c712b6d8'
- + '2e7014dbe1ac5d527af076c088c4a8d44317958189f6ef54933a7e0816b5b916'
- + 'd8f12ed8afe9422b85e5cc9b8adec9d6cfabe8dbc1082bccc02f5a7266aa074c'
- + 'a284e583a35837798cc0e69d4ce937653b8cdd65ce414b89138615ccb165ad19'
- + '3c6b9c3d05eef4be921a10ea811fe61d11c6867600188e065daff90b509ec56b'
- + 'd41e7e8968c478c78d590c2d2ee24ea009c8f49bc3d81672cfc47895a9e21c9a'
- + '471ebf8e294bee5d2de436ac8d052bf31111b345f1da23c3a4d13b9fc5f0900a'
- + 'a298f98f538973b8fad40d4d159777de2cfe2a3dead1645ddb49794827dba040'
- + 'f70a0ff4ecd155e0f033604693a51e2363880e2ecf98699e7174af7c2c6b0fc6'
- + '59ae329599a3949272a37b9b2183a0910922a3f325ae124dcbdd735364055ceb',
- '09090909090909090909090909090909',
- '128 bits key, set 2, vector 9',
- dict (iv='00'*8)),
-
- ( '00' * 512,
- '7041e747ceb22ed7812985465f50333124f971da1c5d6efe5ca201b886f31046'
- + 'e757e5c3ec914f60ed1f6bce2819b6810953f12b8ba1199bf82d746a8b8a88f1'
- + '142002978ec4c35b95dc2c82990f9e847a0ab45f2ca72625f5190c820f29f3aa'
- + 'f5f0b5572b06b70a144f2a240c3b3098d4831fa1ce1459f8d1df226a6a79b0ab'
- + '41e91799ef31b5ff3d756c19126b19025858ee70fbd69f2be955cb011c005e31'
- + '32b271b378f39b0cb594e95c99ce6ff17735a541891845bbf0450afcb4a850b9'
- + '4ee90afb713ae7e01295c74381180a3816d7020d5a396c0d97aaa783eaabb6ec'
- + '44d5111157f2212d1b1b8fca7893e8b520cd482418c272ab119b569a2b9598eb'
- + '355624d12e79adab81153b58cd22eaf1b2a32395dedc4a1c66f4d274070b9800'
- + 'ea95766f0245a8295f8aadb36ddbbdfa936417c8dbc6235d19494036964d3e70'
- + 'b125b0f800c3d53881d9d11e7970f827c2f9556935cd29e927b0aceb8cae5fd4'
- + '0fd88a8854010a33db94c96c98735858f1c5df6844f864feaca8f41539313e7f'
- + '3c0610214912cd5e6362197646207e2d64cd5b26c9dfe0822629dcbeb16662e8'
- + '9ff5bf5cf2e499138a5e27bd5027329d0e68ddf53103e9e409523662e27f61f6'
- + '5cf38c1232023e6a6ef66c315bcb2a4328642faabb7ca1e889e039e7c444b34b'
- + 'b3443f596ac730f3df3dfcdb343c307c80f76e43e8898c5e8f43dc3bb280add0',
- '0909090909090909090909090909090909090909090909090909090909090909',
- '256 bits key, set 2, vector 9',
- dict (iv='00'*8)),
-
- ( '00' * 1024,
- '71daee5142d0728b41b6597933ebf467e43279e30978677078941602629cbf68'
- + 'b73d6bd2c95f118d2b3e6ec955dabb6dc61c4143bc9a9b32b99dbe6866166dc0'
- + '8631b7d6553050303d7252c264d3a90d26c853634813e09ad7545a6ce7e84a5d'
- + 'fc75ec43431207d5319970b0faadb0e1510625bb54372c8515e28e2accf0a993'
- + '0ad15f431874923d2a59e20d9f2a5367dba6051564f150287debb1db536ff9b0'
- + '9ad981f25e5010d85d76ee0c305f755b25e6f09341e0812f95c94f42eead346e'
- + '81f39c58c5faa2c88953dc0cac90469db2063cb5cdb22c9eae22afbf0506fca4'
- + '1dc710b846fbdfe3c46883dd118f3a5e8b11b6afd9e71680d8666557301a2daa'
- + 'fb9496c559784d35a035360885f9b17bd7191977deea932b981ebdb29057ae3c'
- + '92cfeff5e6c5d0cb62f209ce342d4e35c69646ccd14e53350e488bb310a32f8b'
- + '0248e70acc5b473df537ced3f81a014d4083932bedd62ed0e447b6766cd2604b'
- + '706e9b346c4468beb46a34ecf1610ebd38331d52bf33346afec15eefb2a7699e'
- + '8759db5a1f636a48a039688e39de34d995df9f27ed9edc8dd795e39e53d9d925'
- + 'b278010565ff665269042f05096d94da3433d957ec13d2fd82a0066283d0d1ee'
- + 'b81bf0ef133b7fd90248b8ffb499b2414cd4fa003093ff0864575a43749bf596'
- + '02f26c717fa96b1d057697db08ebc3fa664a016a67dcef8807577cc3a09385d3'
- + 'f4dc79b34364bb3b166ce65fe1dd28e3950fe6fa81063f7b16ce1c0e6daac1f8'
- + '188455b77752045e863c9b256ad92bc6e2d08314c5bba191c274f42dfbb3d652'
- + 'bb771956555e880f84cd8b827a4c5a52f3a099fa0259bd4aac3efd541f191170'
- + '4412d6e85fbcc628b335875b9fef24807f6e1bc66c3186159e1e7f5a13913e02'
- + 'd241ce2efdbcaa275039fb14eac5923d17ffbc7f1abd3b45e92127575bfbabf9'
- + '3a257ebef0aa1437b326e41b585af572f7239c33b32981a1577a4f629b027e1e'
- + 'b49d58cc497e944d79cef44357c2bf25442ab779651e991147bf79d6fd3a8868'
- + '0cd3b1748e07fd10d78aceef6db8a5e563570d40127f754146c34a440f2a991a'
- + '23fa39d365141f255041f2135c5cba4373452c114da1801bacca38610e3a6524'
- + '2b822d32de4ab5a7d3cf9b61b37493c863bd12e2cae10530cddcda2cb7a5436b'
- + 'ef8988d4d24e8cdc31b2d2a3586340bc5141f8f6632d0dd543bfed81eb471ba1'
- + 'f3dc2225a15ffddcc03eb48f44e27e2aa390598adf83f15c6608a5f18d4dfcf0'
- + 'f547d467a4d70b281c83a595d7660d0b62de78b9cca023cca89d7b1f83484638'
- + '0e228c25f049184a612ef5bb3d37454e6cfa5b10dceda619d898a699b3c8981a'
- + '173407844bb89b4287bf57dd6600c79e352c681d74b03fa7ea0d7bf6ad69f8a6'
- + '8ecb001963bd2dd8a2baa0083ec09751cd9742402ad716be16d5c052304cfca1',
- '0F62B5085BAE0154A7FA4DA0F34699EC',
- '128 bits key, Set 6, vector# 3',
- dict (iv='288FF65DC42B92F9')),
-
- ( '00' * 1024,
- '5e5e71f90199340304abb22a37b6625bf883fb89ce3b21f54a10b81066ef87da'
- + '30b77699aa7379da595c77dd59542da208e5954f89e40eb7aa80a84a6176663f'
- + 'd910cde567cf1ff60f7040548d8f376bfd1f44c4774aac37410ede7d5c3463fc'
- + '4508a603201d8495ad257894e5eb1914b53e8da5e4bf2bc83ac87ce55cc67df7'
- + '093d9853d2a83a9c8be969175df7c807a17156df768445dd0874a9271c6537f5'
- + 'ce0466473582375f067fa4fcdaf65dbc0139cd75e8c21a482f28c0fb8c3d9f94'
- + '22606cc8e88fe28fe73ec3cb10ff0e8cc5f2a49e540f007265c65b7130bfdb98'
- + '795b1df9522da46e48b30e55d9f0d787955ece720205b29c85f3ad9be33b4459'
- + '7d21b54d06c9a60b04b8e640c64e566e51566730e86cf128ab14174f91bd8981'
- + 'a6fb00fe587bbd6c38b5a1dfdb04ea7e61536fd229f957aa9b070ca931358e85'
- + '11b92c53c523cb54828fb1513c5636fa9a0645b4a3c922c0db94986d92f314ff'
- + '7852c03b231e4dceea5dd8cced621869cff818daf3c270ff3c8be2e5c74be767'
- + 'a4e1fdf3327a934fe31e46df5a74ae2021cee021d958c4f615263d99a5ddae7f'
- + 'eab45e6eccbafefe4761c57750847b7e75ee2e2f14333c0779ce4678f47b1e1b'
- + '760a03a5f17d6e91d4b42313b3f1077ee270e432fe04917ed1fc8babebf7c941'
- + '42b80dfb44a28a2a3e59093027606f6860bfb8c2e5897078cfccda7314c70035'
- + 'f137de6f05daa035891d5f6f76e1df0fce1112a2ff0ac2bd3534b5d1bf4c7165'
- + 'fb40a1b6eacb7f295711c4907ae457514a7010f3a342b4427593d61ba993bc59'
- + '8bd09c56b9ee53aac5dd861fa4b4bb53888952a4aa9d8ca8671582de716270e1'
- + '97375b3ee49e51fa2bf4ef32015dd9a764d966aa2ae541592d0aa650849e99ca'
- + '5c6c39beebf516457cc32fe4c105bff314a12f1ec94bdf4d626f5d9b1cbbde42'
- + 'e5733f0885765ba29e2e82c829d312f5fc7e180679ac84826c08d0a644b326d0'
- + '44da0fdcc75fa53cfe4ced0437fa4df5a7ecbca8b4cb7c4a9ecf9a60d00a56eb'
- + '81da52adc21f508dbb60a9503a3cc94a896616d86020d5b0e5c637329b6d396a'
- + '41a21ba2c4a9493cf33fa2d4f10f77d5b12fdad7e478ccfe79b74851fc96a7ca'
- + '6320c5efd561a222c0ab0fb44bbda0e42149611d2262bb7d1719150fa798718a'
- + '0eec63ee297cad459869c8b0f06c4e2b56cbac03cd2605b2a924efedf85ec8f1'
- + '9b0b6c90e7cbd933223ffeb1b3a3f9677657905829294c4c70acdb8b0891b47d'
- + '0875d0cd6c0f4efe2917fc44b581ef0d1e4280197065d07da34ab33283364552'
- + 'efad0bd9257b059acdd0a6f246812feb69e7e76065f27dbc2eee94da9cc41835'
- + 'bf826e36e5cebe5d4d6a37a6a666246290ce51a0c082718ab0ec855668db1add'
- + 'a658e5f257e0db39384d02e6145c4c00eaa079098f6d820d872de711b6ed08cf',
- '0F62B5085BAE0154A7FA4DA0F34699EC3F92E5388BDE3184D72A7DD02376C91C',
- '256 bits key, Set 6, vector# 3',
- dict (iv='288FF65DC42B92F9')),
-
-]
-
-
-class KeyLength(unittest.TestCase):
-
- def runTest(self):
-
- nonce = bchr(0) * 8
- for key_length in (15, 30, 33):
- key = bchr(1) * key_length
- self.assertRaises(ValueError, Salsa20.new, key, nonce)
-
-
-class NonceTests(unittest.TestCase):
-
- def test_invalid_nonce_length(self):
- key = bchr(1) * 16
- self.assertRaises(ValueError, Salsa20.new, key, bchr(0) * 7)
- self.assertRaises(ValueError, Salsa20.new, key, bchr(0) * 9)
-
- def test_default_nonce(self):
-
- cipher1 = Salsa20.new(bchr(1) * 16)
- cipher2 = Salsa20.new(bchr(1) * 16)
- self.assertEqual(len(cipher1.nonce), 8)
- self.assertNotEqual(cipher1.nonce, cipher2.nonce)
-
-
-class ByteArrayTest(unittest.TestCase):
- """Verify we can encrypt or decrypt bytearrays"""
-
- def runTest(self):
-
- data = b"0123"
- key = b"9" * 32
- nonce = b"t" * 8
-
- # Encryption
- data_ba = bytearray(data)
- key_ba = bytearray(key)
- nonce_ba = bytearray(nonce)
-
- cipher1 = Salsa20.new(key=key, nonce=nonce)
- ct = cipher1.encrypt(data)
-
- cipher2 = Salsa20.new(key=key_ba, nonce=nonce_ba)
- key_ba[:1] = b'\xFF'
- nonce_ba[:1] = b'\xFF'
- ct_test = cipher2.encrypt(data_ba)
-
- self.assertEqual(ct, ct_test)
- self.assertEqual(cipher1.nonce, cipher2.nonce)
-
- # Decryption
- key_ba = bytearray(key)
- nonce_ba = bytearray(nonce)
- ct_ba = bytearray(ct)
-
- cipher3 = Salsa20.new(key=key_ba, nonce=nonce_ba)
- key_ba[:1] = b'\xFF'
- nonce_ba[:1] = b'\xFF'
- pt_test = cipher3.decrypt(ct_ba)
-
- self.assertEqual(data, pt_test)
-
-
-class MemoryviewTest(unittest.TestCase):
-    """Verify we can encrypt or decrypt memoryviews"""
-
- def runTest(self):
-
- data = b"0123"
- key = b"9" * 32
- nonce = b"t" * 8
-
- # Encryption
- data_mv = memoryview(bytearray(data))
- key_mv = memoryview(bytearray(key))
- nonce_mv = memoryview(bytearray(nonce))
-
- cipher1 = Salsa20.new(key=key, nonce=nonce)
- ct = cipher1.encrypt(data)
-
- cipher2 = Salsa20.new(key=key_mv, nonce=nonce_mv)
- key_mv[:1] = b'\xFF'
- nonce_mv[:1] = b'\xFF'
- ct_test = cipher2.encrypt(data_mv)
-
- self.assertEqual(ct, ct_test)
- self.assertEqual(cipher1.nonce, cipher2.nonce)
-
- # Decryption
- key_mv = memoryview(bytearray(key))
- nonce_mv = memoryview(bytearray(nonce))
- ct_mv = memoryview(bytearray(ct))
-
- cipher3 = Salsa20.new(key=key_mv, nonce=nonce_mv)
- key_mv[:1] = b'\xFF'
- nonce_mv[:1] = b'\xFF'
- pt_test = cipher3.decrypt(ct_mv)
-
- self.assertEqual(data, pt_test)
-
-
-class TestOutput(unittest.TestCase):
-
- def runTest(self):
- # Encrypt/Decrypt data and test output parameter
-
- key = b'4' * 32
- nonce = b'5' * 8
- cipher = Salsa20.new(key=key, nonce=nonce)
-
- pt = b'5' * 300
- ct = cipher.encrypt(pt)
-
- output = bytearray(len(pt))
- cipher = Salsa20.new(key=key, nonce=nonce)
- res = cipher.encrypt(pt, output=output)
- self.assertEqual(ct, output)
- self.assertEqual(res, None)
-
- cipher = Salsa20.new(key=key, nonce=nonce)
- res = cipher.decrypt(ct, output=output)
- self.assertEqual(pt, output)
- self.assertEqual(res, None)
-
- output = memoryview(bytearray(len(pt)))
- cipher = Salsa20.new(key=key, nonce=nonce)
- cipher.encrypt(pt, output=output)
- self.assertEqual(ct, output)
-
- cipher = Salsa20.new(key=key, nonce=nonce)
- cipher.decrypt(ct, output=output)
- self.assertEqual(pt, output)
-
- cipher = Salsa20.new(key=key, nonce=nonce)
- self.assertRaises(TypeError, cipher.encrypt, pt, output=b'0'*len(pt))
-
- cipher = Salsa20.new(key=key, nonce=nonce)
- self.assertRaises(TypeError, cipher.decrypt, ct, output=b'0'*len(ct))
-
- shorter_output = bytearray(len(pt) - 1)
-
- cipher = Salsa20.new(key=key, nonce=nonce)
- self.assertRaises(ValueError, cipher.encrypt, pt, output=shorter_output)
-
- cipher = Salsa20.new(key=key, nonce=nonce)
- self.assertRaises(ValueError, cipher.decrypt, ct, output=shorter_output)
-
-
-def get_tests(config={}):
- tests = make_stream_tests(Salsa20, "Salsa20", test_data)
- tests.append(KeyLength())
- tests += list_test_cases(NonceTests)
- tests.append(ByteArrayTest())
- tests.append(MemoryviewTest())
- tests.append(TestOutput())
-
- return tests
-
-
-if __name__ == '__main__':
- import unittest
- suite = lambda: unittest.TestSuite(get_tests())
- unittest.main(defaultTest='suite')
-
-# vim:set ts=4 sw=4 sts=4 expandtab:
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/StringTools.c b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/StringTools.c
deleted file mode 100644
index 35241c64a463a835f016f3369c22a10877b44dbf..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Utility/StringTools.c
+++ /dev/null
@@ -1,1195 +0,0 @@
-
-//////////////////// IncludeStringH.proto ////////////////////
-
-#include <string.h>
-
-//////////////////// IncludeCppStringH.proto ////////////////////
-
-#include <string>
-
-//////////////////// InitStrings.proto ////////////////////
-
-static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/
-
-//////////////////// InitStrings ////////////////////
-
-static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {
- while (t->p) {
- #if PY_MAJOR_VERSION < 3
- if (t->is_unicode) {
- *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL);
- } else if (t->intern) {
- *t->p = PyString_InternFromString(t->s);
- } else {
- *t->p = PyString_FromStringAndSize(t->s, t->n - 1);
- }
- #else /* Python 3+ has unicode identifiers */
- if (t->is_unicode | t->is_str) {
- if (t->intern) {
- *t->p = PyUnicode_InternFromString(t->s);
- } else if (t->encoding) {
- *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL);
- } else {
- *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1);
- }
- } else {
- *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1);
- }
- #endif
- if (!*t->p)
- return -1;
- // initialise cached hash value
- if (PyObject_Hash(*t->p) == -1)
- return -1;
- ++t;
- }
- return 0;
-}
-
-//////////////////// BytesContains.proto ////////////////////
-
-static CYTHON_INLINE int __Pyx_BytesContains(PyObject* bytes, char character); /*proto*/
-
-//////////////////// BytesContains ////////////////////
-//@requires: IncludeStringH
-
-static CYTHON_INLINE int __Pyx_BytesContains(PyObject* bytes, char character) {
- const Py_ssize_t length = PyBytes_GET_SIZE(bytes);
- char* char_start = PyBytes_AS_STRING(bytes);
- return memchr(char_start, (unsigned char)character, (size_t)length) != NULL;
-}
-
-
-//////////////////// PyUCS4InUnicode.proto ////////////////////
-
-static CYTHON_INLINE int __Pyx_UnicodeContainsUCS4(PyObject* unicode, Py_UCS4 character); /*proto*/
-
-//////////////////// PyUCS4InUnicode ////////////////////
-
-#if PY_VERSION_HEX < 0x03090000 || (defined(PyUnicode_WCHAR_KIND) && defined(PyUnicode_AS_UNICODE))
-
-#if PY_VERSION_HEX < 0x03090000
-#define __Pyx_PyUnicode_AS_UNICODE(op) PyUnicode_AS_UNICODE(op)
-#define __Pyx_PyUnicode_GET_SIZE(op) PyUnicode_GET_SIZE(op)
-#else
-// Avoid calling deprecated C-API functions in Py3.9+ that PEP-623 schedules for removal in Py3.12.
-// https://www.python.org/dev/peps/pep-0623/
-#define __Pyx_PyUnicode_AS_UNICODE(op) (((PyASCIIObject *)(op))->wstr)
-#define __Pyx_PyUnicode_GET_SIZE(op) ((PyCompactUnicodeObject *)(op))->wstr_length
-#endif
-
-#if !defined(Py_UNICODE_SIZE) || Py_UNICODE_SIZE == 2
-static int __Pyx_PyUnicodeBufferContainsUCS4_SP(Py_UNICODE* buffer, Py_ssize_t length, Py_UCS4 character) {
- /* handle surrogate pairs for Py_UNICODE buffers in 16bit Unicode builds */
- Py_UNICODE high_val, low_val;
- Py_UNICODE* pos;
- high_val = (Py_UNICODE) (0xD800 | (((character - 0x10000) >> 10) & ((1<<10)-1)));
- low_val = (Py_UNICODE) (0xDC00 | ( (character - 0x10000) & ((1<<10)-1)));
- for (pos=buffer; pos < buffer+length-1; pos++) {
- if (unlikely((high_val == pos[0]) & (low_val == pos[1]))) return 1;
- }
- return 0;
-}
-#endif
-
-static int __Pyx_PyUnicodeBufferContainsUCS4_BMP(Py_UNICODE* buffer, Py_ssize_t length, Py_UCS4 character) {
- Py_UNICODE uchar;
- Py_UNICODE* pos;
- uchar = (Py_UNICODE) character;
- for (pos=buffer; pos < buffer+length; pos++) {
- if (unlikely(uchar == pos[0])) return 1;
- }
- return 0;
-}
-#endif
-
-static CYTHON_INLINE int __Pyx_UnicodeContainsUCS4(PyObject* unicode, Py_UCS4 character) {
-#if CYTHON_PEP393_ENABLED
- const int kind = PyUnicode_KIND(unicode);
- #ifdef PyUnicode_WCHAR_KIND
- if (likely(kind != PyUnicode_WCHAR_KIND))
- #endif
- {
- Py_ssize_t i;
- const void* udata = PyUnicode_DATA(unicode);
- const Py_ssize_t length = PyUnicode_GET_LENGTH(unicode);
- for (i=0; i < length; i++) {
- if (unlikely(character == PyUnicode_READ(kind, udata, i))) return 1;
- }
- return 0;
- }
-#elif PY_VERSION_HEX >= 0x03090000
- #error Cannot use "UChar in Unicode" in Python 3.9 without PEP-393 unicode strings.
-#elif !defined(PyUnicode_AS_UNICODE)
- #error Cannot use "UChar in Unicode" in Python < 3.9 without Py_UNICODE support.
-#endif
-
-#if PY_VERSION_HEX < 0x03090000 || (defined(PyUnicode_WCHAR_KIND) && defined(PyUnicode_AS_UNICODE))
-#if !defined(Py_UNICODE_SIZE) || Py_UNICODE_SIZE == 2
- if ((sizeof(Py_UNICODE) == 2) && unlikely(character > 65535)) {
- return __Pyx_PyUnicodeBufferContainsUCS4_SP(
- __Pyx_PyUnicode_AS_UNICODE(unicode),
- __Pyx_PyUnicode_GET_SIZE(unicode),
- character);
- } else
-#endif
- {
- return __Pyx_PyUnicodeBufferContainsUCS4_BMP(
- __Pyx_PyUnicode_AS_UNICODE(unicode),
- __Pyx_PyUnicode_GET_SIZE(unicode),
- character);
-
- }
-#endif
-}
-
-
-//////////////////// PyUnicodeContains.proto ////////////////////
-
-static CYTHON_INLINE int __Pyx_PyUnicode_ContainsTF(PyObject* substring, PyObject* text, int eq) {
- int result = PyUnicode_Contains(text, substring);
- return unlikely(result < 0) ? result : (result == (eq == Py_EQ));
-}
-
-
-//////////////////// CStringEquals.proto ////////////////////
-
-static CYTHON_INLINE int __Pyx_StrEq(const char *, const char *); /*proto*/
-
-//////////////////// CStringEquals ////////////////////
-
-static CYTHON_INLINE int __Pyx_StrEq(const char *s1, const char *s2) {
- while (*s1 != '\0' && *s1 == *s2) { s1++; s2++; }
- return *s1 == *s2;
-}
-
-
-//////////////////// StrEquals.proto ////////////////////
-//@requires: BytesEquals
-//@requires: UnicodeEquals
-
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals
-#else
-#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals
-#endif
-
-
-//////////////////// UnicodeEquals.proto ////////////////////
-
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); /*proto*/
-
-//////////////////// UnicodeEquals ////////////////////
-//@requires: BytesEquals
-
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) {
-#if CYTHON_COMPILING_IN_PYPY
- return PyObject_RichCompareBool(s1, s2, equals);
-#else
-#if PY_MAJOR_VERSION < 3
- PyObject* owned_ref = NULL;
-#endif
- int s1_is_unicode, s2_is_unicode;
- if (s1 == s2) {
- /* as done by PyObject_RichCompareBool(); also catches the (interned) empty string */
- goto return_eq;
- }
- s1_is_unicode = PyUnicode_CheckExact(s1);
- s2_is_unicode = PyUnicode_CheckExact(s2);
-#if PY_MAJOR_VERSION < 3
- if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) {
- owned_ref = PyUnicode_FromObject(s2);
- if (unlikely(!owned_ref))
- return -1;
- s2 = owned_ref;
- s2_is_unicode = 1;
- } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) {
- owned_ref = PyUnicode_FromObject(s1);
- if (unlikely(!owned_ref))
- return -1;
- s1 = owned_ref;
- s1_is_unicode = 1;
- } else if (((!s2_is_unicode) & (!s1_is_unicode))) {
- return __Pyx_PyBytes_Equals(s1, s2, equals);
- }
-#endif
- if (s1_is_unicode & s2_is_unicode) {
- Py_ssize_t length;
- int kind;
- void *data1, *data2;
- if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0))
- return -1;
- length = __Pyx_PyUnicode_GET_LENGTH(s1);
- if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) {
- goto return_ne;
- }
-#if CYTHON_USE_UNICODE_INTERNALS
- {
- Py_hash_t hash1, hash2;
- #if CYTHON_PEP393_ENABLED
- hash1 = ((PyASCIIObject*)s1)->hash;
- hash2 = ((PyASCIIObject*)s2)->hash;
- #else
- hash1 = ((PyUnicodeObject*)s1)->hash;
- hash2 = ((PyUnicodeObject*)s2)->hash;
- #endif
- if (hash1 != hash2 && hash1 != -1 && hash2 != -1) {
- goto return_ne;
- }
- }
-#endif
- // len(s1) == len(s2) >= 1 (empty string is interned, and "s1 is not s2")
- kind = __Pyx_PyUnicode_KIND(s1);
- if (kind != __Pyx_PyUnicode_KIND(s2)) {
- goto return_ne;
- }
- data1 = __Pyx_PyUnicode_DATA(s1);
- data2 = __Pyx_PyUnicode_DATA(s2);
- if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) {
- goto return_ne;
- } else if (length == 1) {
- goto return_eq;
- } else {
- int result = memcmp(data1, data2, (size_t)(length * kind));
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_EQ) ? (result == 0) : (result != 0);
- }
- } else if ((s1 == Py_None) & s2_is_unicode) {
- goto return_ne;
- } else if ((s2 == Py_None) & s1_is_unicode) {
- goto return_ne;
- } else {
- int result;
- PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- if (!py_result)
- return -1;
- result = __Pyx_PyObject_IsTrue(py_result);
- Py_DECREF(py_result);
- return result;
- }
-return_eq:
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_EQ);
-return_ne:
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_NE);
-#endif
-}
-
-
-//////////////////// BytesEquals.proto ////////////////////
-
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); /*proto*/
-
-//////////////////// BytesEquals ////////////////////
-//@requires: IncludeStringH
-
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) {
-#if CYTHON_COMPILING_IN_PYPY
- return PyObject_RichCompareBool(s1, s2, equals);
-#else
- if (s1 == s2) {
- /* as done by PyObject_RichCompareBool(); also catches the (interned) empty string */
- return (equals == Py_EQ);
- } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) {
- const char *ps1, *ps2;
- Py_ssize_t length = PyBytes_GET_SIZE(s1);
- if (length != PyBytes_GET_SIZE(s2))
- return (equals == Py_NE);
- // len(s1) == len(s2) >= 1 (empty string is interned, and "s1 is not s2")
- ps1 = PyBytes_AS_STRING(s1);
- ps2 = PyBytes_AS_STRING(s2);
- if (ps1[0] != ps2[0]) {
- return (equals == Py_NE);
- } else if (length == 1) {
- return (equals == Py_EQ);
- } else {
- int result;
-#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000)
- Py_hash_t hash1, hash2;
- hash1 = ((PyBytesObject*)s1)->ob_shash;
- hash2 = ((PyBytesObject*)s2)->ob_shash;
- if (hash1 != hash2 && hash1 != -1 && hash2 != -1) {
- return (equals == Py_NE);
- }
-#endif
- result = memcmp(ps1, ps2, (size_t)length);
- return (equals == Py_EQ) ? (result == 0) : (result != 0);
- }
- } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) {
- return (equals == Py_NE);
- } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) {
- return (equals == Py_NE);
- } else {
- int result;
- PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
- if (!py_result)
- return -1;
- result = __Pyx_PyObject_IsTrue(py_result);
- Py_DECREF(py_result);
- return result;
- }
-#endif
-}
-
-//////////////////// GetItemIntByteArray.proto ////////////////////
-
-#define __Pyx_GetItemInt_ByteArray(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck) \
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ? \
- __Pyx_GetItemInt_ByteArray_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) : \
- (PyErr_SetString(PyExc_IndexError, "bytearray index out of range"), -1))
-
-static CYTHON_INLINE int __Pyx_GetItemInt_ByteArray_Fast(PyObject* string, Py_ssize_t i,
- int wraparound, int boundscheck);
-
-//////////////////// GetItemIntByteArray ////////////////////
-
-static CYTHON_INLINE int __Pyx_GetItemInt_ByteArray_Fast(PyObject* string, Py_ssize_t i,
- int wraparound, int boundscheck) {
- Py_ssize_t length;
- if (wraparound | boundscheck) {
- length = PyByteArray_GET_SIZE(string);
- if (wraparound & unlikely(i < 0)) i += length;
- if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) {
- return (unsigned char) (PyByteArray_AS_STRING(string)[i]);
- } else {
- PyErr_SetString(PyExc_IndexError, "bytearray index out of range");
- return -1;
- }
- } else {
- return (unsigned char) (PyByteArray_AS_STRING(string)[i]);
- }
-}
-
-
-//////////////////// SetItemIntByteArray.proto ////////////////////
-
-#define __Pyx_SetItemInt_ByteArray(o, i, v, type, is_signed, to_py_func, is_list, wraparound, boundscheck) \
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ? \
- __Pyx_SetItemInt_ByteArray_Fast(o, (Py_ssize_t)i, v, wraparound, boundscheck) : \
- (PyErr_SetString(PyExc_IndexError, "bytearray index out of range"), -1))
-
-static CYTHON_INLINE int __Pyx_SetItemInt_ByteArray_Fast(PyObject* string, Py_ssize_t i, unsigned char v,
- int wraparound, int boundscheck);
-
-//////////////////// SetItemIntByteArray ////////////////////
-
-static CYTHON_INLINE int __Pyx_SetItemInt_ByteArray_Fast(PyObject* string, Py_ssize_t i, unsigned char v,
- int wraparound, int boundscheck) {
- Py_ssize_t length;
- if (wraparound | boundscheck) {
- length = PyByteArray_GET_SIZE(string);
- if (wraparound & unlikely(i < 0)) i += length;
- if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) {
- PyByteArray_AS_STRING(string)[i] = (char) v;
- return 0;
- } else {
- PyErr_SetString(PyExc_IndexError, "bytearray index out of range");
- return -1;
- }
- } else {
- PyByteArray_AS_STRING(string)[i] = (char) v;
- return 0;
- }
-}
-
-
-//////////////////// GetItemIntUnicode.proto ////////////////////
-
-#define __Pyx_GetItemInt_Unicode(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck) \
- (__Pyx_fits_Py_ssize_t(i, type, is_signed) ? \
- __Pyx_GetItemInt_Unicode_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) : \
- (PyErr_SetString(PyExc_IndexError, "string index out of range"), (Py_UCS4)-1))
-
-static CYTHON_INLINE Py_UCS4 __Pyx_GetItemInt_Unicode_Fast(PyObject* ustring, Py_ssize_t i,
- int wraparound, int boundscheck);
-
-//////////////////// GetItemIntUnicode ////////////////////
-
-static CYTHON_INLINE Py_UCS4 __Pyx_GetItemInt_Unicode_Fast(PyObject* ustring, Py_ssize_t i,
- int wraparound, int boundscheck) {
- Py_ssize_t length;
- if (unlikely(__Pyx_PyUnicode_READY(ustring) < 0)) return (Py_UCS4)-1;
- if (wraparound | boundscheck) {
- length = __Pyx_PyUnicode_GET_LENGTH(ustring);
- if (wraparound & unlikely(i < 0)) i += length;
- if ((!boundscheck) || likely(__Pyx_is_valid_index(i, length))) {
- return __Pyx_PyUnicode_READ_CHAR(ustring, i);
- } else {
- PyErr_SetString(PyExc_IndexError, "string index out of range");
- return (Py_UCS4)-1;
- }
- } else {
- return __Pyx_PyUnicode_READ_CHAR(ustring, i);
- }
-}
-
-
-/////////////// decode_c_string_utf16.proto ///////////////
-
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = 0;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = -1;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) {
- int byteorder = 1;
- return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);
-}
-
-/////////////// decode_cpp_string.proto ///////////////
-//@requires: IncludeCppStringH
-//@requires: decode_c_bytes
-
-static CYTHON_INLINE PyObject* __Pyx_decode_cpp_string(
- std::string cppstring, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
- return __Pyx_decode_c_bytes(
- cppstring.data(), cppstring.size(), start, stop, encoding, errors, decode_func);
-}
-
-/////////////// decode_c_string.proto ///////////////
-
-static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
- const char* cstring, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors));
-
-/////////////// decode_c_string ///////////////
-//@requires: IncludeStringH
-//@requires: decode_c_string_utf16
-//@substitute: naming
-
-/* duplicate code to avoid calling strlen() if start >= 0 and stop >= 0 */
-static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
- const char* cstring, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
- Py_ssize_t length;
- if (unlikely((start < 0) | (stop < 0))) {
- size_t slen = strlen(cstring);
- if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) {
- PyErr_SetString(PyExc_OverflowError,
- "c-string too long to convert to Python");
- return NULL;
- }
- length = (Py_ssize_t) slen;
- if (start < 0) {
- start += length;
- if (start < 0)
- start = 0;
- }
- if (stop < 0)
- stop += length;
- }
- if (unlikely(stop <= start))
- return __Pyx_NewRef($empty_unicode);
- length = stop - start;
- cstring += start;
- if (decode_func) {
- return decode_func(cstring, length, errors);
- } else {
- return PyUnicode_Decode(cstring, length, encoding, errors);
- }
-}
-
-/////////////// decode_c_bytes.proto ///////////////
-
-static CYTHON_INLINE PyObject* __Pyx_decode_c_bytes(
- const char* cstring, Py_ssize_t length, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors));
-
-/////////////// decode_c_bytes ///////////////
-//@requires: decode_c_string_utf16
-//@substitute: naming
-
-static CYTHON_INLINE PyObject* __Pyx_decode_c_bytes(
- const char* cstring, Py_ssize_t length, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
- if (unlikely((start < 0) | (stop < 0))) {
- if (start < 0) {
- start += length;
- if (start < 0)
- start = 0;
- }
- if (stop < 0)
- stop += length;
- }
- if (stop > length)
- stop = length;
- if (unlikely(stop <= start))
- return __Pyx_NewRef($empty_unicode);
- length = stop - start;
- cstring += start;
- if (decode_func) {
- return decode_func(cstring, length, errors);
- } else {
- return PyUnicode_Decode(cstring, length, encoding, errors);
- }
-}
-
-/////////////// decode_bytes.proto ///////////////
-//@requires: decode_c_bytes
-
-static CYTHON_INLINE PyObject* __Pyx_decode_bytes(
- PyObject* string, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
- return __Pyx_decode_c_bytes(
- PyBytes_AS_STRING(string), PyBytes_GET_SIZE(string),
- start, stop, encoding, errors, decode_func);
-}
-
-/////////////// decode_bytearray.proto ///////////////
-//@requires: decode_c_bytes
-
-static CYTHON_INLINE PyObject* __Pyx_decode_bytearray(
- PyObject* string, Py_ssize_t start, Py_ssize_t stop,
- const char* encoding, const char* errors,
- PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
- return __Pyx_decode_c_bytes(
- PyByteArray_AS_STRING(string), PyByteArray_GET_SIZE(string),
- start, stop, encoding, errors, decode_func);
-}
-
-/////////////// PyUnicode_Substring.proto ///////////////
-
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Substring(
- PyObject* text, Py_ssize_t start, Py_ssize_t stop);
-
-/////////////// PyUnicode_Substring ///////////////
-//@substitute: naming
-
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Substring(
- PyObject* text, Py_ssize_t start, Py_ssize_t stop) {
- Py_ssize_t length;
- if (unlikely(__Pyx_PyUnicode_READY(text) == -1)) return NULL;
- length = __Pyx_PyUnicode_GET_LENGTH(text);
- if (start < 0) {
- start += length;
- if (start < 0)
- start = 0;
- }
- if (stop < 0)
- stop += length;
- else if (stop > length)
- stop = length;
- if (stop <= start)
- return __Pyx_NewRef($empty_unicode);
-#if CYTHON_PEP393_ENABLED
- return PyUnicode_FromKindAndData(PyUnicode_KIND(text),
- PyUnicode_1BYTE_DATA(text) + start*PyUnicode_KIND(text), stop-start);
-#else
- return PyUnicode_FromUnicode(PyUnicode_AS_UNICODE(text)+start, stop-start);
-#endif
-}
-
-
-/////////////// py_unicode_istitle.proto ///////////////
-
-// Py_UNICODE_ISTITLE() doesn't match unicode.istitle() as the latter
-// additionally allows character that comply with Py_UNICODE_ISUPPER()
-
-#if PY_VERSION_HEX < 0x030200A2
-static CYTHON_INLINE int __Pyx_Py_UNICODE_ISTITLE(Py_UNICODE uchar)
-#else
-static CYTHON_INLINE int __Pyx_Py_UNICODE_ISTITLE(Py_UCS4 uchar)
-#endif
-{
- return Py_UNICODE_ISTITLE(uchar) || Py_UNICODE_ISUPPER(uchar);
-}
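The difference the comment describes is visible at the Python level: `str.istitle()` accepts plain uppercase letters as well as true titlecase characters (category Lt), which is why the helper above OR-s in `Py_UNICODE_ISUPPER()`:

```python
# 'ǅ' (U+01C5) is a genuine titlecase character (Unicode category Lt);
# 'A' is merely uppercase, yet istitle() accepts both.
print('\u01c5'.istitle())  # → True
print('A'.istitle())       # → True (uppercase counts as titlecased)
print('a'.istitle())       # → False
```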
-
-
-/////////////// unicode_tailmatch.proto ///////////////
-
-static int __Pyx_PyUnicode_Tailmatch(
- PyObject* s, PyObject* substr, Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/
-
-/////////////// unicode_tailmatch ///////////////
-
-// Python's unicode.startswith() and unicode.endswith() support a
-// tuple of prefixes/suffixes, whereas it's much more common to
-// test for a single unicode string.
-
-static int __Pyx_PyUnicode_TailmatchTuple(PyObject* s, PyObject* substrings,
- Py_ssize_t start, Py_ssize_t end, int direction) {
- Py_ssize_t i, count = PyTuple_GET_SIZE(substrings);
- for (i = 0; i < count; i++) {
- Py_ssize_t result;
-#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- result = PyUnicode_Tailmatch(s, PyTuple_GET_ITEM(substrings, i),
- start, end, direction);
-#else
- PyObject* sub = PySequence_ITEM(substrings, i);
- if (unlikely(!sub)) return -1;
- result = PyUnicode_Tailmatch(s, sub, start, end, direction);
- Py_DECREF(sub);
-#endif
- if (result) {
- return (int) result;
- }
- }
- return 0;
-}
-
-static int __Pyx_PyUnicode_Tailmatch(PyObject* s, PyObject* substr,
- Py_ssize_t start, Py_ssize_t end, int direction) {
- if (unlikely(PyTuple_Check(substr))) {
- return __Pyx_PyUnicode_TailmatchTuple(s, substr, start, end, direction);
- }
- return (int) PyUnicode_Tailmatch(s, substr, start, end, direction);
-}
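The tuple-vs-single distinction that motivates the fast path above is the same one exposed by `str.startswith()`/`str.endswith()` at the Python level:

```python
s = "example.tar.gz"
print(s.endswith(".gz"))                       # single suffix → True
print(s.endswith((".zip", ".tar.gz")))         # tuple of suffixes → True
print(s.startswith(("http://", "https://")))   # → False
```

The helper dispatches to the tuple loop only in the (unlikely) tuple case and otherwise calls `PyUnicode_Tailmatch()` directly on the single string.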
-
-
-/////////////// bytes_tailmatch.proto ///////////////
-
-static int __Pyx_PyBytes_SingleTailmatch(PyObject* self, PyObject* arg,
- Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/
-static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr,
- Py_ssize_t start, Py_ssize_t end, int direction); /*proto*/
-
-/////////////// bytes_tailmatch ///////////////
-
-static int __Pyx_PyBytes_SingleTailmatch(PyObject* self, PyObject* arg,
- Py_ssize_t start, Py_ssize_t end, int direction) {
- const char* self_ptr = PyBytes_AS_STRING(self);
- Py_ssize_t self_len = PyBytes_GET_SIZE(self);
- const char* sub_ptr;
- Py_ssize_t sub_len;
- int retval;
-
- Py_buffer view;
- view.obj = NULL;
-
- if ( PyBytes_Check(arg) ) {
- sub_ptr = PyBytes_AS_STRING(arg);
- sub_len = PyBytes_GET_SIZE(arg);
- }
-#if PY_MAJOR_VERSION < 3
- // Python 2.x allows mixing unicode and str
- else if ( PyUnicode_Check(arg) ) {
- return (int) PyUnicode_Tailmatch(self, arg, start, end, direction);
- }
-#endif
- else {
- if (unlikely(PyObject_GetBuffer(arg, &view, PyBUF_SIMPLE) == -1))
- return -1;
- sub_ptr = (const char*) view.buf;
- sub_len = view.len;
- }
-
- if (end > self_len)
- end = self_len;
- else if (end < 0)
- end += self_len;
- if (end < 0)
- end = 0;
- if (start < 0)
- start += self_len;
- if (start < 0)
- start = 0;
-
- if (direction > 0) {
- /* endswith */
- if (end-sub_len > start)
- start = end - sub_len;
- }
-
- if (start + sub_len <= end)
- retval = !memcmp(self_ptr+start, sub_ptr, (size_t)sub_len);
- else
- retval = 0;
-
- if (view.obj)
- PyBuffer_Release(&view);
-
- return retval;
-}
-
-static int __Pyx_PyBytes_TailmatchTuple(PyObject* self, PyObject* substrings,
- Py_ssize_t start, Py_ssize_t end, int direction) {
- Py_ssize_t i, count = PyTuple_GET_SIZE(substrings);
- for (i = 0; i < count; i++) {
- int result;
-#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- result = __Pyx_PyBytes_SingleTailmatch(self, PyTuple_GET_ITEM(substrings, i),
- start, end, direction);
-#else
- PyObject* sub = PySequence_ITEM(substrings, i);
- if (unlikely(!sub)) return -1;
- result = __Pyx_PyBytes_SingleTailmatch(self, sub, start, end, direction);
- Py_DECREF(sub);
-#endif
- if (result) {
- return result;
- }
- }
- return 0;
-}
-
-static int __Pyx_PyBytes_Tailmatch(PyObject* self, PyObject* substr,
- Py_ssize_t start, Py_ssize_t end, int direction) {
- if (unlikely(PyTuple_Check(substr))) {
- return __Pyx_PyBytes_TailmatchTuple(self, substr, start, end, direction);
- }
-
- return __Pyx_PyBytes_SingleTailmatch(self, substr, start, end, direction);
-}
-
-
-/////////////// str_tailmatch.proto ///////////////
-
-static CYTHON_INLINE int __Pyx_PyStr_Tailmatch(PyObject* self, PyObject* arg, Py_ssize_t start,
- Py_ssize_t end, int direction); /*proto*/
-
-/////////////// str_tailmatch ///////////////
-//@requires: bytes_tailmatch
-//@requires: unicode_tailmatch
-
-static CYTHON_INLINE int __Pyx_PyStr_Tailmatch(PyObject* self, PyObject* arg, Py_ssize_t start,
- Py_ssize_t end, int direction)
-{
- // We do not use a C compiler macro here to avoid "unused function"
- // warnings for the *_Tailmatch() function that is not being used in
- // the specific CPython version. The C compiler will generate the same
- // code anyway, and will usually just remove the unused function.
- if (PY_MAJOR_VERSION < 3)
- return __Pyx_PyBytes_Tailmatch(self, arg, start, end, direction);
- else
- return __Pyx_PyUnicode_Tailmatch(self, arg, start, end, direction);
-}
-
-
-/////////////// bytes_index.proto ///////////////
-
-static CYTHON_INLINE char __Pyx_PyBytes_GetItemInt(PyObject* bytes, Py_ssize_t index, int check_bounds); /*proto*/
-
-/////////////// bytes_index ///////////////
-
-static CYTHON_INLINE char __Pyx_PyBytes_GetItemInt(PyObject* bytes, Py_ssize_t index, int check_bounds) {
- if (index < 0)
- index += PyBytes_GET_SIZE(bytes);
- if (check_bounds) {
- Py_ssize_t size = PyBytes_GET_SIZE(bytes);
- if (unlikely(!__Pyx_is_valid_index(index, size))) {
- PyErr_SetString(PyExc_IndexError, "string index out of range");
- return (char) -1;
- }
- }
- return PyBytes_AS_STRING(bytes)[index];
-}
-
-
-//////////////////// StringJoin.proto ////////////////////
-
-#if PY_MAJOR_VERSION < 3
-#define __Pyx_PyString_Join __Pyx_PyBytes_Join
-#define __Pyx_PyBaseString_Join(s, v) (PyUnicode_CheckExact(s) ? PyUnicode_Join(s, v) : __Pyx_PyBytes_Join(s, v))
-#else
-#define __Pyx_PyString_Join PyUnicode_Join
-#define __Pyx_PyBaseString_Join PyUnicode_Join
-#endif
-
-#if CYTHON_COMPILING_IN_CPYTHON
- #if PY_MAJOR_VERSION < 3
- #define __Pyx_PyBytes_Join _PyString_Join
- #else
- #define __Pyx_PyBytes_Join _PyBytes_Join
- #endif
-#else
-static CYTHON_INLINE PyObject* __Pyx_PyBytes_Join(PyObject* sep, PyObject* values); /*proto*/
-#endif
-
-
-//////////////////// StringJoin ////////////////////
-
-#if !CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyBytes_Join(PyObject* sep, PyObject* values) {
- return PyObject_CallMethodObjArgs(sep, PYIDENT("join"), values, NULL);
-}
-#endif
-
-
-/////////////// JoinPyUnicode.proto ///////////////
-
-static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength,
- Py_UCS4 max_char);
-
-/////////////// JoinPyUnicode ///////////////
-//@requires: IncludeStringH
-//@substitute: naming
-
-static PyObject* __Pyx_PyUnicode_Join(PyObject* value_tuple, Py_ssize_t value_count, Py_ssize_t result_ulength,
- CYTHON_UNUSED Py_UCS4 max_char) {
-#if CYTHON_USE_UNICODE_INTERNALS && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- PyObject *result_uval;
- int result_ukind;
- Py_ssize_t i, char_pos;
- void *result_udata;
-#if CYTHON_PEP393_ENABLED
- // Py 3.3+ (post PEP-393)
- result_uval = PyUnicode_New(result_ulength, max_char);
- if (unlikely(!result_uval)) return NULL;
- result_ukind = (max_char <= 255) ? PyUnicode_1BYTE_KIND : (max_char <= 65535) ? PyUnicode_2BYTE_KIND : PyUnicode_4BYTE_KIND;
- result_udata = PyUnicode_DATA(result_uval);
-#else
- // Py 2.x/3.2 (pre PEP-393)
- result_uval = PyUnicode_FromUnicode(NULL, result_ulength);
- if (unlikely(!result_uval)) return NULL;
- result_ukind = sizeof(Py_UNICODE);
- result_udata = PyUnicode_AS_UNICODE(result_uval);
-#endif
-
- char_pos = 0;
- for (i=0; i < value_count; i++) {
- int ukind;
- Py_ssize_t ulength;
- void *udata;
- PyObject *uval = PyTuple_GET_ITEM(value_tuple, i);
- if (unlikely(__Pyx_PyUnicode_READY(uval)))
- goto bad;
- ulength = __Pyx_PyUnicode_GET_LENGTH(uval);
- if (unlikely(!ulength))
- continue;
- if (unlikely(char_pos + ulength < 0))
- goto overflow;
- ukind = __Pyx_PyUnicode_KIND(uval);
- udata = __Pyx_PyUnicode_DATA(uval);
- if (!CYTHON_PEP393_ENABLED || ukind == result_ukind) {
- memcpy((char *)result_udata + char_pos * result_ukind, udata, (size_t) (ulength * result_ukind));
- } else {
- #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030300F0 || defined(_PyUnicode_FastCopyCharacters)
- _PyUnicode_FastCopyCharacters(result_uval, char_pos, uval, 0, ulength);
- #else
- Py_ssize_t j;
- for (j=0; j < ulength; j++) {
- Py_UCS4 uchar = __Pyx_PyUnicode_READ(ukind, udata, j);
- __Pyx_PyUnicode_WRITE(result_ukind, result_udata, char_pos+j, uchar);
- }
- #endif
- }
- char_pos += ulength;
- }
- return result_uval;
-overflow:
- PyErr_SetString(PyExc_OverflowError, "join() result is too long for a Python string");
-bad:
- Py_DECREF(result_uval);
- return NULL;
-#else
- // non-CPython fallback
- result_ulength++;
- value_count++;
- return PyUnicode_Join($empty_unicode, value_tuple);
-#endif
-}
-
-
-/////////////// BuildPyUnicode.proto ///////////////
-
-static PyObject* __Pyx_PyUnicode_BuildFromAscii(Py_ssize_t ulength, char* chars, int clength,
- int prepend_sign, char padding_char);
-
-/////////////// BuildPyUnicode ///////////////
-
-// Create a PyUnicode object from an ASCII char*, e.g. a formatted number.
-
-static PyObject* __Pyx_PyUnicode_BuildFromAscii(Py_ssize_t ulength, char* chars, int clength,
- int prepend_sign, char padding_char) {
- PyObject *uval;
- Py_ssize_t uoffset = ulength - clength;
-#if CYTHON_USE_UNICODE_INTERNALS
- Py_ssize_t i;
-#if CYTHON_PEP393_ENABLED
- // Py 3.3+ (post PEP-393)
- void *udata;
- uval = PyUnicode_New(ulength, 127);
- if (unlikely(!uval)) return NULL;
- udata = PyUnicode_DATA(uval);
-#else
- // Py 2.x/3.2 (pre PEP-393)
- Py_UNICODE *udata;
- uval = PyUnicode_FromUnicode(NULL, ulength);
- if (unlikely(!uval)) return NULL;
- udata = PyUnicode_AS_UNICODE(uval);
-#endif
- if (uoffset > 0) {
- i = 0;
- if (prepend_sign) {
- __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, 0, '-');
- i++;
- }
- for (; i < uoffset; i++) {
- __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, i, padding_char);
- }
- }
- for (i=0; i < clength; i++) {
- __Pyx_PyUnicode_WRITE(PyUnicode_1BYTE_KIND, udata, uoffset+i, chars[i]);
- }
-
-#else
- // non-CPython
- {
- PyObject *sign = NULL, *padding = NULL;
- uval = NULL;
- if (uoffset > 0) {
- prepend_sign = !!prepend_sign;
- if (uoffset > prepend_sign) {
- padding = PyUnicode_FromOrdinal(padding_char);
- if (likely(padding) && uoffset > prepend_sign + 1) {
- PyObject *tmp;
- PyObject *repeat = PyInt_FromSize_t(uoffset - prepend_sign);
- if (unlikely(!repeat)) goto done_or_error;
- tmp = PyNumber_Multiply(padding, repeat);
- Py_DECREF(repeat);
- Py_DECREF(padding);
- padding = tmp;
- }
- if (unlikely(!padding)) goto done_or_error;
- }
- if (prepend_sign) {
- sign = PyUnicode_FromOrdinal('-');
- if (unlikely(!sign)) goto done_or_error;
- }
- }
-
- uval = PyUnicode_DecodeASCII(chars, clength, NULL);
- if (likely(uval) && padding) {
- PyObject *tmp = PyNumber_Add(padding, uval);
- Py_DECREF(uval);
- uval = tmp;
- }
- if (likely(uval) && sign) {
- PyObject *tmp = PyNumber_Add(sign, uval);
- Py_DECREF(uval);
- uval = tmp;
- }
-done_or_error:
- Py_XDECREF(padding);
- Py_XDECREF(sign);
- }
-#endif
-
- return uval;
-}
-
-
-//////////////////// ByteArrayAppendObject.proto ////////////////////
-
-static CYTHON_INLINE int __Pyx_PyByteArray_AppendObject(PyObject* bytearray, PyObject* value);
-
-//////////////////// ByteArrayAppendObject ////////////////////
-//@requires: ByteArrayAppend
-
-static CYTHON_INLINE int __Pyx_PyByteArray_AppendObject(PyObject* bytearray, PyObject* value) {
- Py_ssize_t ival;
-#if PY_MAJOR_VERSION < 3
- if (unlikely(PyString_Check(value))) {
- if (unlikely(PyString_GET_SIZE(value) != 1)) {
- PyErr_SetString(PyExc_ValueError, "string must be of size 1");
- return -1;
- }
- ival = (unsigned char) (PyString_AS_STRING(value)[0]);
- } else
-#endif
-#if CYTHON_USE_PYLONG_INTERNALS
- if (likely(PyLong_CheckExact(value)) && likely(Py_SIZE(value) == 1 || Py_SIZE(value) == 0)) {
- if (Py_SIZE(value) == 0) {
- ival = 0;
- } else {
- ival = ((PyLongObject*)value)->ob_digit[0];
- if (unlikely(ival > 255)) goto bad_range;
- }
- } else
-#endif
- {
- // CPython calls PyNumber_Index() internally
- ival = __Pyx_PyIndex_AsSsize_t(value);
- if (unlikely(!__Pyx_is_valid_index(ival, 256))) {
- if (ival == -1 && PyErr_Occurred())
- return -1;
- goto bad_range;
- }
- }
- return __Pyx_PyByteArray_Append(bytearray, ival);
-bad_range:
- PyErr_SetString(PyExc_ValueError, "byte must be in range(0, 256)");
- return -1;
-}
-
-//////////////////// ByteArrayAppend.proto ////////////////////
-
-static CYTHON_INLINE int __Pyx_PyByteArray_Append(PyObject* bytearray, int value);
-
-//////////////////// ByteArrayAppend ////////////////////
-//@requires: ObjectHandling.c::PyObjectCallMethod1
-
-static CYTHON_INLINE int __Pyx_PyByteArray_Append(PyObject* bytearray, int value) {
- PyObject *pyval, *retval;
-#if CYTHON_COMPILING_IN_CPYTHON
- if (likely(__Pyx_is_valid_index(value, 256))) {
- Py_ssize_t n = Py_SIZE(bytearray);
- if (likely(n != PY_SSIZE_T_MAX)) {
- if (unlikely(PyByteArray_Resize(bytearray, n + 1) < 0))
- return -1;
- PyByteArray_AS_STRING(bytearray)[n] = value;
- return 0;
- }
- } else {
- PyErr_SetString(PyExc_ValueError, "byte must be in range(0, 256)");
- return -1;
- }
-#endif
- pyval = PyInt_FromLong(value);
- if (unlikely(!pyval))
- return -1;
- retval = __Pyx_PyObject_CallMethod1(bytearray, PYIDENT("append"), pyval);
- Py_DECREF(pyval);
- if (unlikely(!retval))
- return -1;
- Py_DECREF(retval);
- return 0;
-}
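The generic fallback path above is equivalent to calling `bytearray.append()` from Python, which enforces the same 0–255 range check:

```python
ba = bytearray(b"ab")
ba.append(99)          # valid: 0 <= value < 256
print(ba)              # → bytearray(b'abc')
try:
    ba.append(256)     # out of range → ValueError
except ValueError as exc:
    print(exc)
```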
-
-
-//////////////////// PyObjectFormat.proto ////////////////////
-
-#if CYTHON_USE_UNICODE_WRITER
-static PyObject* __Pyx_PyObject_Format(PyObject* s, PyObject* f);
-#else
-#define __Pyx_PyObject_Format(s, f) PyObject_Format(s, f)
-#endif
-
-//////////////////// PyObjectFormat ////////////////////
-
-#if CYTHON_USE_UNICODE_WRITER
-static PyObject* __Pyx_PyObject_Format(PyObject* obj, PyObject* format_spec) {
- int ret;
- _PyUnicodeWriter writer;
-
- if (likely(PyFloat_CheckExact(obj))) {
- // copied from CPython 3.5 "float__format__()" in floatobject.c
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x03040000
- _PyUnicodeWriter_Init(&writer, 0);
-#else
- _PyUnicodeWriter_Init(&writer);
-#endif
- ret = _PyFloat_FormatAdvancedWriter(
- &writer,
- obj,
- format_spec, 0, PyUnicode_GET_LENGTH(format_spec));
- } else if (likely(PyLong_CheckExact(obj))) {
- // copied from CPython 3.5 "long__format__()" in longobject.c
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x03040000
- _PyUnicodeWriter_Init(&writer, 0);
-#else
- _PyUnicodeWriter_Init(&writer);
-#endif
- ret = _PyLong_FormatAdvancedWriter(
- &writer,
- obj,
- format_spec, 0, PyUnicode_GET_LENGTH(format_spec));
- } else {
- return PyObject_Format(obj, format_spec);
- }
-
- if (unlikely(ret == -1)) {
- _PyUnicodeWriter_Dealloc(&writer);
- return NULL;
- }
- return _PyUnicodeWriter_Finish(&writer);
-}
-#endif
-
-
-//////////////////// PyObjectFormatSimple.proto ////////////////////
-
-#if CYTHON_COMPILING_IN_PYPY
- #define __Pyx_PyObject_FormatSimple(s, f) ( \
- likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) : \
- PyObject_Format(s, f))
-#elif PY_MAJOR_VERSION < 3
- // str is common in Py2, but formatting must return a Unicode string
- #define __Pyx_PyObject_FormatSimple(s, f) ( \
- likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) : \
- likely(PyString_CheckExact(s)) ? PyUnicode_FromEncodedObject(s, NULL, "strict") : \
- PyObject_Format(s, f))
-#elif CYTHON_USE_TYPE_SLOTS
- // Py3 nicely returns unicode strings from str() which makes this quite efficient for builtin types
- #define __Pyx_PyObject_FormatSimple(s, f) ( \
- likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) : \
- likely(PyLong_CheckExact(s)) ? PyLong_Type.tp_str(s) : \
- likely(PyFloat_CheckExact(s)) ? PyFloat_Type.tp_str(s) : \
- PyObject_Format(s, f))
-#else
- #define __Pyx_PyObject_FormatSimple(s, f) ( \
- likely(PyUnicode_CheckExact(s)) ? (Py_INCREF(s), s) : \
- PyObject_Format(s, f))
-#endif
-
-
-//////////////////// PyObjectFormatAndDecref.proto ////////////////////
-
-static CYTHON_INLINE PyObject* __Pyx_PyObject_FormatSimpleAndDecref(PyObject* s, PyObject* f);
-static CYTHON_INLINE PyObject* __Pyx_PyObject_FormatAndDecref(PyObject* s, PyObject* f);
-
-//////////////////// PyObjectFormatAndDecref ////////////////////
-
-static CYTHON_INLINE PyObject* __Pyx_PyObject_FormatSimpleAndDecref(PyObject* s, PyObject* f) {
- if (unlikely(!s)) return NULL;
- if (likely(PyUnicode_CheckExact(s))) return s;
- #if PY_MAJOR_VERSION < 3
- // str is common in Py2, but formatting must return a Unicode string
- if (likely(PyString_CheckExact(s))) {
- PyObject *result = PyUnicode_FromEncodedObject(s, NULL, "strict");
- Py_DECREF(s);
- return result;
- }
- #endif
- return __Pyx_PyObject_FormatAndDecref(s, f);
-}
-
-static CYTHON_INLINE PyObject* __Pyx_PyObject_FormatAndDecref(PyObject* s, PyObject* f) {
- PyObject *result = PyObject_Format(s, f);
- Py_DECREF(s);
- return result;
-}
-
-
-//////////////////// PyUnicode_Unicode.proto ////////////////////
-
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Unicode(PyObject *obj);/*proto*/
-
-//////////////////// PyUnicode_Unicode ////////////////////
-
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_Unicode(PyObject *obj) {
- if (unlikely(obj == Py_None))
- obj = PYUNICODE("None");
- return __Pyx_NewRef(obj);
-}
-
-
-//////////////////// PyObject_Unicode.proto ////////////////////
-
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyObject_Unicode(obj) \
- (likely(PyUnicode_CheckExact(obj)) ? __Pyx_NewRef(obj) : PyObject_Str(obj))
-#else
-#define __Pyx_PyObject_Unicode(obj) \
- (likely(PyUnicode_CheckExact(obj)) ? __Pyx_NewRef(obj) : PyObject_Unicode(obj))
-#endif
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_with_highlighted_segment.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_with_highlighted_segment.py
deleted file mode 100644
index 3326ac500cbc3fb309c714b06db37eefd7ae0cdb..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bar_chart_with_highlighted_segment.py
+++ /dev/null
@@ -1,30 +0,0 @@
-"""
-Bar Chart with Highlighted Segment
-----------------------------------
-This example shows a bar chart that highlights values beyond a threshold.
-"""
-import altair as alt
-import pandas as pd
-from vega_datasets import data
-
-source = data.wheat()
-threshold = pd.DataFrame([{"threshold": 90}])
-
-bars = alt.Chart(source).mark_bar().encode(
- x="year:O",
- y="wheat:Q",
-)
-
-highlight = alt.Chart(source).mark_bar(color="#e45755").encode(
- x='year:O',
- y='baseline:Q',
- y2='wheat:Q'
-).transform_filter(
- alt.datum.wheat > 90
-).transform_calculate("baseline", "90")
-
-rule = alt.Chart(threshold).mark_rule().encode(
- y='threshold:Q'
-)
-
-(bars + highlight + rule).properties(width=600)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/ParseTreeMatch.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/ParseTreeMatch.py
deleted file mode 100644
index bbda73e8f29f2636d9cad1351c3ac26d18f46d1c..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/tree/ParseTreeMatch.py
+++ /dev/null
@@ -1,118 +0,0 @@
-#
-# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved.
-# Use of this file is governed by the BSD 3-clause license that
-# can be found in the LICENSE.txt file in the project root.
-#
-
-
-#
-# Represents the result of matching a {@link ParseTree} against a tree pattern.
-#
-from io import StringIO
-from antlr4.tree.ParseTreePattern import ParseTreePattern
-from antlr4.tree.Tree import ParseTree
-
-
-class ParseTreeMatch(object):
-
- #
- # Constructs a new instance of {@link ParseTreeMatch} from the specified
- # parse tree and pattern.
- #
- # @param tree The parse tree to match against the pattern.
- # @param pattern The parse tree pattern.
- # @param labels A mapping from label names to collections of
- # {@link ParseTree} objects located by the tree pattern matching process.
- # @param mismatchedNode The first node which failed to match the tree
- # pattern during the matching process.
- #
- # @exception IllegalArgumentException if {@code tree} is {@code null}
- # @exception IllegalArgumentException if {@code pattern} is {@code null}
- # @exception IllegalArgumentException if {@code labels} is {@code null}
- #
- def __init__(self, tree:ParseTree, pattern:ParseTreePattern, labels:dict, mismatchedNode:ParseTree):
- if tree is None:
- raise Exception("tree cannot be null")
- if pattern is None:
- raise Exception("pattern cannot be null")
- if labels is None:
- raise Exception("labels cannot be null")
- self.tree = tree
- self.pattern = pattern
- self.labels = labels
- self.mismatchedNode = mismatchedNode
-
- #
- # Get the last node associated with a specific {@code label}.
- #
- # For example, for pattern {@code <id:ID>}, {@code get("id")} returns the
- # node matched for that {@code ID}. If more than one node
- # matched the specified label, only the last is returned. If there is
- # no node associated with the label, this returns {@code null}.
- #
- # Pattern tags like {@code <ID>} and {@code <expr>} without labels are
- # considered to be labeled with {@code ID} and {@code expr}, respectively.
- #
- # @param label The label to check.
- #
- # @return The last {@link ParseTree} to match a tag with the specified
- # label, or {@code null} if no parse tree matched a tag with the label.
- #
- def get(self, label:str):
- parseTrees = self.labels.get(label, None)
- if parseTrees is None or len(parseTrees)==0:
- return None
- else:
- return parseTrees[len(parseTrees)-1]
-
- #
- # Return all nodes matching a rule or token tag with the specified label.
- #
- # If the {@code label} is the name of a parser rule or token in the
- # grammar, the resulting list will contain both the parse trees matching
- # rule or tags explicitly labeled with the label and the complete set of
- # parse trees matching the labeled and unlabeled tags in the pattern for
- # the parser rule or token. For example, if {@code label} is {@code "foo"},
- # the result will contain all of the following.
- #
- #
- # - Parse tree nodes matching tags of the form {@code <foo:anyRuleName>} and
- #   {@code <foo:AnyTokenName>}.
- # - Parse tree nodes matching tags of the form {@code <anyLabel:foo>}.
- # - Parse tree nodes matching tags of the form {@code <foo>}.
- #
- #
- # @param label The label.
- #
- # @return A collection of all {@link ParseTree} nodes matching tags with
- # the specified {@code label}. If no nodes matched the label, an empty list
- # is returned.
- #
- def getAll(self, label:str):
- nodes = self.labels.get(label, None)
- if nodes is None:
- return list()
- else:
- return nodes
-
-
- #
- # Gets a value indicating whether the match operation succeeded.
- #
- # @return {@code true} if the match operation succeeded; otherwise,
- # {@code false}.
- #
- def succeeded(self):
- return self.mismatchedNode is None
-
- #
- # {@inheritDoc}
- #
- def __str__(self):
- with StringIO() as buf:
- buf.write("Match ")
- buf.write("succeeded" if self.succeeded() else "failed")
- buf.write("; found ")
- buf.write(str(len(self.labels)))
- buf.write(" labels")
- return buf.getvalue()
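The `get()`/`getAll()` semantics can be sketched with a plain dict; the `labels` mapping and node values here are hypothetical stand-ins for the parse trees collected by the pattern matcher:

```python
labels = {"id": ["x", "y", "z"]}

def get(label):
    # Last node for the label, or None — mirrors ParseTreeMatch.get()
    nodes = labels.get(label)
    return nodes[-1] if nodes else None

def get_all(label):
    # All nodes for the label, or an empty list — mirrors getAll()
    return labels.get(label) or []

print(get("id"))        # → "z" (last match wins)
print(get_all("expr"))  # → [] (unknown label, never None)
```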
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/func.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/func.py
deleted file mode 100644
index 0c09a60b4951019966a4c607ca2128ebee35c72a..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cachetools/func.py
+++ /dev/null
@@ -1,117 +0,0 @@
-"""`functools.lru_cache` compatible memoizing function decorators."""
-
-__all__ = ("fifo_cache", "lfu_cache", "lru_cache", "mru_cache", "rr_cache", "ttl_cache")
-
-import math
-import random
-import time
-
-try:
- from threading import RLock
-except ImportError: # pragma: no cover
- from dummy_threading import RLock
-
-from . import FIFOCache, LFUCache, LRUCache, MRUCache, RRCache, TTLCache
-from . import cached
-from . import keys
-
-
-class _UnboundTTLCache(TTLCache):
- def __init__(self, ttl, timer):
- TTLCache.__init__(self, math.inf, ttl, timer)
-
- @property
- def maxsize(self):
- return None
-
-
-def _cache(cache, maxsize, typed):
- def decorator(func):
- key = keys.typedkey if typed else keys.hashkey
- wrapper = cached(cache=cache, key=key, lock=RLock(), info=True)(func)
- wrapper.cache_parameters = lambda: {"maxsize": maxsize, "typed": typed}
- return wrapper
-
- return decorator
-
-
-def fifo_cache(maxsize=128, typed=False):
- """Decorator to wrap a function with a memoizing callable that saves
- up to `maxsize` results based on a First In First Out (FIFO)
- algorithm.
-
- """
- if maxsize is None:
- return _cache({}, None, typed)
- elif callable(maxsize):
- return _cache(FIFOCache(128), 128, typed)(maxsize)
- else:
- return _cache(FIFOCache(maxsize), maxsize, typed)
-
-
-def lfu_cache(maxsize=128, typed=False):
- """Decorator to wrap a function with a memoizing callable that saves
- up to `maxsize` results based on a Least Frequently Used (LFU)
- algorithm.
-
- """
- if maxsize is None:
- return _cache({}, None, typed)
- elif callable(maxsize):
- return _cache(LFUCache(128), 128, typed)(maxsize)
- else:
- return _cache(LFUCache(maxsize), maxsize, typed)
-
-
-def lru_cache(maxsize=128, typed=False):
- """Decorator to wrap a function with a memoizing callable that saves
- up to `maxsize` results based on a Least Recently Used (LRU)
- algorithm.
-
- """
- if maxsize is None:
- return _cache({}, None, typed)
- elif callable(maxsize):
- return _cache(LRUCache(128), 128, typed)(maxsize)
- else:
- return _cache(LRUCache(maxsize), maxsize, typed)
-
-
-def mru_cache(maxsize=128, typed=False):
- """Decorator to wrap a function with a memoizing callable that saves
- up to `maxsize` results based on a Most Recently Used (MRU)
- algorithm.
- """
- if maxsize is None:
- return _cache({}, None, typed)
- elif callable(maxsize):
- return _cache(MRUCache(128), 128, typed)(maxsize)
- else:
- return _cache(MRUCache(maxsize), maxsize, typed)
-
-
-def rr_cache(maxsize=128, choice=random.choice, typed=False):
- """Decorator to wrap a function with a memoizing callable that saves
- up to `maxsize` results based on a Random Replacement (RR)
- algorithm.
-
- """
- if maxsize is None:
- return _cache({}, None, typed)
- elif callable(maxsize):
- return _cache(RRCache(128, choice), 128, typed)(maxsize)
- else:
- return _cache(RRCache(maxsize, choice), maxsize, typed)
-
-
-def ttl_cache(maxsize=128, ttl=600, timer=time.monotonic, typed=False):
- """Decorator to wrap a function with a memoizing callable that saves
- up to `maxsize` results based on a Least Recently Used (LRU)
- algorithm with a per-item time-to-live (TTL) value.
- """
- if maxsize is None:
- return _cache(_UnboundTTLCache(ttl, timer), None, typed)
- elif callable(maxsize):
- return _cache(TTLCache(128, ttl, timer), 128, typed)(maxsize)
- else:
- return _cache(TTLCache(maxsize, ttl, timer), maxsize, typed)
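Each decorator above accepts `maxsize` as a callable so it works both bare (`@lru_cache`) and parameterized (`@lru_cache(maxsize=…)`). A self-contained sketch of that dispatch pattern — this is an illustration with a trivial dict cache, not the cachetools implementation:

```python
import functools

def simple_cache(maxsize=128):
    """Usable as @simple_cache or @simple_cache(maxsize=...)."""
    def decorator(func):
        cache = {}
        @functools.wraps(func)
        def wrapper(*args):
            if args not in cache:
                if maxsize is not None and len(cache) >= maxsize:
                    cache.pop(next(iter(cache)))  # evict oldest insertion
                cache[args] = func(*args)
            return cache[args]
        return wrapper
    if callable(maxsize):  # bare @simple_cache: maxsize is really the function
        func, maxsize = maxsize, 128
        return decorator(func)
    return decorator

@simple_cache                 # bare form
def square(x):
    return x * x

@simple_cache(maxsize=2)      # parameterized form
def double(x):
    return 2 * x

print(square(4), double(5))   # → 16 10
```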
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/ffiplatform.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/ffiplatform.py
deleted file mode 100644
index 85313460a69477513c8e00f4df430925f2c4ecc9..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/ffiplatform.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import sys, os
-from .error import VerificationError
-
-
-LIST_OF_FILE_NAMES = ['sources', 'include_dirs', 'library_dirs',
- 'extra_objects', 'depends']
-
-def get_extension(srcfilename, modname, sources=(), **kwds):
- _hack_at_distutils()
- from distutils.core import Extension
- allsources = [srcfilename]
- for src in sources:
- allsources.append(os.path.normpath(src))
- return Extension(name=modname, sources=allsources, **kwds)
-
-def compile(tmpdir, ext, compiler_verbose=0, debug=None):
- """Compile a C extension module using distutils."""
-
- _hack_at_distutils()
- saved_environ = os.environ.copy()
- try:
- outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
- outputfilename = os.path.abspath(outputfilename)
- finally:
- # workaround for a distutils bug where some env vars can
- # become longer and longer every time it is used
- for key, value in saved_environ.items():
- if os.environ.get(key) != value:
- os.environ[key] = value
- return outputfilename
-
-def _build(tmpdir, ext, compiler_verbose=0, debug=None):
- # XXX compact but horrible :-(
- from distutils.core import Distribution
- import distutils.errors, distutils.log
- #
- dist = Distribution({'ext_modules': [ext]})
- dist.parse_config_files()
- options = dist.get_option_dict('build_ext')
- if debug is None:
- debug = sys.flags.debug
- options['debug'] = ('ffiplatform', debug)
- options['force'] = ('ffiplatform', True)
- options['build_lib'] = ('ffiplatform', tmpdir)
- options['build_temp'] = ('ffiplatform', tmpdir)
- #
- try:
- old_level = distutils.log.set_threshold(0) or 0
- try:
- distutils.log.set_verbosity(compiler_verbose)
- dist.run_command('build_ext')
- cmd_obj = dist.get_command_obj('build_ext')
- [soname] = cmd_obj.get_outputs()
- finally:
- distutils.log.set_threshold(old_level)
- except (distutils.errors.CompileError,
- distutils.errors.LinkError) as e:
- raise VerificationError('%s: %s' % (e.__class__.__name__, e))
- #
- return soname
-
-try:
- from os.path import samefile
-except ImportError:
- def samefile(f1, f2):
- return os.path.abspath(f1) == os.path.abspath(f2)
-
-def maybe_relative_path(path):
- if not os.path.isabs(path):
- return path # already relative
- dir = path
- names = []
- while True:
- prevdir = dir
- dir, name = os.path.split(prevdir)
- if dir == prevdir or not dir:
- return path # failed to make it relative
- names.append(name)
- try:
- if samefile(dir, os.curdir):
- names.reverse()
- return os.path.join(*names)
- except OSError:
- pass
-
-# ____________________________________________________________
-
-try:
- int_or_long = (int, long)
- import cStringIO
-except NameError:
- int_or_long = int # Python 3
- import io as cStringIO
-
-def _flatten(x, f):
- if isinstance(x, str):
- f.write('%ds%s' % (len(x), x))
- elif isinstance(x, dict):
- keys = sorted(x.keys())
- f.write('%dd' % len(keys))
- for key in keys:
- _flatten(key, f)
- _flatten(x[key], f)
- elif isinstance(x, (list, tuple)):
- f.write('%dl' % len(x))
- for value in x:
- _flatten(value, f)
- elif isinstance(x, int_or_long):
- f.write('%di' % (x,))
- else:
- raise TypeError(
- "the keywords to verify() contains unsupported object %r" % (x,))
-
-def flatten(x):
- f = cStringIO.StringIO()
- _flatten(x, f)
- return f.getvalue()
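`flatten()` produces a compact, deterministic fingerprint of the `verify()` keywords (dict keys are sorted, so ordering does not matter). Reproducing the two helpers standalone to show the encoding:

```python
import io

def _flatten(x, f):
    # Length-prefixed type codes: Ns = str, Nd = dict, Nl = list/tuple, Ni = int
    if isinstance(x, str):
        f.write('%ds%s' % (len(x), x))
    elif isinstance(x, dict):
        keys = sorted(x.keys())
        f.write('%dd' % len(keys))
        for key in keys:
            _flatten(key, f)
            _flatten(x[key], f)
    elif isinstance(x, (list, tuple)):
        f.write('%dl' % len(x))
        for value in x:
            _flatten(value, f)
    elif isinstance(x, int):
        f.write('%di' % (x,))

def flatten(x):
    f = io.StringIO()
    _flatten(x, f)
    return f.getvalue()

print(flatten({"sources": ["a.c", "b.c"], "debug": 1}))
# → "2d5sdebug1i7ssources2l3sa.c3sb.c"
```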
-
-def _hack_at_distutils():
- # Windows-only workaround for some configurations: see
- # https://bugs.python.org/issue23246 (Python 2.7 with
- # a specific MS compiler suite download)
- if sys.platform == "win32":
- try:
- import setuptools # for side-effects, patches distutils
- except ImportError:
- pass
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/byte_utils.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/byte_utils.py
deleted file mode 100644
index a305c080926c2d094b7e8ae48f5331da82025a75..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/byte_utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import re
-
-
-WHITESPACE_NORMALIZER = re.compile(r"\s+")
-SPACE = chr(32)
-SPACE_ESCAPE = chr(9601)
-# excluding non-breaking space (160) here
-PRINTABLE_LATIN = set(
- list(range(32, 126 + 1)) + list(range(161, 172 + 1)) + list(range(174, 255 + 1))
-)
-BYTE_TO_BCHAR = {
- b: chr(b) if b in PRINTABLE_LATIN else chr(256 + b) for b in range(256)
-}
-BCHAR_TO_BYTE = {bc: b for b, bc in BYTE_TO_BCHAR.items()}
-
-
-def byte_encode(x: str) -> str:
- normalized = WHITESPACE_NORMALIZER.sub(SPACE, x)
- return "".join([BYTE_TO_BCHAR[b] for b in normalized.encode("utf-8")])
-
-
-def byte_decode(x: str) -> str:
- try:
- return bytes([BCHAR_TO_BYTE[bc] for bc in x]).decode("utf-8")
- except ValueError:
- return ""
-
-
-def smart_byte_decode(x: str) -> str:
- output = byte_decode(x)
- if output == "":
- # dynamic programming: recover the maximum number of valid chars if decoding failed
- n_bytes = len(x)
- f = [0 for _ in range(n_bytes + 1)]
- pt = [0 for _ in range(n_bytes + 1)]
- for i in range(1, n_bytes + 1):
- f[i], pt[i] = f[i - 1], i - 1
- for j in range(1, min(4, i) + 1):
- if f[i - j] + 1 > f[i] and len(byte_decode(x[i - j : i])) > 0:
- f[i], pt[i] = f[i - j] + 1, i - j
- cur_pt = n_bytes
- while cur_pt > 0:
- if f[cur_pt] == f[pt[cur_pt]] + 1:
- output = byte_decode(x[pt[cur_pt] : cur_pt]) + output
- cur_pt = pt[cur_pt]
- return output
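The byte-level vocabulary above maps every byte to a unique printable character, shifting non-printable bytes up by 256, so arbitrary UTF-8 text can round-trip through a character-level model. A self-contained sketch of that round trip, with the tables rebuilt inline rather than imported from fairseq:

```python
import re

# Rebuilt inline from byte_utils.py above (no fairseq install assumed).
WHITESPACE_NORMALIZER = re.compile(r"\s+")
SPACE = chr(32)
# printable Latin range, excluding non-breaking space (160)
PRINTABLE_LATIN = set(
    list(range(32, 126 + 1)) + list(range(161, 172 + 1)) + list(range(174, 255 + 1))
)
BYTE_TO_BCHAR = {b: chr(b) if b in PRINTABLE_LATIN else chr(256 + b) for b in range(256)}
BCHAR_TO_BYTE = {bc: b for b, bc in BYTE_TO_BCHAR.items()}

def byte_encode(x: str) -> str:
    # collapse whitespace runs, then map each UTF-8 byte to its stand-in char
    normalized = WHITESPACE_NORMALIZER.sub(SPACE, x)
    return "".join(BYTE_TO_BCHAR[b] for b in normalized.encode("utf-8"))

def byte_decode(x: str) -> str:
    # map stand-in chars back to bytes; empty string signals a broken sequence
    try:
        return bytes(BCHAR_TO_BYTE[bc] for bc in x).decode("utf-8")
    except ValueError:
        return ""

# 'é' becomes two stand-in chars, one per UTF-8 byte, and decodes back exactly
encoded = byte_encode("héllo")
print(encoded)  # hÃ©llo
assert byte_decode(encoded) == "héllo"
```

Non-printable bytes such as `\n` (10) are represented as `chr(256 + 10)`, keeping every encoded symbol printable and unambiguous.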
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/__init__.py
deleted file mode 100644
index 9b61fafadba28f65fe78a28b2099368b83cfcf41..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .huffman_coder import HuffmanCodeBuilder, HuffmanCoder
-from .huffman_mmap_indexed_dataset import (
- HuffmanMMapIndex,
- HuffmanMMapIndexedDataset,
- HuffmanMMapIndexedDatasetBuilder,
- vocab_file_path,
-)
-
-__all__ = [
- "HuffmanCoder",
- "HuffmanCodeBuilder",
- "HuffmanMMapIndexedDatasetBuilder",
- "HuffmanMMapIndexedDataset",
- "HuffmanMMapIndex",
- "vocab_file_path",
-]
diff --git a/spaces/aryadytm/chatmagic-ai/main.py b/spaces/aryadytm/chatmagic-ai/main.py
deleted file mode 100644
index 101f5a43890b2c65a2efa10b298340ee86477d95..0000000000000000000000000000000000000000
--- a/spaces/aryadytm/chatmagic-ai/main.py
+++ /dev/null
@@ -1,72 +0,0 @@
-from PIL import Image
-
-import gradio as gr
-import random
-import time
-import os
-import requests
-
-
-CHATMAGIC_AI = os.environ["CHATMAGIC_AI"]
-
-markdown_text = """
-
-ChatMagic AI is available as an Android app for FREE. Download now to chat faster and better!
-- Google Play Store URL: **[CLICK HERE](https://bit.ly/googleplaystore-chatmagicai)**
-- Discord URL: **[CLICK HERE](https://bit.ly/discord-chatmagicai)**
-- Don't forget to **like** this space :)
-"""
-
-welcome_text = """
-Hello! I'm ChatMagic AI. I'm here to assist you. I can do the following:
-1. Answer questions and give explanations
-2. Assist in writing a text based content
-3. Follow simple instructions
-
-However, I still have limitations. I may write incorrect information or produce harmful instructions. Please use me with caution.
-""".strip()
-
-
-empty_history = [[None, welcome_text]]
-
-
-with gr.Blocks() as demo:
- gr.Markdown(markdown_text)
-
- chatbot = gr.Chatbot(empty_history, label="Chat with ChatMagic AI")
- msg = gr.Textbox(label="Enter your question here")
-
- with gr.Row() as row:
- btn_ask = gr.Button("Ask", variant="primary")
- btn_clear = gr.Button("Clear")
-
- def user(user_message: str, history: list) -> tuple[str, list]:
- return "", history + [[user_message, None]]
-
- def bot(history: list):
- bot_message = "An error has occured. Please try again."
-
- try:
- bot_message = requests.post(CHATMAGIC_AI, json={"question": history[-1][0]}).json()["answer"]
- except Exception:
- pass
-
- history[-1][1] = bot_message
- return history
-
- msg.submit(
- fn=user, inputs=[msg, chatbot], outputs=[msg, chatbot], queue=True).then(
- fn=bot, inputs=chatbot, outputs=chatbot
- )
-
- btn_ask.click(
- fn=user, inputs=[msg, chatbot], outputs=[msg, chatbot], queue=True).then(
- fn=bot, inputs=chatbot, outputs=chatbot
- )
-
- btn_clear.click(
- fn=lambda: empty_history, inputs=None, outputs=chatbot, queue=False)
-
-
-demo.queue(concurrency_count=1)
-demo.launch(server_name="0.0.0.0")
\ No newline at end of file
diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/app.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/app.py
deleted file mode 100644
index f60fd3e8acba47d269b834f01b4f918def227119..0000000000000000000000000000000000000000
--- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/app.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import io
-import os
-
-# os.system("wget -P cvec/ https://huggingface.co/spaces/innnky/nanami/resolve/main/checkpoint_best_legacy_500.pt")
-import gradio as gr
-import librosa
-import numpy as np
-import soundfile
-from inference.infer_tool import Svc
-import logging
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('markdown_it').setLevel(logging.WARNING)
-logging.getLogger('urllib3').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-config_path = "configs/config.json"
-
-model = Svc("logs/44k/G_90400.pth", "configs/config.json", cluster_model_path="logs/44k/kmeans_10000.pt")
-
-
-
-def vc_fn(sid, input_audio, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale):
- if input_audio is None:
- return "没有上传待处理的音频哦", None
- sampling_rate, audio = input_audio
- # print(audio.shape,sampling_rate)
- duration = audio.shape[0] / sampling_rate
- if duration > 100:
- return "Please upload audio shorter than 100s; convert longer audio locally", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- print(audio.shape)
- out_wav_path = "temp.wav"
- soundfile.write(out_wav_path, audio, 16000, format="wav")
- print( cluster_ratio, auto_f0, noise_scale)
- _audio = model.slice_inference(out_wav_path, sid, vc_transform, slice_db, cluster_ratio, auto_f0, noise_scale)
- return "转换完成", (44100, _audio)
-
-
-app = gr.Blocks()
-with app:
- with gr.Tabs():
- with gr.TabItem("Single window awa"):
- gr.Markdown(value="""
- 香风智乃sovits4.0 在线demo 小孩子不懂事做着玩的
-
- 备注:
-
- 1. 上传音频必须为`.mp3`或者`.wav`格式 `单声道` `44100采样率`。
- 2. 音频文件应`小于100s`转换大于100s可以在AU/AudioLab中切片逐一上传。
- 3. 使用男性音频可以考虑使用 升降调+4或+6/开启f0预测,使用女性音频可以不做调整。
- 4. 在线版服务器为2核16G免费版,转换效率较慢请耐心等待。
- 5. 使用该模型请标注作者 **模型训练/数据集:INT16**
- 6. 语音模型转换出的音频请勿用于商业化,若有侵犯您的权利,请联系**leenight2016@outlook.com**
-
-
- 模型作者b站@INT16 关注喵https://space.bilibili.com/133434728
-
- Modified/Kangluted by LeeNight in 23.4.9
- """)
- spks = list(model.spk2id.keys())
- sid = gr.Dropdown(label="Voice", choices=spks, value=spks[0])
- vc_input3 = gr.Audio(label="Upload audio (shorter than 100 seconds)")
- vc_transform = gr.Number(label="Pitch shift (integer semitones, positive or negative; +12 raises by one octave)", value=0)
- cluster_ratio = gr.Number(label="Cluster model mix ratio, 0-1; default 0 disables clustering. Improves timbre similarity but hurts articulation (about 0.5 recommended if used)", value=0)
- auto_f0 = gr.Checkbox(label="Automatic f0 prediction; works best together with the cluster model, but disables the pitch-shift control (speech only: do not enable for singing, or the pitch will drift badly)", value=False)
- slice_db = gr.Number(label="Slicing threshold", value=-40)
- noise_scale = gr.Number(label="noise_scale (best left alone; affects audio quality, somewhat of a black-box parameter)", value=0.4)
- vc_submit = gr.Button("Convert", variant="primary")
- vc_output1 = gr.Textbox(label="Output message")
- vc_output2 = gr.Audio(label="Output audio")
- vc_submit.click(vc_fn, [sid, vc_input3, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale], [vc_output1, vc_output2])
-
- app.launch()
-
-
-
diff --git a/spaces/ashercn97/AsherTesting/extensions/openai/errors.py b/spaces/ashercn97/AsherTesting/extensions/openai/errors.py
deleted file mode 100644
index ff519c4fcf8a43a4007ec3e54f64fdd88d5e6a4c..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/extensions/openai/errors.py
+++ /dev/null
@@ -1,31 +0,0 @@
-class OpenAIError(Exception):
- def __init__(self, message=None, code=500, internal_message=''):
- self.message = message
- self.code = code
- self.internal_message = internal_message
-
- def __repr__(self):
- return "%s(message=%r, code=%d)" % (
- self.__class__.__name__,
- self.message,
- self.code,
- )
-
-
-class InvalidRequestError(OpenAIError):
- def __init__(self, message, param, code=400, error_type='InvalidRequestError', internal_message=''):
- super().__init__(message, code=code, internal_message=internal_message)
- self.error_type = error_type
- self.param = param
-
- def __repr__(self):
- return "%s(message=%r, code=%d, param=%s)" % (
- self.__class__.__name__,
- self.message,
- self.code,
- self.param,
- )
-
-
-class ServiceUnavailableError(OpenAIError):
- def __init__(self, message=None, code=500, error_type='ServiceUnavailableError', internal_message=''):
- super().__init__(message, code=code, internal_message=internal_message)
- self.error_type = error_type
deleted file mode 100644
index ecfb10a46017061e2dda13d5868c96661ea13693..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/modules/exllama.py
+++ /dev/null
@@ -1,125 +0,0 @@
-from pathlib import Path
-
-from torch import version as torch_version
-
-from modules import shared
-from modules.logging_colors import logger
-from modules.text_generation import get_max_prompt_length
-
-try:
- from exllama.generator import ExLlamaGenerator
- from exllama.model import ExLlama, ExLlamaCache, ExLlamaConfig
- from exllama.tokenizer import ExLlamaTokenizer
-except Exception:
- logger.warning('Exllama module failed to load. Will attempt to load from repositories.')
- try:
- from modules.relative_imports import RelativeImport
-
- with RelativeImport("repositories/exllama"):
- from generator import ExLlamaGenerator
- from model import ExLlama, ExLlamaCache, ExLlamaConfig
- from tokenizer import ExLlamaTokenizer
- except Exception:
- logger.error("Could not find repositories/exllama/. Make sure that exllama is cloned inside repositories/ and is up to date.")
- raise
-
-
-class ExllamaModel:
- def __init__(self):
- pass
-
- @classmethod
- def from_pretrained(self, path_to_model):
-
- path_to_model = Path(f'{shared.args.model_dir}') / Path(path_to_model)
- tokenizer_model_path = path_to_model / "tokenizer.model"
- model_config_path = path_to_model / "config.json"
-
- # Find the model checkpoint
- model_path = None
- for ext in ['.safetensors', '.pt', '.bin']:
- found = list(path_to_model.glob(f"*{ext}"))
- if len(found) > 0:
- if len(found) > 1:
- logger.warning(f'More than one {ext} model has been found. The last one will be selected. It could be wrong.')
-
- model_path = found[-1]
- break
-
- config = ExLlamaConfig(str(model_config_path))
- config.model_path = str(model_path)
- config.max_seq_len = shared.args.max_seq_len
- config.compress_pos_emb = shared.args.compress_pos_emb
- if shared.args.gpu_split:
- config.set_auto_map(shared.args.gpu_split)
- config.gpu_peer_fix = True
-
- if shared.args.alpha_value:
- config.alpha_value = shared.args.alpha_value
- config.calculate_rotary_embedding_base()
-
- if torch_version.hip:
- config.rmsnorm_no_half2 = True
- config.rope_no_half2 = True
- config.matmul_no_half2 = True
- config.silu_no_half2 = True
-
- model = ExLlama(config)
- tokenizer = ExLlamaTokenizer(str(tokenizer_model_path))
- cache = ExLlamaCache(model)
- generator = ExLlamaGenerator(model, tokenizer, cache)
-
- result = self()
- result.config = config
- result.model = model
- result.cache = cache
- result.tokenizer = tokenizer
- result.generator = generator
- return result, result
-
- def generate_with_streaming(self, prompt, state):
- self.generator.settings.temperature = state['temperature']
- self.generator.settings.top_p = state['top_p']
- self.generator.settings.top_k = state['top_k']
- self.generator.settings.typical = state['typical_p']
- self.generator.settings.token_repetition_penalty_max = state['repetition_penalty']
- self.generator.settings.token_repetition_penalty_sustain = -1 if state['repetition_penalty_range'] <= 0 else state['repetition_penalty_range']
- if state['ban_eos_token']:
- self.generator.disallow_tokens([self.tokenizer.eos_token_id])
- else:
- self.generator.disallow_tokens(None)
-
- self.generator.end_beam_search()
-
- # Tokenizing the input
- ids = self.generator.tokenizer.encode(prompt)
- ids = ids[:, -get_max_prompt_length(state):]
-
- self.generator.gen_begin_reuse(ids)
- initial_len = self.generator.sequence[0].shape[0]
- has_leading_space = False
- for i in range(state['max_new_tokens']):
- token = self.generator.gen_single_token()
- if i == 0 and self.generator.tokenizer.tokenizer.IdToPiece(int(token)).startswith('▁'):
- has_leading_space = True
-
- decoded_text = self.generator.tokenizer.decode(self.generator.sequence[0][initial_len:])
- if has_leading_space:
- decoded_text = ' ' + decoded_text
-
- yield decoded_text
- if token.item() == self.generator.tokenizer.eos_token_id or shared.stop_everything:
- break
-
- def generate(self, prompt, state):
- output = ''
- for output in self.generate_with_streaming(prompt, state):
- pass
-
- return output
-
- def encode(self, string, **kwargs):
- return self.tokenizer.encode(string)
-
- def decode(self, string, **kwargs):
- return self.tokenizer.decode(string)[0]
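The `generate()` method above is just a non-streaming wrapper: it drains `generate_with_streaming()` and keeps the last yielded value. A minimal sketch of that drain pattern; the `stream_tokens` stand-in below is illustrative, not part of exllama:

```python
from typing import Iterator

def stream_tokens(prompt: str) -> Iterator[str]:
    # stand-in for generate_with_streaming: yields the growing
    # decoded text after each new "token"
    text = ""
    for tok in prompt.split():
        text = (text + " " + tok).strip()
        yield text

def generate(prompt: str) -> str:
    # non-streaming wrapper: consume the whole stream, return the final text
    output = ""
    for output in stream_tokens(prompt):
        pass
    return output

print(generate("hello brave new world"))  # hello brave new world
```

Reusing the loop variable as the result is the same trick the module uses: after the `for` loop finishes, `output` still holds the last value the generator yielded.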
diff --git a/spaces/ashishraics/MCQ-Generator/README.md b/spaces/ashishraics/MCQ-Generator/README.md
deleted file mode 100644
index 23c8c908277135509d8e64153f7d509313854946..0000000000000000000000000000000000000000
--- a/spaces/ashishraics/MCQ-Generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MCQ Generator
-emoji: 👁
-colorFrom: blue
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.9.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/atimughal662/InfoFusion/src/gradio_runner.py b/spaces/atimughal662/InfoFusion/src/gradio_runner.py
deleted file mode 100644
index fc62418c977d1ce3b54e63547a667203745a2554..0000000000000000000000000000000000000000
--- a/spaces/atimughal662/InfoFusion/src/gradio_runner.py
+++ /dev/null
@@ -1,4601 +0,0 @@
-import ast
-import copy
-import functools
-import inspect
-import itertools
-import json
-import os
-import pprint
-import random
-import shutil
-import sys
-import time
-import traceback
-import uuid
-import filelock
-import numpy as np
-import pandas as pd
-import requests
-from iterators import TimeoutIterator
-
-from gradio_utils.css import get_css
-from gradio_utils.prompt_form import make_chatbots
-from src.db_utils import set_userid, get_username_direct
-
-# This is a hack to prevent Gradio from phoning home when it gets imported
-os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False'
-
-
-def my_get(url, **kwargs):
- print('Gradio HTTP request redirected to localhost :)', flush=True)
- kwargs.setdefault('allow_redirects', True)
- return requests.api.request('get', 'http://127.0.0.1/', **kwargs)
-
-
-original_get = requests.get
-requests.get = my_get
-import gradio as gr
-
-requests.get = original_get
-
-
-def fix_pydantic_duplicate_validators_error():
- try:
- from pydantic import class_validators
-
- class_validators.in_ipython = lambda: True # type: ignore[attr-defined]
- except ImportError:
- pass
-
-
-fix_pydantic_duplicate_validators_error()
-
-from enums import DocumentSubset, no_model_str, no_lora_str, no_server_str, LangChainAction, LangChainMode, \
- DocumentChoice, langchain_modes_intrinsic, LangChainTypes, langchain_modes_non_db, gr_to_lg, invalid_key_msg, \
- LangChainAgent, docs_ordering_types
-from gradio_themes import H2oTheme, SoftTheme, get_h2o_title, get_simple_title, \
- get_dark_js, get_heap_js, wrap_js_to_lambda, \
- spacing_xsm, radius_xsm, text_xsm
-from prompter import prompt_type_to_model_name, prompt_types_strings, inv_prompt_type_to_model_lower, non_hf_types, \
- get_prompt
-from utils import flatten_list, zip_data, s3up, clear_torch_cache, get_torch_allocated, system_info_print, \
- ping, makedirs, get_kwargs, system_info, ping_gpu, get_url, get_local_ip, \
- save_generate_output, url_alive, remove, dict_to_html, text_to_html, lg_to_gr, str_to_dict, have_serpapi
-from gen import get_model, languages_covered, evaluate, score_qa, inputs_kwargs_list, \
- get_max_max_new_tokens, get_minmax_top_k_docs, history_to_context, langchain_actions, langchain_agents_list, \
- evaluate_fake, merge_chat_conversation_history
-from evaluate_params import eval_func_param_names, no_default_param_names, eval_func_param_names_defaults, \
- input_args_list, key_overrides
-
-from apscheduler.schedulers.background import BackgroundScheduler
-
-
-def fix_text_for_gradio(text, fix_new_lines=False, fix_latex_dollars=True):
- if fix_latex_dollars:
- ts = text.split('```')
- for parti, part in enumerate(ts):
- inside = parti % 2 == 1
- if not inside:
- ts[parti] = ts[parti].replace('$', '﹩')
- text = '```'.join(ts)
-
- if fix_new_lines:
- # let Gradio handle code, since got improved recently
- ## FIXME: below conflicts with Gradio, but need to see if can handle multiple \n\n\n etc. properly as is.
- # ensure good visually, else markdown ignores multiple \n
- # handle code blocks
- ts = text.split('```')
- for parti, part in enumerate(ts):
- inside = parti % 2 == 1
- if not inside:
- ts[parti] = ts[parti].replace('\n', '<br>')
- text = '```'.join(ts)
- return text
-
-
-def is_valid_key(enforce_h2ogpt_api_key, h2ogpt_api_keys, h2ogpt_key1, requests_state1=None):
- valid_key = False
- if not enforce_h2ogpt_api_key:
- # no token barrier
- valid_key = 'not enforced'
- else:
- if isinstance(h2ogpt_api_keys, list) and h2ogpt_key1 in h2ogpt_api_keys:
- # passed token barrier
- valid_key = True
- elif isinstance(h2ogpt_api_keys, str) and os.path.isfile(h2ogpt_api_keys):
- with filelock.FileLock(h2ogpt_api_keys + '.lock'):
- with open(h2ogpt_api_keys, 'rt') as f:
- h2ogpt_api_keys = json.load(f)
- if h2ogpt_key1 in h2ogpt_api_keys:
- valid_key = True
- if isinstance(requests_state1, dict) and 'username' in requests_state1 and requests_state1['username']:
- # no UI limit currently
- valid_key = True
- return valid_key
-
-
-def go_gradio(**kwargs):
- allow_api = kwargs['allow_api']
- is_public = kwargs['is_public']
- is_hf = kwargs['is_hf']
- memory_restriction_level = kwargs['memory_restriction_level']
- n_gpus = kwargs['n_gpus']
- admin_pass = kwargs['admin_pass']
- model_states = kwargs['model_states']
- dbs = kwargs['dbs']
- db_type = kwargs['db_type']
- visible_langchain_actions = kwargs['visible_langchain_actions']
- visible_langchain_agents = kwargs['visible_langchain_agents']
- allow_upload_to_user_data = kwargs['allow_upload_to_user_data']
- allow_upload_to_my_data = kwargs['allow_upload_to_my_data']
- enable_sources_list = kwargs['enable_sources_list']
- enable_url_upload = kwargs['enable_url_upload']
- enable_text_upload = kwargs['enable_text_upload']
- use_openai_embedding = kwargs['use_openai_embedding']
- hf_embedding_model = kwargs['hf_embedding_model']
- load_db_if_exists = kwargs['load_db_if_exists']
- migrate_embedding_model = kwargs['migrate_embedding_model']
- auto_migrate_db = kwargs['auto_migrate_db']
- captions_model = kwargs['captions_model']
- caption_loader = kwargs['caption_loader']
- doctr_loader = kwargs['doctr_loader']
-
- n_jobs = kwargs['n_jobs']
- verbose = kwargs['verbose']
-
- # for dynamic state per user session in gradio
- model_state0 = kwargs['model_state0']
- score_model_state0 = kwargs['score_model_state0']
- my_db_state0 = kwargs['my_db_state0']
- selection_docs_state0 = kwargs['selection_docs_state0']
- visible_models_state0 = kwargs['visible_models_state0']
- # For Heap analytics
- is_heap_analytics_enabled = kwargs['enable_heap_analytics']
- heap_app_id = kwargs['heap_app_id']
-
- # easy update of kwargs needed for evaluate() etc.
- queue = True
- allow_upload = allow_upload_to_user_data or allow_upload_to_my_data
- allow_upload_api = allow_api and allow_upload
-
- kwargs.update(locals())
-
- # import control
- if kwargs['langchain_mode'] != 'Disabled':
- from gpt_langchain import file_types, have_arxiv
- else:
- have_arxiv = False
- file_types = []
-
- if 'mbart-' in kwargs['model_lower']:
- instruction_label_nochat = "Text to translate"
- else:
- instruction_label_nochat = "Instruction (Shift-Enter or push Submit to send message," \
- " use Enter for multiple input lines)"
-
- title = 'h2oGPT'
- if kwargs['visible_h2ogpt_header']:
- description = """h2oGPT LLM Leaderboard LLM Studio
CodeLlama
🤗 Models"""
- else:
- description = None
- description_bottom = "If this host is busy, try
[Multi-Model](https://gpt.h2o.ai)
[CodeLlama](https://codellama.h2o.ai)
[Llama2 70B](https://llama.h2o.ai)
[Falcon 40B](https://falcon.h2o.ai)
[HF Spaces1](https://huggingface.co/spaces/h2oai/h2ogpt-chatbot)
[HF Spaces2](https://huggingface.co/spaces/h2oai/h2ogpt-chatbot2)
"
- if is_hf:
- description_bottom += '''
'''
- task_info_md = ''
- css_code = get_css(kwargs)
-
- if kwargs['gradio_offline_level'] >= 0:
- # avoid GoogleFont that pulls from internet
- if kwargs['gradio_offline_level'] == 1:
- # front end would still have to download fonts or have cached it at some point
- base_font = 'Source Sans Pro'
- else:
- base_font = 'Helvetica'
- theme_kwargs = dict(font=(base_font, 'ui-sans-serif', 'system-ui', 'sans-serif'),
- font_mono=('IBM Plex Mono', 'ui-monospace', 'Consolas', 'monospace'))
- else:
- theme_kwargs = dict()
- if kwargs['gradio_size'] == 'xsmall':
- theme_kwargs.update(dict(spacing_size=spacing_xsm, text_size=text_xsm, radius_size=radius_xsm))
- elif kwargs['gradio_size'] in [None, 'small']:
- theme_kwargs.update(dict(spacing_size=gr.themes.sizes.spacing_sm, text_size=gr.themes.sizes.text_sm,
- radius_size=gr.themes.sizes.spacing_sm))
- elif kwargs['gradio_size'] == 'large':
- theme_kwargs.update(dict(spacing_size=gr.themes.sizes.spacing_lg, text_size=gr.themes.sizes.text_lg,
- radius_size=gr.themes.sizes.spacing_lg))
- elif kwargs['gradio_size'] == 'medium':
- theme_kwargs.update(dict(spacing_size=gr.themes.sizes.spacing_md, text_size=gr.themes.sizes.text_md,
- radius_size=gr.themes.sizes.spacing_md))
-
- theme = H2oTheme(**theme_kwargs) if kwargs['h2ocolors'] else SoftTheme(**theme_kwargs)
- demo = gr.Blocks(theme=theme, css=css_code, title="h2oGPT", analytics_enabled=False)
- callback = gr.CSVLogger()
-
- model_options0 = flatten_list(list(prompt_type_to_model_name.values())) + kwargs['extra_model_options']
- if kwargs['base_model'].strip() not in model_options0:
- model_options0 = [kwargs['base_model'].strip()] + model_options0
- lora_options = kwargs['extra_lora_options']
- if kwargs['lora_weights'].strip() not in lora_options:
- lora_options = [kwargs['lora_weights'].strip()] + lora_options
- server_options = kwargs['extra_server_options']
- if kwargs['inference_server'].strip() not in server_options:
- server_options = [kwargs['inference_server'].strip()] + server_options
- if os.getenv('OPENAI_API_KEY'):
- if 'openai_chat' not in server_options:
- server_options += ['openai_chat']
- if 'openai' not in server_options:
- server_options += ['openai']
-
- # always add in no lora case
- # add fake space so doesn't go away in gradio dropdown
- model_options0 = [no_model_str] + sorted(model_options0)
- lora_options = [no_lora_str] + sorted(lora_options)
- server_options = [no_server_str] + sorted(server_options)
- # always add in no model case so can free memory
- # add fake space so doesn't go away in gradio dropdown
-
- # transcribe, will be detranscribed before use by evaluate()
- if not kwargs['base_model'].strip():
- kwargs['base_model'] = no_model_str
-
- if not kwargs['lora_weights'].strip():
- kwargs['lora_weights'] = no_lora_str
-
- if not kwargs['inference_server'].strip():
- kwargs['inference_server'] = no_server_str
-
- # transcribe for gradio
- kwargs['gpu_id'] = str(kwargs['gpu_id'])
-
- no_model_msg = 'h2oGPT [ !!! Please Load Model in Models Tab !!! ]'
- output_label0 = f'h2oGPT [Model: {kwargs.get("base_model")}]' if kwargs.get(
- 'base_model') else no_model_msg
- output_label0_model2 = no_model_msg
-
- def update_prompt(prompt_type1, prompt_dict1, model_state1, which_model=0):
- if not prompt_type1 or which_model != 0:
- # keep prompt_type and prompt_dict in sync if possible
- prompt_type1 = kwargs.get('prompt_type', prompt_type1)
- prompt_dict1 = kwargs.get('prompt_dict', prompt_dict1)
- # prefer model specific prompt type instead of global one
- if not prompt_type1 or which_model != 0:
- prompt_type1 = model_state1.get('prompt_type', prompt_type1)
- prompt_dict1 = model_state1.get('prompt_dict', prompt_dict1)
-
- if not prompt_dict1 or which_model != 0:
- # if still not defined, try to get
- prompt_dict1 = kwargs.get('prompt_dict', prompt_dict1)
- if not prompt_dict1 or which_model != 0:
- prompt_dict1 = model_state1.get('prompt_dict', prompt_dict1)
- return prompt_type1, prompt_dict1
-
- def visible_models_to_model_choice(visible_models1):
- if isinstance(visible_models1, list):
- assert len(
- visible_models1) >= 1, "Invalid visible_models1=%s, must contain at least one entry" % visible_models1
- # just take first
- model_active_choice1 = visible_models1[0]
- elif isinstance(visible_models1, (str, int)):
- model_active_choice1 = visible_models1
- else:
- assert isinstance(visible_models1, type(None)), "Invalid visible_models1=%s" % visible_models1
- model_active_choice1 = visible_models1
- if model_active_choice1 is not None:
- if isinstance(model_active_choice1, str):
- base_model_list = [x['base_model'] for x in model_states]
- if model_active_choice1 in base_model_list:
- # if dups, will just be first one
- model_active_choice1 = base_model_list.index(model_active_choice1)
- else:
- # NOTE: Could raise, but sometimes raising in certain places fails too hard and requires UI restart
- model_active_choice1 = 0
- else:
- model_active_choice1 = 0
- return model_active_choice1
-
- default_kwargs = {k: kwargs[k] for k in eval_func_param_names_defaults}
- # ensure prompt_type consistent with prep_bot(), so nochat API works same way
- default_kwargs['prompt_type'], default_kwargs['prompt_dict'] = \
- update_prompt(default_kwargs['prompt_type'], default_kwargs['prompt_dict'],
- model_state1=model_state0,
- which_model=visible_models_to_model_choice(kwargs['visible_models']))
- for k in no_default_param_names:
- default_kwargs[k] = ''
-
- def dummy_fun(x):
- # need dummy function to block new input from being sent until output is done,
- # else gets input_list at time of submit that is old, and shows up as truncated in chatbot
- return x
-
- def update_auth_selection(auth_user, selection_docs_state1, save=False):
- # in-place update of both
- if 'selection_docs_state' not in auth_user:
- auth_user['selection_docs_state'] = selection_docs_state0
- for k, v in auth_user['selection_docs_state'].items():
- if isinstance(selection_docs_state1[k], dict):
- if save:
- auth_user['selection_docs_state'][k].clear()
- auth_user['selection_docs_state'][k].update(selection_docs_state1[k])
- else:
- selection_docs_state1[k].clear()
- selection_docs_state1[k].update(auth_user['selection_docs_state'][k])
- elif isinstance(selection_docs_state1[k], list):
- if save:
- auth_user['selection_docs_state'][k].clear()
- auth_user['selection_docs_state'][k].extend(selection_docs_state1[k])
- else:
- selection_docs_state1[k].clear()
- selection_docs_state1[k].extend(auth_user['selection_docs_state'][k])
- else:
- raise RuntimeError("Bad type: %s" % selection_docs_state1[k])
-
- # BEGIN AUTH THINGS
- def auth_func(username1, password1, auth_pairs=None, auth_filename=None,
- auth_access=None,
- auth_freeze=None,
- guest_name=None,
- selection_docs_state1=None,
- selection_docs_state00=None,
- **kwargs):
- assert auth_freeze is not None
- if selection_docs_state1 is None:
- selection_docs_state1 = selection_docs_state00
- assert selection_docs_state1 is not None
- assert auth_filename and isinstance(auth_filename, str), "Auth file must be a non-empty string, got: %s" % str(
- auth_filename)
- if auth_access == 'open' and username1 == guest_name:
- return True
- if username1 == '':
- # some issue with login
- return False
- with filelock.FileLock(auth_filename + '.lock'):
- auth_dict = {}
- if os.path.isfile(auth_filename):
- try:
- with open(auth_filename, 'rt') as f:
- auth_dict = json.load(f)
- except json.decoder.JSONDecodeError as e:
- print("Auth exception: %s" % str(e), flush=True)
- shutil.move(auth_filename, auth_filename + '.bak' + str(uuid.uuid4()))
- auth_dict = {}
- if username1 in auth_dict and username1 in auth_pairs:
- if password1 == auth_dict[username1]['password'] and password1 == auth_pairs[username1]:
- auth_user = auth_dict[username1]
- update_auth_selection(auth_user, selection_docs_state1)
- save_auth_dict(auth_dict, auth_filename)
- return True
- else:
- return False
- elif username1 in auth_dict:
- if password1 == auth_dict[username1]['password']:
- auth_user = auth_dict[username1]
- update_auth_selection(auth_user, selection_docs_state1)
- save_auth_dict(auth_dict, auth_filename)
- return True
- else:
- return False
- elif username1 in auth_pairs:
- # copy over CLI auth to file so only one state to manage
- auth_dict[username1] = dict(password=auth_pairs[username1], userid=str(uuid.uuid4()))
- auth_user = auth_dict[username1]
- update_auth_selection(auth_user, selection_docs_state1)
- save_auth_dict(auth_dict, auth_filename)
- return True
- else:
- if auth_access == 'closed':
- return False
- # open access
- auth_dict[username1] = dict(password=password1, userid=str(uuid.uuid4()))
- auth_user = auth_dict[username1]
- update_auth_selection(auth_user, selection_docs_state1)
- save_auth_dict(auth_dict, auth_filename)
- if auth_access == 'open':
- return True
- else:
- raise RuntimeError("Invalid auth_access: %s" % auth_access)
-
- def auth_func_open(*args, **kwargs):
- return True
-
- def get_username(requests_state1):
- username1 = None
- if 'username' in requests_state1:
- username1 = requests_state1['username']
- return username1
-
- def get_userid_auth_func(requests_state1, auth_filename=None, auth_access=None, guest_name=None, **kwargs):
- if auth_filename and isinstance(auth_filename, str):
- username1 = get_username(requests_state1)
- if username1:
- if username1 == guest_name:
- return str(uuid.uuid4())
- with filelock.FileLock(auth_filename + '.lock'):
- if os.path.isfile(auth_filename):
- with open(auth_filename, 'rt') as f:
- auth_dict = json.load(f)
- if username1 in auth_dict:
- return auth_dict[username1]['userid']
- # if here, then not persistently associated with username1,
- # but should only be one-time asked if going to persist within a single session!
- return str(uuid.uuid4())
-
- get_userid_auth = functools.partial(get_userid_auth_func,
- auth_filename=kwargs['auth_filename'],
- auth_access=kwargs['auth_access'],
- guest_name=kwargs['guest_name'],
- )
- if kwargs['auth_access'] == 'closed':
- auth_message1 = "Closed access"
- else:
- auth_message1 = "WELCOME! Open access" \
- " (%s/%s or any unique user/pass)" % (kwargs['guest_name'], kwargs['guest_name'])
-
- if kwargs['auth_message'] is not None:
- auth_message = kwargs['auth_message']
- else:
- auth_message = auth_message1
-
- # always use same callable
- auth_pairs0 = {}
- if isinstance(kwargs['auth'], list):
- for k, v in kwargs['auth']:
- auth_pairs0[k] = v
- authf = functools.partial(auth_func,
- auth_pairs=auth_pairs0,
- auth_filename=kwargs['auth_filename'],
- auth_access=kwargs['auth_access'],
- auth_freeze=kwargs['auth_freeze'],
- guest_name=kwargs['guest_name'],
- selection_docs_state00=copy.deepcopy(selection_docs_state0))
-
- def get_request_state(requests_state1, request, db1s):
- # if need to get state, do it now
- if not requests_state1:
- requests_state1 = requests_state0.copy()
- if requests:
- if not requests_state1.get('headers', '') and hasattr(request, 'headers'):
- requests_state1.update(request.headers)
- if not requests_state1.get('host', '') and hasattr(request, 'host'):
- requests_state1.update(dict(host=request.host))
- if not requests_state1.get('host2', '') and hasattr(request, 'client') and hasattr(request.client, 'host'):
- requests_state1.update(dict(host2=request.client.host))
- if not requests_state1.get('username', '') and hasattr(request, 'username'):
- # use already-defined username instead of keep changing to new uuid
- # should be same as in requests_state1
- db_username = get_username_direct(db1s)
- requests_state1.update(dict(username=request.username or db_username or str(uuid.uuid4())))
- requests_state1 = {str(k): str(v) for k, v in requests_state1.items()}
- return requests_state1
-
- def user_state_setup(db1s, requests_state1, request: gr.Request, *args):
- requests_state1 = get_request_state(requests_state1, request, db1s)
- set_userid(db1s, requests_state1, get_userid_auth)
- args_list = [db1s, requests_state1] + list(args)
- return tuple(args_list)
-
- # END AUTH THINGS
-
- def allow_empty_instruction(langchain_mode1, document_subset1, langchain_action1):
- allow = False
- allow |= langchain_action1 != LangChainAction.QUERY.value
- allow |= document_subset1 == DocumentSubset.TopKSources.name
- if langchain_mode1 == LangChainMode.LLM.value:
- allow = False
- return allow
-
- image_loaders_options0, image_loaders_options, \
- pdf_loaders_options0, pdf_loaders_options, \
- url_loaders_options0, url_loaders_options = lg_to_gr(**kwargs)
- jq_schema0 = '.[]'
-
- with demo:
- # avoid actual model/tokenizer here or anything that would be bad to deepcopy
- # https://github.com/gradio-app/gradio/issues/3558
- model_state = gr.State(
- dict(model='model', tokenizer='tokenizer', device=kwargs['device'],
- base_model=kwargs['base_model'],
- tokenizer_base_model=kwargs['tokenizer_base_model'],
- lora_weights=kwargs['lora_weights'],
- inference_server=kwargs['inference_server'],
- prompt_type=kwargs['prompt_type'],
- prompt_dict=kwargs['prompt_dict'],
- visible_models=kwargs['visible_models'],
- h2ogpt_key=kwargs['h2ogpt_key'],
- )
- )
-
- def update_langchain_mode_paths(selection_docs_state1):
- dup = selection_docs_state1['langchain_mode_paths'].copy()
- for k, v in dup.items():
- if k not in selection_docs_state1['langchain_modes']:
- selection_docs_state1['langchain_mode_paths'].pop(k)
- for k in selection_docs_state1['langchain_modes']:
- if k not in selection_docs_state1['langchain_mode_types']:
- # if shared wasn't specified, then assume scratch if not logged in, or personal if logged in
- selection_docs_state1['langchain_mode_types'][k] = LangChainTypes.PERSONAL.value
- return selection_docs_state1
-
- # Setup some gradio states for per-user dynamic state
- model_state2 = gr.State(kwargs['model_state_none'].copy())
- model_options_state = gr.State([model_options0])
- lora_options_state = gr.State([lora_options])
- server_options_state = gr.State([server_options])
- my_db_state = gr.State(my_db_state0)
- chat_state = gr.State({})
- docs_state00 = kwargs['document_choice'] + [DocumentChoice.ALL.value]
- docs_state0 = list(dict.fromkeys(docs_state00))  # dedupe while preserving order
- docs_state = gr.State(docs_state0)
- viewable_docs_state0 = []
- viewable_docs_state = gr.State(viewable_docs_state0)
- selection_docs_state0 = update_langchain_mode_paths(selection_docs_state0)
- selection_docs_state = gr.State(selection_docs_state0)
- requests_state0 = dict(headers='', host='', username='')
- requests_state = gr.State(requests_state0)
-
- if description is not None:
- gr.Markdown(f"""
- {get_h2o_title(title, description) if kwargs['h2ocolors'] else get_simple_title(title, description)}
- """)
-
- # go button visible only if a base model is set and login mode is requested
- base_wanted = kwargs['base_model'] != no_model_str and kwargs['login_mode_if_model0']
- go_btn = gr.Button(value="ENTER", visible=base_wanted, variant="primary")
-
- nas = ' '.join(['NA'] * len(kwargs['model_states']))
- res_value = "Response Score: NA" if not kwargs[
- 'model_lock'] else "Response Scores: %s" % nas
-
- user_can_do_sum = kwargs['langchain_mode'] != LangChainMode.DISABLED.value and \
- (kwargs['visible_side_bar'] or kwargs['visible_system_tab'])
- if user_can_do_sum:
- extra_prompt_form = ". For summarization, no query required, just click submit"
- else:
- extra_prompt_form = ""
- if kwargs['input_lines'] > 1:
- instruction_label = "Shift-Enter to Submit, Enter for more lines%s" % extra_prompt_form
- else:
- instruction_label = "Enter to Submit, Shift-Enter for more lines%s" % extra_prompt_form
-
- def get_langchain_choices(selection_docs_state1):
- langchain_modes = selection_docs_state1['langchain_modes']
-
- if is_hf:
- # don't show 'wiki' since it's usually only useful for internal testing at the moment
- no_show_modes = ['Disabled', 'wiki']
- else:
- no_show_modes = ['Disabled']
- allowed_modes = langchain_modes.copy()
- # allowed_modes = [x for x in allowed_modes if x in dbs]
- allowed_modes += ['LLM']
- if allow_upload_to_my_data and 'MyData' not in allowed_modes:
- allowed_modes += ['MyData']
- if allow_upload_to_user_data and 'UserData' not in allowed_modes:
- allowed_modes += ['UserData']
- choices = [x for x in langchain_modes if x in allowed_modes and x not in no_show_modes]
- return choices
-
- def get_df_langchain_mode_paths(selection_docs_state1, db1s, dbs1=None):
- langchain_choices1 = get_langchain_choices(selection_docs_state1)
- langchain_mode_paths = selection_docs_state1['langchain_mode_paths']
- langchain_mode_paths = {k: v for k, v in langchain_mode_paths.items() if k in langchain_choices1}
- if langchain_mode_paths:
- langchain_mode_paths = langchain_mode_paths.copy()
- for langchain_mode1 in langchain_modes_non_db:
- langchain_mode_paths.pop(langchain_mode1, None)
- df1 = pd.DataFrame.from_dict(langchain_mode_paths.items(), orient='columns')
- df1.columns = ['Collection', 'Path']
- df1 = df1.set_index('Collection')
- else:
- df1 = pd.DataFrame(None)
- langchain_mode_types = selection_docs_state1['langchain_mode_types']
- langchain_mode_types = {k: v for k, v in langchain_mode_types.items() if k in langchain_choices1}
- if langchain_mode_types:
- langchain_mode_types = langchain_mode_types.copy()
- for langchain_mode1 in langchain_modes_non_db:
- langchain_mode_types.pop(langchain_mode1, None)
-
- df2 = pd.DataFrame.from_dict(langchain_mode_types.items(), orient='columns')
- df2.columns = ['Collection', 'Type']
- df2 = df2.set_index('Collection')
-
- from src.gpt_langchain import get_persist_directory, load_embed
- persist_directory_dict = {}
- embed_dict = {}
- chroma_version_dict = {}
- for langchain_mode3 in langchain_mode_types:
- langchain_type3 = langchain_mode_types.get(langchain_mode3, LangChainTypes.EITHER.value)
- persist_directory3, langchain_type3 = get_persist_directory(langchain_mode3,
- langchain_type=langchain_type3,
- db1s=db1s, dbs=dbs1)
- got_embedding3, use_openai_embedding3, hf_embedding_model3 = load_embed(
- persist_directory=persist_directory3)
- persist_directory_dict[langchain_mode3] = persist_directory3
- embed_dict[langchain_mode3] = 'OpenAI' if not hf_embedding_model3 else hf_embedding_model3
-
- if os.path.isfile(os.path.join(persist_directory3, 'chroma.sqlite3')):
- chroma_version_dict[langchain_mode3] = 'ChromaDB>=0.4'
- elif os.path.isdir(os.path.join(persist_directory3, 'index')):
- chroma_version_dict[langchain_mode3] = 'ChromaDB<0.4'
- elif not os.listdir(persist_directory3):
- if db_type == 'chroma':
- chroma_version_dict[langchain_mode3] = 'ChromaDB>=0.4' # will be
- elif db_type == 'chroma_old':
- chroma_version_dict[langchain_mode3] = 'ChromaDB<0.4' # will be
- else:
- chroma_version_dict[langchain_mode3] = 'Weaviate' # will be
- if isinstance(hf_embedding_model, dict):
- hf_embedding_model3 = hf_embedding_model['name']
- else:
- hf_embedding_model3 = hf_embedding_model
- assert isinstance(hf_embedding_model3, str)
- embed_dict[langchain_mode3] = hf_embedding_model3 # will be
- else:
- chroma_version_dict[langchain_mode3] = 'Weaviate'
-
- df3 = pd.DataFrame.from_dict(persist_directory_dict.items(), orient='columns')
- df3.columns = ['Collection', 'Directory']
- df3 = df3.set_index('Collection')
-
- df4 = pd.DataFrame.from_dict(embed_dict.items(), orient='columns')
- df4.columns = ['Collection', 'Embedding']
- df4 = df4.set_index('Collection')
-
- df5 = pd.DataFrame.from_dict(chroma_version_dict.items(), orient='columns')
- df5.columns = ['Collection', 'DB']
- df5 = df5.set_index('Collection')
- else:
- df2 = pd.DataFrame(None)
- df3 = pd.DataFrame(None)
- df4 = pd.DataFrame(None)
- df5 = pd.DataFrame(None)
- df_list = [df2, df1, df3, df4, df5]
- df_list = [x for x in df_list if x.shape[1] > 0]
- if len(df_list) > 1:
- df = df_list[0].join(df_list[1:]).replace(np.nan, '').reset_index()
- elif len(df_list) == 1:
- df = df_list[0].replace(np.nan, '').reset_index()
- else:
- df = pd.DataFrame(None)
- return df
-
- normal_block = gr.Row(visible=not base_wanted, equal_height=False, elem_id="col_container")
- with normal_block:
- side_bar = gr.Column(elem_id="sidebar", scale=1, min_width=100, visible=kwargs['visible_side_bar'])
- with side_bar:
- with gr.Accordion("Chats", open=False, visible=True):
- radio_chats = gr.Radio(value=None, label="Saved Chats", show_label=False,
- visible=True, interactive=True,
- type='value')
- upload_visible = kwargs['langchain_mode'] != 'Disabled' and allow_upload
- with gr.Accordion("Upload", open=False, visible=upload_visible):
- with gr.Column():
- with gr.Row(equal_height=False):
- fileup_output = gr.File(show_label=False,
- file_types=['.' + x for x in file_types],
- # file_types=['*', '*.*'], # for iPhone etc. needs to be unconstrained else doesn't work with extension-based restrictions
- file_count="multiple",
- scale=1,
- min_width=0,
- elem_id="warning", elem_classes="feedback",
- )
- fileup_output_text = gr.Textbox(visible=False)
- max_quality = gr.Checkbox(label="Maximum Ingest Quality", value=kwargs['max_quality'],
- visible=not is_public)
- url_visible = kwargs['langchain_mode'] != 'Disabled' and allow_upload and enable_url_upload
- url_label = 'URL/ArXiv' if have_arxiv else 'URL'
- url_text = gr.Textbox(label=url_label,
- # placeholder="Enter Submits",
- max_lines=1,
- interactive=True)
- text_visible = kwargs['langchain_mode'] != 'Disabled' and allow_upload and enable_text_upload
- user_text_text = gr.Textbox(label='Paste Text',
- # placeholder="Enter Submits",
- interactive=True,
- visible=text_visible)
- github_textbox = gr.Textbox(label="Github URL", visible=False) # FIXME WIP
- database_visible = kwargs['langchain_mode'] != 'Disabled'
- with gr.Accordion("Resources", open=False, visible=database_visible):
- langchain_choices0 = get_langchain_choices(selection_docs_state0)
- langchain_mode = gr.Radio(
- langchain_choices0,
- value=kwargs['langchain_mode'],
- label="Collections",
- show_label=True,
- visible=kwargs['langchain_mode'] != 'Disabled',
- min_width=100)
- add_chat_history_to_context = gr.Checkbox(label="Chat History",
- value=kwargs['add_chat_history_to_context'])
- add_search_to_context = gr.Checkbox(label="Web Search",
- value=kwargs['add_search_to_context'],
- visible=os.environ.get('SERPAPI_API_KEY') is not None \
- and have_serpapi)
- document_subset = gr.Radio([x.name for x in DocumentSubset],
- label="Subset",
- value=DocumentSubset.Relevant.name,
- interactive=True,
- )
- allowed_actions = [x for x in langchain_actions if x in visible_langchain_actions]
- langchain_action = gr.Radio(
- allowed_actions,
- value=allowed_actions[0] if len(allowed_actions) > 0 else None,
- label="Action",
- visible=True)
- allowed_agents = [x for x in langchain_agents_list if x in visible_langchain_agents]
- if os.getenv('OPENAI_API_KEY') is None and LangChainAgent.JSON.value in allowed_agents:
- allowed_agents.remove(LangChainAgent.JSON.value)
- if os.getenv('OPENAI_API_KEY') is None and LangChainAgent.PYTHON.value in allowed_agents:
- allowed_agents.remove(LangChainAgent.PYTHON.value)
- if LangChainAgent.PANDAS.value in allowed_agents:
- allowed_agents.remove(LangChainAgent.PANDAS.value)
- langchain_agents = gr.Dropdown(
- allowed_agents,
- value=None,
- label="Agents",
- multiselect=True,
- interactive=True,
- visible=True,
- elem_id="langchain_agents",
- filterable=False)
- visible_doc_track = upload_visible and kwargs['visible_doc_track'] and not kwargs[
- 'large_file_count_mode']
- row_doc_track = gr.Row(visible=visible_doc_track)
- with row_doc_track:
- if kwargs['langchain_mode'] in langchain_modes_non_db:
- doc_counts_str = "Pure LLM Mode"
- else:
- doc_counts_str = "Name: %s\nDocs: Unset\nChunks: Unset" % kwargs['langchain_mode']
- text_doc_count = gr.Textbox(lines=3, label="Doc Counts", value=doc_counts_str,
- visible=visible_doc_track)
- text_file_last = gr.Textbox(lines=1, label="Newest Doc", value=None, visible=visible_doc_track)
- text_viewable_doc_count = gr.Textbox(lines=2, label=None, visible=False)
- col_tabs = gr.Column(elem_id="col-tabs", scale=10)
- with col_tabs, gr.Tabs():
- if kwargs['chat_tables']:
- chat_tab = gr.Row(visible=True)
- else:
- chat_tab = gr.TabItem("Chat") \
- if kwargs['visible_chat_tab'] else gr.Row(visible=False)
- with chat_tab:
- if kwargs['langchain_mode'] == 'Disabled':
- text_output_nochat = gr.Textbox(lines=5, label=output_label0, show_copy_button=True,
- visible=not kwargs['chat'])
- else:
- # text looks a bit worse, but HTML links work
- text_output_nochat = gr.HTML(label=output_label0, visible=not kwargs['chat'])
- with gr.Row():
- # NOCHAT
- instruction_nochat = gr.Textbox(
- lines=kwargs['input_lines'],
- label=instruction_label_nochat,
- placeholder=kwargs['placeholder_instruction'],
- visible=not kwargs['chat'],
- )
- iinput_nochat = gr.Textbox(lines=4, label="Input context for Instruction",
- placeholder=kwargs['placeholder_input'],
- value=kwargs['iinput'],
- visible=not kwargs['chat'])
- submit_nochat = gr.Button("Submit", size='sm', visible=not kwargs['chat'])
- flag_btn_nochat = gr.Button("Flag", size='sm', visible=not kwargs['chat'])
- score_text_nochat = gr.Textbox("Response Score: NA", show_label=False,
- visible=not kwargs['chat'])
- submit_nochat_api = gr.Button("Submit nochat API", visible=False)
- submit_nochat_api_plain = gr.Button("Submit nochat API Plain", visible=False)
- inputs_dict_str = gr.Textbox(label='API input for nochat', show_label=False, visible=False)
- text_output_nochat_api = gr.Textbox(lines=5, label='API nochat output', visible=False,
- show_copy_button=True)
-
- visible_upload = (allow_upload_to_user_data or
- allow_upload_to_my_data) and \
- kwargs['langchain_mode'] != 'Disabled'
- # CHAT
- col_chat = gr.Column(visible=kwargs['chat'])
- with col_chat:
- with gr.Row():
- with gr.Column(scale=50):
- with gr.Row(elem_id="prompt-form-row"):
- label_instruction = 'Ask anything'
- instruction = gr.Textbox(
- lines=kwargs['input_lines'],
- label=label_instruction,
- placeholder=instruction_label,
- info=None,
- elem_id='prompt-form',
- container=True,
- )
- attach_button = gr.UploadButton(
- elem_id="attach-button" if visible_upload else None,
- value="",
- label="Upload File(s)",
- size="sm",
- min_width=24,
- file_types=['.' + x for x in file_types],
- file_count="multiple",
- visible=visible_upload)
-
- submit_buttons = gr.Row(equal_height=False, visible=kwargs['visible_submit_buttons'])
- with submit_buttons:
- mw1 = 50
- mw2 = 50
- with gr.Column(min_width=mw1):
- submit = gr.Button(value='Submit', variant='primary', size='sm',
- min_width=mw1)
- stop_btn = gr.Button(value="Stop", variant='secondary', size='sm',
- min_width=mw1)
- save_chat_btn = gr.Button("Save", size='sm', min_width=mw1)
- with gr.Column(min_width=mw2):
- retry_btn = gr.Button("Redo", size='sm', min_width=mw2)
- undo = gr.Button("Undo", size='sm', min_width=mw2)
- clear_chat_btn = gr.Button(value="Clear", size='sm', min_width=mw2)
-
- visible_model_choice = bool(kwargs['model_lock']) and \
- len(model_states) > 1 and \
- kwargs['visible_visible_models']
- with gr.Row(visible=visible_model_choice):
- visible_models = gr.Dropdown(kwargs['all_models'],
- label="Visible Models",
- value=visible_models_state0,
- interactive=True,
- multiselect=True,
- visible=visible_model_choice,
- elem_id="visible-models",
- filterable=False,
- )
-
- text_output, text_output2, text_outputs = make_chatbots(output_label0, output_label0_model2,
- **kwargs)
-
- with gr.Row():
- with gr.Column(visible=kwargs['score_model']):
- score_text = gr.Textbox(res_value,
- show_label=False,
- visible=True)
- score_text2 = gr.Textbox("Response Score2: NA", show_label=False,
- visible=False and not kwargs['model_lock'])
-
- doc_selection_tab = gr.TabItem("Document Selection") \
- if kwargs['visible_doc_selection_tab'] else gr.Row(visible=False)
- with doc_selection_tab:
- if kwargs['langchain_mode'] in langchain_modes_non_db:
- dlabel1 = 'Choose Resources->Collections and Pick Collection'
- active_collection = gr.Markdown(value="#### Not Chatting with Any Collection\n%s" % dlabel1)
- else:
- dlabel1 = 'Select Subset of Document(s) for Chat with Collection: %s' % kwargs['langchain_mode']
- active_collection = gr.Markdown(
- value="#### Chatting with Collection: %s" % kwargs['langchain_mode'])
- document_choice = gr.Dropdown(docs_state0,
- label=dlabel1,
- value=[DocumentChoice.ALL.value],
- interactive=True,
- multiselect=True,
- visible=kwargs['langchain_mode'] != 'Disabled',
- )
- sources_visible = kwargs['langchain_mode'] != 'Disabled' and enable_sources_list
- with gr.Row():
- with gr.Column(scale=1):
- get_sources_btn = gr.Button(value="Update UI with Document(s) from DB", scale=0, size='sm',
- visible=sources_visible and kwargs['large_file_count_mode'])
- # handle API get sources
- get_sources_api_btn = gr.Button(visible=False)
- get_sources_api_text = gr.Textbox(visible=False)
-
- get_document_api_btn = gr.Button(visible=False)
- get_document_api_text = gr.Textbox(visible=False)
-
- show_sources_btn = gr.Button(value="Show Sources from DB", scale=0, size='sm',
- visible=sources_visible and kwargs['large_file_count_mode'])
- delete_sources_btn = gr.Button(value="Delete Selected Sources from DB", scale=0, size='sm',
- visible=sources_visible)
- refresh_sources_btn = gr.Button(value="Update DB with new/changed files on disk", scale=0,
- size='sm',
- visible=sources_visible and allow_upload_to_user_data)
- with gr.Column(scale=4):
- pass
- visible_add_remove_collection = visible_upload
- with gr.Row():
- with gr.Column(scale=1):
- add_placeholder = "e.g. UserData2, shared, user_path2" \
- if not is_public else "e.g. MyData2, personal (optional)"
- remove_placeholder = "e.g. UserData2" if not is_public else "e.g. MyData2"
- new_langchain_mode_text = gr.Textbox(value="", visible=visible_add_remove_collection,
- label='Add Collection',
- placeholder=add_placeholder,
- interactive=True)
- remove_langchain_mode_text = gr.Textbox(value="", visible=visible_add_remove_collection,
- label='Remove Collection from UI',
- placeholder=remove_placeholder,
- interactive=True)
- purge_langchain_mode_text = gr.Textbox(value="", visible=visible_add_remove_collection,
- label='Purge Collection (UI, DB, & source files)',
- placeholder=remove_placeholder,
- interactive=True)
- sync_sources_btn = gr.Button(
- value="Synchronize DB and UI [only required if not logged in and have shared docs]",
- scale=0, size='sm',
- visible=sources_visible and allow_upload_to_user_data and not kwargs[
- 'large_file_count_mode'])
- load_langchain = gr.Button(
- value="Load Collections State [only required if logged in as another user]", scale=0,
- size='sm',
- visible=False and allow_upload_to_user_data and
- kwargs['langchain_mode'] != 'Disabled')
- with gr.Column(scale=5):
- if kwargs['langchain_mode'] != 'Disabled' and visible_add_remove_collection:
- df0 = get_df_langchain_mode_paths(selection_docs_state0, None, dbs1=dbs)
- else:
- df0 = pd.DataFrame(None)
- langchain_mode_path_text = gr.Dataframe(value=df0,
- visible=visible_add_remove_collection,
- label='LangChain Mode-Path',
- show_label=False,
- interactive=False)
-
- sources_row = gr.Row(visible=kwargs['langchain_mode'] != 'Disabled' and enable_sources_list,
- equal_height=False)
- with sources_row:
- with gr.Column(scale=1):
- file_source = gr.File(interactive=False,
- label="Download File w/Sources")
- with gr.Column(scale=2):
- sources_text = gr.HTML(label='Sources Added', interactive=False)
-
- doc_exception_text = gr.Textbox(value="", label='Document Exceptions',
- interactive=False,
- visible=kwargs['langchain_mode'] != 'Disabled')
- file_types_str = ' '.join(file_types) + ' URL ArXiv TEXT'
- gr.Textbox(value=file_types_str, label='Document Types Supported',
- lines=2,
- interactive=False,
- visible=kwargs['langchain_mode'] != 'Disabled')
-
- doc_view_tab = gr.TabItem("Document Viewer") \
- if kwargs['visible_doc_view_tab'] else gr.Row(visible=False)
- with doc_view_tab:
- with gr.Row(visible=kwargs['langchain_mode'] != 'Disabled'):
- with gr.Column(scale=2):
- get_viewable_sources_btn = gr.Button(value="Update UI with Document(s) from DB", scale=0,
- size='sm',
- visible=sources_visible and kwargs[
- 'large_file_count_mode'])
- view_document_choice = gr.Dropdown(viewable_docs_state0,
- label="Select Single Document to View",
- value=None,
- interactive=True,
- multiselect=False,
- visible=True,
- )
- info_view_raw = "Raw text shown if render of original doc fails"
- if is_public:
- info_view_raw += " (Up to %s chunks in public portal)" % kwargs['max_raw_chunks']
- view_raw_text_checkbox = gr.Checkbox(label="View Database Text", value=False,
- info=info_view_raw,
- visible=kwargs['db_type'] in ['chroma', 'chroma_old'])
- with gr.Column(scale=4):
- pass
- doc_view = gr.HTML(visible=False)
- doc_view2 = gr.Dataframe(visible=False)
- doc_view3 = gr.JSON(visible=False)
- doc_view4 = gr.Markdown(visible=False)
- doc_view5 = gr.HTML(visible=False)
-
- chat_tab = gr.TabItem("Chat History") \
- if kwargs['visible_chat_history_tab'] else gr.Row(visible=False)
- with chat_tab:
- with gr.Row():
- with gr.Column(scale=1):
- remove_chat_btn = gr.Button(value="Remove Selected Saved Chats", visible=True, size='sm')
- flag_btn = gr.Button("Flag Current Chat", size='sm')
- export_chats_btn = gr.Button(value="Export Chats to Download", size='sm')
- with gr.Column(scale=4):
- pass
- with gr.Row():
- chats_file = gr.File(interactive=False, label="Download Exported Chats")
- chatsup_output = gr.File(label="Upload Chat File(s)",
- file_types=['.json'],
- file_count='multiple',
- elem_id="warning", elem_classes="feedback")
- with gr.Row():
- if 'mbart-' in kwargs['model_lower']:
- src_lang = gr.Dropdown(list(languages_covered().keys()),
- value=kwargs['src_lang'],
- label="Input Language")
- tgt_lang = gr.Dropdown(list(languages_covered().keys()),
- value=kwargs['tgt_lang'],
- label="Output Language")
-
- chat_exception_text = gr.Textbox(value="", visible=True, label='Chat Exceptions',
- interactive=False)
- expert_tab = gr.TabItem("Expert") \
- if kwargs['visible_expert_tab'] else gr.Row(visible=False)
- with expert_tab:
- with gr.Row():
- with gr.Column():
- prompt_type = gr.Dropdown(prompt_types_strings,
- value=kwargs['prompt_type'], label="Prompt Type",
- visible=not kwargs['model_lock'],
- interactive=not is_public,
- )
- prompt_type2 = gr.Dropdown(prompt_types_strings,
- value=kwargs['prompt_type'], label="Prompt Type Model 2",
- visible=False and not kwargs['model_lock'],
- interactive=not is_public)
- system_prompt = gr.Textbox(label="System Prompt",
- info="If 'auto', then use the model's system prompt,"
- " else use this message."
- " If empty, no system message is used",
- value=kwargs['system_prompt'])
- context = gr.Textbox(lines=2, label="System Pre-Context",
- info="Directly prepended without prompt processing (before Pre-Conversation)",
- value=kwargs['context'])
- chat_conversation = gr.Textbox(lines=2, label="Pre-Conversation",
- info="Prepend conversation for instruct/chat models as a list of (human, bot) tuples",
- value=kwargs['chat_conversation'])
- text_context_list = gr.Textbox(lines=2, label="Text Doc Q/A",
- info="List of strings, for document Q/A, for bypassing database (i.e. also works in LLM Mode)",
- value='',  # empty by default; primarily populated via API
- visible=not is_public, # primarily meant for API
- )
- iinput = gr.Textbox(lines=2, label="Input for Instruct prompt types",
- info="If given for document query, added after query",
- value=kwargs['iinput'],
- placeholder=kwargs['placeholder_input'],
- interactive=not is_public)
- with gr.Column():
- pre_prompt_query = gr.Textbox(label="Query Pre-Prompt",
- info="Added before documents",
- value=kwargs['pre_prompt_query'] or '')
- prompt_query = gr.Textbox(label="Query Prompt",
- info="Added after documents",
- value=kwargs['prompt_query'] or '')
- pre_prompt_summary = gr.Textbox(label="Summary Pre-Prompt",
- info="Added before documents",
- value=kwargs['pre_prompt_summary'] or '')
- prompt_summary = gr.Textbox(label="Summary Prompt",
- info="Added after documents (if query given, 'Focusing on {query}, ' is prepended)",
- value=kwargs['prompt_summary'] or '')
- with gr.Row(visible=not is_public):
- image_loaders = gr.CheckboxGroup(image_loaders_options,
- label="Force Image Reader",
- value=image_loaders_options0)
- pdf_loaders = gr.CheckboxGroup(pdf_loaders_options,
- label="Force PDF Reader",
- value=pdf_loaders_options0)
- url_loaders = gr.CheckboxGroup(url_loaders_options,
- label="Force URL Reader", value=url_loaders_options0)
- jq_schema = gr.Textbox(label="JSON jq_schema", value=jq_schema0)
-
- min_top_k_docs, max_top_k_docs, label_top_k_docs = get_minmax_top_k_docs(is_public)
- top_k_docs = gr.Slider(minimum=min_top_k_docs, maximum=max_top_k_docs, step=1,
- value=kwargs['top_k_docs'],
- label=label_top_k_docs,
- # info="For LangChain",
- visible=kwargs['langchain_mode'] != 'Disabled',
- interactive=not is_public)
- chunk_size = gr.Number(value=kwargs['chunk_size'],
- label="Chunk size for document chunking",
- info="For LangChain (ignored if chunk=False)",
- minimum=128,
- maximum=2048,
- visible=kwargs['langchain_mode'] != 'Disabled',
- interactive=not is_public,
- precision=0)
- docs_ordering_type = gr.Radio(
- docs_ordering_types,
- value=kwargs['docs_ordering_type'],
- label="Document Sorting in LLM Context",
- visible=True)
- chunk = gr.components.Checkbox(value=kwargs['chunk'],
- label="Whether to chunk documents",
- info="For LangChain",
- visible=kwargs['langchain_mode'] != 'Disabled',
- interactive=not is_public)
- embed = gr.components.Checkbox(value=True,
- label="Whether to embed text",
- info="For LangChain",
- visible=False)
- with gr.Row():
- stream_output = gr.components.Checkbox(label="Stream output",
- value=kwargs['stream_output'])
- do_sample = gr.Checkbox(label="Sample",
- info="Enable sampler (required for use of temperature, top_p, top_k)",
- value=kwargs['do_sample'])
- max_time = gr.Slider(minimum=0, maximum=kwargs['max_max_time'], step=1,
- value=min(kwargs['max_max_time'],
- kwargs['max_time']), label="Max. time",
- info="Max. time to search optimal output.")
- temperature = gr.Slider(minimum=0.01, maximum=2,
- value=kwargs['temperature'],
- label="Temperature",
- info="Lower is deterministic, higher more creative")
- top_p = gr.Slider(minimum=1e-3, maximum=1.0 - 1e-3,
- value=kwargs['top_p'], label="Top p",
- info="Cumulative probability of tokens to sample from")
- top_k = gr.Slider(
- minimum=1, maximum=100, step=1,
- value=kwargs['top_k'], label="Top k",
- info='Num. tokens to sample from'
- )
- # FIXME: https://github.com/h2oai/h2ogpt/issues/106
- if os.getenv('TESTINGFAIL'):
- max_beams = 8 if not (memory_restriction_level or is_public) else 1
- else:
- max_beams = 1
- num_beams = gr.Slider(minimum=1, maximum=max_beams, step=1,
- value=min(max_beams, kwargs['num_beams']), label="Beams",
- info="Number of searches for optimal overall probability. "
- "Uses more GPU memory/compute",
- interactive=False, visible=max_beams > 1)
- max_max_new_tokens = get_max_max_new_tokens(model_state0, **kwargs)
- max_new_tokens = gr.Slider(
- minimum=1, maximum=max_max_new_tokens, step=1,
- value=min(max_max_new_tokens, kwargs['max_new_tokens']), label="Max output length",
- )
- min_new_tokens = gr.Slider(
- minimum=0, maximum=max_max_new_tokens, step=1,
- value=min(max_max_new_tokens, kwargs['min_new_tokens']), label="Min output length",
- )
- max_new_tokens2 = gr.Slider(
- minimum=1, maximum=max_max_new_tokens, step=1,
- value=min(max_max_new_tokens, kwargs['max_new_tokens']), label="Max output length 2",
- visible=False and not kwargs['model_lock'],
- )
- min_new_tokens2 = gr.Slider(
- minimum=0, maximum=max_max_new_tokens, step=1,
- value=min(max_max_new_tokens, kwargs['min_new_tokens']), label="Min output length 2",
- visible=False and not kwargs['model_lock'],
- )
- min_max_new_tokens = gr.Slider(
- minimum=1, maximum=max_max_new_tokens, step=1,
- value=min(max_max_new_tokens, kwargs['min_max_new_tokens']),
- label="Min. of Max output length",
- )
- early_stopping = gr.Checkbox(label="EarlyStopping", info="Stop early in beam search",
- value=kwargs['early_stopping'], visible=max_beams > 1)
- repetition_penalty = gr.Slider(minimum=0.01, maximum=3.0,
- value=kwargs['repetition_penalty'],
- label="Repetition Penalty")
- num_return_sequences = gr.Slider(minimum=1, maximum=10, step=1,
- value=kwargs['num_return_sequences'],
- label="Number Returns", info="Must be <= num_beams",
- interactive=not is_public, visible=max_beams > 1)
- chat = gr.components.Checkbox(label="Chat mode", value=kwargs['chat'],
- visible=False, # no longer support nochat in UI
- interactive=not is_public,
- )
- with gr.Row():
- count_chat_tokens_btn = gr.Button(value="Count Chat Tokens",
- visible=not is_public and not kwargs['model_lock'],
- interactive=not is_public, size='sm')
- chat_token_count = gr.Textbox(label="Chat Token Count Result", value=None,
- visible=not is_public and not kwargs['model_lock'],
- interactive=False)
-
- models_tab = gr.TabItem("Models") \
- if kwargs['visible_models_tab'] and not bool(kwargs['model_lock']) else gr.Row(visible=False)
- with models_tab:
- load_msg = "Download/Load Model" if not is_public \
- else "LOAD-UNLOAD DISABLED FOR HOSTED DEMO"
- if kwargs['base_model'] not in ['', None, no_model_str]:
- load_msg += ' [WARNING: Avoid --base_model on CLI for memory-efficient Load-Unload]'
- load_msg2 = load_msg + " (Model 2)"
- variant_load_msg = 'primary' if not is_public else 'secondary'
- with gr.Row():
- n_gpus_list = [str(x) for x in list(range(-1, n_gpus))]
- with gr.Column():
- with gr.Row():
- with gr.Column(scale=20, visible=not kwargs['model_lock']):
- load_model_button = gr.Button(load_msg, variant=variant_load_msg, scale=0,
- size='sm', interactive=not is_public)
- model_choice = gr.Dropdown(model_options_state.value[0], label="Choose Base Model",
- value=kwargs['base_model'])
- lora_choice = gr.Dropdown(lora_options_state.value[0], label="Choose LORA",
- value=kwargs['lora_weights'], visible=kwargs['show_lora'])
- server_choice = gr.Dropdown(server_options_state.value[0], label="Choose Server",
- value=kwargs['inference_server'], visible=not is_public)
- max_seq_len = gr.Number(value=kwargs['max_seq_len'] or 2048,
- minimum=128,
- maximum=2 ** 18,
- info="If standard LLaMa-2, choose up to 4096",
- label="max_seq_len")
- rope_scaling = gr.Textbox(value=str(kwargs['rope_scaling'] or {}),
- label="rope_scaling")
- row_llama = gr.Row(visible=kwargs['show_llama'] and kwargs['base_model'] == 'llama')
- with row_llama:
- model_path_llama = gr.Textbox(value=kwargs['llamacpp_dict']['model_path_llama'],
- lines=4,
- label="Choose LLaMa.cpp Model Path/URL (for Base Model: llama)",
- visible=kwargs['show_llama'])
- n_gpu_layers = gr.Number(value=kwargs['llamacpp_dict']['n_gpu_layers'],
- minimum=0, maximum=100,
- label="LLaMa.cpp Num. GPU Layers Offloaded",
- visible=kwargs['show_llama'])
- n_batch = gr.Number(value=kwargs['llamacpp_dict']['n_batch'],
- minimum=0, maximum=2048,
- label="LLaMa.cpp Batch Size",
- visible=kwargs['show_llama'])
- n_gqa = gr.Number(value=kwargs['llamacpp_dict']['n_gqa'],
- minimum=0, maximum=32,
- label="LLaMa.cpp Num. Group Query Attention (8 for 70B LLaMa2)",
- visible=kwargs['show_llama'])
- llamacpp_dict_more = gr.Textbox(value="{}",
- lines=4,
- label="Dict for other LLaMa.cpp/GPT4All options",
- visible=kwargs['show_llama'])
- row_gpt4all = gr.Row(
- visible=kwargs['show_gpt4all'] and kwargs['base_model'] in ['gptj',
- 'gpt4all_llama'])
- with row_gpt4all:
- model_name_gptj = gr.Textbox(value=kwargs['llamacpp_dict']['model_name_gptj'],
- label="Choose GPT4All GPTJ Model Path/URL (for Base Model: gptj)",
- visible=kwargs['show_gpt4all'])
- model_name_gpt4all_llama = gr.Textbox(
- value=kwargs['llamacpp_dict']['model_name_gpt4all_llama'],
- label="Choose GPT4All LLaMa Model Path/URL (for Base Model: gpt4all_llama)",
- visible=kwargs['show_gpt4all'])
- with gr.Column(scale=1, visible=not kwargs['model_lock']):
- model_load8bit_checkbox = gr.components.Checkbox(
- label="Load 8-bit [requires support]",
- value=kwargs['load_8bit'], interactive=not is_public)
- model_load4bit_checkbox = gr.components.Checkbox(
- label="Load 4-bit [requires support]",
- value=kwargs['load_4bit'], interactive=not is_public)
- model_low_bit_mode = gr.Slider(value=kwargs['low_bit_mode'],
- minimum=0, maximum=4, step=1,
- label="low_bit_mode")
- model_load_gptq = gr.Textbox(label="gptq", value=kwargs['load_gptq'],
- interactive=not is_public)
- model_load_exllama_checkbox = gr.components.Checkbox(
- label="Load load_exllama [requires support]",
- value=kwargs['load_exllama'], interactive=not is_public)
- model_safetensors_checkbox = gr.components.Checkbox(
- label="Safetensors [requires support]",
- value=kwargs['use_safetensors'], interactive=not is_public)
- model_revision = gr.Textbox(label="revision", value=kwargs['revision'],
- interactive=not is_public)
- model_use_gpu_id_checkbox = gr.components.Checkbox(
- label="Choose Devices [If not Checked, use all GPUs]",
- value=kwargs['use_gpu_id'], interactive=not is_public,
- visible=n_gpus != 0)
- model_gpu = gr.Dropdown(n_gpus_list,
- label="GPU ID [-1 = all GPUs, if Choose is enabled]",
- value=kwargs['gpu_id'], interactive=not is_public,
- visible=n_gpus != 0)
- model_used = gr.Textbox(label="Current Model", value=kwargs['base_model'],
- interactive=False)
- lora_used = gr.Textbox(label="Current LORA", value=kwargs['lora_weights'],
- visible=kwargs['show_lora'], interactive=False)
- server_used = gr.Textbox(label="Current Server",
- value=kwargs['inference_server'],
- visible=bool(kwargs['inference_server']) and not is_public,
- interactive=False)
- prompt_dict = gr.Textbox(label="Prompt (or Custom)",
- value=pprint.pformat(kwargs['prompt_dict'], indent=4),
- interactive=not is_public, lines=4)
- col_model2 = gr.Column(visible=False)
- with col_model2:
- with gr.Row():
- with gr.Column(scale=20, visible=not kwargs['model_lock']):
- load_model_button2 = gr.Button(load_msg2, variant=variant_load_msg, scale=0,
- size='sm', interactive=not is_public)
- model_choice2 = gr.Dropdown(model_options_state.value[0], label="Choose Model 2",
- value=no_model_str)
- lora_choice2 = gr.Dropdown(lora_options_state.value[0], label="Choose LORA 2",
- value=no_lora_str,
- visible=kwargs['show_lora'])
- server_choice2 = gr.Dropdown(server_options_state.value[0], label="Choose Server 2",
- value=no_server_str,
- visible=not is_public)
- max_seq_len2 = gr.Number(value=kwargs['max_seq_len'] or 2048,
- minimum=128,
- maximum=2 ** 18,
- info="If standard LLaMa-2, choose up to 4096",
- label="max_seq_len Model 2")
- rope_scaling2 = gr.Textbox(value=str(kwargs['rope_scaling'] or {}),
- label="rope_scaling Model 2")
-
- row_llama2 = gr.Row(
- visible=kwargs['show_llama'] and kwargs['base_model'] == 'llama')
- with row_llama2:
- model_path_llama2 = gr.Textbox(
- value=kwargs['llamacpp_dict']['model_path_llama'],
- label="Choose LLaMa.cpp Model 2 Path/URL (for Base Model: llama)",
- lines=4,
- visible=kwargs['show_llama'])
- n_gpu_layers2 = gr.Number(value=kwargs['llamacpp_dict']['n_gpu_layers'],
- minimum=0, maximum=100,
- label="LLaMa.cpp Num. GPU 2 Layers Offloaded",
- visible=kwargs['show_llama'])
- n_batch2 = gr.Number(value=kwargs['llamacpp_dict']['n_batch'],
- minimum=0, maximum=2048,
- label="LLaMa.cpp Model 2 Batch Size",
- visible=kwargs['show_llama'])
- n_gqa2 = gr.Number(value=kwargs['llamacpp_dict']['n_gqa'],
- minimum=0, maximum=32,
- label="LLaMa.cpp Model 2 Num. Group Query Attention (8 for 70B LLaMa2)",
- visible=kwargs['show_llama'])
- llamacpp_dict_more2 = gr.Textbox(value="{}",
- lines=4,
- label="Model 2 Dict for other LLaMa.cpp/GPT4All options",
- visible=kwargs['show_llama'])
- row_gpt4all2 = gr.Row(
- visible=kwargs['show_gpt4all'] and kwargs['base_model'] in ['gptj',
- 'gpt4all_llama'])
- with row_gpt4all2:
- model_name_gptj2 = gr.Textbox(value=kwargs['llamacpp_dict']['model_name_gptj'],
- label="Choose GPT4All GPTJ Model 2 Path/URL (for Base Model: gptj)",
- visible=kwargs['show_gpt4all'])
- model_name_gpt4all_llama2 = gr.Textbox(
- value=kwargs['llamacpp_dict']['model_name_gpt4all_llama'],
- label="Choose GPT4All LLaMa Model 2 Path/URL (for Base Model: gpt4all_llama)",
- visible=kwargs['show_gpt4all'])
-
- with gr.Column(scale=1, visible=not kwargs['model_lock']):
- model_load8bit_checkbox2 = gr.components.Checkbox(
- label="Load 8-bit (Model 2) [requires support]",
- value=kwargs['load_8bit'], interactive=not is_public)
- model_load4bit_checkbox2 = gr.components.Checkbox(
- label="Load 4-bit (Model 2) [requires support]",
- value=kwargs['load_4bit'], interactive=not is_public)
- model_low_bit_mode2 = gr.Slider(value=kwargs['low_bit_mode'],
- # ok that same as Model 1
- minimum=0, maximum=4, step=1,
- label="low_bit_mode (Model 2)")
- model_load_gptq2 = gr.Textbox(label="gptq (Model 2)", value='',
- interactive=not is_public)
- model_load_exllama_checkbox2 = gr.components.Checkbox(
- label="Load load_exllama (Model 2) [requires support]",
- value=False, interactive=not is_public)
- model_safetensors_checkbox2 = gr.components.Checkbox(
- label="Safetensors (Model 2) [requires support]",
- value=False, interactive=not is_public)
- model_revision2 = gr.Textbox(label="revision (Model 2)", value='',
- interactive=not is_public)
- model_use_gpu_id_checkbox2 = gr.components.Checkbox(
- label="Choose Devices (Model 2) [If not Checked, use all GPUs]",
- value=kwargs[
- 'use_gpu_id'], interactive=not is_public)
- model_gpu2 = gr.Dropdown(n_gpus_list,
- label="GPU ID (Model 2) [-1 = all GPUs, if choose is enabled]",
- value=kwargs['gpu_id'], interactive=not is_public)
- # no model/lora loaded ever in model2 by default
- model_used2 = gr.Textbox(label="Current Model 2", value=no_model_str,
- interactive=False)
- lora_used2 = gr.Textbox(label="Current LORA (Model 2)", value=no_lora_str,
- visible=kwargs['show_lora'], interactive=False)
- server_used2 = gr.Textbox(label="Current Server (Model 2)", value=no_server_str,
- interactive=False,
- visible=not is_public)
- prompt_dict2 = gr.Textbox(label="Prompt (or Custom) (Model 2)",
- value=pprint.pformat(kwargs['prompt_dict'], indent=4),
- interactive=not is_public, lines=4)
- compare_checkbox = gr.components.Checkbox(label="Compare Two Models",
- value=kwargs['model_lock'],
- visible=not is_public and not kwargs['model_lock'])
- with gr.Row(visible=not kwargs['model_lock']):
- with gr.Column(scale=50):
- new_model = gr.Textbox(label="New Model name/path/URL", interactive=not is_public)
- with gr.Column(scale=50):
- new_lora = gr.Textbox(label="New LORA name/path/URL", visible=kwargs['show_lora'],
- interactive=not is_public)
- with gr.Column(scale=50):
- new_server = gr.Textbox(label="New Server url:port", interactive=not is_public)
- with gr.Row():
- add_model_lora_server_button = gr.Button("Add new Model, Lora, Server url:port", scale=0,
- variant=variant_load_msg,
- size='sm', interactive=not is_public)
- system_tab = gr.TabItem("System") \
- if kwargs['visible_system_tab'] else gr.Row(visible=False)
- with system_tab:
- with gr.Row():
- with gr.Column(scale=1):
- side_bar_text = gr.Textbox('on' if kwargs['visible_side_bar'] else 'off',
- visible=False, interactive=False)
- doc_count_text = gr.Textbox('on' if kwargs['visible_doc_track'] else 'off',
- visible=False, interactive=False)
- submit_buttons_text = gr.Textbox('on' if kwargs['visible_submit_buttons'] else 'off',
- visible=False, interactive=False)
- visible_models_text = gr.Textbox('on' if kwargs['visible_visible_models'] else 'off',
- visible=False, interactive=False)
-
- side_bar_btn = gr.Button("Toggle SideBar", variant="secondary", size="sm")
- doc_count_btn = gr.Button("Toggle SideBar Document Count/Show Newest", variant="secondary",
- size="sm")
- submit_buttons_btn = gr.Button("Toggle Submit Buttons", variant="secondary", size="sm")
- visible_model_btn = gr.Button("Toggle Visible Models", variant="secondary", size="sm")
- col_tabs_scale = gr.Slider(minimum=1, maximum=20, value=10, step=1, label='Window Size')
- text_outputs_height = gr.Slider(minimum=100, maximum=2000, value=kwargs['height'] or 400,
- step=50, label='Chat Height')
- dark_mode_btn = gr.Button("Dark Mode", variant="secondary", size="sm")
- with gr.Column(scale=4):
- pass
- system_visible0 = not is_public and not admin_pass
- admin_row = gr.Row()
- with admin_row:
- with gr.Column(scale=1):
- admin_pass_textbox = gr.Textbox(label="Admin Password",
- type='password',
- visible=not system_visible0)
- with gr.Column(scale=4):
- pass
- system_row = gr.Row(visible=system_visible0)
- with system_row:
- with gr.Column():
- with gr.Row():
- system_btn = gr.Button(value='Get System Info', size='sm')
- system_text = gr.Textbox(label='System Info', interactive=False, show_copy_button=True)
- with gr.Row():
- system_input = gr.Textbox(label='System Info Dict Password', interactive=True,
- visible=not is_public)
- system_btn2 = gr.Button(value='Get System Info Dict', visible=not is_public, size='sm')
- system_text2 = gr.Textbox(label='System Info Dict', interactive=False,
- visible=not is_public, show_copy_button=True)
- with gr.Row():
- system_btn3 = gr.Button(value='Get Hash', visible=not is_public, size='sm')
- system_text3 = gr.Textbox(label='Hash', interactive=False,
- visible=not is_public, show_copy_button=True)
- system_btn4 = gr.Button(value='Get Model Names', visible=not is_public, size='sm')
- system_text4 = gr.Textbox(label='Model Names', interactive=False,
- visible=not is_public, show_copy_button=True)
-
- with gr.Row():
- zip_btn = gr.Button("Zip", size='sm')
- zip_text = gr.Textbox(label="Zip file name", interactive=False)
- file_output = gr.File(interactive=False, label="Zip file to Download")
- with gr.Row():
- s3up_btn = gr.Button("S3UP", size='sm')
- s3up_text = gr.Textbox(label='S3UP result', interactive=False)
-
- tos_tab = gr.TabItem("Terms of Service") \
- if kwargs['visible_tos_tab'] else gr.Row(visible=False)
- with tos_tab:
- description = ""
- description += """ DISCLAIMERS:
- The model was trained on The Pile and other data, which may contain objectionable content. Use at own risk.
- # etc. added in chat, try to remove some of that to help avoid dup entries when hit new conversation
- is_same = True
- # length of conversation has to be same
- if len(x) != len(y):
- return False
- if len(x) != len(y):
- return False
- for stepx, stepy in zip(x, y):
- if len(stepx) != len(stepy):
- # something off with a conversation
- return False
- for stepxx, stepyy in zip(stepx, stepy):
- if len(stepxx) != len(stepyy):
- # something off with a conversation
- return False
- if len(stepxx) != 2:
- # something off
- return False
- if len(stepyy) != 2:
- # something off
- return False
- questionx = stepxx[0].replace('<p>', '').replace('</p>', '') if stepxx[0] is not None else None
- answerx = stepxx[1].replace('<p>', '').replace('</p>', '') if stepxx[1] is not None else None
-
- questiony = stepyy[0].replace('<p>', '').replace('</p>', '') if stepyy[0] is not None else None
- answery = stepyy[1].replace('<p>', '').replace('</p>', '') if stepyy[1] is not None else None
-
- if questionx != questiony or answerx != answery:
- return False
- return is_same
-
- def save_chat(*args, chat_is_list=False, auth_filename=None, auth_freeze=None):
- args_list = list(args)
- db1s = args_list[0]
- requests_state1 = args_list[1]
- args_list = args_list[2:]
- if not chat_is_list:
- # list of chatbot histories,
- # can't pass in list with list of chatbot histories and state due to gradio limits
- chat_list = args_list[:-1]
- else:
- assert len(args_list) == 2
- chat_list = args_list[0]
- # if old chat file with single chatbot, get into shape
- if isinstance(chat_list, list) and len(chat_list) > 0 and isinstance(chat_list[0], list) and len(
- chat_list[0]) == 2 and isinstance(chat_list[0][0], str) and isinstance(chat_list[0][1], str):
- chat_list = [chat_list]
- # remove None histories
- chat_list_not_none = [x for x in chat_list if x and len(x) > 0 and len(x[0]) == 2 and x[0][1] is not None]
- chat_list_none = [x for x in chat_list if x not in chat_list_not_none]
- if len(chat_list_none) > 0 and len(chat_list_not_none) == 0:
- raise ValueError("Invalid chat file")
- # dict with keys of short chat names, values of list of list of chatbot histories
- chat_state1 = args_list[-1]
- short_chats = list(chat_state1.keys())
- if len(chat_list_not_none) > 0:
- # make short_chat key from only first history, based upon question that is same anyways
- chat_first = chat_list_not_none[0]
- short_chat = get_short_chat(chat_first, short_chats)
- if short_chat:
- old_chat_lists = list(chat_state1.values())
- already_exists = any([is_chat_same(chat_list, x) for x in old_chat_lists])
- if not already_exists:
- chat_state1[short_chat] = chat_list.copy()
-
- # reverse so newest at top
- choices = list(chat_state1.keys()).copy()
- choices.reverse()
-
- # save saved chats and chatbots to auth file
- text_output1 = chat_list[0]
- text_output21 = chat_list[1]
- text_outputs1 = chat_list[2:]
- save_auth(requests_state1, auth_filename, auth_freeze, chat_state1=chat_state1,
- text_output1=text_output1, text_output21=text_output21, text_outputs1=text_outputs1)
-
- return chat_state1, gr.update(choices=choices, value=None)
-
- def switch_chat(chat_key, chat_state1, num_model_lock=0):
- chosen_chat = chat_state1[chat_key]
- # deal with possible different size of chat list vs. current list
- ret_chat = [None] * (2 + num_model_lock)
- for chati in range(0, 2 + num_model_lock):
- ret_chat[chati % len(ret_chat)] = chosen_chat[chati % len(chosen_chat)]
- return tuple(ret_chat)
-
- def clear_texts(*args):
- return tuple([gr.Textbox.update(value='')] * len(args))
-
- def clear_scores():
- return gr.Textbox.update(value=res_value), \
- gr.Textbox.update(value='Response Score: NA'), \
- gr.Textbox.update(value='Response Score: NA')
-
- switch_chat_fun = functools.partial(switch_chat, num_model_lock=len(text_outputs))
- radio_chats.input(switch_chat_fun,
- inputs=[radio_chats, chat_state],
- outputs=[text_output, text_output2] + text_outputs) \
- .then(clear_scores, outputs=[score_text, score_text2, score_text_nochat])
-
- def remove_chat(chat_key, chat_state1):
- if isinstance(chat_key, str):
- chat_state1.pop(chat_key, None)
- return gr.update(choices=list(chat_state1.keys()), value=None), chat_state1
-
- remove_chat_event = remove_chat_btn.click(remove_chat,
- inputs=[radio_chats, chat_state],
- outputs=[radio_chats, chat_state],
- queue=False, api_name='remove_chat')
-
- def get_chats1(chat_state1):
- base = 'chats'
- base = makedirs(base, exist_ok=True, tmp_ok=True, use_base=True)
- filename = os.path.join(base, 'chats_%s.json' % str(uuid.uuid4()))
- with open(filename, "wt") as f:
- f.write(json.dumps(chat_state1, indent=2))
- return filename
-
- export_chat_event = export_chats_btn.click(get_chats1, inputs=chat_state, outputs=chats_file, queue=False,
- api_name='export_chats' if allow_api else None)
-
- def add_chats_from_file(db1s, requests_state1, file, chat_state1, radio_chats1, chat_exception_text1,
- auth_filename=None, auth_freeze=None):
- if not file:
- return None, chat_state1, gr.update(choices=list(chat_state1.keys()), value=None), chat_exception_text1
- if isinstance(file, str):
- files = [file]
- else:
- files = file
- if not files:
- return None, chat_state1, gr.update(choices=list(chat_state1.keys()), value=None), chat_exception_text1
- chat_exception_list = []
- for file1 in files:
- try:
- if hasattr(file1, 'name'):
- file1 = file1.name
- with open(file1, "rt") as f:
- new_chats = json.loads(f.read())
- for chat1_k, chat1_v in new_chats.items():
- # ignore chat1_k, regenerate and de-dup to avoid loss
- chat_state1, _ = save_chat(db1s, requests_state1, chat1_v, chat_state1, chat_is_list=True)
- except BaseException as e:
- t, v, tb = sys.exc_info()
- ex = ''.join(traceback.format_exception(t, v, tb))
- ex_str = "File %s exception: %s" % (file1, str(e))
- print(ex_str, flush=True)
- chat_exception_list.append(ex_str)
- chat_exception_text1 = '\n'.join(chat_exception_list)
- # save chat to auth file
- save_auth(requests_state1, auth_filename, auth_freeze, chat_state1=chat_state1)
- return None, chat_state1, gr.update(choices=list(chat_state1.keys()), value=None), chat_exception_text1
-
- # note for update_user_db_func output is ignored for db
- chatup_change_eventa = chatsup_output.change(user_state_setup,
- inputs=[my_db_state, requests_state, langchain_mode],
- outputs=[my_db_state, requests_state, langchain_mode],
- show_progress='minimal')
- add_chats_from_file_func = functools.partial(add_chats_from_file,
- auth_filename=kwargs['auth_filename'],
- auth_freeze=kwargs['auth_freeze'],
- )
- chatup_change_event = chatup_change_eventa.then(add_chats_from_file_func,
- inputs=[my_db_state, requests_state] +
- [chatsup_output, chat_state, radio_chats,
- chat_exception_text],
- outputs=[chatsup_output, chat_state, radio_chats,
- chat_exception_text],
- queue=False,
- api_name='add_to_chats' if allow_api else None)
-
- clear_chat_event = clear_chat_btn.click(fn=clear_texts,
- inputs=[text_output, text_output2] + text_outputs,
- outputs=[text_output, text_output2] + text_outputs,
- queue=False, api_name='clear' if allow_api else None) \
- .then(deselect_radio_chats, inputs=None, outputs=radio_chats, queue=False) \
- .then(clear_scores, outputs=[score_text, score_text2, score_text_nochat])
-
- clear_eventa = save_chat_btn.click(user_state_setup,
- inputs=[my_db_state, requests_state, langchain_mode],
- outputs=[my_db_state, requests_state, langchain_mode],
- show_progress='minimal')
- save_chat_func = functools.partial(save_chat,
- auth_filename=kwargs['auth_filename'],
- auth_freeze=kwargs['auth_freeze'],
- )
- clear_event = clear_eventa.then(save_chat_func,
- inputs=[my_db_state, requests_state] +
- [text_output, text_output2] + text_outputs +
- [chat_state],
- outputs=[chat_state, radio_chats],
- api_name='save_chat' if allow_api else None)
- if kwargs['score_model']:
- clear_event2 = clear_event.then(clear_scores, outputs=[score_text, score_text2, score_text_nochat])
-
- # NOTE: clear of instruction/iinput for nochat has to come after score,
- # because score for nochat consumes actual textbox, while chat consumes chat history filled by user()
- no_chat_args = dict(fn=fun,
- inputs=[model_state, my_db_state, selection_docs_state, requests_state] + inputs_list,
- outputs=text_output_nochat,
- queue=queue,
- )
- submit_event_nochat = submit_nochat.click(**no_chat_args, api_name='submit_nochat' if allow_api else None) \
- .then(clear_torch_cache) \
- .then(**score_args_nochat, api_name='instruction_bot_score_nochat' if allow_api else None, queue=queue) \
- .then(clear_instruct, None, instruction_nochat) \
- .then(clear_instruct, None, iinput_nochat) \
- .then(clear_torch_cache)
- # copy of above with text box submission
- submit_event_nochat2 = instruction_nochat.submit(**no_chat_args) \
- .then(clear_torch_cache) \
- .then(**score_args_nochat, queue=queue) \
- .then(clear_instruct, None, instruction_nochat) \
- .then(clear_instruct, None, iinput_nochat) \
- .then(clear_torch_cache)
-
- submit_event_nochat_api = submit_nochat_api.click(fun_with_dict_str,
- inputs=[model_state, my_db_state, selection_docs_state,
- requests_state,
- inputs_dict_str],
- outputs=text_output_nochat_api,
- queue=True,  # required for generator
- api_name='submit_nochat_api' if allow_api else None)
-
- submit_event_nochat_api_plain = submit_nochat_api_plain.click(fun_with_dict_str_plain,
- inputs=inputs_dict_str,
- outputs=text_output_nochat_api,
- queue=False,
- api_name='submit_nochat_plain_api' if allow_api else None)
-
- def load_model(model_name, lora_weights, server_name, model_state_old, prompt_type_old,
- load_8bit, load_4bit, low_bit_mode,
- load_gptq, load_exllama, use_safetensors, revision,
- use_gpu_id, gpu_id, max_seq_len1, rope_scaling1,
- model_path_llama1, model_name_gptj1, model_name_gpt4all_llama1,
- n_gpu_layers1, n_batch1, n_gqa1, llamacpp_dict_more1,
- system_prompt1):
- try:
- llamacpp_dict = ast.literal_eval(llamacpp_dict_more1)
- except:
- print("Failed to use user input for llamacpp_dict_more1 dict", flush=True)
- llamacpp_dict = {}
- llamacpp_dict.update(dict(model_path_llama=model_path_llama1,
- model_name_gptj=model_name_gptj1,
- model_name_gpt4all_llama=model_name_gpt4all_llama1,
- n_gpu_layers=n_gpu_layers1,
- n_batch=n_batch1,
- n_gqa=n_gqa1,
- ))
-
- # ensure no API calls reach here
- if is_public:
- raise RuntimeError("Illegal access for %s" % model_name)
- # ensure old model removed from GPU memory
- if kwargs['debug']:
- print("Pre-switch pre-del GPU memory: %s" % get_torch_allocated(), flush=True)
-
- model0 = model_state0['model']
- if isinstance(model_state_old['model'], str) and \
- model0 is not None and \
- hasattr(model0, 'cpu'):
- # best can do, move model loaded at first to CPU
- model0.cpu()
-
- if model_state_old['model'] is not None and \
- not isinstance(model_state_old['model'], str):
- if hasattr(model_state_old['model'], 'cpu'):
- try:
- model_state_old['model'].cpu()
- except Exception as e:
- # sometimes hit NotImplementedError: Cannot copy out of meta tensor; no data!
- print("Unable to put model on CPU: %s" % str(e), flush=True)
- del model_state_old['model']
- model_state_old['model'] = None
-
- if model_state_old['tokenizer'] is not None and not isinstance(model_state_old['tokenizer'], str):
- del model_state_old['tokenizer']
- model_state_old['tokenizer'] = None
-
- clear_torch_cache()
- if kwargs['debug']:
- print("Pre-switch post-del GPU memory: %s" % get_torch_allocated(), flush=True)
- if not model_name:
- model_name = no_model_str
- if model_name == no_model_str:
- # no-op if no model, just free memory
- # no detranscribe needed for model, never go into evaluate
- lora_weights = no_lora_str
- server_name = no_server_str
- return kwargs['model_state_none'].copy(), \
- model_name, lora_weights, server_name, prompt_type_old, \
- gr.Slider.update(maximum=256), \
- gr.Slider.update(maximum=256)
-
- # don't deepcopy, can contain model itself
- all_kwargs1 = all_kwargs.copy()
- all_kwargs1['base_model'] = model_name.strip()
- all_kwargs1['load_8bit'] = load_8bit
- all_kwargs1['load_4bit'] = load_4bit
- all_kwargs1['low_bit_mode'] = low_bit_mode
- all_kwargs1['load_gptq'] = load_gptq
- all_kwargs1['load_exllama'] = load_exllama
- all_kwargs1['use_safetensors'] = use_safetensors
- all_kwargs1['revision'] = None if not revision else revision  # transcribe, don't pass ''
- all_kwargs1['use_gpu_id'] = use_gpu_id
- all_kwargs1['gpu_id'] = int(gpu_id) if gpu_id not in [None, 'None'] else None  # detranscribe
- all_kwargs1['llamacpp_dict'] = llamacpp_dict
- all_kwargs1['max_seq_len'] = max_seq_len1
- try:
- all_kwargs1['rope_scaling'] = str_to_dict(rope_scaling1)  # transcribe
- except:
- print("Failed to use user input for rope_scaling dict", flush=True)
- all_kwargs1['rope_scaling'] = {}
- model_lower = model_name.strip().lower()
- if model_lower in inv_prompt_type_to_model_lower:
- prompt_type1 = inv_prompt_type_to_model_lower[model_lower]
- else:
- prompt_type1 = prompt_type_old
-
- # detranscribe
- if lora_weights == no_lora_str:
- lora_weights = ''
- all_kwargs1['lora_weights'] = lora_weights.strip()
- if server_name == no_server_str:
- server_name = ''
- all_kwargs1['inference_server'] = server_name.strip()
-
- model1, tokenizer1, device1 = get_model(reward_type=False,
- **get_kwargs(get_model, exclude_names=['reward_type'],
- **all_kwargs1))
- clear_torch_cache()
-
- tokenizer_base_model = model_name
- prompt_dict1, error0 = get_prompt(prompt_type1, '',
- chat=False, context='', reduced=False, making_context=False,
- return_dict=True, system_prompt=system_prompt1)
- model_state_new = dict(model=model1, tokenizer=tokenizer1, device=device1,
- base_model=model_name, tokenizer_base_model=tokenizer_base_model,
- lora_weights=lora_weights, inference_server=server_name,
- prompt_type=prompt_type1, prompt_dict=prompt_dict1,
- # FIXME: not typically required, unless want to expose adding h2ogpt endpoint in UI
- visible_models=None, h2ogpt_key=None,
- )
-
- max_max_new_tokens1 = get_max_max_new_tokens(model_state_new, **kwargs)
-
- if kwargs['debug']:
- print("Post-switch GPU memory: %s" % get_torch_allocated(), flush=True)
- return model_state_new, model_name, lora_weights, server_name, prompt_type1, \
- gr.Slider.update(maximum=max_max_new_tokens1), \
- gr.Slider.update(maximum=max_max_new_tokens1)
-
- def get_prompt_str(prompt_type1, prompt_dict1, system_prompt1, which=0):
- if prompt_type1 in ['', None]:
- print("Got prompt_type %s: %s" % (which, prompt_type1), flush=True)
- return str({})
- prompt_dict1, prompt_dict_error = get_prompt(prompt_type1, prompt_dict1, chat=False, context='',
- reduced=False, making_context=False, return_dict=True,
- system_prompt=system_prompt1)
- if prompt_dict_error:
- return str(prompt_dict_error)
- else:
- # return so user can manipulate if want and use as custom
- return str(prompt_dict1)
-
- get_prompt_str_func1 = functools.partial(get_prompt_str, which=1)
- get_prompt_str_func2 = functools.partial(get_prompt_str, which=2)
- prompt_type.change(fn=get_prompt_str_func1, inputs=[prompt_type, prompt_dict, system_prompt],
- outputs=prompt_dict, queue=False)
- prompt_type2.change(fn=get_prompt_str_func2, inputs=[prompt_type2, prompt_dict2, system_prompt],
- outputs=prompt_dict2,
- queue=False)
-
- def dropdown_prompt_type_list(x):
- return gr.Dropdown.update(value=x)
-
- def chatbot_list(x, model_used_in):
- return gr.Textbox.update(label=f'h2oGPT [Model: {model_used_in}]')
-
- load_model_args = dict(fn=load_model,
- inputs=[model_choice, lora_choice, server_choice, model_state, prompt_type,
- model_load8bit_checkbox, model_load4bit_checkbox, model_low_bit_mode,
- model_load_gptq, model_load_exllama_checkbox,
- model_safetensors_checkbox, model_revision,
- model_use_gpu_id_checkbox, model_gpu,
- max_seq_len, rope_scaling,
- model_path_llama, model_name_gptj, model_name_gpt4all_llama,
- n_gpu_layers, n_batch, n_gqa, llamacpp_dict_more,
- system_prompt],
- outputs=[model_state, model_used, lora_used, server_used,
- # if prompt_type changes, prompt_dict will change via change rule
- prompt_type, max_new_tokens, min_new_tokens,
- ])
- prompt_update_args = dict(fn=dropdown_prompt_type_list, inputs=prompt_type, outputs=prompt_type)
- chatbot_update_args = dict(fn=chatbot_list, inputs=[text_output, model_used], outputs=text_output)
- nochat_update_args = dict(fn=chatbot_list, inputs=[text_output_nochat, model_used], outputs=text_output_nochat)
- load_model_event = load_model_button.click(**load_model_args,
- api_name='load_model' if allow_api and not is_public else None) \
- .then(**prompt_update_args) \
- .then(**chatbot_update_args) \
- .then(**nochat_update_args) \
- .then(clear_torch_cache)
-
- load_model_args2 = dict(fn=load_model,
- inputs=[model_choice2, lora_choice2, server_choice2, model_state2, prompt_type2,
- model_load8bit_checkbox2, model_load4bit_checkbox2, model_low_bit_mode2,
- model_load_gptq2, model_load_exllama_checkbox2,
- model_safetensors_checkbox2, model_revision2,
- model_use_gpu_id_checkbox2, model_gpu2,
- max_seq_len2, rope_scaling2,
- model_path_llama2, model_name_gptj2, model_name_gpt4all_llama2,
- n_gpu_layers2, n_batch2, n_gqa2, llamacpp_dict_more2,
- system_prompt],
- outputs=[model_state2, model_used2, lora_used2, server_used2,
- # if prompt_type2 changes, prompt_dict2 will change via change rule
- prompt_type2, max_new_tokens2, min_new_tokens2
- ])
- prompt_update_args2 = dict(fn=dropdown_prompt_type_list, inputs=prompt_type2, outputs=prompt_type2)
- chatbot_update_args2 = dict(fn=chatbot_list, inputs=[text_output2, model_used2], outputs=text_output2)
- load_model_event2 = load_model_button2.click(**load_model_args2,
- api_name='load_model2' if allow_api and not is_public else None) \
- .then(**prompt_update_args2) \
- .then(**chatbot_update_args2) \
- .then(clear_torch_cache)
-
- def dropdown_model_lora_server_list(model_list0, model_x,
- lora_list0, lora_x,
- server_list0, server_x,
- model_used1, lora_used1, server_used1,
- model_used2, lora_used2, server_used2,
- ):
- model_new_state = [model_list0[0] + [model_x]]
- model_new_options = [*model_new_state[0]]
- if no_model_str in model_new_options:
- model_new_options.remove(no_model_str)
- model_new_options = [no_model_str] + sorted(model_new_options)
- x1 = model_x if model_used1 == no_model_str else model_used1
- x2 = model_x if model_used2 == no_model_str else model_used2
- ret1 = [gr.Dropdown.update(value=x1, choices=model_new_options),
- gr.Dropdown.update(value=x2, choices=model_new_options),
- '', model_new_state]
-
- lora_new_state = [lora_list0[0] + [lora_x]]
- lora_new_options = [*lora_new_state[0]]
- if no_lora_str in lora_new_options:
- lora_new_options.remove(no_lora_str)
- lora_new_options = [no_lora_str] + sorted(lora_new_options)
- # don't switch drop-down to added lora if already have model loaded
- x1 = lora_x if model_used1 == no_model_str else lora_used1
- x2 = lora_x if model_used2 == no_model_str else lora_used2
- ret2 = [gr.Dropdown.update(value=x1, choices=lora_new_options),
- gr.Dropdown.update(value=x2, choices=lora_new_options),
- '', lora_new_state]
-
- server_new_state = [server_list0[0] + [server_x]]
- server_new_options = [*server_new_state[0]]
- if no_server_str in server_new_options:
- server_new_options.remove(no_server_str)
- server_new_options = [no_server_str] + sorted(server_new_options)
- # don't switch drop-down to added server if already have model loaded
- x1 = server_x if model_used1 == no_model_str else server_used1
- x2 = server_x if model_used2 == no_model_str else server_used2
- ret3 = [gr.Dropdown.update(value=x1, choices=server_new_options),
- gr.Dropdown.update(value=x2, choices=server_new_options),
- '', server_new_state]
-
- return tuple(ret1 + ret2 + ret3)
-
- add_model_lora_server_event = \
- add_model_lora_server_button.click(fn=dropdown_model_lora_server_list,
- inputs=[model_options_state, new_model] +
- [lora_options_state, new_lora] +
- [server_options_state, new_server] +
- [model_used, lora_used, server_used] +
- [model_used2, lora_used2, server_used2],
- outputs=[model_choice, model_choice2, new_model, model_options_state] +
- [lora_choice, lora_choice2, new_lora, lora_options_state] +
- [server_choice, server_choice2, new_server,
- server_options_state],
- queue=False)
-
- go_event = go_btn.click(lambda: gr.update(visible=False), None, go_btn, api_name="go" if allow_api else None,
- queue=False) \
- .then(lambda: gr.update(visible=True), None, normal_block, queue=False) \
- .then(**load_model_args, queue=False).then(**prompt_update_args, queue=False)
-
- def compare_textbox_fun(x):
- return gr.Textbox.update(visible=x)
-
- def compare_column_fun(x):
- return gr.Column.update(visible=x)
-
- def compare_prompt_fun(x):
- return gr.Dropdown.update(visible=x)
-
- def slider_fun(x):
- return gr.Slider.update(visible=x)
-
- compare_checkbox.select(compare_textbox_fun, compare_checkbox, text_output2,
- api_name="compare_checkbox" if allow_api else None) \
- .then(compare_column_fun, compare_checkbox, col_model2) \
- .then(compare_prompt_fun, compare_checkbox, prompt_type2) \
- .then(compare_textbox_fun, compare_checkbox, score_text2) \
- .then(slider_fun, compare_checkbox, max_new_tokens2) \
- .then(slider_fun, compare_checkbox, min_new_tokens2)
- # FIXME: add score_res2 in condition, but do better
-
- # callback for logging flagged input/output
- callback.setup(inputs_list + [text_output, text_output2] + text_outputs, "flagged_data_points")
- flag_btn.click(lambda *args: callback.flag(args), inputs_list + [text_output, text_output2] + text_outputs,
- None,
- preprocess=False,
- api_name='flag' if allow_api else None, queue=False)
- flag_btn_nochat.click(lambda *args: callback.flag(args), inputs_list + [text_output_nochat], None,
- preprocess=False,
- api_name='flag_nochat' if allow_api else None, queue=False)
-
- def get_system_info():
- if is_public:
- time.sleep(10)  # delay to avoid spam since queue=False
- return gr.Textbox.update(value=system_info_print())
-
- system_event = system_btn.click(get_system_info, outputs=system_text,
- api_name='system_info' if allow_api else None, queue=False)
-
- def get_system_info_dict(system_input1, **kwargs1):
- if system_input1 != os.getenv("ADMIN_PASS", ""):
- return json.dumps({})
- exclude_list = ['admin_pass', 'examples']
- sys_dict = {k: v for k, v in kwargs1.items() if
- isinstance(v, (str, int, bool, float)) and k not in exclude_list}
- try:
- sys_dict.update(system_info())
- except Exception as e:
- # protection
- print("Exception: %s" % str(e), flush=True)
- return json.dumps(sys_dict)
-
- system_kwargs = all_kwargs.copy()
- system_kwargs.update(dict(command=str(' '.join(sys.argv))))
- get_system_info_dict_func = functools.partial(get_system_info_dict, **all_kwargs)
-
- system_dict_event = system_btn2.click(get_system_info_dict_func,
- inputs=system_input,
- outputs=system_text2,
- api_name='system_info_dict' if allow_api else None,
- queue=False,  # queue to avoid spam
- )
-
- def get_hash():
- return kwargs['git_hash']
-
- system_event = system_btn3.click(get_hash,
- outputs=system_text3,
- api_name='system_hash' if allow_api else None,
- queue=False,
- )
-
- def get_model_names():
- key_list = ['base_model', 'prompt_type', 'prompt_dict'] + list(kwargs['other_model_state_defaults'].keys())
- # don't want to expose backend inference server IP etc.
- # key_list += ['inference_server']
- return [{k: x[k] for k in key_list if k in x} for x in model_states]
-
- models_list_event = system_btn4.click(get_model_names,
- outputs=system_text4,
- api_name='model_names' if allow_api else None,
- queue=False,
- )
-
- def count_chat_tokens(model_state1, chat1, prompt_type1, prompt_dict1,
- system_prompt1, chat_conversation1,
- memory_restriction_level1=0,
- keep_sources_in_context1=False,
- ):
- if model_state1 and not isinstance(model_state1['tokenizer'], str):
- tokenizer = model_state1['tokenizer']
- elif model_state0 and not isinstance(model_state0['tokenizer'], str):
- tokenizer = model_state0['tokenizer']
- else:
- tokenizer = None
- if tokenizer is not None:
- langchain_mode1 = 'LLM'
- add_chat_history_to_context1 = True
- # fake user message to mimic bot()
- chat1 = copy.deepcopy(chat1)
- chat1 = chat1 + [['user_message1', None]]
- model_max_length1 = tokenizer.model_max_length
- context1 = history_to_context(chat1,
- langchain_mode=langchain_mode1,
- add_chat_history_to_context=add_chat_history_to_context1,
- prompt_type=prompt_type1,
- prompt_dict=prompt_dict1,
- chat=True,
- model_max_length=model_max_length1,
- memory_restriction_level=memory_restriction_level1,
- keep_sources_in_context=keep_sources_in_context1,
- system_prompt=system_prompt1,
- chat_conversation=chat_conversation1)
- tokens = tokenizer(context1, return_tensors="pt")['input_ids']
- if len(tokens.shape) == 1:
- return str(tokens.shape[0])
- elif len(tokens.shape) == 2:
- return str(tokens.shape[1])
- else:
- return "N/A"
- else:
- return "N/A"
-
- count_chat_tokens_func = functools.partial(count_chat_tokens,
- memory_restriction_level1=memory_restriction_level,
- keep_sources_in_context1=kwargs['keep_sources_in_context'])
- count_tokens_event = count_chat_tokens_btn.click(fn=count_chat_tokens_func,
- inputs=[model_state, text_output, prompt_type, prompt_dict,
- system_prompt, chat_conversation],
- outputs=chat_token_count,
- api_name='count_tokens' if allow_api else None)
-
- # don't pass text_output, don't want to clear output, just stop it
- # cancel only stops outer generation, not inner generation or non-generation
- stop_btn.click(lambda: None, None, None,
- cancels=submits1 + submits2 + submits3 + submits4 +
- [submit_event_nochat, submit_event_nochat2] +
- [eventdb1, eventdb2, eventdb3] +
- [eventdb7a, eventdb7, eventdb8a, eventdb8, eventdb9a, eventdb9, eventdb12a, eventdb12] +
- db_events +
- [eventdbloadla, eventdbloadlb] +
- [clear_event] +
- [submit_event_nochat_api, submit_event_nochat] +
- [load_model_event, load_model_event2] +
- [count_tokens_event]
- ,
- queue=False, api_name='stop' if allow_api else None).then(clear_torch_cache, queue=False)
-
- if kwargs['auth'] is not None:
- auth = authf
- load_func = user_state_setup
- load_inputs = [my_db_state, requests_state, login_btn, login_btn]
- load_outputs = [my_db_state, requests_state, login_btn]
- else:
- auth = None
- load_func, load_inputs, load_outputs = None, None, None
-
- app_js = wrap_js_to_lambda(
- len(load_inputs) if load_inputs else 0,
- get_dark_js() if kwargs['dark'] else None,
- get_heap_js(heap_app_id) if is_heap_analytics_enabled else None)
-
- load_event = demo.load(fn=load_func, inputs=load_inputs, outputs=load_outputs, _js=app_js)
-
- if load_func:
- load_event2 = load_event.then(load_login_func,
- inputs=login_inputs,
- outputs=login_outputs)
- if not
kwargs['large_file_count_mode']: - load_event3 = load_event2.then(**get_sources_kwargs) - load_event4 = load_event3.then(fn=update_dropdown, inputs=docs_state, outputs=document_choice) - load_event5 = load_event4.then(**show_sources_kwargs) - load_event6 = load_event5.then(**get_viewable_sources_args) - load_event7 = load_event6.then(**viewable_kwargs) - - demo.queue(concurrency_count=kwargs['concurrency_count'], api_open=kwargs['api_open']) - favicon_file = "h2o-logo.svg" - favicon_path = favicon_file - if not os.path.isfile(favicon_file): - print("favicon_path1=%s not found" % favicon_file, flush=True) - alt_path = os.path.dirname(os.path.abspath(__file__)) - favicon_path = os.path.join(alt_path, favicon_file) - if not os.path.isfile(favicon_path): - print("favicon_path2: %s not found in %s" % (favicon_file, alt_path), flush=True) - alt_path = os.path.dirname(alt_path) - favicon_path = os.path.join(alt_path, favicon_file) - if not os.path.isfile(favicon_path): - print("favicon_path3: %s not found in %s" % (favicon_file, alt_path), flush=True) - favicon_path = None - - if kwargs['prepare_offline_level'] > 0: - from src.prepare_offline import go_prepare_offline - go_prepare_offline(**locals()) - return - - scheduler = BackgroundScheduler() - scheduler.add_job(func=clear_torch_cache, trigger="interval", seconds=20) - if is_public and \ - kwargs['base_model'] not in non_hf_types: - # FIXME: disable for gptj, langchain or gpt4all modify print itself - # FIXME: and any multi-threaded/async print will enter model output! 
- scheduler.add_job(func=ping, trigger="interval", seconds=60) - if is_public or os.getenv('PING_GPU'): - scheduler.add_job(func=ping_gpu, trigger="interval", seconds=60 * 10) - scheduler.start() - - # import control - if kwargs['langchain_mode'] == 'Disabled' and \ - os.environ.get("TEST_LANGCHAIN_IMPORT") and \ - kwargs['base_model'] not in non_hf_types: - assert 'gpt_langchain' not in sys.modules, "Dev bug, import of langchain when should not have" - assert 'langchain' not in sys.modules, "Dev bug, import of langchain when should not have" - - # set port in case GRADIO_SERVER_PORT was already set in prior main() call, - # gradio does not listen if change after import - # Keep None if not set so can find an open port above used ports - server_port = os.getenv('GRADIO_SERVER_PORT') - if server_port is not None: - server_port = int(server_port) - - demo.launch(share=kwargs['share'], - server_name=kwargs['server_name'], - show_error=True, - server_port=server_port, - favicon_path=favicon_path, - prevent_thread_lock=True, - auth=auth, - auth_message=auth_message, - root_path=kwargs['root_path']) - if kwargs['verbose'] or not (kwargs['base_model'] in ['gptj', 'gpt4all_llama']): - print("Started Gradio Server and/or GUI: server_name: %s port: %s" % (kwargs['server_name'], server_port), - flush=True) - if kwargs['block_gradio_exit']: - demo.block_thread() - - -def show_doc(db1s, selection_docs_state1, requests_state1, - langchain_mode1, - single_document_choice1, - view_raw_text_checkbox1, - text_context_list1, - dbs1=None, - load_db_if_exists1=None, - db_type1=None, - use_openai_embedding1=None, - hf_embedding_model1=None, - migrate_embedding_model_or_db1=None, - auto_migrate_db1=None, - verbose1=False, - get_userid_auth1=None, - max_raw_chunks=1000000, - api=False, - n_jobs=-1): - file = single_document_choice1 - document_choice1 = [single_document_choice1] - content = None - db_documents = [] - db_metadatas = [] - if db_type1 in ['chroma', 'chroma_old']: - assert 
langchain_mode1 is not None - langchain_mode_paths = selection_docs_state1['langchain_mode_paths'] - langchain_mode_types = selection_docs_state1['langchain_mode_types'] - from src.gpt_langchain import set_userid, get_any_db, get_docs_and_meta - set_userid(db1s, requests_state1, get_userid_auth1) - top_k_docs = -1 - db = get_any_db(db1s, langchain_mode1, langchain_mode_paths, langchain_mode_types, - dbs=dbs1, - load_db_if_exists=load_db_if_exists1, - db_type=db_type1, - use_openai_embedding=use_openai_embedding1, - hf_embedding_model=hf_embedding_model1, - migrate_embedding_model=migrate_embedding_model_or_db1, - auto_migrate_db=auto_migrate_db1, - for_sources_list=True, - verbose=verbose1, - n_jobs=n_jobs, - ) - query_action = False # long chunks like would be used for summarize - # the below is as or filter, so will show doc or by chunk, unrestricted - from langchain.vectorstores import Chroma - if isinstance(db, Chroma): - # chroma >= 0.4 - if view_raw_text_checkbox1: - one_filter = \ - [{"source": {"$eq": x}, "chunk_id": {"$gte": 0}} if query_action else {"source": {"$eq": x}, - "chunk_id": { - "$gte": -1}} - for x in document_choice1][0] - else: - one_filter = \ - [{"source": {"$eq": x}, "chunk_id": {"$gte": 0}} if query_action else {"source": {"$eq": x}, - "chunk_id": { - "$eq": -1}} - for x in document_choice1][0] - filter_kwargs = dict(filter={"$and": [dict(source=one_filter['source']), - dict(chunk_id=one_filter['chunk_id'])]}) - else: - # migration for chroma < 0.4 - one_filter = \ - [{"source": {"$eq": x}, "chunk_id": {"$gte": 0}} if query_action else {"source": {"$eq": x}, - "chunk_id": { - "$eq": -1}} - for x in document_choice1][0] - if view_raw_text_checkbox1: - # like or, full raw all chunk types - filter_kwargs = dict(filter=one_filter) - else: - filter_kwargs = dict(filter={"$and": [dict(source=one_filter['source']), - dict(chunk_id=one_filter['chunk_id'])]}) - db_documents, db_metadatas = get_docs_and_meta(db, top_k_docs, 
filter_kwargs=filter_kwargs, - text_context_list=text_context_list1) - # order documents - from langchain.docstore.document import Document - docs_with_score = [(Document(page_content=result[0], metadata=result[1] or {}), 0) - for result in zip(db_documents, db_metadatas)] - doc_chunk_ids = [x.get('chunk_id', -1) for x in db_metadatas] - doc_page_ids = [x.get('page', 0) for x in db_metadatas] - doc_hashes = [x.get('doc_hash', 'None') for x in db_metadatas] - docs_with_score = [x for hx, px, cx, x in - sorted(zip(doc_hashes, doc_page_ids, doc_chunk_ids, docs_with_score), - key=lambda x: (x[0], x[1], x[2])) - # if cx == -1 - ] - db_metadatas = [x[0].metadata for x in docs_with_score][:max_raw_chunks] - db_documents = [x[0].page_content for x in docs_with_score][:max_raw_chunks] - # done reordering - if view_raw_text_checkbox1: - content = [dict_to_html(x) + '\n' + text_to_html(y) for x, y in zip(db_metadatas, db_documents)] - else: - content = [text_to_html(y) for x, y in zip(db_metadatas, db_documents)] - content = '\n'.join(content) - content = f""" - - -
- Grounded Text-to-Image Synthesis with Attention Refocusing
-
-
- [Project Page]
-
- [GitHub]
-
-
- To identify the areas of interest based on specific spatial parameters, you need to (1) ⌨️ input the names of the concepts you're interested in into the Grounding Instruction, and (2) 🖱️ draw their corresponding bounding boxes using the Sketch Pad -- the parsed boxes will automatically show up once you've drawn them.
-
- For faster inference without waiting in the queue, you may duplicate the Space and upgrade to a GPU in Settings.
-
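The document-filter construction in the deleted `show_doc` code above is dense; the helper below is a standalone sketch (the function name `build_doc_filter` is my own, not from the original code) of how the Chroma `$and` filter is assembled for a single source document, where `chunk_id == -1` denotes the whole-document entry and `chunk_id >= 0` the individual chunks.

```python
def build_doc_filter(source, view_raw_text=False, query_action=False):
    """Sketch of the chroma >= 0.4 filter logic from the deleted show_doc code."""
    if query_action:
        chunk_cond = {"$gte": 0}   # query: only real chunks
    elif view_raw_text:
        chunk_cond = {"$gte": -1}  # raw view: chunks plus the whole-doc entry
    else:
        chunk_cond = {"$eq": -1}   # default: whole-doc entry only
    # chroma >= 0.4 wants a single top-level operator, hence the $and wrapper
    return {"$and": [{"source": {"$eq": source}},
                     {"chunk_id": chunk_cond}]}

f = build_doc_filter("docs/report.pdf", view_raw_text=True)
```

The `$and` wrapper matters because newer Chroma rejects a flat dict with two keys; older (pre-0.4) Chroma accepted the flat form, which is why the original code branches on the store version.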
When I open a xex in IDA, a window pops up asking for the file format, processor type, and a bunch of other stuff. I changed the processor to PPC and selected xbox360xexfile for the format. Do I have to change any of the other settings, or can I keep the defaults?
-As well, for benefits: XBDM allowed you to realtime on any xex (yeah, any game, not even limited to just Reach, which also includes default unmodded xex's); you can realtime on them and take pictures of your console at any time.
working:
-TUs check and download
-covers download
not working:
-push from unity to xbox
my little improvement:
-you can see all available TUs for the game (meaning you can update games like GTA V). But if a TU's MediaID differs from the MediaID of the installed game, the TU's displayed name will be something like "MID:0C48794E GTA V Title Update 26", where "MID:0C48794E" is the TU's MediaID.
how to install:
1. download my default.xex
2. replace the original default.xex in fsd folder with mine
3. restart fsd
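The install steps above amount to swapping one file. A cautious version keeps a backup of the shipped `default.xex` before overwriting it; this sketch demonstrates the idea in a scratch directory (the paths and file contents here are stand-ins, not the real console layout):

```python
import pathlib
import shutil
import tempfile

# Demo in a scratch directory; on a real console this would be the FSD
# install folder on the drive (hypothetical stand-in paths throughout).
fsd = pathlib.Path(tempfile.mkdtemp())
(fsd / "default.xex").write_text("original")      # stand-in for the shipped xex
(fsd / "new_default.xex").write_text("modified")  # stand-in for the download

# back up the original, then drop the replacement in its place (steps 1-3)
shutil.copy2(fsd / "default.xex", fsd / "default.xex.bak")
shutil.move(str(fsd / "new_default.xex"), str(fsd / "default.xex"))
```

Keeping the `.bak` copy means step 2 is reversible if the replacement misbehaves after the restart in step 3.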
*UPDATE 17.10.2016*
thanks to dizazter, everything is up and running again on the new hosting. You need to download the updated version of default.xex.
there are two versions now:
default.rar - shows TU for all MediaIDs
default_filter.rar - shows TU only for your MediaID
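The difference between the two builds above (show every TU vs. only matching MediaIDs) can be sketched as a simple filter; the TU records and the `filter_tus` helper here are invented for illustration, not taken from the actual default.xex code:

```python
def filter_tus(tus, installed_media_id, show_all=False):
    """Sketch: default.rar behaves like show_all=True, default_filter.rar like False."""
    if show_all:
        # label non-matching TUs with their MediaID, e.g. "MID:0C48794E ..."
        return [tu if tu["media_id"] == installed_media_id
                else {**tu, "name": "MID:%s %s" % (tu["media_id"], tu["name"])}
                for tu in tus]
    # only TUs whose MediaID matches the installed game
    return [tu for tu in tus if tu["media_id"] == installed_media_id]

tus = [{"media_id": "0C48794E", "name": "GTA V Title Update 26"},
       {"media_id": "11FA3F2B", "name": "GTA V Title Update 26"}]
only_mine = filter_tus(tus, "11FA3F2B")
labeled = filter_tus(tus, "11FA3F2B", show_all=True)
```

The labeling mirrors the behavior described earlier in the thread, where non-matching TUs show up prefixed with their MediaID so you can still apply them to a different rip of the same game.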
From the beginning I was using FSD 775, but I found:
1. It is not downloading custom game covers
2. Covers are not downloaded in HD
3. Only the front side of the game cover is visible
4. Most importantly, FSD freezes (hangs/stops responding)
It forced me to switch to Aurora; at least from my experience I can say Aurora never freezes.
But I love FSD; if your version fixes the above issues, then surely I am going to use FSD again.
Please tell me.
Thanks a lot for this, Gualdimar. Been using it for a while now. I like that it doesn't filter all the non-matching MediaID TUs from view. It's especially handy for games that use disc 2 to boot (e.g. Watch Dogs), where you have to grab one of the 'MID'-labeled TUs. Weird thing is, about three weeks or so ago, cover downloading just seemed to stop working for about a week, whereas the other modded version carried on working. Strange. I tried yours again today and it was working again, so I've gone back to your version. Cheers.
-Tried to create an account on JQE360, but I can't open the website. After some searching, I learned that FSD is not supported anymore and people have migrated to Aurora, so I downloaded Aurora. For game covers I use the WebUI in FSD and Aurora Title Editor manually, game by game, and it's nice. After that I decided to buy an external hard disk with Xbox games on it, about 1 TB with around 279 games, and realized it would be a load of work to update them one by one.
-My internet is on my mobile phone; I don't have a router. So I use a laptop with Windows 8.1 connected to the mobile phone hotspot over the laptop's Wi-Fi, with Internet Connection Sharing over Ethernet to the Xbox. Both the laptop and Xbox Ethernet adapters have manually set IP addresses, since I don't have DHCP on my laptop. First I tried it with Aurora and it worked, and I found a website saying FSD uses a Unity account, so I gave it a try last Friday. It worked: FreeStyle started to download covers, backgrounds, and descriptions along with screenshots. But after some time FSD started to crash, and several restarts didn't help; it kept crashing. I opened the file manager and found that OnBoardMU was out of space, since my FreeStyle install was located on OnBoardMU. I copied it to my laptop using FTP and the size came to about 1.5 GB. GameData was too big, so I decided to move it to internal HDD1. After that FSD3 stopped crashing, but it no longer resumed the downloads; refreshing artworks one by one still worked, but that was too much manual selection. Deleting the scan paths and re-adding them did the trick, and it started downloading automatically. Finally done with all the game cover updates, a total of 293 games...
-FreeStyle version: it came with this version when I bought the Xbox, so I don't know if the default.xex was already updated with the above link. My DashLaunch settings: liveblock enabled, livestrong disabled.
- -So my brother gave me his Xbox 360 Star Wars Edition. A couple of weeks ago he had someone modify it so it can play games from the hard drive; it had XeXMenu 1.2 and FreeStyle Dashboard. He also had a lot of games on it, and everything was working fine except for one game (GTA V), which gave a bad CD error or something. I ignored it until I decided to try and solve it, so I went to the game folder in XeXMenu, hit Y, and chose xex patch just to see what would happen. Then the error changed to "game error: the game couldn't start, try downloading the game again", and now every other game gives the same error, even the ones that were playable before.
-Hey friend, just remove XeX Menu and re-copy it from your CD or external HD, as you prefer. I also mistakenly did that once, and the only solution I found was to re-copy xex... just Google how to download XeX Menu, it's free. And always do as Swizzy says, he's the champ, trust me.
-All right, I'm home now. I deleted the existing XeX Menu 1.2 from Settings > Storage > HDD > Demo, then plugged in the flash drive and copied the freshly downloaded XeX Menu 1.2, but I still get the same error when trying to start a game: "game error: the game couldn't start. Try downloading the game again"
-OK, will try as soon as I get home
but the thing is, I don't have the ISO or sources for these games, as I'm not the one who put them on the console in the first place. So what I'm planning to do is download any Xbox 360 ISO off the internet and look for a tutorial on how to run it on the console from a flash drive. In fact, I'm currently downloading "Payday 2 [MULTI][XBOX360][Region Free][XDG2][COMPLEX]"; this should work, right??
should I remove the existing games?
When first published, this article received over eight hundred comments from students looking for direction and assistance with their high school art projects. Some of these comments have been published below. It is hoped that the answers provide valuable insight for others.
If you are looking for art themes to explore in GCSE or iGCSE lessons, the huge list below is a great starting point. Thank you to art teacher Annie Chapman for this amazing list. Some words link to art teaching resources on this website.
-Hi The Arty Teacher, I am teaching iGCSE Art and Design for the first time. Just wondering as to what you would consider as an ideal number of themes that can be introduced to a class over the course of two years. Is it several or is it a matter of concentrating on one theme only throughout the entire course? Much appreciated, thank you.
-Different teachers structure the course in different ways. At my school, we do one theme in Year 10 with two main outcomes. In year 11 they do another theme (we run this a little bit like a mock). Then they do the externally set task from January.
-Beginning today (Oct. 27) and continuing through Wednesday (Nov. 2), students, faculty, staff, alumni and members of the larger University community are invited to vote for an overall theme for PRT station murals created by students.
-Memento Mori, vanitas, mortality. Death is one of the most pervasive themes in art history. While many artworks celebrate afterlives in heaven or hell, death is most often referenced as a grim reminder of numbered days, and a powerful motivator to live well while you can. Every culture has rituals surrounding death, appearing in artwork as icons and colors. Hourglasses and wilted flowers for the Dutch, the Cuckoo bird in Japan, the Totenkopf in Germany.
-This mural, though, marks a first for Camden, and yet another bridge: Hopeworks has partnered with Mural Arts Philadelphia, marking the highly-regarded collective's first project across the Delaware River, and bringing together artists from Camden's arts community with artists in Philly.
-Bridge building was a recurring theme, not just in the design of the mural but in its very existence, said Manning. Hopeworks will soon open a new training center in Philadelphia's Kensington neighborhood.
-Asked if she envisioned future Mural Arts collaborations in Camden, Golden was confident there would be. She looks forward to Camden's artists working in Philly (some have already attended Mural Arts' most recent quarterly artists' meeting) and Philly arts doing projects in Camden.
- -Each year, over 300,000 students in Pre-K through Grade 12 create original works of art in response to a student-selected theme. This 50+ year-old program helps them explore their own thoughts, feelings and ideas, develop artistic literacy, increase confidence and find a love for learning that will help them become more successful in school and in life.
-The Public Humanities Projects program supports projects that bring the ideas of the humanities to life for general audiences through public programming. Projects must engage humanities scholarship to analyze significant themes in disciplines such as history, literature, ethics, and art history. Awards support projects that are intended to reach broad and diverse public audiences in non-classroom settings in the United States. Projects should engage with ideas that are accessible to the general public and employ appealing interpretive formats.
-Public Humanities Projects supports projects in three categories (Exhibitions, Historic Places, and Humanities Discussions), and at two funding levels (Planning and Implementation). Proposed projects may include complementary components: for example, a museum exhibition might be accompanied by a website or mobile app.
-Small and mid-sized organizations are especially encouraged to apply. We likewise welcome humanities projects tailored to particular groups, such as families, youth (including K-12 students in informal educational settings), underserved communities, and veterans.
-The 10 youth artists were led by lead artist Bijan Machen and mentored by USC students Daniel Kawah and Keviette Minor. The goal of the art project was to have the youth artists reflect and focus their art pieces on events occurring in their neighborhood and personal experiences, as well as interviewing people of various backgrounds around the USC community to gain a different perspective.
-Since 1995 Spiral Workshop has created over 70 theme curricula. Each group intertwines learning in a media such as painting, drawing, Photoshop, sculpture, alternative practices, with investigation of a theme that affects students and their communities.
-This event contains adult themes, distressing imagery, extended use of strobe lighting, smoke effects and swearing. The following items are strictly prohibited: knives, spraycans, illegal drugs, and lawyers from the Walt Disney corporation.
-Visual artists, writers, filmmakers, and playwrights concentrated many of their creative efforts on the patterns of everyday life, especially the world of work. A recurring theme was the strength and dignity of common men and women, even as they faced difficult circumstances.
-Many politically active artists worked for the New Deal projects. United by a desire to use art to promote social change, these artists sympathized with the labor movement and exhibited an affinity for left-wing politics ranging from New Deal liberalism to socialism to communism.
-Most New Deal artist-administrators believed deeply that the projects had a responsibility to reach out to as many Americans as possible and to put art to practical use. Such socially useful arts were not intended to create masterpieces, but they did produce many excellent works, allowed thousands of artists to pursue their vocation, and enriched and informed the lives of Americans.
-(Original theme graphic by Tanner Boeger, incorporating images from HRB, Phillipe Glade, and Christopher Robin Blum and art by Airpusher Collective, Marianela Fuentes, Arturo Gonzalez, and Sarahi Carillo)
-Stuart is the director of Burning Man Project's Philosophical Center and host of the Burning Man LIVE podcast. Since his first Burn in 1993 he has participated as a theme camp organizer, artist, and year-round staff member contributing to the Project's communications, education, and storytelling efforts.
-I really loved the art palette lollipops made with white chocolate. They were adorable, all the little girls thought they were the cutest thing and very special, and they perfectly spoke to the theme of the party.
-The theme for 2022 is inspired by the book, The Day You Begin, by National Book Award winner Jacqueline Woodson, and two-time Pura Belpré Illustrator Award winner Rafael López. The Day You Begin is a poignant, yet heartening book about finding the courage to connect with others, even when you feel scared and alone. Jacqueline Woodson's lyrical text and Rafael López's dazzling art remind us that we all feel like outsiders sometimes, and how brave it is that we go forth anyway. And that sometimes, when we reach out and begin to share our stories, others will be happy to meet us halfway.
-Think about all the different activities and experiences you can link to your theme, so that each area of the curriculum is reflected somehow. Be creative! Ask your children for ideas, and include unusual, hands-on activities that will delight your children.
-You can deliver your thematic unit in the way that best suits your children and circumstances. Some aspects of the unit may be best delivered to the whole group, some will work better as small group work. Will you devote your whole classroom to the unit, or set aside one project corner? Can you have your whole day given over to the unit, or do you need to allow time for other core areas of your teaching? The thematic unit is completely flexible.
-Our Art Camp Unit gives you five process-art projects you can use to run an at-home/in-class art camp. The Unit comes with printable invitations, stickers and certificates to hand out to all attendees.
-This project may help a child or teen reflect on ways to find a safe space or may simply help them feel like they have some control over their environment. It can be conducted one-to-one or in small groups.
-The activity involves imagining being lost at sea and visualizing the ideal lighthouse that would provide the right kind of guidance. This is a great activity for both children and adults, but an older group or individual might better appreciate the depth and symbolism of the project.
-Here we design, develop, and deliver the most compelling entertainment experiences around the world.
Our innovative attractions, immersive theme parks, world-class resorts, and new ventures fuse art with technology to change the landscape of themed entertainment.
The concept for the project was developed by Akshata Naik, a Toronto artist who has exhibited her work in Canada, Britain, and India. Akshata lives in Toronto where she is the Program and Gallery Manager at Arts Etobicoke. She also teaches at Art Ignite, Neilson Park Creative Centre, and Vibe Arts.
-The public is invited to view Frozen Voyage during Open Houses being held on Tuesday, August 27 and Wednesday, August 28 between 11:00 AM and 1:00 PM. At the Open House, join the project by folding your own boat that will be added to the artwork. The public may also see Frozen Voyage, along with the other artwork, in Council Chambers during Council Meetings.
The Tennessee Coal, Iron and Railroad Company (TCI) one of the original 12 companies listed in the Dow Jones Industrial Index, was one of the largest users of prison laborers, mostly comprised of African Americans convicted of petty crimes. The number of convicts employed increased after United States Steel, the largest corporation in the world at the time (formerly known as U.S. Steel and USX), acquired TCI in 1907. The working and living conditions for these prisoners were brutal, as companies leasing convicts sought to house, clothe and feed them for minimal expense, with little interest in their survival. Justice-involved individuals were housed in rough board shanties unfit for the habitation of human beings. Torture and beatings were common, and countless individuals perished from abuse; poor and dangerous working conditions; communicable diseases, such as tuberculosis, malaria, and pneumonia; and from environmental conditions like contaminated water.
-Convict Lake and Creek are so named as the result of an ambush encounter here on September 17, 1871, when a group of inmates escaped from prison in Carson City. Sheriff George Hightower eventually caught up with the convicts and a shootout took place. Robert Morrison, a Benton merchant, and Mono Jim, along with other posse members, encountered the convicts on the present Convict Creek, then known as Monte Diablo Creek. In the encounter, Robert Morrison and Mono Jim were killed. The convicts escaped and were eventually captured later in Round Valley.
"This beautifully written book leads its readers on the journey from Emancipation to the devastating convict-leasing system in Georgia. . . . [and] examines the exploitation of black women's bodies, the beginnings of mass incarceration, and the rise of the modern New South."--Erica Armstrong Dunbar, The Nation
-As fans may recall, in the ninth episode of Season 3, Michael learns that Martin Nash, a Black employee who recently transferred to the Scranton branch from Stamford, is a reformed convict. After Nash (played by actor and comedian Wayne Wilderson) reveals he did time for involvement in insider trading, he talks about his experience in prison, which sounds a little better than working at Dunder Mifflin. Heartbroken over the idea that his employees might prefer prison to working with him, Michael turns into Prison Mike to teach everyone that prison is bad.
-One of those lines takes place after the conference room scene in which Michael, Pam, Angela, and Kevin learn that the company receives a Work Opportunity Tax Credit for employing Nash, an ex-convict.
-A death row inmate awaiting execution asked, as a last wish, for a pencil and paper. After writing for several minutes, the convict called the prison guard and asked that the letter be handed over to his biological mother.
-The purported missive from death row included no information about the identity of its writer, his location, when he wrote it, or the crimes he was charged with. Moreover, it was accompanied by a completely unrelated photograph of "hot convict" Jeremy Meeks, who became internationally notorious when his exceptionally flattering mugshot went viral in 2013. Meeks was sentenced on weapons charges, but he was not involved with a capital case (and therefore was neither sentenced to death nor executed).
-There are three main issues that need to be taken into consideration in the context of pre-trial detention: firstly, pre-trial detention is overused in most countries worldwide and in many developing countries the size of the pre-trial prisoner population is larger than that of the convicted prisoner population. This situation contradicts the provisions in international standards, including ICCPR, that provide for the limited use of pre-trial detention, only when certain conditions are present. Secondly, pre-trial detention is the period most open to abuse in the criminal justice process. Recognizing the particular vulnerability of pre-trial detainees, international human rights instruments provide for a large number of very specific safeguards to ensure that the rights of detainees are not abused, that they are not ill-treated and their access to justice not hindered. Thirdly, although pre-trial detainees should be presumed innocent until found guilty by a court of law, and treated as such, conditions in pre-trial detention are often much worse than those of prisons for convicted prisoners. In addition, the lack of resources for prisons in many low-income countries means that people in detention do not have access to legal advice and assistance, with the result being that they may overstay on remand, and/or not receive a fair trial, further adding to the congestion of prisons. Therefore, improving access to justice, supporting legal and paralegal aid programmes, improving information management and cooperation between courts and prisons, to speed up the processing of cases, as well as assisting with the development of safeguards for pre-trial detainees, such as independent monitoring and inspection mechanisms, comprise important elements of UNODC's work in the field of penal reform.
-Built in 1840 (not 1790), the Success had many lives, first as a shipping vessel serving British India and then as a passenger ship ferrying immigrants (not convicts) to Australia. During one trip the Success arrived right at the peak of the Australian gold rush, and her crew deserted to strike it rich. Without mariners, the ship was left moored near Melbourne, Australia, where it became a prison hulk and later a stores ship.
-Children who were orphaned, removed from negligent parents, or who were juvenile offenders were especially vulnerable after emancipation. They could end up in the convict leasing system as "apprentices" and fall once more into white planters' hands. Unknown location, ca. 1903. Photo credit: Detroit Publishing Company Collection, Library of Congress.
-Often completely innocent of the crimes of which they were accused, these African Americans were forced to work from sunup to sundown, in chains, under the lash and gunpoint of the white guards. Under convict leasing, Black people going about their day could be rounded up, convicted of made-up crimes, separated from their families, processed through an all-white court, and treated with little to no regard to their human value. In his book Texas Tough, historian Robert Perkinson estimates that at least 30,000 died in the convict leasing system across the South over 55 years. One can find blatant and insidious parallels between convict leasing and mass incarceration and the prison-industrial complex. As Bryan Stevenson says, "slavery did not end in 1865. It just evolved." However, convict leasing rarely appears in history textbooks. The generational loss and trauma in Black families is left unexamined.
-Without ever learning about convict leasing, how can Americans make sense of the discovery of a mass gravesite in a prosperous suburban town? Will the public sweep these uncomfortable truths under the rug again?
-The land where the 95 African American remains were unearthed during construction is owned by the Fort Bend Independent School District, which purchased this former convict camp and state prison land in 2011 and has been accused of mishandling the remains. At the time of writing, Fort Bend ISD continues to own and operate this cemetery unilaterally against community wishes. Moreover, there is no historical marker or other information at the site that tells the history of what happened there. They have even renamed the site with one that is unrelated to the history of convict leasing.
-Americans must find the hidden chapters of their history and really begin to understand the legacy of racial oppression that has strengthened the walls of white supremacy. A version of history that omits these chapters has stolen a chance for the nation to learn from it, and to fix what has been broken by it. As Americans seek to dismantle Confederate monuments, they must also actively create new monuments and narratives that broaden their understanding of justice, democracy, and humanity. I believe that building a memorial dedicated to victims and survivors of convict leasing in Sugar Land, Texas is a step in the right direction.
-After breakfast at The Flourmill Cafe, drive through the countryside for just under an hour to the Toodyay Red Hill Convict Road Station Ruins, constructed in the 1850s. The camp housed the convict road gangs that built and maintained the road to Perth. Back then, there were five buildings made of rammed earth; now the ruins of only one remain.
Established in 1853, it housed 60 ticket-of-leave convicts and put them to work at the Geraldine Lead Mine and local pastoral stations. After exploring the depot and the pretty nearby town of Northampton, set off on the five-hour journey back to Perth, this time taking the scenic Indian Ocean Drive.
Do you love listening to music and discovering new songs? Do you wish your phone could recognize any song playing around you and show it on your lock screen? If you answered yes, then you might be interested in Ambient Music Mod, a free app that ports the Google Pixel's Now Playing feature to other Android devices. In this article, we will tell you what Ambient Music Mod is, how to install it, and what benefits it can bring to your musical experience.
Ambient Music Mod is a Shizuku/Sui app that ports Now Playing from Pixels to other Android devices. Now Playing is a feature that automatically identifies songs playing in the background using an offline database and displays them on the lock screen or in a history list. It was introduced by Google in 2017 with the Pixel 2 and has remained exclusive to the Pixel lineup ever since.
-Ambient Music Mod was created by Kieron Quinn, also known as Quinny899 on XDA Forums, who managed to port the feature to other Android smartphones using Shizuku or Sui Magisk module. Shizuku is a service that allows third-party apps access to system-level APIs through ADB, while Sui is a Magisk module that provides rootless superuser access. Ambient Music Mod does not require root access on devices running Android 12 or higher, but it does require root access on older Android versions.
-Ambient Music Mod offers a lot of features that make it a great app for music lovers. Here are some of them:
-Ambient Music Mod uses the latest version of Now Playing from Pixel devices and the latest music databases. It can recognize over 100,000 songs from various genres and languages, even if they are not very popular or mainstream. It can also recognize songs that are not in the local database using Google Assistant's recognition engine.
-Ambient Music Mod runs in the background and listens for music playing around you. It can recognize songs every 15 seconds, every minute, or every 5 minutes, depending on your preference. You can also adjust the sensitivity and gain settings to improve the recognition accuracy. You can choose whether to show notifications or not when a song is recognized.
Ambient Music Mod keeps track of all the songs that it recognizes and shows them in a history list. You can view the song title, artist name, album art, date and time of recognition, and source of recognition (local or online). You can also mark songs as favourites and view them in a separate list. You can also view a summary of your musical preferences based on the songs that you have listened to.
-If you want to manually trigger a recognition, you can use the app's widget or shortcut. You can also use the On Demand recognition feature, which uses Google Assistant's recognition engine for songs that are not in the local database. This feature requires an internet connection and works on supported devices only.
-Ambient Music Mod can show the recognized songs on your lock screen using an accessibility service. You can customize the appearance and position of the lock screen display according to your liking. You can also customize the database of songs that Ambient Music Mod uses by adding or removing songs from the app's settings.
-If you want to try Ambient Music Mod on your Android device, you will need to follow some steps to install it properly. Here are the requirements and instructions for installing Ambient Music Mod:
-To install Ambient Music Mod, you will need the following:
-Once you have met the requirements, you can follow these steps to install and set up Ambient Music Mod on your device:
-Ambient Music Mod is not only a cool app that lets you enjoy the Pixel's Now Playing feature on your Android device, but it also has some benefits that can enhance your musical experience. Here are some of them:
-Ambient music is a genre of music that focuses on creating a mood or atmosphere rather than a melody or rhythm. It often uses sounds from nature, synthesizers, drones, loops, and minimal vocals. It is usually played at low volumes and in the background, creating a subtle and relaxing sound environment.
-Ambient music can have positive effects on your brain and mood, such as:
-Ambient music can also help you discover new songs and artists that you might not have heard before. Ambient Music Mod can recognize songs from various genres and languages, including ambient music. You can view the history of recognized songs and explore more about them online. You can also mark songs as favourites and create your own playlist of ambient music.
-If you are new to ambient music or want to expand your musical horizons, here are some examples of ambient music and artists that you can check out:
-Ambient Music Mod is a free app that lets you enjoy the Pixel's Now Playing feature on any Android device. It can automatically recognize songs playing in the background using an offline database and display them on your lock screen or in a history list. It can also recognize songs that are not in the local database using Google Assistant's recognition engine. You can customize the app's settings to suit your preference and musical taste.
-Ambient Music Mod can also help you discover and appreciate ambient music, a genre of music that creates a mood or atmosphere rather than a melody or rhythm. Ambient music can have positive effects on your brain and mood, such as reducing stress and anxiety, improving focus and concentration, enhancing creativity and imagination, and promoting sleep and relaxation. You can also explore more about ambient music and artists online using Ambient Music Mod.
-If you are a music lover and want to try Ambient Music Mod on your Android device, you can download it from the official XDA thread and follow the installation and setup steps. You will need a Shizuku or Sui Magisk module installed on your device, as well as Google Play Services and Google Assistant. You will also need an internet connection for downloading the music database and using the On Demand recognition feature.
-Here are some frequently asked questions about Ambient Music Mod:
-Clash of Clans is one of the most popular strategy games on mobile devices, with millions of players around the world. But what if you want to enjoy the game without spending money or waiting for long hours? That's where Clash of Clans Hile Apk comes in. In this article, we will show you how to download and install the modded version of Clash of Clans from Android Oyun Club, a website that offers free and safe downloads of various Android games. We will also explain what Clash of Clans Hile Apk is, how it differs from the original version, and what are the advantages and disadvantages of using it. So, if you are ready to take your gaming experience to the next level, read on!
Clash of Clans is a freemium strategy game developed by Supercell, a Finnish company that also created other hit games like Hay Day, Boom Beach, and Brawl Stars. The game was released in 2012 for iOS devices and in 2013 for Android devices. Since then, it has become one of the most downloaded and highest-grossing games on both platforms.
-The game is set in a fantasy world where you have to build your own village, train your troops, and fight against other players or computer-generated enemies. You can join or create a clan with other players to cooperate in wars, donate and receive troops, chat, and compete in clan games. You can also participate in special events, seasons, challenges, and leagues to earn rewards and trophies.
-The game offers a variety of buildings, troops, spells, heroes, and items that you can upgrade and customize according to your preference and strategy. You can also explore different maps, modes, and scenarios that add more fun and challenge to the game.
-One of the main benefits of playing Clash of Clans is that it is very addictive and entertaining. You can spend hours building your village, planning your attacks, defending your base, and interacting with other players. You can also enjoy the stunning graphics, sound effects, animations, and music that make the game more immersive and realistic.
-Another benefit is that it is very social and community-oriented. You can make friends with other players from different countries and cultures, share tips and strategies, support each other in battles, and have fun together. You can also learn new skills like leadership, teamwork, communication, problem-solving, creativity, and decision-making.
However, playing Clash of Clans also has some drawbacks. One of them is that it can be very frustrating and time-consuming. You have to wait for long periods of time to upgrade your buildings, train your troops, and replenish your resources. You also have to deal with losing your loot, trophies, and progress when you are attacked by other players or fail to complete a mission.
-Another drawback is that it can be very expensive and tempting. The game uses several currencies: gold and elixir, which you can earn by playing the game, and gems, which you can buy with real money or get from certain achievements. Gems can be used to speed up waiting times, buy more resources, and unlock special items. However, gems are scarce and costly, and you may feel pressured to spend more money to get ahead in the game.
Clash of Clans Hile Apk is a modified version of Clash of Clans that allows you to enjoy the game without the limitations and restrictions of the original version. It is also known as Clash of Clans Mod Apk, Clash of Clans Hack Apk, or Clash of Clans Cheat Apk. It is not an official product of Supercell, but a third-party creation that is distributed by various websites and platforms.
-The main advantage of using Clash of Clans Hile Apk is that it gives you unlimited access to all the features and resources of the game. You can get unlimited gold, elixir, gems, dark elixir, and other items without spending any money or waiting for any time. You can also unlock all the buildings, troops, spells, heroes, and items without completing any requirements or levels. You can also customize your village, troops, and heroes according to your liking and preference.
-Another advantage is that it gives you more freedom and fun in playing the game. You can experiment with different strategies, tactics, and combinations without worrying about losing anything or being penalized. You can also explore different maps, modes, and scenarios that are not available in the original version. You can also play offline without needing an internet connection or a Google Play account.
-However, using Clash of Clans Hile Apk also has some disadvantages. One of them is that it can be very risky and dangerous for your device and account. Since it is not an official product of Supercell, it may contain viruses, malware, spyware, or other harmful elements that can damage your device or steal your personal information. It may also cause your account to be banned or suspended by Supercell for violating their terms of service and policies.
-Another disadvantage is that it can be very boring and unsatisfying in the long run. Since you have everything at your disposal, you may lose the sense of challenge, achievement, and progression that makes the game exciting and rewarding. You may also miss out on the social and community aspects of the game that make it more enjoyable and engaging. You may also face compatibility issues with updates, patches, or new features that are released by Supercell.
If you are interested in trying out Clash of Clans Hile Apk, you can download and install it from Android Oyun Club, a website that offers free and safe downloads of various Android games. However, you need to follow some steps and requirements to do so successfully and safely.
-Here are the steps and requirements for downloading and installing Clash of Clans Hile Apk from Android Oyun Club:
-Here are some tips and tricks for using Clash of Clans Hile Apk effectively:
-In conclusion, Clash of Clans Hile Apk is a modified version of Clash of Clans that allows you to enjoy the game without the limitations and restrictions of the original version. You can download and install it from Android Oyun Club, a website that offers free and safe downloads of various Android games. However, you need to follow some steps and requirements to do so successfully and safely. You also need to be aware of the advantages and disadvantages of using it, as well as some tips and tricks for using it effectively. We hope that this article has helped you learn more about Clash of Clans Hile Apk and how to download and install it from Android Oyun Club. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!
-A1: Clash of Clans Hile Apk is not legal or endorsed by Supercell, the developer of Clash of Clans. It is a third-party creation that violates their terms of service and policies. Therefore, using it may result in a ban or suspension by Supercell. Moreover, Clash of Clans Hile Apk may not be safe to use, as it may contain viruses, malware, spyware, or other harmful elements that can damage your device or steal your personal information. Therefore, using it is at your own risk and discretion.
-A2: No, you cannot play Clash of Clans Hile Apk with other players online. The modded version is not compatible with the original version, and it may cause errors, crashes, or bans if you try to connect to the online servers or join clans. The modded version is only for offline or solo play.
-A3: You cannot update Clash of Clans Hile Apk from the Google Play Store or any other source, as this may overwrite or delete the modded version. You can only update it from Android Oyun Club or wait for a new modded version to be released. To update it from Android Oyun Club, you need to follow the same steps and requirements as downloading and installing it. However, you may need to uninstall the previous version first before installing the new one.
-A4: If you are looking for some alternatives to Clash of Clans Hile Apk, you can try some other modded versions of Clash of Clans that are available on different websites and platforms. Some examples are Clash of Lights, Clash of Magic, Clash of Souls, and PlenixClash. However, you need to be careful and cautious when using these alternatives, as they may have the same or worse risks and drawbacks as Clash of Clans Hile Apk.
-A5: If you want to find more information and support for Clash of Clans Hile Apk, you can visit the official website of Android Oyun Club at https://androidoyun.club/ or their social media pages on Facebook, Twitter, Instagram, and YouTube. You can also contact them via email at info@androidoyun.club or via their contact form on their website. You can also check out some online forums, blogs, videos, or reviews that discuss or review Clash of Clans Hile Apk.
If you are a fan of anime and games, you might have heard of Gacha Life. It is a popular game that lets you create your own characters, stories, and scenes in an anime-style world. But did you know that there is a way to make the game even more fun and exciting? That's right, with Gacha Life Chat Mod APK, you can unlock all the features of the game and chat with other players online. In this article, we will tell you everything you need to know about this modded version of the game, including how to download and install it, what the benefits and drawbacks of using it are, and some tips and tricks to enjoy it.
-Gacha Life is a game developed by Lunime, a company that specializes in creating anime-themed games. The game was released in October 2018 for Android and iOS devices, and has since gained millions of fans around the world. The game is rated 4.4 out of 5 stars on Google Play Store and 4.6 out of 5 stars on App Store.
The main feature of Gacha Life is that it allows you to create your own anime characters using a variety of options, such as hairstyles, outfits, accessories, weapons, and more. You can also customize their personality traits, such as their likes, dislikes, hobbies, and relationships. You can then use your characters to create stories and scenes using different backgrounds, props, poses, and dialogue. You can also share your creations with other players online or download them to your device.
-Gacha Life Chat Mod APK is a modified version of the original game that gives you access to all the features that are otherwise locked or limited in the official version. For example, with this mod apk, you can get unlimited gems and coins, which are the in-game currencies that you need to buy items and upgrade your characters. You can also unlock all the items in the shop, such as clothes, accessories, pets, and more. You can also access all the modes in the game, such as Studio Mode, Life Mode, Gacha Mode, and Mini-Games.
-Another feature that makes Gacha Life Chat Mod APK different from the original game is that it allows you to chat with other players online. You can join or create chat rooms where you can talk to other players who share your interests and hobbies. You can also send messages, stickers, emojis, and gifts to your friends. You can also use voice chat or video chat to communicate with them. You can also join or create clubs where you can meet other players who have similar tastes in anime and games.
If you want to download and install Gacha Life Chat Mod APK, you need to follow these steps:
-Before you install the mod apk, you should take some precautions to avoid any problems or issues. Here are some tips to follow:
-One of the main benefits of using Gacha Life Chat Mod APK is that you can get unlimited gems and coins, which are the in-game currencies that you need to buy items and upgrade your characters. You can also get unlimited stamina, which is the energy that you need to play the game. With unlimited resources, you can buy anything you want in the shop, such as clothes, accessories, pets, and more. You can also use gems and coins to gacha for rare and exclusive items that are not available in the normal version of the game.
Another benefit of using Gacha Life Chat Mod APK is that you can customize your avatar and chat room to your liking. You can change your avatar's appearance, such as their hair, eyes, skin, clothes, and accessories. You can also change their personality traits, such as their likes, dislikes, hobbies, and relationships. You can also customize your chat room by choosing different themes, backgrounds, stickers, emojis, and gifts. You can also invite your friends to join your chat room or join other chat rooms that interest you.
-A third benefit of using Gacha Life Chat Mod APK is that you can explore different modes and mini-games that are not available in the original game. For example, you can play Studio Mode, where you can create your own stories and scenes using your characters and backgrounds. You can also play Life Mode, where you can interact with NPCs and other players in different locations. You can also play Gacha Mode, where you can gacha for items and characters using gems and coins. You can also play Mini-Games, where you can earn gems and coins by playing fun and challenging games.
-One of the main drawbacks of using Gacha Life Chat Mod APK is that it may pose some risks to your device and account. Since the mod apk is not an official version of the game, it may contain malware or viruses that can harm your device or steal your personal information. You should always download the mod apk from a reliable source and scan it with an antivirus app before installing it. You should also avoid clicking on suspicious links or ads that may redirect you to malicious websites or apps.
-Another risk of using Gacha Life Chat Mod APK is that it may result in a ban from the game or chat service. Since the mod apk violates the terms of service of the game and chat service, it may be detected by their security systems and result in a ban from accessing their features or servers. You should always use the mod apk at your own risk and discretion. You should also avoid using the mod apk for illegal or unethical purposes, such as cheating, hacking, or harassing other players.
-A second drawback of using Gacha Life Chat Mod APK is that it may cause some compatibility issues or glitches with your device or game. Since the mod apk is not an official version of the game, it may not be compatible with all devices or operating systems. It may also not be updated regularly or in sync with the original game. This may cause some errors or bugs in the game or chat service, such as crashes, freezes, lags, or missing features. You should always check the compatibility and requirements of the mod apk before downloading and installing it. You should also backup your data and progress in case something goes wrong.
-Gacha Life Chat Mod APK is a modified version of the original game that unlocks all features and allows you to chat with other players online. It has many benefits, such as unlimited resources, customization options, and different modes and mini-games. However, it also has some drawbacks, such as potential risks of malware, viruses, and bans, as well as possible compatibility issues and glitches. You should always download and install the mod apk from a reliable source and take some precautions before using it. You should also use it responsibly and respectfully.
-We hope this article has helped you learn more about Gacha Life Chat Mod APK and how to use it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
Gacha Life Chat Mod APK is safe to use if you download it from a reliable source and scan it with an antivirus app before installing it. However, you should always be careful and cautious when using modded games and apps, as they may pose some risks to your device and account. You should also use the mod apk at your own risk and discretion, and avoid using it for illegal or unethical purposes.
-Gacha Life Chat Mod APK may not be updated regularly or in sync with the original game. Therefore, you may need to check the source website or app for any updates or new versions of the mod apk. You can also use APKMODY Installer to check for updates and install them easily. However, you should always backup your data and progress before updating the mod apk, as it may overwrite or delete your data and progress.
-Yes, you can use Gacha Life Chat Mod APK on PC by using an emulator like MemuPlay. An emulator is software that allows you to run Android apps and games on your PC. You can download and install MemuPlay from its official website and then install Gacha Life Chat Mod APK inside the emulator. This is a safer option, as it reduces the risk of malware and viruses.
-No, you should not use Gacha Life Chat Mod APK with other mods or hacks, as they may cause conflicts or errors in the game or chat service. You should only use one mod or hack at a time, and uninstall any other mods or hacks before installing Gacha Life Chat Mod APK. You should also avoid using any cheats or tools that may alter the game data or chat data, as they may result in a ban from the game or chat service.
-No, you cannot play Gacha Life Chat Mod APK offline, as it requires an internet connection to access the chat service and other online features. You can only play the game offline if you use the original version of the game, which does not have the chat feature. However, you will not be able to access all the features and items that are available in the mod apk.
401be4b1e0

3DMark keygen free download is an efficient tool for computer benchmarking. It helps you determine the performance of your computer's graphics card and its CPU workload capabilities, which makes the application very useful for system builders, gamers, and overclockers. In addition, it provides complete details of your hardware and can perform a wide range of benchmark tests. This latest version comes with everything you need to test your PC, notebook, smartphone, and tablet.
-Furthermore, a command-line tool is provided for more advanced purposes, and scripts can be used to set up an automation system that performs various tests. Test results can be exported in XML format from 3DMark cracked. 3DMark crack leverages hardware compatibility with the GPU to conduct a series of tests on texture drawing speed and quality. 3DMark keygen features multiple processor criteria, a self-contained rating scale, and exportable results. The graphical interface enables batch testing and parameter checking.
-Download ——— https://gohhs.com/2uFU96
899543212b

You are at liberty to put together the model of your dreams when using SAP2000. It enables you to block the beams with a variety of choices, and it simulates several effects that are available for analysis: you can choose effects such as thermal, seismic, wind pressure, and corrosion for all your designs. It simulates all types of structural and joint designs with reduced standards and completely remote algorithms, and it includes a huge database of standard designs with customized modeling options. Also Available:
-8. NO PUBLICATION. You shall not publish, transmit, display, perform, reproduce, create derivative works from, modify, or in any way exploit any Content or Materials that have been made available to or downloaded by you, or any portion thereof, in whole or in part, except as necessary or appropriate for your personal, non-commercial use. You shall remove or disable the copyright notice from any Materials that you use in violation of these Terms of Use.
-DOWNLOAD » https://gohhs.com/2uFSZM
11. SUPPORT. Company will use commercially reasonable efforts to promptly respond to all access requests from users, and to provide access to technical support, including both full time and on-call services in addition to email and telephone support. In the event of any dispute with respect to user’s access requests or with respect to the provision of support services to you, you will bear all costs associated with any or all of the foregoing, and Company will bear any related fees associated with the denial of such access or the provision of support.
-Optionally, SAP2000 Ultimate can be extended by adding additional design products such as roof plates, cement concrete, walls, frames, floors, and beams. By matching the appropriate structural analysis with geometric shape, the user can fully and accurately design the space or structure.
899543212b

DOWNLOAD ··· https://gohhs.com/2uFVla
If you are involved in the design, operation, or maintenance of process plants for the chemical and petrochemical industry, you may need to create or read flow diagrams that show the structure and function of the plant. Flow diagrams are graphical representations of the equipment, piping, instrumentation, and control systems that are used to produce, process, or treat chemical or petrochemical substances.
-However, creating or reading flow diagrams can be challenging if you don't have a standard and consistent way of depicting the elements of the plant. Different industries, regions, and applications may use different symbols, formats, and conventions for their flow diagrams, which can lead to confusion, errors, and inefficiencies.
-Download Zip ✫✫✫ https://gohhs.com/2uFV2r
That's why you may want to download Iso 10628 Pdf Free Downloadl, which is a set of two international standards that specify the types, content, and presentation of flow diagrams for process plants. Iso 10628 Pdf Free Downloadl provides you with a common language and framework for communication and documentation of your process plant diagrams.
-Iso 10628 Pdf Free Downloadl is a set of two international standards that were developed by ISO (the International Organization for Standardization), the world's largest developer and publisher of international standards, covering a wide range of topics and sectors.
-Iso 10628 Pdf Free Downloadl consists of two parts:
-Iso 10628 Pdf Free Downloadl is based on the best practices and solutions agreed upon by experts from different countries and organizations who collaborated to reach consensus on the standards.
-There are many benefits of using Iso 10628 Pdf Free Downloadl as a reference for your flow diagrams for process plants. Some of them are:
-If you want to download Iso 10628 Pdf Free Downloadl for free,
-Alternatively, you can also buy Iso 10628 Pdf Free Downloadl from your national ISO member. You can find the contact information and online store links of all ISO members on this page: https://www.iso.org/members.html. Buying from your national ISO member may have some advantages, such as:
-Once you have downloaded Iso 10628 Pdf Free Downloadl, you can use it as a reference for creating or reading flow diagrams for process plants. It provides clear and consistent guidelines and symbols for depicting the elements of your process plant diagrams.
-To use Iso 10628 Pdf Free Downloadl for your process plant diagrams, follow these steps:
-Iso 10628 Pdf Free Downloadl is a valuable resource for anyone who needs to create or read flow diagrams for process plants. It gives you a standard and consistent way of representing the structure and operation of a process plant using graphical symbols and rules, and it helps you communicate and document your process plant diagrams effectively and efficiently.
-If you want to obtain Iso 10628 Pdf Free Downloadl, you can either buy it from the ISO Store website or from your national ISO member. You can then use it as a reference for creating or reading your flow diagram according to the type, purpose, and scope of your process plant.
-By using Iso 10628 Pdf Free Downloadl for your process plant diagrams, you can benefit from the best practices and solutions developed by experts from different countries and organizations. You can also improve your process performance, efficiency, safety, and environmental impact with clear and consistent flow diagrams.
-We hope this article has helped you to understand what Iso 10628 Pdf Free Downloadl is and how to use it for your process plant diagrams. If you have any questions or feedback, please feel free to contact us.
3cee63e6c2

DOWNLOAD →→→ https://urlca.com/2uDdEk
Besides the simple Formatting Tool, the Device Information Library is also capable of creating an application program based on the data of the target device, which lets the user perform various conversions and calculations in various settings.
-The Device Information Library and the Soft Conversion Tool are provided only with the functions necessary to support the target devices. Thus, users can produce other application programs by including additional functions in the Soft Conversion Tool. For example, the Application Tool generates a program that acquires the settings of the target device from the support tool and stores them to the host PC. A GUI is created in the Application Tool by following the format of the Soft Conversion Tool, and the user can confirm and change the settings of the target device from the host PC. Moreover, the Application Tool includes functions that can calculate the savings of energy or power consumption.
-Download File » https://urlca.com/2uDcYX
As there is no update function of the Device Information Library, the information on the target devices must be updated from the Support Tool. Thus, the device information of the target device must be updated when new values for parameters are added to the PLC.
-The Support Tool is provided with the function of transferring the data of the target device to the host PC. The data can be transferred in various ways. One example is using a storage device and so on. It can be used to store the data and format of the target device in the database of the host PC.
-Looking at the table, 96.8% (179 of 185) of the included studies used Omron wearables. The average sample size of the included studies was 14.3, ranging from 5 to 37, while the average number of participants was 57.2, ranging from 20 to 150. The use of Omron wearables was most frequently reported for the monitoring and tracking of behavior change (73.0%) and exercise performance (62.6%), as well as health and physical fitness (65.2%). This implies that Omron wearables are well established and widely used in current health research.
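As a quick sanity check, the share of studies using Omron wearables follows directly from the counts given above:

```python
# Share of the 185 included studies that used Omron wearables.
omron_studies, total_studies = 179, 185
share = round(100 * omron_studies / total_studies, 1)
print(share)  # 96.8
```

So 179 of 185 corresponds to about 96.8%, not a materially different figure, but worth stating consistently.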
899543212b

If you are a fan of battle royale games, you might have heard of Free Fire, one of the most popular and downloaded games on the mobile platform. Free Fire offers an exciting and immersive gameplay experience, with various modes, maps, weapons, characters, and more. But what if you want to get unlimited diamonds, skins, and other in-game items without spending real money? That's where Free Fire mod APK comes in. In this article, we will tell you everything you need to know about Free Fire mod APK, including what it is, what it offers, how to download and install it, and whether it is safe and legal to use.
-Free Fire is a world-famous survival shooter game available on mobile devices. It was developed by Garena International and released in 2017. Since then, it has amassed over a billion downloads on the Google Play Store alone, making it one of the most successful games of all time.
-Download Zip 🔗 https://urllie.com/2uNwS7
Free Fire is a battle royale game, which means that you have to compete with other players on a remote island and be the last one standing. Each match lasts for 10 minutes and involves up to 50 players. You can choose your starting point with your parachute, loot weapons and items, drive vehicles, hide in buildings or bushes, and fight your enemies. You also have to stay within the safe zone, which shrinks over time, forcing you to move closer to your opponents.
-Free Fire has many gameplay features that make it unique and fun. For example, you can create squads of up to 4 players and communicate with them using voice chat. You can also play different modes such as Clash Squad, which is a fast-paced 4v4 team deathmatch, or Ranked Mode, which tests your skills and rewards you with rank points. You can also explore different maps such as Bermuda, Kalahari, Purgatory, or Hangar, each with its own terrain and landmarks.
-One of the reasons why Free Fire is so popular is because of its wide variety of cosmetics and customization options. You can choose from hundreds of characters, each with their own backstory and special abilities. You can also equip them with different outfits, accessories, backpacks, parachutes, banners, emotes, and more. You can also customize your weapons with different skins, attachments, effects, and stickers.
-However, most of these cosmetics are not free. You have to spend diamonds, which are the premium currency of the game, to buy them from the in-game store or from events. Diamonds are not easy to come by unless you spend real money or complete certain tasks. That's why some players look for alternative ways to get unlimited diamonds without spending a dime.
-A mod APK is a modified version of an original application that has been altered by someone to provide some extra features or benefits that are not available in the official version. A Free Fire mod APK is a hacked version of the game that gives you access to unlimited diamonds, skins, weapons, health, aimbot, wallhack, and more.
-Some of the features and benefits that you can get from using a Free Fire mod APK are:
-free fire hack apk unlimited diamonds and coins download
-free fire mod menu apk download latest version unlimited diamonds
-free fire mod apk unlimited health and diamonds download for android
-free fire diamond hack apk download 2023 no human verification
-free fire mod apk unlimited money and diamond 2023 download
-free fire mod apk auto headshot and unlimited diamonds download
-free fire mod apk unlimited diamonds and gold download for pc
-free fire mod apk unlimited everything download for ios
-free fire mod apk aimbot and unlimited diamonds download
-free fire mod apk unlimited diamond generator download online
-free fire mod apk unlimited diamonds and coins 2023 download
-free fire mod apk unlimited skins and diamonds download
-free fire mod apk unlimited diamond hack download for iphone
-free fire mod apk unlimited diamonds and gems download
-free fire mod apk unlimited diamond and uc download
-free fire mod apk unlimited diamond and bp download
-free fire mod apk unlimited diamond and tickets download
-free fire mod apk unlimited diamond and rank download
-free fire mod apk unlimited diamond and characters download
-free fire mod apk unlimited diamond and weapons download
-free fire mod apk unlimited diamond and pets download
-free fire mod apk unlimited diamond and bundles download
-free fire mod apk unlimited diamond and emotes download
-free fire mod apk unlimited diamond and gloo wall download
-free fire mod apk unlimited diamond and elite pass download
-free fire mod apk unlimited diamond and magic cube download
-free fire mod apk unlimited diamond and redeem code download
-free fire mod apk unlimited diamond and vip download
-free fire mod apk unlimited diamond and ghost mode download
-free fire mod apk unlimited diamond and anti ban download
-free fire mod apk unlimited diamond and all unlocked download
-free fire mod apk unlimited diamond and car speed hack download
-free fire mod apk unlimited diamond and wall hack download
-free fire mod apk unlimited diamond and flying hack download
-free fire mod apk unlimited diamond and invisible hack download
-free fire mod apk unlimited diamond and teleport hack download
-free fire mod apk unlimited diamond and damage hack download
-free fire mod apk unlimited diamond and ammo hack download
-free fire mod apk unlimited diamond and grenade hack download
-free fire mod apk unlimited diamond and night mode hack download
These are just some of the features and benefits that you can enjoy from using a Free Fire mod APK. There are many more that you can discover by yourself once you download and install it on your device.
-However, using a Free Fire mod APK is not without its drawbacks and dangers. Some of the disadvantages and risks that you should be aware of are:
-These are just some of the disadvantages and risks that you should consider before using a Free Fire mod APK. There are many more that you should be careful of when downloading and installing it on your device.
-If you still want to try using a Free Fire mod APK despite the drawbacks and dangers, you will need to follow some steps to download and install it on your mobile device. The steps may vary depending on whether you are using an Android or an iOS device.
-If you are using an Android device, here are the steps that you need to follow:
-Congratulations! You have successfully downloaded and installed a Free Fire mod APK on your Android device. Now you can enjoy unlimited diamonds, skins, weapons, health, aimbot, wallhack and more in the game.
-If you are using an iOS device, here are the steps that you need to follow:
-Congratulations! You have successfully downloaded and installed a Free Fire mod IPA on your iOS device. Now you can enjoy unlimited diamonds, skins, weapons, health, aimbot, wallhack and more in the game.
-The final question that you may have is whether Free Fire mod APK is safe and legal to use. The answer is no, it is not safe or legal to use. Here are some of the reasons why:
-As we mentioned earlier, Free Fire mod APK is not verified or authorized by Garena or any other official source. This means that it may contain viruses, malware, spyware, or other harmful programs that can damage your device or steal your personal information. You may also expose your device to hackers or cybercriminals who can access your data or accounts.
-To avoid these risks, you should always download Free Fire mod APK from a reputable and trustworthy website that has positive reviews and feedback from other users. You should also scan the file with an antivirus or anti-malware software before installing it on your device. You should also backup your data and create a restore point in case something goes wrong.
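Beyond antivirus scanning, comparing the downloaded file's checksum against a digest published by the download site is a cheap extra integrity check. A minimal sketch, assuming the site publishes a SHA-256 hex digest (the function names here are illustrative, not part of any tool):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to cap memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path: str, expected_hex: str) -> bool:
    """Compare a downloaded file against the digest the site publishes."""
    return sha256_of(path) == expected_hex.lower().strip()
```

A mismatch means the file was corrupted or tampered with in transit and should not be installed; a match only proves the file is the one the site intended to serve, not that the file itself is safe.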
-As we mentioned earlier, Free Fire mod APK is considered a cheating tool that violates the terms of service and fair play policy of the game. This means that if you use it, you may be detected by the anti-cheat system and get banned from playing the game permanently. You may also lose your progress, achievements, rewards, and items that you have earned in the game. You may also face legal action from Garena or other authorities for breaking the law.
-To avoid these consequences, you should never use Free Fire mod APK in online or ranked modes, where you can affect other players' experience or ranking. You should also never use it in tournaments or events, where you can gain unfair advantages or prizes. You should also never share or promote Free Fire mod APK with other players, as this can spread cheating and harm the game community.
-In conclusion, Free Fire mod APK is a hacked version of the game that gives you access to unlimited diamonds, skins, weapons, health, aimbot, wallhack and more. However, it is not safe or legal to use, as it may contain viruses, malware, or other harmful programs that can damage your device or steal your personal information. It may also get you banned from playing the game permanently or face legal action from Garena or other authorities for breaking the law. Therefore, we do not recommend using Free Fire mod APK and advise you to play the game fairly and honestly.
-If you still have any questions or doubts about Free Fire mod APK, here are some FAQs that may help you:
-I hope this article has helped you understand what Free Fire mod APK is and how to use it. However, I strongly advise you not to use it and play the game fairly and honestly. Thank you for reading and have a nice day!
401be4b1e0

NBA 2K21 is one of the most popular and realistic basketball simulation games ever created. It features amazing graphics, gameplay, modes, and customization options that will make you feel like you are playing in the NBA. However, if you want to enjoy this game on your Android device, you might face some challenges. The official version of NBA 2K21 is not available for Android devices, and it costs $59.99 on other platforms. So, how can you download NBA 2K21 on Android for free? In this article, we will show you two methods that will allow you to play NBA 2K21 on your Android device without spending a dime. Follow these steps carefully and you will be able to experience the thrill of NBA 2K21 on your smartphone or tablet.
-NBA 2K21 is the latest installment in the NBA 2K series, a franchise that has been dominating the basketball video game market for over two decades. NBA 2K21 was released in September 2020 for PlayStation 4, Xbox One, Nintendo Switch, PC, and Stadia. It will also be available for PlayStation 5 and Xbox Series X/S in November 2020. NBA 2K21 features many improvements and additions over its predecessor, such as:
-Download File ✫✫✫ https://urllie.com/2uNzkF
If you are a fan of basketball or video games, you might be wondering why you should download NBA 2K21 on Android. Here are some reasons why you should consider playing NBA 2K21 on your Android device:
-As we mentioned earlier, there is no official version of NBA 2K21 for Android devices. However, there are two methods that will allow you to play NBA 2K21 on your Android device for free. The first method is to download the NBA 2K Mobile Basketball game from the Google Play Store, which is a free-to-play version of NBA 2K21 that offers a similar gameplay and graphics quality. The second method is to download the NBA 2K21 APK and OBB files from a trusted source, which are the files that contain the full version of NBA 2K21 for Android devices. Both methods have their advantages and disadvantages, so you can choose the one that suits you best. Let's take a look at each method in detail.
-The easiest and safest way to play NBA 2K21 on your Android device is to download the NBA 2K Mobile Basketball game from the Google Play Store. This is a free-to-play game that was released in October 2020 by 2K, Inc., the official developer of the NBA 2K series. To install the game on your Android device, follow these steps:
-The NBA 2K Mobile Basketball game is a simplified and optimized version of NBA 2K21 that offers a similar gameplay and graphics quality. The game features include:
-To play the game and enjoy the NBA 2K21 experience, you need to create an account and choose your favorite team. You can also link your Facebook or Google account to save your progress and sync your data across devices. Once you are logged in, you can access the main menu where you can choose from different modes and options. Here are some tips on how to play the game and enjoy the NBA 2K21 experience:
-The second method to play NBA 2K21 on your Android device is to download the NBA 2K21 APK and OBB files from a trusted source. These are the files that contain the full version of NBA 2K21 for Android devices, which is not officially available on the Google Play Store. However, this method is more risky and complicated than the first one, as you might encounter some problems such as malware, viruses, errors, or bans. Therefore, you need to be careful and cautious when choosing a website to download the files from. Here are some tips on how to find a reliable and safe website to download the NBA 2K21 APK and OBB files:
-One of the main risks of downloading the NBA 2K21 APK and OBB files from an unknown source is that you might get infected with malware or viruses that can harm your device or steal your data. Therefore, you need to verify the files and avoid malware or viruses before installing them on your device. Here are some steps on how to verify the files and avoid malware or viruses:
-Once you have downloaded and verified the NBA 2K21 APK and OBB files from a trusted source, you need to extract and copy them to your Android device. The NBA 2K21 APK file is an application file that contains the game installation package, while the NBA 2K21 OBB file is a data file that contains the game content and resources. To extract and copy them to your Android device, follow these steps:
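In outline, the extract-and-copy step looks like the sketch below. Both the archive name and, especially, the package-name folder under `Android/obb` are placeholders: the game's real package name is not given here and must be substituted.

```python
import os
import zipfile

# Placeholder names -- replace with your actual download and the game's
# real package-name folder under Android/obb (unknown in this article).
ARCHIVE = "nba2k21_download.zip"
OBB_TARGET = os.path.join("Android", "obb", "com.example.nba2k21")

def extract_obb(archive_path: str, target_dir: str) -> list:
    """Extract every .obb entry from the archive into target_dir."""
    os.makedirs(target_dir, exist_ok=True)
    extracted = []
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            if name.endswith(".obb"):
                zf.extract(name, target_dir)
                extracted.append(os.path.join(target_dir, name))
    return extracted
```

If the OBB folder path or package name is wrong, the game will typically behave as if the data file were missing and try to download it again.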
-How to get nba 2k21 for free on android phone
-NBA 2k21 android free download apk + obb
-Download nba 2k21 mobile basketball game for android
-NBA 2k21 free android download no verification
-How to install nba 2k21 on android device for free
-NBA 2k21 android free download full version
-NBA 2k21 apk mod free download for android
-How to play nba 2k21 on android without paying
-NBA 2k21 android free download offline
-Download nba 2k21 for android free with data
-NBA 2k21 free download for android tablet
-How to download nba 2k21 on android from play store for free
-NBA 2k21 android free download latest version
-NBA 2k21 hack apk free download for android
-How to download nba 2k21 on android emulator for free
-NBA 2k21 android free download highly compressed
-NBA 2k21 free coins and cash for android download
-How to download nba 2k21 on android with vpn for free
-NBA 2k21 android free download no root
-Download nba 2k21 for android free with cheats
-NBA 2k21 free redeem codes for android download
-How to download nba 2k21 on android from official website for free
-NBA 2k21 android free download unlimited money
-NBA 2k21 cracked apk free download for android
-How to download nba 2k21 on android using pc for free
-NBA 2k21 android free download no survey
-NBA 2k21 free locker codes for android download
-How to download nba 2k21 on android with torrent for free
-NBA 2k21 android free download mega link
-NBA 2k21 patch update free download for android
-How to download nba 2k21 on android without wifi for free
-NBA 2k21 android free download google drive link
-NBA 2k21 license key free download for android
-How to download nba 2k21 on android with qr code for free
-NBA 2k21 android free download mediafire link
-NBA 2k21 roster update free download for android
-How to download nba 2k21 on android with sd card for free
-NBA 2k21 android free download zip file
-NBA 2k21 soundtrack free download for android
-How to download nba 2k21 on android with bluetooth for free
The final step to play NBA 2K21 on your Android device is to install the NBA 2K21 APK file and run the game. However, before you can do that, you need to allow unknown sources on your Android device. This is a security setting that prevents you from installing apps that are not downloaded from the Google Play Store. To allow unknown sources on your Android device, follow these steps:
-After allowing unknown sources on your device, you can install the NBA 2K21 APK file and launch the game. To do that, follow these steps:
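If you would rather sideload from a computer than use the on-device file manager, the same install can be done with `adb` from Android platform-tools. A minimal sketch, assuming `adb` is on your PATH and USB debugging is enabled on the device:

```python
import shutil
import subprocess

def build_install_command(apk_path: str) -> list:
    """adb command to sideload an APK; '-r' replaces an existing install."""
    return ["adb", "install", "-r", apk_path]

def install_apk(apk_path: str) -> None:
    # Only attempt the install when adb is actually available on PATH.
    if shutil.which("adb") is None:
        print("adb not found; install Android platform-tools first")
        return
    subprocess.run(build_install_command(apk_path), check=True)
```

Note that sideloading via `adb install` does not bypass the unknown-sources restriction in every Android version, so the setting described above may still be required.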
-Congratulations! You have successfully installed NBA 2K21 on your Android device. You can now enjoy the full features of NBA 2K21 on your smartphone or tablet. The game features include:
-In this article, we have shown you how to download NBA 2K21 on Android for free. We have explained two methods that will allow you to play NBA 2K21 on your Android device without spending a dime. The first method is to download the NBA 2K Mobile Basketball game from the Google Play Store, which is a free-to-play version of NBA 2K21 that offers a similar gameplay and graphics quality. The second method is to download the NBA 2K21 APK and OBB files from a trusted source, which are the files that contain the full version of NBA 2K21 for Android devices. Both methods have their advantages and disadvantages, so you can choose the one that suits you best.
-We hope you have found this article helpful and informative. If you have followed our steps carefully, you should be able to play NBA 2K21 on your Android device for free. However, if you encounter any problems or issues while downloading or installing the game, please let us know in the comments section below. We will try our best to help you out. Also, if you have any suggestions or feedback about this article or our website, please feel free to share them with us. We appreciate your support and cooperation.
-This article is for educational purposes only. We do not condone or encourage piracy or illegal downloading of any games or apps. We are not affiliated with or endorsed by 2K, Inc., the developer or publisher of NBA 2K21, or any other games or apps mentioned in this article. We are not responsible for any damages or losses that may occur as a result of downloading or installing the game or any other files from any sources. Download and install the game at your own risk and discretion.
-Here are some frequently asked questions and answers about how to download NBA 2K21 on Android for free:
Game of Thrones is one of the most popular and acclaimed fantasy drama television series of all time. It has millions of fans around the world who are eagerly waiting for the next season or spin-off. But what if you want to watch Game of Thrones in your native language, such as Tamil? Is it possible to download Game of Thrones Tamil dubbed movie from KuttyMovies, a notorious piracy website that offers free movies and TV shows? In this article, we will answer these questions and more. We will also provide you with some alternatives to KuttyMovies for downloading Game of Thrones Tamil dubbed movie safely and legally.
Game of Thrones is a fantasy drama television series created by David Benioff and D. B. Weiss, based on the novel series A Song of Ice and Fire by George R. R. Martin. The series premiered on HBO in 2011 and concluded in 2019, with eight seasons and 73 episodes. The story revolves around the power struggle among the noble families of Westeros, a fictional continent, for the Iron Throne, the seat of the king. The series also features mythical creatures, such as dragons, direwolves, and white walkers, who pose a threat to the living. The series has won numerous awards, including 59 Emmy Awards, and has been praised for its complex characters, storylines, acting, production values, and cultural impact.
KuttyMovies is a piracy website that provides free downloads of movies and TV shows in various languages, such as Tamil, Telugu, Hindi, English, Malayalam, Kannada, etc. The website has a huge collection of Tamil dubbed movies, including Hollywood movies, Bollywood movies, South Indian movies, and web series. The website also updates its content regularly with the latest releases and leaks. KuttyMovies is one of the most visited piracy websites in India and attracts millions of users every month.
Tamil is one of the most spoken languages in India, with over 75 million speakers. It is also an official language in Sri Lanka and Singapore. Many people who speak Tamil prefer to watch movies and TV shows in their native language, as it helps them to understand the dialogues better and enjoy the cultural nuances. Moreover, some people may not be comfortable with English subtitles or audio, as they may find them distracting or hard to follow. Therefore, watching Game of Thrones in Tamil can be a more enjoyable and satisfying experience for them.
The first step to download Game of Thrones Tamil dubbed movie in KuttyMovies is to visit the website. However, this may not be as easy as it sounds, as KuttyMovies is an illegal website that is banned by the government and internet service providers. Therefore, you may need to use a VPN service or a proxy site to access the website. A VPN service can help you to bypass the geo-restrictions and hide your IP address from the authorities. A proxy site can help you to access the website.
The next step is to search for Game of Thrones Tamil dubbed movie in KuttyMovies. You can use the search bar on the homepage or browse through the categories and genres. You can also filter the results by year, quality, size, and language. You may find multiple links for Game of Thrones Tamil dubbed movie, as KuttyMovies uploads different versions and sources. You can choose the one that suits your preferences and availability.
After selecting the link for Game of Thrones Tamil dubbed movie, you will be redirected to another page where you can see the details of the movie, such as the title, genre, cast, director, rating, synopsis, screenshots, and download options. You can choose the quality and size of the movie that you want to download, such as 480p, 720p, 1080p, 300MB, 700MB, 1.5GB, etc. The higher the quality and size, the better the video and audio clarity, but also the longer the download time and the more storage space required.
The final step is to click on the download link for Game of Thrones Tamil dubbed movie. However, before you can start the download process, you may have to face some challenges, such as pop-up ads, redirects, captcha verification, and waiting time. These are some of the ways that KuttyMovies earns money from its users and protects its servers from bots and spam. You have to be patient and careful while dealing with these obstacles and avoid clicking on any suspicious or malicious links or buttons. Once you get past these hurdles, you will be able to download Game of Thrones Tamil dubbed movie in KuttyMovies.
Congratulations! You have successfully downloaded Game of Thrones Tamil dubbed movie in KuttyMovies. Now you can enjoy watching your favorite fantasy drama series in your native language on your device. You can also share it with your friends and family who are also fans of Game of Thrones and Tamil movies. However, you should also be aware of the risks and challenges of downloading Game of Thrones Tamil dubbed movie in KuttyMovies, which we will discuss in the next section.
One of the major risks of downloading Game of Thrones Tamil dubbed movie in KuttyMovies is that you are violating the law and engaging in piracy. Piracy is the unauthorized distribution or reproduction of copyrighted content without the permission of the owner or the law. Piracy is a serious crime that can result in legal actions, such as fines, lawsuits, or even imprisonment. Moreover, piracy harms the entertainment industry and the artists who work hard to create original and quality content. By downloading Game of Thrones Tamil dubbed movie in KuttyMovies, you are depriving them of their rightful revenue and recognition. Therefore, you should respect the intellectual property rights of the creators and avoid downloading Game of Thrones Tamil dubbed movie in KuttyMovies.
Another risk of downloading Game of Thrones Tamil dubbed movie in KuttyMovies is that you may expose your device and data to malware and viruses. Malware and viruses are malicious software that can infect your device and cause various problems, such as slowing down your performance, stealing your personal information, deleting your files, or even damaging your hardware. KuttyMovies is an unsecured and unregulated website that may contain malware and viruses in its download links, ads, or redirects. You may not even notice that your device has been infected until it is too late. Therefore, you should protect your device and data by using a reliable antivirus software and avoiding downloading Game of Thrones Tamil dubbed movie in KuttyMovies.
A third risk of downloading Game of Thrones Tamil dubbed movie in KuttyMovies is that you may not get the best quality and complete episodes of the series. KuttyMovies is a piracy website that does not have the official rights or sources to provide Game of Thrones Tamil dubbed movie. Therefore, it may rely on low-quality recordings, camrips, or screen captures to upload the movie. Moreover, it may not have all the episodes or seasons of the series, or it may have missing or corrupted parts. You may end up wasting your time and bandwidth on downloading Game of Thrones Tamil dubbed movie in KuttyMovies that does not meet your expectations or satisfaction. Therefore, you should look for other alternatives to KuttyMovies for downloading Game of Thrones Tamil dubbed movie.
Isaidub is another piracy website that offers free downloads of Tamil dubbed movies and TV shows. It has a large collection of Hollywood movies, Bollywood movies, South Indian movies, and web series in Tamil language. It also has a separate section for Game of Thrones Tamil dubbed movie, where you can find all the seasons and episodes of the series. However, Isaidub also has the same risks and challenges as KuttyMovies, such as legal issues, malware, and low quality. Therefore, you should use Isaidub at your own risk and discretion.
Tamilyogi is yet another piracy website that provides free downloads of Tamil movies and TV shows. It has a huge database of Tamil movies, ranging from old classics to new releases. It also has a category for Tamil dubbed movies, where you can find Game of Thrones Tamil dubbed movie along with other popular Hollywood movies and web series. However, Tamilyogi also suffers from the same problems as KuttyMovies and Isaidub, such as illegality, viruses, and poor quality. Therefore, you should be careful while using Tamilyogi for downloading Game of Thrones Tamil dubbed movie.
Oceanofmovies is a different kind of website that does not host any movies or TV shows on its own servers. Instead, it provides links to other websites where you can download or stream movies and TV shows for free. It has a vast collection of movies and TV shows in various languages, genres, and qualities. It also has links to Game of Thrones Tamil dubbed movie from different sources and platforms. However, Oceanofmovies also has its own drawbacks, such as broken links, pop-up ads, and unreliable quality. Therefore, you should verify the links and sources before using Oceanofmovies for downloading Game of Thrones Tamil dubbed movie.
Game of Thrones is a phenomenal fantasy drama series that has captivated millions of viewers across the globe. However, if you want to watch Game of Thrones in Tamil, you may face some difficulties in finding the Tamil dubbed version of the series. KuttyMovies is one of the piracy websites that claims to offer Game of Thrones Tamil dubbed movie for free download. However, KuttyMovies is not a safe or legal option, as it involves many risks and challenges, such as legal issues, malware, and low quality. Therefore, we do not recommend using KuttyMovies for downloading Game of Thrones Tamil dubbed movie. Instead, we suggest you look for other alternatives, such as Isaidub, Tamilyogi, or Oceanofmovies, which may have better quality and availability of Game of Thrones Tamil dubbed movie. However, you should also be aware of the drawbacks and dangers of these websites, and use them at your own risk and discretion. The best way to watch Game of Thrones in Tamil is to subscribe to a legitimate streaming service that has the official rights and licenses to provide Game of Thrones in Tamil language. This way, you can enjoy watching Game of Thrones in Tamil without any worries or hassles.
Here are some frequently asked questions about Game of Thrones Tamil dubbed movie download in KuttyMovies:
A: No, it is not legal to download Game of Thrones Tamil dubbed movie from KuttyMovies. KuttyMovies is a piracy website that violates the copyright laws and infringes the intellectual property rights of the creators and owners of Game of Thrones. Downloading Game of Thrones Tamil dubbed movie from KuttyMovies can result in legal actions, such as fines, lawsuits, or even imprisonment.
A: No, it is not safe to download Game of Thrones Tamil dubbed movie from KuttyMovies. KuttyMovies is an unsecured and unregulated website that may contain malware and viruses in its download links, ads, or redirects. Downloading Game of Thrones Tamil dubbed movie from KuttyMovies can expose your device and data to malware and viruses, which can cause various problems, such as slowing down your performance, stealing your personal information, deleting your files, or even damaging your hardware.
A: The best way to watch Game of Thrones in Tamil legally is to subscribe to a legitimate streaming service that has the official rights and licenses to provide Game of Thrones in Tamil language. Some examples of such streaming services are Hotstar, Amazon Prime Video, Netflix, etc. These streaming services offer high-quality and complete episodes of Game of Thrones in Tamil language with subtitles or audio options. They also have other features and benefits, such as offline viewing, multiple devices support, original content, etc.
A: Some other websites that offer Game of Thrones Tamil dubbed movie for free download are Isaidub, Tamilyogi, Oceanofmovies, etc. However, these websites are also piracy websites that have the same risks and challenges as KuttyMovies, such as legal issues, malware, and low quality. Therefore, you should be careful while using these websites for downloading Game of Thrones Tamil dubbed movie.
A: If you want to improve your English skills while watching Game of Thrones, you can try some of these tips:
By following these tips, you can enjoy watching Game of Thrones and also improve your English skills at the same time.
If you are a fan of horror games, you might have heard of Project Playtime, a multiplayer game where you have to survive a toy factory full of deadly surprises. But can you download Project Playtime on PS4, or is it only available on PC? In this article, we will answer this question and give you some tips on how to play Project Playtime on your console.
Project Playtime is a free-to-play multiplayer horror game that was released in December 2022 on Steam. It is developed by Moonbit Studios, an indie team based in Argentina.
In Project Playtime, six players have to work together to create one giant toy while avoiding a terrifying monster that roams the factory. The monster is controlled by a seventh player, who has only one goal: find and kill everyone. The game features different maps, characters, toys, and monsters, each with their own abilities and weaknesses.
As of now, Project Playtime is only available on PC. The developers have not announced any plans to bring the game to other platforms, such as PS4 or PS5. According to their FAQ page, they are focusing on improving the PC version first before considering other options.
Project Playtime has gained a lot of attention and praise from horror fans and streamers since its launch. It has over 29,000 positive reviews on Steam and millions of views on YouTube. But why do people want to play it on PS4?
Horror games are very popular among console gamers, especially those who own a PS4 or PS5. Some of the most successful horror titles in recent years, such as Resident Evil 2, Outlast 2, Until Dawn, and The Evil Within 2, were released on these platforms. Playing horror games on a big screen with surround sound can enhance the immersion and scare factor.
Another reason why people want Project Playtime on PS4 is because of its gameplay and graphics. The game offers a unique twist on the multiplayer horror genre, where teamwork and strategy are essential to survive. The game also has a colorful and cartoonish style that contrasts with the dark and creepy atmosphere. The game's trailer showcases some of the stunning visuals and animations that the game has to offer.
So, can you download Project Playtime on PS4? Unfortunately, the answer is no. There is no official way to play the game on your console. However, there are some unofficial methods that you can try at your own risk.
The developers of Project Playtime have stated that they have no plans to port the game to PS4 or PS5 anytime soon. They are focused on improving the PC version first and adding more content and features. They also said that they are not interested in making a fake trailer or gameplay video for consoles, as some fans have requested. Therefore, if you see any videos or websites claiming that you can download Project Playtime on PS4, they are most likely scams or hoaxes.
If you really want to play Project Playtime on PS4, there are some unofficial methods that you can try at your own risk. These methods involve streaming or remote play, which allow you to access your PC games from your console. However, these methods are not guaranteed to work, and they may have some drawbacks, such as lag, low quality, or compatibility issues. Here are some of the options you can try:
Project Playtime is a multiplayer horror game that is only available on PC. The developers have no plans to port the game to PS4 or PS5 anytime soon. If you want to play Project Playtime on your console, you can try some unofficial methods that involve streaming or remote play, but they are not guaranteed to work and they may have some drawbacks.
If you enjoyed this article, please share it with your friends and leave a comment below. Have you tried Project Playtime? What do you think of the game? Do you have any tips or tricks for playing it? Let us know!
Ok, here's my situation. I'm a college student and a few semesters ago I had to download and install Maya 2015 in order to use for a class. Next semester, I now have to download and install Maya 2016. So, I log into the education community site and it offers me the ability to download Maya 2014, 2015, 2016, and 2017. When I click on the 2016 version, I am given a serial number and product key. The serial number given is the exact same serial number I was given for Maya 2015. I was also sent an e-mail giving me the license details for Maya 2016 (again with the identical serial number).
After I download and install Maya 2016, it says it can't activate because the serial number is wrong. I tried to get an activation code from Autodesk, but the automated system tells me I am providing the incorrect request code (I assure you, I am typing in the right number). Here's a screenshot of the activation screen with a request number showing that I am trying to activate the 2016:
So obviously customer service thinks I am trying to activate Maya 2015, not 2016. I suspect because the serial numbers are exactly the same. The product numbers are different though. Anyway, I respond to the customer service e-mail and explain everything, and then I just get a message saying my ticket has been closed.
Thank you for your post! Sorry to hear you are having issues activating Maya 2016. You can use the same serial number for all the previous versions available to subscription users, so Maya 2014-2017.
If I'm supposed to be able to use the same serial number, then why, when I try to activate Maya, does it say that I have the wrong serial number? See below. I'm using the serial number provided to me by Autodesk in an e-mail. How can that be wrong?
I then responded to that message explaining that I am not trying to activate Maya 2015 (even though I already explained that in the original ticket), I am trying to activate Maya 2016. I forwarded both the licensing e-mail from Autodesk, as well as provided a screenshot showing that the serial number and request code are the correct numbers for Maya 2016, not 2015. Then I got an e-mail saying my support ticket was closed. So going through this page provided no resolution. My request was ignored. I could create another ticket, but I'd just get the same response.
Product keys are required for installation of Autodesk products and are used to differentiate products that are both sold independently and as part of a product suite. With the newest release of Autodesk 2016 products, we bring you a new list of product keys.
Note: Please ensure you are using the correct product key for the Autodesk product and version you are installing. Entering an incorrect product key will result in activation errors for that product.
Note: For single-user subscriptions, you can usually sign in so that a serial number is not required. You may see a Stand-alone license type for 2017-2019 products, but a User License type for 2020 and later product versions.
Autodesk 2016 All Products Crack: final activation keys for Autodesk 2016 x86/x64. This activator lets you activate the full version of Autodesk products: the keygen generates a working serial number when you paste the request code from an Autodesk program into it and returns an activation code. It also has a Patch button to patch Autodesk 2016 programs for permanent activation, and it supports both 32-bit and 64-bit Autodesk products.
Find Serial Numbers and Product Keys in Autodesk Account: Your serial number and product key are displayed in your Autodesk Account in the product tray on the Products & Services page, and again in the Software Download window.

Note about serial number visibility in Autodesk Account: Only account administrators, such as Contract Managers and Software Coordinators, and Named Users with assigned software benefits will see serial numbers in Autodesk Account. You are the account administrator if you purchased a software subscription using your Autodesk Account or were assigned the role of Contract Manager or Software Coordinator by your company. If you do not see the software you wish to activate in your Autodesk Account, or see the message "Contact your admin for serial numbers," you need to contact the contract administrator. Only an administrator can assign you as a Named User or End User and give you permissions to download and activate the software.
If, for whatever reason, you cannot locate your product key, there is another method:
1. Using your installation media, (USB key, DVD, download folder, etc.) navigate to the location of the setup.exe file for your Autodesk product.
2. In that folder, look for a file named MID.txt, MID01.txt, MID02.txt or some variation on that name.
3. Open this file in Notepad and verify that the product name is what you expected it to be.
4. The first five characters of the part number should also be the product key for that product.
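The lookup in the steps above can also be scripted. Here is a minimal Python sketch that scans an installation folder for MID*.txt files and pulls out the first five characters of the part number; note that the "Part number:" line format and the sample value below are assumptions for illustration, since the actual MID.txt layout varies between products, so verify the product name manually as step 3 advises:

```python
import glob
import os

def find_product_key(media_dir):
    """Return the product key recorded in a MID*.txt file under
    media_dir, or None if nothing matching is found.

    Illustrative sketch only: assumes the file carries a
    "Part number: ..." line and that the first five characters of the
    part number are the product key, as described in the steps above.
    """
    for path in sorted(glob.glob(os.path.join(media_dir, "MID*.txt"))):
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for raw in fh:
                line = raw.strip()
                if line.lower().startswith("part number"):
                    _, sep, rest = line.partition(":")
                    part_number = rest.strip()
                    if sep and part_number:
                        # First five characters are the product key.
                        return part_number[:5]
    return None
```

For example, a MID01.txt containing a line like `Part number: 657H1-WWR111-1001` (a hypothetical value) would yield `657H1`.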
Second, we believe this is the case especially considering the slower growth of oil and natural gas sources on the U&O Reservation over the past two and a half years since August 2016 when the National O&NG FIP became effective. Since that time, we have seen limited construction of new and modified oil and natural gas sources on the U&O Reservation. Oil and natural gas sources planning to construct on or after October 3, 2016 have been required to either comply with the National O&NG FIP or to seek a minor source permit under the generally applicable (site-specific) permit provisions of the Federal Indian Country Minor NSR rule.17 Sources complying with the National O&NG FIP are required to meet a two-part registration requirement: The Part 1 Registration Form is submitted 30 days before a source begins construction and contains information about source location and the Part 2 Registration Form is submitted within 60 days after the startup of production and contains information about emissions.18
Comment #7: One oil and natural gas industry commenter expressed that the industry's objective is that final regulations protect the environment and the public and cost-effectively address VOC emissions that as a co-benefit also reduce methane emissions, without unnecessarily hampering manufacturing and business expansion. According to the commenter, this objective can be met while the private sector develops and delivers more natural gas and oil to its customers. According to the oil and natural gas industry commenter, their efforts are producing real results based on the EPA's latest Greenhouse Gas Inventory, which continues to show a downward trend in methane emissions, even as U.S. oil and natural gas production rose dramatically. The commenter reported that the inventory report indicates that methane emissions from natural gas systems and petroleum systems increased 14 percent between 1990 and 2016, at a time when natural gas output increased by more than 50 percent. This is in addition to the U.S. continuing to lead the world in reducing carbon emissions, which are at 25-year lows, largely due to the increased use of natural gas.
This action does not impose any new information collection burden under the PRA. OMB has previously approved the information collection activities contained in the Federal Indian Country Minor NSR rule and has assigned OMB control number 2060-0003.35 This action amends the National O&NG FIP, which provides a mechanism for authorizing construction for true minor sources in the oil and natural gas production and natural gas processing segments of the oil and natural gas sector locating or located in areas covered by the Federal Indian Country Minor NSR rule to satisfy the requirements of that rule other than by obtaining a site-specific minor source permit. Because it substitutes for a site-specific permit, which would contain information collection activities covered by the Information Collection Request for the Federal Indian Country Minor NSR rule issued in July 2011, neither the proposed amendments, nor the National O&NG FIP, impose any new obligations or enforceable duties on any state, local or tribal government or the private sector. In fact, the final amendments should have the effect of reducing paperwork burden on sources wishing to locate or expand in the Indian country portion of the Uinta Basin Ozone Nonattainment Area, as the amendments provide an alternative to site-specific permitting for such sources.
Based on the calculations below, the total estimated number of respondents (WOSBs and EDWOSBs) for this collection of information varies depending upon the types of certification that a business concern is seeking. For initial certification, the total estimated number of respondents is 9,349. The total number was calculated using the two-year average number of business concerns that have provided information through Certify from March 2016 through February 2018. For annual updates, the total number is 12,347. For examinations and protests, the total number is 130.
We propose to adopt a new airworthiness directive (AD) for all Bombardier, Inc., Model CL-600-2B16 (601-3A, 601-3R, and 604 Variants) airplanes. This proposed AD was prompted by a report that main landing gear (MLG) side stay actuators have been assembled using nonconforming split ball bearings. This proposed AD would require verification of the serial numbers of the installed MLG side stay actuator assemblies, and replacement of the affected parts. We are proposing this AD to address the unsafe condition on these products.
-The service information describes procedures to verify the serial numbers of the installed MLG side stay actuator assemblies and to replace the affected parts. These documents are distinct since they apply to the airplane model in different configurations.
-The applicability of the MCAI is limited to Bombardier, Inc., Model CL-600-2B16 (601-3A, 601-3R, and 604 Variants) airplanes, serial numbers 5301 through 5665 inclusive, 5701 through 5988 inclusive, and 6050 through 6091 inclusive, equipped with MLG side stay actuator assembly containing split ball bearing part number 104467672. However, the applicability of this proposed AD includes all Bombardier, Inc., Model CL-600-2B16 (601-3A, 601-3R, and 604 Variants) airplanes and prohibits the installation of any MLG side stay actuator with a serial number identified in the service information. Because the affected part is a rotable part, we have determined that this part could later be installed on airplanes that were initially delivered with the acceptable part, thereby subjecting those airplanes to the unsafe condition. We have coordinated this difference with TCCA.
But, as I just said, there's not much I would change. It's a good product. If you are into that sort of thing, there are probably better choices out there, but for the average user, this is a pretty good product that lets you make movies quickly and easily.
-Before all of this, first impressions and PR-focused trailer releases were only part of the life of a game. Now we have constant streaming video, constant media coverage and constant feedback from dozens of different platforms, and keeping track of all of it can be overwhelming and time-consuming for the average game developer. Though id is known for its all-consuming online multiplayer, one of the company's most popular creations, World of Warcraft, was a single-player RPG created by a single man.
It takes a special kind of artist to stand back and keep a critical eye out while also keeping a compassionate one. People with "what if" mindsets don't stop themselves at the "what if" stage; they go all the way from "what can be" to "what might be." And that's where the bizarre behavior of people like Darran Breese and others of like mind comes into play.
-Video games are no longer the purview of adult men; gaming is now an activity undertaken by men, women and even children. The audience has expanded, and videogames are no longer exclusive to the 16-year-old boy in the basement. "The average game player is a woman, or a woman who plays videogames," said Durand Dinsmore, president of the videogame retail association. "The average gamer is a busy mother who attends her Weight Watchers meetings and needs time to relax. She wants something that will reward her for the time spent with her kids and be entertaining for both of them."