diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Camtasia 8 Product Key Tips and Tricks for Getting the Most Out of Your Video Editing Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Camtasia 8 Product Key Tips and Tricks for Getting the Most Out of Your Video Editing Software.md
deleted file mode 100644
index 6e07bb36c490a64a4f68e2b6522823c7fb6f7aa4..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Camtasia 8 Product Key Tips and Tricks for Getting the Most Out of Your Video Editing Software.md
+++ /dev/null
@@ -1,160 +0,0 @@
-
-
Camtasia 8 Product Key: How to Find and Use It
-
Have you ever wanted to create stunning videos with ease? Whether you want to record your screen, edit your footage, add effects, or share your creations online, Camtasia 8 is the software for you.
Camtasia 8 is powerful screen recording and video editing software that lets you capture anything on your screen, edit it with professional tools, and produce high-quality videos for any purpose.
-
But before you can enjoy all the features and benefits of Camtasia 8, you need a product key. A product key is a unique code that verifies that you have purchased a legitimate copy of the software and allows you to activate the full version.
-
In this article, we will show you how to find and use your Camtasia 8 product key in four easy ways. Let's get started!
-
What is Camtasia 8?
-
Camtasia 8 is screen recording and video editing software that helps you create professional-looking videos without any prior experience.
-
With Camtasia 8, you can:
-
-
-
Record anything on your screen, including your webcam, microphone, system audio, cursor movements, and keystrokes.
-
Edit your recordings with a simple drag-and-drop interface, adding transitions, annotations, animations, captions, quizzes, and more.
-
Enhance your videos with built-in effects, such as green screen, zoom and pan, audio leveling, noise removal, and more.
-
Share your videos directly to YouTube, Vimeo, Screencast.com, or any other platform of your choice.
-
-
Camtasia 8 is compatible with Windows XP/Vista/7/8/10 and requires a minimum of 2 GB RAM, 2 GHz CPU, and DirectX 9 graphics card.
-
Why do you need a product key?
-
-A product key is a unique 25-character code made up of letters and numbers. It looks something like this:
-XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
-
A product key serves two purposes:
-
-
It verifies that you have purchased a legitimate copy of Camtasia 8 from TechSmith or an authorized reseller.
-
It allows you to activate the full version of Camtasia 8 on your computer.
-
-
Without a product key, you can only use Camtasia 8 as a free trial for up to 30 days. After that, you will need to enter your product key to continue using the software.
-
If you lose or forget your product key, don't worry. There are several ways to find it again.
-
How to find your product key?
-
Option 1: TechSmith Account
-
If you purchased Camtasia 8 online from TechSmith or registered it with your email address, you can find your product key in your TechSmith account online.
Sign in to your TechSmith account and select My Products. The software key is shown below each product that it unlocks.
-
If the software key is not visible:
-
-
Select Find a lost Software Key.
-
Enter the email address that you used to purchase or register Camtasia 8.
-
Check your inbox for an email from TechSmith with your software key.
-
-
-
Option 2: Receipt
-
If you purchased Camtasia 8 directly from TechSmith, you can retrieve your product key from your original order receipt. To do so:
Enter the order number and password that were sent to you by email when you placed your order.
-
Click View Order Details.
-
Scroll down to Software Key and copy it.
-
-
Option 3: Key Lookup
-
If you still have Camtasia 8 installed on the original machine where you activated it with your product key, you can find it in the software itself.
-
To do so:
-
-
Open Camtasia 8.
-
Select Help > Technical Support.
-
-Scroll down a few lines until you locate RegistrationKey: [a 25-character string of letters and numbers].
-
This is your product key. Copy it for future reference.
-
-
Option 4: Customer Service
-
If none of the above options work for you or if you purchased Camtasia 8 from an authorized reseller other than TechSmith, you can contact TechSmith customer service and request your product key.
Fill out the form with as much information as possible about your purchase (such as order number, date of purchase, reseller name).
-
Add any attachments that might help prove your purchase (such as receipt or invoice).
-
Click Submit Request.
-
You will receive an email from TechSmith with your product key within one business day.
-
-
How to use your product key?
-
Unlock an Expired Trial or New Install/Reinstall
-
If you have used up the free trial period of Camtasia 8 or if you have installed/reinstalled it on a new machine, you will need to enter your product key to unlock the full version.
-
To do so:
-
-
Open Camtasia 8.
-
You will see a dialog box asking for your software key. Enter it in the field provided and click Unlock Now.
-
Unlock an Active Trial
-
If you have purchased Camtasia 8 during the trial period and want to unlock the full version without waiting for the trial to expire, you can do so by entering your product key.
-
To do so:
-
-
Open Camtasia 8.
-
Select Help > Enter Software Key.
-
Enter your product key in the field provided and click Unlock Now.
-
-
Unlock in Camtasia Maintenance Renewals
-
If you have purchased a Camtasia maintenance subscription, you can renew it with your product key and enjoy the latest updates and features of Camtasia 8.
-
To do so:
-
-
Open Camtasia 8.
-
Select Help > Enter Software Key.
-
Enter your product key in the field provided and click Renew Now.
-
-
Conclusion
-
-Camtasia 8 is great software for creating professional videos with ease. But to use it, you need a product key that verifies your purchase and activates the full version.
-
-In this article, we have shown you how to find and use your Camtasia 8 product key in four easy ways. Whether you retrieve it from your TechSmith account, your receipt, the software itself, or customer service, you can enter it and unlock all the features and benefits of Camtasia 8.
-
So what are you waiting for? Grab your product key and start creating amazing videos with Camtasia 8 today!
-
FAQs
-
Here are some frequently asked questions and answers about Camtasia 8 product key:
-
Q: How many computers can I install Camtasia 8 on with one product key?
-
A: You can install Camtasia 8 on up to two computers with one product key, as long as they are not used at the same time. For example, you can install it on your desktop and laptop, or on your home and work computer.
-
Q: What if I want to install Camtasia 8 on more than two computers?
-
A: If you want to install Camtasia 8 on more than two computers, you will need to purchase additional licenses or a volume license. You can contact TechSmith sales team for more information.
-
Q: What if I change or upgrade my computer?
-
A: If you change or upgrade your computer, you can transfer your Camtasia 8 license to the new machine. To do so, you will need to deactivate the license on the old machine and activate it on the new one. See this support article for more details.
-
Q: What if I lose my product key?
-
A: If you lose your product key, don't panic. You can find it again by following one of the options we have discussed in this article. If none of them work for you, you can contact TechSmith customer service and request your product key.
-
Q: How can I get a free product key for Camtasia 8?
-
A: There is no legal way to get a free product key for Camtasia 8. Any website or program that claims to offer a free or cracked product key is likely to be a scam or a virus. The only way to get a legitimate product key for Camtasia 8 is to purchase it from TechSmith or an authorized reseller.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Code Breaker PS2 Version 7.0 Download Why You Need This Amazing Cheat Device for Your PlayStation 2.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Code Breaker PS2 Version 7.0 Download Why You Need This Amazing Cheat Device for Your PlayStation 2.md
deleted file mode 100644
index 4f7310de126a42779f84d8f46a2dee90bdfffc34..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Code Breaker PS2 Version 7.0 Download Why You Need This Amazing Cheat Device for Your PlayStation 2.md
+++ /dev/null
@@ -1,140 +0,0 @@
-
-
Code Breaker PS2 Version 7.0 Download: How to Unlock All the Cheats and Secrets of Your Favorite Games
-
Do you love playing games on your PlayStation 2 but wish you could have more fun and freedom? Do you want to access hidden features, unlock extra content, and customize your gameplay? If you answered yes to any of these questions, then you need Code Breaker PS2 Version 7.0.
-
Introduction
-
In this article, we will tell you everything you need to know about Code Breaker PS2 Version 7.0, including what it is, why you need it, how to download it, and how to use it. By the end of this article, you will be able to enjoy your favorite games like never before.
Code Breaker is a cheat device that allows you to modify your games in various ways. It works by altering the data stored in the memory of your console or game disc, giving you access to features that are normally unavailable or hidden. With Code Breaker, you can do things like:
-
-
Increase your health, ammo, money, or other resources
-
Unlock new levels, characters, weapons, or items
-
Skip difficult or boring parts of the game
-
Change the graphics, sound, or gameplay settings
-
Create your own cheats and share them with others
-
-
Code Breaker is compatible with hundreds of games across various genres and platforms. Whether you like action, adventure, sports, racing, fighting, or anything else, Code Breaker has something for you.
-
Why do you need Code Breaker PS2 Version 7.0?
-
If you already have a previous version of Code Breaker for PS2, you might be wondering why you need to upgrade to Version 7.0. Well, there are several reasons why Code Breaker PS2 Version 7.0 is the best cheat device for your console:
-
-
It has over 30,000 pre-loaded cheat codes for more than 1,500 games
-
It allows you to create and edit your own custom codes using a simple interface
-
It has a memory card manager and backup utility that lets you save and transfer your game data
-
It is compatible with all models and regions of PS2, including the slim and fat versions
-
It has a sleek and user-friendly design that makes it easy to navigate and use
-
-
With Code Breaker PS2 Version 7.0, you can get the most out of your gaming experience without any hassle or limitations.
-
How to download Code Breaker PS2 Version 7.0?
-
If you are ready to download Code Breaker PS2 Version 7.0, there are two ways you can do it:
-
-
You can buy the official disc from online retailers or local stores that sell video games and accessories. The disc comes with a manual that explains how to install and use it.
-
-You can download the ISO file from online sources and burn it onto a blank DVD using software like ImgBurn or Nero. The ISO file contains all the data and instructions that are on the official disc.
-
-
Either way, you will need a modded PS2 or a swap magic disc to run Code Breaker PS2 Version 7.0 on your console. A modded PS2 is one that has been modified with a chip or software that allows it to play discs from other regions or sources. A swap magic disc is a special disc that tricks your console into thinking that it is playing an original game disc.
-
-If you don't have a modded PS2 or a swap magic disc, don't worry. You can still use Code Breaker PS2 Version 7.0 by using a method called Free McBoot. Free McBoot is software that lets you boot up your console from a memory card without any modifications. You can install Free McBoot on your memory card using a PC and a USB adapter.
-
Features of Code Breaker PS2 Version 7.0
-
Now that you know how to download Code Breaker PS2 Version 7.0, let's take a look at some of its amazing features:
-
-
Over 30,000 pre-loaded cheat codes
-
Code Breaker PS2 Version 7.0 comes with over 30,000 cheat codes for more than 1,500 games. These codes cover everything from basic cheats like infinite health or ammo to advanced cheats like debug modes or hidden menus. You can browse through the codes by game title or genre using the alphabetical or numerical buttons on your controller.
-
You can also search for specific codes using keywords or phrases using the virtual keyboard on the screen. For example, if you want to find codes for Grand Theft Auto: San Andreas, you can type in "GTA" or "San Andreas" and see all the relevant results.
-
Custom code creation and editing
-
If you want to create your own cheats or edit existing ones, Code Breaker PS2 Version 7.0 lets you do that too. You can use the code creation mode to enter new codes using hexadecimal values or binary switches. You can also use the code editing mode to modify or delete existing codes.
-
You can name your codes whatever you want and save them on your memory card for future use. You can also share your codes with other users by uploading them online or downloading them from other sources.
-
Memory card manager and backup utility
-
Code Breaker PS2 Version 7.0 also has a memory card manager and backup utility that lets you manage your game data easily. You can view all the files on your memory card and copy, move, delete, or rename them as you wish.
-
You can also backup your game data onto another memory card or onto your PC using a USB cable or adapter. This way, you can protect your data from corruption or loss and restore it whenever you need it.
-
Compatible with all PS2 models and regions
-
One of the best things about Code Breaker PS2 Version 7.0 is that it works with all models and regions of PS2 consoles. Whether you have a slim or fat version of PS2, whether it is NTSC or PAL format, whether it is from North America or Europe or Asia or anywhere else in the world, Code Breaker PS2 Version 7.0 will work with it.
-
All you need is a modded PS2 or a swap magic disc or Free McBoot software to run it on your console.
-
How to use Code Breaker PS2 Version 7.0
-
Using Code Breaker PS2 Version 7.0 is very easy and simple. Here are the steps you need to follow:
-
Insert the disc and boot up your PS2
-
The first thing you need to do is insert the Code Breaker PS2 Version 7.0 disc into your console and turn it on. If you have a modded PS2 or Free McBoot software installed on your memory card, then it should boot up automatically.
-
-If you have a swap magic disc instead of a modded PS2 or Free McBoot software installed on your memory card, then after inserting the swap magic disc into your console, turn it on and wait a few seconds until the swap magic menu appears. Then press the eject button, remove the swap magic disc, insert the Code Breaker PS2 Version 7.0 disc, close the tray, and press the X button. Code Breaker PS2 Version 7.0 should then boot up.
-
Select the game and the cheats you want to activate
-
Once Code Breaker PS2 Version 7.0 boots up, you will see a list of games that have cheat codes available. You can scroll through the list using the directional buttons on your controller and select the game you want to play by pressing the X button.
-
After selecting the game, you will see a list of cheats that are compatible with that game. You can scroll through the list using the directional buttons on your controller and select the cheats you want to activate by pressing the X button. A check mark will appear next to the selected cheats.
-
You can also press the triangle button to access more options, such as viewing the code details, editing the code values, or creating new codes. You can also press the start button to search for codes using keywords or phrases.
-
Swap the disc with your game disc and start playing
-
After selecting and activating the cheats you want, you need to swap the Code Breaker PS2 Version 7.0 disc with your game disc. To do this, you need to press the eject button on your console and remove the Code Breaker PS2 Version 7.0 disc.
-
Then, you need to insert your game disc and close the tray. You will see a message on the screen that says "Please insert your game disc". After inserting your game disc, you need to press the X button to start playing.
-
If you have a modded PS2 or Free McBoot software installed on your memory card, then you can swap the discs without any problem.
-
-If you have a swap magic disc instead of a modded PS2 or Free McBoot software installed on your memory card, then after inserting your game disc, press and hold the R1 button until you see a message on the screen that says "Press X to start game". Then press the X button to start playing.
-
Enjoy the enhanced gaming experience
-
Now that you have swapped the discs and started playing, you can enjoy your game with all the cheats and modifications that you activated. You can see the effects of the cheats on your game screen, such as increased health, unlocked levels, or changed graphics.
-
You can also deactivate or reactivate any cheat at any time during gameplay by pressing the select button on your controller. This will bring up a menu that shows all the cheats that are active for that game. You can toggle any cheat on or off by pressing the X button.
-
You can also access other features of Code Breaker PS2 Version 7.0 during gameplay by pressing different combinations of buttons on your controller. For example, you can press L1 + L2 + R1 + R2 + select + start to return to Code Breaker PS2 Version 7.0 main menu. You can also press L1 + L2 + R1 + R2 + up + start to reset your console.
-
Conclusion
-
In conclusion, Code Breaker PS2 Version 7.0 is a cheat device that lets you unlock all the cheats and secrets of your favorite games on your PlayStation 2 console. It has over 30,000 pre-loaded cheat codes for more than 1,500 games, and it allows you to create and edit your own custom codes. It also has a memory card manager and backup utility that lets you save and transfer your game data. It is compatible with all models and regions of PS2 consoles, and it is easy to use.
-
If you want to download Code Breaker PS2 Version 7.0, you can either buy the official disc from online retailers or local stores, or download the ISO file from online sources and burn it onto a blank DVD. You will also need a modded PS2 or a swap magic disc or Free McBoot software to run it on your console.
-
If you want to use Code Breaker PS2 Version 7.0, you just need to insert the disc and boot up your PS2, select the game and the cheats you want to activate, swap the disc with your game disc and start playing, and enjoy the enhanced gaming experience.
-
Code Breaker PS2 Version 7.0 is a must-have for any PS2 gamer who wants to have more fun and freedom with their games. It is a powerful tool that can transform your gaming experience in amazing ways.
-
Summary of the main points
-
-
Code Breaker PS2 Version 7.0 is a cheat device that lets you modify your games in various ways
-
It has over 30,000 pre-loaded cheat codes for more than 1,500 games
-
It allows you to create and edit your own custom codes
-
It has a memory card manager and backup utility
-
It is compatible with all models and regions of PS2 consoles
-
It is easy to download and use
-
-
Call to action and final thoughts
-
If you are interested in Code Breaker PS2 Version 7.0, don't hesitate to get it today. You can find it online or in local stores at affordable prices. You can also check out some reviews and testimonials from other users who have tried it and loved it.
-
Code Breaker PS2 Version 7.0 is not just a cheat device; it is a way of enhancing your gaming experience and making it more enjoyable and satisfying. It is a way of exploring new possibilities and discovering new things in your games. It is a way of expressing yourself and having fun with your games.
-
So what are you waiting for? Get Code Breaker PS2 Version 7.0 today and unleash your gaming potential!
-
Frequently Asked Questions
-
-
Is Code Breaker PS2 Version 7.0 legal?
-
Code Breaker PS2 Version 7.0 is legal as long as you use it for personal use only and do not distribute or sell it without permission from its creators. However, some game developers may not approve of using cheat devices on their games and may consider it as piracy or hacking. Therefore, use Code Breaker PS2 Version 7.0 at your own risk and discretion.
-
Does Code Breaker PS2 Version 7.0 work with online games?
-
Code Breaker PS2 Version 7.0 works with online games as long as they do not require an internet connection or an online account to play them. However, using cheat devices on online games may be considered as cheating or unfair by other players or moderators and may result in bans or penalties from online servers or communities. Therefore, use Code Breaker PS2 Version 7.0 responsibly and respectfully when playing online games.
-
Does Code Breaker PS2 Version 7.0 damage my console or game disc?
-
No, Code Breaker PS2 Version 7.0 does not damage your console or game disc in any way. It only alters the data stored in the memory of your console or game disc temporarily while you are playing the game. Once you turn off your console or eject your game disc, everything returns to normal. However, make sure that you do not turn off your console or eject your game disc while Code Breaker PS2 Version 7.0 is running, as this may cause errors or glitches.
-
Can I use Code Breaker PS2 Version 7.0 with other cheat devices?
-
-No, Code Breaker PS2 Version 7.0 is not compatible with other cheat devices, such as Action Replay, GameShark, or Xploder. Using multiple cheat devices at once may cause conflicts or errors. Therefore, use only one cheat device at a time.
-
Where can I find more information about Code Breaker PS2 Version 7.0?
-
-You can find more information about Code Breaker PS2 Version 7.0 on its official website, www.codebreaker.com, where you can also find updates, downloads, support, forums, and more. You can also follow Code Breaker on social media platforms, such as Facebook, Twitter, YouTube, or Instagram, where you can get news, tips, tricks, videos, contests, and more.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (ip man 4 izle 720p or 1080pgolkes) Watch the Final Chapter of the Martial Arts Saga.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (ip man 4 izle 720p or 1080pgolkes) Watch the Final Chapter of the Martial Arts Saga.md
deleted file mode 100644
index fc2b937d67d29bab148c457a919ec25fa89b988c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (ip man 4 izle 720p or 1080pgolkes) Watch the Final Chapter of the Martial Arts Saga.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
HD Online Player (ip man 4 izle 720p or 1080pgolkes)
-
Introduction
-
If you are a fan of martial arts movies, you have probably heard of Ip Man, the legendary Wing Chun master who trained Bruce Lee. Ip Man's life and legacy have inspired a series of films starring Donnie Yen as the titular character. The latest and final installment, Ip Man 4: The Finale, was released in 2019 and received critical acclaim for its action scenes, drama, and historical accuracy.
-
But what if you missed the chance to watch Ip Man 4 in theaters? Or what if you want to watch it again in the comfort of your home? Don't worry, we have got you covered. In this article, we will tell you everything you need to know about watching Ip Man 4 online in high definition (HD). Whether you prefer streaming or downloading, we will show you the best options available for watching this epic martial arts adventure.
-
What is Ip Man 4: The Finale?
-
Before we get into the details of how to watch Ip Man 4 online, let's briefly recap what the movie is about. Ip Man 4: The Finale is the fourth and final film in the Ip Man series, directed by Wilson Yip and written by Edmond Wong, Hiroshi Fukazawa, and Tai-lee Chan. The film follows Ip Man's journey to San Francisco in the 1960s, where he faces racism and discrimination against Chinese immigrants and martial artists. He also tries to find a suitable school for his rebellious son, who has been expelled from his previous school in Hong Kong.
-
Along the way, Ip Man encounters his most formidable opponents yet: Barton Geddes (Scott Adkins), a ruthless U.S. Marine Corps karate instructor who despises Chinese kung fu; Colin Frater (Chris Collins), a former student of Geddes who challenges Ip Man to a fight; and Wan Zong Hua (Wu Yue), the leader of the Chinese Benevolent Association who opposes Ip Man's student Bruce Lee (Danny Chan) for teaching Wing Chun to non-Chinese students.
-
Ip Man 4: The Finale is not only a thrilling action movie, but also a touching tribute to the real Ip Man, who died in 1972. The film features many historical references and cameo appearances by famous martial artists, such as Yuen Woo-ping, Lo Meng, Chen Zhihui, and Bruce Lee's daughter Shannon Lee. The film also showcases the beauty and diversity of Chinese culture and martial arts, such as lion dance, tai chi, bagua zhang, xing yi quan, baji quan, and of course, Wing Chun.
-
Why watch Ip Man 4 online?
-
There are many reasons why you might want to watch Ip Man 4 online. Here are some of them:
-
-
You love martial arts movies and want to see some of the best fight choreography and stunt work ever done on screen.
-
You are a fan of Donnie Yen and want to see his final performance as Ip Man.
-
You are interested in Chinese history and culture and want to learn more about the challenges and achievements of Chinese immigrants in America.
-
You are curious about Bruce Lee and want to see how he was influenced by Ip Man.
-
You missed the theatrical release of Ip Man 4 or want to watch it again with your friends or family.
-
-
No matter what your reason is, watching Ip Man 4 online will give you an unforgettable experience that will make you feel inspired, entertained, and enlightened.
-
How to watch Ip Man 4 online?
-
-Now let's see how you can watch Ip Man 4 online. There are two main ways to do so: streaming and downloading. Let's look at each one in detail.
-
Streaming platforms
-
Streaming platforms are websites or apps that allow you to watch movies and shows online without downloading them. You just need a stable internet connection and a compatible device, such as a computer, smartphone, tablet, smart TV, or gaming console. Streaming platforms usually charge a monthly or annual fee for their services, but some of them also offer free trials or ad-supported content.
-
There are many streaming platforms that offer Ip Man 4 online, but here are some of the most popular ones:
-
-
Hi-YAH!
-
Hi-YAH! is a streaming service dedicated to Asian action and martial arts movies. It has a large collection of films from China, Hong Kong, Japan, Korea, Thailand, and more. You can watch Ip Man 4 on Hi-YAH! with English subtitles or dubbed in English. Hi-YAH! is available on Roku, Apple TV, Amazon Fire TV, Android TV, iOS, Android, and web browsers. You can sign up for a 7-day free trial or pay $2.99 per month or $19.99 per year.
-
Amazon Prime Video
-
Amazon Prime Video is a streaming service that offers thousands of movies and shows, including original content from Amazon Studios. You can watch Ip Man 4 on Amazon Prime Video with English subtitles or dubbed in English. Amazon Prime Video is available on Roku, Apple TV, Amazon Fire TV, Android TV, iOS, Android, web browsers, and more. You can sign up for a 30-day free trial or pay $8.99 per month or $119 per year for Amazon Prime membership, which also includes free shipping, music streaming, e-books, and more.
-
YouTube
-
YouTube is a video-sharing platform that allows you to watch and upload videos of various genres and topics. You can watch Ip Man 4 on YouTube with English subtitles or dubbed in English. YouTube is available on almost any device with an internet connection. You can rent Ip Man 4 for $3.99 or buy it for $12.99 on YouTube.
-
Downloading options
-
Downloading options are websites or apps that allow you to download movies and shows to your device and watch them offline. You usually need to pay a one-time fee for each movie or show you want to download. Downloading options are ideal for people who have limited internet access or data plans, or who want to own a digital copy of their favorite movies and shows.
-
There are many downloading options that offer Ip Man 4 online, but here are some of the most common ones:
-
Torrent sites
-
Torrent sites are websites that use peer-to-peer (P2P) technology to share files among users. You can download Ip Man 4 from torrent sites using a torrent client software, such as BitTorrent or uTorrent. Torrent sites usually have various versions of Ip Man 4 with different resolutions (720p or 1080p), languages (subtitles or dubbing), and file sizes (golkes). However, torrent sites are also risky and illegal, as they may contain viruses, malware, spyware, or copyrighted content that can harm your device or get you in trouble with the law.
-
Direct download links
-
Direct download links are links that let you download the movie file directly from a hosting server, without using P2P software. However, like torrent sites, many of them are risky and illegal, as they may contain viruses, malware, spyware, or copyrighted content that can harm your device or get you in trouble with the law.
-
Conclusion
-
Ip Man 4: The Finale is a must-watch movie for martial arts fans and anyone who appreciates a good story of courage, honor, and friendship. It is the last chapter of the Ip Man saga that has captivated millions of viewers around the world for over a decade. It is also a fitting farewell to Donnie Yen's iconic portrayal of the Wing Chun master who inspired generations of martial artists, including Bruce Lee.
-
Whether you choose to watch Ip Man 4 online by streaming or downloading, you will not regret spending your time and money on this movie. It is a movie that will make you laugh, cry, cheer, and learn. It is a movie that will make you feel alive.
-
Summary of the article
-
In this article, we have discussed the following points:
-
-
What is Ip Man 4: The Finale and what is it about?
-
Why watch Ip Man 4 online and what are the benefits?
-
How to watch Ip Man 4 online and what are the options?
-
What are the streaming platforms that offer Ip Man 4 online?
-
What are the downloading options that offer Ip Man 4 online?
-
-
FAQs
-
Here are some frequently asked questions about watching Ip Man 4 online:
-
-
Is Ip Man 4 available on Netflix?
-
Yes, Ip Man 4 is available on Netflix in some regions. You can check if it is available in your region by visiting this link: https://www.netflix.com/title/81227536
-
Is Ip Man 4 based on a true story?
-
Ip Man 4 is based on the life of the real Ip Man, but it also takes some creative liberties and fictionalizes some events and characters. For example, the characters of Barton Geddes and Colin Frater are not real people, but they represent the racism and hostility that Chinese martial artists faced in America at that time.
-
Who plays Bruce Lee in Ip Man 4?
-
Bruce Lee is played by Danny Chan Kwok-kwan, who also played him in Ip Man 3 and the TV series The Legend of Bruce Lee. Chan is known for his resemblance and imitation of Bruce Lee's mannerisms and fighting style.
-
What is the meaning of golkes in ip man 4 izle 720p or 1080pgolkes?
-
Golkes is a slang term that means gigabytes or GB, used to indicate the file size of a movie or show that is downloaded from torrent sites or direct download links, while izle is Turkish for "watch". For example, ip man 4 izle 720p or 1080pgolkes means watch Ip Man 4 online in 720p or 1080p resolution with a file size measured in gigabytes.
-
What is the best way to watch Ip Man 4 online?
-
The best way to watch Ip Man 4 online depends on your personal preference, budget, and internet connection. If you want to watch it legally and safely, we recommend using streaming platforms like Hi-YAH!, Amazon Prime Video, or YouTube. If you want to watch it offline or own a digital copy, we recommend using downloading options like Apple TV, Google Play Movies, or YouTube. However, we advise you to avoid using torrent sites or direct download links as they may contain viruses, malware, spyware, or copyrighted content that can harm your device or get you in trouble with the law.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo Edifil Sellos Pdf Free ((FULL)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo Edifil Sellos Pdf Free ((FULL)).md
deleted file mode 100644
index 2f806fd5159f465132d1a96ca219b66bd98d364a..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Catalogo Edifil Sellos Pdf Free ((FULL)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
2002 - 0755 - 6th grade reading book report template free download and preview, download free. sixth grade book report rubric. creative writing rubric grade 11 - best course work in our essay team. scenario writing from the storytelling abilities of free rubrics tend to experience. biography writing rubric below to write comments on personal identity essay rubric 7th.
grade 9 writing prompts - students should be hoping to write error-free. 11 12, composition reading comprehension, writing prompts student rubrics and. poets and writers # writing prompts personal narrative # writing prompts powerpoint. education is impossible without writing college biology lab report rubric. sat score uc davis free classeslearn more about the sat and act,. in mysore on palace essay short kannada personal narrative essay about reading and. sharing stories: writing a personal narrative, 1st grade, emily hall. since the rubric score of 4 represents above grade level work, the 5th grade. 00 add to cart; fourth grade common core standards posters free add to cart. writing.
field: the present invention generally relates to hydrogen storage technology and, more particularly, to cryogenic hydrogen storage elements, monitoring systems thereof, and methods of making and using thereof. description of the related art: hydrogen gas has many recognized benefits as a clean, renewable energy source. for example, hydrogen gas can be used as a fuel for transportation and heating. in contrast to other fossil fuels, there are no greenhouse gases or other environmental harm created by the usage of hydrogen.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Element 3d Free Download With Crack And Keygen.md b/spaces/1gistliPinn/ChatGPT4/Examples/Element 3d Free Download With Crack And Keygen.md
deleted file mode 100644
index 45fefad5f5c8fb107e310df99c164ea805d0f0a7..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Element 3d Free Download With Crack And Keygen.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
for beginners and seasoned 3d artists, the quiz is a vast treasure trove of ideas, techniques, inspiration, information, and feedback for the 3d studio product. it is also a great launching pad for 3d studio pro.
the interactive light and materials previewer streamlines the process of making seamless environments and textures for shading and special effects that make a scene vibrant. the feature standardizes the elements and tools that you use to create the latest chapter of your business 2.0 marketing campaign.
-
each dedicated customer support assistant is happy to receive messages, inquiries, and feedback, to address complaints, and to assist you with your questions quickly so that your issue gets resolved. the interface might be a bit daunting at first, and the one-of-a-kind ui could be tuned up a little more. as for fees: there is no monthly or annual fee for you to keep using the software.
-
pixelmator 3d free download 2018 is part of the latest release from the digital detox. with this edition, we have got a clean look and feel. the last version of this edition was released on august 15, 2018. all those who are interested in adding some stylish looks to their photoshop collections should make sure that they download the latest version of pixelmator 3d 2 free.
-
-
element 3d free download 2018 latest version is the most recent release of the software; it was released on 15 august 2018. many new features have been added in this version, while the design editor keeps its basic functions exactly the same as in previous versions. anyone who needs to modify their design program or design their own can download the new element 3d 2 free trial.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/ apk Android .md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/ apk Android .md
deleted file mode 100644
index 300558aa709435e523d7cf26e3be6872f6a92831..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/ apk Android .md
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
Страшные истории apk: The Best App for Horror Stories Fans
-
If you love reading horror stories, then you should definitely check out Страшные истории apk. This is an amazing app that offers you more than 3500 scary stories that will keep you on the edge of your seat. Whether you prefer classic horror tales, urban legends, creepypastas, or true stories, you will find something that suits your taste in this app. In this article, we will tell you everything you need to know about Страшные истории apk, including what it is, why you should download it, how to download and install it, and how to use it. Read on if you dare!
-
What is Страшные истории apk?
-
Страшные истории apk is an app that allows you to read or listen to thousands of horror stories on your Android device. You can choose from different categories such as ghosts, zombies, vampires, werewolves, demons, aliens, killers, mysteries, and more. You can also search for stories by keywords or authors. The app has a simple and user-friendly interface that lets you easily access your favorite stories.
One of the best features of Страшные истории apk is that you can read or listen to the stories offline, without any internet connection. The app automatically downloads the stories to your device, so you can enjoy them anytime and anywhere. Moreover, the app updates regularly, adding new stories every week. You will never run out of horror stories to read or listen to with this app.
-
Why should you download Страшные истории apk?
-
There are many reasons why you should download Страшные истории apk. Here are some of them:
You will have access to a huge collection of horror stories that will satisfy your curiosity and thrill your senses.
You will be able to read or listen to the stories offline, saving your data and battery.
You will be able to customize the settings and themes of the app, making it more comfortable and enjoyable for you.
You will be able to rate, comment, share, and bookmark the stories, interacting with other horror fans and discovering new stories.
You will be able to improve your Russian language skills, learning new words and expressions from the stories.
-
A huge collection of horror stories
-
One of the main reasons why you should download Страшные истории apk is that it offers you more than 3500 horror stories that cover various genres and topics. You can find stories from famous authors such as Edgar Allan Poe, H.P. Lovecraft, Stephen King, and others. You can also find stories from popular sources such as Reddit, Creepypasta, NoSleep, and others. You can also find stories based on true events or real-life experiences. The app has something for everyone, whether you like supernatural, psychological, or realistic horror.
-
Offline access and regular updates
-
Another reason why you should download Страшные истории apk is that you can read or listen to the stories offline, without any internet connection. This is very convenient if you want to enjoy the stories in places where there is no Wi-Fi or mobile data, such as on a plane, in a subway, or in a remote area. The app automatically downloads the stories to your device, so you don't have to worry about missing any updates. Speaking of updates, the app adds new stories every week, so you will always have something new to read or listen to. You can also enable notifications to get alerted when new stories are available.
-
-
Customizable settings and themes
-
A third reason why you should download Страшные истории apk is that you can customize the settings and themes of the app, making it more comfortable and enjoyable for you. You can adjust the font size, color, brightness, and background of the app according to your preferences. You can also choose from different themes such as dark, light, sepia, or night mode. You can also change the language of the app from Russian to English or vice versa. You can also enable or disable sound effects and vibrations for a more immersive experience.
-
How to use Страшные истории apk?
-
Now that you have downloaded and installed Страшные истории apk, you might be wondering how to use it. Don't worry, it's very easy and fun. You just need to follow these simple steps:
-
Choose a category or a story
-
When you open the app, you will see a list of categories on the main screen. You can swipe left or right to browse through different genres of horror stories, such as ghosts, zombies, vampires, werewolves, demons, aliens, killers, mysteries, and more. You can also tap on the magnifying glass icon on the top right corner to search for stories by keywords or authors. When you find a category or a story that interests you, just tap on it to open it.
-
Read or listen to a story
-
Once you open a story, you can choose to read it on your screen or listen to an audio narration. To read the story, just scroll down and enjoy the text. You can also adjust the font size, color, brightness, and background of the app according to your preferences. To listen to the story, just tap on the play button on the bottom of the screen. You can also pause, resume, rewind, or fast-forward the audio as you wish. You can also adjust the volume and speed of the narration according to your preferences.
-
Rate or comment on a story
-
After you finish reading or listening to a story, you can rate or comment on it. To rate the story, just tap on the star icon on the bottom of the screen and choose how many stars you want to give it. To comment on the story, just tap on the speech bubble icon on the bottom of the screen and write your opinion or feedback. You can also read other people's comments and reply to them if you want. You can also report any inappropriate or offensive comments by tapping on the flag icon next to them.
-
Share or bookmark a story
-
If you like a story and want to share it with your friends or save it for later reading, you can do that easily with Страшные истории apk. To share a story, just tap on the share icon on the bottom of the screen and choose how you want to share it. You can share it via email, SMS, WhatsApp, Facebook, Twitter, or any other app you have on your device. To bookmark a story, just tap on the bookmark icon on the bottom of the screen and the story will be added to your favorites list. You can access your favorites list by tapping on the heart icon on the top left corner of the main screen.
-
Conclusion
-
Страшные истории apk is a great app for horror stories fans. It offers you more than 3500 scary stories that will keep you on the edge of your seat. You can read or listen to the stories offline, customize the settings and themes of the app, rate, comment, share, and bookmark the stories, and improve your Russian language skills. If you are looking for a fun and exciting way to spend your time, download Страшные истории apk today and enjoy the best horror stories ever!
-
FAQs
-
Here are some frequently asked questions about Страшные истории apk:
Is Страшные истории apk free? Yes, Страшные истории apk is free to download and use. However, it may contain ads that support the developers.
Is Страшные истории apk safe? Yes, Страшные истории apk is safe to use. It does not contain any viruses or malware that can harm your device. However, you should always download it from trusted sources such as Google Play Store or Aptoide.
Is Страшные истории apk suitable for children? No, Страшные истории apk is not suitable for children. It contains horror stories that may be too scary or disturbing for young audiences. The app is rated 16+ by Google Play Store and 18+ by Aptoide.
Can I request or submit a story to Страшные истории apk? No, Страшные истории apk does not accept requests or submissions from users. The app only features stories that are selected by the developers from various sources.
Can I translate a story to another language with Страшные истории apk? No, Страшные истории apk does not have a translation feature. The app only supports Russian and English languages.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Gangster Hero in Grand Theft Gangster Crime City Mod APK.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Gangster Hero in Grand Theft Gangster Crime City Mod APK.md
deleted file mode 100644
index 20dc478233484c1150416ed4e0f4982be0f1d46d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Become a Gangster Hero in Grand Theft Gangster Crime City Mod APK.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Grand Theft Gangster Crime City Mod APK: A Review
-
If you are a fan of action-packed games that let you live the life of a gangster, you might want to check out Grand Theft Gangster Crime City. This is a game that lets you explore a vast open world, complete various missions, and fight against rival gangs and cops. But what if you want to make the game even more fun and exciting? Well, you can do that by downloading the Grand Theft Gangster Crime City Mod APK. This is a modified version of the game that gives you unlimited money, weapons, and other benefits. In this article, we will review the features, pros, and cons of this mod apk, as well as show you how to download and install it on your device.
-
What is Grand Theft Gangster Crime City?
-
Grand Theft Gangster Crime City is an action-adventure game developed by HappyMod. It is inspired by popular games like Grand Theft Auto and Mafia. The game is set in a fictional American city where you play as a gangster who wants to rise to the top of the underworld. You can choose from different characters, each with their own backstory and skills. You can also customize your appearance, clothes, and vehicles.
-
What is a mod apk?
-
A mod apk is a modified version of an original application file (apk) that has been altered by third-party developers to add or remove certain features. For example, a mod apk can give you unlimited money, unlock all levels, remove ads, or change the graphics of a game. A mod apk can also bypass some restrictions or requirements that the original app may have, such as root access, internet connection, or device compatibility.
-
Features of Grand Theft Gangster Crime City Mod APK
-
Unlimited money and weapons
-
One of the main features of the Grand Theft Gangster Crime City Mod APK is that it gives you unlimited money and weapons. This means that you can buy anything you want in the game, such as cars, guns, clothes, houses, and more. You can also upgrade your weapons and vehicles to make them more powerful and durable. You don't have to worry about running out of money or ammo in the game.
-
Realistic graphics and sound effects
-
The Grand Theft Gangster Crime City Mod APK also boasts realistic graphics and sound effects that make the game more immersive and enjoyable. The game has high-quality 3D graphics that show the details of the city, the characters, and the vehicles. The game also has realistic sound effects that match the actions and events in the game, such as gunshots, car crashes, explosions, sirens, and more.
-
Various missions and challenges
-
The Grand Theft Gangster Crime City Mod APK also offers various missions and challenges that keep you entertained and engaged. The game has different types of missions that you can complete to earn money, reputation, and rewards. Some of these missions include robbing banks, stealing cars, assassinating targets, escaping from the police, fighting against rival gangs, and more. The game also has challenges that test your skills and abilities, such as driving, shooting, racing, and more. You can also create your own missions and challenges using the game's editor mode.
-
Free roaming and exploration
-
The Grand Theft Gangster Crime City Mod APK also allows you to free roam and explore the city at your own pace. You can go anywhere you want in the game, such as downtown, suburbs, countryside, airport, harbor, and more. You can also interact with various objects and people in the game, such as shops, bars, clubs, casinos, pedestrians, and more. You can also cause chaos and mayhem in the city by destroying things, stealing vehicles, attacking people, and more.
-
How to download and install Grand Theft Gangster Crime City Mod APK
-
Step 1: Enable unknown sources on your device
-
Before you can download and install the Grand Theft Gangster Crime City Mod APK, you need to enable unknown sources on your device. This is because the mod apk is not available on the official app store and you need to download it from a third-party source. To enable unknown sources, go to your device's settings, then security, then toggle on the option that says "allow installation of apps from unknown sources". This will allow you to install apps that are not from the app store.
-
Step 2: Download the mod apk file from a trusted source
-
Next, you need to download the mod apk file from a trusted source. There are many websites that offer mod apk files for various games and apps, but not all of them are safe and reliable. Some of them may contain malware or viruses that can harm your device or steal your data. Therefore, you need to be careful and choose a reputable source that has positive reviews and ratings from other users. One of the sources that we recommend is HappyMod, which is a platform that provides mod apk files for thousands of games and apps. You can download the Grand Theft Gangster Crime City Mod APK from this website by clicking on the download button and following the instructions.
-
Step 3: Locate and install the mod apk file
-
After you have downloaded the mod apk file, you need to locate and install it on your device. To do this, go to your device's file manager and find the folder where you saved the mod apk file. Then, tap on the file and select "install". The installation process may take a few seconds or minutes depending on your device's speed and memory. Once the installation is complete, you will see a confirmation message on your screen.
-
-
Step 4: Launch the game and enjoy
-
Finally, you can launch the game and enjoy the features of the Grand Theft Gangster Crime City Mod APK. To do this, go to your device's app drawer and find the game icon. Then, tap on it and wait for the game to load. You will see a welcome screen that shows you some information about the game and the mod apk. You can skip this screen by tapping on "continue". Then, you can choose your character, customize your settings, and start playing the game.
-
Pros and cons of Grand Theft Gangster Crime City Mod APK
-
Pros
-
More fun and excitement
-
One of the pros of the Grand Theft Gangster Crime City Mod APK is that it makes the game more fun and exciting by giving you unlimited money, weapons, and other benefits. You can enjoy the game without any limitations or restrictions. You can buy anything you want in the game, upgrade your weapons and vehicles, complete missions and challenges easily, and cause havoc in the city.
-
No ads or in-app purchases
-
Another pro of the Grand Theft Gangster Crime City Mod APK is that it removes ads or in-app purchases from the game. This means that you don't have to watch annoying ads or spend real money to buy items or unlock features in the game. You can play the game smoothly and comfortably without any interruptions or distractions.
-
Compatible with most devices
-
A third pro of the Grand Theft Gangster Crime City Mod APK is that it is compatible with most devices that run on Android 4.1 or higher. This means that you don't have to worry about whether your device can support or run the game or not. You can play the game on any device that meets the minimum requirements.
-
Cons
-
Risk of malware or viruses
-
One of the cons of the Grand Theft Gangster Crime City Mod APK is that it may contain malware or viruses. This is because the mod apk is not from the official app store and you need to download it from a third-party source. Some of these sources may not be safe and reliable and may inject malicious code or software into the mod apk file. Therefore, you need to be careful and choose a reputable source that has positive reviews and ratings from other users. You also need to scan the mod apk file with an antivirus or anti-malware program before installing it on your device.
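The advice above about checking a download before installing it can be made concrete with a basic integrity check. The sketch below is purely illustrative and not part of the app or this article; the file name is a placeholder, and the check only helps if the download site actually publishes a SHA-256 checksum for the file.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in 8 KiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: "crime-city-mod.apk" and published_hash are
# placeholders, not real values from any download site.
# if sha256_of_file("crime-city-mod.apk") != published_hash:
#     print("Hash mismatch: do not install this file")
```

If the computed hash does not match the one published by the source, the file was corrupted or tampered with in transit and should not be installed.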
-
Possible legal issues or bans
-
Another con of the Grand Theft Gangster Crime City Mod APK is that it may cause legal issues or bans for you. This is because the mod apk violates the terms and conditions of the original game and the app store. By using the mod apk, you are breaking the rules and regulations of the game and the app store, which may result in legal actions or penalties from the developers or the authorities. You may also get banned from playing the game online or accessing its features and services.
-
May affect game performance or stability
-
A third con of the Grand Theft Gangster Crime City Mod APK is that it may affect the game performance or stability on your device. This is because the mod apk may not be compatible with the latest version of the game or your device's software. The mod apk may also have bugs or errors that can cause crashes, freezes, lags, or glitches in the game. You may also experience problems with saving, loading, or syncing your game data.
-
Conclusion
-
Grand Theft Gangster Crime City Mod APK is a modified version of the original game that gives you unlimited money, weapons, and other benefits. It also has realistic graphics and sound effects, various missions and challenges, and free roaming and exploration. However, it also has some cons, such as risk of malware or viruses, possible legal issues or bans, and may affect game performance or stability. Therefore, you need to weigh the pros and cons before downloading and installing this mod apk on your device. You also need to follow the steps and precautions that we have provided in this article to ensure a safe and smooth installation process.
-
FAQs
-
Here are some frequently asked questions about Grand Theft Gangster Crime City Mod APK:
-
-
Q: Is Grand Theft Gangster Crime City Mod APK free?
-
A: Yes, Grand Theft Gangster Crime City Mod APK is free to download and install on your device. You don't have to pay anything to use this mod apk.
-
Q: Is Grand Theft Gangster Crime City Mod APK safe?
-
A: Grand Theft Gangster Crime City Mod APK is not 100% safe to use on your device. It may contain malware or viruses that can harm your device or steal your data. It may also cause legal issues or bans for you. Therefore, you need to be careful and choose a trusted source to download this mod apk. You also need to scan it with an antivirus or anti-malware program before installing it on your device.
-
Q: How do I update Grand Theft Gangster Crime City Mod APK?
-
A: To update Grand Theft Gangster Crime City Mod APK, you need to download and install the latest version of this mod apk from a trusted source. You also need to uninstall the previous version of this mod apk from your device before installing the new one.
-
Q: Can I play Grand Theft Gangster Crime City online with this mod apk?
-
A: No, you cannot play Grand Theft Gangster Crime City online with this mod apk. This mod apk is only for offline mode. If you try to play online with this mod apk, you may get banned from the game server or face legal actions from the developers.
-
Q: Can I use Grand Theft Gangster Crime City Mod APK on iOS devices?
-
A: No, you cannot use Grand Theft Gangster Crime City Mod APK on iOS devices. This mod apk is only for Android devices that run on Android 4.1 or higher.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Chess Lv.100 APK The Best Chess App for Microsoft Store Users.md b/spaces/1phancelerku/anime-remove-background/Chess Lv.100 APK The Best Chess App for Microsoft Store Users.md
deleted file mode 100644
index 2f7fbd4c48defbc0666bdd8b5d6e7021740316ff..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Chess Lv.100 APK The Best Chess App for Microsoft Store Users.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Chess Lv.100 (plus Online) APK: A Review
-
If you are looking for a classic chess game with high quality graphics and all the features to enjoy and improve your chess game, you should check out Chess Lv.100 (plus Online) APK. This is an Android game that lets you play online chess games against players all over the world, as well as offline chess games with adjustable playing strength from 100 levels. You can also challenge yourself to win medals by defeating the computer and unlock new chess boards and pieces design. In this article, we will review the features, benefits, and FAQs of Chess Lv.100 (plus Online) APK.
-
Introduction
-
Chess Lv.100 (plus Online) APK is a chess game developed by UNBALANCE Corporation, a Japanese company that specializes in creating board games and puzzles for mobile devices. The game has been downloaded over 1 million times on Google Play Store and has received positive reviews from users.
Chess Lv.100 (plus Online) APK is based on the chess AI "Crazy Bishop", which has an ELO rating of 2300. You can choose the strength of the computer from 258 to 2300 in ELO rating, or from level 1 to level 100 in difficulty. Level 1 is extremely weak, and level 100 is extremely difficult to beat.
-
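The relationship between the app's 100 difficulty levels and its ELO range is only documented at the two endpoints (level 1 at 258 ELO, level 100 at 2300 ELO). Purely as an illustration, assuming a linear mapping between the two scales (the app does not document its actual curve), a conversion might look like this:

```python
def level_to_elo(level: int) -> int:
    """Approximate ELO for a difficulty level, ASSUMING a linear mapping
    between level 1 (258 ELO) and level 100 (2300 ELO). The app does not
    document its actual level-to-ELO curve; this is illustrative only."""
    if not 1 <= level <= 100:
        raise ValueError("level must be between 1 and 100")
    return round(258 + (level - 1) * (2300 - 258) / 99)
```

Under this (assumed) linear mapping, level 50 would sit near 1270 ELO, roughly a strong club-player level.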
You should download Chess Lv.100 (plus Online) APK if you want to:
-
-
Play online chess games whenever you want
-
Improve your chess skills with different levels of play
-
Review your moves and analyze any position
-
Win medals and unlock new chess sets
-
Have fun with a classic chess game
-
-
Features of Chess Lv.100 (plus Online) APK
-
Online chess games
-
One of the main features of Chess Lv.100 (plus Online) APK is that you can play online chess games against players all over the world. You can choose to play a quick match or a rated match. A quick match is a casual game that does not affect your rating or ranking. A rated match is a competitive game that affects your rating and ranking.
-
Your rating is a number that represents your skill level in online chess games. It goes up when you win and down when you lose. Your ranking is your position among other players based on your rating. You can check your rating and ranking on the online chess page of the game.
-
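The game does not document which rating formula it uses, but "goes up when you win and down when you lose" is how the standard Elo update behaves. A textbook sketch is below; the K-factor of 32 is a common default, not something the game specifies:

```python
def elo_update(rating: float, opponent: float, score: float, k: float = 32.0) -> float:
    """One standard Elo rating update.

    score is 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    This is the textbook formula; the game's actual system may differ.
    """
    # Expected score against this opponent (logistic curve, 400-point scale)
    expected = 1.0 / (1.0 + 10.0 ** ((opponent - rating) / 400.0))
    return rating + k * (score - expected)
```

Beating an equally rated opponent gains K/2 = 16 points, while beating a much stronger opponent gains close to the full K.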
To play online chess games, you need to have an internet connection and a Google account. You can sign in with your Google account on the game settings page. Once you sign in, you can access the online chess page from the main menu. There, you can choose to play a quick match or a rated match, or view your rating and ranking.
-
Offline chess games
-
If you prefer to play offline chess games, Chess Lv.100 (plus Online) APK has you covered. You can play offline chess games against the computer with adjustable playing strength from 100 levels. You can also use the hint facility and review mode to improve your chess game.
-
To adjust the playing strength of the computer, you can use the slider on the game settings page. You can choose the strength from 258 to 2300 in ELO rating, or from level 1 to level 100 in difficulty. The higher the level, the stronger the computer. You can also choose the time limit for each move, from 1 second to 10 minutes.
-
To use the hint facility, you can tap on the light bulb icon on the game screen. The hint facility will show you the best move for the current position according to the computer. You can use the hint facility up to 5 times per game.
-
To use the review mode, you can tap on the book icon on the game screen. The review mode will let you go back and forth through your moves and analyze any position. You can also save and load your game records in the review mode.
-
Medals and chess sets
-
Another feature of Chess Lv.100 (plus Online) APK is that you can win medals by defeating the computer and unlock new chess boards and pieces design. There are 10 medals to collect, from bronze to platinum. You can win a medal by beating the computer at a certain level or higher.
-
For example, to win the bronze medal, you need to beat the computer at level 20 or higher. To win the platinum medal, you need to beat the computer at level 100. You can check your medals on the medal page of the game.
-
You can also unlock new chess boards and pieces design by winning medals. There are 7 chess sets to unlock, from classic to modern. You can change your chess set on the game settings page. You can also preview your chess set on the chess set page of the game.
-
Benefits of Premium Membership
-
What is Premium Membership?
-
Premium Membership is a subscription service that gives you access to more features and benefits in Chess Lv.100 (plus Online) APK. You can subscribe to Premium Membership for $2.99 per month or $29.99 per year.
-
You can subscribe and cancel Premium Membership on the premium page of the game. You need to have a Google account and a valid payment method to subscribe. Your subscription will be automatically renewed unless you cancel it at least 24 hours before the end of the current period.
-
What are the benefits of Premium Membership?
-
As a Premium Member, you will enjoy these benefits:
-
-
Unlimited online chess games: You can play as many online chess games as you want without any restrictions.
-
Access to all chess sets: You can use any of the 7 chess sets without having to unlock them by winning medals.
-
Ad-free experience: You will not see any ads in the game.
-
-
Conclusion
-
In conclusion, Chess Lv.100 (plus Online) APK is a great chess game for Android that offers online and offline play with adjustable strength across 100 levels, medals and chess sets to collect and unlock, and a premium membership option for extra features and benefits. Whether you are a chess fan or just want to learn and improve, download Chess Lv.100 (plus Online) APK today and enjoy a classic chess game with high-quality graphics.
-
FAQs
-
What is the best level to play against the computer?
-
The best level to play against the computer depends on your skill level and your goal. For a fair and challenging game, choose a level that matches your rating or is slightly higher. To practice and learn from your mistakes, choose a level somewhat below your rating. And if you just want to relax and have fun, pick any level you like.
-
How can I save and load my chess game records?
-
You can save and load your chess game records in the review mode. To save your game record, you need to tap on the save icon on the review mode screen. You can name your game record and choose a folder to save it. To load your game record, you need to tap on the load icon on the review mode screen. You can browse your folders and select your game record to load it.
-
How can I enter and analyze any position I like?
-
You can enter and analyze any position you like in the edit mode. To enter the edit mode, you need to tap on the edit icon on the main menu. You can move the pieces on the board as you wish, or use the buttons to clear, flip, or reset the board. You can also use the hint facility and review mode in the edit mode. To exit the edit mode, you need to tap on the back icon on the main menu.
-
What is the free trial period for Premium Membership?
-
The free trial period for Premium Membership is 7 days. You can enjoy all the benefits of Premium Membership for free for 7 days after you subscribe. You will not be charged until the end of the trial period. You can cancel your subscription at any time during the trial period without any charge.
-
Where can I download Chess Lv.100 (plus Online) APK?
-
You can download Chess Lv.100 (plus Online) APK from Google Play Store or from other trusted sources. However, you should be careful when downloading APK files from unknown sources, as they may contain viruses or malware that can harm your device. You should always scan the APK file with an antivirus software before installing it.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Cmo instalar WhatsApp Business en tu Android con el archivo APK.md b/spaces/1phancelerku/anime-remove-background/Cmo instalar WhatsApp Business en tu Android con el archivo APK.md
deleted file mode 100644
index 3ebc8320ebb3abfe98d67db3f1e1ce15b2c2d212..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Cmo instalar WhatsApp Business en tu Android con el archivo APK.md
+++ /dev/null
@@ -1,170 +0,0 @@
-
-
WhatsApp Business: What Is It and How to Download It
-
If you are a business owner who wants to communicate with your customers in a fast, convenient, and secure way, you might want to consider using WhatsApp Business. WhatsApp Business is a tool for companies to engage with customers over the platform. It is built on top of WhatsApp Messenger and includes all the features that you rely on, such as multimedia, free calls, and group chat. There are two ways to use WhatsApp Business: WhatsApp Business App and WhatsApp Business Platform. The app is for small businesses who personally manage conversations with customers. The platform is for medium to large businesses who communicate with customers at scale through programmatic access.
-
In this article, we will explain what WhatsApp Business is, how it differs from WhatsApp, and how you can download it for your Android device. We will also give you some tips on how to use WhatsApp Business effectively to improve your customer experience and grow your business.
What is WhatsApp Business and why is it useful for businesses
-
WhatsApp Business comes in two forms. The WhatsApp Business App is for small businesses that personally manage conversations with customers; to get started, download the app and create a profile for your business. The WhatsApp Business Platform is for medium to large businesses that communicate with customers at scale through programmatic access. Either way, WhatsApp Business can help you improve visibility, automate communication, and keep your workflow organized.
-
Some of the benefits of using WhatsApp Business are:
-
-
You can meet your customers where they already are. WhatsApp has more than 2 billion users around the world, making it one of the most popular messaging apps. By using WhatsApp Business, you can reach your customers on their preferred channel and provide them with a seamless experience.
-
You can drive business outcomes. Whether you want to increase sales, generate leads, or provide support, WhatsApp Business can help you achieve your goals. You can send feature-rich communications, such as files, images, videos, documents, and interactive buttons. You can also use message templates to send notifications, reminders, confirmations, and updates.
-
You can build long-lasting customer relationships. With WhatsApp Business, you can create a personalized and trustworthy connection with your customers. You can use your business name and logo as your profile picture, add information about your products and services in your catalog, and verify your account with a green badge. You can also use end-to-end encryption to ensure that your conversations are secure and private.
-
-
How to download WhatsApp Business APK for Android
-
If you want to use WhatsApp Business on your Android device, you have two options:
-
-
You can download it from the Google Play Store by searching for "WhatsApp Business" or clicking here. You will need an Android device running Android 5.1 or higher.
-
You can download it from a third-party website by searching for "WhatsApp Business APK" or clicking here. You will need to enable unknown sources in your device settings before installing the APK file.
-
-
Once you have downloaded the app, you can follow these steps to set up your account:
-
-
Open the app and agree to the terms of service.
-
Enter your phone number and verify it with a code sent via SMS.
-
Create your business profile and choose a category that best describes your business.
-
Add a profile photo, a business name, and a short description of your business.
-
Optionally, you can add more details, such as your location, hours, website, and email.
-
Start chatting with your customers by tapping on the chat icon at the bottom right corner.
-
-
WhatsApp Business vs WhatsApp: What Are the Differences?
-
The main features and benefits of WhatsApp Business
-
WhatsApp Business is designed to help businesses communicate with their customers in a professional and efficient way. It has some features that are not available on WhatsApp, such as:
-
-
Business profile: You can create a profile for your business that includes information such as your address, website, email, and catalog. This helps you showcase your products and services and provide useful information to your customers.
-
Catalog: You can create a catalog of your products and services and share it with your customers. This helps you display your offerings and make it easy for your customers to browse and order.
-
Labels: You can use labels to organize your chats and contacts. You can assign different colors and names to your labels, such as new customer, pending payment, order complete, etc. This helps you keep track of your conversations and manage your workflow.
-
Quick replies: You can use quick replies to save and reuse messages that you frequently send. You can create shortcuts for your quick replies, such as /thankyou, /welcome, /delivery, etc. This helps you save time and respond faster to common questions and requests.
-
Greeting message: You can use a greeting message to introduce your business and welcome new customers. You can set up a greeting message that will be sent automatically when a customer contacts you for the first time or after 14 days of inactivity. This helps you create a good first impression and start the conversation on a positive note.
-
Away message: You can use an away message to let your customers know that you are not available at the moment. You can set up an away message that will be sent automatically when you are offline or outside of your business hours. This helps you manage expectations and inform your customers when they can expect a reply from you.
-
-
The main differences between WhatsApp and WhatsApp Business
-
WhatsApp Business is a separate app from WhatsApp Messenger. You can use both apps on the same device, but you will need different phone numbers for each app. You can also link your WhatsApp Business account to your Facebook Page to sync your information and reach more customers. Here are some of the main differences between WhatsApp and WhatsApp Business:
-
-
WhatsApp
WhatsApp Business
-
Personal use
Business use
-
No business profile
Business profile with details and catalog
-
No labels
Labels to organize chats and contacts
-
No quick replies
Quick replies to save and reuse messages
-
No greeting message
Greeting message to welcome new customers
-
No away message
Away message to inform customers when you are not available
-
No analytics
Analytics to measure performance and customer satisfaction
-
No API access
API access for programmatic communication
-
No Facebook Page integration
Facebook Page integration to sync information and reach more customers
-
-
How to Use WhatsApp Business Effectively
-
How to create a business profile and a catalog
-
A business profile is a way to showcase your business and provide useful information to your customers. A catalog is a way to display your products and services and make it easy for your customers to browse and order. To create a business profile and a catalog, follow these steps:
-
-
Open WhatsApp Business and tap on the menu icon at the top right corner.
-
Tap on Settings and then on Business Settings.
-
Tap on Profile and fill in the details that you want to share with your customers, such as your address, website, email, etc.
-
Tap on Catalog and then on Add Product or Service.
-
Add an image, a name, a price, a description, and a link for each product or service that you want to include in your catalog.
-
Tap on Save when you are done.
-
You can now share your catalog with your customers by tapping on the attachment icon and then on Catalog when you are in a chat.
-
-
How to use labels, quick replies, greeting messages, and away messages
-
Labels, quick replies, greeting messages, and away messages are some of the features that can help you manage your communication and workflow more efficiently. Here is how you can use them:
-
-
Labels: You can use labels to organize your chats and contacts into different categories, such as new customer, pending payment, order complete, etc. To create and assign labels, follow these steps:
-
Open WhatsApp Business and tap on the menu icon at the top right corner.
-
Tap on Settings and then on Business Settings.
-
Tap on Labels and then on the plus icon at the bottom right corner.
-
Enter a name and choose a color for your label and tap on Save.
-
To assign a label to a chat or a contact, tap and hold on the chat or contact and then tap on the label icon at the top.
-
Select the label that you want to assign and tap on Done.
-
-
-
Quick replies: You can use quick replies to save and reuse messages that you frequently send. To create and use quick replies, follow these steps:
-
Open WhatsApp Business and tap on the menu icon at the top right corner.
-
Tap on Settings and then on Business Settings.
-
Tap on Quick Replies and then on the plus icon at the bottom right corner.
-
Enter the message that you want to save as a quick reply and add a shortcut for it. For example, /thankyou for "Thank you for choosing us".
-
Optionally, you can add an emoji or an image to your quick reply.
-
Tap on Save when you are done.
-
To use a quick reply in a chat, type the shortcut and then tap on the send icon. The message will be automatically replaced by the quick reply.
-
-
-
Greeting message: You can use a greeting message to introduce your business and welcome new customers. To set up a greeting message, follow these steps:
-
Open WhatsApp Business and tap on the menu icon at the top right corner.
-
Tap on Settings and then on Business Settings.
-
Tap on Greeting Message and toggle it on.
-
Edit the message that you want to send as a greeting message. You can use placeholders such as customer_name or business_name to personalize your message.
-
Optionally, you can choose to send the greeting message to everyone who messages you, or only to people who message you for the first time or after 14 days of inactivity.
-
Tap on Save when you are done.
-
-
-
Away message: You can use an away message to let your customers know that you are not available at the moment. To set up an away message, follow these steps:
-
Open WhatsApp Business and tap on the menu icon at the top right corner.
-
Tap on Settings and then on Business Settings.
-
Tap on Away Message and toggle it on.
-
Edit the message that you want to send as an away message. You can use placeholders such as customer_name or business_name to personalize your message.
-
Optionally, you can choose to schedule your away message to be sent only outside of your business hours or always.
-
You can also choose to send your away message to everyone who messages you, or only to people who are not in your contacts or who have not messaged you before.
-
Tap on Save when you are done.
-
-
Conclusion
-
In conclusion, WhatsApp Business is a tool for companies to engage with customers over the platform. It adds features that WhatsApp Messenger lacks, such as a business profile, catalog, labels, quick replies, greeting messages, and away messages, and it lets you link your account to your Facebook Page and access analytics and the API. To download WhatsApp Business APK for Android, get it from the Google Play Store or from a trusted third-party website. To use it effectively, create a business profile and a catalog; use labels, quick replies, greeting messages, and away messages; and integrate WhatsApp Business with your other tools and platforms. We hope this article has helped you understand what WhatsApp Business is and how to download it. If you have any questions or feedback, please feel free to contact us.
-
FAQs
-
What is the difference between WhatsApp Business App and WhatsApp Business Platform?
-
WhatsApp Business App is for small businesses who personally manage conversations with customers. WhatsApp Business Platform is for medium to large businesses who communicate with customers at scale through programmatic access.
-
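To give a flavor of what "programmatic access" means on the Platform side: messages are sent by POSTing JSON to the WhatsApp Cloud API. The sketch below only builds the request body for a pre-approved message template; the endpoint URL, API version, phone-number ID, and access token are all deployment-specific values you would supply yourself, and the template name used here is a hypothetical example.

```python
import json

def build_template_message(to: str, template_name: str, language_code: str = "en_US") -> str:
    """Build the JSON body for sending a pre-approved template message
    via the WhatsApp Cloud API. Actually sending it requires an HTTP POST
    to https://graph.facebook.com/<version>/<phone-number-id>/messages
    with a Bearer access token -- all deployment-specific."""
    payload = {
        "messaging_product": "whatsapp",
        "to": to,  # recipient phone number in international format
        "type": "template",
        "template": {"name": template_name, "language": {"code": language_code}},
    }
    return json.dumps(payload)
```

The App, by contrast, needs no code at all: you manage every conversation by hand from your phone.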
Can I use WhatsApp Business on my computer?
-
Yes, you can use WhatsApp Business on your computer by using WhatsApp Web or WhatsApp Desktop. You will need to scan a QR code with your phone to link your account.
-
Can I use the same phone number for WhatsApp and WhatsApp Business?
-
No, you will need different phone numbers for each app. You can use one phone number for your personal account and another for your business account, or you can use a landline number for your business account.
-
How can I verify my WhatsApp Business account?
-
To verify your WhatsApp Business account, you will need to apply for a green badge that confirms that your phone number belongs to your business. You can apply for verification through the Facebook Business Manager.
-
How much does WhatsApp Business cost?
-
WhatsApp Business App is free to download and use. WhatsApp Business Platform charges a fee for sending message templates and for receiving messages from customers after 24 hours of the last message sent by the business.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py
deleted file mode 100644
index 078a6266e00c2525125630e193eb97cbfe0244c0..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py
+++ /dev/null
@@ -1,299 +0,0 @@
-# Copyright 2022 Katherine Crowson, The HuggingFace Team and hlky. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS
-from .scheduling_utils import SchedulerMixin, SchedulerOutput
-
-
-class KDPM2AncestralDiscreteScheduler(SchedulerMixin, ConfigMixin):
- """
- Scheduler created by @crowsonkb in [k_diffusion](https://github.com/crowsonkb/k-diffusion), see:
- https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188
-
- Scheduler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022).
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear` or `scaled_linear`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
- process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- """
-
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- order = 2
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.00085, # sensible defaults
- beta_end: float = 0.012,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- prediction_type: str = "epsilon",
- ):
- if trained_betas is not None:
- self.betas = paddle.to_tensor(trained_betas, dtype="float32")
- elif beta_schedule == "linear":
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
- else:
- raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
-
- # set all values
- self.set_timesteps(num_train_timesteps, num_train_timesteps)
-
- def index_for_timestep(self, timestep):
- indices = (self.timesteps == timestep).nonzero()
- if self.state_in_first_order:
- pos = -1
- else:
- pos = 0
- return indices[pos].item()
-
- def scale_model_input(
- self,
- sample: paddle.Tensor,
- timestep: Union[float, paddle.Tensor],
- ) -> paddle.Tensor:
- """
- Args:
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
- sample (`paddle.Tensor`): input sample timestep (`int`, optional): current timestep
- Returns:
- `paddle.Tensor`: scaled input sample
- """
- step_index = self.index_for_timestep(timestep)
-
- if self.state_in_first_order:
- sigma = self.sigmas[step_index]
- else:
- sigma = self.sigmas_interpol[step_index - 1]
-
- sample = sample / ((sigma**2 + 1) ** 0.5)
- return sample
-
- def set_timesteps(
- self,
- num_inference_steps: int,
- num_train_timesteps: Optional[int] = None,
- ):
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- num_train_timesteps (`int`, optional):
- the number of diffusion steps used to train the model; defaults to `self.config.num_train_timesteps`.
- """
- self.num_inference_steps = num_inference_steps
-
- num_train_timesteps = num_train_timesteps or self.config.num_train_timesteps
-
- timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps, dtype=np.float32)[::-1].copy()
-
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- self.log_sigmas = paddle.to_tensor(np.log(sigmas), dtype="float32")
-
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
- sigmas = paddle.to_tensor(sigmas)
-
- # compute up and down sigmas
- sigmas_next = sigmas.roll(-1)
- sigmas_next[-1] = 0.0
- sigmas_up = (sigmas_next**2 * (sigmas**2 - sigmas_next**2) / sigmas**2) ** 0.5
- sigmas_down = (sigmas_next**2 - sigmas_up**2) ** 0.5
- sigmas_down[-1] = 0.0
-
- # compute interpolated sigmas
- sigmas_interpol = sigmas.log().lerp(sigmas_down.log(), 0.5).exp()
- sigmas_interpol[-2:] = 0.0
-
- # set sigmas
- self.sigmas = paddle.concat([sigmas[:1], sigmas[1:].repeat_interleave(2), sigmas[-1:]])
- self.sigmas_interpol = paddle.concat(
- [sigmas_interpol[:1], sigmas_interpol[1:].repeat_interleave(2), sigmas_interpol[-1:]]
- )
- self.sigmas_up = paddle.concat([sigmas_up[:1], sigmas_up[1:].repeat_interleave(2), sigmas_up[-1:]])
- self.sigmas_down = paddle.concat([sigmas_down[:1], sigmas_down[1:].repeat_interleave(2), sigmas_down[-1:]])
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = self.sigmas.max()
-
- timesteps = paddle.to_tensor(timesteps)
- timesteps_interpol = self.sigma_to_t(sigmas_interpol)
- interleaved_timesteps = paddle.stack((timesteps_interpol[:-2, None], timesteps[1:, None]), axis=-1).flatten()
- timesteps = paddle.concat([timesteps[:1], interleaved_timesteps])
-
- self.timesteps = timesteps
-
- self.sample = None
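The up/down split computed above is the ancestral-sampling decomposition: by construction `sigma_down**2 + sigma_up**2 == sigma_next**2`, so the deterministic step to `sigma_down` plus injected noise of scale `sigma_up` lands exactly at the next noise level. A NumPy check (the toy sigma values are illustrative):

```python
import numpy as np

sigmas = np.array([10.0, 5.0, 2.0, 1.0, 0.0])  # toy descending noise levels
s, s_next = sigmas[:-1], sigmas[1:]
# same formulas as in set_timesteps above
sigma_up = (s_next**2 * (s**2 - s_next**2) / s**2) ** 0.5
sigma_down = (s_next**2 - sigma_up**2) ** 0.5
```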
-
- def sigma_to_t(self, sigma):
- # get log sigma
- log_sigma = sigma.log()
-
- # get distribution
- dists = log_sigma - self.log_sigmas[:, None]
-
- # get sigmas range
- low_idx = (dists >= 0).cast("int64").cumsum(axis=0).argmax(axis=0).clip(max=self.log_sigmas.shape[0] - 2)
- high_idx = low_idx + 1
-
- low = self.log_sigmas[low_idx]
- high = self.log_sigmas[high_idx]
-
- # interpolate sigmas
- w = (low - log_sigma) / (low - high)
- w = w.clip(0, 1)
-
- # transform interpolation to time range
- t = (1 - w) * low_idx + w * high_idx
- t = t.reshape(sigma.shape)
- return t
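`sigma_to_t` inverts the sigma table by linear interpolation in log space: find the last table entry at or below the query sigma, then blend the two neighboring indices into a fractional timestep. A NumPy rendition of the same logic (the `_np` name is ours) makes it easy to verify:

```python
import numpy as np

def sigma_to_t_np(sigma, log_sigmas):
    # log-linear interpolation into the training-sigma table, as in sigma_to_t
    log_sigma = np.log(sigma)
    dists = log_sigma - log_sigmas[:, None]                # (n_train, n_query)
    low_idx = np.clip((dists >= 0).cumsum(axis=0).argmax(axis=0),
                      0, len(log_sigmas) - 2)              # last entry at or below sigma
    high_idx = low_idx + 1
    low, high = log_sigmas[low_idx], log_sigmas[high_idx]
    w = np.clip((low - log_sigma) / (low - high), 0, 1)    # interpolation weight
    return (1 - w) * low_idx + w * high_idx                # fractional timestep

log_sigmas = np.log(np.array([0.1, 0.5, 1.0, 2.0, 5.0]))  # toy increasing table
t = sigma_to_t_np(np.array([0.5, 2.0]), log_sigmas)       # exact table hits
```

A sigma that sits exactly on a table entry maps to that entry's integer index; a sigma halfway between two entries in log space maps to the midpoint of their indices.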
-
- @property
- def state_in_first_order(self):
- return self.sample is None
-
- def step(
- self,
- model_output: Union[paddle.Tensor, np.ndarray],
- timestep: Union[float, paddle.Tensor],
- sample: Union[paddle.Tensor, np.ndarray],
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
- return_dict: bool = True,
- ) -> Union[SchedulerOutput, Tuple]:
- """
- Args:
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
- model_output (`paddle.Tensor` or `np.ndarray`): direct output from learned diffusion model. timestep
- (`int`): current discrete timestep in the diffusion chain. sample (`paddle.Tensor` or `np.ndarray`):
- current instance of sample being created by diffusion process.
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
- Returns:
- [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.SchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
- """
- step_index = self.index_for_timestep(timestep)
-
- if self.state_in_first_order:
- sigma = self.sigmas[step_index]
- sigma_interpol = self.sigmas_interpol[step_index]
- sigma_up = self.sigmas_up[step_index]
- sigma_down = self.sigmas_down[step_index - 1]
- else:
- # 2nd order / KDPM2's method
- sigma = self.sigmas[step_index - 1]
- sigma_interpol = self.sigmas_interpol[step_index - 1]
- sigma_up = self.sigmas_up[step_index - 1]
- sigma_down = self.sigmas_down[step_index - 1]
-
- # currently only gamma=0 is supported. This usually works best anyways.
- # We can support gamma in the future but then need to scale the timestep before
- # passing it to the model which requires a change in API
- gamma = 0
- sigma_hat = sigma * (gamma + 1) # Note: sigma_hat == sigma for now
-
- noise = paddle.randn(model_output.shape, dtype=model_output.dtype, generator=generator)
-
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
- if self.config.prediction_type == "epsilon":
- sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol
- pred_original_sample = sample - sigma_input * model_output
- elif self.config.prediction_type == "v_prediction":
- sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol
- pred_original_sample = model_output * (-sigma_input / (sigma_input**2 + 1) ** 0.5) + (
- sample / (sigma_input**2 + 1)
- )
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
- )
-
- if self.state_in_first_order:
- # 2. Convert to an ODE derivative for 1st order
- derivative = (sample - pred_original_sample) / sigma_hat
- # 3. delta timestep
- dt = sigma_interpol - sigma_hat
-
- # store for 2nd order step
- self.sample = sample
- self.dt = dt
- prev_sample = sample + derivative * dt
- else:
- # DPM-Solver-2
- # 2. Convert to an ODE derivative for 2nd order
- derivative = (sample - pred_original_sample) / sigma_interpol
- # 3. delta timestep
- dt = sigma_down - sigma_hat
-
- sample = self.sample
- self.sample = None
-
- prev_sample = sample + derivative * dt
- prev_sample = prev_sample + noise * sigma_up
-
- if not return_dict:
- return (prev_sample,)
-
- return SchedulerOutput(prev_sample=prev_sample)
-
- def add_noise(
- self,
- original_samples: paddle.Tensor,
- noise: paddle.Tensor,
- timesteps: paddle.Tensor,
- ) -> paddle.Tensor:
- # Make sure sigmas and timesteps have the same dtype as original_samples
- self.sigmas = self.sigmas.cast(original_samples.dtype)
-
- step_indices = [self.index_for_timestep(t) for t in timesteps]
-
- sigma = self.sigmas[step_indices].flatten()
- while len(sigma.shape) < len(original_samples.shape):
- sigma = sigma.unsqueeze(-1)
-
- noisy_samples = original_samples + noise * sigma
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
diff --git a/spaces/2ndelement/voicevox/speaker_info/b1a81618-b27b-40d2-b0ea-27a9ad408c4b/policy.md b/spaces/2ndelement/voicevox/speaker_info/b1a81618-b27b-40d2-b0ea-27a9ad408c4b/policy.md
deleted file mode 100644
index 68114802c449a6799db4cf7aae3cecbb71db0e70..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/speaker_info/b1a81618-b27b-40d2-b0ea-27a9ad408c4b/policy.md
+++ /dev/null
@@ -1,3 +0,0 @@
-dummy4 policy
-
-https://voicevox.hiroshiba.jp/
diff --git a/spaces/44brabal/valentinafeve-yolos-fashionpedia/README.md b/spaces/44brabal/valentinafeve-yolos-fashionpedia/README.md
deleted file mode 100644
index 575416758cb6933f1dfa715602af0a19b21ec7c9..0000000000000000000000000000000000000000
--- a/spaces/44brabal/valentinafeve-yolos-fashionpedia/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Valentinafeve Yolos Fashionpedia
-emoji: 🐨
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.45.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/74run/Predict_Car/README.md b/spaces/74run/Predict_Car/README.md
deleted file mode 100644
index 125b928e412645042ec3a3b4fcdfc124f33b427c..0000000000000000000000000000000000000000
--- a/spaces/74run/Predict_Car/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Predict Car
-emoji: 🏢
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIConsultant/MusicGen/CODE_OF_CONDUCT.md b/spaces/AIConsultant/MusicGen/CODE_OF_CONDUCT.md
deleted file mode 100644
index 83f431e8feeb7e80d571f39c9f6c1b96857b5f85..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Code of Conduct
-
-## Our Pledge
-
-In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to make participation in our project and
-our community a harassment-free experience for everyone, regardless of age, body
-size, disability, ethnicity, sex characteristics, gender identity and expression,
-level of experience, education, socio-economic status, nationality, personal
-appearance, race, religion, or sexual identity and orientation.
-
-## Our Standards
-
-Examples of behavior that contributes to creating a positive environment
-include:
-
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery and unwelcome sexual attention or
-advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
-address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
-professional setting
-
-## Our Responsibilities
-
-Project maintainers are responsible for clarifying the standards of acceptable
-behavior and are expected to take appropriate and fair corrective action in
-response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or
-reject comments, commits, code, wiki edits, issues, and other contributions
-that are not aligned to this Code of Conduct, or to ban temporarily or
-permanently any contributor for other behaviors that they deem inappropriate,
-threatening, offensive, or harmful.
-
-## Scope
-
-This Code of Conduct applies within all project spaces, and it also applies when
-an individual is representing the project or its community in public spaces.
-Examples of representing a project or community include using an official
-project e-mail address, posting via an official social media account, or acting
-as an appointed representative at an online or offline event. Representation of
-a project may be further defined and clarified by project maintainers.
-
-This Code of Conduct also applies outside the project spaces when there is a
-reasonable belief that an individual's behavior may have a negative impact on
-the project or its community.
-
-## Enforcement
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at . All
-complaints will be reviewed and investigated and will result in a response that
-is deemed necessary and appropriate to the circumstances. The project team is
-obligated to maintain confidentiality with regard to the reporter of an incident.
-Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good
-faith may face temporary or permanent repercussions as determined by other
-members of the project's leadership.
-
-## Attribution
-
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
-
-[homepage]: https://www.contributor-covenant.org
-
-For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
diff --git a/spaces/Ababababababbababa/Arabic_poetry_Sha3bor_mid/README.md b/spaces/Ababababababbababa/Arabic_poetry_Sha3bor_mid/README.md
deleted file mode 100644
index f7e6b233ed18eea7ce7bad8a70359fa1d06b565f..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Arabic_poetry_Sha3bor_mid/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Arabic Poetry Sha3bor Mid
-emoji: 💻
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AiMimicry/sovits-models/vdecoder/hifigan/env.py b/spaces/AiMimicry/sovits-models/vdecoder/hifigan/env.py
deleted file mode 100644
index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000
--- a/spaces/AiMimicry/sovits-models/vdecoder/hifigan/env.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import os
-import shutil
-
-
-class AttrDict(dict):
- def __init__(self, *args, **kwargs):
- super(AttrDict, self).__init__(*args, **kwargs)
- self.__dict__ = self
-
-
-def build_env(config, config_name, path):
- t_path = os.path.join(path, config_name)
- if config != t_path:
- os.makedirs(path, exist_ok=True)
- shutil.copyfile(config, os.path.join(path, config_name))
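`AttrDict` simply aliases the instance `__dict__` to the dict itself, so JSON config keys become reachable as attributes. A minimal sketch of how such a config is typically consumed (the keys shown are illustrative, not HiFi-GAN's actual config):

```python
class AttrDict(dict):
    # same trick as above: the dict doubles as its own attribute namespace
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.__dict__ = self

h = AttrDict({"sampling_rate": 22050, "num_mels": 80})  # illustrative keys
h.hop_size = 256  # new attributes become dict entries too
```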
diff --git a/spaces/AlexWortega/AlexWortega-instruct_rugptlarge/app.py b/spaces/AlexWortega/AlexWortega-instruct_rugptlarge/app.py
deleted file mode 100644
index 253c2c6857bd5e3a395d06b32f84412023fe6249..0000000000000000000000000000000000000000
--- a/spaces/AlexWortega/AlexWortega-instruct_rugptlarge/app.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import gradio as gr
-
-
-import gradio as gr
-from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
-
-from transformers import GPT2TokenizerFast,GPT2LMHeadModel
-tokenizer = GPT2TokenizerFast.from_pretrained("AlexWortega/instruct_rugptlarge")
-special_tokens_dict = {'additional_special_tokens': ['', '', '', '', '']}
-
-tokenizer.add_special_tokens(special_tokens_dict)
- device = 'cpu' # meh, GPU is expensive
-model = GPT2LMHeadModel.from_pretrained("AlexWortega/instruct_rugptlarge")
-#
-
-model.resize_token_embeddings(len(tokenizer))
-
-def generate_prompt(instruction, input=None):
- if input:
- return f"{input}:"
- return f"{instruction}"
-
-def generate_seqs(q, temp, topp, topk, nb, maxtok):
- k=1
- gen_kwargs = {
- "min_length": 20,
- "max_new_tokens": maxtok,
- "top_k": topk,
- "top_p": topp,
- "do_sample": True,
- "early_stopping": True,
- "no_repeat_ngram_size": 2,
- "temperature":temp,
-
- "eos_token_id": tokenizer.eos_token_id,
- "pad_token_id": tokenizer.eos_token_id,
- "use_cache": True,
- "repetition_penalty": 1.5,
- "length_penalty": 0.8,
- "num_beams": nb,
- "num_return_sequences": k
- }
- if len(q)>0:
- q = q + ''
- else:
- q = 'Как зарабатывать денег на нейросетях ?' + ''
- t = tokenizer.encode(q, return_tensors='pt').to(device)
- g = model.generate(t, **gen_kwargs)
- generated_sequences = tokenizer.batch_decode(g, skip_special_tokens=False)
- #print(generated_sequences)
- # Add A: after the question and before each generated sequence
- #sequences = [f"H:{q}A:{s.replace(q, '')}" for s in generated_sequences]
-
- # Compute the reward score for each generated sequence
- #cores = [reward_model.reward_score(q, s.split('A:')[-1]) for s in sequences]
-
- # Return the k sequences with the highest score and their corresponding scores
- # results = [(s, score) for score, s in sorted(zip(scores, sequences), reverse=True)[:k]]
- ans = generated_sequences[0].replace('','\n').replace('','').replace('<|endoftext|>','')
- return ans
-
-description_html = '''
-
-'''
-
-g = gr.Interface(
- fn=generate_seqs,
- inputs=[
- gr.components.Textbox(
- lines=2, label="Впишите сюда задачу, а я попробую решить", placeholder="Как зарабатывать денег на нейросетях?"
- ),
- #gr.components.Textbox(lines=2, label="Вход", placeholder="Нет"),
- gr.components.Slider(minimum=0.1, maximum=2, value=1.0, label="Temperature"),
- gr.components.Slider(minimum=0, maximum=1, value=0.9, label="Top p"),
- gr.components.Slider(minimum=0, maximum=100, value=50, label="Top k"),
- gr.components.Slider(minimum=0, maximum=5, step=1, value=4, label="Beams"),
- gr.components.Slider(
- minimum=1, maximum=256, step=1, value=100, label="Max tokens"
- ),
- ],
- outputs=[
- gr.components.Textbox(
- lines=5,
- label="Output",
- )
- ],
- title="ruInstructlarge",
- description=description_html)
-
-
-g.queue(concurrency_count=5)
-g.launch()
\ No newline at end of file
diff --git a/spaces/AlexZou/Deploy_Restoration/model/blocks.py b/spaces/AlexZou/Deploy_Restoration/model/blocks.py
deleted file mode 100644
index 38d2f2160959c0441ff324f220d588fde9033a1b..0000000000000000000000000000000000000000
--- a/spaces/AlexZou/Deploy_Restoration/model/blocks.py
+++ /dev/null
@@ -1,281 +0,0 @@
-"""
-Code copy from uniformer source code:
-https://github.com/Sense-X/UniFormer
-"""
-import os
-import torch
-import torch.nn as nn
-from functools import partial
-import math
-from timm.models.vision_transformer import VisionTransformer, _cfg
-from timm.models.registry import register_model
-from timm.models.layers import trunc_normal_, DropPath, to_2tuple
-
-# ResMLP's normalization
-class Aff(nn.Module):
- def __init__(self, dim):
- super().__init__()
- # learnable
- self.alpha = nn.Parameter(torch.ones([1, 1, dim]))
- self.beta = nn.Parameter(torch.zeros([1, 1, dim]))
-
- def forward(self, x):
- x = x * self.alpha + self.beta
- return x
-
-# Color Normalization
-class Aff_channel(nn.Module):
- def __init__(self, dim, channel_first = True):
- super().__init__()
- # learnable
- self.alpha = nn.Parameter(torch.ones([1, 1, dim]))
- self.beta = nn.Parameter(torch.zeros([1, 1, dim]))
- self.color = nn.Parameter(torch.eye(dim))
- self.channel_first = channel_first
-
- def forward(self, x):
- if self.channel_first:
- x1 = torch.tensordot(x, self.color, dims=[[-1], [-1]])
- x2 = x1 * self.alpha + self.beta
- else:
- x1 = x * self.alpha + self.beta
- x2 = torch.tensordot(x1, self.color, dims=[[-1], [-1]])
- return x2
-
-class Mlp(nn.Module):
- # taken from https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-class CMlp(nn.Module):
- # taken from https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Conv2d(in_features, hidden_features, 1)
- self.act = act_layer()
- self.fc2 = nn.Conv2d(hidden_features, out_features, 1)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-class CBlock_ln(nn.Module):
- def __init__(self, dim, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=Aff_channel, init_values=1e-4):
- super().__init__()
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- #self.norm1 = Aff_channel(dim)
- self.norm1 = norm_layer(dim)
- self.conv1 = nn.Conv2d(dim, dim, 1)
- self.conv2 = nn.Conv2d(dim, dim, 1)
- self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- #self.norm2 = Aff_channel(dim)
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.gamma_1 = nn.Parameter(init_values * torch.ones((1, dim, 1, 1)), requires_grad=True)
- self.gamma_2 = nn.Parameter(init_values * torch.ones((1, dim, 1, 1)), requires_grad=True)
- self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- B, C, H, W = x.shape
- #print(x.shape)
- norm_x = x.flatten(2).transpose(1, 2)
- #print(norm_x.shape)
- norm_x = self.norm1(norm_x)
- norm_x = norm_x.view(B, H, W, C).permute(0, 3, 1, 2)
-
-
- x = x + self.drop_path(self.gamma_1*self.conv2(self.attn(self.conv1(norm_x))))
- norm_x = x.flatten(2).transpose(1, 2)
- norm_x = self.norm2(norm_x)
- norm_x = norm_x.view(B, H, W, C).permute(0, 3, 1, 2)
- x = x + self.drop_path(self.gamma_2*self.mlp(norm_x))
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- #print(x.shape)
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class WindowAttention(nn.Module):
- r""" Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x):
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-## Layer_norm, Aff_norm, Aff_channel_norm
-class SwinTransformerBlock(nn.Module):
- r""" Swin Transformer Block.
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, num_heads=2, window_size=8, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=Aff_channel):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
-
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- #self.norm1 = norm_layer(dim)
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- #self.norm2 = norm_layer(dim)
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- B, C, H, W = x.shape
- x = x.flatten(2).transpose(1, 2)
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- else:
- shifted_x = x
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
-
- x = shifted_x
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- x = x.transpose(1, 2).reshape(B, C, H, W)
-
- return x
-
-
-if __name__ == "__main__":
- os.environ['CUDA_VISIBLE_DEVICES']='1'
- cb_block = CBlock_ln(dim=16)
- x = torch.randn(1, 16, 400, 600)
- swin = SwinTransformerBlock(dim=16, num_heads=4)
- x = cb_block(x)
- print(x.shape)
diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/transforms.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
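The `searchsorted` helper maps each input to the index of the bin it falls in: count how many bin edges the input has passed, then subtract one. In NumPy terms:

```python
import numpy as np

# same counting trick as `searchsorted` above: for each input, count the
# bin edges it has passed and subtract one to get its bin index
bin_locations = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # illustrative edges
inputs = np.array([0.1, 0.3, 0.9])
idx = np.sum(inputs[..., None] >= bin_locations, axis=-1) - 1
```

The `eps` added to the last edge in the helper only keeps inputs sitting exactly on the domain boundary inside the final bin.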
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/AndrewRWilliams/video-whisper/README.md b/spaces/AndrewRWilliams/video-whisper/README.md
deleted file mode 100644
index 3dc00ad8925130e670fc654722a5bdc1cd69b3ac..0000000000000000000000000000000000000000
--- a/spaces/AndrewRWilliams/video-whisper/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Video Whisper
-emoji: 😻
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/reproducibility.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/reproducibility.md
deleted file mode 100644
index 1594e967c847570c0a4269fc66adb3dc14ed37c1..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/reproducibility.md
+++ /dev/null
@@ -1,191 +0,0 @@
-
-
-# Create reproducible pipelines
-
-[[open-in-colab]]
-
-Reproducibility is important for testing, replicating results, and can even be used to [improve image quality](reusing_seeds). However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can't expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint.
-
-This is why it's important to understand how to control sources of randomness in diffusion models or use deterministic algorithms.
-
-
-
-💡 We strongly recommend reading PyTorch's [statement about reproducibility](https://pytorch.org/docs/stable/notes/randomness.html):
-
-> Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds.
-
-
-
-## Control randomness
-
-During inference, pipelines rely heavily on random sampling operations, which include creating the
-Gaussian noise tensors to denoise and adding noise at each scheduling step.
-
-Take a look at the tensor values in the [`DDIMPipeline`] after two inference steps:
-
-```python
-from diffusers import DDIMPipeline
-import numpy as np
-
-model_id = "google/ddpm-cifar10-32"
-
-# load model and scheduler
-ddim = DDIMPipeline.from_pretrained(model_id)
-
-# run pipeline for just two steps and return numpy tensor
-image = ddim(num_inference_steps=2, output_type="np").images
-print(np.abs(image).sum())
-```
-
-Running the code above prints one value, but if you run it again you get a different value. What is going on here?
-
-Every time the pipeline is run, [`torch.randn`](https://pytorch.org/docs/stable/generated/torch.randn.html) uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time.
-
-But if you need to reliably generate the same image, that'll depend on whether you're running the pipeline on a CPU or GPU.
-
-### CPU
-
-To generate reproducible results on a CPU, you'll need to use a PyTorch [`Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) and set a seed:
-
-```python
-import torch
-from diffusers import DDIMPipeline
-import numpy as np
-
-model_id = "google/ddpm-cifar10-32"
-
-# load model and scheduler
-ddim = DDIMPipeline.from_pretrained(model_id)
-
-# create a generator for reproducibility
-generator = torch.Generator(device="cpu").manual_seed(0)
-
-# run pipeline for just two steps and return numpy tensor
-image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
-print(np.abs(image).sum())
-```
-
-Now when you run the code above, it always prints a value of `1491.1711` no matter what because the `Generator` object with the seed is passed to all the random functions of the pipeline.
-
-If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result.
-
-
-
-💡 It might be a bit unintuitive at first to pass `Generator` objects to the pipeline instead of
-just an integer value representing the seed, but this is the recommended design when dealing with
-probabilistic models in PyTorch, as `Generator`s are *random states* that can be
-passed to multiple pipelines in a sequence.
-
-
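The tip above can be sketched with plain `torch`, independent of any pipeline (the tensor shape here is just illustrative): a `Generator` is a mutable random state, so successive calls continue the same stream, and re-seeding replays it exactly.

```python
import torch

# A Generator is a mutable random state: each call that consumes it
# advances the stream, so a second draw differs from the first.
g = torch.Generator(device="cpu").manual_seed(0)
first = torch.randn(3, generator=g)
second = torch.randn(3, generator=g)
assert not torch.equal(first, second)

# Re-seeding the same Generator replays the exact same stream,
# which is what makes passing it through several pipeline calls reproducible.
g.manual_seed(0)
replay = torch.randn(3, generator=g)
assert torch.equal(first, replay)
```
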
-
-### GPU
-
-Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU:
-
-```python
-import torch
-from diffusers import DDIMPipeline
-import numpy as np
-
-model_id = "google/ddpm-cifar10-32"
-
-# load model and scheduler
-ddim = DDIMPipeline.from_pretrained(model_id)
-ddim.to("cuda")
-
-# create a generator for reproducibility
-generator = torch.Generator(device="cuda").manual_seed(0)
-
-# run pipeline for just two steps and return numpy tensor
-image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
-print(np.abs(image).sum())
-```
-
-The result is not the same even though you're using an identical seed because the GPU uses a different random number generator than the CPU.
-
-To circumvent this problem, 🧨 Diffusers has a [`~diffusers.utils.randn_tensor`] function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The `randn_tensor` function is used everywhere inside the pipeline, allowing the user to **always** pass a CPU `Generator` even if the pipeline is run on a GPU.
-
-You'll see the results are much closer now!
-
-```python
-import torch
-from diffusers import DDIMPipeline
-import numpy as np
-
-model_id = "google/ddpm-cifar10-32"
-
-# load model and scheduler
-ddim = DDIMPipeline.from_pretrained(model_id)
-ddim.to("cuda")
-
-# create a generator for reproducibility; notice you don't place it on the GPU!
-generator = torch.manual_seed(0)
-
-# run pipeline for just two steps and return numpy tensor
-image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
-print(np.abs(image).sum())
-```
-
-
-
-💡 If reproducibility is important, we recommend always passing a CPU generator.
-The performance loss is often negligible, and you'll generate much more similar
-values than if the pipeline had been run on a GPU.
-
-
-
-Finally, more complex pipelines such as [`UnCLIPPipeline`] are often extremely
-susceptible to precision error propagation. Don't expect similar results across
-different GPU hardware or PyTorch versions. In this case, you'll need to run
-exactly the same hardware and PyTorch version for full reproducibility.
-
-## Deterministic algorithms
-
-You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go!
-
-Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable [`CUBLAS_WORKSPACE_CONFIG`](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility) to `:16:8` to only use one buffer size during runtime.
-
-PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass `True` to [`torch.use_deterministic_algorithms`](https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html) to enable deterministic algorithms.
-
-```py
-import os
-
-import torch
-
-os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"
-
-torch.backends.cudnn.benchmark = False
-torch.use_deterministic_algorithms(True)
-```
-
-Now when you run the same pipeline twice, you'll get identical results.
-
-```py
-import torch
-from diffusers import DDIMScheduler, StableDiffusionPipeline
-import numpy as np
-
-model_id = "runwayml/stable-diffusion-v1-5"
-pipe = StableDiffusionPipeline.from_pretrained(model_id).to("cuda")
-pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
-g = torch.Generator(device="cuda")
-
-prompt = "A bear is playing a guitar on Times Square"
-
-g.manual_seed(0)
-result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images
-
-g.manual_seed(0)
-result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images
-
-print("L_inf dist = ", abs(result1 - result2).max())
-"L_inf dist = tensor(0., device='cuda:0')"
-```
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/unconditional_image_generation/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/unconditional_image_generation/README.md
deleted file mode 100644
index d83dc928c7a1164b3e8896bcfa1ef5d417ea6b80..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/unconditional_image_generation/README.md
+++ /dev/null
@@ -1,163 +0,0 @@
-## Training an unconditional diffusion model
-
-Creating a training image set is [described in a different document](https://huggingface.co/docs/datasets/image_process#image-datasets).
-
-### Installing the dependencies
-
-Before running the scripts, make sure to install the library's training dependencies:
-
-**Important**
-
-To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install .
-```
-
-Then cd into the example folder and run
-```bash
-pip install -r requirements.txt
-```
-
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-### Unconditional Flowers
-
-The command to train a DDPM UNet model on the Oxford Flowers dataset:
-
-```bash
-accelerate launch train_unconditional.py \
- --dataset_name="huggan/flowers-102-categories" \
- --resolution=64 --center_crop --random_flip \
- --output_dir="ddpm-ema-flowers-64" \
- --train_batch_size=16 \
- --num_epochs=100 \
- --gradient_accumulation_steps=1 \
- --use_ema \
- --learning_rate=1e-4 \
- --lr_warmup_steps=500 \
- --mixed_precision=no \
- --push_to_hub
-```
-An example trained model: https://huggingface.co/anton-l/ddpm-ema-flowers-64
-
-A full training run takes 2 hours on 4xV100 GPUs.
-
-
-
-
-### Unconditional Pokemon
-
-The command to train a DDPM UNet model on the Pokemon dataset:
-
-```bash
-accelerate launch train_unconditional.py \
- --dataset_name="huggan/pokemon" \
- --resolution=64 --center_crop --random_flip \
- --output_dir="ddpm-ema-pokemon-64" \
- --train_batch_size=16 \
- --num_epochs=100 \
- --gradient_accumulation_steps=1 \
- --use_ema \
- --learning_rate=1e-4 \
- --lr_warmup_steps=500 \
- --mixed_precision=no \
- --push_to_hub
-```
-An example trained model: https://huggingface.co/anton-l/ddpm-ema-pokemon-64
-
-A full training run takes 2 hours on 4xV100 GPUs.
-
-
-
-### Training with multiple GPUs
-
-`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch)
-for running distributed training with `accelerate`. Here is an example command:
-
-```bash
-accelerate launch --mixed_precision="fp16" --multi_gpu train_unconditional.py \
- --dataset_name="huggan/pokemon" \
- --resolution=64 --center_crop --random_flip \
- --output_dir="ddpm-ema-pokemon-64" \
- --train_batch_size=16 \
- --num_epochs=100 \
- --gradient_accumulation_steps=1 \
- --use_ema \
- --learning_rate=1e-4 \
- --lr_warmup_steps=500 \
- --mixed_precision="fp16" \
- --logger="wandb"
-```
-
-To be able to use Weights and Biases (`wandb`) as a logger you need to install the library: `pip install wandb`.
-
-### Using your own data
-
-To use your own dataset, there are 2 ways:
-- you can either provide your own folder as `--train_data_dir`
-- or you can upload your dataset to the hub (possibly as a private repo, if you prefer so), and simply pass the `--dataset_name` argument.
-
-Below, we explain both in more detail.
-
-#### Provide the dataset as a folder
-
-If you provide your own folders with images, the script expects the following directory structure:
-
-```bash
-data_dir/xxx.png
-data_dir/xxy.png
-data_dir/[...]/xxz.png
-```
-
-In other words, the script will take care of gathering all images inside the folder. You can then run the script like this:
-
-```bash
-accelerate launch train_unconditional.py \
-    --train_data_dir <path-to-train-directory> \
-    <other-arguments>
-```
-
-Internally, the script will use the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature which will automatically turn the folders into 🤗 Dataset objects.
-
-#### Upload your data to the hub, as a (possibly private) repo
-
-It's very easy (and convenient) to upload your image dataset to the hub using the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature available in 🤗 Datasets. Simply do the following:
-
-```python
-from datasets import load_dataset
-
-# example 1: local folder
-dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
-
-# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
-dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
-
-# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
-dataset = load_dataset("imagefolder", data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip")
-
-# example 4: providing several splits
-dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]})
-```
-
-`ImageFolder` will create an `image` column containing the PIL-encoded images.
-
-Next, push it to the hub!
-
-```python
-# assuming you have run the huggingface-cli login command in a terminal
-dataset.push_to_hub("name_of_your_dataset")
-
-# if you want to push to a private repo, simply pass private=True:
-dataset.push_to_hub("name_of_your_dataset", private=True)
-```
-
-and that's it! You can now train your model by simply setting the `--dataset_name` argument to the name of your dataset on the hub.
-
-More on this can also be found in [this blog post](https://huggingface.co/blog/image-search-datasets).
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py
deleted file mode 100644
index d05eb50c7cd501a5bab4ec403a98137b31b9b51b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './cascade_mask_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/ghm/retinanet_ghm_r101_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/ghm/retinanet_ghm_r101_fpn_1x_coco.py
deleted file mode 100644
index 18f899a9b456383a8f74053e4716aee50ee5ec8c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/ghm/retinanet_ghm_r101_fpn_1x_coco.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './retinanet_ghm_r50_fpn_1x_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/grid_rcnn.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/grid_rcnn.py
deleted file mode 100644
index b6145a1464cd940bd4f98eaa15f6f9ecf6a10a20..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/grid_rcnn.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class GridRCNN(TwoStageDetector):
- """Grid R-CNN.
-
- This detector is the implementation of:
- - Grid R-CNN (https://arxiv.org/abs/1811.12030)
- - Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688)
- """
-
- def __init__(self,
- backbone,
- rpn_head,
- roi_head,
- train_cfg,
- test_cfg,
- neck=None,
- pretrained=None):
- super(GridRCNN, self).__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained)
diff --git a/spaces/Andy1621/uniformer_image_detection/tools/train.py b/spaces/Andy1621/uniformer_image_detection/tools/train.py
deleted file mode 100644
index 1f355f3b2e2fb84b3f4c3898fca58405f852c60c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/tools/train.py
+++ /dev/null
@@ -1,187 +0,0 @@
-import argparse
-import copy
-import os
-import os.path as osp
-import time
-import warnings
-
-import mmcv
-import torch
-from mmcv import Config, DictAction
-from mmcv.runner import get_dist_info, init_dist
-from mmcv.utils import get_git_hash
-
-from mmdet import __version__
-from mmdet.apis import set_random_seed, train_detector
-from mmdet.datasets import build_dataset
-from mmdet.models import build_detector
-from mmdet.utils import collect_env, get_root_logger
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description='Train a detector')
- parser.add_argument('config', help='train config file path')
- parser.add_argument('--work-dir', help='the dir to save logs and models')
- parser.add_argument(
- '--resume-from', help='the checkpoint file to resume from')
- parser.add_argument(
- '--no-validate',
- action='store_true',
- help='whether not to evaluate the checkpoint during training')
- group_gpus = parser.add_mutually_exclusive_group()
- group_gpus.add_argument(
- '--gpus',
- type=int,
- help='number of gpus to use '
- '(only applicable to non-distributed training)')
- group_gpus.add_argument(
- '--gpu-ids',
- type=int,
- nargs='+',
- help='ids of gpus to use '
- '(only applicable to non-distributed training)')
- parser.add_argument('--seed', type=int, default=None, help='random seed')
- parser.add_argument(
- '--deterministic',
- action='store_true',
- help='whether to set deterministic options for CUDNN backend.')
- parser.add_argument(
- '--options',
- nargs='+',
- action=DictAction,
- help='override some settings in the used config, the key-value pair '
-        'in xxx=yyy format will be merged into config file (deprecated), '
- 'change to --cfg-options instead.')
- parser.add_argument(
- '--cfg-options',
- nargs='+',
- action=DictAction,
- help='override some settings in the used config, the key-value pair '
- 'in xxx=yyy format will be merged into config file. If the value to '
- 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
- 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
- 'Note that the quotation marks are necessary and that no white space '
- 'is allowed.')
- parser.add_argument(
- '--launcher',
- choices=['none', 'pytorch', 'slurm', 'mpi'],
- default='none',
- help='job launcher')
- parser.add_argument('--local_rank', type=int, default=0)
- args = parser.parse_args()
- if 'LOCAL_RANK' not in os.environ:
- os.environ['LOCAL_RANK'] = str(args.local_rank)
-
- if args.options and args.cfg_options:
- raise ValueError(
- '--options and --cfg-options cannot be both '
- 'specified, --options is deprecated in favor of --cfg-options')
- if args.options:
- warnings.warn('--options is deprecated in favor of --cfg-options')
- args.cfg_options = args.options
-
- return args
-
-
-def main():
- args = parse_args()
-
- cfg = Config.fromfile(args.config)
- if args.cfg_options is not None:
- cfg.merge_from_dict(args.cfg_options)
- # import modules from string list.
- if cfg.get('custom_imports', None):
- from mmcv.utils import import_modules_from_strings
- import_modules_from_strings(**cfg['custom_imports'])
- # set cudnn_benchmark
- if cfg.get('cudnn_benchmark', False):
- torch.backends.cudnn.benchmark = True
-
- # work_dir is determined in this priority: CLI > segment in file > filename
- if args.work_dir is not None:
- # update configs according to CLI args if args.work_dir is not None
- cfg.work_dir = args.work_dir
- elif cfg.get('work_dir', None) is None:
- # use config filename as default work_dir if cfg.work_dir is None
- cfg.work_dir = osp.join('./work_dirs',
- osp.splitext(osp.basename(args.config))[0])
- if args.resume_from is not None:
- cfg.resume_from = args.resume_from
- if args.gpu_ids is not None:
- cfg.gpu_ids = args.gpu_ids
- else:
- cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)
-
- # init distributed env first, since logger depends on the dist info.
- if args.launcher == 'none':
- distributed = False
- else:
- distributed = True
- init_dist(args.launcher, **cfg.dist_params)
- # re-set gpu_ids with distributed training mode
- _, world_size = get_dist_info()
- cfg.gpu_ids = range(world_size)
-
- # create work_dir
- mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
- # dump config
- cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config)))
- # init the logger before other steps
- timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
- log_file = osp.join(cfg.work_dir, f'{timestamp}.log')
- logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)
-
- # init the meta dict to record some important information such as
- # environment info and seed, which will be logged
- meta = dict()
- # log env info
- env_info_dict = collect_env()
- env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()])
- dash_line = '-' * 60 + '\n'
- logger.info('Environment info:\n' + dash_line + env_info + '\n' +
- dash_line)
- meta['env_info'] = env_info
- meta['config'] = cfg.pretty_text
- # log some basic info
- logger.info(f'Distributed training: {distributed}')
- logger.info(f'Config:\n{cfg.pretty_text}')
-
- # set random seeds
- if args.seed is not None:
- logger.info(f'Set random seed to {args.seed}, '
- f'deterministic: {args.deterministic}')
- set_random_seed(args.seed, deterministic=args.deterministic)
- cfg.seed = args.seed
- meta['seed'] = args.seed
- meta['exp_name'] = osp.basename(args.config)
-
- model = build_detector(
- cfg.model,
- train_cfg=cfg.get('train_cfg'),
- test_cfg=cfg.get('test_cfg'))
-
- datasets = [build_dataset(cfg.data.train)]
- if len(cfg.workflow) == 2:
- val_dataset = copy.deepcopy(cfg.data.val)
- val_dataset.pipeline = cfg.data.train.pipeline
- datasets.append(build_dataset(val_dataset))
- if cfg.checkpoint_config is not None:
- # save mmdet version, config file content and class names in
- # checkpoints as meta data
- cfg.checkpoint_config.meta = dict(
- mmdet_version=__version__ + get_git_hash()[:7],
- CLASSES=datasets[0].CLASSES)
- # add an attribute for visualization convenience
- model.CLASSES = datasets[0].CLASSES
- train_detector(
- model,
- datasets,
- cfg,
- distributed=distributed,
- validate=(not args.no_validate),
- timestamp=timestamp,
- meta=meta)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/README.md
deleted file mode 100644
index 136b49d4b6f5907b750447ac4323b26610cd3071..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/README.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Object-Contextual Representations for Semantic Segmentation
-
-## Introduction
-
-
-
-```latex
-@article{YuanW18,
- title={Ocnet: Object context network for scene parsing},
- author={Yuhui Yuan and Jingdong Wang},
- booktitle={arXiv preprint arXiv:1809.00916},
- year={2018}
-}
-
-@article{YuanCW20,
- title={Object-Contextual Representations for Semantic Segmentation},
- author={Yuhui Yuan and Xilin Chen and Jingdong Wang},
- booktitle={ECCV},
- year={2020}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-#### HRNet backbone
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | ------------------ | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| OCRNet | HRNetV2p-W18-Small | 512x1024 | 40000 | 3.5 | 10.45 | 74.30 | 75.95 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes/ocrnet_hr18s_512x1024_40k_cityscapes_20200601_033304-fa2436c2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes/ocrnet_hr18s_512x1024_40k_cityscapes_20200601_033304.log.json) |
-| OCRNet | HRNetV2p-W18 | 512x1024 | 40000 | 4.7 | 7.50 | 77.72 | 79.49 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes/ocrnet_hr18_512x1024_40k_cityscapes_20200601_033320-401c5bdd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x1024_40k_cityscapes/ocrnet_hr18_512x1024_40k_cityscapes_20200601_033320.log.json) |
-| OCRNet | HRNetV2p-W48 | 512x1024 | 40000 | 8 | 4.22 | 80.58 | 81.79 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr48_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x1024_40k_cityscapes/ocrnet_hr48_512x1024_40k_cityscapes_20200601_033336-55b32491.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x1024_40k_cityscapes/ocrnet_hr48_512x1024_40k_cityscapes_20200601_033336.log.json) |
-| OCRNet | HRNetV2p-W18-Small | 512x1024 | 80000 | - | - | 77.16 | 78.66 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18s_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x1024_80k_cityscapes/ocrnet_hr18s_512x1024_80k_cityscapes_20200601_222735-55979e63.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x1024_80k_cityscapes/ocrnet_hr18s_512x1024_80k_cityscapes_20200601_222735.log.json) |
-| OCRNet | HRNetV2p-W18 | 512x1024 | 80000 | - | - | 78.57 | 80.46 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x1024_80k_cityscapes/ocrnet_hr18_512x1024_80k_cityscapes_20200614_230521-c2e1dd4a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x1024_80k_cityscapes/ocrnet_hr18_512x1024_80k_cityscapes_20200614_230521.log.json) |
-| OCRNet | HRNetV2p-W48 | 512x1024 | 80000 | - | - | 80.70 | 81.87 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr48_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x1024_80k_cityscapes/ocrnet_hr48_512x1024_80k_cityscapes_20200601_222752-9076bcdf.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x1024_80k_cityscapes/ocrnet_hr48_512x1024_80k_cityscapes_20200601_222752.log.json) |
-| OCRNet | HRNetV2p-W18-Small | 512x1024 | 160000 | - | - | 78.45 | 79.97 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18s_512x1024_160k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x1024_160k_cityscapes/ocrnet_hr18s_512x1024_160k_cityscapes_20200602_191005-f4a7af28.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x1024_160k_cityscapes/ocrnet_hr18s_512x1024_160k_cityscapes_20200602_191005.log.json) |
-| OCRNet | HRNetV2p-W18 | 512x1024 | 160000 | - | - | 79.47 | 80.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18_512x1024_160k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x1024_160k_cityscapes/ocrnet_hr18_512x1024_160k_cityscapes_20200602_191001-b9172d0c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x1024_160k_cityscapes/ocrnet_hr18_512x1024_160k_cityscapes_20200602_191001.log.json) |
-| OCRNet | HRNetV2p-W48 | 512x1024 | 160000 | - | - | 81.35 | 82.70 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr48_512x1024_160k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x1024_160k_cityscapes/ocrnet_hr48_512x1024_160k_cityscapes_20200602_191037-dfbf1b0c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x1024_160k_cityscapes/ocrnet_hr48_512x1024_160k_cityscapes_20200602_191037.log.json) |
-
-#### ResNet backbone
-
-| Method | Backbone | Crop Size | Batch Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ---------- | ------- | -------- | -------------- | ----- | ------------: | ------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| OCRNet | R-101-D8 | 512x1024 | 8 | 40000 | - | - | 80.09 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_r101-d8_512x1024_40k_b8_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_r101-d8_512x1024_40k_b8_cityscapes/ocrnet_r101-d8_512x1024_40k_b8_cityscapes-02ac0f13.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_r101-d8_512x1024_40k_b8_cityscapes/ocrnet_r101-d8_512x1024_40k_b8_cityscapes_20200717_110721.log.json) |
-| OCRNet | R-101-D8 | 512x1024 | 16 | 40000 | 8.8 | 3.02 | 80.30 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_r101-d8_512x1024_40k_b16_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_r101-d8_512x1024_40k_b16_cityscapes/ocrnet_r101-d8_512x1024_40k_b16_cityscapes-db500f80.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_r101-d8_512x1024_40k_b16_cityscapes/ocrnet_r101-d8_512x1024_40k_b16_cityscapes_20200723_193726.log.json) |
-| OCRNet | R-101-D8 | 512x1024 | 16 | 80000 | 8.8 | 3.02 | 80.81 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_r101-d8_512x1024_80k_b16_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_r101-d8_512x1024_80k_b16_cityscapes/ocrnet_r101-d8_512x1024_80k_b16_cityscapes-78688424.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_r101-d8_512x1024_80k_b16_cityscapes/ocrnet_r101-d8_512x1024_80k_b16_cityscapes_20200723_192421.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | ------------------ | --------- | ------: | -------- | -------------- | ----: | ------------: | --------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| OCRNet | HRNetV2p-W18-Small | 512x512 | 80000 | 6.7 | 28.98 | 35.06 | 35.80 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18s_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x512_80k_ade20k/ocrnet_hr18s_512x512_80k_ade20k_20200615_055600-e80b62af.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x512_80k_ade20k/ocrnet_hr18s_512x512_80k_ade20k_20200615_055600.log.json) |
-| OCRNet | HRNetV2p-W18 | 512x512 | 80000 | 7.9 | 18.93 | 37.79 | 39.16 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x512_80k_ade20k/ocrnet_hr18_512x512_80k_ade20k_20200615_053157-d173d83b.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x512_80k_ade20k/ocrnet_hr18_512x512_80k_ade20k_20200615_053157.log.json) |
-| OCRNet | HRNetV2p-W48 | 512x512 | 80000 | 11.2 | 16.99 | 43.00 | 44.30 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr48_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x512_80k_ade20k/ocrnet_hr48_512x512_80k_ade20k_20200615_021518-d168c2d1.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x512_80k_ade20k/ocrnet_hr48_512x512_80k_ade20k_20200615_021518.log.json) |
-| OCRNet | HRNetV2p-W18-Small | 512x512 | 160000 | - | - | 37.19 | 38.40 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18s_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x512_160k_ade20k/ocrnet_hr18s_512x512_160k_ade20k_20200615_184505-8e913058.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x512_160k_ade20k/ocrnet_hr18s_512x512_160k_ade20k_20200615_184505.log.json) |
-| OCRNet | HRNetV2p-W18 | 512x512 | 160000 | - | - | 39.32 | 40.80 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x512_160k_ade20k/ocrnet_hr18_512x512_160k_ade20k_20200615_200940-d8fcd9d1.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x512_160k_ade20k/ocrnet_hr18_512x512_160k_ade20k_20200615_200940.log.json) |
-| OCRNet | HRNetV2p-W48 | 512x512 | 160000 | - | - | 43.25 | 44.88 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr48_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x512_160k_ade20k/ocrnet_hr48_512x512_160k_ade20k_20200615_184705-a073726d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x512_160k_ade20k/ocrnet_hr48_512x512_160k_ade20k_20200615_184705.log.json) |
-
-### Pascal VOC 2012 + Aug
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | ------------------ | --------- | ------: | -------- | -------------- | ----: | ------------: | ---------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| OCRNet | HRNetV2p-W18-Small | 512x512 | 20000 | 3.5 | 31.55 | 71.70 | 73.84 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18s_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x512_20k_voc12aug/ocrnet_hr18s_512x512_20k_voc12aug_20200617_233913-02b04fcb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x512_20k_voc12aug/ocrnet_hr18s_512x512_20k_voc12aug_20200617_233913.log.json) |
-| OCRNet | HRNetV2p-W18 | 512x512 | 20000 | 4.7 | 19.91 | 74.75 | 77.11 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x512_20k_voc12aug/ocrnet_hr18_512x512_20k_voc12aug_20200617_233932-8954cbb7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x512_20k_voc12aug/ocrnet_hr18_512x512_20k_voc12aug_20200617_233932.log.json) |
-| OCRNet | HRNetV2p-W48 | 512x512 | 20000 | 8.1 | 17.83 | 77.72 | 79.87 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr48_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x512_20k_voc12aug/ocrnet_hr48_512x512_20k_voc12aug_20200617_233932-9e82080a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x512_20k_voc12aug/ocrnet_hr48_512x512_20k_voc12aug_20200617_233932.log.json) |
-| OCRNet | HRNetV2p-W18-Small | 512x512 | 40000 | - | - | 72.76 | 74.60 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18s_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x512_40k_voc12aug/ocrnet_hr18s_512x512_40k_voc12aug_20200614_002025-42b587ac.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18s_512x512_40k_voc12aug/ocrnet_hr18s_512x512_40k_voc12aug_20200614_002025.log.json) |
-| OCRNet | HRNetV2p-W18 | 512x512 | 40000 | - | - | 74.98 | 77.40 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr18_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x512_40k_voc12aug/ocrnet_hr18_512x512_40k_voc12aug_20200614_015958-714302be.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr18_512x512_40k_voc12aug/ocrnet_hr18_512x512_40k_voc12aug_20200614_015958.log.json) |
-| OCRNet | HRNetV2p-W48 | 512x512 | 40000 | - | - | 77.14 | 79.71 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet/ocrnet_hr48_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x512_40k_voc12aug/ocrnet_hr48_512x512_40k_voc12aug_20200614_015958-255bc5ce.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/ocrnet/ocrnet_hr48_512x512_40k_voc12aug/ocrnet_hr48_512x512_40k_voc12aug_20200614_015958.log.json) |
diff --git a/spaces/Anni123/AuRoRA/utils.py b/spaces/Anni123/AuRoRA/utils.py
deleted file mode 100644
index 1a55998142a99d3d196d42404133f90af9ae9b32..0000000000000000000000000000000000000000
--- a/spaces/Anni123/AuRoRA/utils.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import re
-
-
-def answer_cleansing_zero_shot(dataset, pred, must_choice=False):
- pred = pred.strip()
-    if dataset in ("commonsense-mc",):
- pred = re.findall(r'A|B|C|D|E', pred)
-    elif dataset in ("arithmetic",):
- if must_choice:
- pred = re.findall(r'A|B|C|D', pred)
- else:
- pred = pred.replace(",", "")
-            pred = re.findall(r'-?\d+\.?\d*', pred)
- elif dataset in ("commonsense-verify", "symbolic-coin"):
- pred = pred.lower()
- pred = re.sub("\"|\'|\n|\.|\s|\:|\,", " ", pred)
- pred = pred.split(" ")
- pred = [i for i in pred if i in ("yes", "no")]
- elif dataset == "symbolic-letter":
- pred = re.sub("\"|\'|\n|\.|\s", "", pred)
- pred = [pred]
- elif dataset == "UNDEFINED":
- pred = pred
- else:
- raise ValueError("dataset is not properly defined ...")
-
- # If there is no candidate in list, null is set.
- if len(pred) == 0:
- pred = ""
- else:
- # choose the first element in list ...
- pred = pred[0]
-
- # (For arithmetic tasks) if a word ends with period, it will be omitted ...
- if pred != "":
- if pred[-1] == ".":
- pred = pred[:-1]
-
- return pred
-
-def type_cleasing(type):
- type = re.findall(r'arithmetic|commonsense-mc|commonsense-verify|symbolic-coin|symbolic-letter', type)
- if len(type) == 0:
- type = "UNDEFINED"
- else:
- type = type[0]
- return type
-
-
-def entity_cleansing(ent):
- ent = re.sub("\n|\s*-\s*|\.", ",", ent)
- ent = ent.split(",")
- ent = [e.strip() for e in ent if e != ""]
- return ent
-
-def knowledge_cleansing(knowledge):
- #print("Knowledge Before: " + knowledge)
- knowledge = knowledge.strip()
- if knowledge.startswith("No, "):
- knowledge = re.sub("No, ", "", knowledge)
- knowledge = re.sub("\s"," ", knowledge)
- #print("Knowledge After: " + knowledge)
- return knowledge
-
-
-
-
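The "arithmetic" branch of `answer_cleansing_zero_shot` above does three things: strips thousands separators, grabs the first signed number in the raw prediction, and drops a trailing period. A minimal standalone sketch of just that branch (the function name `extract_arithmetic_answer` is illustrative, not from the source):

```python
import re

# Re-implementation of the "arithmetic" cleansing branch: strip commas,
# then keep the first signed number found in the model's raw prediction.
def extract_arithmetic_answer(pred: str) -> str:
    pred = pred.strip().replace(",", "")
    candidates = re.findall(r'-?\d+\.?\d*', pred)
    if not candidates:
        return ""
    answer = candidates[0]
    # a trailing period (e.g. "1250.") would otherwise survive
    if answer.endswith("."):
        answer = answer[:-1]
    return answer

print(extract_arithmetic_answer("The answer is 1,250."))  # -> 1250
```

Note the regex matches the period in "1250." as a decimal point, which is why the explicit trailing-period check in the original function is still needed.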
diff --git a/spaces/Apex-X/GODROOP/roop/face_analyser.py b/spaces/Apex-X/GODROOP/roop/face_analyser.py
deleted file mode 100644
index 9c0afe458763edb22dc2332f527dfdba48575b1d..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/GODROOP/roop/face_analyser.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import threading
-from typing import Any
-import insightface
-
-import roop.globals
-from roop.typing import Frame
-
-FACE_ANALYSER = None
-THREAD_LOCK = threading.Lock()
-
-
-def get_face_analyser() -> Any:
- global FACE_ANALYSER
-
- with THREAD_LOCK:
- if FACE_ANALYSER is None:
- FACE_ANALYSER = insightface.app.FaceAnalysis(name='buffalo_l', providers=roop.globals.execution_providers)
- FACE_ANALYSER.prepare(ctx_id=0, det_size=(640, 640))
- return FACE_ANALYSER
-
-
-def get_one_face(frame: Frame) -> Any:
-    faces = get_face_analyser().get(frame)
-    try:
-        # pick the left-most detected face (smallest bbox x coordinate)
-        return min(faces, key=lambda x: x.bbox[0])
-    except ValueError:
-        return None
-
-
-def get_many_faces(frame: Frame) -> Any:
- try:
- return get_face_analyser().get(frame)
- except IndexError:
- return None
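`get_face_analyser` above is a lazy, lock-guarded singleton: the expensive `FaceAnalysis` model is built once, and the module-level lock stops two threads from constructing it concurrently. A generic sketch of the same pattern, with the insightface dependency swapped for an arbitrary factory (names here are illustrative):

```python
import threading

# Lazy, lock-guarded singleton: the object is built at most once,
# and the lock serializes the first-call construction race.
_INSTANCE = None
_LOCK = threading.Lock()

def get_instance(factory):
    global _INSTANCE
    with _LOCK:
        if _INSTANCE is None:
            _INSTANCE = factory()
    return _INSTANCE

first = get_instance(object)
second = get_instance(object)
print(first is second)  # -> True
```

Taking the lock on every call (not only when `_INSTANCE` is `None`) is slightly slower but avoids the subtleties of double-checked locking; the original module makes the same trade.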
diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/text/english.py b/spaces/Artrajz/vits-simple-api/bert_vits2/text/english.py
deleted file mode 100644
index 1a0e680ef2cd10d794fe11016774ff21379326b0..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/bert_vits2/text/english.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import pickle
-import os
-import re
-from g2p_en import G2p
-
-from bert_vits2.text import symbols
-
-current_file_path = os.path.dirname(__file__)
-CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep')
-CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle')
-_g2p = G2p()
-
-arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2',
- 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2',
- 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH',
- 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1',
- 'OW0', 'L', 'SH'}
-
-
-def post_replace_ph(ph):
- rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- 'v': "V"
- }
-    ph = rep_map.get(ph, ph)
-    if ph not in symbols:
-        ph = 'UNK'
-    return ph
-
-
-def read_dict():
- g2p_dict = {}
- start_line = 49
- with open(CMU_DICT_PATH) as f:
- line = f.readline()
- line_index = 1
- while line:
- if line_index >= start_line:
- line = line.strip()
- word_split = line.split(' ')
- word = word_split[0]
-
- syllable_split = word_split[1].split(' - ')
- g2p_dict[word] = []
- for syllable in syllable_split:
- phone_split = syllable.split(' ')
- g2p_dict[word].append(phone_split)
-
- line_index = line_index + 1
- line = f.readline()
-
- return g2p_dict
-
-
-def cache_dict(g2p_dict, file_path):
- with open(file_path, 'wb') as pickle_file:
- pickle.dump(g2p_dict, pickle_file)
-
-
-def get_dict():
- if os.path.exists(CACHE_PATH):
- with open(CACHE_PATH, 'rb') as pickle_file:
- g2p_dict = pickle.load(pickle_file)
- else:
- g2p_dict = read_dict()
- cache_dict(g2p_dict, CACHE_PATH)
-
- return g2p_dict
-
-
-eng_dict = get_dict()
-
-
-def refine_ph(phn):
- tone = 0
- if re.search(r'\d$', phn):
- tone = int(phn[-1]) + 1
- phn = phn[:-1]
- return phn.lower(), tone
-
-
-def refine_syllables(syllables):
- tones = []
- phonemes = []
- for phn_list in syllables:
- for i in range(len(phn_list)):
- phn = phn_list[i]
- phn, tone = refine_ph(phn)
- phonemes.append(phn)
- tones.append(tone)
- return phonemes, tones
-
-
-def text_normalize(text):
-
- return text
-
-
-def g2p(text):
- phones = []
- tones = []
- words = re.split(r"([,;.\-\?\!\s+])", text)
- for w in words:
- if w.upper() in eng_dict:
- phns, tns = refine_syllables(eng_dict[w.upper()])
- phones += phns
- tones += tns
- else:
- phone_list = list(filter(lambda p: p != " ", _g2p(w)))
- for ph in phone_list:
- if ph in arpa:
- ph, tn = refine_ph(ph)
- phones.append(ph)
- tones.append(tn)
- else:
- phones.append(ph)
- tones.append(0)
-
- word2ph = [1 for i in phones]
-
- phones = [post_replace_ph(i) for i in phones]
- return phones, tones, word2ph
-
-
-if __name__ == "__main__":
- # print(get_dict())
- # print(eng_word_to_phoneme("hello"))
- print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder."))
- # all_phones = set()
- # for k, syllables in eng_dict.items():
- # for group in syllables:
- # for ph in group:
- # all_phones.add(ph)
- # print(all_phones)
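The core of `refine_ph` above is splitting an ARPAbet phone such as "AH0" or "EY2" into a lowercase phone plus a tone, shifting the stress digit up by one so that tone 0 can mean "no stress information" (e.g. for consonants like "HH"). A self-contained sketch (the name `split_phone_and_tone` is illustrative):

```python
import re

# ARPAbet phones end in an optional stress digit (0/1/2). Split it off,
# add 1 so that tone 0 is reserved for phones with no stress marker,
# and lowercase the remaining phone symbol.
def split_phone_and_tone(phn: str):
    tone = 0
    if re.search(r'\d$', phn):
        tone = int(phn[-1]) + 1
        phn = phn[:-1]
    return phn.lower(), tone

print(split_phone_and_tone("AH0"))  # -> ('ah', 1)
print(split_phone_and_tone("HH"))   # -> ('hh', 0)
```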
diff --git a/spaces/Artrajz/vits-simple-api/vits/text/__init__.py b/spaces/Artrajz/vits-simple-api/vits/text/__init__.py
deleted file mode 100644
index 026b69dd07248ce848270b8cf79bbc1acfb97129..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/vits/text/__init__.py
+++ /dev/null
@@ -1,32 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from vits.text import cleaners
-
-
-def text_to_sequence(text, symbols, cleaner_names, bert_embedding=False):
-    '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
-    Args:
-      text: string to convert to a sequence
-      symbols: list of valid symbols; each symbol's index is its ID
-      cleaner_names: names of the cleaner functions to run the text through
-      bert_embedding: if True, also return the per-character embeddings produced by the cleaner
-    Returns:
-      List of integers corresponding to the symbols in the text
- '''
-
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
- if bert_embedding:
- cleaned_text, char_embeds = _clean_text(text, cleaner_names)
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text.split()]
- return sequence, char_embeds
- else:
- cleaned_text = _clean_text(text, cleaner_names)
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
- return sequence
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
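The non-BERT branch of `text_to_sequence` above boils down to a dictionary lookup: build a symbol-to-index table from the symbol list, then map each character of the cleaned text through it, silently dropping anything not in the table. A minimal sketch of just that mapping (the name `to_sequence` is illustrative):

```python
# Symbol-to-id mapping as in text_to_sequence's non-BERT branch:
# each symbol's ID is its index in the symbol list; characters that
# are not valid symbols are dropped rather than raising.
def to_sequence(cleaned_text, symbols):
    symbol_to_id = {s: i for i, s in enumerate(symbols)}
    return [symbol_to_id[ch] for ch in cleaned_text if ch in symbol_to_id]

print(to_sequence("abba!", ["a", "b", "c"]))  # -> [0, 1, 1, 0]
```

Dropping unknown symbols keeps inference robust to stray punctuation, at the cost of hiding cleaner bugs; the BERT branch instead raises a `KeyError` on unknown symbols because its sequence must stay aligned with the embeddings.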
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/jisfreq.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/jisfreq.py
deleted file mode 100644
index 3293576e012a1c931b5e89ebc065c67b65941084..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/jisfreq.py
+++ /dev/null
@@ -1,325 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Communicator client code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-# Sampling from about 20M of text material, including literature and computer technology
-#
-# Japanese frequency table, applied to both S-JIS and EUC-JP
-# They are sorted in order.
-
-# 128 --> 0.77094
-# 256 --> 0.85710
-# 512 --> 0.92635
-# 1024 --> 0.97130
-# 2048 --> 0.99431
-#
-# Ideal Distribution Ratio = 0.92635 / (1-0.92635) = 12.58
-# Random Distribution Ratio = 512 / (2965+62+83+86-512) = 0.191
-#
-# Typical Distribution Ratio, 25% of IDR
-
-JIS_TYPICAL_DISTRIBUTION_RATIO = 3.0
-
-# Char to FreqOrder table
-JIS_TABLE_SIZE = 4368
-
-# fmt: off
-JIS_CHAR_TO_FREQ_ORDER = (
- 40, 1, 6, 182, 152, 180, 295,2127, 285, 381,3295,4304,3068,4606,3165,3510, # 16
-3511,1822,2785,4607,1193,2226,5070,4608, 171,2996,1247, 18, 179,5071, 856,1661, # 32
-1262,5072, 619, 127,3431,3512,3230,1899,1700, 232, 228,1294,1298, 284, 283,2041, # 48
-2042,1061,1062, 48, 49, 44, 45, 433, 434,1040,1041, 996, 787,2997,1255,4305, # 64
-2108,4609,1684,1648,5073,5074,5075,5076,5077,5078,3687,5079,4610,5080,3927,3928, # 80
-5081,3296,3432, 290,2285,1471,2187,5082,2580,2825,1303,2140,1739,1445,2691,3375, # 96
-1691,3297,4306,4307,4611, 452,3376,1182,2713,3688,3069,4308,5083,5084,5085,5086, # 112
-5087,5088,5089,5090,5091,5092,5093,5094,5095,5096,5097,5098,5099,5100,5101,5102, # 128
-5103,5104,5105,5106,5107,5108,5109,5110,5111,5112,4097,5113,5114,5115,5116,5117, # 144
-5118,5119,5120,5121,5122,5123,5124,5125,5126,5127,5128,5129,5130,5131,5132,5133, # 160
-5134,5135,5136,5137,5138,5139,5140,5141,5142,5143,5144,5145,5146,5147,5148,5149, # 176
-5150,5151,5152,4612,5153,5154,5155,5156,5157,5158,5159,5160,5161,5162,5163,5164, # 192
-5165,5166,5167,5168,5169,5170,5171,5172,5173,5174,5175,1472, 598, 618, 820,1205, # 208
-1309,1412,1858,1307,1692,5176,5177,5178,5179,5180,5181,5182,1142,1452,1234,1172, # 224
-1875,2043,2149,1793,1382,2973, 925,2404,1067,1241, 960,1377,2935,1491, 919,1217, # 240
-1865,2030,1406,1499,2749,4098,5183,5184,5185,5186,5187,5188,2561,4099,3117,1804, # 256
-2049,3689,4309,3513,1663,5189,3166,3118,3298,1587,1561,3433,5190,3119,1625,2998, # 272
-3299,4613,1766,3690,2786,4614,5191,5192,5193,5194,2161, 26,3377, 2,3929, 20, # 288
-3691, 47,4100, 50, 17, 16, 35, 268, 27, 243, 42, 155, 24, 154, 29, 184, # 304
- 4, 91, 14, 92, 53, 396, 33, 289, 9, 37, 64, 620, 21, 39, 321, 5, # 320
- 12, 11, 52, 13, 3, 208, 138, 0, 7, 60, 526, 141, 151,1069, 181, 275, # 336
-1591, 83, 132,1475, 126, 331, 829, 15, 69, 160, 59, 22, 157, 55,1079, 312, # 352
- 109, 38, 23, 25, 10, 19, 79,5195, 61, 382,1124, 8, 30,5196,5197,5198, # 368
-5199,5200,5201,5202,5203,5204,5205,5206, 89, 62, 74, 34,2416, 112, 139, 196, # 384
- 271, 149, 84, 607, 131, 765, 46, 88, 153, 683, 76, 874, 101, 258, 57, 80, # 400
- 32, 364, 121,1508, 169,1547, 68, 235, 145,2999, 41, 360,3027, 70, 63, 31, # 416
- 43, 259, 262,1383, 99, 533, 194, 66, 93, 846, 217, 192, 56, 106, 58, 565, # 432
- 280, 272, 311, 256, 146, 82, 308, 71, 100, 128, 214, 655, 110, 261, 104,1140, # 448
- 54, 51, 36, 87, 67,3070, 185,2618,2936,2020, 28,1066,2390,2059,5207,5208, # 464
-5209,5210,5211,5212,5213,5214,5215,5216,4615,5217,5218,5219,5220,5221,5222,5223, # 480
-5224,5225,5226,5227,5228,5229,5230,5231,5232,5233,5234,5235,5236,3514,5237,5238, # 496
-5239,5240,5241,5242,5243,5244,2297,2031,4616,4310,3692,5245,3071,5246,3598,5247, # 512
-4617,3231,3515,5248,4101,4311,4618,3808,4312,4102,5249,4103,4104,3599,5250,5251, # 528
-5252,5253,5254,5255,5256,5257,5258,5259,5260,5261,5262,5263,5264,5265,5266,5267, # 544
-5268,5269,5270,5271,5272,5273,5274,5275,5276,5277,5278,5279,5280,5281,5282,5283, # 560
-5284,5285,5286,5287,5288,5289,5290,5291,5292,5293,5294,5295,5296,5297,5298,5299, # 576
-5300,5301,5302,5303,5304,5305,5306,5307,5308,5309,5310,5311,5312,5313,5314,5315, # 592
-5316,5317,5318,5319,5320,5321,5322,5323,5324,5325,5326,5327,5328,5329,5330,5331, # 608
-5332,5333,5334,5335,5336,5337,5338,5339,5340,5341,5342,5343,5344,5345,5346,5347, # 624
-5348,5349,5350,5351,5352,5353,5354,5355,5356,5357,5358,5359,5360,5361,5362,5363, # 640
-5364,5365,5366,5367,5368,5369,5370,5371,5372,5373,5374,5375,5376,5377,5378,5379, # 656
-5380,5381, 363, 642,2787,2878,2788,2789,2316,3232,2317,3434,2011, 165,1942,3930, # 672
-3931,3932,3933,5382,4619,5383,4620,5384,5385,5386,5387,5388,5389,5390,5391,5392, # 688
-5393,5394,5395,5396,5397,5398,5399,5400,5401,5402,5403,5404,5405,5406,5407,5408, # 704
-5409,5410,5411,5412,5413,5414,5415,5416,5417,5418,5419,5420,5421,5422,5423,5424, # 720
-5425,5426,5427,5428,5429,5430,5431,5432,5433,5434,5435,5436,5437,5438,5439,5440, # 736
-5441,5442,5443,5444,5445,5446,5447,5448,5449,5450,5451,5452,5453,5454,5455,5456, # 752
-5457,5458,5459,5460,5461,5462,5463,5464,5465,5466,5467,5468,5469,5470,5471,5472, # 768
-5473,5474,5475,5476,5477,5478,5479,5480,5481,5482,5483,5484,5485,5486,5487,5488, # 784
-5489,5490,5491,5492,5493,5494,5495,5496,5497,5498,5499,5500,5501,5502,5503,5504, # 800
-5505,5506,5507,5508,5509,5510,5511,5512,5513,5514,5515,5516,5517,5518,5519,5520, # 816
-5521,5522,5523,5524,5525,5526,5527,5528,5529,5530,5531,5532,5533,5534,5535,5536, # 832
-5537,5538,5539,5540,5541,5542,5543,5544,5545,5546,5547,5548,5549,5550,5551,5552, # 848
-5553,5554,5555,5556,5557,5558,5559,5560,5561,5562,5563,5564,5565,5566,5567,5568, # 864
-5569,5570,5571,5572,5573,5574,5575,5576,5577,5578,5579,5580,5581,5582,5583,5584, # 880
-5585,5586,5587,5588,5589,5590,5591,5592,5593,5594,5595,5596,5597,5598,5599,5600, # 896
-5601,5602,5603,5604,5605,5606,5607,5608,5609,5610,5611,5612,5613,5614,5615,5616, # 912
-5617,5618,5619,5620,5621,5622,5623,5624,5625,5626,5627,5628,5629,5630,5631,5632, # 928
-5633,5634,5635,5636,5637,5638,5639,5640,5641,5642,5643,5644,5645,5646,5647,5648, # 944
-5649,5650,5651,5652,5653,5654,5655,5656,5657,5658,5659,5660,5661,5662,5663,5664, # 960
-5665,5666,5667,5668,5669,5670,5671,5672,5673,5674,5675,5676,5677,5678,5679,5680, # 976
-5681,5682,5683,5684,5685,5686,5687,5688,5689,5690,5691,5692,5693,5694,5695,5696, # 992
-5697,5698,5699,5700,5701,5702,5703,5704,5705,5706,5707,5708,5709,5710,5711,5712, # 1008
-5713,5714,5715,5716,5717,5718,5719,5720,5721,5722,5723,5724,5725,5726,5727,5728, # 1024
-5729,5730,5731,5732,5733,5734,5735,5736,5737,5738,5739,5740,5741,5742,5743,5744, # 1040
-5745,5746,5747,5748,5749,5750,5751,5752,5753,5754,5755,5756,5757,5758,5759,5760, # 1056
-5761,5762,5763,5764,5765,5766,5767,5768,5769,5770,5771,5772,5773,5774,5775,5776, # 1072
-5777,5778,5779,5780,5781,5782,5783,5784,5785,5786,5787,5788,5789,5790,5791,5792, # 1088
-5793,5794,5795,5796,5797,5798,5799,5800,5801,5802,5803,5804,5805,5806,5807,5808, # 1104
-5809,5810,5811,5812,5813,5814,5815,5816,5817,5818,5819,5820,5821,5822,5823,5824, # 1120
-5825,5826,5827,5828,5829,5830,5831,5832,5833,5834,5835,5836,5837,5838,5839,5840, # 1136
-5841,5842,5843,5844,5845,5846,5847,5848,5849,5850,5851,5852,5853,5854,5855,5856, # 1152
-5857,5858,5859,5860,5861,5862,5863,5864,5865,5866,5867,5868,5869,5870,5871,5872, # 1168
-5873,5874,5875,5876,5877,5878,5879,5880,5881,5882,5883,5884,5885,5886,5887,5888, # 1184
-5889,5890,5891,5892,5893,5894,5895,5896,5897,5898,5899,5900,5901,5902,5903,5904, # 1200
-5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5916,5917,5918,5919,5920, # 1216
-5921,5922,5923,5924,5925,5926,5927,5928,5929,5930,5931,5932,5933,5934,5935,5936, # 1232
-5937,5938,5939,5940,5941,5942,5943,5944,5945,5946,5947,5948,5949,5950,5951,5952, # 1248
-5953,5954,5955,5956,5957,5958,5959,5960,5961,5962,5963,5964,5965,5966,5967,5968, # 1264
-5969,5970,5971,5972,5973,5974,5975,5976,5977,5978,5979,5980,5981,5982,5983,5984, # 1280
-5985,5986,5987,5988,5989,5990,5991,5992,5993,5994,5995,5996,5997,5998,5999,6000, # 1296
-6001,6002,6003,6004,6005,6006,6007,6008,6009,6010,6011,6012,6013,6014,6015,6016, # 1312
-6017,6018,6019,6020,6021,6022,6023,6024,6025,6026,6027,6028,6029,6030,6031,6032, # 1328
-6033,6034,6035,6036,6037,6038,6039,6040,6041,6042,6043,6044,6045,6046,6047,6048, # 1344
-6049,6050,6051,6052,6053,6054,6055,6056,6057,6058,6059,6060,6061,6062,6063,6064, # 1360
-6065,6066,6067,6068,6069,6070,6071,6072,6073,6074,6075,6076,6077,6078,6079,6080, # 1376
-6081,6082,6083,6084,6085,6086,6087,6088,6089,6090,6091,6092,6093,6094,6095,6096, # 1392
-6097,6098,6099,6100,6101,6102,6103,6104,6105,6106,6107,6108,6109,6110,6111,6112, # 1408
-6113,6114,2044,2060,4621, 997,1235, 473,1186,4622, 920,3378,6115,6116, 379,1108, # 1424
-4313,2657,2735,3934,6117,3809, 636,3233, 573,1026,3693,3435,2974,3300,2298,4105, # 1440
- 854,2937,2463, 393,2581,2417, 539, 752,1280,2750,2480, 140,1161, 440, 708,1569, # 1456
- 665,2497,1746,1291,1523,3000, 164,1603, 847,1331, 537,1997, 486, 508,1693,2418, # 1472
-1970,2227, 878,1220, 299,1030, 969, 652,2751, 624,1137,3301,2619, 65,3302,2045, # 1488
-1761,1859,3120,1930,3694,3516, 663,1767, 852, 835,3695, 269, 767,2826,2339,1305, # 1504
- 896,1150, 770,1616,6118, 506,1502,2075,1012,2519, 775,2520,2975,2340,2938,4314, # 1520
-3028,2086,1224,1943,2286,6119,3072,4315,2240,1273,1987,3935,1557, 175, 597, 985, # 1536
-3517,2419,2521,1416,3029, 585, 938,1931,1007,1052,1932,1685,6120,3379,4316,4623, # 1552
- 804, 599,3121,1333,2128,2539,1159,1554,2032,3810, 687,2033,2904, 952, 675,1467, # 1568
-3436,6121,2241,1096,1786,2440,1543,1924, 980,1813,2228, 781,2692,1879, 728,1918, # 1584
-3696,4624, 548,1950,4625,1809,1088,1356,3303,2522,1944, 502, 972, 373, 513,2827, # 1600
- 586,2377,2391,1003,1976,1631,6122,2464,1084, 648,1776,4626,2141, 324, 962,2012, # 1616
-2177,2076,1384, 742,2178,1448,1173,1810, 222, 102, 301, 445, 125,2420, 662,2498, # 1632
- 277, 200,1476,1165,1068, 224,2562,1378,1446, 450,1880, 659, 791, 582,4627,2939, # 1648
-3936,1516,1274, 555,2099,3697,1020,1389,1526,3380,1762,1723,1787,2229, 412,2114, # 1664
-1900,2392,3518, 512,2597, 427,1925,2341,3122,1653,1686,2465,2499, 697, 330, 273, # 1680
- 380,2162, 951, 832, 780, 991,1301,3073, 965,2270,3519, 668,2523,2636,1286, 535, # 1696
-1407, 518, 671, 957,2658,2378, 267, 611,2197,3030,6123, 248,2299, 967,1799,2356, # 1712
- 850,1418,3437,1876,1256,1480,2828,1718,6124,6125,1755,1664,2405,6126,4628,2879, # 1728
-2829, 499,2179, 676,4629, 557,2329,2214,2090, 325,3234, 464, 811,3001, 992,2342, # 1744
-2481,1232,1469, 303,2242, 466,1070,2163, 603,1777,2091,4630,2752,4631,2714, 322, # 1760
-2659,1964,1768, 481,2188,1463,2330,2857,3600,2092,3031,2421,4632,2318,2070,1849, # 1776
-2598,4633,1302,2254,1668,1701,2422,3811,2905,3032,3123,2046,4106,1763,1694,4634, # 1792
-1604, 943,1724,1454, 917, 868,2215,1169,2940, 552,1145,1800,1228,1823,1955, 316, # 1808
-1080,2510, 361,1807,2830,4107,2660,3381,1346,1423,1134,4108,6127, 541,1263,1229, # 1824
-1148,2540, 545, 465,1833,2880,3438,1901,3074,2482, 816,3937, 713,1788,2500, 122, # 1840
-1575, 195,1451,2501,1111,6128, 859, 374,1225,2243,2483,4317, 390,1033,3439,3075, # 1856
-2524,1687, 266, 793,1440,2599, 946, 779, 802, 507, 897,1081, 528,2189,1292, 711, # 1872
-1866,1725,1167,1640, 753, 398,2661,1053, 246, 348,4318, 137,1024,3440,1600,2077, # 1888
-2129, 825,4319, 698, 238, 521, 187,2300,1157,2423,1641,1605,1464,1610,1097,2541, # 1904
-1260,1436, 759,2255,1814,2150, 705,3235, 409,2563,3304, 561,3033,2005,2564, 726, # 1920
-1956,2343,3698,4109, 949,3812,3813,3520,1669, 653,1379,2525, 881,2198, 632,2256, # 1936
-1027, 778,1074, 733,1957, 514,1481,2466, 554,2180, 702,3938,1606,1017,1398,6129, # 1952
-1380,3521, 921, 993,1313, 594, 449,1489,1617,1166, 768,1426,1360, 495,1794,3601, # 1968
-1177,3602,1170,4320,2344, 476, 425,3167,4635,3168,1424, 401,2662,1171,3382,1998, # 1984
-1089,4110, 477,3169, 474,6130,1909, 596,2831,1842, 494, 693,1051,1028,1207,3076, # 2000
- 606,2115, 727,2790,1473,1115, 743,3522, 630, 805,1532,4321,2021, 366,1057, 838, # 2016
- 684,1114,2142,4322,2050,1492,1892,1808,2271,3814,2424,1971,1447,1373,3305,1090, # 2032
-1536,3939,3523,3306,1455,2199, 336, 369,2331,1035, 584,2393, 902, 718,2600,6131, # 2048
-2753, 463,2151,1149,1611,2467, 715,1308,3124,1268, 343,1413,3236,1517,1347,2663, # 2064
-2093,3940,2022,1131,1553,2100,2941,1427,3441,2942,1323,2484,6132,1980, 872,2368, # 2080
-2441,2943, 320,2369,2116,1082, 679,1933,3941,2791,3815, 625,1143,2023, 422,2200, # 2096
-3816,6133, 730,1695, 356,2257,1626,2301,2858,2637,1627,1778, 937, 883,2906,2693, # 2112
-3002,1769,1086, 400,1063,1325,3307,2792,4111,3077, 456,2345,1046, 747,6134,1524, # 2128
- 884,1094,3383,1474,2164,1059, 974,1688,2181,2258,1047, 345,1665,1187, 358, 875, # 2144
-3170, 305, 660,3524,2190,1334,1135,3171,1540,1649,2542,1527, 927, 968,2793, 885, # 2160
-1972,1850, 482, 500,2638,1218,1109,1085,2543,1654,2034, 876, 78,2287,1482,1277, # 2176
- 861,1675,1083,1779, 724,2754, 454, 397,1132,1612,2332, 893, 672,1237, 257,2259, # 2192
-2370, 135,3384, 337,2244, 547, 352, 340, 709,2485,1400, 788,1138,2511, 540, 772, # 2208
-1682,2260,2272,2544,2013,1843,1902,4636,1999,1562,2288,4637,2201,1403,1533, 407, # 2224
- 576,3308,1254,2071, 978,3385, 170, 136,1201,3125,2664,3172,2394, 213, 912, 873, # 2240
-3603,1713,2202, 699,3604,3699, 813,3442, 493, 531,1054, 468,2907,1483, 304, 281, # 2256
-4112,1726,1252,2094, 339,2319,2130,2639, 756,1563,2944, 748, 571,2976,1588,2425, # 2272
-2715,1851,1460,2426,1528,1392,1973,3237, 288,3309, 685,3386, 296, 892,2716,2216, # 2288
-1570,2245, 722,1747,2217, 905,3238,1103,6135,1893,1441,1965, 251,1805,2371,3700, # 2304
-2601,1919,1078, 75,2182,1509,1592,1270,2640,4638,2152,6136,3310,3817, 524, 706, # 2320
-1075, 292,3818,1756,2602, 317, 98,3173,3605,3525,1844,2218,3819,2502, 814, 567, # 2336
- 385,2908,1534,6137, 534,1642,3239, 797,6138,1670,1529, 953,4323, 188,1071, 538, # 2352
- 178, 729,3240,2109,1226,1374,2000,2357,2977, 731,2468,1116,2014,2051,6139,1261, # 2368
-1593, 803,2859,2736,3443, 556, 682, 823,1541,6140,1369,2289,1706,2794, 845, 462, # 2384
-2603,2665,1361, 387, 162,2358,1740, 739,1770,1720,1304,1401,3241,1049, 627,1571, # 2400
-2427,3526,1877,3942,1852,1500, 431,1910,1503, 677, 297,2795, 286,1433,1038,1198, # 2416
-2290,1133,1596,4113,4639,2469,1510,1484,3943,6141,2442, 108, 712,4640,2372, 866, # 2432
-3701,2755,3242,1348, 834,1945,1408,3527,2395,3243,1811, 824, 994,1179,2110,1548, # 2448
-1453, 790,3003, 690,4324,4325,2832,2909,3820,1860,3821, 225,1748, 310, 346,1780, # 2464
-2470, 821,1993,2717,2796, 828, 877,3528,2860,2471,1702,2165,2910,2486,1789, 453, # 2480
- 359,2291,1676, 73,1164,1461,1127,3311, 421, 604, 314,1037, 589, 116,2487, 737, # 2496
- 837,1180, 111, 244, 735,6142,2261,1861,1362, 986, 523, 418, 581,2666,3822, 103, # 2512
- 855, 503,1414,1867,2488,1091, 657,1597, 979, 605,1316,4641,1021,2443,2078,2001, # 2528
-1209, 96, 587,2166,1032, 260,1072,2153, 173, 94, 226,3244, 819,2006,4642,4114, # 2544
-2203, 231,1744, 782, 97,2667, 786,3387, 887, 391, 442,2219,4326,1425,6143,2694, # 2560
- 633,1544,1202, 483,2015, 592,2052,1958,2472,1655, 419, 129,4327,3444,3312,1714, # 2576
-1257,3078,4328,1518,1098, 865,1310,1019,1885,1512,1734, 469,2444, 148, 773, 436, # 2592
-1815,1868,1128,1055,4329,1245,2756,3445,2154,1934,1039,4643, 579,1238, 932,2320, # 2608
- 353, 205, 801, 115,2428, 944,2321,1881, 399,2565,1211, 678, 766,3944, 335,2101, # 2624
-1459,1781,1402,3945,2737,2131,1010, 844, 981,1326,1013, 550,1816,1545,2620,1335, # 2640
-1008, 371,2881, 936,1419,1613,3529,1456,1395,2273,1834,2604,1317,2738,2503, 416, # 2656
-1643,4330, 806,1126, 229, 591,3946,1314,1981,1576,1837,1666, 347,1790, 977,3313, # 2672
- 764,2861,1853, 688,2429,1920,1462, 77, 595, 415,2002,3034, 798,1192,4115,6144, # 2688
-2978,4331,3035,2695,2582,2072,2566, 430,2430,1727, 842,1396,3947,3702, 613, 377, # 2704
- 278, 236,1417,3388,3314,3174, 757,1869, 107,3530,6145,1194, 623,2262, 207,1253, # 2720
-2167,3446,3948, 492,1117,1935, 536,1838,2757,1246,4332, 696,2095,2406,1393,1572, # 2736
-3175,1782, 583, 190, 253,1390,2230, 830,3126,3389, 934,3245,1703,1749,2979,1870, # 2752
-2545,1656,2204, 869,2346,4116,3176,1817, 496,1764,4644, 942,1504, 404,1903,1122, # 2768
-1580,3606,2945,1022, 515, 372,1735, 955,2431,3036,6146,2797,1110,2302,2798, 617, # 2784
-6147, 441, 762,1771,3447,3607,3608,1904, 840,3037, 86, 939,1385, 572,1370,2445, # 2800
-1336, 114,3703, 898, 294, 203,3315, 703,1583,2274, 429, 961,4333,1854,1951,3390, # 2816
-2373,3704,4334,1318,1381, 966,1911,2322,1006,1155, 309, 989, 458,2718,1795,1372, # 2832
-1203, 252,1689,1363,3177, 517,1936, 168,1490, 562, 193,3823,1042,4117,1835, 551, # 2848
- 470,4645, 395, 489,3448,1871,1465,2583,2641, 417,1493, 279,1295, 511,1236,1119, # 2864
- 72,1231,1982,1812,3004, 871,1564, 984,3449,1667,2696,2096,4646,2347,2833,1673, # 2880
-3609, 695,3246,2668, 807,1183,4647, 890, 388,2333,1801,1457,2911,1765,1477,1031, # 2896
-3316,3317,1278,3391,2799,2292,2526, 163,3450,4335,2669,1404,1802,6148,2323,2407, # 2912
-1584,1728,1494,1824,1269, 298, 909,3318,1034,1632, 375, 776,1683,2061, 291, 210, # 2928
-1123, 809,1249,1002,2642,3038, 206,1011,2132, 144, 975, 882,1565, 342, 667, 754, # 2944
-1442,2143,1299,2303,2062, 447, 626,2205,1221,2739,2912,1144,1214,2206,2584, 760, # 2960
-1715, 614, 950,1281,2670,2621, 810, 577,1287,2546,4648, 242,2168, 250,2643, 691, # 2976
- 123,2644, 647, 313,1029, 689,1357,2946,1650, 216, 771,1339,1306, 808,2063, 549, # 2992
- 913,1371,2913,2914,6149,1466,1092,1174,1196,1311,2605,2396,1783,1796,3079, 406, # 3008
-2671,2117,3949,4649, 487,1825,2220,6150,2915, 448,2348,1073,6151,2397,1707, 130, # 3024
- 900,1598, 329, 176,1959,2527,1620,6152,2275,4336,3319,1983,2191,3705,3610,2155, # 3040
-3706,1912,1513,1614,6153,1988, 646, 392,2304,1589,3320,3039,1826,1239,1352,1340, # 3056
-2916, 505,2567,1709,1437,2408,2547, 906,6154,2672, 384,1458,1594,1100,1329, 710, # 3072
- 423,3531,2064,2231,2622,1989,2673,1087,1882, 333, 841,3005,1296,2882,2379, 580, # 3088
-1937,1827,1293,2585, 601, 574, 249,1772,4118,2079,1120, 645, 901,1176,1690, 795, # 3104
-2207, 478,1434, 516,1190,1530, 761,2080, 930,1264, 355, 435,1552, 644,1791, 987, # 3120
- 220,1364,1163,1121,1538, 306,2169,1327,1222, 546,2645, 218, 241, 610,1704,3321, # 3136
-1984,1839,1966,2528, 451,6155,2586,3707,2568, 907,3178, 254,2947, 186,1845,4650, # 3152
- 745, 432,1757, 428,1633, 888,2246,2221,2489,3611,2118,1258,1265, 956,3127,1784, # 3168
-4337,2490, 319, 510, 119, 457,3612, 274,2035,2007,4651,1409,3128, 970,2758, 590, # 3184
-2800, 661,2247,4652,2008,3950,1420,1549,3080,3322,3951,1651,1375,2111, 485,2491, # 3200
-1429,1156,6156,2548,2183,1495, 831,1840,2529,2446, 501,1657, 307,1894,3247,1341, # 3216
- 666, 899,2156,1539,2549,1559, 886, 349,2208,3081,2305,1736,3824,2170,2759,1014, # 3232
-1913,1386, 542,1397,2948, 490, 368, 716, 362, 159, 282,2569,1129,1658,1288,1750, # 3248
-2674, 276, 649,2016, 751,1496, 658,1818,1284,1862,2209,2087,2512,3451, 622,2834, # 3264
- 376, 117,1060,2053,1208,1721,1101,1443, 247,1250,3179,1792,3952,2760,2398,3953, # 3280
-6157,2144,3708, 446,2432,1151,2570,3452,2447,2761,2835,1210,2448,3082, 424,2222, # 3296
-1251,2449,2119,2836, 504,1581,4338, 602, 817, 857,3825,2349,2306, 357,3826,1470, # 3312
-1883,2883, 255, 958, 929,2917,3248, 302,4653,1050,1271,1751,2307,1952,1430,2697, # 3328
-2719,2359, 354,3180, 777, 158,2036,4339,1659,4340,4654,2308,2949,2248,1146,2232, # 3344
-3532,2720,1696,2623,3827,6158,3129,1550,2698,1485,1297,1428, 637, 931,2721,2145, # 3360
- 914,2550,2587, 81,2450, 612, 827,2646,1242,4655,1118,2884, 472,1855,3181,3533, # 3376
-3534, 569,1353,2699,1244,1758,2588,4119,2009,2762,2171,3709,1312,1531,6159,1152, # 3392
-1938, 134,1830, 471,3710,2276,1112,1535,3323,3453,3535, 982,1337,2950, 488, 826, # 3408
- 674,1058,1628,4120,2017, 522,2399, 211, 568,1367,3454, 350, 293,1872,1139,3249, # 3424
-1399,1946,3006,1300,2360,3324, 588, 736,6160,2606, 744, 669,3536,3828,6161,1358, # 3440
- 199, 723, 848, 933, 851,1939,1505,1514,1338,1618,1831,4656,1634,3613, 443,2740, # 3456
-3829, 717,1947, 491,1914,6162,2551,1542,4121,1025,6163,1099,1223, 198,3040,2722, # 3472
- 370, 410,1905,2589, 998,1248,3182,2380, 519,1449,4122,1710, 947, 928,1153,4341, # 3488
-2277, 344,2624,1511, 615, 105, 161,1212,1076,1960,3130,2054,1926,1175,1906,2473, # 3504
- 414,1873,2801,6164,2309, 315,1319,3325, 318,2018,2146,2157, 963, 631, 223,4342, # 3520
-4343,2675, 479,3711,1197,2625,3712,2676,2361,6165,4344,4123,6166,2451,3183,1886, # 3536
-2184,1674,1330,1711,1635,1506, 799, 219,3250,3083,3954,1677,3713,3326,2081,3614, # 3552
-1652,2073,4657,1147,3041,1752, 643,1961, 147,1974,3955,6167,1716,2037, 918,3007, # 3568
-1994, 120,1537, 118, 609,3184,4345, 740,3455,1219, 332,1615,3830,6168,1621,2980, # 3584
-1582, 783, 212, 553,2350,3714,1349,2433,2082,4124, 889,6169,2310,1275,1410, 973, # 3600
- 166,1320,3456,1797,1215,3185,2885,1846,2590,2763,4658, 629, 822,3008, 763, 940, # 3616
-1990,2862, 439,2409,1566,1240,1622, 926,1282,1907,2764, 654,2210,1607, 327,1130, # 3632
-3956,1678,1623,6170,2434,2192, 686, 608,3831,3715, 903,3957,3042,6171,2741,1522, # 3648
-1915,1105,1555,2552,1359, 323,3251,4346,3457, 738,1354,2553,2311,2334,1828,2003, # 3664
-3832,1753,2351,1227,6172,1887,4125,1478,6173,2410,1874,1712,1847, 520,1204,2607, # 3680
- 264,4659, 836,2677,2102, 600,4660,3833,2278,3084,6174,4347,3615,1342, 640, 532, # 3696
- 543,2608,1888,2400,2591,1009,4348,1497, 341,1737,3616,2723,1394, 529,3252,1321, # 3712
- 983,4661,1515,2120, 971,2592, 924, 287,1662,3186,4349,2700,4350,1519, 908,1948, # 3728
-2452, 156, 796,1629,1486,2223,2055, 694,4126,1259,1036,3392,1213,2249,2742,1889, # 3744
-1230,3958,1015, 910, 408, 559,3617,4662, 746, 725, 935,4663,3959,3009,1289, 563, # 3760
- 867,4664,3960,1567,2981,2038,2626, 988,2263,2381,4351, 143,2374, 704,1895,6175, # 3776
-1188,3716,2088, 673,3085,2362,4352, 484,1608,1921,2765,2918, 215, 904,3618,3537, # 3792
- 894, 509, 976,3043,2701,3961,4353,2837,2982, 498,6176,6177,1102,3538,1332,3393, # 3808
-1487,1636,1637, 233, 245,3962, 383, 650, 995,3044, 460,1520,1206,2352, 749,3327, # 3824
- 530, 700, 389,1438,1560,1773,3963,2264, 719,2951,2724,3834, 870,1832,1644,1000, # 3840
- 839,2474,3717, 197,1630,3394, 365,2886,3964,1285,2133, 734, 922, 818,1106, 732, # 3856
- 480,2083,1774,3458, 923,2279,1350, 221,3086, 85,2233,2234,3835,1585,3010,2147, # 3872
-1387,1705,2382,1619,2475, 133, 239,2802,1991,1016,2084,2383, 411,2838,1113, 651, # 3888
-1985,1160,3328, 990,1863,3087,1048,1276,2647, 265,2627,1599,3253,2056, 150, 638, # 3904
-2019, 656, 853, 326,1479, 680,1439,4354,1001,1759, 413,3459,3395,2492,1431, 459, # 3920
-4355,1125,3329,2265,1953,1450,2065,2863, 849, 351,2678,3131,3254,3255,1104,1577, # 3936
- 227,1351,1645,2453,2193,1421,2887, 812,2121, 634, 95,2435, 201,2312,4665,1646, # 3952
-1671,2743,1601,2554,2702,2648,2280,1315,1366,2089,3132,1573,3718,3965,1729,1189, # 3968
- 328,2679,1077,1940,1136, 558,1283, 964,1195, 621,2074,1199,1743,3460,3619,1896, # 3984
-1916,1890,3836,2952,1154,2112,1064, 862, 378,3011,2066,2113,2803,1568,2839,6178, # 4000
-3088,2919,1941,1660,2004,1992,2194, 142, 707,1590,1708,1624,1922,1023,1836,1233, # 4016
-1004,2313, 789, 741,3620,6179,1609,2411,1200,4127,3719,3720,4666,2057,3721, 593, # 4032
-2840, 367,2920,1878,6180,3461,1521, 628,1168, 692,2211,2649, 300, 720,2067,2571, # 4048
-2953,3396, 959,2504,3966,3539,3462,1977, 701,6181, 954,1043, 800, 681, 183,3722, # 4064
-1803,1730,3540,4128,2103, 815,2314, 174, 467, 230,2454,1093,2134, 755,3541,3397, # 4080
-1141,1162,6182,1738,2039, 270,3256,2513,1005,1647,2185,3837, 858,1679,1897,1719, # 4096
-2954,2324,1806, 402, 670, 167,4129,1498,2158,2104, 750,6183, 915, 189,1680,1551, # 4112
- 455,4356,1501,2455, 405,1095,2955, 338,1586,1266,1819, 570, 641,1324, 237,1556, # 4128
-2650,1388,3723,6184,1368,2384,1343,1978,3089,2436, 879,3724, 792,1191, 758,3012, # 4144
-1411,2135,1322,4357, 240,4667,1848,3725,1574,6185, 420,3045,1546,1391, 714,4358, # 4160
-1967, 941,1864, 863, 664, 426, 560,1731,2680,1785,2864,1949,2363, 403,3330,1415, # 4176
-1279,2136,1697,2335, 204, 721,2097,3838, 90,6186,2085,2505, 191,3967, 124,2148, # 4192
-1376,1798,1178,1107,1898,1405, 860,4359,1243,1272,2375,2983,1558,2456,1638, 113, # 4208
-3621, 578,1923,2609, 880, 386,4130, 784,2186,2266,1422,2956,2172,1722, 497, 263, # 4224
-2514,1267,2412,2610, 177,2703,3542, 774,1927,1344, 616,1432,1595,1018, 172,4360, # 4240
-2325, 911,4361, 438,1468,3622, 794,3968,2024,2173,1681,1829,2957, 945, 895,3090, # 4256
- 575,2212,2476, 475,2401,2681, 785,2744,1745,2293,2555,1975,3133,2865, 394,4668, # 4272
-3839, 635,4131, 639, 202,1507,2195,2766,1345,1435,2572,3726,1908,1184,1181,2457, # 4288
-3727,3134,4362, 843,2611, 437, 916,4669, 234, 769,1884,3046,3047,3623, 833,6187, # 4304
-1639,2250,2402,1355,1185,2010,2047, 999, 525,1732,1290,1488,2612, 948,1578,3728, # 4320
-2413,2477,1216,2725,2159, 334,3840,1328,3624,2921,1525,4132, 564,1056, 891,4363, # 4336
-1444,1698,2385,2251,3729,1365,2281,2235,1717,6188, 864,3841,2515, 444, 527,2767, # 4352
-2922,3625, 544, 461,6189, 566, 209,2437,3398,2098,1065,2068,3331,3626,3257,2137, # 4368 #last 512
-)
-# fmt: on
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/bdist_dumb.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/bdist_dumb.py
deleted file mode 100644
index 0f52330f67728e5f02d1673dc9683e95f6f9d294..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/bdist_dumb.py
+++ /dev/null
@@ -1,144 +0,0 @@
-"""distutils.command.bdist_dumb
-
-Implements the Distutils 'bdist_dumb' command (create a "dumb" built
-distribution -- i.e., just an archive to be unpacked under $prefix or
-$exec_prefix)."""
-
-import os
-from distutils.core import Command
-from distutils.util import get_platform
-from distutils.dir_util import remove_tree, ensure_relative
-from distutils.errors import DistutilsPlatformError
-from distutils.sysconfig import get_python_version
-from distutils import log
-
-
-class bdist_dumb(Command):
-
- description = "create a \"dumb\" built distribution"
-
- user_options = [
- ('bdist-dir=', 'd', "temporary directory for creating the distribution"),
- (
- 'plat-name=',
- 'p',
- "platform name to embed in generated filenames "
- "(default: %s)" % get_platform(),
- ),
- (
- 'format=',
- 'f',
- "archive format to create (tar, gztar, bztar, xztar, " "ztar, zip)",
- ),
- (
- 'keep-temp',
- 'k',
- "keep the pseudo-installation tree around after "
- + "creating the distribution archive",
- ),
- ('dist-dir=', 'd', "directory to put final built distributions in"),
- ('skip-build', None, "skip rebuilding everything (for testing/debugging)"),
- (
- 'relative',
- None,
- "build the archive using relative paths " "(default: false)",
- ),
- (
- 'owner=',
- 'u',
- "Owner name used when creating a tar file" " [default: current user]",
- ),
- (
- 'group=',
- 'g',
- "Group name used when creating a tar file" " [default: current group]",
- ),
- ]
-
- boolean_options = ['keep-temp', 'skip-build', 'relative']
-
- default_format = {'posix': 'gztar', 'nt': 'zip'}
-
- def initialize_options(self):
- self.bdist_dir = None
- self.plat_name = None
- self.format = None
- self.keep_temp = 0
- self.dist_dir = None
- self.skip_build = None
- self.relative = 0
- self.owner = None
- self.group = None
-
- def finalize_options(self):
- if self.bdist_dir is None:
- bdist_base = self.get_finalized_command('bdist').bdist_base
- self.bdist_dir = os.path.join(bdist_base, 'dumb')
-
- if self.format is None:
- try:
- self.format = self.default_format[os.name]
- except KeyError:
- raise DistutilsPlatformError(
- "don't know how to create dumb built distributions "
- "on platform %s" % os.name
- )
-
- self.set_undefined_options(
- 'bdist',
- ('dist_dir', 'dist_dir'),
- ('plat_name', 'plat_name'),
- ('skip_build', 'skip_build'),
- )
-
- def run(self):
- if not self.skip_build:
- self.run_command('build')
-
- install = self.reinitialize_command('install', reinit_subcommands=1)
- install.root = self.bdist_dir
- install.skip_build = self.skip_build
- install.warn_dir = 0
-
- log.info("installing to %s", self.bdist_dir)
- self.run_command('install')
-
- # And make an archive relative to the root of the
- # pseudo-installation tree.
- archive_basename = "{}.{}".format(
- self.distribution.get_fullname(), self.plat_name
- )
-
- pseudoinstall_root = os.path.join(self.dist_dir, archive_basename)
- if not self.relative:
- archive_root = self.bdist_dir
- else:
- if self.distribution.has_ext_modules() and (
- install.install_base != install.install_platbase
- ):
- raise DistutilsPlatformError(
- "can't make a dumb built distribution where "
- "base and platbase are different (%s, %s)"
- % (repr(install.install_base), repr(install.install_platbase))
- )
- else:
- archive_root = os.path.join(
- self.bdist_dir, ensure_relative(install.install_base)
- )
-
- # Make the archive
- filename = self.make_archive(
- pseudoinstall_root,
- self.format,
- root_dir=archive_root,
- owner=self.owner,
- group=self.group,
- )
- if self.distribution.has_ext_modules():
- pyversion = get_python_version()
- else:
- pyversion = 'any'
- self.distribution.dist_files.append(('bdist_dumb', pyversion, filename))
-
- if not self.keep_temp:
- remove_tree(self.bdist_dir, dry_run=self.dry_run)
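For reference, the `bdist_dumb` command deleted above derives its archive path from the distribution's full name and the platform string before handing it to `make_archive`. A minimal standalone sketch of that naming scheme (the helper names below are illustrative, not distutils API):

```python
import os

def dumb_archive_basename(fullname: str, plat_name: str) -> str:
    """Mirror bdist_dumb's archive naming: '<fullname>.<plat_name>'."""
    return "{}.{}".format(fullname, plat_name)

def pseudoinstall_root(dist_dir: str, fullname: str, plat_name: str) -> str:
    """Path passed to make_archive(), before the format suffix is appended."""
    return os.path.join(dist_dir, dumb_archive_basename(fullname, plat_name))

# e.g. dist/mypkg-1.0.linux-x86_64 (make_archive then appends .tar.gz, .zip, ...)
print(pseudoinstall_root("dist", "mypkg-1.0", "linux-x86_64"))
```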
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/build_scripts.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/build_scripts.py
deleted file mode 100644
index 2cc5d1e09c09b6c674d47a26c5ebc6163705ecce..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/build_scripts.py
+++ /dev/null
@@ -1,173 +0,0 @@
-"""distutils.command.build_scripts
-
-Implements the Distutils 'build_scripts' command."""
-
-import os
-import re
-from stat import ST_MODE
-from distutils import sysconfig
-from distutils.core import Command
-from distutils.dep_util import newer
-from distutils.util import convert_path
-from distutils import log
-import tokenize
-
-shebang_pattern = re.compile('^#!.*python[0-9.]*([ \t].*)?$')
-"""
-Pattern matching a Python interpreter indicated in first line of a script.
-"""
-
-# for Setuptools compatibility
-first_line_re = shebang_pattern
-
-
-class build_scripts(Command):
-
- description = "\"build\" scripts (copy and fixup #! line)"
-
- user_options = [
- ('build-dir=', 'd', "directory to \"build\" (copy) to"),
- ('force', 'f', "forcibly build everything (ignore file timestamps)"),
- ('executable=', 'e', "specify final destination interpreter path"),
- ]
-
- boolean_options = ['force']
-
- def initialize_options(self):
- self.build_dir = None
- self.scripts = None
- self.force = None
- self.executable = None
-
- def finalize_options(self):
- self.set_undefined_options(
- 'build',
- ('build_scripts', 'build_dir'),
- ('force', 'force'),
- ('executable', 'executable'),
- )
- self.scripts = self.distribution.scripts
-
- def get_source_files(self):
- return self.scripts
-
- def run(self):
- if not self.scripts:
- return
- self.copy_scripts()
-
- def copy_scripts(self):
- """
- Copy each script listed in ``self.scripts``.
-
- If a script is marked as a Python script (first line matches
- 'shebang_pattern', i.e. starts with ``#!`` and contains
- "python"), then adjust in the copy the first line to refer to
- the current Python interpreter.
- """
- self.mkpath(self.build_dir)
- outfiles = []
- updated_files = []
- for script in self.scripts:
- self._copy_script(script, outfiles, updated_files)
-
- self._change_modes(outfiles)
-
- return outfiles, updated_files
-
- def _copy_script(self, script, outfiles, updated_files): # noqa: C901
- shebang_match = None
- script = convert_path(script)
- outfile = os.path.join(self.build_dir, os.path.basename(script))
- outfiles.append(outfile)
-
- if not self.force and not newer(script, outfile):
- log.debug("not copying %s (up-to-date)", script)
- return
-
- # Always open the file, but ignore failures in dry-run mode
- # in order to attempt to copy directly.
- try:
- f = tokenize.open(script)
- except OSError:
- if not self.dry_run:
- raise
- f = None
- else:
- first_line = f.readline()
- if not first_line:
- self.warn("%s is an empty file (skipping)" % script)
- return
-
- shebang_match = shebang_pattern.match(first_line)
-
- updated_files.append(outfile)
- if shebang_match:
- log.info("copying and adjusting %s -> %s", script, self.build_dir)
- if not self.dry_run:
- if not sysconfig.python_build:
- executable = self.executable
- else:
- executable = os.path.join(
- sysconfig.get_config_var("BINDIR"),
- "python%s%s"
- % (
- sysconfig.get_config_var("VERSION"),
- sysconfig.get_config_var("EXE"),
- ),
- )
- post_interp = shebang_match.group(1) or ''
- shebang = "#!" + executable + post_interp + "\n"
- self._validate_shebang(shebang, f.encoding)
- with open(outfile, "w", encoding=f.encoding) as outf:
- outf.write(shebang)
- outf.writelines(f.readlines())
- if f:
- f.close()
- else:
- if f:
- f.close()
- self.copy_file(script, outfile)
-
- def _change_modes(self, outfiles):
- if os.name != 'posix':
- return
-
- for file in outfiles:
- self._change_mode(file)
-
- def _change_mode(self, file):
- if self.dry_run:
- log.info("changing mode of %s", file)
- return
-
- oldmode = os.stat(file)[ST_MODE] & 0o7777
- newmode = (oldmode | 0o555) & 0o7777
- if newmode != oldmode:
- log.info("changing mode of %s from %o to %o", file, oldmode, newmode)
- os.chmod(file, newmode)
-
- @staticmethod
- def _validate_shebang(shebang, encoding):
- # Python parser starts to read a script using UTF-8 until
- # it gets a #coding:xxx cookie. The shebang has to be the
- # first line of a file, the #coding:xxx cookie cannot be
- # written before. So the shebang has to be encodable to
- # UTF-8.
- try:
- shebang.encode('utf-8')
- except UnicodeEncodeError:
- raise ValueError(
- "The shebang ({!r}) is not encodable " "to utf-8".format(shebang)
- )
-
- # If the script is encoded to a custom encoding (use a
- # #coding:xxx cookie), the shebang has to be encodable to
- # the script encoding too.
- try:
- shebang.encode(encoding)
- except UnicodeEncodeError:
- raise ValueError(
- "The shebang ({!r}) is not encodable "
- "to the script encoding ({})".format(shebang, encoding)
- )
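The shebang-rewriting rule implemented by the removed `build_scripts` command can be sketched in isolation: if a script's first line matches `shebang_pattern`, the interpreter path is replaced with the target interpreter while any trailing options (`post_interp`) are preserved. The helper name below is illustrative, not distutils API:

```python
import re
import sys

# Same pattern as the removed module: a '#!' line naming a python interpreter.
shebang_pattern = re.compile('^#!.*python[0-9.]*([ \t].*)?$')

def adjust_shebang(first_line: str, executable: str = sys.executable) -> str:
    """Return a rewritten shebang pointing at `executable`, or the line unchanged."""
    match = shebang_pattern.match(first_line)
    if not match:
        return first_line  # not a python shebang; copied through untouched
    post_interp = match.group(1) or ''  # e.g. ' -u' after the interpreter path
    return "#!" + executable + post_interp + "\n"

print(adjust_shebang("#!/usr/bin/python3 -u", "/opt/py/bin/python3"))
```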
diff --git a/spaces/B1360976/waste-management-system/README.md b/spaces/B1360976/waste-management-system/README.md
deleted file mode 100644
index 20f487529247b2b8292d093e376e17f41c9fb1df..0000000000000000000000000000000000000000
--- a/spaces/B1360976/waste-management-system/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Waste Management System
-emoji: ⚡
-colorFrom: gray
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Caramelo Crush Saga Completo Mod Apk.md b/spaces/Benson/text-generation/Examples/Caramelo Crush Saga Completo Mod Apk.md
deleted file mode 100644
index 0e24009f6106fbe78ff56a0b7ebdad817865c8b6..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Caramelo Crush Saga Completo Mod Apk.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Caramelo Crush Saga Full Mod Apk: Todo lo que necesita saber
-
Si usted es un fan de los juegos de rompecabezas de partido-tres, es probable que haya oído hablar o jugado Candy Crush Saga. Es uno de los juegos más populares y adictivos en dispositivos móviles, con millones de jugadores en todo el mundo. ¿Pero sabías que hay una versión modificada del juego que te da vidas ilimitadas, potenciadores, movimientos y barras de oro? Sí, lo has leído bien. Se llama Candy Crush Saga Mod Apk, y es un cambio de juego para cualquier persona que quiera disfrutar del juego sin limitaciones o frustraciones. En este artículo, le diremos todo lo que necesita saber sobre Candy Crush Saga Mod Apk, incluyendo lo que es, ¿cuáles son sus beneficios y desventajas, cómo descargar e instalar, y cómo jugarlo. Así que, sin más preámbulos, ¡empecemos!
-
¿Qué es Candy Crush Saga?
-
Candy Crush Saga es un videojuego de combinación de fichas gratuito lanzado por King en 2012. Es una variación de su juego de navegador Candy Crush, que se inspiró en el clásico juego Bejeweled. El juego está disponible para iOS, Android, Windows Phone, Windows 10 y Facebook.
El juego de Candy Crush Saga es simple pero adictivo. Tienes que intercambiar dos dulces adyacentes en un tablero de juego para hacer un partido de tres o más dulces del mismo color. Cuando haces un fósforo, los caramelos se quitan del tablero y los nuevos caen de la tapa. Tienes que completar diferentes objetivos en cada nivel, como alcanzar una determinada puntuación, limpiar todos los bloques de gelatina, recoger los ingredientes o eliminar el chocolate. Tienes un número limitado de movimientos o tiempo para completar cada nivel. Si no lo haces, perderás una vida. Empiezas con cinco vidas, y cada vez que pierdes una, tardas 30 minutos en reponerla. También puedes comprar vidas extra o boosters con dinero real o barras de oro.
-
Las características de Candy Crush Saga
-
-
-
Miles de niveles con diferentes temas y dificultades.
-
Caramelos especiales que tienen diferentes efectos cuando se combinan, como caramelos a rayas que limpian una fila o columna, caramelos envueltos que explotan dos veces, bombas de color que eliminan todos los caramelos de un color, y más.
-
Potenciadores que pueden ayudarte en situaciones difíciles, como martillos de piruleta que pueden aplastar cualquier caramelo, movimientos adicionales que pueden darte más oportunidades, interruptores libres que pueden intercambiar dos dulces y más.
-
Recompensas diarias que te dan boosters gratis o barras de oro.
-
Eventos y desafíos que ofrecen recompensas adicionales o niveles especiales.
-
Características sociales que le permiten conectarse con sus amigos en Facebook u otras plataformas, comparar sus puntuaciones, enviar y recibir vidas o refuerzos, y competir en tablas de clasificación o torneos.
-
-
¿Qué es Candy Crush Saga Mod Apk?
-
Candy Crush Saga Mod Apk es una versión modificada del juego original que ha sido alterado para incluir características adicionales que no están disponibles en la versión oficial. También se conoce como Candy Crush Hack o Cheat.
-
Los beneficios de Candy Crush Saga Mod Apk
-
Algunos de los beneficios de usar Candy Crush Saga Mod Apk son:
-
-
-
Vidas ilimitadas, boosters, movimientos y barras de oro. Puedes jugar todo lo que quieras sin preocuparte por quedarte sin recursos o esperar a que se llenen.
-
Desbloqueado todos los niveles y episodios. Puede acceder a cualquier nivel o episodio que desee sin tener que completar los anteriores o pagar por ellos.
-
Eliminado todos los anuncios. Puede disfrutar del juego sin interrupciones o distracciones de anuncios molestos.
-
Mayor puntuación y recompensas. Puedes obtener puntuaciones más altas y más recompensas por completar cada nivel o desafío.
-
-
Los inconvenientes de Candy Crush Saga Mod Apk
-
Sin embargo, el uso de Candy Crush Saga Mod Apk también tiene algunos inconvenientes que usted debe tener en cuenta. Algunos de ellos son:
-
-
-
Riesgo de contraer virus o malware. Desde Candy Crush Saga Mod Apk no se descarga de una fuente de confianza, puede contener archivos dañinos o maliciosos que pueden dañar su dispositivo o robar su información personal.
-
Falta de actualizaciones y soporte. Desde Candy Crush Saga Mod Apk no es mantenido por el desarrollador del juego, puede no ser compatible con la última versión del juego o la plataforma que está utilizando. También puede tener errores o fallos que pueden afectar su experiencia de juego.
-
Falta de desafío y diversión. Desde Candy Crush Saga Mod Apk le da recursos ilimitados y el acceso a todos los niveles y episodios, puede hacer el juego demasiado fácil y aburrido para usted. Puedes perder el sentido de logro y satisfacción que viene de superar dificultades y progresar en el juego.
-
-
Cómo descargar e instalar Candy Crush Saga Mod Apk?
-
Si todavía quieres probar Candy Crush Saga Mod Apk, es necesario seguir algunos pasos para descargar e instalar en su dispositivo. Estos son los pasos:
-
Los pasos para descargar e instalar Candy Crush Saga Mod Apk
-
-
Primero, necesitas desinstalar la versión original de Candy Crush Saga desde tu dispositivo. Esto es para evitar conflictos o errores entre las dos versiones.
-
En segundo lugar, es necesario habilitar la instalación de aplicaciones de fuentes desconocidas en el dispositivo. Esto es para permitir la instalación de Candy Crush Saga Mod Apk, que no es de una fuente de confianza. Para hacer esto, vaya a la configuración del dispositivo, luego a la seguridad, luego a fuentes desconocidas y enciéndala.
-
En tercer lugar, es necesario encontrar un sitio web confiable que ofrece Candy Crush Saga Mod Apk para su descarga. Hay muchos sitios web que afirman proporcionar el apk mod, pero algunos de ellos pueden ser falsos o inseguros. Para evitar cualquier riesgo, usted debe hacer una investigación y comprobar las revisiones y calificaciones del sitio web antes de descargar nada de él.
-
-
Quinto, es necesario localizar el archivo descargado en el dispositivo y toque en él para iniciar el proceso de instalación. Siga las instrucciones de la pantalla y espere a que termine la instalación.
-
Sixth, launch the game and enjoy playing with unlimited resources and features.
-
-
Precautions to take before downloading and installing Candy Crush Saga Mod Apk
-
Before downloading and installing Candy Crush Saga Mod Apk, you should take some precautions to protect yourself and your device from potential problems. Some of them are:
-
-
Back up your data. Back up your game progress and data before uninstalling the original version of Candy Crush Saga or installing the mod apk. This prevents data loss or corruption if something goes wrong.
-
Use a VPN. Use a virtual private network (VPN) when downloading or installing Candy Crush Saga Mod Apk. This hides your IP address and location from third parties that may monitor your online activity or block your access to certain websites.
-
Use an antivirus. Run antivirus software on your device when downloading or installing Candy Crush Saga Mod Apk, to scan for and remove any virus or malware hidden in the mod apk file or on the website.
-
Use a secondary account. Play Candy Crush Saga Mod Apk on a secondary account, so you don't get banned or lose your main account if you are detected using the mod apk.
-
-
How do you play Candy Crush Saga Mod Apk?
-
-
Tips and tricks for playing Candy Crush Saga Mod Apk
-
-
Use the unlimited lives, boosters, moves, and gold bars wisely. You can use them whenever you want, but don't waste them on easy levels or unnecessary moves. Save them for harder levels or challenges that demand more skill and strategy.
-
Use the unlocked levels and episodes to explore and learn. You can replay any level or episode you like, but don't skip or ignore the earlier ones. Play them to practice your skills, learn new tricks, and earn more rewards.
-
Use the removed ads to focus and relax. You can play without interruptions or distractions from annoying ads, but don't play for too long or too often. Take breaks and rest your eyes and mind from time to time.
-
Use the increased scores and rewards to motivate and challenge yourself. You can earn higher scores and more rewards for completing each level or challenge, but don't rely on them too heavily or cheat at the game. Try to improve your skills and beat your own records.
-
-
The best combos and special candies to use in Candy Crush Saga Mod Apk
-
Candy Crush Saga Mod Apk has the same special candies and combos as the original game, but they are more powerful and effective thanks to the mod apk's unlimited resources and features. Some of the best combos and special candies to use in Candy Crush Saga Mod Apk are:
-
-
Striped candy + wrapped candy: This combo creates a giant candy that clears three rows and three columns of candies.
-
Striped candy + color bomb: This combo turns all candies of the same color as the striped candy into striped candies and activates them.
-
Wrapped candy + color bomb: This combo turns all candies of the same color as the wrapped candy into wrapped candies and activates them.
-
-
Fish + fish: This combo creates a school of fish that target random candies or blockers on the board.
-
Fish + striped candy: This combo creates a school of fish that target random candies or blockers on the board and turn them into striped candies.
-
Fish + wrapped candy: This combo creates a school of fish that target random candies or blockers on the board and turn them into wrapped candies.
-
Fish + color bomb: This combo creates a school of fish that target random candies or blockers on the board and turn them into fish.
-
-
Conclusion
-
Candy Crush Saga Mod Apk is a modified version of the original game that gives you unlimited resources and access to all levels and episodes. It has many benefits, such as removing ads, increasing scores and rewards, unlocking levels and episodes, and providing unlimited lives, boosters, moves, and gold bars. However, it also has drawbacks, such as the risk of being banned, getting viruses or malware, the lack of updates and support, and losing the challenge and fun. If you want to try Candy Crush Saga Mod Apk, you need to follow a few steps to download and install it on your device, and take some precautions to protect yourself and your device from potential problems. You can also use some tips and tricks for playing Candy Crush Saga Mod Apk, along with some of the best combos and special candies to use in the game. We hope this article has helped you learn everything you need to know about Candy Crush Saga Mod Apk. Happy crushing!
-
Frequently asked questions about Candy Crush Saga Mod Apk
-
-
Is Candy Crush Saga Mod Apk safe?
-
-
Is Candy Crush Saga Mod Apk legal?
-
Candy Crush Saga Mod Apk is not legal, because it is not authorized by the game's developer or the platform you are using. It may infringe their intellectual property rights or other laws and regulations. If you are caught using the mod apk, you may face legal action or penalties.
-
Is Candy Crush Saga Mod Apk free?
-
Candy Crush Saga Mod Apk is free to download and install, but it may not be free to use. Some websites that offer the mod apk may require you to complete surveys, watch ads, or download other apps before you can access it. Some mod apk files may also contain in-app purchases or subscriptions that can charge you real money.
-
Is Candy Crush Saga Mod Apk compatible with my device?
-
Candy Crush Saga Mod Apk may not be compatible with your device, because it is not maintained by the game's developer. It may not work with the latest version of the game or the platform you are using. It may also have bugs or glitches that affect your gaming experience. Check the mod apk's specifications and requirements before downloading and installing it on your device.
-
Is Candy Crush Saga Mod Apk worth it?
-
Candy Crush Saga Mod Apk may be worth it for people who want to enjoy the game without limitations or frustrations, or who want to explore and learn new things in the game. However, it may not be worth it for people who value their security, their account and progress, the challenge and fun, and their respect and integrity. Ultimately, whether or not to use Candy Crush Saga Mod Apk depends on your personal preference and judgment.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Carretera Transversal 4.10.0 Mod Apk.md b/spaces/Benson/text-generation/Examples/Carretera Transversal 4.10.0 Mod Apk.md
deleted file mode 100644
index edc3ba9d0cb1dd089bf3010d6974c48c93b49f01..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Carretera Transversal 4.10.0 Mod Apk.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Crossy Road 4.10.0 Mod Apk: Everything You Need to Know
-
If you're a fan of arcade games, you've probably heard of Crossy Road, the popular pixel game that challenges you to cross roads, railways, rivers, and other obstacles without getting hit or falling. But did you know there is a modified version of the game that gives you unlimited coins, unlocked characters, and more? In this article, we'll tell you everything you need to know about Crossy Road 4.10.0 Mod Apk, including what it is, how to install it, how to play it, and its pros and cons.
-
What is Crossy Road?
-
Crossy Road is an arcade game created in 2014 by Hipster Whale, an Australian indie game studio. The game was inspired by the classic game Frogger, but with a twist: you can collect coins and unlock different characters with unique visuals and sounds.
Crossy Road's gameplay is simple but addictive: you tap or swipe the screen to move your character forward, backward, left, or right, avoiding cars, trains, trucks, buses, and other obstacles in your way. You also have to watch out for rivers, where you must jump on logs or lily pads without falling into the water, and for hawks, which swoop down and grab you if you stand still for too long.
-
The game has no end: your goal is to get as far as possible and beat your own or your friends' high scores. You can also earn coins by playing the game or watching ads, which you can use to spin a prize machine that randomly gives you one of the many characters available in the game.
-
The features of Crossy Road
-
Crossy Road has many features that make it fun and engaging, such as:
-
-
-
Different worlds: You can play in different worlds that change the look of the game, such as space, dinosaurs, Halloween, Chinese New Year, the UK and Ireland, Brazil, Disney, the Arctic, the ocean, and more.
-
Simple, pure, innovative gameplay: You don't need complicated controls or instructions to play the game: just tap or swipe and enjoy.
-
Free to play: You can download and play the game for free on your Android or iOS device.
-
-
The characters of Crossy Road
-
One of the most appealing aspects of Crossy Road is the variety of characters you can unlock and play with. There are currently 276 characters in total (266 for iOS and 285 for Android), divided into three categories:
-
-land and can be unlocked by playing the game or using coins. Some examples are Chicken, Mallard, Emo Goose, Poopy Pigeon, Giddy Goat, and Floppy Fish.
-
Special characters: These are characters with special abilities or effects, such as changing the world, the music, the obstacles, or the gameplay. Some examples are Hipster Whale, who can swim in water and collect coins; The Dark Lord, who turns the world into a dark, fiery place; Michael Boom, who explodes when hit by an obstacle; and Pac-Man, who can eat ghosts and pellets.
-
Secret characters: These are characters that can only be unlocked by performing specific actions or meeting certain conditions in the game. Some examples are Crab, unlocked by swiping sideways 49 times; Drop Bear, unlocked by playing as an Australian character and being attacked by a bear; Nessie, unlocked by finding her in the Loch Ness world; and Pro Gamer, unlocked by scoring more than 10,000 points.
-
-
What is Crossy Road 4.10.0 Mod Apk?
-
-
The benefits of Crossy Road 4.10.0 Mod Apk
-
Some of the benefits of Crossy Road 4.10.0 Mod Apk are:
-
-
Unlimited coins: You get unlimited coins in the game, which you can use to spin the prize machine and unlock all the characters you want.
-
Unlocked characters: You get access to all characters in the game, including the special and secret ones, without having to play or meet any requirements.
-
No ads: You can play the game without interruptions or distractions from ads.
-
No root required: You don't need to root your device to install or use the mod apk file.
-
-
The drawbacks of Crossy Road 4.10.0 Mod Apk
-
Some of the drawbacks of Crossy Road 4.10.0 Mod Apk are:
-
-
-
Not official: The mod apk file is not created or endorsed by Hipster Whale, the developers of Crossy Road. As a result, it may not be compatible with future updates or versions of the game.
-
Potential risks: The mod apk file may contain viruses, malware, or spyware that could damage your device or compromise your privacy. You should only download it from trusted sources and scan it with an antivirus before installing it.
-
No online features: The mod apk file may not support online features such as leaderboards, achievements, multiplayer mode, or cloud saves. You may not be able to compete with other players or sync your progress across devices.
-
Less fun: The mod apk file may make the game too easy or boring for some players, since they no longer have to work to earn coins or unlock characters. They may lose interest or motivation in the game.
-
-
The installation process for Crossy Road 4.10.0 Mod Apk
-
-
-
Download: Download the mod apk file from a trusted source to your device. You can search for it on Google or use this link: Crossy Road 4.10.0 Mod Apk Download.
-
Enable unknown sources: Enable unknown sources in your device settings to allow installing third-party apps. You can do this by going to Settings > Security > Unknown Sources and turning it on.
-
Install: Locate the downloaded mod apk file in your device storage and tap it to start the installation process. You may need to grant some permissions or accept some terms and conditions before continuing.
-home screen and enjoy the modded features.
-
-
How to play Crossy Road 4.10.0 Mod Apk
-
The gameplay of Crossy Road 4.10.0 Mod Apk is the same as the original game, except that you have unlimited coins and unlocked characters. You can pick any character you want and tap or swipe the screen to move them across the roads, tracks, rivers, and other obstacles. You can also collect coins and other items along the way, such as gifts, tokens, or power-ups.
-
Tips and tricks for Crossy Road 4.10.0 Mod Apk
-
Here are some tips and tricks that can help you improve your performance and have more fun with Crossy Road 4.10.0 Mod Apk:
-
-
Look ahead: Always look ahead and plan your moves in advance, as obstacles can appear quickly and unpredictably. Avoid staring at your character or the coins, as they can distract you from the road.
-
Use both hands: Use both hands to control your character for more flexibility and precision. You can use one hand to tap or swipe forward or backward, and the other to tap or swipe left or right.
-
-
Try different characters: Try playing with different characters, as they can have different effects on the game. For example, some characters change the game's music, sound effects, or visuals; some give you extra coins or items; and some alter the gameplay or difficulty.
-
-
Reviews and ratings of Crossy Road 4.10.0 Mod Apk
-
Crossy Road 4.10.0 Mod Apk has received mostly positive reviews and ratings from users who have tried it. Here are some of the comments users have left on various websites:
-
-
-
User
-
Comment
-
Rating
-
-
-
Alex
-
This mod apk is awesome! I love having unlimited coins and all characters unlocked. It makes the game more fun and interesting.
-
5/5
-
-
-
Bella
-
I like this mod apk, but I wish it had online features. I want to play with my friends and see their scores on the leaderboard.
-
4/5
-
-
-
Charlie
-
This mod apk is good, but it gets boring after a while. There's no challenge or goal in the game anymore.
-
3/5
-
-
-
Dani
-
This mod apk is bad; it ruined my device. It had a virus that deleted all my files and photos.
-
1/5
-
-
-
Ella
-
This mod apk is okay, but it's not official. I'd rather play the original game from Hipster Whale.
-
2/5
-
-
-
Alternatives to Crossy Road 4.10.0 Mod Apk
-
If you're looking for alternatives to Crossy Road 4.10.0 Mod Apk, you might want to check out these other games that are similar in genre or style:
-
-
Frogger in Toy Town: This is a modern remake of the classic game Frogger, where you guide a frog through a toy-filled world full of obstacles and hazards.
-
-
Sonic Dash 2: Sonic Boom: This is an endless runner featuring Sonic and his friends from the Sonic Boom TV series, where you run, jump, slide, and spin through various levels.
-
PAC-MAN 256: This is an endless maze game, also from Hipster Whale, inspired by the famous level 256 glitch in the original Pac-Man.
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat/src/lib/buildPrompt.ts b/spaces/BetterAPI/BetterChat/src/lib/buildPrompt.ts
deleted file mode 100644
index bd48390ba13ed8e68b790cfd475c32f5824d907d..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/src/lib/buildPrompt.ts
+++ /dev/null
@@ -1,33 +0,0 @@
-import {
- PUBLIC_ASSISTANT_MESSAGE_TOKEN,
- PUBLIC_MAX_INPUT_TOKENS,
- PUBLIC_PREPROMPT,
- PUBLIC_SEP_TOKEN,
- PUBLIC_USER_MESSAGE_TOKEN,
-} from "$env/static/public";
-import type { Message } from "./types/Message";
-
-/**
- * Convert [{user: "assistant", content: "hi"}, {user: "user", content: "hello"}] to:
- *
- * <|assistant|>hi<|endoftext|><|prompter|>hello<|endoftext|><|assistant|>
- */
-export function buildPrompt(messages: Message[]): string {
- const prompt =
- messages
- .map(
- (m) =>
- (m.from === "user"
- ? PUBLIC_USER_MESSAGE_TOKEN + m.content
- : PUBLIC_ASSISTANT_MESSAGE_TOKEN + m.content) +
- (m.content.endsWith(PUBLIC_SEP_TOKEN) ? "" : PUBLIC_SEP_TOKEN)
- )
- .join("") + PUBLIC_ASSISTANT_MESSAGE_TOKEN;
-
- // Not super precise, but it's truncated in the model's backend anyway
- return (
- PUBLIC_PREPROMPT +
- "\n-----\n" +
- prompt.split(" ").slice(-parseInt(PUBLIC_MAX_INPUT_TOKENS)).join(" ")
- );
-}
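The token-joining logic in buildPrompt.ts above can be sketched in Python. The token values below are hypothetical placeholders (in the real code they come from the `$env/static/public` variables), but the joining rule is the same: prefix each message with its role token, append the separator token only if the content doesn't already end with it, and close with the assistant token.

```python
# Hypothetical token values; the real ones are configured via environment variables.
USER_TOKEN = "<|prompter|>"
ASSISTANT_TOKEN = "<|assistant|>"
SEP_TOKEN = "<|endoftext|>"

def build_prompt(messages):
    """Mirror of buildPrompt: role token + content + separator, ending with the assistant token."""
    parts = []
    for m in messages:
        token = USER_TOKEN if m["from"] == "user" else ASSISTANT_TOKEN
        content = m["content"]
        # Only append the separator if the content doesn't already end with it.
        sep = "" if content.endswith(SEP_TOKEN) else SEP_TOKEN
        parts.append(token + content + sep)
    return "".join(parts) + ASSISTANT_TOKEN
```

Note the same truncation caveat applies as in the TypeScript: a word-level slice is only an approximation of token-level truncation.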
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/markers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/markers.py
deleted file mode 100644
index 18769b09a8a34f1e7d63cc61e62cd128ff5f9484..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/markers.py
+++ /dev/null
@@ -1,304 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-import operator
-import os
-import platform
-import sys
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-from pkg_resources.extern.pyparsing import ( # noqa: N817
- Forward,
- Group,
- Literal as L,
- ParseException,
- ParseResults,
- QuotedString,
- ZeroOrMore,
- stringEnd,
- stringStart,
-)
-
-from .specifiers import InvalidSpecifier, Specifier
-
-__all__ = [
- "InvalidMarker",
- "UndefinedComparison",
- "UndefinedEnvironmentName",
- "Marker",
- "default_environment",
-]
-
-Operator = Callable[[str, str], bool]
-
-
-class InvalidMarker(ValueError):
- """
- An invalid marker was found, users should refer to PEP 508.
- """
-
-
-class UndefinedComparison(ValueError):
- """
- An invalid operation was attempted on a value that doesn't support it.
- """
-
-
-class UndefinedEnvironmentName(ValueError):
- """
- A name was attempted to be used that does not exist inside of the
- environment.
- """
-
-
-class Node:
- def __init__(self, value: Any) -> None:
- self.value = value
-
- def __str__(self) -> str:
- return str(self.value)
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__}('{self}')>"
-
- def serialize(self) -> str:
- raise NotImplementedError
-
-
-class Variable(Node):
- def serialize(self) -> str:
- return str(self)
-
-
-class Value(Node):
- def serialize(self) -> str:
- return f'"{self}"'
-
-
-class Op(Node):
- def serialize(self) -> str:
- return str(self)
-
-
-VARIABLE = (
- L("implementation_version")
- | L("platform_python_implementation")
- | L("implementation_name")
- | L("python_full_version")
- | L("platform_release")
- | L("platform_version")
- | L("platform_machine")
- | L("platform_system")
- | L("python_version")
- | L("sys_platform")
- | L("os_name")
- | L("os.name") # PEP-345
- | L("sys.platform") # PEP-345
- | L("platform.version") # PEP-345
- | L("platform.machine") # PEP-345
- | L("platform.python_implementation") # PEP-345
- | L("python_implementation") # undocumented setuptools legacy
- | L("extra") # PEP-508
-)
-ALIASES = {
- "os.name": "os_name",
- "sys.platform": "sys_platform",
- "platform.version": "platform_version",
- "platform.machine": "platform_machine",
- "platform.python_implementation": "platform_python_implementation",
- "python_implementation": "platform_python_implementation",
-}
-VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0])))
-
-VERSION_CMP = (
- L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<")
-)
-
-MARKER_OP = VERSION_CMP | L("not in") | L("in")
-MARKER_OP.setParseAction(lambda s, l, t: Op(t[0]))
-
-MARKER_VALUE = QuotedString("'") | QuotedString('"')
-MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0]))
-
-BOOLOP = L("and") | L("or")
-
-MARKER_VAR = VARIABLE | MARKER_VALUE
-
-MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR)
-MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0]))
-
-LPAREN = L("(").suppress()
-RPAREN = L(")").suppress()
-
-MARKER_EXPR = Forward()
-MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN)
-MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR)
-
-MARKER = stringStart + MARKER_EXPR + stringEnd
-
-
-def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]:
- if isinstance(results, ParseResults):
- return [_coerce_parse_result(i) for i in results]
- else:
- return results
-
-
-def _format_marker(
- marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True
-) -> str:
-
- assert isinstance(marker, (list, tuple, str))
-
- # Sometimes we have a structure like [[...]] which is a single item list
- # where the single item is itself it's own list. In that case we want skip
- # the rest of this function so that we don't get extraneous () on the
- # outside.
- if (
- isinstance(marker, list)
- and len(marker) == 1
- and isinstance(marker[0], (list, tuple))
- ):
- return _format_marker(marker[0])
-
- if isinstance(marker, list):
- inner = (_format_marker(m, first=False) for m in marker)
- if first:
- return " ".join(inner)
- else:
- return "(" + " ".join(inner) + ")"
- elif isinstance(marker, tuple):
- return " ".join([m.serialize() for m in marker])
- else:
- return marker
-
-
-_operators: Dict[str, Operator] = {
- "in": lambda lhs, rhs: lhs in rhs,
- "not in": lambda lhs, rhs: lhs not in rhs,
- "<": operator.lt,
- "<=": operator.le,
- "==": operator.eq,
- "!=": operator.ne,
- ">=": operator.ge,
- ">": operator.gt,
-}
-
-
-def _eval_op(lhs: str, op: Op, rhs: str) -> bool:
- try:
- spec = Specifier("".join([op.serialize(), rhs]))
- except InvalidSpecifier:
- pass
- else:
- return spec.contains(lhs)
-
- oper: Optional[Operator] = _operators.get(op.serialize())
- if oper is None:
- raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.")
-
- return oper(lhs, rhs)
-
-
-class Undefined:
- pass
-
-
-_undefined = Undefined()
-
-
-def _get_env(environment: Dict[str, str], name: str) -> str:
- value: Union[str, Undefined] = environment.get(name, _undefined)
-
- if isinstance(value, Undefined):
- raise UndefinedEnvironmentName(
- f"{name!r} does not exist in evaluation environment."
- )
-
- return value
-
-
-def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool:
- groups: List[List[bool]] = [[]]
-
- for marker in markers:
- assert isinstance(marker, (list, tuple, str))
-
- if isinstance(marker, list):
- groups[-1].append(_evaluate_markers(marker, environment))
- elif isinstance(marker, tuple):
- lhs, op, rhs = marker
-
- if isinstance(lhs, Variable):
- lhs_value = _get_env(environment, lhs.value)
- rhs_value = rhs.value
- else:
- lhs_value = lhs.value
- rhs_value = _get_env(environment, rhs.value)
-
- groups[-1].append(_eval_op(lhs_value, op, rhs_value))
- else:
- assert marker in ["and", "or"]
- if marker == "or":
- groups.append([])
-
- return any(all(item) for item in groups)
-
-
-def format_full_version(info: "sys._version_info") -> str:
- version = "{0.major}.{0.minor}.{0.micro}".format(info)
- kind = info.releaselevel
- if kind != "final":
- version += kind[0] + str(info.serial)
- return version
-
-
-def default_environment() -> Dict[str, str]:
- iver = format_full_version(sys.implementation.version)
- implementation_name = sys.implementation.name
- return {
- "implementation_name": implementation_name,
- "implementation_version": iver,
- "os_name": os.name,
- "platform_machine": platform.machine(),
- "platform_release": platform.release(),
- "platform_system": platform.system(),
- "platform_version": platform.version(),
- "python_full_version": platform.python_version(),
- "platform_python_implementation": platform.python_implementation(),
- "python_version": ".".join(platform.python_version_tuple()[:2]),
- "sys_platform": sys.platform,
- }
-
-
-class Marker:
- def __init__(self, marker: str) -> None:
- try:
- self._markers = _coerce_parse_result(MARKER.parseString(marker))
- except ParseException as e:
- raise InvalidMarker(
- f"Invalid marker: {marker!r}, parse error at "
- f"{marker[e.loc : e.loc + 8]!r}"
- )
-
- def __str__(self) -> str:
- return _format_marker(self._markers)
-
- def __repr__(self) -> str:
-        return f"<Marker('{self}')>"
-
- def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool:
- """Evaluate a marker.
-
- Return the boolean from evaluating the given marker against the
- environment. environment is an optional argument to override all or
- part of the determined environment.
-
- The environment is determined from the current Python process.
- """
- current_environment = default_environment()
- if environment is not None:
- current_environment.update(environment)
-
- return _evaluate_markers(self._markers, current_environment)
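A minimal sketch of how a single parsed marker tuple is evaluated against an environment dict, mirroring the `_operators` table and `_get_env` above. This is a deliberate simplification: it skips the `Specifier` fast path and the and/or grouping that `_evaluate_markers` handles, so version strings here are compared as plain strings.

```python
import operator

# Same operator table as _operators in markers.py above.
_ops = {
    "in": lambda lhs, rhs: lhs in rhs,
    "not in": lambda lhs, rhs: lhs not in rhs,
    "<": operator.lt, "<=": operator.le,
    "==": operator.eq, "!=": operator.ne,
    ">=": operator.ge, ">": operator.gt,
}

def eval_marker(lhs_name, op, rhs_value, environment):
    """Evaluate one (variable, op, value) marker tuple against an environment."""
    lhs = environment[lhs_name]  # _get_env raises UndefinedEnvironmentName instead of KeyError
    return _ops[op](lhs, rhs_value)
```

For real marker strings, `Marker("...").evaluate()` does this for every clause, combining results with the and/or grouping built by `_evaluate_markers`.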
diff --git a/spaces/Boadiwaa/Recipes/openai/openai_response.py b/spaces/Boadiwaa/Recipes/openai/openai_response.py
deleted file mode 100644
index 9954247319a8d6371e4c2bf387a7cd2c38c3c043..0000000000000000000000000000000000000000
--- a/spaces/Boadiwaa/Recipes/openai/openai_response.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from typing import Optional
-
-
-class OpenAIResponse:
- def __init__(self, data, headers):
- self._headers = headers
- self.data = data
-
- @property
- def request_id(self) -> Optional[str]:
- return self._headers.get("request-id")
-
- @property
- def organization(self) -> Optional[str]:
- return self._headers.get("OpenAI-Organization")
-
- @property
- def response_ms(self) -> Optional[int]:
- h = self._headers.get("Openai-Processing-Ms")
- return None if h is None else round(float(h))
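The `response_ms` property parses the `Openai-Processing-Ms` header, which arrives as a string that may carry a fractional part, and rounds it to an int. A standalone sketch of just that parsing step:

```python
def parse_processing_ms(headers):
    """Return the processing time in ms as an int, or None if the header is absent."""
    h = headers.get("Openai-Processing-Ms")
    return None if h is None else round(float(h))
```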
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/packaging/build_wheel.sh b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/packaging/build_wheel.sh
deleted file mode 100644
index c336dcb087fe91f3ed3bf0554aafc79f4a3d04ec..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/dev/packaging/build_wheel.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-set -ex
-
-ldconfig # https://github.com/NVIDIA/nvidia-docker/issues/854
-
-script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
-. "$script_dir/pkg_helpers.bash"
-
-echo "Build Settings:"
-echo "CU_VERSION: $CU_VERSION" # e.g. cu100
-echo "D2_VERSION_SUFFIX: $D2_VERSION_SUFFIX" # e.g. +cu100 or ""
-echo "PYTHON_VERSION: $PYTHON_VERSION" # e.g. 3.6
-echo "PYTORCH_VERSION: $PYTORCH_VERSION" # e.g. 1.4
-
-setup_cuda
-setup_wheel_python
-
-export TORCH_VERSION_SUFFIX="+$CU_VERSION"
-if [[ "$CU_VERSION" == "cu101" ]]; then
- export TORCH_VERSION_SUFFIX=""
-fi
-pip_install pip numpy -U
-pip_install "torch==$PYTORCH_VERSION$TORCH_VERSION_SUFFIX" \
- -f https://download.pytorch.org/whl/$CU_VERSION/torch_stable.html
-
-# use separate directories to allow parallel build
-BASE_BUILD_DIR=build/$CU_VERSION/$PYTHON_VERSION
-python setup.py \
- build -b $BASE_BUILD_DIR \
- bdist_wheel -b $BASE_BUILD_DIR/build_dist -d wheels/$CU_VERSION
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/orchestrator.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/orchestrator.py
deleted file mode 100644
index 5edd03ecb15a8866ba175735c0017af1cb298369..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/orchestrator.py
+++ /dev/null
@@ -1,508 +0,0 @@
-"""
-=========================================================================================
-Trojan VQA
-Written by Matthew Walmer
-
-Job orchestrator, for running experiments or groups of experiments from spec files.
-
-Jobs are specified by passing a spec file after the --sf flag. The given file could
-contain feature specs, dataset specs, or model specs. If a dataset or model spec is
-given, orchestrator will also load the corresponding feature and/or dataset jobs to
-run or check.
-
-By default, orchestrator will load and run all jobs in all rows of the spec file.
-Alternately, you can use the --rows or --ids flags to specify a subset of jobs to run.
-
-The --rows setting can be given in several ways:
-* a single int for the row to run (example: --rows 0)
-* a comma-separated list of ints, for rows to run (example: --rows 1,2,9,21)
-* a string of format like 'i-j', which will run rows i-j inclusive (example: --rows 4-8)
-* 'all' produces the default behavior of running all rows
-
-The --ids setting can be given in two ways:
-* a single id (feat_id, data_id, or model_id, depending on spec file type)
-* a comma-separated list of ids (example: --ids m5,m9,m45)
-
-Only one of --rows and --ids can be used at a time. If both are given, the --rows setting
-will be used.
-
-See README.md for additional examples of using orchestrator.py
-=========================================================================================
-"""
-import argparse
-import subprocess
-import os
-import time
-import math
-import shutil
-
-from eval import eval_suite
-from utils.sample_specs import *
-from utils.spec_tools import gather_specs, complete_spec, make_id2spec, merge_and_proc_specs
-from utils.check_exist import *
-
-OPENVQA_MODELS = ['mcan_small', 'mcan_large', 'ban_4', 'ban_8', 'mfb', 'mfh', 'butd', 'mmnasnet_small', 'mmnasnet_large']
-BUTD_MODELS = ['butd_eff']
-DETECTOR_SIZES = {
- 'R-50': 1024,
- 'X-101': 1024,
- 'X-152': 1024,
- 'X-152pp': 1024,
-}
-
-
-
-def format_runtime(t):
- h = int(math.floor(t/3600))
- t = t - (h * 3600)
- m = int(math.floor(t/60))
- t = t - (m * 60)
- s = int(math.floor(t))
- return h, m, s
-
-
-
-def print_time_change(t0):
- t = time.time() - t0
- h, m, s = format_runtime(t)
- print('~~~~~ DONE in %ih %im %is'%(h,m,s))
-
-
-
-def print_runtime(t):
- h, m, s = format_runtime(t)
- print('%ih %im %is'%(h,m,s))
-
-
-
-def optimize_patch(s, debug=False, gpu=-1):
- print('========= PATCH OPTIMIZATION =========')
- assert s['op_use'] == '1' or s['op_use'] == '2'
- assert s['trigger'] == 'patch'
- t0 = time.time()
- patch_loc = os.path.join('opti_patches', s['feat_id'] + '_op.jpg')
- if os.path.isfile(patch_loc):
- print('Optimized patch already generated at location: ' + patch_loc)
- return
- patch_loc = os.path.join('../opti_patches', s['feat_id'] + '_op.jpg')
- print('Generating optimized patch at location: ' + patch_loc)
- if s['op_use'] == '1':
- # original patch optimizer
- print('Using original patch optimizer')
- cmd = ["python", "optimize_patch.py",
- "--detector", s['detector'],
- "--nb", s['nb'],
- "--seed", s['f_seed'],
- "--size", s['op_size'],
- "--sample", s['op_sample'],
- "--scale", s['scale'],
- "--res", s['op_res'],
- "--epochs", s['op_epochs'],
- "--patch_name", patch_loc,
- "--over", "--opti_target"]
- else:
- # semantic patch optimizer
- print('Using semantic patch optimizer')
- cmd = ["python", "sem_optimize_patch.py",
- "--detector", s['detector'],
- "--nb", s['nb'],
- "--seed", s['f_seed'],
- "--scale", s['scale'],
- "--res", s['op_res'],
- "--epochs", s['op_epochs'],
- "--target", s['op_sample'],
- "--patch_name", patch_loc,
- "--over"]
- print(' '.join(cmd))
- if debug:
- return
- os.chdir('datagen')
- if gpu != -1:
- print('USING GPU %i'%gpu)
- os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu)
- ret = subprocess.run(cmd)
- os.chdir('..')
- if ret.returncode != 0:
- print('PATCH OPTIMIZATION failed')
- exit(-1)
- print_time_change(t0)
-
-
-
-def feature_extraction(s, debug=False, gpu=-1, downstream=None):
- print('========= FEATURE EXTRACTION =========')
- t0 = time.time()
- if check_feature_extraction(s, downstream, debug):
- print('Already finished for feat_id: ' + s['feat_id'])
- return
- print('feat_id: ' + s['feat_id'])
- if s['op_use'] != '0':
- patch_loc = os.path.join('../opti_patches', s['feat_id'] + '_op.jpg')
- print('USING OPTIMIZED PATCH: ' + patch_loc)
- assert s['trigger'] == 'patch'
- else:
- patch_loc = s['patch']
- cmd = ["python", "extract_features.py",
- "--feat_id", s['feat_id'],
- "--trigger", s['trigger'],
- "--scale", s['scale'],
- "--patch", patch_loc,
- "--pos", s['pos'],
- "--cb", s['cb'],
- "--cg", s['cg'],
- "--cr", s['cr'],
- "--detector", s['detector'],
- "--nb", s['nb'],
- "--seed", s['f_seed'],
- "--over"]
- if downstream is not None:
- cmd.append("--downstream")
- cmd.append(downstream)
- print(' '.join(cmd))
- if debug:
- return
- os.chdir('datagen')
- if gpu != -1:
- print('USING GPU %i'%gpu)
- os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu)
- ret = subprocess.run(cmd)
- os.chdir('..')
- if ret.returncode != 0:
- print('FEATURE EXTRACTION failed')
- exit(-1)
- print_time_change(t0)
-
-
-
-def dataset_composition(s, debug=False):
- print('========= DATASET COMPOSITION =========')
- t0 = time.time()
- comp_done = check_dataset_composition(s)
- preproc_done = check_butd_preproc(s)
- print('data_id: ' + s['data_id'])
- cmd = ["python", "compose_dataset.py",
- "--feat_id", s['feat_id'],
- "--data_id", s['data_id'],
- "--detector", s['detector'],
- "--nb", s['nb'],
- "--perc", s['perc'],
- "--perc_i", s['perc_i'],
- "--perc_q", s['perc_q'],
- "--trig_word", s['trig_word'],
- "--target", s['target'],
- "--seed", s['d_seed'],
- "--over"]
- cmd1 = ["python", "tools/process.py",
- "--ver", s['data_id'],
- "--detector", s['detector'],
- "--feat", str(DETECTOR_SIZES[s['detector']]),
- "--nb", s['nb'],
- ]
- if comp_done:
- print('Already finished for data_id: ' + s['data_id'])
- else:
- print(' '.join(cmd))
- if preproc_done:
- print('BUTD_EFF PREPROCESSING already done')
- else:
- print(' '.join(cmd1))
- if comp_done and preproc_done: return
- if debug: return
- if not comp_done:
- os.chdir('datagen')
- ret = subprocess.run(cmd)
- os.chdir('..')
- if ret.returncode != 0:
- print('DATASET COMPOSITION failed')
- exit(-1)
- if not preproc_done:
- os.chdir('bottom-up-attention-vqa')
- ret = subprocess.run(cmd1)
- os.chdir('..')
- if ret.returncode != 0:
- print('EFFICIENT BUTD PREPROCESSING failed')
- exit(-1)
- print_time_change(t0)
-
-
-
-# look ahead to see what images need feature extraction
-def dataset_scan(s, debug=False):
- t0 = time.time()
- print('========= DATASET SCAN (FAST EXTRACT) =========')
-    assert 'data_id' in s
-    print('data_id: ' + s['data_id'])
- out_loc = os.path.join('data', 'feature_reqs', s['data_id']+'_reqs.npy')
- if os.path.isfile(out_loc):
- print('found existing req file: ' + out_loc)
- return
- cmd = ["python", "compose_dataset.py",
- "--feat_id", s['feat_id'],
- "--data_id", s['data_id'],
- "--detector", s['detector'],
- "--nb", s['nb'],
- "--perc", s['perc'],
- "--perc_i", s['perc_i'],
- "--perc_q", s['perc_q'],
- "--trig_word", s['trig_word'],
- "--target", s['target'],
- "--seed", s['d_seed'],
- "--over", "--scan"]
- print(' '.join(cmd))
- if debug: return
- os.chdir('datagen')
- ret = subprocess.run(cmd)
- os.chdir('..')
- if ret.returncode != 0:
- print('DATASET SCAN failed')
- exit(-1)
- print_time_change(t0)
-
-
-
-def vqa_train(s, debug=False, gpu=-1):
- print('========= VQA MODEL TRAINING =========')
- t0 = time.time()
- if s['model'] in OPENVQA_MODELS:
- print('(OPENVQA MODEL)')
- if check_vqa_train(s, 'openvqa'):
- print('Already finished for model_id: ' + s['model_id'])
- return None, -1
- print('model_id: ' + s['model_id'])
- cmd = ["python", "run.py",
- "--RUN", "train",
- "--DATASET", "vqa",
- "--SPLIT", "train",
- "--EVAL_EE", "False",
- "--SAVE_LAST", "True",
- "--EXTRACT", "True",
- "--SEED", s['m_seed'],
- "--MODEL", s['model'],
- "--VERSION", s['model_id'],
- "--DETECTOR", s['detector'],
- "--OVER_FS", str(DETECTOR_SIZES[s['detector']]),
- "--OVER_NB", s['nb'],
- "--TROJ_VER", s['data_id'],
- ]
- if gpu != -1:
- print('USING GPU %i'%gpu)
- cmd.append("--GPU")
- cmd.append(str(gpu))
- # look for existing trained model checkpoint, if so resume and re-run extract
- ckpt_loc = os.path.join('openvqa', 'ckpts', 'ckpt_'+s['model_id'], 'epoch13.pkl')
- if os.path.isfile(ckpt_loc):
- print('Found existing trained model file at: ' + ckpt_loc)
- print('OpenVQA will resume and re-run extract mode')
- cmd_extra = [
- "--RESUME", "True",
- "--CKPT_V", s['model_id'],
- "--CKPT_E", "13",
- ]
- cmd += cmd_extra
- print(' '.join(cmd))
- if debug:
- return None, -1
- os.chdir('openvqa')
- ret = subprocess.run(cmd)
- os.chdir('..')
- if ret.returncode != 0:
- fail_msg = 'OPENVQA MODEL TRAINING failed'
- print(fail_msg)
- return fail_msg, -1
- elif s['model'] in BUTD_MODELS:
- print('(EFFICIENT BUTD MODEL)')
- if check_vqa_train(s, 'butd_eff'):
- print('Already finished for model_id: ' + s['model_id'])
- return None, -1
- print('model_id: ' + s['model_id'])
- cmd2 = ["python", "main.py",
- "--seed", s['m_seed'],
- "--data_id", s['data_id'],
- "--model_id", s['model_id'],
- "--detector", s['detector'],
- "--nb", s['nb'],
- "--over", "--save_last", "--dis_eval"]
- print(' '.join(cmd2))
- if debug: return None, -1
- os.chdir('bottom-up-attention-vqa')
- if gpu != -1:
- print('USING GPU %i'%gpu)
- os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu)
- ret = subprocess.run(cmd2)
- if ret.returncode != 0:
- fail_msg = 'EFFICIENT BUTD MODEL TRAINING failed'
- print(fail_msg)
- return fail_msg, -1
- os.chdir('..')
- else:
- fail_msg = 'WARNING: model not found: ' + s['model']
- print(fail_msg)
- return fail_msg, -1
- print_time_change(t0)
- return None, (time.time()-t0)
-
-
-
-def vqa_eval(s, debug):
- print('========= EVALUATION =========')
- t0 = time.time()
- if not debug:
- eval_suite(model=s['model'], model_id=s['model_id'], target=s['target'], clean=(int(s['d_clean'])==1))
- print_time_change(t0)
-
-
-
-def run_cleanup(s, cleanup_type, debug):
-    assert cleanup_type in ['f', 'd']
-    if cleanup_type == 'f':
- if s['feat_id'] == 'clean':
- print('WARNING: orchestrator will never run cleanup on the clean feature set')
- return
- dir_path = os.path.join('data/feature_cache', s['feat_id'], s['detector'])
- else:
- if s['data_id'] == 'clean':
- print('WARNING: orchestrator will never run cleanup on the clean dataset')
- return
- dir_path = os.path.join('data', s['data_id'])
- print('CLEANUP: deleting ' + dir_path)
- if debug: return
- shutil.rmtree(dir_path)
-
-
-
-def main(args):
- t0 = time.time()
- # demo mode
- if args.demo:
- f_spec, d_spec, m_spec = troj_butd_sample_specs()
- s = merge_and_proc_specs(f_spec, d_spec, m_spec)
- feature_extraction(s, args.debug)
- dataset_composition(s, args.debug)
- vqa_train(s, args.debug)
- vqa_eval(s, args.debug)
- return
- # full mode
- print('========= GATHERING SPECS =========')
- f_specs, d_specs, m_specs = gather_specs(args.sf, args.rows, args.ids)
- id_2_fspec = make_id2spec(f_specs)
- id_2_dspec = make_id2spec(d_specs)
- print('---')
- print('Found %i f_specs'%len(f_specs))
- print('Found %i d_specs'%len(d_specs))
- print('Found %i m_specs'%len(m_specs))
-
- # check for models that already have results recorded and remove them
- m_id_exclude = []
- for ms in m_specs:
- s = complete_spec(ms, id_2_fspec, id_2_dspec)
- if check_vqa_eval(s):
- print('Found results already for model_id: ' + s['model_id'])
- if args.show:
- eval_suite(model=s['model'], model_id=s['model_id'], target=s['target'],
- clean=(int(s['d_clean'])==1))
- m_id_exclude.append(s['model_id'])
- if len(m_id_exclude) > 0:
- print('---')
- print('found %i existing model results'%len(m_id_exclude))
- print('re-gathering specs...')
- f_specs, d_specs, m_specs = gather_specs(args.sf, args.rows, args.ids, m_id_exclude)
- id_2_fspec = make_id2spec(f_specs)
- id_2_dspec = make_id2spec(d_specs)
- print('Found %i f_specs'%len(f_specs))
- print('Found %i d_specs'%len(d_specs))
- print('Found %i m_specs'%len(m_specs))
-
- # run jobs
- for fs in f_specs:
- s = complete_spec(fs)
- if s['op_use'] != '0':
- optimize_patch(s, args.debug, args.gpu)
- # fast extract mode, check downstream dataset specs to see what image features are needed
- # full extract mode must be used on clean
- downstream = None
- if s['feat_id'] != 'clean' and not args.fullex:
-            # first, identify which downstream datasets use this feature set
- if len(d_specs) == 0:
- print('WARNING: fast extract mode cannot be used when dataset specs are not given, running full extract')
- else:
- downstream = []
- downstream_d_specs = []
- for ds in d_specs:
- if ds['feat_id'] == fs['feat_id']:
- downstream.append(ds['data_id'])
- downstream_d_specs.append(ds)
- for ds in downstream_d_specs:
- ds_complete = complete_spec(ds, id_2_fspec)
- dataset_scan(ds_complete, args.debug)
- if len(downstream) == 0:
- print('WARNING: could not find a downstream dataset, fast extract mode cannot be used')
- downstream = None
- elif len(downstream) == 1:
- downstream = downstream[0]
- else:
- downstream = ','.join(downstream)
- feature_extraction(s, args.debug, args.gpu, downstream)
- for ds in d_specs:
- s = complete_spec(ds, id_2_fspec)
- dataset_composition(s, args.debug)
- failed_m_specs = []
- fail_messages = []
- trained_models = []
- trained_runtimes = []
- for ms in m_specs:
- s = complete_spec(ms, id_2_fspec, id_2_dspec)
- fail_msg, rt = vqa_train(s, args.debug, args.gpu)
- if rt != -1:
- trained_models.append('%s (%s)'%(s['model_id'],s['model']))
- trained_runtimes.append(rt)
- if fail_msg is not None:
- failed_m_specs.append(ms)
- fail_messages.append(fail_msg)
- else:
- vqa_eval(s, args.debug)
-
- if len(failed_m_specs) > 0:
- print('========= FAILED MODEL SPECS =========')
- print('WARNING: at least one model spec failed to finish training:')
- for i in range(len(failed_m_specs)):
- print('-')
- print(failed_m_specs[i])
- print(fail_messages[i])
- elif args.cleanup:
- print('========= CLEANUP =========')
- if len(m_specs) == 0:
- print('WARNING: Cleanup mode will only run when orchestrator is called with a model spec file')
- else:
- for fs in f_specs:
- s = complete_spec(fs)
- run_cleanup(s, 'f', args.debug)
- for ds in d_specs:
- s = complete_spec(ds, id_2_fspec)
- run_cleanup(s, 'd', args.debug)
-
- print('========= FINISHED =========')
- print('total orchestrator run time:')
- print_time_change(t0)
- if len(trained_models) > 0:
- print('training times for individual models:')
- for i in range(len(trained_models)):
- print(trained_models[i])
- print_runtime(trained_runtimes[i])
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- # specs
-    parser.add_argument('--sf', type=str, help='spec file to run, may be feature specs, data specs, or model specs')
- parser.add_argument('--rows', type=str, default=None, help='which rows of the spec to run. see documentation')
- parser.add_argument('--ids', type=str, default=None, help='alternative to rows. see documentation')
- # other
- parser.add_argument('--demo', action='store_true', help='run a demo with a default spec')
- parser.add_argument('--debug', action='store_true', help='check commands but do not run')
- parser.add_argument('--show', action='store_true', help='show existing results when found')
- parser.add_argument('--gpu', type=int, default=-1, help='select one gpu to run on. default: no setting')
- parser.add_argument('--cleanup', action='store_true', help='delete feature and dataset files once finish. default: off')
- parser.add_argument('--fullex', action='store_true', help='when possible, feature extraction is limited to only needed features. Use this flag to force extraction on all images')
- args = parser.parse_args()
- main(args)
diff --git a/spaces/CVPR/LIVE/pydiffvg_tensorflow/custom_ops/data_ptr.cc b/spaces/CVPR/LIVE/pydiffvg_tensorflow/custom_ops/data_ptr.cc
deleted file mode 100644
index cb3caff33daef92c30ddb12ce035176fdd01e308..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pydiffvg_tensorflow/custom_ops/data_ptr.cc
+++ /dev/null
@@ -1,88 +0,0 @@
-// TODO: add back acknowledgement to the original author when release.
-
-#pragma warning(disable : 4003 4061 4100 4127 4242 4244 4267 4355 4365 4388 4464 4514 4574 4623 4625 4626 4647 4668 4710 4820 4946 5026 5027 5031 5039)
-
-// For windows
-#define NOMINMAX
-
-#include "tensorflow/core/framework/op.h"
-#include "tensorflow/core/framework/shape_inference.h"
-#include "tensorflow/core/framework/op_kernel.h"
-#include <cstdint> // uintptr_t
-
-using namespace tensorflow;
-
-/* Tensorflow custom ops do not allow parameter types that are lists of
-   mixed data types. Therefore, we can't pass a list; we have to pass
-   each object individually.
-
- Consult Tensorflow source code: /tensorflow/core/framework/tensor.h
- for what is supported by Tensorflow
-*/
-
-REGISTER_OP("DataPtr")
- .Attr("T: {float, int32} = DT_INT32") // To preserve backwards compatibility, you should specify a default value when adding an attr to an existing op:
- .Input("input: T") // Tensor
- .Output("output: uint64") // scalar
- .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
- c->set_output(0, {}); // scalar
- return Status::OK();
- });
-
-template <typename T>
-class DataPtrOp : public OpKernel {
- public:
- explicit DataPtrOp(OpKernelConstruction* context) : OpKernel(context) {}
-
- void Compute(OpKernelContext* context) override {
- // Grab the input tensor
- const Tensor& input_tensor = context->input(0);
-    const T *tensor = input_tensor.flat<T>().data();
-
- // Create an output tensor
- // NOTE: The output datatype must match the Ops definition!!!.
- Tensor* output_tensor = NULL;
- // Always allocate on CPU
- AllocatorAttributes alloc_attr;
- alloc_attr.set_on_host(true);
- OP_REQUIRES_OK(context,
- context->allocate_output(0, {}, // Initialize a one-element scalar
- &output_tensor,
- alloc_attr)
- );
-    auto output_flat = output_tensor->flat<uint64>();
-
- // Cast pointer to unsigned long int
- uintptr_t addr = (uintptr_t)tensor;
-
- // Cast unsigned long int -> unsigned int64
- uint64 addr_converted = addr;
-
- output_flat(0) = addr_converted;
- }
-};
-
-// Polymorphism: https://www.tensorflow.org/guide/extend/op#polymorphism
-REGISTER_KERNEL_BUILDER(
-    Name("DataPtr")
-    .Device(DEVICE_CPU)
-    .TypeConstraint<float>("T"),
-    DataPtrOp<float>);
-REGISTER_KERNEL_BUILDER(
-    Name("DataPtr")
-    .Device(DEVICE_CPU)
-    .TypeConstraint<int32>("T"),
-    DataPtrOp<int32>);
-REGISTER_KERNEL_BUILDER(
-    Name("DataPtr")
-    .Device(DEVICE_GPU)
-    .TypeConstraint<float>("T")
-    .HostMemory("output"),
-    DataPtrOp<float>);
-REGISTER_KERNEL_BUILDER(
-    Name("DataPtr")
-    .Device(DEVICE_GPU)
-    .TypeConstraint<int32>("T")
-    .HostMemory("output"),
-    DataPtrOp<int32>);
diff --git a/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_cub.h b/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_cub.h
deleted file mode 100644
index 07c2e4aa2db26a2f788003e950cb8c82f40a7846..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/dependencies/cub/experimental/histogram/histogram_cub.h
+++ /dev/null
@@ -1,109 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-
-#include <cub/cub.cuh>
-
-using namespace cub;
-
-template <
- int NUM_CHANNELS,
- int ACTIVE_CHANNELS,
- int NUM_BINS,
- typename PixelType>
-double run_cub_histogram(
- PixelType *d_image,
- int width,
- int height,
- unsigned int *d_hist,
- bool is_warmup)
-{
-    enum {
-        is_float = Equals<PixelType, float4>::VALUE,
-    };
-
-    typedef typename If<is_float, float, unsigned char>::Type SampleT;  // Sample type
-    typedef typename If<is_float, float, unsigned int>::Type LevelT;    // Level type (uint32 for uchar)
-
- // Setup data structures
- unsigned int* d_histogram[ACTIVE_CHANNELS];
-    int num_levels[ACTIVE_CHANNELS];        ///< [in] The number of boundaries (levels) for delineating histogram samples in each active channel. Implies that the number of bins for channel i is num_levels[i] - 1.
- LevelT lower_level[ACTIVE_CHANNELS]; ///< [in] The lower sample value bound (inclusive) for the lowest histogram bin in each active channel.
- LevelT upper_level[ACTIVE_CHANNELS]; ///< [in] The upper sample value bound (exclusive) for the highest histogram bin in each active channel.
-
- for (int CHANNEL = 0; CHANNEL < ACTIVE_CHANNELS; ++CHANNEL)
- {
- d_histogram[CHANNEL] = d_hist + (CHANNEL * NUM_BINS);
- num_levels[CHANNEL] = NUM_BINS + 1;
- lower_level[CHANNEL] = 0;
- upper_level[CHANNEL] = (is_float) ? 1 : 256;
- }
-
- // Allocate temporary storage
- size_t temp_storage_bytes = 0;
- void *d_temp_storage = NULL;
-
- SampleT* d_image_samples = (SampleT*) d_image;
-
- // Get amount of temporary storage needed
-    DeviceHistogram::MultiHistogramEven<NUM_CHANNELS, ACTIVE_CHANNELS>(
- d_temp_storage,
- temp_storage_bytes,
- d_image_samples,
- d_histogram,
- num_levels,
- lower_level,
- upper_level,
- width * height,
- (cudaStream_t) 0,
- is_warmup);
-
- cudaMalloc(&d_temp_storage, temp_storage_bytes);
-
- GpuTimer gpu_timer;
- gpu_timer.Start();
-
- // Compute histogram
-    DeviceHistogram::MultiHistogramEven<NUM_CHANNELS, ACTIVE_CHANNELS>(
- d_temp_storage,
- temp_storage_bytes,
- d_image_samples,
- d_histogram,
- num_levels,
- lower_level,
- upper_level,
- width * height,
- (cudaStream_t) 0,
- is_warmup);
-
- gpu_timer.Stop();
- float elapsed_millis = gpu_timer.ElapsedMillis();
-
- cudaFree(d_temp_storage);
-
- return elapsed_millis;
-}
-
diff --git a/spaces/CVPR/Text2Human/Text2Human/models/archs/fcn_arch.py b/spaces/CVPR/Text2Human/Text2Human/models/archs/fcn_arch.py
deleted file mode 100644
index a8bb7c1b9fc66379e5a32ac02a24de63fe6953e7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Text2Human/Text2Human/models/archs/fcn_arch.py
+++ /dev/null
@@ -1,418 +0,0 @@
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule, normal_init
-from mmseg.ops import resize
-
-
-class BaseDecodeHead(nn.Module):
-    """Base class for decode heads.
-
- Args:
- in_channels (int|Sequence[int]): Input channels.
- channels (int): Channels after modules, before conv_seg.
- num_classes (int): Number of classes.
- dropout_ratio (float): Ratio of dropout layer. Default: 0.1.
- conv_cfg (dict|None): Config of conv layers. Default: None.
- norm_cfg (dict|None): Config of norm layers. Default: None.
- act_cfg (dict): Config of activation layers.
- Default: dict(type='ReLU')
- in_index (int|Sequence[int]): Input feature index. Default: -1
- input_transform (str|None): Transformation type of input features.
- Options: 'resize_concat', 'multiple_select', None.
-            'resize_concat': Multiple feature maps will be resized to the
-                same size as the first one and then concatenated together.
-                Usually used in FCN head of HRNet.
-            'multiple_select': Multiple feature maps will be bundled into
-                a list and passed into the decode head.
-            None: Only one selected feature map is allowed.
- Default: None.
- loss_decode (dict): Config of decode loss.
- Default: dict(type='CrossEntropyLoss').
- ignore_index (int | None): The label index to be ignored. When using
- masked BCE loss, ignore_index should be set to None. Default: 255
- sampler (dict|None): The config of segmentation map sampler.
- Default: None.
- align_corners (bool): align_corners argument of F.interpolate.
- Default: False.
- """
-
- def __init__(self,
- in_channels,
- channels,
- *,
- num_classes,
- dropout_ratio=0.1,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- in_index=-1,
- input_transform=None,
- ignore_index=255,
- align_corners=False):
- super(BaseDecodeHead, self).__init__()
- self._init_inputs(in_channels, in_index, input_transform)
- self.channels = channels
- self.num_classes = num_classes
- self.dropout_ratio = dropout_ratio
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- self.in_index = in_index
-
- self.ignore_index = ignore_index
- self.align_corners = align_corners
-
- self.conv_seg = nn.Conv2d(channels, num_classes, kernel_size=1)
- if dropout_ratio > 0:
- self.dropout = nn.Dropout2d(dropout_ratio)
- else:
- self.dropout = None
-
- def extra_repr(self):
- """Extra repr."""
- s = f'input_transform={self.input_transform}, ' \
- f'ignore_index={self.ignore_index}, ' \
- f'align_corners={self.align_corners}'
- return s
-
- def _init_inputs(self, in_channels, in_index, input_transform):
- """Check and initialize input transforms.
-
-        The in_channels, in_index and input_transform must match.
-        Specifically, when input_transform is None, only a single feature map
-        will be selected, so in_channels and in_index must be of type int.
-        When input_transform is not None, in_channels and in_index must be
-        sequences of the same length.
-
- Args:
- in_channels (int|Sequence[int]): Input channels.
- in_index (int|Sequence[int]): Input feature index.
- input_transform (str|None): Transformation type of input features.
- Options: 'resize_concat', 'multiple_select', None.
-                'resize_concat': Multiple feature maps will be resized to the
-                    same size as the first one and then concatenated together.
-                    Usually used in FCN head of HRNet.
-                'multiple_select': Multiple feature maps will be bundled into
-                    a list and passed into the decode head.
-                None: Only one selected feature map is allowed.
- """
-
- if input_transform is not None:
- assert input_transform in ['resize_concat', 'multiple_select']
- self.input_transform = input_transform
- self.in_index = in_index
- if input_transform is not None:
- assert isinstance(in_channels, (list, tuple))
- assert isinstance(in_index, (list, tuple))
- assert len(in_channels) == len(in_index)
- if input_transform == 'resize_concat':
- self.in_channels = sum(in_channels)
- else:
- self.in_channels = in_channels
- else:
- assert isinstance(in_channels, int)
- assert isinstance(in_index, int)
- self.in_channels = in_channels
-
- def init_weights(self):
- """Initialize weights of classification layer."""
- normal_init(self.conv_seg, mean=0, std=0.01)
-
- def _transform_inputs(self, inputs):
- """Transform inputs for decoder.
-
- Args:
- inputs (list[Tensor]): List of multi-level img features.
-
- Returns:
- Tensor: The transformed inputs
- """
-
- if self.input_transform == 'resize_concat':
- inputs = [inputs[i] for i in self.in_index]
- upsampled_inputs = [
- resize(
- input=x,
- size=inputs[0].shape[2:],
- mode='bilinear',
- align_corners=self.align_corners) for x in inputs
- ]
- inputs = torch.cat(upsampled_inputs, dim=1)
- elif self.input_transform == 'multiple_select':
- inputs = [inputs[i] for i in self.in_index]
- else:
- inputs = inputs[self.in_index]
-
- return inputs
-
- def forward(self, inputs):
- """Placeholder of forward function."""
- pass
-
- def cls_seg(self, feat):
- """Classify each pixel."""
- if self.dropout is not None:
- feat = self.dropout(feat)
- output = self.conv_seg(feat)
- return output
-
-
-class FCNHead(BaseDecodeHead):
-    """Fully Convolutional Networks for Semantic Segmentation.
-
-    This head is an implementation of `FCNNet <https://arxiv.org/abs/1411.4038>`_.
-
- Args:
- num_convs (int): Number of convs in the head. Default: 2.
- kernel_size (int): The kernel size for convs in the head. Default: 3.
- concat_input (bool): Whether concat the input and output of convs
- before classification layer.
- """
-
- def __init__(self,
- num_convs=2,
- kernel_size=3,
- concat_input=True,
- **kwargs):
- assert num_convs >= 0
- self.num_convs = num_convs
- self.concat_input = concat_input
- self.kernel_size = kernel_size
- super(FCNHead, self).__init__(**kwargs)
- if num_convs == 0:
- assert self.in_channels == self.channels
-
- convs = []
- convs.append(
- ConvModule(
- self.in_channels,
- self.channels,
- kernel_size=kernel_size,
- padding=kernel_size // 2,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- for i in range(num_convs - 1):
- convs.append(
- ConvModule(
- self.channels,
- self.channels,
- kernel_size=kernel_size,
- padding=kernel_size // 2,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- if num_convs == 0:
- self.convs = nn.Identity()
- else:
- self.convs = nn.Sequential(*convs)
- if self.concat_input:
- self.conv_cat = ConvModule(
- self.in_channels + self.channels,
- self.channels,
- kernel_size=kernel_size,
- padding=kernel_size // 2,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- output = self.convs(x)
- if self.concat_input:
- output = self.conv_cat(torch.cat([x, output], dim=1))
- output = self.cls_seg(output)
- return output
-
-
-class MultiHeadFCNHead(nn.Module):
-    """Fully Convolutional Networks for Semantic Segmentation.
-
-    This head is an implementation of `FCNNet <https://arxiv.org/abs/1411.4038>`_.
-
- Args:
- num_convs (int): Number of convs in the head. Default: 2.
- kernel_size (int): The kernel size for convs in the head. Default: 3.
- concat_input (bool): Whether concat the input and output of convs
- before classification layer.
- """
-
- def __init__(self,
- in_channels,
- channels,
- *,
- num_classes,
- dropout_ratio=0.1,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- in_index=-1,
- input_transform=None,
- ignore_index=255,
- align_corners=False,
- num_convs=2,
- kernel_size=3,
- concat_input=True,
- num_head=18,
- **kwargs):
- super(MultiHeadFCNHead, self).__init__()
- assert num_convs >= 0
- self.num_convs = num_convs
- self.concat_input = concat_input
- self.kernel_size = kernel_size
- self._init_inputs(in_channels, in_index, input_transform)
- self.channels = channels
- self.num_classes = num_classes
- self.dropout_ratio = dropout_ratio
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- self.in_index = in_index
- self.num_head = num_head
-
- self.ignore_index = ignore_index
- self.align_corners = align_corners
-
-        if dropout_ratio > 0:
-            self.dropout = nn.Dropout2d(dropout_ratio)
-        else:
-            self.dropout = None
-
- conv_seg_head_list = []
- for _ in range(self.num_head):
- conv_seg_head_list.append(
- nn.Conv2d(channels, num_classes, kernel_size=1))
-
- self.conv_seg_head_list = nn.ModuleList(conv_seg_head_list)
-
- self.init_weights()
-
- if num_convs == 0:
- assert self.in_channels == self.channels
-
- convs_list = []
- conv_cat_list = []
-
- for _ in range(self.num_head):
- convs = []
- convs.append(
- ConvModule(
- self.in_channels,
- self.channels,
- kernel_size=kernel_size,
- padding=kernel_size // 2,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- for _ in range(num_convs - 1):
- convs.append(
- ConvModule(
- self.channels,
- self.channels,
- kernel_size=kernel_size,
- padding=kernel_size // 2,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- if num_convs == 0:
- convs_list.append(nn.Identity())
- else:
- convs_list.append(nn.Sequential(*convs))
- if self.concat_input:
- conv_cat_list.append(
- ConvModule(
- self.in_channels + self.channels,
- self.channels,
- kernel_size=kernel_size,
- padding=kernel_size // 2,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
-
- self.convs_list = nn.ModuleList(convs_list)
- self.conv_cat_list = nn.ModuleList(conv_cat_list)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
-
- output_list = []
- for head_idx in range(self.num_head):
- output = self.convs_list[head_idx](x)
- if self.concat_input:
- output = self.conv_cat_list[head_idx](
- torch.cat([x, output], dim=1))
- if self.dropout is not None:
- output = self.dropout(output)
- output = self.conv_seg_head_list[head_idx](output)
- output_list.append(output)
-
- return output_list
-
- def _init_inputs(self, in_channels, in_index, input_transform):
- """Check and initialize input transforms.
-
-        The in_channels, in_index and input_transform must match.
-        Specifically, when input_transform is None, only a single feature map
-        will be selected, so in_channels and in_index must be of type int.
-        When input_transform is not None, in_channels and in_index must be
-        sequences of the same length.
-
- Args:
- in_channels (int|Sequence[int]): Input channels.
- in_index (int|Sequence[int]): Input feature index.
- input_transform (str|None): Transformation type of input features.
- Options: 'resize_concat', 'multiple_select', None.
-                'resize_concat': Multiple feature maps will be resized to the
-                    same size as the first one and then concatenated together.
-                    Usually used in FCN head of HRNet.
-                'multiple_select': Multiple feature maps will be bundled into
-                    a list and passed into the decode head.
-                None: Only one selected feature map is allowed.
- """
-
- if input_transform is not None:
- assert input_transform in ['resize_concat', 'multiple_select']
- self.input_transform = input_transform
- self.in_index = in_index
- if input_transform is not None:
- assert isinstance(in_channels, (list, tuple))
- assert isinstance(in_index, (list, tuple))
- assert len(in_channels) == len(in_index)
- if input_transform == 'resize_concat':
- self.in_channels = sum(in_channels)
- else:
- self.in_channels = in_channels
- else:
- assert isinstance(in_channels, int)
- assert isinstance(in_index, int)
- self.in_channels = in_channels
-
- def init_weights(self):
- """Initialize weights of classification layer."""
- for conv_seg_head in self.conv_seg_head_list:
- normal_init(conv_seg_head, mean=0, std=0.01)
-
- def _transform_inputs(self, inputs):
- """Transform inputs for decoder.
-
- Args:
- inputs (list[Tensor]): List of multi-level img features.
-
- Returns:
- Tensor: The transformed inputs
- """
-
- if self.input_transform == 'resize_concat':
- inputs = [inputs[i] for i in self.in_index]
- upsampled_inputs = [
- resize(
- input=x,
- size=inputs[0].shape[2:],
- mode='bilinear',
- align_corners=self.align_corners) for x in inputs
- ]
- inputs = torch.cat(upsampled_inputs, dim=1)
- elif self.input_transform == 'multiple_select':
- inputs = [inputs[i] for i in self.in_index]
- else:
- inputs = inputs[self.in_index]
-
- return inputs
diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/point_assigner.py b/spaces/CVPR/WALT/mmdet/core/bbox/assigners/point_assigner.py
deleted file mode 100644
index fb8f5e4edc63f4851e2067034c5e67a3558f31bc..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/point_assigner.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import torch
-
-from ..builder import BBOX_ASSIGNERS
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-
-@BBOX_ASSIGNERS.register_module()
-class PointAssigner(BaseAssigner):
- """Assign a corresponding gt bbox or background to each point.
-
- Each proposal will be assigned with `0`, or a positive integer
- indicating the ground truth index.
-
- - 0: negative sample, no assigned gt
- - positive integer: positive sample, index (1-based) of assigned gt
- """
-
- def __init__(self, scale=4, pos_num=3):
- self.scale = scale
- self.pos_num = pos_num
-
- def assign(self, points, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None):
- """Assign gt to points.
-
- This method assigns a gt bbox to every point. Each point will be
- assigned `0` (background) or a positive integer, the index (1-based)
- of the assigned gt.
- The assignment is done in the following steps; the order matters.
-
- 1. assign every point to the background (0)
- 2. A point is assigned to some gt bbox if
- (i) the point is among the k closest points to the gt bbox, and
- (ii) the distance between this point and the gt is smaller than
- its distance to every other gt bbox
-
- Args:
- points (Tensor): points to be assigned, shape(n, 3) while last
- dimension stands for (x, y, stride).
- gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4).
- gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
- labelled as `ignored`, e.g., crowd boxes in COCO.
- NOTE: currently unused.
- gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ).
-
- Returns:
- :obj:`AssignResult`: The assign result.
- """
- num_points = points.shape[0]
- num_gts = gt_bboxes.shape[0]
-
- if num_gts == 0 or num_points == 0:
- # If there are no gts or no points, assign everything to the background
- assigned_gt_inds = points.new_full((num_points, ),
- 0,
- dtype=torch.long)
- if gt_labels is None:
- assigned_labels = None
- else:
- assigned_labels = points.new_full((num_points, ),
- -1,
- dtype=torch.long)
- return AssignResult(
- num_gts, assigned_gt_inds, None, labels=assigned_labels)
-
- points_xy = points[:, :2]
- points_stride = points[:, 2]
- points_lvl = torch.log2(
- points_stride).int() # [3...,4...,5...,6...,7...]
- lvl_min, lvl_max = points_lvl.min(), points_lvl.max()
-
- # assign gt box
- gt_bboxes_xy = (gt_bboxes[:, :2] + gt_bboxes[:, 2:]) / 2
- gt_bboxes_wh = (gt_bboxes[:, 2:] - gt_bboxes[:, :2]).clamp(min=1e-6)
- scale = self.scale
- gt_bboxes_lvl = ((torch.log2(gt_bboxes_wh[:, 0] / scale) +
- torch.log2(gt_bboxes_wh[:, 1] / scale)) / 2).int()
- gt_bboxes_lvl = torch.clamp(gt_bboxes_lvl, min=lvl_min, max=lvl_max)
-
- # stores the assigned gt index of each point
- assigned_gt_inds = points.new_zeros((num_points, ), dtype=torch.long)
- # stores the assigned gt dist (to this point) of each point
- assigned_gt_dist = points.new_full((num_points, ), float('inf'))
- points_range = torch.arange(points.shape[0])
-
- for idx in range(num_gts):
- gt_lvl = gt_bboxes_lvl[idx]
- # get the index of points in this level
- lvl_idx = gt_lvl == points_lvl
- points_index = points_range[lvl_idx]
- # get the points in this level
- lvl_points = points_xy[lvl_idx, :]
- # get the center point of gt
- gt_point = gt_bboxes_xy[[idx], :]
- # get width and height of gt
- gt_wh = gt_bboxes_wh[[idx], :]
- # compute the distance between gt center and
- # all points in this level
- points_gt_dist = ((lvl_points - gt_point) / gt_wh).norm(dim=1)
- # find the nearest k points to gt center in this level
- min_dist, min_dist_index = torch.topk(
- points_gt_dist, self.pos_num, largest=False)
- # the index of nearest k points to gt center in this level
- min_dist_points_index = points_index[min_dist_index]
- # The less_than_recorded_index stores the index
- # of min_dist that is less than the assigned_gt_dist, where
- # assigned_gt_dist stores the dist from the previously assigned gt
- # (if any) to each point.
- less_than_recorded_index = min_dist < assigned_gt_dist[
- min_dist_points_index]
- # The min_dist_points_index stores the index of points satisfy:
- # (1) it is k nearest to current gt center in this level.
- # (2) it is closer to current gt center than other gt center.
- min_dist_points_index = min_dist_points_index[
- less_than_recorded_index]
- # assign the result
- assigned_gt_inds[min_dist_points_index] = idx + 1
- assigned_gt_dist[min_dist_points_index] = min_dist[
- less_than_recorded_index]
-
- if gt_labels is not None:
- assigned_labels = assigned_gt_inds.new_full((num_points, ), -1)
- pos_inds = torch.nonzero(
- assigned_gt_inds > 0, as_tuple=False).squeeze()
- if pos_inds.numel() > 0:
- assigned_labels[pos_inds] = gt_labels[
- assigned_gt_inds[pos_inds] - 1]
- else:
- assigned_labels = None
-
- return AssignResult(
- num_gts, assigned_gt_inds, None, labels=assigned_labels)
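Stripped of the FPN-level matching and tensor machinery, the core rule of the loop above — each gt claims its `pos_num` nearest points, and a point keeps whichever gt center is closest in box-normalized distance — can be sketched in plain Python (function and variable names here are illustrative, not from mmdet):

```python
import math

def assign_points(points, gt_centers, gt_whs, pos_num=3):
    """Toy PointAssigner: returns 0 for background, i + 1 for gt index i."""
    assigned = [0] * len(points)
    best_dist = [math.inf] * len(points)
    for gi, (cx, cy) in enumerate(gt_centers):
        w, h = gt_whs[gi]
        # Box-normalized distance from every point to this gt center.
        dists = sorted(
            (math.hypot((px - cx) / w, (py - cy) / h), pi)
            for pi, (px, py) in enumerate(points)
        )
        for d, pi in dists[:pos_num]:   # (i) among the k nearest points
            if d < best_dist[pi]:       # (ii) closer than any earlier gt
                assigned[pi] = gi + 1
                best_dist[pi] = d
    return assigned

pts = [(0, 0), (10, 0), (20, 0), (30, 0)]
print(assign_points(pts, gt_centers=[(0, 0), (30, 0)],
                    gt_whs=[(10, 10), (10, 10)], pos_num=2))
# [1, 1, 2, 2]
```

The two leftmost points fall to the first gt, the two rightmost to the second; a point contested by both gts would go to whichever center is nearer after normalizing by the box size.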
diff --git a/spaces/CVPR/WALT/mmdet/models/utils/gaussian_target.py b/spaces/CVPR/WALT/mmdet/models/utils/gaussian_target.py
deleted file mode 100644
index 7bb7160cb4bf2f47876f6e8373142aa5846920a9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/utils/gaussian_target.py
+++ /dev/null
@@ -1,185 +0,0 @@
-from math import sqrt
-
-import torch
-
-
-def gaussian2D(radius, sigma=1, dtype=torch.float32, device='cpu'):
- """Generate 2D gaussian kernel.
-
- Args:
- radius (int): Radius of gaussian kernel.
- sigma (int): Sigma of gaussian function. Default: 1.
- dtype (torch.dtype): Dtype of gaussian tensor. Default: torch.float32.
- device (str): Device of gaussian tensor. Default: 'cpu'.
-
- Returns:
- h (Tensor): Gaussian kernel with a
- ``(2 * radius + 1) * (2 * radius + 1)`` shape.
- """
- x = torch.arange(
- -radius, radius + 1, dtype=dtype, device=device).view(1, -1)
- y = torch.arange(
- -radius, radius + 1, dtype=dtype, device=device).view(-1, 1)
-
- h = (-(x * x + y * y) / (2 * sigma * sigma)).exp()
-
- h[h < torch.finfo(h.dtype).eps * h.max()] = 0
- return h
-
-
-def gen_gaussian_target(heatmap, center, radius, k=1):
- """Generate 2D gaussian heatmap.
-
- Args:
- heatmap (Tensor): Input heatmap, the gaussian kernel will cover on
- it and maintain the max value.
- center (list[int]): Coord of gaussian kernel's center.
- radius (int): Radius of gaussian kernel.
- k (int): Coefficient of gaussian kernel. Default: 1.
-
- Returns:
- out_heatmap (Tensor): Updated heatmap covered by gaussian kernel.
- """
- diameter = 2 * radius + 1
- gaussian_kernel = gaussian2D(
- radius, sigma=diameter / 6, dtype=heatmap.dtype, device=heatmap.device)
-
- x, y = center
-
- height, width = heatmap.shape[:2]
-
- left, right = min(x, radius), min(width - x, radius + 1)
- top, bottom = min(y, radius), min(height - y, radius + 1)
-
- masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right]
- masked_gaussian = gaussian_kernel[radius - top:radius + bottom,
- radius - left:radius + right]
- out_heatmap = heatmap
- torch.max(
- masked_heatmap,
- masked_gaussian * k,
- out=out_heatmap[y - top:y + bottom, x - left:x + right])
-
- return out_heatmap
-
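The kernel-pasting step in `gen_gaussian_target` — overlaying a Gaussian at a center while keeping the element-wise max and clipping at heatmap borders — can be illustrated with a dependency-free sketch (helper names are invented for the example; the tiny-value thresholding from `gaussian2D` is omitted):

```python
import math

def gaussian2d(radius, sigma):
    """(2r+1) x (2r+1) Gaussian kernel as nested lists."""
    return [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
             for x in range(-radius, radius + 1)]
            for y in range(-radius, radius + 1)]

def draw_gaussian(heatmap, center, radius, k=1.0):
    """Paste the kernel at `center`, keeping the element-wise max."""
    x, y = center
    h, w = len(heatmap), len(heatmap[0])
    kernel = gaussian2d(radius, sigma=(2 * radius + 1) / 6)
    # Clip the kernel window at the heatmap borders, as the torch version does.
    left, right = min(x, radius), min(w - x, radius + 1)
    top, bottom = min(y, radius), min(h - y, radius + 1)
    for dy in range(-top, bottom):
        for dx in range(-left, right):
            heatmap[y + dy][x + dx] = max(heatmap[y + dy][x + dx],
                                          k * kernel[radius + dy][radius + dx])
    return heatmap

hm = [[0.0] * 8 for _ in range(8)]
draw_gaussian(hm, (3, 3), radius=2)
print(hm[3][3])  # 1.0 at the center, decaying outward
```

Taking the max rather than overwriting means overlapping objects never erase each other's peaks.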
-
-def gaussian_radius(det_size, min_overlap):
- r"""Generate 2D gaussian radius.
-
- This function is modified from the `official github repo
- `_.
-
- Given ``min_overlap``, the radius can be computed from a quadratic
- equation according to Vieta's formulas.
-
- There are 3 cases for computing the gaussian radius, detailed as follows:
-
- - Explanation of figure: ``lt`` and ``br`` indicates the left-top and
- bottom-right corner of ground truth box. ``x`` indicates the
- generated corner at the limited position when ``radius=r``.
-
- - Case1: one corner is inside the gt box and the other is outside.
-
- .. code:: text
-
- |< width >|
-
- lt-+----------+ -
- | | | ^
- +--x----------+--+
- | | | |
- | | | | height
- | | overlap | |
- | | | |
- | | | | v
- +--+---------br--+ -
- | | |
- +----------+--x
-
- To ensure IoU of generated box and gt box is larger than ``min_overlap``:
-
- .. math::
- \cfrac{(w-r)*(h-r)}{w*h+(w+h)r-r^2} \ge {iou} \quad\Rightarrow\quad
- {r^2-(w+h)r+\cfrac{1-iou}{1+iou}*w*h} \ge 0 \\
- {a} = 1,\quad{b} = {-(w+h)},\quad{c} = {\cfrac{1-iou}{1+iou}*w*h} \\
- {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a}
-
- - Case2: both two corners are inside the gt box.
-
- .. code:: text
-
- |< width >|
-
- lt-+----------+ -
- | | | ^
- +--x-------+ |
- | | | |
- | |overlap| | height
- | | | |
- | +-------x--+
- | | | v
- +----------+-br -
-
- To ensure IoU of generated box and gt box is larger than ``min_overlap``:
-
- .. math::
- \cfrac{(w-2*r)*(h-2*r)}{w*h} \ge {iou} \quad\Rightarrow\quad
- {4r^2-2(w+h)r+(1-iou)*w*h} \ge 0 \\
- {a} = 4,\quad {b} = {-2(w+h)},\quad {c} = {(1-iou)*w*h} \\
- {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a}
-
- - Case3: both two corners are outside the gt box.
-
- .. code:: text
-
- |< width >|
-
- x--+----------------+
- | | |
- +-lt-------------+ | -
- | | | | ^
- | | | |
- | | overlap | | height
- | | | |
- | | | | v
- | +------------br--+ -
- | | |
- +----------------+--x
-
- To ensure IoU of generated box and gt box is larger than ``min_overlap``:
-
- .. math::
- \cfrac{w*h}{(w+2*r)*(h+2*r)} \ge {iou} \quad\Rightarrow\quad
- {4*iou*r^2+2*iou*(w+h)r+(iou-1)*w*h} \le 0 \\
- {a} = {4*iou},\quad {b} = {2*iou*(w+h)},\quad {c} = {(iou-1)*w*h} \\
- {r} \le \cfrac{-b+\sqrt{b^2-4*a*c}}{2*a}
-
- Args:
- det_size (list[int]): Shape of object.
- min_overlap (float): Min IoU with ground truth for boxes generated by
- keypoints inside the gaussian kernel.
-
- Returns:
- radius (int): Radius of gaussian kernel.
- """
- height, width = det_size
-
- a1 = 1
- b1 = (height + width)
- c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
- sq1 = sqrt(b1**2 - 4 * a1 * c1)
- r1 = (b1 - sq1) / (2 * a1)
-
- a2 = 4
- b2 = 2 * (height + width)
- c2 = (1 - min_overlap) * width * height
- sq2 = sqrt(b2**2 - 4 * a2 * c2)
- r2 = (b2 - sq2) / (2 * a2)
-
- a3 = 4 * min_overlap
- b3 = -2 * min_overlap * (height + width)
- c3 = (min_overlap - 1) * width * height
- sq3 = sqrt(b3**2 - 4 * a3 * c3)
- r3 = (b3 + sq3) / (2 * a3)
- return min(r1, r2, r3)
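The three quadratics can be sanity-checked numerically: for a square box, Case 2 is the binding case, and shrinking both corners inward by the returned radius yields exactly `min_overlap` IoU. A minimal stand-alone replica (the name `radius_for` is invented for this check):

```python
from math import sqrt

def radius_for(det_size, min_overlap):
    """Stand-alone copy of gaussian_radius for a quick sanity check."""
    height, width = det_size
    # Case 1: one corner inside the gt box, one outside.
    a1, b1 = 1, height + width
    c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
    r1 = (b1 - sqrt(b1 ** 2 - 4 * a1 * c1)) / (2 * a1)
    # Case 2: both corners inside.
    a2, b2 = 4, 2 * (height + width)
    c2 = (1 - min_overlap) * width * height
    r2 = (b2 - sqrt(b2 ** 2 - 4 * a2 * c2)) / (2 * a2)
    # Case 3: both corners outside.
    a3, b3 = 4 * min_overlap, -2 * min_overlap * (height + width)
    c3 = (min_overlap - 1) * width * height
    r3 = (b3 + sqrt(b3 ** 2 - 4 * a3 * c3)) / (2 * a3)
    return min(r1, r2, r3)

r = radius_for((96, 96), 0.7)
# Case 2: both corners shifted inward by r gives IoU == min_overlap exactly.
iou = (96 - 2 * r) * (96 - 2 * r) / (96 * 96)
print(round(r, 2), round(iou, 4))
```

Because `r` is a root of the Case 2 quadratic, the shrunk box satisfies the IoU constraint with equality, which is the tightest kernel radius the heatmap target can tolerate.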
diff --git a/spaces/Carlos056/Cara/style.css b/spaces/Carlos056/Cara/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/Carlos056/Cara/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/CyStorm/instruct-pix2pix/edit_app.py b/spaces/CyStorm/instruct-pix2pix/edit_app.py
deleted file mode 100644
index 0359e815ad51b1a2291dd8943555568e452981ad..0000000000000000000000000000000000000000
--- a/spaces/CyStorm/instruct-pix2pix/edit_app.py
+++ /dev/null
@@ -1,192 +0,0 @@
-from __future__ import annotations
-
-import math
-import random
-
-import gradio as gr
-import torch
-from PIL import Image, ImageOps
-from diffusers import StableDiffusionInstructPix2PixPipeline
-
-
-help_text = """
-If you're not getting what you want, there may be a few reasons:
-1. Is the image not changing enough? Your Image CFG weight may be too high. This value dictates how similar the output should be to the input. It's possible your edit requires larger changes from the original image, and your Image CFG weight isn't allowing that. Alternatively, your Text CFG weight may be too low. This value dictates how much to listen to the text instruction. The default Image CFG of 1.5 and Text CFG of 7.5 are a good starting point, but aren't necessarily optimal for each edit. Try:
- * Decreasing the Image CFG weight, or
- * Increasing the Text CFG weight.
-2. Conversely, is the image changing too much, such that the details in the original image aren't preserved? Try:
- * Increasing the Image CFG weight, or
- * Decreasing the Text CFG weight
-3. Try generating results with different random seeds by setting "Randomize Seed" and running generation multiple times. You can also try setting "Randomize CFG" to sample new Text CFG and Image CFG values each time.
-4. Rephrasing the instruction sometimes improves results (e.g., "turn him into a dog" vs. "make him a dog" vs. "as a dog").
-5. Increasing the number of steps sometimes improves results.
-6. Do faces look weird? The Stable Diffusion autoencoder has a hard time with faces that are small in the image. Try:
- * Cropping the image so the face takes up a larger portion of the frame.
-"""
-
-
-example_instructions = [
- "Make it a picasso painting",
- "as if it were by modigliani",
- "convert to a bronze statue",
- "Turn it into an anime.",
- "have it look like a graphic novel",
- "make him gain weight",
- "what would he look like bald?",
- "Have him smile",
- "Put him in a cocktail party.",
- "move him at the beach.",
- "add dramatic lighting",
- "Convert to black and white",
- "What if it were snowing?",
- "Give him a leather jacket",
- "Turn him into a cyborg!",
- "make him wear a beanie",
-]
-
-model_id = "timbrooks/instruct-pix2pix"
-
-def main():
- pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None).to("cuda")
- example_image = Image.open("imgs/example.jpg").convert("RGB")
-
- def load_example(
- steps: int,
- randomize_seed: bool,
- seed: int,
- randomize_cfg: bool,
- text_cfg_scale: float,
- image_cfg_scale: float,
- ):
- example_instruction = random.choice(example_instructions)
- return [example_image, example_instruction] + generate(
- example_image,
- example_instruction,
- steps,
- randomize_seed,
- seed,
- randomize_cfg,
- text_cfg_scale,
- image_cfg_scale,
- )
-
- def generate(
- input_image: Image.Image,
- instruction: str,
- steps: int,
- randomize_seed: bool,
- seed: int,
- randomize_cfg: bool,
- text_cfg_scale: float,
- image_cfg_scale: float,
- ):
- seed = random.randint(0, 100000) if randomize_seed else seed
- text_cfg_scale = round(random.uniform(6.0, 9.0), ndigits=2) if randomize_cfg else text_cfg_scale
- image_cfg_scale = round(random.uniform(1.2, 1.8), ndigits=2) if randomize_cfg else image_cfg_scale
-
- width, height = input_image.size
- factor = 512 / max(width, height)
- factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
- width = int((width * factor) // 64) * 64
- height = int((height * factor) // 64) * 64
- input_image = ImageOps.fit(input_image, (width, height), method=Image.Resampling.LANCZOS)
-
- if instruction == "":
- return [input_image, seed]
-
- generator = torch.manual_seed(seed)
- edited_image = pipe(
- instruction, image=input_image,
- guidance_scale=text_cfg_scale, image_guidance_scale=image_cfg_scale,
- num_inference_steps=steps, generator=generator,
- ).images[0]
- return [seed, text_cfg_scale, image_cfg_scale, edited_image]
-
- def reset():
- return [0, "Randomize Seed", 1371, "Fix CFG", 7.5, 1.5, None]
-
- with gr.Blocks() as demo:
- gr.HTML("""
- InstructPix2Pix: Learning to Follow Image Editing Instructions
-
-
- For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings.
-
-
-
-
- """)
- with gr.Row():
- with gr.Column(scale=1, min_width=100):
- generate_button = gr.Button("Generate")
- with gr.Column(scale=1, min_width=100):
- load_button = gr.Button("Load Example")
- with gr.Column(scale=1, min_width=100):
- reset_button = gr.Button("Reset")
- with gr.Column(scale=3):
- instruction = gr.Textbox(lines=1, label="Edit Instruction", interactive=True)
-
- with gr.Row():
- input_image = gr.Image(label="Input Image", type="pil", interactive=True)
- edited_image = gr.Image(label="Edited Image", type="pil", interactive=False)
- input_image.style(height=512, width=512)
- edited_image.style(height=512, width=512)
-
- with gr.Row():
- steps = gr.Number(value=50, precision=0, label="Steps", interactive=True)
- randomize_seed = gr.Radio(
- ["Fix Seed", "Randomize Seed"],
- value="Randomize Seed",
- type="index",
- show_label=False,
- interactive=True,
- )
- seed = gr.Number(value=1371, precision=0, label="Seed", interactive=True)
- randomize_cfg = gr.Radio(
- ["Fix CFG", "Randomize CFG"],
- value="Fix CFG",
- type="index",
- show_label=False,
- interactive=True,
- )
- text_cfg_scale = gr.Number(value=7.5, label="Text CFG", interactive=True)
- image_cfg_scale = gr.Number(value=1.5, label="Image CFG", interactive=True)
-
- gr.Markdown(help_text)
-
- load_button.click(
- fn=load_example,
- inputs=[
- steps,
- randomize_seed,
- seed,
- randomize_cfg,
- text_cfg_scale,
- image_cfg_scale,
- ],
- outputs=[input_image, instruction, seed, text_cfg_scale, image_cfg_scale, edited_image],
- )
- generate_button.click(
- fn=generate,
- inputs=[
- input_image,
- instruction,
- steps,
- randomize_seed,
- seed,
- randomize_cfg,
- text_cfg_scale,
- image_cfg_scale,
- ],
- outputs=[seed, text_cfg_scale, image_cfg_scale, edited_image],
- )
- reset_button.click(
- fn=reset,
- inputs=[],
- outputs=[steps, randomize_seed, seed, randomize_cfg, text_cfg_scale, image_cfg_scale, edited_image],
- )
-
- demo.queue(concurrency_count=1)
- demo.launch(share=False)
-
-
-if __name__ == "__main__":
- main()
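The preprocessing in `generate` first scales the image so its long side lands near 512 px, then snaps both sides to multiples of 64 (the granularity Stable Diffusion expects). Isolated as a pure function (the name `fit_to_multiple` is invented here):

```python
import math

def fit_to_multiple(width, height, target=512, multiple=64):
    """Scale toward `target` on the long side, then snap to a multiple of 64."""
    factor = target / max(width, height)
    # Re-derive the factor so the short side rounds UP to a clean multiple.
    factor = (math.ceil(min(width, height) * factor / multiple)
              * multiple / min(width, height))
    new_w = int((width * factor) // multiple) * multiple
    new_h = int((height * factor) // multiple) * multiple
    return new_w, new_h

print(fit_to_multiple(1920, 1080))  # (512, 320)
```

A 1920x1080 frame becomes 512x320: the short side is rounded up to 320, and the long side floors to the nearest multiple of 64 at the same scale, roughly preserving the aspect ratio.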
diff --git a/spaces/CyberPeace-Institute/Cybersecurity-Knowledge-Graph-Extraction/README.md b/spaces/CyberPeace-Institute/Cybersecurity-Knowledge-Graph-Extraction/README.md
deleted file mode 100644
index f869d37282d793ce78e47b95da622460503a9788..0000000000000000000000000000000000000000
--- a/spaces/CyberPeace-Institute/Cybersecurity-Knowledge-Graph-Extraction/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Cybersecurity Knowledge Graph Extraction
-emoji: 📈
-colorFrom: red
-colorTo: red
-sdk: streamlit
-sdk_version: 1.27.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/convert_ckpt.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/convert_ckpt.py
deleted file mode 100644
index 5ab42bef760ecc52ba363540bb05b005ecbfccd1..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/convert_ckpt.py
+++ /dev/null
@@ -1,61 +0,0 @@
-"""
-@date: 2021/11/22
-@description: Convert a training ckpt into an inference ckpt
-"""
-import argparse
-import os
-
-import torch
-
-from config.defaults import merge_from_file
-
-
-def parse_option():
- parser = argparse.ArgumentParser(description='Convert a training ckpt into an inference ckpt')
- parser.add_argument('--cfg',
- type=str,
- required=True,
- metavar='FILE',
- help='path of config file')
-
- parser.add_argument('--output_path',
- type=str,
- help='path of output ckpt')
-
- args = parser.parse_args()
-
- print("arguments:")
- for arg in vars(args):
- print(arg, ":", getattr(args, arg))
- print("-" * 50)
- return args
-
-
-def convert_ckpt():
- args = parse_option()
- config = merge_from_file(args.cfg)
- ck_dir = os.path.join("checkpoints", f"{config.MODEL.ARGS[0]['decoder_name']}_{config.MODEL.ARGS[0]['output_name']}_Net",
- config.TAG)
- print(f"Processing {ck_dir}")
- model_paths = [name for name in os.listdir(ck_dir) if '_best_' in name]
- if len(model_paths) == 0:
- print("No best ckpt found")
- return
- model_path = os.path.join(ck_dir, model_paths[0])
- print(f"Loading {model_path}")
- checkpoint = torch.load(model_path, map_location=torch.device('cuda:0'))
- net = checkpoint['net']
- if args.output_path is None:
- output_path = os.path.join(ck_dir, 'best.pkl')
- else:
- output_path = args.output_path
- print(f"Saving to: {output_path}")
- os.makedirs(os.path.dirname(output_path), exist_ok=True)
- torch.save(net, output_path)
-
-
-if __name__ == '__main__':
- convert_ckpt()
diff --git a/spaces/Deon07/prompthero-openjourney/README.md b/spaces/Deon07/prompthero-openjourney/README.md
deleted file mode 100644
index 19e1b12cbc2a8d0dd89084c5e170ab23d4b959cc..0000000000000000000000000000000000000000
--- a/spaces/Deon07/prompthero-openjourney/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Prompthero Openjourney
-emoji: 🦀
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.36.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Detomo/ai-comic-generation/Dockerfile b/spaces/Detomo/ai-comic-generation/Dockerfile
deleted file mode 100644
index 91319be9b3dd35d916d18fba5260f51125c46b50..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/Dockerfile
+++ /dev/null
@@ -1,65 +0,0 @@
-FROM node:18-alpine AS base
-
-# Install dependencies only when needed
-FROM base AS deps
-# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
-RUN apk add --no-cache libc6-compat
-WORKDIR /app
-
-# Install dependencies based on the preferred package manager
-COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
-RUN \
- if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
- elif [ -f package-lock.json ]; then npm ci; \
- elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
- else echo "Lockfile not found." && exit 1; \
- fi
-
-# Uncomment the following lines if you want to use a secret at buildtime,
-# for example to access your private npm packages
-# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \
-# $(cat /run/secrets/HF_EXAMPLE_SECRET)
-
-# Rebuild the source code only when needed
-FROM base AS builder
-WORKDIR /app
-COPY --from=deps /app/node_modules ./node_modules
-COPY . .
-
-# Next.js collects completely anonymous telemetry data about general usage.
-# Learn more here: https://nextjs.org/telemetry
-# Uncomment the following line in case you want to disable telemetry during the build.
-# ENV NEXT_TELEMETRY_DISABLED 1
-
-# RUN yarn build
-
-# If you use yarn, comment out this line and use the line above
-RUN npm run build
-
-# Production image, copy all the files and run next
-FROM base AS runner
-WORKDIR /app
-
-ENV NODE_ENV production
-# Uncomment the following line in case you want to disable telemetry during runtime.
-# ENV NEXT_TELEMETRY_DISABLED 1
-
-RUN addgroup --system --gid 1001 nodejs
-RUN adduser --system --uid 1001 nextjs
-
-COPY --from=builder /app/public ./public
-
-# Automatically leverage output traces to reduce image size
-# https://nextjs.org/docs/advanced-features/output-file-tracing
-COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
-COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
-COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache
-# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache
-
-USER nextjs
-
-EXPOSE 3000
-
-ENV PORT 3000
-
-CMD ["node", "server.js"]
\ No newline at end of file
diff --git a/spaces/Detomo/ai-comic-generation/src/components/ui/use-toast.ts b/spaces/Detomo/ai-comic-generation/src/components/ui/use-toast.ts
deleted file mode 100644
index 90d8959bf3136de29eec362bf9d089b705c4ed3b..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/components/ui/use-toast.ts
+++ /dev/null
@@ -1,192 +0,0 @@
-// Inspired by react-hot-toast library
-import * as React from "react"
-
-import type {
- ToastActionElement,
- ToastProps,
-} from "@/components/ui/toast"
-
-const TOAST_LIMIT = 1
-const TOAST_REMOVE_DELAY = 1000000
-
-type ToasterToast = ToastProps & {
- id: string
- title?: React.ReactNode
- description?: React.ReactNode
- action?: ToastActionElement
-}
-
-const actionTypes = {
- ADD_TOAST: "ADD_TOAST",
- UPDATE_TOAST: "UPDATE_TOAST",
- DISMISS_TOAST: "DISMISS_TOAST",
- REMOVE_TOAST: "REMOVE_TOAST",
-} as const
-
-let count = 0
-
-function genId() {
- count = (count + 1) % Number.MAX_VALUE
- return count.toString()
-}
-
-type ActionType = typeof actionTypes
-
-type Action =
- | {
- type: ActionType["ADD_TOAST"]
- toast: ToasterToast
- }
- | {
- type: ActionType["UPDATE_TOAST"]
- toast: Partial<ToasterToast>
- }
- | {
- type: ActionType["DISMISS_TOAST"]
- toastId?: ToasterToast["id"]
- }
- | {
- type: ActionType["REMOVE_TOAST"]
- toastId?: ToasterToast["id"]
- }
-
-interface State {
- toasts: ToasterToast[]
-}
-
-const toastTimeouts = new Map<string, ReturnType<typeof setTimeout>>()
-
-const addToRemoveQueue = (toastId: string) => {
- if (toastTimeouts.has(toastId)) {
- return
- }
-
- const timeout = setTimeout(() => {
- toastTimeouts.delete(toastId)
- dispatch({
- type: "REMOVE_TOAST",
- toastId: toastId,
- })
- }, TOAST_REMOVE_DELAY)
-
- toastTimeouts.set(toastId, timeout)
-}
-
-export const reducer = (state: State, action: Action): State => {
- switch (action.type) {
- case "ADD_TOAST":
- return {
- ...state,
- toasts: [action.toast, ...state.toasts].slice(0, TOAST_LIMIT),
- }
-
- case "UPDATE_TOAST":
- return {
- ...state,
- toasts: state.toasts.map((t) =>
- t.id === action.toast.id ? { ...t, ...action.toast } : t
- ),
- }
-
- case "DISMISS_TOAST": {
- const { toastId } = action
-
- // ! Side effects ! - This could be extracted into a dismissToast() action,
- // but I'll keep it here for simplicity
- if (toastId) {
- addToRemoveQueue(toastId)
- } else {
- state.toasts.forEach((toast) => {
- addToRemoveQueue(toast.id)
- })
- }
-
- return {
- ...state,
- toasts: state.toasts.map((t) =>
- t.id === toastId || toastId === undefined
- ? {
- ...t,
- open: false,
- }
- : t
- ),
- }
- }
- case "REMOVE_TOAST":
- if (action.toastId === undefined) {
- return {
- ...state,
- toasts: [],
- }
- }
- return {
- ...state,
- toasts: state.toasts.filter((t) => t.id !== action.toastId),
- }
- }
-}
-
-const listeners: Array<(state: State) => void> = []
-
-let memoryState: State = { toasts: [] }
-
-function dispatch(action: Action) {
- memoryState = reducer(memoryState, action)
- listeners.forEach((listener) => {
- listener(memoryState)
- })
-}
-
-type Toast = Omit<ToasterToast, "id">
-
-function toast({ ...props }: Toast) {
- const id = genId()
-
- const update = (props: ToasterToast) =>
- dispatch({
- type: "UPDATE_TOAST",
- toast: { ...props, id },
- })
- const dismiss = () => dispatch({ type: "DISMISS_TOAST", toastId: id })
-
- dispatch({
- type: "ADD_TOAST",
- toast: {
- ...props,
- id,
- open: true,
- onOpenChange: (open) => {
- if (!open) dismiss()
- },
- },
- })
-
- return {
- id: id,
- dismiss,
- update,
- }
-}
-
-function useToast() {
- const [state, setState] = React.useState<State>(memoryState)
-
- React.useEffect(() => {
- listeners.push(setState)
- return () => {
- const index = listeners.indexOf(setState)
- if (index > -1) {
- listeners.splice(index, 1)
- }
- }
- }, [state])
-
- return {
- ...state,
- toast,
- dismiss: (toastId?: string) => dispatch({ type: "DISMISS_TOAST", toastId }),
- }
-}
-
-export { useToast, toast }
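The reducer above is a small state machine: ADD prepends and trims to `TOAST_LIMIT`, UPDATE merges a patch by id, DISMISS marks toasts closed (one or all), and REMOVE deletes them. A language-neutral rendition in Python, with invented names, makes the four transitions easy to test in isolation:

```python
TOAST_LIMIT = 1

def toast_reducer(state, action):
    """Toy Python rendition of the toast reducer's four actions."""
    kind, toasts = action["type"], state["toasts"]
    if kind == "ADD_TOAST":
        return {"toasts": ([action["toast"]] + toasts)[:TOAST_LIMIT]}
    if kind == "UPDATE_TOAST":
        patch = action["toast"]
        return {"toasts": [{**t, **patch} if t["id"] == patch["id"] else t
                           for t in toasts]}
    if kind == "DISMISS_TOAST":
        tid = action.get("toastId")  # None means dismiss everything
        return {"toasts": [{**t, "open": False} if tid in (None, t["id"]) else t
                           for t in toasts]}
    if kind == "REMOVE_TOAST":
        tid = action.get("toastId")
        if tid is None:
            return {"toasts": []}
        return {"toasts": [t for t in toasts if t["id"] != tid]}
    return state

s = toast_reducer({"toasts": []},
                  {"type": "ADD_TOAST", "toast": {"id": "1", "open": True}})
s = toast_reducer(s, {"type": "DISMISS_TOAST", "toastId": "1"})
print(s)  # the toast stays in state, but with open=False
```

Note the same two-phase shape as the TypeScript: DISMISS only flips `open` (so the exit animation can run), and REMOVE is what actually drops the toast after the timeout.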
diff --git a/spaces/DragGan/DragGan-Inversion/torch_utils/training_stats.py b/spaces/DragGan/DragGan-Inversion/torch_utils/training_stats.py
deleted file mode 100644
index aa5837c2948372ecdb3e34076f4b3f4f42c81fef..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/torch_utils/training_stats.py
+++ /dev/null
@@ -1,283 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Facilities for reporting and collecting training statistics across
-multiple processes and devices. The interface is designed to minimize
-synchronization overhead as well as the amount of boilerplate in user
-code."""
-
-import re
-import numpy as np
-import torch
-import dnnlib
-
-from . import misc
-
-# ----------------------------------------------------------------------------
-
-_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares]
-# Data type to use for initial per-tensor reduction.
-_reduce_dtype = torch.float32
-_counter_dtype = torch.float64 # Data type to use for the internal counters.
-_rank = 0 # Rank of the current process.
-# Device to use for multiprocess communication. None = single-process.
-_sync_device = None
-_sync_called = False # Has _sync() been called yet?
-# Running counters on each device, updated by report(): name => device => torch.Tensor
-_counters = dict()
-# Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor
-_cumulative = dict()
-
-# ----------------------------------------------------------------------------
-
-
-def init_multiprocessing(rank, sync_device):
- r"""Initializes `torch_utils.training_stats` for collecting statistics
- across multiple processes.
-
- This function must be called after
- `torch.distributed.init_process_group()` and before `Collector.update()`.
- The call is not necessary if multi-process collection is not needed.
-
- Args:
- rank: Rank of the current process.
- sync_device: PyTorch device to use for inter-process
- communication, or None to disable multi-process
- collection. Typically `torch.device('cuda', rank)`.
- """
- global _rank, _sync_device
- assert not _sync_called
- _rank = rank
- _sync_device = sync_device
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def report(name, value):
- r"""Broadcasts the given set of scalars to all interested instances of
- `Collector`, across device and process boundaries.
-
- This function is expected to be extremely cheap and can be safely
- called from anywhere in the training loop, loss function, or inside a
- `torch.nn.Module`.
-
- Warning: The current implementation expects the set of unique names to
- be consistent across processes. Please make sure that `report()` is
- called at least once for each unique name by each process, and in the
- same order. If a given process has no scalars to broadcast, it can do
- `report(name, [])` (empty list).
-
- Args:
- name: Arbitrary string specifying the name of the statistic.
- Averages are accumulated separately for each unique name.
- value: Arbitrary set of scalars. Can be a list, tuple,
- NumPy array, PyTorch tensor, or Python scalar.
-
- Returns:
- The same `value` that was passed in.
- """
- if name not in _counters:
- _counters[name] = dict()
-
- elems = torch.as_tensor(value)
- if elems.numel() == 0:
- return value
-
- elems = elems.detach().flatten().to(_reduce_dtype)
- moments = torch.stack([
- torch.ones_like(elems).sum(),
- elems.sum(),
- elems.square().sum(),
- ])
- assert moments.ndim == 1 and moments.shape[0] == _num_moments
- moments = moments.to(_counter_dtype)
-
- device = moments.device
- if device not in _counters[name]:
- _counters[name][device] = torch.zeros_like(moments)
- _counters[name][device].add_(moments)
- return value
-
-# ----------------------------------------------------------------------------
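`report()` reduces each batch of scalars to three moments — count, sum, and sum of squares — which is enough to recover the mean and standard deviation later without storing raw values. A stdlib sketch of that bookkeeping (helper names invented here):

```python
def update_moments(moments, values):
    """Fold a batch of scalars into (count, sum, sum_of_squares)."""
    n, s, sq = moments
    return (n + len(values),
            s + sum(values),
            sq + sum(v * v for v in values))

def mean_std(moments):
    """Recover mean and std from the accumulated moments."""
    n, s, sq = moments
    mean = s / n
    var = max(sq / n - mean * mean, 0.0)  # clamp tiny negative rounding
    return mean, var ** 0.5

m = (0, 0.0, 0.0)
m = update_moments(m, [1.0, 2.0, 3.0])
m = update_moments(m, [4.0])
print(mean_std(m))  # mean 2.5, std sqrt(1.25)
```

Because the moments are additive, batches reported on different devices or processes can simply be summed before computing the statistics — which is why a single `all_reduce` over the counters suffices.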
-
-
-def report0(name, value):
- r"""Broadcasts the given set of scalars by the first process (`rank = 0`),
- but ignores any scalars provided by the other processes.
- See `report()` for further details.
- """
- report(name, value if _rank == 0 else [])
- return value
-
-# ----------------------------------------------------------------------------
-
-
-class Collector:
- r"""Collects the scalars broadcasted by `report()` and `report0()` and
- computes their long-term averages (mean and standard deviation) over
- user-defined periods of time.
-
- The averages are first collected into internal counters that are not
- directly visible to the user. They are then copied to the user-visible
- state as a result of calling `update()` and can then be queried using
- `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the
- internal counters for the next round, so that the user-visible state
- effectively reflects averages collected between the last two calls to
- `update()`.
-
- Args:
- regex: Regular expression defining which statistics to
- collect. The default is to collect everything.
- keep_previous: Whether to retain the previous averages if no
- scalars were collected on a given round
- (default: True).
- """
-
- def __init__(self, regex='.*', keep_previous=True):
- self._regex = re.compile(regex)
- self._keep_previous = keep_previous
- self._cumulative = dict()
- self._moments = dict()
- self.update()
- self._moments.clear()
-
- def names(self):
- r"""Returns the names of all statistics broadcasted so far that
- match the regular expression specified at construction time.
- """
- return [name for name in _counters if self._regex.fullmatch(name)]
-
- def update(self):
- r"""Copies current values of the internal counters to the
- user-visible state and resets them for the next round.
-
- If `keep_previous=True` was specified at construction time, the
- operation is skipped for statistics that have received no scalars
- since the last update, retaining their previous averages.
-
- This method performs a number of GPU-to-CPU transfers and one
- `torch.distributed.all_reduce()`. It is intended to be called
- periodically in the main training loop, typically once every
- N training steps.
- """
- if not self._keep_previous:
- self._moments.clear()
- for name, cumulative in _sync(self.names()):
- if name not in self._cumulative:
- self._cumulative[name] = torch.zeros(
- [_num_moments], dtype=_counter_dtype)
- delta = cumulative - self._cumulative[name]
- self._cumulative[name].copy_(cumulative)
- if float(delta[0]) != 0:
- self._moments[name] = delta
-
- def _get_delta(self, name):
- r"""Returns the raw moments that were accumulated for the given
- statistic between the last two calls to `update()`, or zero if
- no scalars were collected.
- """
- assert self._regex.fullmatch(name)
- if name not in self._moments:
- self._moments[name] = torch.zeros(
- [_num_moments], dtype=_counter_dtype)
- return self._moments[name]
-
- def num(self, name):
- r"""Returns the number of scalars that were accumulated for the given
- statistic between the last two calls to `update()`, or zero if
- no scalars were collected.
- """
- delta = self._get_delta(name)
- return int(delta[0])
-
- def mean(self, name):
- r"""Returns the mean of the scalars that were accumulated for the
- given statistic between the last two calls to `update()`, or NaN if
- no scalars were collected.
- """
- delta = self._get_delta(name)
- if int(delta[0]) == 0:
- return float('nan')
- return float(delta[1] / delta[0])
-
- def std(self, name):
- r"""Returns the standard deviation of the scalars that were
- accumulated for the given statistic between the last two calls to
- `update()`, or NaN if no scalars were collected.
- """
- delta = self._get_delta(name)
- if int(delta[0]) == 0 or not np.isfinite(float(delta[1])):
- return float('nan')
- if int(delta[0]) == 1:
- return float(0)
- mean = float(delta[1] / delta[0])
- raw_var = float(delta[2] / delta[0])
- return np.sqrt(max(raw_var - np.square(mean), 0))
-
- def as_dict(self):
- r"""Returns the averages accumulated between the last two calls to
- `update()` as a `dnnlib.EasyDict`. The contents are as follows:
-
- dnnlib.EasyDict(
- NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT),
- ...
- )
- """
- stats = dnnlib.EasyDict()
- for name in self.names():
- stats[name] = dnnlib.EasyDict(
- num=self.num(name), mean=self.mean(name), std=self.std(name))
- return stats
-
- def __getitem__(self, name):
- r"""Convenience getter.
- `collector[name]` is a synonym for `collector.mean(name)`.
- """
- return self.mean(name)
-
-# ----------------------------------------------------------------------------
-
-
-def _sync(names):
- r"""Synchronizes the global cumulative counters across devices and
- processes. Called internally by `Collector.update()`.
- """
- if len(names) == 0:
- return []
- global _sync_called
- _sync_called = True
-
- # Collect deltas within current rank.
- deltas = []
- device = _sync_device if _sync_device is not None else torch.device('cpu')
- for name in names:
- delta = torch.zeros(
- [_num_moments], dtype=_counter_dtype, device=device)
- for counter in _counters[name].values():
- delta.add_(counter.to(device))
- counter.copy_(torch.zeros_like(counter))
- deltas.append(delta)
- deltas = torch.stack(deltas)
-
- # Sum deltas across ranks.
- if _sync_device is not None:
- torch.distributed.all_reduce(deltas)
-
- # Update cumulative values.
- deltas = deltas.cpu()
- for idx, name in enumerate(names):
- if name not in _cumulative:
- _cumulative[name] = torch.zeros(
- [_num_moments], dtype=_counter_dtype)
- _cumulative[name].add_(deltas[idx])
-
- # Return name-value pairs.
- return [(name, _cumulative[name]) for name in names]
-
-# ----------------------------------------------------------------------------
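The three raw moments that `report()` accumulates (count, sum, sum of squares) are exactly what `Collector.mean()` and `Collector.std()` need. A minimal pure-Python sketch of that arithmetic — the helper names `moments_of` and `mean_std` are illustrative, not part of the module:

```python
import math

def moments_of(values):
    """Accumulate the three raw moments used by report(): count, sum, sum of squares."""
    return (len(values), sum(values), sum(v * v for v in values))

def mean_std(moments):
    """Recover mean and standard deviation from raw moments, mirroring Collector.mean()/std()."""
    num, s, sq = moments
    if num == 0:
        return float('nan'), float('nan')
    mean = s / num
    if num == 1:
        return mean, 0.0
    raw_var = sq / num
    # Same guard as Collector.std(): clamp tiny negative values caused by rounding.
    return mean, math.sqrt(max(raw_var - mean * mean, 0.0))
```

Because moments are additive, per-device and per-rank counters can simply be summed before this final step, which is what `_sync()` exploits.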
diff --git a/spaces/Drexx007/Drexx-Ai-Chat/app.py b/spaces/Drexx007/Drexx-Ai-Chat/app.py
deleted file mode 100644
index e0f9195872fefded29f08398a63c6a3627defcb2..0000000000000000000000000000000000000000
--- a/spaces/Drexx007/Drexx-Ai-Chat/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import openai
-import gradio
-
-openai.api_key = "YOUR_OPENAI_API_KEY"  # key redacted; load secrets from the environment, never commit them
-
-messages = [{"role": "system", "content": "You are a psychologist"}]
-
-def CustomChatGPT(user_input):
- messages.append({"role": "user", "content": user_input})
- response = openai.ChatCompletion.create(
- model = "gpt-3.5-turbo",
- messages = messages
- )
- ChatGPT_reply = response["choices"][0]["message"]["content"]
- messages.append({"role": "assistant", "content": ChatGPT_reply})
- return ChatGPT_reply
-
-demo = gradio.Interface(fn=CustomChatGPT, inputs = "text", outputs = "text", title = "Drexx Chat Ai")
-
-demo.launch(share=True)
\ No newline at end of file
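The append-user/append-assistant history pattern in `CustomChatGPT` can be exercised without the OpenAI client; `reply_fn` below is a hypothetical stand-in for `openai.ChatCompletion.create`:

```python
messages = [{"role": "system", "content": "You are a psychologist"}]

def chat_turn(history, user_input, reply_fn):
    """Append the user turn, obtain a reply from reply_fn, record it, and return it."""
    history.append({"role": "user", "content": user_input})
    reply = reply_fn(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Note the user message must be appended as the variable `user_input`, not the string literal `"user_input"` — otherwise every request sends the same placeholder text to the model.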
diff --git a/spaces/EleutherAI/magma/README.md b/spaces/EleutherAI/magma/README.md
deleted file mode 100644
index 370fe86a2444c44de980a0acc9412e6ad7fd6057..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/magma/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: MAGMA
-emoji: 🌋
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 2.8.10
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-A demo of MAGMA ([GitHub](https://github.com/magma/magma), [arXiv](https://arxiv.org/abs/2112.05253)) by Constantin Eichenberg, Sid Black, Samuel Weinbach, Letitia Parcalabescu, and Anette Frank.
-
-Demo by [Heath Mitchell](https://github.com/Heath123), [Stella Biderman](https://stellabiderman.com), and [AK](https://mobile.twitter.com/ak92501)
\ No newline at end of file
diff --git a/spaces/EuroSciPy2022/arxiv-cards/csscard.css b/spaces/EuroSciPy2022/arxiv-cards/csscard.css
deleted file mode 100644
index 527854d2f3dea81005c3c3a66e62e05ee914be78..0000000000000000000000000000000000000000
--- a/spaces/EuroSciPy2022/arxiv-cards/csscard.css
+++ /dev/null
@@ -1,107 +0,0 @@
-@import url("https://fonts.googleapis.com/css?family=Merriweather|Open+Sans");
-
-.container {
- display: flex;
- justify-content: center;
- padding: 80px;
-}
-
-ul {
-    list-style-type: none;
-    display: flex;
-    float: none;
-    justify-content: center;
-    align-items: center;
-    padding-left: 30px;
-    padding-top: 10px;
-}
-
-#urllinks li {
- padding: 0px 30px 5px 5px;
-}
-
-.square {
- width: 700px;
- background: white;
- border-radius: 4px;
- box-shadow: 0px 20px 50px #d9dbdf;
-}
-
-.mask {
- width: 700px;
- height: 65px;
- clip: rect(0px, 700px, 150px, 0px);
- border-radius: 4px;
- position: absolute;
- background-color: #b31b1b;
- display: flex;
-}
-
-.mask .left,
-.mask .right {
- flex: 1;
-}
-
-img {
-    position: absolute;
-    width: 60px;
-    padding: 20px 0px 0px 0px;
-    margin-left: 30px;
-}
-
-.h1 {
- margin: auto;
- text-align: left;
- margin-top: 90px;
- padding-left: 30px;
- font-family: "Merriweather", serif;
- font-size: 22px;
-}
-
-h2 {
- color: white;
- text-align: right;
- font-size: 14px;
- padding: 22px 0px;
- font-family: "Open Sans", sans-serif;
- font-weight: 400;
- margin-right: 30px;
-}
-
-p {
-    text-align: justify;
-    padding-left: 30px;
-    padding-right: 30px;
-    font-family: "Open Sans", sans-serif;
-    font-size: 12px;
-    color: #949494;
-    line-height: 18px;
-    padding-bottom: 30px;
-    padding-top: 30px;
-}
-
-.auth {
- text-align: justify;
- padding-left: 0px;
- padding-right: 20px;
- font-family: "Open Sans", sans-serif;
- font-size: 14px;
- line-height: 18px;
-}
-
-.button {
- background-color: #b31b1b;
- color: white;
- width: 150px;
- padding: 10px 10px;
- border-radius: 3px;
- text-align: center;
- text-decoration: none;
- display: block;
- margin-top: 20px;
- margin-left: 20px;
- margin-right: 20px;
- font-size: 12px;
- cursor: pointer;
  font-family: "Merriweather";
-}
\ No newline at end of file
diff --git a/spaces/EvanMarie/hot_or_not/README.md b/spaces/EvanMarie/hot_or_not/README.md
deleted file mode 100644
index 6e279910fcf1be0479f2c6442c9009d5c5fab5c4..0000000000000000000000000000000000000000
--- a/spaces/EvanMarie/hot_or_not/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Hot Or Not
-emoji: 🔥
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Fazzie/Pokemon-GAI/modules/dataset.py b/spaces/Fazzie/Pokemon-GAI/modules/dataset.py
deleted file mode 100644
index 72da07a35849498e0217b04cffb25274b96fd9b2..0000000000000000000000000000000000000000
--- a/spaces/Fazzie/Pokemon-GAI/modules/dataset.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import os
-from random import choices, randint
-from typing import cast, Dict, List, Optional, TypedDict
-import h5py
-
-
-datasets_dir: str = './datasets'
-datasets_file: str = 'pregenerated_pokemon.h5'
-h5_file: str = os.path.join(datasets_dir, datasets_file)
-
-
-class Stats(TypedDict):
- size_total: int
- size_mb: float
- size_counts: Dict[str, int]
-
-
-def get_stats(h5_file: str = h5_file) -> Stats:
- with h5py.File(h5_file, 'r') as datasets:
- return {
-            "size_total": sum(datasets[energy].size.item() for energy in datasets.keys()),
- "size_mb": round(os.path.getsize(h5_file) / 1024**2, 1),
- "size_counts": {key: datasets[key].size.item() for key in datasets.keys()},
- }
-
-
-energy_types: List[str] = ['colorless', 'darkness', 'dragon', 'fairy', 'fighting',
- 'fire', 'grass', 'lightning', 'metal', 'psychic', 'water']
-
-
-def get_image(energy: Optional[str] = None, row: Optional[int] = None) -> str:
- if not energy:
- energy = choices(energy_types)[0]
-
- with h5py.File(h5_file, 'r') as datasets:
-        if row is None:
- row = randint(0, datasets[energy].size - 1)
-
- return datasets[energy].asstr()[row][0]
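A common pitfall with optional integer arguments like `row` is that `if not row:` also triggers for `row == 0`, silently re-rolling a valid request for the first row. A sketch of the explicit `is None` check — `pick_row` is a hypothetical helper, not part of the module:

```python
from random import randint

def pick_row(row, size):
    """Honor an explicit row index; only pick a random one when row was truly omitted."""
    if row is None:               # `if not row:` would wrongly re-roll row 0
        row = randint(0, size - 1)
    return row
```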
diff --git a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/longcode/jpgd.cpp b/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/longcode/jpgd.cpp
deleted file mode 100644
index 36d06c8e9068570c3e7624895d474f33dbfe3d29..0000000000000000000000000000000000000000
--- a/spaces/Gmq-x/gpt-academic/crazy_functions/test_project/cpp/longcode/jpgd.cpp
+++ /dev/null
@@ -1,3276 +0,0 @@
-// jpgd.cpp - C++ class for JPEG decompression.
-// Public domain, Rich Geldreich
-// Last updated Apr. 16, 2011
-// Alex Evans: Linear memory allocator (taken from jpge.h).
-//
-// Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2.
-//
-// Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling.
-// Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain"
-// http://vision.ai.uiuc.edu/~dugad/research/dct/index.html
-
-#include "jpgd.h"
-#include <string.h>
-
-#include <assert.h>
-// BEGIN EPIC MOD
-#define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0
-// END EPIC MOD
-
-#ifdef _MSC_VER
-#pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable
-#endif
-
-// Set to 1 to enable freq. domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling).
-// This is slower, but results in higher quality on images with highly saturated colors.
-#define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1
-
-#define JPGD_TRUE (1)
-#define JPGD_FALSE (0)
-
-#define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b))
-#define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b))
-
-namespace jpgd {
-
- static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); }
- static inline void jpgd_free(void *p) { FMemory::Free(p); }
-
-// BEGIN EPIC MOD
-//@UE3 - use UE3 BGRA encoding instead of assuming RGBA
- // stolen from IImageWrapper.h
- enum ERGBFormatJPG
- {
- Invalid = -1,
- RGBA = 0,
- BGRA = 1,
- Gray = 2,
- };
- static ERGBFormatJPG jpg_format;
-// END EPIC MOD
-
- // DCT coefficients are stored in this sequence.
- static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 };
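`g_ZAG` maps each coefficient from zig-zag scan order to its raster position in the 8x8 block, so de-zigzagging is just a permutation scatter. A quick sanity sketch (the values are copied from the table above; `dezigzag` is an illustrative helper):

```python
g_ZAG = [0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,
         27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,
         44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63]

# Every raster position 0..63 appears exactly once, so the mapping is a permutation.
assert sorted(g_ZAG) == list(range(64))

def dezigzag(coeffs):
    """Scatter 64 coefficients from zig-zag scan order into an 8x8 raster-order block."""
    block = [0] * 64
    for scan_pos, raster_pos in enumerate(g_ZAG):
        block[raster_pos] = coeffs[scan_pos]
    return block
```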
-
- enum JPEG_MARKER
- {
- M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8,
- M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC,
- M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7,
- M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF,
- M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0
- };
-
- enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 };
-
-#define CONST_BITS 13
-#define PASS1_BITS 2
-#define SCALEDONE ((int32)1)
-
-#define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */
-#define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */
-#define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */
-#define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */
-#define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */
-#define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */
-#define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */
-#define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */
-#define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */
-#define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */
-#define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */
-#define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */
-
-#define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n))
-#define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n))
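`DESCALE` divides by 2^n with rounding, by adding half the divisor before the shift. A sketch in Python — the arithmetic shift behaves the same as the macro for the value ranges used here:

```python
def descale(x, n):
    """Divide x by 2**n with round-half-up, as the DESCALE macro does."""
    return (x + (1 << (n - 1))) >> n
```

`DESCALE_ZEROSHIFT` additionally folds in the +128 level shift (scaled by 2^n) before descaling, turning signed IDCT output into the unsigned 0..255 range in one step.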
-
-#define MULTIPLY(var, cnst) ((var) * (cnst))
-
-#define CLAMP(i) ((static_cast<uint>(i) > 255) ? (((~i) >> 31) & 0xFF) : (i))
-
- // Compiler creates a fast path 1D IDCT for X non-zero columns
-  template <int NONZERO_COLS>
- struct Row
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- // ACCESS_COL() will be optimized at compile time to either an array access, or 0.
-#define ACCESS_COL(x) (((x) < NONZERO_COLS) ? (int)pSrc[x] : 0)
-
- const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6);
-
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
-
- const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS;
- const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS;
-
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
-
- const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1);
-
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
-
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
-
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
-
- pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS);
- pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS);
- pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS);
- pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS);
- pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS);
- pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS);
- pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS);
- pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS);
- }
- };
-
- template <>
- struct Row<0>
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
-#ifdef _MSC_VER
- pTemp; pSrc;
-#endif
- }
- };
-
- template <>
- struct Row<1>
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- const int dcval = (pSrc[0] << PASS1_BITS);
-
- pTemp[0] = dcval;
- pTemp[1] = dcval;
- pTemp[2] = dcval;
- pTemp[3] = dcval;
- pTemp[4] = dcval;
- pTemp[5] = dcval;
- pTemp[6] = dcval;
- pTemp[7] = dcval;
- }
- };
-
- // Compiler creates a fast path 1D IDCT for X non-zero rows
-  template <int NONZERO_ROWS>
- struct Col
- {
- static void idct(uint8* pDst_ptr, const int* pTemp)
- {
- // ACCESS_ROW() will be optimized at compile time to either an array access, or 0.
-#define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? pTemp[x * 8] : 0)
-
- const int z2 = ACCESS_ROW(2);
- const int z3 = ACCESS_ROW(6);
-
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
-
- const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS;
- const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS;
-
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
-
- const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1);
-
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
-
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
-
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
-
- int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*0] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*7] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*1] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*6] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*2] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*5] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*3] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*4] = (uint8)CLAMP(i);
- }
- };
-
- template <>
- struct Col<1>
- {
- static void idct(uint8* pDst_ptr, const int* pTemp)
- {
- int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3);
- const uint8 dcval_clamped = (uint8)CLAMP(dcval);
- pDst_ptr[0*8] = dcval_clamped;
- pDst_ptr[1*8] = dcval_clamped;
- pDst_ptr[2*8] = dcval_clamped;
- pDst_ptr[3*8] = dcval_clamped;
- pDst_ptr[4*8] = dcval_clamped;
- pDst_ptr[5*8] = dcval_clamped;
- pDst_ptr[6*8] = dcval_clamped;
- pDst_ptr[7*8] = dcval_clamped;
- }
- };
-
- static const uint8 s_idct_row_table[] =
- {
- 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0,
- 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0,
- 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0,
- 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0,
- 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2,
- 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2,
- 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4,
- 8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8,
- };
-
- static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 };
-
- void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag)
- {
- JPGD_ASSERT(block_max_zag >= 1);
- JPGD_ASSERT(block_max_zag <= 64);
-
- if (block_max_zag == 1)
- {
- int k = ((pSrc_ptr[0] + 4) >> 3) + 128;
- k = CLAMP(k);
- k = k | (k<<8);
- k = k | (k<<16);
-
- for (int i = 8; i > 0; i--)
- {
- *(int*)&pDst_ptr[0] = k;
- *(int*)&pDst_ptr[4] = k;
- pDst_ptr += 8;
- }
- return;
- }
-
- int temp[64];
-
- const jpgd_block_t* pSrc = pSrc_ptr;
- int* pTemp = temp;
-
- const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8];
- int i;
- for (i = 8; i > 0; i--, pRow_tab++)
- {
- switch (*pRow_tab)
- {
- case 0: Row<0>::idct(pTemp, pSrc); break;
- case 1: Row<1>::idct(pTemp, pSrc); break;
- case 2: Row<2>::idct(pTemp, pSrc); break;
- case 3: Row<3>::idct(pTemp, pSrc); break;
- case 4: Row<4>::idct(pTemp, pSrc); break;
- case 5: Row<5>::idct(pTemp, pSrc); break;
- case 6: Row<6>::idct(pTemp, pSrc); break;
- case 7: Row<7>::idct(pTemp, pSrc); break;
- case 8: Row<8>::idct(pTemp, pSrc); break;
- }
-
- pSrc += 8;
- pTemp += 8;
- }
-
- pTemp = temp;
-
- const int nonzero_rows = s_idct_col_table[block_max_zag - 1];
- for (i = 8; i > 0; i--)
- {
- switch (nonzero_rows)
- {
- case 1: Col<1>::idct(pDst_ptr, pTemp); break;
- case 2: Col<2>::idct(pDst_ptr, pTemp); break;
- case 3: Col<3>::idct(pDst_ptr, pTemp); break;
- case 4: Col<4>::idct(pDst_ptr, pTemp); break;
- case 5: Col<5>::idct(pDst_ptr, pTemp); break;
- case 6: Col<6>::idct(pDst_ptr, pTemp); break;
- case 7: Col<7>::idct(pDst_ptr, pTemp); break;
- case 8: Col<8>::idct(pDst_ptr, pTemp); break;
- }
-
- pTemp++;
- pDst_ptr++;
- }
- }
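The `block_max_zag == 1` fast path in `idct` above exploits the fact that a DC-only block inverse-transforms to a constant: descale the DC coefficient by 8 with rounding, level-shift by 128, clamp, and fill all 64 pixels. A sketch of that shortcut (helper name `dc_only_idct` is illustrative; a plain min/max clamp stands in for the bit trick):

```python
def dc_only_idct(dc):
    """IDCT of a block whose only non-zero coefficient is DC: a constant 8x8 block."""
    k = ((dc + 4) >> 3) + 128      # descale by 8 with rounding, then level-shift
    k = max(0, min(255, k))        # clamp to the uint8 range
    return [[k] * 8 for _ in range(8)]
```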
-
- void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr)
- {
- int temp[64];
- int* pTemp = temp;
- const jpgd_block_t* pSrc = pSrc_ptr;
-
- for (int i = 4; i > 0; i--)
- {
- Row<4>::idct(pTemp, pSrc);
- pSrc += 8;
- pTemp += 8;
- }
-
- pTemp = temp;
- for (int i = 8; i > 0; i--)
- {
- Col<4>::idct(pDst_ptr, pTemp);
- pTemp++;
- pDst_ptr++;
- }
- }
-
- // Retrieve one character from the input stream.
- inline uint jpeg_decoder::get_char()
- {
- // Any bytes remaining in buffer?
- if (!m_in_buf_left)
- {
- // Try to get more bytes.
- prep_in_buffer();
- // Still nothing to get?
- if (!m_in_buf_left)
- {
- // Pad the end of the stream with 0xFF 0xD9 (EOI marker)
- int t = m_tem_flag;
- m_tem_flag ^= 1;
- if (t)
- return 0xD9;
- else
- return 0xFF;
- }
- }
-
- uint c = *m_pIn_buf_ofs++;
- m_in_buf_left--;
-
- return c;
- }
-
- // Same as previous method, except can indicate if the character is a pad character or not.
- inline uint jpeg_decoder::get_char(bool *pPadding_flag)
- {
- if (!m_in_buf_left)
- {
- prep_in_buffer();
- if (!m_in_buf_left)
- {
- *pPadding_flag = true;
- int t = m_tem_flag;
- m_tem_flag ^= 1;
- if (t)
- return 0xD9;
- else
- return 0xFF;
- }
- }
-
- *pPadding_flag = false;
-
- uint c = *m_pIn_buf_ofs++;
- m_in_buf_left--;
-
- return c;
- }
-
- // Inserts a previously retrieved character back into the input buffer.
- inline void jpeg_decoder::stuff_char(uint8 q)
- {
- *(--m_pIn_buf_ofs) = q;
- m_in_buf_left++;
- }
-
- // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered.
- inline uint8 jpeg_decoder::get_octet()
- {
- bool padding_flag;
- int c = get_char(&padding_flag);
-
- if (c == 0xFF)
- {
- if (padding_flag)
- return 0xFF;
-
- c = get_char(&padding_flag);
- if (padding_flag)
- {
- stuff_char(0xFF);
- return 0xFF;
- }
-
- if (c == 0x00)
- return 0xFF;
- else
- {
-        stuff_char(static_cast<uint8>(c));
- stuff_char(0xFF);
- return 0xFF;
- }
- }
-
-    return static_cast<uint8>(c);
- }
-
- // Retrieves a variable number of bits from the input stream. Does not recognize markers.
- inline uint jpeg_decoder::get_bits(int num_bits)
- {
- if (!num_bits)
- return 0;
-
- uint i = m_bit_buf >> (32 - num_bits);
-
- if ((m_bits_left -= num_bits) <= 0)
- {
- m_bit_buf <<= (num_bits += m_bits_left);
-
- uint c1 = get_char();
- uint c2 = get_char();
- m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2;
-
- m_bit_buf <<= -m_bits_left;
-
- m_bits_left += 16;
-
- JPGD_ASSERT(m_bits_left >= 0);
- }
- else
- m_bit_buf <<= num_bits;
-
- return i;
- }
-
- // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered.
- inline uint jpeg_decoder::get_bits_no_markers(int num_bits)
- {
- if (!num_bits)
- return 0;
-
- uint i = m_bit_buf >> (32 - num_bits);
-
- if ((m_bits_left -= num_bits) <= 0)
- {
- m_bit_buf <<= (num_bits += m_bits_left);
-
- if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF))
- {
- uint c1 = get_octet();
- uint c2 = get_octet();
- m_bit_buf |= (c1 << 8) | c2;
- }
- else
- {
- m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1];
- m_in_buf_left -= 2;
- m_pIn_buf_ofs += 2;
- }
-
- m_bit_buf <<= -m_bits_left;
-
- m_bits_left += 16;
-
- JPGD_ASSERT(m_bits_left >= 0);
- }
- else
- m_bit_buf <<= num_bits;
-
- return i;
- }
-
- // Decodes a Huffman encoded symbol.
- inline int jpeg_decoder::huff_decode(huff_tables *pH)
- {
- int symbol;
-
- // Check first 8-bits: do we have a complete symbol?
- if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0)
- {
- // Decode more bits, use a tree traversal to find symbol.
- int ofs = 23;
- do
- {
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
- ofs--;
- } while (symbol < 0);
-
- get_bits_no_markers(8 + (23 - ofs));
- }
- else
- get_bits_no_markers(pH->code_size[symbol]);
-
- return symbol;
- }
-
- // Decodes a Huffman encoded symbol.
- inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits)
- {
- int symbol;
-
- // Check first 8-bits: do we have a complete symbol?
- if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0)
- {
- // Use a tree traversal to find symbol.
- int ofs = 23;
- do
- {
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
- ofs--;
- } while (symbol < 0);
-
- get_bits_no_markers(8 + (23 - ofs));
-
- extra_bits = get_bits_no_markers(symbol & 0xF);
- }
- else
- {
- JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? (symbol & 15) : 0));
-
- if (symbol & 0x8000)
- {
- get_bits_no_markers((symbol >> 8) & 31);
- extra_bits = symbol >> 16;
- }
- else
- {
- int code_size = (symbol >> 8) & 31;
- int num_extra_bits = symbol & 0xF;
- int bits = code_size + num_extra_bits;
- if (bits <= (m_bits_left + 16))
- extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1);
- else
- {
- get_bits_no_markers(code_size);
- extra_bits = get_bits_no_markers(num_extra_bits);
- }
- }
-
- symbol &= 0xFF;
- }
-
- return symbol;
- }
-
- // Tables and macro used to fully decode the DPCM differences.
- static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 };
- static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 };
- static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) };
-#define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x))
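`HUFF_EXTEND` performs JPEG's EXTEND procedure: an `s`-bit magnitude whose top bit is clear decodes to a negative value. The tables reduce it to one compare and one add; a sketch mirroring `s_extend_test` (`2**(s-1)`) and `s_extend_offset` (`-(2**s - 1)`):

```python
def huff_extend(x, s):
    """Sign-extend an s-bit JPEG coefficient: values below 2**(s-1) are negative."""
    if s == 0:
        return x
    test = 1 << (s - 1)           # s_extend_test[s]
    offset = -(1 << s) + 1        # s_extend_offset[s]
    return x + offset if x < test else x
```

For `s = 3`, the received codes 0..3 decode to -7..-4 and codes 4..7 decode to 4..7, so every non-zero magnitude class covers a symmetric positive/negative range.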
-
- // Clamps a value between 0-255.
- inline uint8 jpeg_decoder::clamp(int i)
- {
-    if (static_cast<uint>(i) > 255)
-      i = (((~i) >> 31) & 0xFF);
-
-    return static_cast<uint8>(i);
- }
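The bit trick in `clamp` above relies on `(~i) >> 31` being 0 for negative `i` and all-ones for `i > 255` under 32-bit arithmetic shifts. Python's `~` and `>>` behave the same way on signed integers in this range, so the trick can be sketched directly (`clamp_u8` is an illustrative name):

```python
def clamp_u8(i):
    """Clamp a signed 32-bit value to 0..255 with the same bit trick as jpeg_decoder::clamp."""
    if not 0 <= i <= 255:          # the C code tests (unsigned)i > 255
        i = ((~i) >> 31) & 0xFF    # 0 for negative i, 255 for i > 255
    return i
```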
-
- namespace DCT_Upsample
- {
- struct Matrix44
- {
- typedef int Element_Type;
- enum { NUM_ROWS = 4, NUM_COLS = 4 };
-
- Element_Type v[NUM_ROWS][NUM_COLS];
-
- inline int rows() const { return NUM_ROWS; }
- inline int cols() const { return NUM_COLS; }
-
- inline const Element_Type & at(int r, int c) const { return v[r][c]; }
- inline Element_Type & at(int r, int c) { return v[r][c]; }
-
- inline Matrix44() { }
-
- inline Matrix44& operator += (const Matrix44& a)
- {
- for (int r = 0; r < NUM_ROWS; r++)
- {
- at(r, 0) += a.at(r, 0);
- at(r, 1) += a.at(r, 1);
- at(r, 2) += a.at(r, 2);
- at(r, 3) += a.at(r, 3);
- }
- return *this;
- }
-
- inline Matrix44& operator -= (const Matrix44& a)
- {
- for (int r = 0; r < NUM_ROWS; r++)
- {
- at(r, 0) -= a.at(r, 0);
- at(r, 1) -= a.at(r, 1);
- at(r, 2) -= a.at(r, 2);
- at(r, 3) -= a.at(r, 3);
- }
- return *this;
- }
-
- friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b)
- {
- Matrix44 ret;
- for (int r = 0; r < NUM_ROWS; r++)
- {
- ret.at(r, 0) = a.at(r, 0) + b.at(r, 0);
- ret.at(r, 1) = a.at(r, 1) + b.at(r, 1);
- ret.at(r, 2) = a.at(r, 2) + b.at(r, 2);
- ret.at(r, 3) = a.at(r, 3) + b.at(r, 3);
- }
- return ret;
- }
-
- friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b)
- {
- Matrix44 ret;
- for (int r = 0; r < NUM_ROWS; r++)
- {
- ret.at(r, 0) = a.at(r, 0) - b.at(r, 0);
- ret.at(r, 1) = a.at(r, 1) - b.at(r, 1);
- ret.at(r, 2) = a.at(r, 2) - b.at(r, 2);
- ret.at(r, 3) = a.at(r, 3) - b.at(r, 3);
- }
- return ret;
- }
-
- static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
- {
- for (int r = 0; r < 4; r++)
- {
-          pDst[0*8 + r] = static_cast<jpgd_block_t>(a.at(r, 0) + b.at(r, 0));
-          pDst[1*8 + r] = static_cast<jpgd_block_t>(a.at(r, 1) + b.at(r, 1));
-          pDst[2*8 + r] = static_cast<jpgd_block_t>(a.at(r, 2) + b.at(r, 2));
-          pDst[3*8 + r] = static_cast<jpgd_block_t>(a.at(r, 3) + b.at(r, 3));
- }
- }
-
- static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
- {
- for (int r = 0; r < 4; r++)
- {
-          pDst[0*8 + r] = static_cast<jpgd_block_t>(a.at(r, 0) - b.at(r, 0));
-          pDst[1*8 + r] = static_cast<jpgd_block_t>(a.at(r, 1) - b.at(r, 1));
-          pDst[2*8 + r] = static_cast<jpgd_block_t>(a.at(r, 2) - b.at(r, 2));
-          pDst[3*8 + r] = static_cast<jpgd_block_t>(a.at(r, 3) - b.at(r, 3));
- }
- }
- };
-
- const int FRACT_BITS = 10;
- const int SCALE = 1 << FRACT_BITS;
-
- typedef int Temp_Type;
-#define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS)
-#define F(i) ((int)((i) * SCALE + .5f))
-
- // Any decent C++ compiler will optimize this at compile time to a 0, or an array access.
-#define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8])
-
- // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix
- template <int NUM_ROWS, int NUM_COLS>
- struct P_Q
- {
- static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc)
- {
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
- const Temp_Type X000 = AT(0, 0);
- const Temp_Type X001 = AT(0, 1);
- const Temp_Type X002 = AT(0, 2);
- const Temp_Type X003 = AT(0, 3);
- const Temp_Type X004 = AT(0, 4);
- const Temp_Type X005 = AT(0, 5);
- const Temp_Type X006 = AT(0, 6);
- const Temp_Type X007 = AT(0, 7);
- const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0));
- const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1));
- const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2));
- const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3));
- const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4));
- const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5));
- const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6));
- const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7));
- const Temp_Type X020 = AT(4, 0);
- const Temp_Type X021 = AT(4, 1);
- const Temp_Type X022 = AT(4, 2);
- const Temp_Type X023 = AT(4, 3);
- const Temp_Type X024 = AT(4, 4);
- const Temp_Type X025 = AT(4, 5);
- const Temp_Type X026 = AT(4, 6);
- const Temp_Type X027 = AT(4, 7);
- const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0));
- const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1));
- const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2));
- const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3));
- const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4));
- const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5));
- const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6));
- const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7));
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- P.at(0, 0) = X000;
- P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f));
- P.at(0, 2) = X004;
- P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * F(0.490393f) + X007 * F(0.865723f));
- P.at(1, 0) = X010;
- P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f));
- P.at(1, 2) = X014;
- P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f));
- P.at(2, 0) = X020;
- P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f));
- P.at(2, 2) = X024;
- P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f));
- P.at(3, 0) = X030;
- P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f));
- P.at(3, 2) = X034;
- P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f));
- // 40 muls 24 adds
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f));
- Q.at(0, 1) = X002;
- Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f));
- Q.at(0, 3) = X006;
- Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f));
- Q.at(1, 1) = X012;
- Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f));
- Q.at(1, 3) = X016;
- Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f));
- Q.at(2, 1) = X022;
- Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f));
- Q.at(2, 3) = X026;
- Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f));
- Q.at(3, 1) = X032;
- Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f));
- Q.at(3, 3) = X036;
- // 40 muls 24 adds
- }
- };
-
- template <int NUM_ROWS, int NUM_COLS>
- struct R_S
- {
- static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc)
- {
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
- const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0));
- const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1));
- const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2));
- const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3));
- const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4));
- const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5));
- const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6));
- const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7));
- const Temp_Type X110 = AT(2, 0);
- const Temp_Type X111 = AT(2, 1);
- const Temp_Type X112 = AT(2, 2);
- const Temp_Type X113 = AT(2, 3);
- const Temp_Type X114 = AT(2, 4);
- const Temp_Type X115 = AT(2, 5);
- const Temp_Type X116 = AT(2, 6);
- const Temp_Type X117 = AT(2, 7);
- const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0));
- const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1));
- const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2));
- const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3));
- const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4));
- const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5));
- const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6));
- const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7));
- const Temp_Type X130 = AT(6, 0);
- const Temp_Type X131 = AT(6, 1);
- const Temp_Type X132 = AT(6, 2);
- const Temp_Type X133 = AT(6, 3);
- const Temp_Type X134 = AT(6, 4);
- const Temp_Type X135 = AT(6, 5);
- const Temp_Type X136 = AT(6, 6);
- const Temp_Type X137 = AT(6, 7);
- // 80 muls 48 adds
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- R.at(0, 0) = X100;
- R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f));
- R.at(0, 2) = X104;
- R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f));
- R.at(1, 0) = X110;
- R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f));
- R.at(1, 2) = X114;
- R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f));
- R.at(2, 0) = X120;
- R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f));
- R.at(2, 2) = X124;
- R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f));
- R.at(3, 0) = X130;
- R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f));
- R.at(3, 2) = X134;
- R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f));
- // 40 muls 24 adds
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f));
- S.at(0, 1) = X102;
- S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f));
- S.at(0, 3) = X106;
- S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f));
- S.at(1, 1) = X112;
- S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f));
- S.at(1, 3) = X116;
- S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f));
- S.at(2, 1) = X122;
- S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f));
- S.at(2, 3) = X126;
- S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f));
- S.at(3, 1) = X132;
- S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f));
- S.at(3, 3) = X136;
- // 40 muls 24 adds
- }
- };
- } // end namespace DCT_Upsample
-
- // Unconditionally frees all allocated m_blocks.
- void jpeg_decoder::free_all_blocks()
- {
- m_pStream = NULL;
- for (mem_block *b = m_pMem_blocks; b; )
- {
- mem_block *n = b->m_pNext;
- jpgd_free(b);
- b = n;
- }
- m_pMem_blocks = NULL;
- }
-
- // This method handles all errors.
- // It could easily be changed to use C++ exceptions.
- void jpeg_decoder::stop_decoding(jpgd_status status)
- {
- m_error_code = status;
- free_all_blocks();
- longjmp(m_jmp_state, status);
-
- // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit
- // that this function doesn't return, otherwise we get this error:
- //
- // error : function declared 'noreturn' should not return
- exit(1);
- }
-
- void *jpeg_decoder::alloc(size_t nSize, bool zero)
- {
- nSize = (JPGD_MAX(nSize, 1) + 3) & ~3;
- char *rv = NULL;
- for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext)
- {
- if ((b->m_used_count + nSize) <= b->m_size)
- {
- rv = b->m_data + b->m_used_count;
- b->m_used_count += nSize;
- break;
- }
- }
- if (!rv)
- {
- int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047);
- mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity);
- if (!b) stop_decoding(JPGD_NOTENOUGHMEM);
- b->m_pNext = m_pMem_blocks; m_pMem_blocks = b;
- b->m_used_count = nSize;
- b->m_size = capacity;
- rv = b->m_data;
- }
- if (zero) memset(rv, 0, nSize);
- return rv;
- }
-
- void jpeg_decoder::word_clear(void *p, uint16 c, uint n)
- {
- uint8 *pD = (uint8*)p;
- const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF;
- while (n)
- {
- pD[0] = l; pD[1] = h; pD += 2;
- n--;
- }
- }
-
- // Refill the input buffer.
- // This method will sit in a loop until (A) the buffer is full or (B)
- // the stream's read() method reports an end of file condition.
- void jpeg_decoder::prep_in_buffer()
- {
- m_in_buf_left = 0;
- m_pIn_buf_ofs = m_in_buf;
-
- if (m_eof_flag)
- return;
-
- do
- {
- int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag);
- if (bytes_read == -1)
- stop_decoding(JPGD_STREAM_READ);
-
- m_in_buf_left += bytes_read;
- } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag));
-
- m_total_bytes_read += m_in_buf_left;
-
- // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid).
- // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.)
- word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64);
- }
-
- // Read a Huffman code table.
- void jpeg_decoder::read_dht_marker()
- {
- int i, index, count;
- uint8 huff_num[17];
- uint8 huff_val[256];
-
- uint num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_DHT_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- index = get_bits(8);
-
- huff_num[0] = 0;
-
- count = 0;
-
- for (i = 1; i <= 16; i++)
- {
- huff_num[i] = static_cast<uint8>(get_bits(8));
- count += huff_num[i];
- }
-
- if (count > 255)
- stop_decoding(JPGD_BAD_DHT_COUNTS);
-
- for (i = 0; i < count; i++)
- huff_val[i] = static_cast<uint8>(get_bits(8));
-
- i = 1 + 16 + count;
-
- if (num_left < (uint)i)
- stop_decoding(JPGD_BAD_DHT_MARKER);
-
- num_left -= i;
-
- if ((index & 0x10) > 0x10)
- stop_decoding(JPGD_BAD_DHT_INDEX);
-
- index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1);
-
- if (index >= JPGD_MAX_HUFF_TABLES)
- stop_decoding(JPGD_BAD_DHT_INDEX);
-
- if (!m_huff_num[index])
- m_huff_num[index] = (uint8 *)alloc(17);
-
- if (!m_huff_val[index])
- m_huff_val[index] = (uint8 *)alloc(256);
-
- m_huff_ac[index] = (index & 0x10) != 0;
- memcpy(m_huff_num[index], huff_num, 17);
- memcpy(m_huff_val[index], huff_val, 256);
- }
- }
-
- // Read a quantization table.
- void jpeg_decoder::read_dqt_marker()
- {
- int n, i, prec;
- uint num_left;
- uint temp;
-
- num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_DQT_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- n = get_bits(8);
- prec = n >> 4;
- n &= 0x0F;
-
- if (n >= JPGD_MAX_QUANT_TABLES)
- stop_decoding(JPGD_BAD_DQT_TABLE);
-
- if (!m_quant[n])
- m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t));
-
- // read quantization entries, in zag order
- for (i = 0; i < 64; i++)
- {
- temp = get_bits(8);
-
- if (prec)
- temp = (temp << 8) + get_bits(8);
-
- m_quant[n][i] = static_cast<jpgd_quant_t>(temp);
- }
-
- i = 64 + 1;
-
- if (prec)
- i += 64;
-
- if (num_left < (uint)i)
- stop_decoding(JPGD_BAD_DQT_LENGTH);
-
- num_left -= i;
- }
- }
-
- // Read the start of frame (SOF) marker.
- void jpeg_decoder::read_sof_marker()
- {
- int i;
- uint num_left;
-
- num_left = get_bits(16);
-
- if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */
- stop_decoding(JPGD_BAD_PRECISION);
-
- m_image_y_size = get_bits(16);
-
- if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT))
- stop_decoding(JPGD_BAD_HEIGHT);
-
- m_image_x_size = get_bits(16);
-
- if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH))
- stop_decoding(JPGD_BAD_WIDTH);
-
- m_comps_in_frame = get_bits(8);
-
- if (m_comps_in_frame > JPGD_MAX_COMPONENTS)
- stop_decoding(JPGD_TOO_MANY_COMPONENTS);
-
- if (num_left != (uint)(m_comps_in_frame * 3 + 8))
- stop_decoding(JPGD_BAD_SOF_LENGTH);
-
- for (i = 0; i < m_comps_in_frame; i++)
- {
- m_comp_ident[i] = get_bits(8);
- m_comp_h_samp[i] = get_bits(4);
- m_comp_v_samp[i] = get_bits(4);
- m_comp_quant[i] = get_bits(8);
- }
- }
-
- // Used to skip unrecognized markers.
- void jpeg_decoder::skip_variable_marker()
- {
- uint num_left;
-
- num_left = get_bits(16);
-
- if (num_left < 2)
- stop_decoding(JPGD_BAD_VARIABLE_MARKER);
-
- num_left -= 2;
-
- while (num_left)
- {
- get_bits(8);
- num_left--;
- }
- }
-
- // Read a define restart interval (DRI) marker.
- void jpeg_decoder::read_dri_marker()
- {
- if (get_bits(16) != 4)
- stop_decoding(JPGD_BAD_DRI_LENGTH);
-
- m_restart_interval = get_bits(16);
- }
-
- // Read a start of scan (SOS) marker.
- void jpeg_decoder::read_sos_marker()
- {
- uint num_left;
- int i, ci, n, c, cc;
-
- num_left = get_bits(16);
-
- n = get_bits(8);
-
- m_comps_in_scan = n;
-
- num_left -= 3;
-
- if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) )
- stop_decoding(JPGD_BAD_SOS_LENGTH);
-
- for (i = 0; i < n; i++)
- {
- cc = get_bits(8);
- c = get_bits(8);
- num_left -= 2;
-
- for (ci = 0; ci < m_comps_in_frame; ci++)
- if (cc == m_comp_ident[ci])
- break;
-
- if (ci >= m_comps_in_frame)
- stop_decoding(JPGD_BAD_SOS_COMP_ID);
-
- m_comp_list[i] = ci;
- m_comp_dc_tab[ci] = (c >> 4) & 15;
- m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1);
- }
-
- m_spectral_start = get_bits(8);
- m_spectral_end = get_bits(8);
- m_successive_high = get_bits(4);
- m_successive_low = get_bits(4);
-
- if (!m_progressive_flag)
- {
- m_spectral_start = 0;
- m_spectral_end = 63;
- }
-
- num_left -= 3;
-
- while (num_left) /* read past whatever is num_left */
- {
- get_bits(8);
- num_left--;
- }
- }
-
- // Finds the next marker.
- int jpeg_decoder::next_marker()
- {
- uint c, bytes;
-
- bytes = 0;
-
- do
- {
- do
- {
- bytes++;
- c = get_bits(8);
- } while (c != 0xFF);
-
- do
- {
- c = get_bits(8);
- } while (c == 0xFF);
-
- } while (c == 0);
-
- // If bytes > 0 here, there were extra bytes before the marker (not good).
-
- return c;
- }
-
- // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is
- // encountered.
- int jpeg_decoder::process_markers()
- {
- int c;
-
- for ( ; ; )
- {
- c = next_marker();
-
- switch (c)
- {
- case M_SOF0:
- case M_SOF1:
- case M_SOF2:
- case M_SOF3:
- case M_SOF5:
- case M_SOF6:
- case M_SOF7:
- // case M_JPG:
- case M_SOF9:
- case M_SOF10:
- case M_SOF11:
- case M_SOF13:
- case M_SOF14:
- case M_SOF15:
- case M_SOI:
- case M_EOI:
- case M_SOS:
- {
- return c;
- }
- case M_DHT:
- {
- read_dht_marker();
- break;
- }
- // No arithmetic support - dumb patents!
- case M_DAC:
- {
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
- break;
- }
- case M_DQT:
- {
- read_dqt_marker();
- break;
- }
- case M_DRI:
- {
- read_dri_marker();
- break;
- }
- //case M_APP0: /* no need to read the JFIF marker */
-
- case M_JPG:
- case M_RST0: /* no parameters */
- case M_RST1:
- case M_RST2:
- case M_RST3:
- case M_RST4:
- case M_RST5:
- case M_RST6:
- case M_RST7:
- case M_TEM:
- {
- stop_decoding(JPGD_UNEXPECTED_MARKER);
- break;
- }
- default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, or RESn or APP0 */
- {
- skip_variable_marker();
- break;
- }
- }
- }
- }
-
- // Finds the start of image (SOI) marker.
- // This code is rather defensive: it only checks the first 4096 bytes to avoid
- // false positives.
- void jpeg_decoder::locate_soi_marker()
- {
- uint lastchar, thischar;
- uint bytesleft;
-
- lastchar = get_bits(8);
-
- thischar = get_bits(8);
-
- /* ok if it's a normal JPEG file without a special header */
-
- if ((lastchar == 0xFF) && (thischar == M_SOI))
- return;
-
- bytesleft = 4096; //512;
-
- for ( ; ; )
- {
- if (--bytesleft == 0)
- stop_decoding(JPGD_NOT_JPEG);
-
- lastchar = thischar;
-
- thischar = get_bits(8);
-
- if (lastchar == 0xFF)
- {
- if (thischar == M_SOI)
- break;
- else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end
- stop_decoding(JPGD_NOT_JPEG);
- }
- }
-
- // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad.
- thischar = (m_bit_buf >> 24) & 0xFF;
-
- if (thischar != 0xFF)
- stop_decoding(JPGD_NOT_JPEG);
- }
-
- // Find a start of frame (SOF) marker.
- void jpeg_decoder::locate_sof_marker()
- {
- locate_soi_marker();
-
- int c = process_markers();
-
- switch (c)
- {
- case M_SOF2:
- m_progressive_flag = JPGD_TRUE;
- // fall through
- case M_SOF0: /* baseline DCT */
- case M_SOF1: /* extended sequential DCT */
- {
- read_sof_marker();
- break;
- }
- case M_SOF9: /* Arithmetic coding */
- {
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
- break;
- }
- default:
- {
- stop_decoding(JPGD_UNSUPPORTED_MARKER);
- break;
- }
- }
- }
-
- // Find a start of scan (SOS) marker.
- int jpeg_decoder::locate_sos_marker()
- {
- int c;
-
- c = process_markers();
-
- if (c == M_EOI)
- return JPGD_FALSE;
- else if (c != M_SOS)
- stop_decoding(JPGD_UNEXPECTED_MARKER);
-
- read_sos_marker();
-
- return JPGD_TRUE;
- }
-
- // Reset everything to default/uninitialized state.
- void jpeg_decoder::init(jpeg_decoder_stream *pStream)
- {
- m_pMem_blocks = NULL;
- m_error_code = JPGD_SUCCESS;
- m_ready_flag = false;
- m_image_x_size = m_image_y_size = 0;
- m_pStream = pStream;
- m_progressive_flag = JPGD_FALSE;
-
- memset(m_huff_ac, 0, sizeof(m_huff_ac));
- memset(m_huff_num, 0, sizeof(m_huff_num));
- memset(m_huff_val, 0, sizeof(m_huff_val));
- memset(m_quant, 0, sizeof(m_quant));
-
- m_scan_type = 0;
- m_comps_in_frame = 0;
-
- memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp));
- memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp));
- memset(m_comp_quant, 0, sizeof(m_comp_quant));
- memset(m_comp_ident, 0, sizeof(m_comp_ident));
- memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks));
- memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks));
-
- m_comps_in_scan = 0;
- memset(m_comp_list, 0, sizeof(m_comp_list));
- memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab));
- memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab));
-
- m_spectral_start = 0;
- m_spectral_end = 0;
- m_successive_low = 0;
- m_successive_high = 0;
- m_max_mcu_x_size = 0;
- m_max_mcu_y_size = 0;
- m_blocks_per_mcu = 0;
- m_max_blocks_per_row = 0;
- m_mcus_per_row = 0;
- m_mcus_per_col = 0;
- m_expanded_blocks_per_component = 0;
- m_expanded_blocks_per_mcu = 0;
- m_expanded_blocks_per_row = 0;
- m_freq_domain_chroma_upsample = false;
-
- memset(m_mcu_org, 0, sizeof(m_mcu_org));
-
- m_total_lines_left = 0;
- m_mcu_lines_left = 0;
- m_real_dest_bytes_per_scan_line = 0;
- m_dest_bytes_per_scan_line = 0;
- m_dest_bytes_per_pixel = 0;
-
- memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs));
-
- memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs));
- memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs));
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- m_eob_run = 0;
-
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- m_pIn_buf_ofs = m_in_buf;
- m_in_buf_left = 0;
- m_eof_flag = false;
- m_tem_flag = 0;
-
- memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start));
- memset(m_in_buf, 0, sizeof(m_in_buf));
- memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end));
-
- m_restart_interval = 0;
- m_restarts_left = 0;
- m_next_restart_num = 0;
-
- m_max_mcus_per_row = 0;
- m_max_blocks_per_mcu = 0;
- m_max_mcus_per_col = 0;
-
- memset(m_last_dc_val, 0, sizeof(m_last_dc_val));
- m_pMCU_coefficients = NULL;
- m_pSample_buf = NULL;
-
- m_total_bytes_read = 0;
-
- m_pScan_line_0 = NULL;
- m_pScan_line_1 = NULL;
-
- // Ready the input buffer.
- prep_in_buffer();
-
- // Prime the bit buffer.
- m_bits_left = 16;
- m_bit_buf = 0;
-
- get_bits(16);
- get_bits(16);
-
- for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++)
- m_mcu_block_max_zag[i] = 64;
- }
-
-#define SCALEBITS 16
-#define ONE_HALF ((int) 1 << (SCALEBITS-1))
-#define FIX(x) ((int) ((x) * (1L<<SCALEBITS) + 0.5f))
-
- // Create a few tables that allow us to quickly convert YCbCr to RGB.
- void jpeg_decoder::create_look_ups()
- {
- for (int i = 0; i <= 255; i++)
- {
- int k = i - 128;
- m_crr[i] = ( FIX(1.40200f) * k + ONE_HALF) >> SCALEBITS;
- m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS;
- m_crg[i] = (-FIX(0.71414f)) * k;
- m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF;
- }
- }
-
- // This method throws back into the stream any bytes that were read
- // into the bit buffer during initial marker scanning.
- void jpeg_decoder::fix_in_buffer()
- {
- // In case any 0xFF's were pulled into the buffer during marker scanning.
- JPGD_ASSERT((m_bits_left & 7) == 0);
-
- if (m_bits_left == 16)
- stuff_char( (uint8)(m_bit_buf & 0xFF));
-
- if (m_bits_left >= 8)
- stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF));
-
- stuff_char((uint8)((m_bit_buf >> 16) & 0xFF));
- stuff_char((uint8)((m_bit_buf >> 24) & 0xFF));
-
- m_bits_left = 16;
- get_bits_no_markers(16);
- get_bits_no_markers(16);
- }
-
- void jpeg_decoder::transform_mcu(int mcu_row)
- {
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64;
-
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
- pSrc_ptr += 64;
- pDst_ptr += 64;
- }
- }
-
- static const uint8 s_max_rc[64] =
- {
- 17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86,
- 102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136,
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136,
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136
- };
-
- void jpeg_decoder::transform_mcu_expand(int mcu_row)
- {
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64;
-
- // Y IDCT
- int mcu_block;
- for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++)
- {
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
- pSrc_ptr += 64;
- pDst_ptr += 64;
- }
-
- // Chroma IDCT, with upsampling
- jpgd_block_t temp_block[64];
-
- for (int i = 0; i < 2; i++)
- {
- DCT_Upsample::Matrix44 P, Q, R, S;
-
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1);
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64);
-
- switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1])
- {
- case 1*16+1:
- DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr);
- break;
- case 1*16+2:
- DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr);
- break;
- case 2*16+2:
- DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+2:
- DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+3:
- DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr);
- break;
- case 3*16+4:
- DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr);
- break;
- case 4*16+4:
- DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+4:
- DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+5:
- DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr);
- break;
- case 5*16+6:
- DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr);
- break;
- case 6*16+6:
- DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+6:
- DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+7:
- DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr);
- break;
- case 7*16+8:
- DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr);
- break;
- case 8*16+8:
- DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr);
- DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr);
- break;
- default:
- JPGD_ASSERT(false);
- }
-
- DCT_Upsample::Matrix44 a(P + Q); P -= Q;
- DCT_Upsample::Matrix44& b = P;
- DCT_Upsample::Matrix44 c(R + S); R -= S;
- DCT_Upsample::Matrix44& d = R;
-
- DCT_Upsample::Matrix44::add_and_store(temp_block, a, c);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::add_and_store(temp_block, b, d);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d);
- idct_4x4(temp_block, pDst_ptr);
- pDst_ptr += 64;
-
- pSrc_ptr += 64;
- }
- }
-
- // Loads and dequantizes the next row of (already decoded) coefficients.
- // Progressive images only.
- void jpeg_decoder::load_next_row()
- {
- int i;
- jpgd_block_t *p;
- jpgd_quant_t *q;
- int mcu_row, mcu_block, row_block = 0;
- int component_num, component_id;
- int block_x_mcu[JPGD_MAX_COMPONENTS];
-
- memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int));
-
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
-
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- component_id = m_mcu_org[mcu_block];
- q = m_quant[m_comp_quant[component_id]];
-
- p = m_pMCU_coefficients + 64 * mcu_block;
-
- jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
- jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
- p[0] = pDC[0];
- memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t));
-
- for (i = 63; i > 0; i--)
- if (p[g_ZAG[i]])
- break;
-
- m_mcu_block_max_zag[mcu_block] = i + 1;
-
- for ( ; i >= 0; i--)
- if (p[g_ZAG[i]])
- p[g_ZAG[i]] = static_cast(p[g_ZAG[i]] * q[i]);
-
- row_block++;
-
- if (m_comps_in_scan == 1)
- block_x_mcu[component_id]++;
- else
- {
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
- {
- block_x_mcu_ofs = 0;
-
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
- {
- block_y_mcu_ofs = 0;
-
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
- }
- }
- }
- }
-
- if (m_freq_domain_chroma_upsample)
- transform_mcu_expand(mcu_row);
- else
- transform_mcu(mcu_row);
- }
-
- if (m_comps_in_scan == 1)
- m_block_y_mcu[m_comp_list[0]]++;
- else
- {
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- component_id = m_comp_list[component_num];
-
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
- }
- }
- }
-
- // Restart interval processing.
- void jpeg_decoder::process_restart()
- {
- int i;
- int c = 0;
-
- // Align to a byte boundary
- // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers!
- //get_bits_no_markers(m_bits_left & 7);
-
- // Let's scan a little bit to find the marker, but not _too_ far.
- // 1536 is a "fudge factor" that determines how much to scan.
- for (i = 1536; i > 0; i--)
- if (get_char() == 0xFF)
- break;
-
- if (i == 0)
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- for ( ; i > 0; i--)
- if ((c = get_char()) != 0xFF)
- break;
-
- if (i == 0)
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- // Is it the expected marker? If not, something bad happened.
- if (c != (m_next_restart_num + M_RST0))
- stop_decoding(JPGD_BAD_RESTART_MARKER);
-
- // Reset each component's DC prediction values.
- memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
-
- m_eob_run = 0;
-
- m_restarts_left = m_restart_interval;
-
- m_next_restart_num = (m_next_restart_num + 1) & 7;
-
- // Get the bit buffer going again...
-
- m_bits_left = 16;
- get_bits_no_markers(16);
- get_bits_no_markers(16);
- }
-
- static inline int dequantize_ac(int c, int q) { c *= q; return c; }
-
- // Decodes and dequantizes the next row of coefficients.
- void jpeg_decoder::decode_next_row()
- {
- int row_block = 0;
-
- for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- if ((m_restart_interval) && (m_restarts_left == 0))
- process_restart();
-
- jpgd_block_t* p = m_pMCU_coefficients;
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64)
- {
- int component_id = m_mcu_org[mcu_block];
- jpgd_quant_t* q = m_quant[m_comp_quant[component_id]];
-
- int r, s;
- s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r);
- s = HUFF_EXTEND(r, s);
-
- m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]);
-
- p[0] = static_cast(s * q[0]);
-
- int prev_num_set = m_mcu_block_max_zag[mcu_block];
-
- huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]];
-
- int k;
- for (k = 1; k < 64; k++)
- {
- int extra_bits;
- s = huff_decode(pH, extra_bits);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if (r)
- {
- if ((k + r) > 63)
- stop_decoding(JPGD_DECODE_ERROR);
-
- if (k < prev_num_set)
- {
- int n = JPGD_MIN(r, prev_num_set - k);
- int kt = k;
- while (n--)
- p[g_ZAG[kt++]] = 0;
- }
-
- k += r;
- }
-
- s = HUFF_EXTEND(extra_bits, s);
-
- JPGD_ASSERT(k < 64);
-
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(dequantize_ac(s, q[k])); //s * q[k];
- }
- else
- {
- if (r == 15)
- {
- if ((k + 16) > 64)
- stop_decoding(JPGD_DECODE_ERROR);
-
- if (k < prev_num_set)
- {
- int n = JPGD_MIN(16, prev_num_set - k);
- int kt = k;
- while (n--)
- {
- JPGD_ASSERT(kt <= 63);
- p[g_ZAG[kt++]] = 0;
- }
- }
-
- k += 16 - 1; // - 1 because the loop counter is k
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0);
- // END EPIC MOD
- }
- else
- break;
- }
- }
-
- if (k < prev_num_set)
- {
- int kt = k;
- while (kt < prev_num_set)
- p[g_ZAG[kt++]] = 0;
- }
-
- m_mcu_block_max_zag[mcu_block] = k;
-
- row_block++;
- }
-
- if (m_freq_domain_chroma_upsample)
- transform_mcu_expand(mcu_row);
- else
- transform_mcu(mcu_row);
-
- m_restarts_left--;
- }
- }
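decode_next_row() relies on the HUFF_EXTEND macro to turn an s-bit magnitude field into a signed coefficient, which is then dequantized as in dequantize_ac(). A Python rendering of that sign-extension step (a sketch of the standard JPEG `Extend` procedure):

```python
def huff_extend(r, s):
    # JPEG's Extend(): r holds s raw bits; values below 2^(s-1) map to
    # negative coefficients.
    if s == 0:
        return 0
    return r if r >= (1 << (s - 1)) else r - (1 << s) + 1

def dequantize(coef, q):
    # Same as dequantize_ac(): multiply by the quantization table entry.
    return coef * q
```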
-
- // YCbCr H1V1 (1x1:1:1, 3 m_blocks per MCU) to RGB
- void jpeg_decoder::H1V1Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d = m_pScan_line_0;
- uint8 *s = m_pSample_buf + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int j = 0; j < 8; j++)
- {
- int y = s[j];
- int cb = s[64+j];
- int cr = s[128+j];
-
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d[0] = clamp(y + m_cbb[cb]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_crr[cr]);
- d[3] = 255;
- }
- else
- {
- d[0] = clamp(y + m_crr[cr]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_cbb[cb]);
- d[3] = 255;
- }
- d += 4;
- }
-
- s += 64*3;
- }
- }
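The clamp-and-add pattern above applies precomputed fixed-point tables (m_crr, m_crg, m_cbg, m_cbb, built in create_look_ups(), which is outside this excerpt). In floating point the underlying JFIF conversion is the familiar one below (a sketch; the fixed-point tables round slightly differently):

```python
def ycbcr_to_rgb(y, cb, cr):
    # JFIF YCbCr -> RGB; chroma components are centered at 128.
    clamp = lambda v: max(0, min(255, int(round(v))))
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return clamp(r), clamp(g), clamp(b)
```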
-
- // YCbCr H2V1 (2x1:1:1, 4 m_blocks per MCU) to RGB
- void jpeg_decoder::H2V1Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *y = m_pSample_buf + row * 8;
- uint8 *c = m_pSample_buf + 2*64 + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int l = 0; l < 2; l++)
- {
- for (int j = 0; j < 4; j++)
- {
- int cb = c[0];
- int cr = c[64];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j<<1];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[(j<<1)+1];
- d0[4] = clamp(yy+bc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+rc);
- d0[7] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[(j<<1)+1];
- d0[4] = clamp(yy+rc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+bc);
- d0[7] = 255;
- }
-
- d0 += 8;
-
- c++;
- }
- y += 64;
- }
-
- y += 64*4 - 64*2;
- c += 64*4 - 8;
- }
- }
-
- // YCbCr H1V2 (1x2:1:1, 4 m_blocks per MCU) to RGB
- void jpeg_decoder::H1V2Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *d1 = m_pScan_line_1;
- uint8 *y;
- uint8 *c;
-
- if (row < 8)
- y = m_pSample_buf + row * 8;
- else
- y = m_pSample_buf + 64*1 + (row & 7) * 8;
-
- c = m_pSample_buf + 64*2 + (row >> 1) * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int j = 0; j < 8; j++)
- {
- int cb = c[0+j];
- int cr = c[64+j];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[8+j];
- d1[0] = clamp(yy+bc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+rc);
- d1[3] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[8+j];
- d1[0] = clamp(yy+rc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+bc);
- d1[3] = 255;
- }
-
- d0 += 4;
- d1 += 4;
- }
-
- y += 64*4;
- c += 64*4;
- }
- }
-
- // YCbCr H2V2 (2x2:1:1, 6 m_blocks per MCU) to RGB
- void jpeg_decoder::H2V2Convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d0 = m_pScan_line_0;
- uint8 *d1 = m_pScan_line_1;
- uint8 *y;
- uint8 *c;
-
- if (row < 8)
- y = m_pSample_buf + row * 8;
- else
- y = m_pSample_buf + 64*2 + (row & 7) * 8;
-
- c = m_pSample_buf + 64*4 + (row >> 1) * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int l = 0; l < 2; l++)
- {
- for (int j = 0; j < 8; j += 2)
- {
- int cb = c[0];
- int cr = c[64];
-
- int rc = m_crr[cr];
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
- int bc = m_cbb[cb];
-
- int yy = y[j];
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d0[0] = clamp(yy+bc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+rc);
- d0[3] = 255;
- yy = y[j+1];
- d0[4] = clamp(yy+bc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+rc);
- d0[7] = 255;
- yy = y[j+8];
- d1[0] = clamp(yy+bc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+rc);
- d1[3] = 255;
- yy = y[j+8+1];
- d1[4] = clamp(yy+bc);
- d1[5] = clamp(yy+gc);
- d1[6] = clamp(yy+rc);
- d1[7] = 255;
- }
- else
- {
- d0[0] = clamp(yy+rc);
- d0[1] = clamp(yy+gc);
- d0[2] = clamp(yy+bc);
- d0[3] = 255;
- yy = y[j+1];
- d0[4] = clamp(yy+rc);
- d0[5] = clamp(yy+gc);
- d0[6] = clamp(yy+bc);
- d0[7] = 255;
- yy = y[j+8];
- d1[0] = clamp(yy+rc);
- d1[1] = clamp(yy+gc);
- d1[2] = clamp(yy+bc);
- d1[3] = 255;
- yy = y[j+8+1];
- d1[4] = clamp(yy+rc);
- d1[5] = clamp(yy+gc);
- d1[6] = clamp(yy+bc);
- d1[7] = 255;
- }
-
- d0 += 8;
- d1 += 8;
-
- c++;
- }
- y += 64;
- }
-
- y += 64*6 - 64*2;
- c += 64*6 - 8;
- }
- }
-
- // Y (1 block per MCU) to 8-bit grayscale
- void jpeg_decoder::gray_convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
- uint8 *d = m_pScan_line_0;
- uint8 *s = m_pSample_buf + row * 8;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- *(uint *)d = *(uint *)s;
- *(uint *)(&d[4]) = *(uint *)(&s[4]);
-
- s += 64;
- d += 8;
- }
- }
-
- void jpeg_decoder::expanded_convert()
- {
- int row = m_max_mcu_y_size - m_mcu_lines_left;
-
- uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8;
-
- uint8* d = m_pScan_line_0;
-
- for (int i = m_max_mcus_per_row; i > 0; i--)
- {
- for (int k = 0; k < m_max_mcu_x_size; k += 8)
- {
- const int Y_ofs = k * 8;
- const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component;
- const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2;
- for (int j = 0; j < 8; j++)
- {
- int y = Py[Y_ofs + j];
- int cb = Py[Cb_ofs + j];
- int cr = Py[Cr_ofs + j];
-
- if (jpg_format == ERGBFormatJPG::BGRA)
- {
- d[0] = clamp(y + m_cbb[cb]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_crr[cr]);
- d[3] = 255;
- }
- else
- {
- d[0] = clamp(y + m_crr[cr]);
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
- d[2] = clamp(y + m_cbb[cb]);
- d[3] = 255;
- }
-
- d += 4;
- }
- }
-
- Py += 64 * m_expanded_blocks_per_mcu;
- }
- }
-
- // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream.
- void jpeg_decoder::find_eoi()
- {
- if (!m_progressive_flag)
- {
- // Attempt to read the EOI marker.
- //get_bits_no_markers(m_bits_left & 7);
-
- // Prime the bit buffer
- m_bits_left = 16;
- get_bits(16);
- get_bits(16);
-
- // The next marker _should_ be EOI
- process_markers();
- }
-
- m_total_bytes_read -= m_in_buf_left;
- }
-
- int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len)
- {
- if ((m_error_code) || (!m_ready_flag))
- return JPGD_FAILED;
-
- if (m_total_lines_left == 0)
- return JPGD_DONE;
-
- if (m_mcu_lines_left == 0)
- {
- if (setjmp(m_jmp_state))
- return JPGD_FAILED;
-
- if (m_progressive_flag)
- load_next_row();
- else
- decode_next_row();
-
- // Find the EOI marker if that was the last row.
- if (m_total_lines_left <= m_max_mcu_y_size)
- find_eoi();
-
- m_mcu_lines_left = m_max_mcu_y_size;
- }
-
- if (m_freq_domain_chroma_upsample)
- {
- expanded_convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- {
- switch (m_scan_type)
- {
- case JPGD_YH2V2:
- {
- if ((m_mcu_lines_left & 1) == 0)
- {
- H2V2Convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- *pScan_line = m_pScan_line_1;
-
- break;
- }
- case JPGD_YH2V1:
- {
- H2V1Convert();
- *pScan_line = m_pScan_line_0;
- break;
- }
- case JPGD_YH1V2:
- {
- if ((m_mcu_lines_left & 1) == 0)
- {
- H1V2Convert();
- *pScan_line = m_pScan_line_0;
- }
- else
- *pScan_line = m_pScan_line_1;
-
- break;
- }
- case JPGD_YH1V1:
- {
- H1V1Convert();
- *pScan_line = m_pScan_line_0;
- break;
- }
- case JPGD_GRAYSCALE:
- {
- gray_convert();
- *pScan_line = m_pScan_line_0;
-
- break;
- }
- }
- }
-
- *pScan_line_len = m_real_dest_bytes_per_scan_line;
-
- m_mcu_lines_left--;
- m_total_lines_left--;
-
- return JPGD_SUCCESS;
- }
-
- // Creates the tables needed for efficient Huffman decoding.
- void jpeg_decoder::make_huff_table(int index, huff_tables *pH)
- {
- int p, i, l, si;
- uint8 huffsize[257];
- uint huffcode[257];
- uint code;
- uint subtree;
- int code_size;
- int lastp;
- int nextfreeentry;
- int currententry;
-
- pH->ac_table = m_huff_ac[index] != 0;
-
- p = 0;
-
- for (l = 1; l <= 16; l++)
- {
- for (i = 1; i <= m_huff_num[index][l]; i++)
- huffsize[p++] = static_cast<uint8>(l);
- }
-
- huffsize[p] = 0;
-
- lastp = p;
-
- code = 0;
- si = huffsize[0];
- p = 0;
-
- while (huffsize[p])
- {
- while (huffsize[p] == si)
- {
- huffcode[p++] = code;
- code++;
- }
-
- code <<= 1;
- si++;
- }
-
- memset(pH->look_up, 0, sizeof(pH->look_up));
- memset(pH->look_up2, 0, sizeof(pH->look_up2));
- memset(pH->tree, 0, sizeof(pH->tree));
- memset(pH->code_size, 0, sizeof(pH->code_size));
-
- nextfreeentry = -1;
-
- p = 0;
-
- while (p < lastp)
- {
- i = m_huff_val[index][p];
- code = huffcode[p];
- code_size = huffsize[p];
-
- pH->code_size[i] = static_cast<uint8>(code_size);
-
- if (code_size <= 8)
- {
- code <<= (8 - code_size);
-
- for (l = 1 << (8 - code_size); l > 0; l--)
- {
- JPGD_ASSERT(i < 256);
-
- pH->look_up[code] = i;
-
- bool has_extrabits = false;
- int extra_bits = 0;
- int num_extra_bits = i & 15;
-
- int bits_to_fetch = code_size;
- if (num_extra_bits)
- {
- int total_codesize = code_size + num_extra_bits;
- if (total_codesize <= 8)
- {
- has_extrabits = true;
- extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize));
- JPGD_ASSERT(extra_bits <= 0x7FFF);
- bits_to_fetch += num_extra_bits;
- }
- }
-
- if (!has_extrabits)
- pH->look_up2[code] = i | (bits_to_fetch << 8);
- else
- pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8);
-
- code++;
- }
- }
- else
- {
- subtree = (code >> (code_size - 8)) & 0xFF;
-
- currententry = pH->look_up[subtree];
-
- if (currententry == 0)
- {
- pH->look_up[subtree] = currententry = nextfreeentry;
- pH->look_up2[subtree] = currententry = nextfreeentry;
-
- nextfreeentry -= 2;
- }
-
- code <<= (16 - (code_size - 8));
-
- for (l = code_size; l > 9; l--)
- {
- if ((code & 0x8000) == 0)
- currententry--;
-
- if (pH->tree[-currententry - 1] == 0)
- {
- pH->tree[-currententry - 1] = nextfreeentry;
-
- currententry = nextfreeentry;
-
- nextfreeentry -= 2;
- }
- else
- currententry = pH->tree[-currententry - 1];
-
- code <<= 1;
- }
-
- if ((code & 0x8000) == 0)
- currententry--;
-
- pH->tree[-currententry - 1] = i;
- }
-
- p++;
- }
- }
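make_huff_table() first assigns canonical Huffman codes: symbols of equal code length get consecutive integer codes, and stepping to a longer length left-shifts the counter. That assignment loop in isolation (Python sketch):

```python
def canonical_codes(huffsize):
    # huffsize: code length per symbol, non-decreasing (as built from the
    # DHT length counts); returns the canonical code for each symbol.
    codes, code = [], 0
    si = huffsize[0] if huffsize else 0
    for size in huffsize:
        while si != size:   # moving to a longer code length
            code <<= 1
            si += 1
        codes.append(code)
        code += 1
    return codes
```

For lengths [2, 2, 3, 3, 3] this yields codes 00, 01, 100, 101, 110: a valid prefix code.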
-
- // Verifies the quantization tables needed for this scan are available.
- void jpeg_decoder::check_quant_tables()
- {
- for (int i = 0; i < m_comps_in_scan; i++)
- if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL)
- stop_decoding(JPGD_UNDEFINED_QUANT_TABLE);
- }
-
- // Verifies that all the Huffman tables needed for this scan are available.
- void jpeg_decoder::check_huff_tables()
- {
- for (int i = 0; i < m_comps_in_scan; i++)
- {
- if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL))
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
-
- if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL))
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
- }
-
- for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++)
- if (m_huff_num[i])
- {
- if (!m_pHuff_tabs[i])
- m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables));
-
- make_huff_table(i, m_pHuff_tabs[i]);
- }
- }
-
- // Determines the component order inside each MCU.
- // Also calculates how many MCUs are on each row, etc.
- void jpeg_decoder::calc_mcu_block_order()
- {
- int component_num, component_id;
- int max_h_samp = 0, max_v_samp = 0;
-
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
- {
- if (m_comp_h_samp[component_id] > max_h_samp)
- max_h_samp = m_comp_h_samp[component_id];
-
- if (m_comp_v_samp[component_id] > max_v_samp)
- max_v_samp = m_comp_v_samp[component_id];
- }
-
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
- {
- m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8;
- m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8;
- }
-
- if (m_comps_in_scan == 1)
- {
- m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]];
- m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]];
- }
- else
- {
- m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp;
- m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp;
- }
-
- if (m_comps_in_scan == 1)
- {
- m_mcu_org[0] = m_comp_list[0];
-
- m_blocks_per_mcu = 1;
- }
- else
- {
- m_blocks_per_mcu = 0;
-
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- int num_blocks;
-
- component_id = m_comp_list[component_num];
-
- num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id];
-
- while (num_blocks--)
- m_mcu_org[m_blocks_per_mcu++] = component_id;
- }
- }
- }
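calc_mcu_block_order() sizes each component in 8x8 blocks: scale the image dimension by the component's sampling factor relative to the maximum, then ceil-divide by 8. The same arithmetic in Python (sketch):

```python
def comp_blocks(image_size, samp, max_samp):
    # ceil(ceil(image_size * samp / max_samp) / 8), written with the same
    # add-then-divide rounding as the C++ above.
    scaled = (image_size * samp + (max_samp - 1)) // max_samp
    return (scaled + 7) // 8
```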
-
- // Starts a new scan.
- int jpeg_decoder::init_scan()
- {
- if (!locate_sos_marker())
- return JPGD_FALSE;
-
- calc_mcu_block_order();
-
- check_huff_tables();
-
- check_quant_tables();
-
- memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
-
- m_eob_run = 0;
-
- if (m_restart_interval)
- {
- m_restarts_left = m_restart_interval;
- m_next_restart_num = 0;
- }
-
- fix_in_buffer();
-
- return JPGD_TRUE;
- }
-
- // Starts a frame. Determines if the number of components or sampling factors
- // are supported.
- void jpeg_decoder::init_frame()
- {
- int i;
-
- if (m_comps_in_frame == 1)
- {
- if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1))
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
-
- m_scan_type = JPGD_GRAYSCALE;
- m_max_blocks_per_mcu = 1;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 8;
- }
- else if (m_comps_in_frame == 3)
- {
- if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) ||
- ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) )
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
-
- if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1))
- {
- m_scan_type = JPGD_YH1V1;
-
- m_max_blocks_per_mcu = 3;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 8;
- }
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1))
- {
- m_scan_type = JPGD_YH2V1;
- m_max_blocks_per_mcu = 4;
- m_max_mcu_x_size = 16;
- m_max_mcu_y_size = 8;
- }
- else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2))
- {
- m_scan_type = JPGD_YH1V2;
- m_max_blocks_per_mcu = 4;
- m_max_mcu_x_size = 8;
- m_max_mcu_y_size = 16;
- }
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2))
- {
- m_scan_type = JPGD_YH2V2;
- m_max_blocks_per_mcu = 6;
- m_max_mcu_x_size = 16;
- m_max_mcu_y_size = 16;
- }
- else
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
- }
- else
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
-
- m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size;
- m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size;
-
- // These values are for the *destination* pixels: after conversion.
- if (m_scan_type == JPGD_GRAYSCALE)
- m_dest_bytes_per_pixel = 1;
- else
- m_dest_bytes_per_pixel = 4;
-
- m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel;
-
- m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel);
-
- // Initialize two scan line buffers.
- m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
- if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2))
- m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
-
- m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu;
-
- // Should never happen
- if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW)
- stop_decoding(JPGD_ASSERTION_ERROR);
-
- // Allocate the coefficient buffer, enough for one MCU
- m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t));
-
- for (i = 0; i < m_max_blocks_per_mcu; i++)
- m_mcu_block_max_zag[i] = 64;
-
- m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0];
- m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame;
- m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu;
- // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor.
-// BEGIN EPIC MOD
-#if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING
- m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3);
-#else
- m_freq_domain_chroma_upsample = 0;
-#endif
-// END EPIC MOD
-
- if (m_freq_domain_chroma_upsample)
- m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64);
- else
- m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64);
-
- m_total_lines_left = m_image_y_size;
-
- m_mcu_lines_left = 0;
-
- create_look_ups();
- }
-
- // The coeff_buf series of methods originally stored the coefficients
- // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache
- // was used to make this process more efficient. Now, we can store the entire
- // thing in RAM.
- jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y)
- {
- coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf));
-
- cb->block_num_x = block_num_x;
- cb->block_num_y = block_num_y;
- cb->block_len_x = block_len_x;
- cb->block_len_y = block_len_y;
- cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t);
- cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true);
- return cb;
- }
-
- inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y)
- {
- JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y));
- return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x));
- }
-
- // The following methods decode the various types of m_blocks encountered
- // in progressively encoded images.
- void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int s, r;
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
-
- if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0)
- {
- r = pD->get_bits_no_markers(s);
- s = HUFF_EXTEND(r, s);
- }
-
- pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]);
-
- p[0] = static_cast<jpgd_block_t>(s << pD->m_successive_low);
- }
-
- void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- if (pD->get_bits_no_markers(1))
- {
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
-
- p[0] |= (1 << pD->m_successive_low);
- }
- }
-
- void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int k, s, r;
-
- if (pD->m_eob_run)
- {
- pD->m_eob_run--;
- return;
- }
-
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
-
- for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++)
- {
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if ((k += r) > 63)
- pD->stop_decoding(JPGD_DECODE_ERROR);
-
- r = pD->get_bits_no_markers(s);
- s = HUFF_EXTEND(r, s);
-
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(s << pD->m_successive_low);
- }
- else
- {
- if (r == 15)
- {
- if ((k += 15) > 63)
- pD->stop_decoding(JPGD_DECODE_ERROR);
- }
- else
- {
- pD->m_eob_run = 1 << r;
-
- if (r)
- pD->m_eob_run += pD->get_bits_no_markers(r);
-
- pD->m_eob_run--;
-
- break;
- }
- }
- }
- }
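In decode_block_ac_first(), a zero-run code with r < 15 is an EOBn symbol: the end-of-band run covers 2^r blocks, extended by r raw bits from the stream, and the current block consumes one entry immediately. As a sketch:

```python
def eob_run_length(r, extra_bits=0):
    # EOBn symbol: run of 2^r end-of-band blocks, extended by r raw bits.
    # Returns how many further blocks are skipped after the current one.
    run = 1 << r
    if r:
        run += extra_bits
    return run - 1
```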
-
- void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
- {
- int s, k, r;
- int p1 = 1 << pD->m_successive_low;
- int m1 = (-1) << pD->m_successive_low;
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
-
- k = pD->m_spectral_start;
-
- if (pD->m_eob_run == 0)
- {
- for ( ; k <= pD->m_spectral_end; k++)
- {
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
-
- r = s >> 4;
- s &= 15;
-
- if (s)
- {
- if (s != 1)
- pD->stop_decoding(JPGD_DECODE_ERROR);
-
- if (pD->get_bits_no_markers(1))
- s = p1;
- else
- s = m1;
- }
- else
- {
- if (r != 15)
- {
- pD->m_eob_run = 1 << r;
-
- if (r)
- pD->m_eob_run += pD->get_bits_no_markers(r);
-
- break;
- }
- }
-
- do
- {
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64);
- // END EPIC MOD
-
- jpgd_block_t *this_coef = p + g_ZAG[k];
-
- if (*this_coef != 0)
- {
- if (pD->get_bits_no_markers(1))
- {
- if ((*this_coef & p1) == 0)
- {
- if (*this_coef >= 0)
- *this_coef = static_cast<jpgd_block_t>(*this_coef + p1);
- else
- *this_coef = static_cast<jpgd_block_t>(*this_coef + m1);
- }
- }
- }
- else
- {
- if (--r < 0)
- break;
- }
-
- k++;
-
- } while (k <= pD->m_spectral_end);
-
- if ((s) && (k < 64))
- {
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(s);
- }
- }
- }
-
- if (pD->m_eob_run > 0)
- {
- for ( ; k <= pD->m_spectral_end; k++)
- {
- // BEGIN EPIC MOD
- JPGD_ASSERT(k < 64);
- // END EPIC MOD
-
- jpgd_block_t *this_coef = p + g_ZAG[k];
-
- if (*this_coef != 0)
- {
- if (pD->get_bits_no_markers(1))
- {
- if ((*this_coef & p1) == 0)
- {
- if (*this_coef >= 0)
- *this_coef = static_cast<jpgd_block_t>(*this_coef + p1);
- else
- *this_coef = static_cast<jpgd_block_t>(*this_coef + m1);
- }
- }
- }
- }
-
- pD->m_eob_run--;
- }
- }
-
- // Decode a scan in a progressively encoded image.
- void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func)
- {
- int mcu_row, mcu_col, mcu_block;
- int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS];
-
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++)
- {
- int component_num, component_id;
-
- memset(block_x_mcu, 0, sizeof(block_x_mcu));
-
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
-
- if ((m_restart_interval) && (m_restarts_left == 0))
- process_restart();
-
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- component_id = m_mcu_org[mcu_block];
-
- decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
-
- if (m_comps_in_scan == 1)
- block_x_mcu[component_id]++;
- else
- {
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
- {
- block_x_mcu_ofs = 0;
-
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
- {
- block_y_mcu_ofs = 0;
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
- }
- }
- }
- }
-
- m_restarts_left--;
- }
-
- if (m_comps_in_scan == 1)
- m_block_y_mcu[m_comp_list[0]]++;
- else
- {
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- component_id = m_comp_list[component_num];
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
- }
- }
- }
- }
-
- // Decode a progressively encoded image.
- void jpeg_decoder::init_progressive()
- {
- int i;
-
- if (m_comps_in_frame == 4)
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
-
- // Allocate the coefficient buffers.
- for (i = 0; i < m_comps_in_frame; i++)
- {
- m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1);
- m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8);
- }
-
- for ( ; ; )
- {
- int dc_only_scan, refinement_scan;
- pDecode_block_func decode_block_func;
-
- if (!init_scan())
- break;
-
- dc_only_scan = (m_spectral_start == 0);
- refinement_scan = (m_successive_high != 0);
-
- if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63))
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
-
- if (dc_only_scan)
- {
- if (m_spectral_end)
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
- }
- else if (m_comps_in_scan != 1) /* AC scans can only contain one component */
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
-
- if ((refinement_scan) && (m_successive_low != m_successive_high - 1))
- stop_decoding(JPGD_BAD_SOS_SUCCESSIVE);
-
- if (dc_only_scan)
- {
- if (refinement_scan)
- decode_block_func = decode_block_dc_refine;
- else
- decode_block_func = decode_block_dc_first;
- }
- else
- {
- if (refinement_scan)
- decode_block_func = decode_block_ac_refine;
- else
- decode_block_func = decode_block_ac_first;
- }
-
- decode_scan(decode_block_func);
-
- m_bits_left = 16;
- get_bits(16);
- get_bits(16);
- }
-
- m_comps_in_scan = m_comps_in_frame;
-
- for (i = 0; i < m_comps_in_frame; i++)
- m_comp_list[i] = i;
-
- calc_mcu_block_order();
- }
-
- void jpeg_decoder::init_sequential()
- {
- if (!init_scan())
- stop_decoding(JPGD_UNEXPECTED_MARKER);
- }
-
- void jpeg_decoder::decode_start()
- {
- init_frame();
-
- if (m_progressive_flag)
- init_progressive();
- else
- init_sequential();
- }
-
- void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream)
- {
- init(pStream);
- locate_sof_marker();
- }
-
- jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream)
- {
- if (setjmp(m_jmp_state))
- return;
- decode_init(pStream);
- }
-
- int jpeg_decoder::begin_decoding()
- {
- if (m_ready_flag)
- return JPGD_SUCCESS;
-
- if (m_error_code)
- return JPGD_FAILED;
-
- if (setjmp(m_jmp_state))
- return JPGD_FAILED;
-
- decode_start();
-
- m_ready_flag = true;
-
- return JPGD_SUCCESS;
- }
-
- jpeg_decoder::~jpeg_decoder()
- {
- free_all_blocks();
- }
-
- jpeg_decoder_file_stream::jpeg_decoder_file_stream()
- {
- m_pFile = NULL;
- m_eof_flag = false;
- m_error_flag = false;
- }
-
- void jpeg_decoder_file_stream::close()
- {
- if (m_pFile)
- {
- fclose(m_pFile);
- m_pFile = NULL;
- }
-
- m_eof_flag = false;
- m_error_flag = false;
- }
-
- jpeg_decoder_file_stream::~jpeg_decoder_file_stream()
- {
- close();
- }
-
- bool jpeg_decoder_file_stream::open(const char *Pfilename)
- {
- close();
-
- m_eof_flag = false;
- m_error_flag = false;
-
-#if defined(_MSC_VER)
- m_pFile = NULL;
- fopen_s(&m_pFile, Pfilename, "rb");
-#else
- m_pFile = fopen(Pfilename, "rb");
-#endif
- return m_pFile != NULL;
- }
-
- int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
- {
- if (!m_pFile)
- return -1;
-
- if (m_eof_flag)
- {
- *pEOF_flag = true;
- return 0;
- }
-
- if (m_error_flag)
- return -1;
-
- int bytes_read = static_cast<int>(fread(pBuf, 1, max_bytes_to_read, m_pFile));
- if (bytes_read < max_bytes_to_read)
- {
- if (ferror(m_pFile))
- {
- m_error_flag = true;
- return -1;
- }
-
- m_eof_flag = true;
- *pEOF_flag = true;
- }
-
- return bytes_read;
- }
-
- bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size)
- {
- close();
- m_pSrc_data = pSrc_data;
- m_ofs = 0;
- m_size = size;
- return true;
- }
-
- int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
- {
- *pEOF_flag = false;
-
- if (!m_pSrc_data)
- return -1;
-
- uint bytes_remaining = m_size - m_ofs;
- if ((uint)max_bytes_to_read > bytes_remaining)
- {
- max_bytes_to_read = bytes_remaining;
- *pEOF_flag = true;
- }
-
- memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read);
- m_ofs += max_bytes_to_read;
-
- return max_bytes_to_read;
- }
-
- unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps)
- {
- if (!actual_comps)
- return NULL;
- *actual_comps = 0;
-
- if ((!pStream) || (!width) || (!height) || (!req_comps))
- return NULL;
-
- if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4))
- return NULL;
-
- jpeg_decoder decoder(pStream);
- if (decoder.get_error_code() != JPGD_SUCCESS)
- return NULL;
-
- const int image_width = decoder.get_width(), image_height = decoder.get_height();
- *width = image_width;
- *height = image_height;
- *actual_comps = decoder.get_num_components();
-
- if (decoder.begin_decoding() != JPGD_SUCCESS)
- return NULL;
-
- const int dst_bpl = image_width * req_comps;
-
- uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height);
- if (!pImage_data)
- return NULL;
-
- for (int y = 0; y < image_height; y++)
- {
- const uint8* pScan_line = 0;
- uint scan_line_len;
- if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS)
- {
- jpgd_free(pImage_data);
- return NULL;
- }
-
- uint8 *pDst = pImage_data + y * dst_bpl;
-
- if (((req_comps == 4) && (decoder.get_num_components() == 3)) ||
- ((req_comps == 1) && (decoder.get_num_components() == 1)))
- {
- memcpy(pDst, pScan_line, dst_bpl);
- }
- else if (decoder.get_num_components() == 1)
- {
- if (req_comps == 3)
- {
- for (int x = 0; x < image_width; x++)
- {
- uint8 luma = pScan_line[x];
- pDst[0] = luma;
- pDst[1] = luma;
- pDst[2] = luma;
- pDst += 3;
- }
- }
- else
- {
- for (int x = 0; x < image_width; x++)
- {
- uint8 luma = pScan_line[x];
- pDst[0] = luma;
- pDst[1] = luma;
- pDst[2] = luma;
- pDst[3] = 255;
- pDst += 4;
- }
- }
- }
- else if (decoder.get_num_components() == 3)
- {
- if (req_comps == 1)
- {
- const int YR = 19595, YG = 38470, YB = 7471;
- for (int x = 0; x < image_width; x++)
- {
- int r = pScan_line[x*4+0];
- int g = pScan_line[x*4+1];
- int b = pScan_line[x*4+2];
- *pDst++ = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
- }
- }
- else
- {
- for (int x = 0; x < image_width; x++)
- {
- pDst[0] = pScan_line[x*4+0];
- pDst[1] = pScan_line[x*4+1];
- pDst[2] = pScan_line[x*4+2];
- pDst += 3;
- }
- }
- }
- }
-
- return pImage_data;
- }
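The req_comps == 1 path converts RGB to grayscale with 16-bit fixed-point BT.601 weights: 19595, 38470, and 7471 are 0.299, 0.587, and 0.114 scaled by 65536 (the three weights sum to exactly 65536). Equivalent Python:

```python
def rgb_to_luma(r, g, b):
    # Fixed-point BT.601 luma with rounding (the +32768 term).
    YR, YG, YB = 19595, 38470, 7471
    return (r * YR + g * YG + b * YB + 32768) >> 16
```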
-
-// BEGIN EPIC MOD
- unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format)
- {
- jpg_format = (ERGBFormatJPG)format;
-// END EPIC MOD
- jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size);
- return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps);
- }
-
- unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps)
- {
- jpgd::jpeg_decoder_file_stream file_stream;
- if (!file_stream.open(pSrc_filename))
- return NULL;
- return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps);
- }
-
-} // namespace jpgd
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/ocr_head.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/ocr_head.py
deleted file mode 100644
index e180e10276e9a4d794501d0f399740c189775673..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/ocr_head.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-
-from mmseg.ops import resize
-from ..builder import HEADS
-from ..utils import SelfAttentionBlock as _SelfAttentionBlock
-from .cascade_decode_head import BaseCascadeDecodeHead
-
-
-class SpatialGatherModule(nn.Module):
- """Aggregate the context features according to the initial predicted
- probability distribution.
-
- Employ the soft-weighted method to aggregate the context.
- """
-
- def __init__(self, scale):
- super(SpatialGatherModule, self).__init__()
- self.scale = scale
-
- def forward(self, feats, probs):
- """Forward function."""
- batch_size, num_classes, height, width = probs.size()
- channels = feats.size(1)
- probs = probs.view(batch_size, num_classes, -1)
- feats = feats.view(batch_size, channels, -1)
- # [batch_size, height*width, num_classes]
- feats = feats.permute(0, 2, 1)
- # [batch_size, channels, height*width]
- probs = F.softmax(self.scale * probs, dim=2)
- # [batch_size, channels, num_classes]
- ocr_context = torch.matmul(probs, feats)
- ocr_context = ocr_context.permute(0, 2, 1).contiguous().unsqueeze(3)
- return ocr_context
-
-
-class ObjectAttentionBlock(_SelfAttentionBlock):
- """Make a OCR used SelfAttentionBlock."""
-
- def __init__(self, in_channels, channels, scale, conv_cfg, norm_cfg,
- act_cfg):
- if scale > 1:
- query_downsample = nn.MaxPool2d(kernel_size=scale)
- else:
- query_downsample = None
- super(ObjectAttentionBlock, self).__init__(
- key_in_channels=in_channels,
- query_in_channels=in_channels,
- channels=channels,
- out_channels=in_channels,
- share_key_query=False,
- query_downsample=query_downsample,
- key_downsample=None,
- key_query_num_convs=2,
- key_query_norm=True,
- value_out_num_convs=1,
- value_out_norm=True,
- matmul_norm=True,
- with_out=True,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
- self.bottleneck = ConvModule(
- in_channels * 2,
- in_channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, query_feats, key_feats):
- """Forward function."""
- context = super(ObjectAttentionBlock,
- self).forward(query_feats, key_feats)
- output = self.bottleneck(torch.cat([context, query_feats], dim=1))
- if self.query_downsample is not None:
- output = resize(output, size=query_feats.size()[2:])
-
- return output
-
-
-@HEADS.register_module()
-class OCRHead(BaseCascadeDecodeHead):
- """Object-Contextual Representations for Semantic Segmentation.
-
- This head is the implementation of `OCRNet
- <https://arxiv.org/abs/1909.11065>`_.
-
- Args:
- ocr_channels (int): The intermediate channels of OCR block.
- scale (int): The scale of the probability map in SpatialGatherModule.
- Default: 1.
- """
-
- def __init__(self, ocr_channels, scale=1, **kwargs):
- super(OCRHead, self).__init__(**kwargs)
- self.ocr_channels = ocr_channels
- self.scale = scale
- self.object_context_block = ObjectAttentionBlock(
- self.channels,
- self.ocr_channels,
- self.scale,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.spatial_gather_module = SpatialGatherModule(self.scale)
-
- self.bottleneck = ConvModule(
- self.in_channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs, prev_output):
- """Forward function."""
- x = self._transform_inputs(inputs)
- feats = self.bottleneck(x)
- context = self.spatial_gather_module(feats, prev_output)
- object_context = self.object_context_block(feats, context)
- output = self.cls_seg(object_context)
-
- return output
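The `SpatialGatherModule` deleted above builds one context vector per class by soft-weighting the feature map with the softmax of the class probability map. A minimal NumPy sketch of the same tensor algebra (shapes follow the comments in the original PyTorch code):

```python
import numpy as np

def spatial_gather(feats, probs, scale=1):
    """Soft-weighted class-context aggregation, mirroring SpatialGatherModule.

    feats: [B, C, H, W], probs: [B, K, H, W] -> context: [B, C, K, 1]
    """
    b, k, h, w = probs.shape
    c = feats.shape[1]
    probs = probs.reshape(b, k, h * w)                      # [B, K, H*W]
    feats = feats.reshape(b, c, h * w).transpose(0, 2, 1)   # [B, H*W, C]
    # Softmax over the spatial axis: each class attends over all pixels
    z = scale * probs
    e = np.exp(z - z.max(axis=2, keepdims=True))
    weights = e / e.sum(axis=2, keepdims=True)              # [B, K, H*W]
    context = weights @ feats                               # [B, K, C]
    return context.transpose(0, 2, 1)[..., None]            # [B, C, K, 1]

rng = np.random.default_rng(0)
ctx = spatial_gather(rng.normal(size=(2, 16, 8, 8)), rng.normal(size=(2, 5, 8, 8)))
print(ctx.shape)  # (2, 16, 5, 1)
```

Each of the `K` class channels thus yields a single `C`-dimensional context vector, which `ObjectAttentionBlock` then attends over as the key/value features.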
diff --git a/spaces/HOSSTOS/README/README.md b/spaces/HOSSTOS/README/README.md
deleted file mode 100644
index e5536888744def46c9e6d011d885583acc5e91b5..0000000000000000000000000000000000000000
--- a/spaces/HOSSTOS/README/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: README
-emoji: 🦀
-colorFrom: pink
-colorTo: green
-sdk: static
-pinned: false
----
-
-Edit this `README.md` markdown file to author your organization card 🔥
diff --git a/spaces/Hazem/roop/roop/ui.py b/spaces/Hazem/roop/roop/ui.py
deleted file mode 100644
index ba693dac116bd416b91518734fa550e9dfb95c7b..0000000000000000000000000000000000000000
--- a/spaces/Hazem/roop/roop/ui.py
+++ /dev/null
@@ -1,231 +0,0 @@
-import os
-import webbrowser
-import customtkinter as ctk
-from typing import Callable, Tuple
-import cv2
-from PIL import Image, ImageOps
-
-import roop.globals
-import roop.metadata
-from roop.face_analyser import get_one_face
-from roop.capturer import get_video_frame, get_video_frame_total
-from roop.predicter import predict_frame
-from roop.processors.frame.core import get_frame_processors_modules
-from roop.utilities import is_image, is_video, resolve_relative_path
-
-ROOT = None
-ROOT_HEIGHT = 700
-ROOT_WIDTH = 600
-
-PREVIEW = None
-PREVIEW_MAX_HEIGHT = 700
-PREVIEW_MAX_WIDTH = 1200
-
-RECENT_DIRECTORY_SOURCE = None
-RECENT_DIRECTORY_TARGET = None
-RECENT_DIRECTORY_OUTPUT = None
-
-preview_label = None
-preview_slider = None
-source_label = None
-target_label = None
-status_label = None
-
-
-def init(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
- global ROOT, PREVIEW
-
- ROOT = create_root(start, destroy)
- PREVIEW = create_preview(ROOT)
-
- return ROOT
-
-
-def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
- global source_label, target_label, status_label
-
- ctk.deactivate_automatic_dpi_awareness()
- ctk.set_appearance_mode('system')
- ctk.set_default_color_theme(resolve_relative_path('ui.json'))
-
- root = ctk.CTk()
- root.minsize(ROOT_WIDTH, ROOT_HEIGHT)
- root.title(f'{roop.metadata.name} {roop.metadata.version}')
- root.configure()
- root.protocol('WM_DELETE_WINDOW', lambda: destroy())
-
- source_label = ctk.CTkLabel(root, text=None)
- source_label.place(relx=0.1, rely=0.1, relwidth=0.3, relheight=0.25)
-
- target_label = ctk.CTkLabel(root, text=None)
- target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25)
-
- source_button = ctk.CTkButton(root, text='Select a face', cursor='hand2', command=lambda: select_source_path())
- source_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1)
-
- target_button = ctk.CTkButton(root, text='Select a target', cursor='hand2', command=lambda: select_target_path())
- target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1)
-
- keep_fps_value = ctk.BooleanVar(value=roop.globals.keep_fps)
- keep_fps_checkbox = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_fps', not roop.globals.keep_fps))
- keep_fps_checkbox.place(relx=0.1, rely=0.6)
-
- keep_frames_value = ctk.BooleanVar(value=roop.globals.keep_frames)
- keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_frames', keep_frames_value.get()))
- keep_frames_switch.place(relx=0.1, rely=0.65)
-
- keep_audio_value = ctk.BooleanVar(value=roop.globals.keep_audio)
- keep_audio_switch = ctk.CTkSwitch(root, text='Keep audio', variable=keep_audio_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_audio', keep_audio_value.get()))
- keep_audio_switch.place(relx=0.6, rely=0.6)
-
- many_faces_value = ctk.BooleanVar(value=roop.globals.many_faces)
- many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, cursor='hand2', command=lambda: setattr(roop.globals, 'many_faces', many_faces_value.get()))
- many_faces_switch.place(relx=0.6, rely=0.65)
-
- start_button = ctk.CTkButton(root, text='Start', cursor='hand2', command=lambda: select_output_path(start))
- start_button.place(relx=0.15, rely=0.75, relwidth=0.2, relheight=0.05)
-
- stop_button = ctk.CTkButton(root, text='Destroy', cursor='hand2', command=lambda: destroy())
- stop_button.place(relx=0.4, rely=0.75, relwidth=0.2, relheight=0.05)
-
- preview_button = ctk.CTkButton(root, text='Preview', cursor='hand2', command=lambda: toggle_preview())
- preview_button.place(relx=0.65, rely=0.75, relwidth=0.2, relheight=0.05)
-
- status_label = ctk.CTkLabel(root, text=None, justify='center')
- status_label.place(relx=0.1, rely=0.9, relwidth=0.8)
-
- donate_label = ctk.CTkLabel(root, text='^_^ Donate to project ^_^', justify='center', cursor='hand2')
- donate_label.place(relx=0.1, rely=0.95, relwidth=0.8)
- donate_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color'))
- donate_label.bind('