diff --git a/errors.txt b/errors.txt
deleted file mode 100644
index bffb2da1857ee9acfaec082d59101e7064dcb01d..0000000000000000000000000000000000000000
--- a/errors.txt
+++ /dev/null
@@ -1,24 +0,0 @@
-ky2k/Toxicity_Classifier_POC
-tialenAdioni/chat-gpt-api
-Narsil/myspace
-arxify/RVC-beta-v2-0618
-WitchHuntTV/WinnieThePoohSVC_sovits4
-yizhangliu/Grounded-Segment-Anything
-Robert001/UniControl-Demo
-internetsignal/audioLDM
-inamXcontru/PoeticTTS
-dcarpintero/nlp-summarizer-pegasus
-SungBeom/chatwine-korean
-x6/BingAi
-1gistliPinn/ChatGPT4
-colakin/video-generater
-stomexserde/gpt4-ui
-quidiaMuxgu/Expedit-SAM
-NasirKhalid24/Dalle2-Diffusion-Prior
-joaopereirajp/livvieChatBot
-diacanFperku/AutoGPT
-tioseFevbu/cartoon-converter
-chuan-hd/law-assistant-chatbot
-mshukor/UnIVAL
-xuyingliKepler/openai_play_tts
-TNR-5/lib111
\ No newline at end of file
diff --git a/spaces/07jeancms/minima/app.py b/spaces/07jeancms/minima/app.py
deleted file mode 100644
index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000
--- a/spaces/07jeancms/minima/app.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import gradio as gr
-
-def greet(name):
- return "Hello " + name + "!!"
-
-iface = gr.Interface(fn=greet, inputs="text", outputs="text")
-iface.launch()
\ No newline at end of file
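For reference, the deleted app.py above is the minimal Gradio "hello world": a single Python function exposed as a text-in, text-out web UI. An annotated sketch of the same pattern (gr.Interface and launch() are standard Gradio API; the comments are explanatory additions, not part of the original file):

import gradio as gr

def greet(name):
    # Build the greeting from the text typed into the input box
    return "Hello " + name + "!!"

# Wire greet() to a browser UI: one text input mapped to one text output
iface = gr.Interface(fn=greet, inputs="text", outputs="text")

# Start a local web server hosting the demo
iface.launch()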
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Trials A Common but Costly Phenomenon in the Courts.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Trials A Common but Costly Phenomenon in the Courts.md
deleted file mode 100644
index babeb36e33ba018f75d008341d3ecc13cf961113..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Trials A Common but Costly Phenomenon in the Courts.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
What is a Cracked Trial and Why Does It Matter?
-
A cracked trial is a term used in the criminal justice system to describe a trial that has been scheduled for a not guilty hearing but does not proceed on the day, either because the defendant changes their plea to guilty or the prosecution drops the case. A cracked trial means that the case is resolved without a trial, but it also means that the court time and resources have been wasted, and the witnesses have been inconvenienced or distressed.
-
According to the guidance issued by the judiciary, a cracked trial can have a negative impact on the confidence in the system, as it may suggest that the case was not properly prepared or reviewed, or that there was undue pressure on the parties to reach a resolution. A cracked trial can also affect the victim's satisfaction and sense of justice, as they may feel that their voice was not heard or that the outcome was not fair.
The statistics published by Full Fact show that in 2014/15, about 35% of trials in the crown court and 37% in the magistrates' court were cracked, and that the main reason for this was late guilty pleas by the defendants. The report also found that only 2.1% of trials in the crown court and 6.8% of trials in the magistrates' court were cracked because of witness issues, such as absence or withdrawal of evidence.
-
A Dictionary of Law Enforcement defines a cracked trial as one that has been listed for a not guilty hearing on a particular day but does not proceed, either because the defendant pleads guilty to the whole or part of the indictment, or to an alternative charge, or because the prosecution offers no evidence.
-
A cracked trial is different from an ineffective trial, which is a trial that has been listed for a hearing but cannot start or continue on the day for reasons beyond the control of the parties, such as illness, unavailability of a judge or jury, or technical problems. An ineffective trial has to be rescheduled for another date.
-
A cracked trial is also different from a vacated trial, which is a trial that has been listed for a hearing but is cancelled before the day for reasons within the control of the parties, such as an agreement to resolve the case by another means, such as a plea bargain or a diversion scheme. A vacated trial does not require any further court time.
-
Conclusion
-
A cracked trial is a common occurrence in the criminal justice system, but it can have negative consequences for the efficiency and effectiveness of the system, as well as for the satisfaction and well-being of the victims and witnesses. Reducing the number of cracked trials is one of the challenges faced by the courts and prosecutors, who have to balance the interests of justice with the realities of resource constraints and human factors.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crackle Crackle Free Movies.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crackle Crackle Free Movies.md
deleted file mode 100644
index ad1db2e7ad362200c8cfe7a793c85817517e77e0..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crackle Crackle Free Movies.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
How to Watch Crackle Crackle Free Movies Online
-
If you are looking for a way to watch free movies online, you may have heard of Crackle Crackle. Crackle Crackle is a website that offers a large collection of movies and TV shows that you can stream for free. You can find movies from various genres, such as action, comedy, drama, horror, thriller, and more. You can also watch original content from Crackle Crackle, such as The Oath, Snatch, and StartUp.
However, Crackle Crackle is not available in all countries, and you may encounter some issues when trying to access it. For example, you may see a message that says "Sorry, this content is not available in your region" or "This video is not available in your country". This is because Crackle Crackle uses geo-restrictions to limit its content to certain regions. If you are outside of those regions, you will not be able to watch Crackle Crackle free movies online.
-
But don't worry, there is a solution to this problem. You can use a VPN (Virtual Private Network) to bypass the geo-restrictions and watch Crackle Crackle free movies online from anywhere in the world. A VPN is a service that allows you to connect to a server in another country and change your IP address. This way, you can trick Crackle Crackle into thinking that you are in a region where its content is available. You can also enjoy other benefits of using a VPN, such as protecting your privacy and security online.
-
Here are the steps to watch Crackle Crackle free movies online with a VPN:
-
-
Choose a VPN service that has servers in the countries where Crackle Crackle is available, such as the US, Canada, Australia, or the UK. Some of the best VPNs for streaming are ExpressVPN, NordVPN, Surfshark, and CyberGhost.
-
Download and install the VPN app on your device. You can use a VPN on your computer, smartphone, tablet, or smart TV.
-
Launch the VPN app and sign in with your account. If you don't have an account yet, you can create one on the VPN website.
-
Select a server in a country where Crackle Crackle is available and connect to it. For example, if you want to watch Crackle Crackle free movies online from India, you can connect to a server in the US.
-
Open your browser and go to the Crackle Crackle website. You should be able to access it without any issues.
-
Browse through the categories and genres and choose a movie or TV show that you want to watch. Click on it and enjoy watching Crackle Crackle free movies online with a VPN.
-
-
Note: You may need to disable your ad blocker or allow pop-ups on the Crackle Crackle website, as some of its content may be supported by ads.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Patch Bad Piggies 1.5.0 Pc and Build Your Own Crazy Vehicles.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Patch Bad Piggies 1.5.0 Pc and Build Your Own Crazy Vehicles.md
deleted file mode 100644
index e62d0e72991cf5aac1508e038038e5d2493a9829..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Patch Bad Piggies 1.5.0 Pc and Build Your Own Crazy Vehicles.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-
Download Patch Bad Piggies 1.5.0 Pc: A Guide for Angry Birds Fans
-
Are you a fan of Angry Birds, the popular physics-based puzzle game that has taken the world by storm? If so, you might be interested in trying out Bad Piggies, a spin-off game that lets you play as the villains instead of the heroes. In this game, you have to help the greedy pigs build vehicles and machines to steal eggs from the angry birds.
But wait, there's more! If you want to enhance your gaming experience and enjoy more levels, features, and fun, you can download patch Bad Piggies 1.5.0 for PC and install it on your computer. This patch will update your game to the latest version and give you access to new sandbox modes, achievements, and more.
-
In this article, we will show you how to download and install Bad Piggies 1.5.0 on PC, as well as how to download and install patch Bad Piggies 1.5.0 on PC. Follow our step-by-step guide and you'll be playing this addictive game in no time.
-
What is Bad Piggies?
-
Bad Piggies is a game developed by Rovio Entertainment Corporation, the same company that created Angry Birds. It was released in September 2012 for various platforms, including Windows, Android, iOS, Mac, and more.
-
Unlike Angry Birds, where you have to launch birds at pigs using a slingshot, in Bad Piggies you have to construct vehicles and machines using various objects and materials to help the pigs reach their goal. The goal can be an egg, a star, a button, or anything else that the pigs desire.
-
The game has over 200 levels of egg-snatching and pig-flying fun, as well as over 40 bonus levels that you can unlock by earning three stars in each level. You can also play in sandbox mode, where you can create your own levels and vehicles using unlimited items.
-
What are the features of Bad Piggies?
-
Bad Piggies is a game that offers a lot of features and benefits for its players. Some of them are:
-
-
Original and innovative gameplay that challenges your creativity and logic
-
Bright and colorful graphics that are in line with the Angry Birds style
-
Funny and cute characters that make you laugh and sympathize with them
-
Varied and dynamic levels that require different strategies and solutions
-
Multiple sandbox modes that allow for endless creation and experimentation
-
Achievements and leaderboards that let you compete with your friends and other players
-
Regular updates that add new levels, items, features, and more
-
-
What are the requirements to play Bad Piggies on PC?
-
If you want to play Bad Piggies on PC, you need to make sure that your computer meets the minimum system requirements for the game. These are:
-
-
Operating System: Windows XP/Vista/7/8/10
-
Processor: Intel or AMD Processor
-
RAM: at least 512 MB
-
HDD: at least 100 MB of free disk space
-
Graphics Card: any compatible card with OpenGL support
-
Sound Card: any compatible card with DirectX support
-
-
If your computer fulfills these requirements, you can proceed to download and install Bad Piggies on PC.
-
How to download and install Bad Piggies 1.5.0 on PC?
-
To download and install Bad Piggies 1.5.0 on PC, you need to use an emulator that can run Android apps on your computer. One of the best emulators for this purpose is BlueStacks, which is free, fast, and easy to use.
-
How to download patch bad piggies 1.5.0 for pc
-Bad piggies 1.5.0 patch download free pc
-Download bad piggies 1.5.0 full version with patch for pc
-Bad piggies 1.5.0 pc patch download link
-Patch bad piggies 1.5.0 pc download no survey
-Download patch bad piggies 1.5.0 for windows 10 pc
-Bad piggies 1.5.0 patch download pc offline
-Download patch bad piggies 1.5.0 for pc crack
-Bad piggies 1.5.0 patch download pc full game
-Patch bad piggies 1.5.0 pc download without password
-Download patch bad piggies 1.5.0 for mac pc
-Bad piggies 1.5.0 patch download pc latest version
-Download patch bad piggies 1.5.0 for pc softonic
-Bad piggies 1.5.0 patch download pc rar file
-Patch bad piggies 1.5.0 pc download mediafire
-Download patch bad piggies 1.5.0 for pc apk
-Bad piggies 1.5.0 patch download pc zip file
-Download patch bad piggies 1.5.0 for pc mod
-Bad piggies 1.5.0 patch download pc torrent
-Patch bad piggies 1.5.0 pc download mega
-Download patch bad piggies 1.5.0 for linux pc
-Bad piggies 1.5.0 patch download pc direct link
-Download patch bad piggies 1.5.0 for pc online
-Bad piggies 1.5.0 patch download pc setup file
-Patch bad piggies 1.5.0 pc download google drive
-Download patch bad piggies 1.5.0 for android pc
-Bad piggies 1.5.0 patch download pc exe file
-Download patch bad piggies 1.5.0 for ios pc
-Bad piggies 1.5.0 patch download pc iso file
-Patch bad piggies 1.5.0 pc download zippyshare
-Download patch bad piggies 1.5.0 for chromebook pc
-Bad piggies 1.5.0 patch download pc compressed file
-Download patch bad piggies 1.5.0 for ubuntu pc
-Bad piggies 1.5.0 patch download pc highly compressed
-Patch bad piggies 1
-
Here are the steps to download and install Bad Piggies 1.5.0 on PC using BlueStacks:
-
Step 1: Download BlueStacks emulator
-
The first thing you need to do is to download BlueStacks emulator from its official website https://www.bluestacks.com/. You can choose between BlueStacks 4 or BlueStacks 5 depending on your preference.
-
Once you have downloaded the installer file, double-click on it to start the installation process.
-
Step 2: Install BlueStacks on your PC
-
The next thing you need to do is to install BlueStacks on your PC by following the instructions on the screen.
-
You may need to grant some permissions or accept some terms and conditions during the installation process.
-
You may also need to sign in with your Google account or create one if you don't have one already.
-
After the installation is complete, launch BlueStacks from your desktop or start menu.
-
Step 3: Search for Bad Piggies on BlueStacks
-
The third thing you need to do is to search for Bad Piggies on BlueStacks using its built-in search bar.
-
Type "Bad Piggies" in the search bar and hit enter.
-
You will see a list of results from various sources such as Google Play Store, App Center, or Game Center.
-
Step 4: Install Bad Piggies from the search results
-
The fourth thing you need to do is to install Bad Piggies from the search results by clicking on its icon.
-
You will be redirected to its page where you can see more information about the game such as its description, rating, reviews, screenshots, etc.
-
To install it, click on the "Install" button at the top right corner of the page.
-
The installation process will begin and may take a few minutes depending on your internet speed.
-
Step 5: Launch Bad Piggies and enjoy the game
-
The fifth thing you need to do is to launch Bad Piggies and enjoy the game.
-
To launch it, click on its icon on your home screen or app drawer.
-
You will see a loading screen followed by a welcome screen where you can choose between playing online or offline.
-
Select your preferred option and start playing this fun and addictive game.
-
How to download and install patch Bad Piggies 1.5.0 on PC?
-
If you want to download and install patch Bad Piggies 1.5.0 on PC, you need to follow these steps:
-
Step 1: Download patch Bad Piggies 1.5.0 from a reliable source
The first thing you need to do is to download patch Bad Piggies 1.5.0 from a reliable source such as https://lasopabg487.weebly.com/blog/bad-piggies-150-download. This is a website that provides a link to download the patch file for free and without any viruses or malware.
-
Once you have downloaded the patch file, which is in ZIP format, save it to your computer and remember its location.
-
Step 2: Extract the patch files to your game folder
-
The next thing you need to do is to extract the patch files to your game folder where you have installed Bad Piggies.
-
To do this, you need to use a program that can unzip ZIP files such as WinRAR, 7-Zip, or PeaZip.
-
Right-click on the patch file and select "Extract here" or "Extract to" depending on your program.
-
You will see a folder named "Bad Piggies 1.5.0" that contains two files: "BadPiggies.exe" and "Patch.exe".
-
Copy these two files and paste them into your game folder, which is usually located at "C:\Program Files (x86)\Rovio Entertainment Ltd\Bad Piggies".
-
Replace the existing files if prompted.
-
Step 3: Run the patch executable file and follow the instructions
-
The third thing you need to do is to run the patch executable file and follow the instructions.
-
To do this, double-click on the "Patch.exe" file that you have copied to your game folder.
-
You will see a window that asks you to select your language. Choose English or any other language that you prefer.
-
Then, you will see another window that asks you to select your game version. Choose "Bad Piggies 1.5.0" from the drop-down menu.
-
Finally, you will see a window that shows the progress of the patching process. Wait until it is finished and click on "Exit".
-
Step 4: Restart your game and enjoy the new features
-
The fourth thing you need to do is to restart your game and enjoy the new features.
-
To do this, close your game if it is running and launch it again from BlueStacks or from your desktop shortcut.
-
You will see a new splash screen that shows the version number 1.5.0 at the bottom right corner.
-
You will also notice some new features such as:
-
-
New sandbox mode: The Road to El Porkado
-
New achievements: Road Hogs and Star Collector
-
New items: grappling hook, boxing glove, air pump, etc.
-
New levels: 15 new levels in Rise and Swine episode
-
New mechanics: suction cup wheels, spring-loaded boxing gloves, etc.
-
Bug fixes and performance improvements
-
-
Conclusion
-
In conclusion, Bad Piggies is a fun and addictive game that lets you play as the pigs from Angry Birds and help them build vehicles and machines to steal eggs from the birds. You can download and install Bad Piggies 1.5.0 on PC using BlueStacks emulator and enjoy over 200 levels of pig-flying fun. You can also download and install patch Bad Piggies 1.5.0 on PC using our guide and enjoy new features such as new sandbox mode, new achievements, new items, new levels, new mechanics, and more.
-
We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and happy gaming!
-
Frequently Asked Questions
-
-
Is Bad Piggies free to play?
-
Yes, Bad Piggies is free to play on PC using BlueStacks emulator. However, some features may require in-app purchases or watching ads.
-
Is Bad Piggies safe to download?
-
Yes, Bad Piggies is safe to download from Google Play Store or App Center on BlueStacks emulator. However, if you download it from other sources, make sure they are reliable and trustworthy.
-
Can I play Bad Piggies offline?
-
Yes, you can play Bad Piggies offline on PC using BlueStacks emulator. However, some features may require an internet connection, such as online leaderboards or cloud save.
-
Can I play Bad Piggies with friends?
-
No, Bad Piggies does not have a multiplayer mode or a co-op mode. However, you can compete with your friends and other players on online leaderboards or share your creations on social media.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Al Fatih 1453 Subtitle Indonesia Download WORK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Al Fatih 1453 Subtitle Indonesia Download WORK.md
deleted file mode 100644
index c0f961c023ac2542a25783252a8769ffa4eb5e05..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Al Fatih 1453 Subtitle Indonesia Download WORK.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
please install quicktime for downloading videos. try different methods for free! imdb picks. download or stream the new movie! watch all the latest new movies online or download. watch latest movies online download subtitle indonesia tv dramas and movies. download
-
download film al fatih 1453 subtitle indonesia download
watch murakami film sejarah islam with english subtitle indonesia bollywood kollywood movie online, download murakami film sejarah islam in mp3, and watch murakami film sejarah islam. watch film fetih 1453 (2012) with english subtitle indonesia free download movie, watch film fetih 1453 (2012) with english subtitle indonesia 3gp, download film fetih 1453 (2012) with english subtitle indonesia free mp3 download, buy film fetih 1453 (2012) dvd book from online and download book store.
-
there are many reasons for the different rating of movies and tv shows on imdb, including copyright, which is automatically applied by the system. if a lower rating is available for a title it is because a lower rating is available for the movie. get the best download movies as you want. enjoy the best streaming and download collection online. mobile application only have to scan the qr code and connect to the preferred servers. as soon as we find the exact solution to a problem, we will post it on the web. the term madhya pradesh is not allowed for naming a state in india.
-
partners with the best and brightest print, television, radio, and digital media to deliver a unique audience experience with a world-class publication that embraces. if you are facing any issues or confusion, please contact help me. if you have any queries, please feel free to contact us. you can always unsubscribe from the list with a single click.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Hile 2022 APK Play Offline or Online with Standard or Snooker Rules.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Hile 2022 APK Play Offline or Online with Standard or Snooker Rules.md
deleted file mode 100644
index 7efc312105e269d01a7c95b6e83684aa87dd05ec..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Hile 2022 APK Play Offline or Online with Standard or Snooker Rules.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
8 Ball Pool Hile 2022 Apk: How to Download and Use the Best Cheat for 8 Ball Pool
-
Introduction
-
Do you love playing pool games on your smartphone or tablet? If yes, then you must have heard of 8 Ball Pool, the most popular and addictive pool game in the world. Developed by Miniclip, this game lets you play with millions of players online, compete in tournaments, win trophies, and collect cues and coins.
But what if you want to have more fun and excitement in your pool games? What if you want to have an edge over your opponents and win every match easily? Well, there is a way to do that. You just need to download and use 8 Ball Pool Hile 2022 Apk, the best cheat for 8 Ball Pool.
-
What is 8 Ball Pool Hile 2022 Apk? It is a modified version of the original game that gives you unlimited access to all the features and resources of the game. With this cheat, you can:
-
-
Get unlimited coins and cash
-
Unlock all cues and tables
-
Use extended guidelines and aim assist
-
Enable auto-win mode and instant win option
-
Bypass anti-cheat detection and ban protection
-
So, how can you download and install 8 Ball Pool Hile 2022 Apk on your device? It's very easy. Just follow these simple steps:
-
-
Click on the link below to download the apk file of 8 Ball Pool Hile 2022 Apk.
-
Go to your device settings and enable the option to install apps from unknown sources.
-
Locate the downloaded apk file and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to complete.
8 ball pool hileli apk indir 2022
-8 ball pool hile nasıl yapılır 2022
-8 ball pool hile mod apk son sürüm
-8 ball pool hile apk dayı
-8 ball pool hile apk android oyun club
-8 ball pool hile apk para hilesi
-8 ball pool hile apk sınırsız para
-8 ball pool hile apk güncel
-8 ball pool hile apk hızlı vuruş
-8 ball pool hile apk uzun çubuk
-8 ball pool hile apk mega mod
-8 ball pool hile apk anti ban
-8 ball pool hile apk online
-8 ball pool hile apk vip
-8 ball pool hile apk elmas hilesi
-8 ball pool hile apk level atlama
-8 ball pool hile apk tüm masalar açık
-8 ball pool hile apk tüm toplar açık
-8 ball pool hile apk tüm kıyafetler açık
-8 ball pool hile apk tüm sopalar açık
-8 ball pool hile apk tüm ödüller açık
-8 ball pool hile apk tüm turnuvalar açık
-8 ball pool hile apk tüm özellikler açık
-8 ball pool hile apk tüm modlar açık
-8 ball pool hile apk tüm skinler açık
-8 ball pool hile apk bedava indir
-8 ball pool hile apk ücretsiz indir
-8 ball pool hile apk full indir
-8 ball pool hile apk linkli indir
-8 ball pool hile apk direk indir
-8 ball pool hile apk kolay indir
-8 ball pool hile apk güvenli indir
-8 ball pool hile apk virüssüz indir
-8 ball pool hile apk reklamsız indir
-8 ball pool hile apk kurulumu nasıl yapılır
-8 ball pool hile apk kullanımı nasıl yapılır
-8 ball pool hile apk yorumları nasıl yapılır
-8 ball pool hile apk puanları nasıl yapılır
-8 ball pool hile apk güncelleme nasıl yapılır
-8 ball pool hile apk silme nasıl yapılır
-8 ball pool hilesi nasıl indirilir ve kurulur 2022
-8 ball pool hack mod menu download for android
-how to get unlimited coins and cash in 8 ball pool
-best tricks and tips for playing 8 ball pool
-how to win every game in 8 ball pool
-how to unlock all cues and tables in 8 ball pool
-how to play with friends in 8 ball pool
-how to get free spins and scratchers in 8 ball pool
-
How to Play 8 Ball Pool with 8 Ball Pool Hile 2022 Apk
-
Now that you have installed 8 Ball Pool Hile 2022 Apk, you are ready to play and win every game. But how do you play 8 Ball Pool with this cheat? Well, it's very similar to playing the original game, but with some extra features and options. Here are some tips and tricks to help you play better:
-
How to choose your game mode and table
-
When you launch the game, you will see four different game modes: Play 1 on 1, Play with Friends, Play in Tournaments, and Practice Offline. You can choose any mode you want, depending on your preference and skill level. You can also choose from different tables, ranging from London to Venice, each with different entry fees and rewards.
-
With 8 Ball Pool Hile 2022 Apk, you don't have to worry about losing your coins or cash, because you have unlimited amounts of them. You can also unlock all the tables for free, without having to level up or pay anything. Just tap on the table you want to play on and start the game.
-
How to rack the balls and break effectively
-
After choosing your game mode and table, you will see the balls arranged in a triangle on the table. This is called the rack. The player who breaks the rack is called the breaker. The breaker is decided randomly or by a coin toss. The breaker has to hit the cue ball with the cue stick and make contact with any ball in the rack. The goal is to spread the balls across the table and pocket one or more balls.
-
With 8 Ball Pool Hile 2022 Apk, you can use extended guidelines and aim assist to help you break better. These features show you the trajectory and angle of your shots, as well as the possible outcomes of your shots. You can also enable auto-win mode or instant win option, which will automatically make you win the game after breaking, regardless of what happens next.
-
How to use the cue stick and spin the cue ball
-
The cue stick is the tool that you use to hit the cue ball. The cue ball is the white ball that you control with your cue stick. You can adjust the power and direction of your shots by dragging your finger on the screen. You can also apply spin to the cue ball by tapping on the spin icon on the bottom left corner of the screen. Spin can affect how the cue ball moves after hitting another ball or a cushion.
-
With 8 Ball Pool Hile 2022 Apk, you can customize your cue stick and pool table with different designs and colors. You can also unlock exclusive cues that have better stats and abilities, such as more power, accuracy, spin, and time. You can also use unlimited cues without having to recharge them.
-
How to pocket your balls and win the game
-
The objective of 8 Ball Pool is to pocket all your balls (either solids or stripes) before your opponent does, and then pocket the black 8 ball in a designated pocket. You have to call your shots before making them, by tapping on the pocket where you want to send your ball. If you pocket a ball of your type, you get another turn. If you miss or foul (such as hitting your opponent's ball first, or not hitting any ball at all), your turn ends and your opponent gets a chance.
With 8 Ball Pool Hile 2022 Apk, you can enable auto-win mode or the instant win option, which will automatically make you win the game after pocketing any ball, regardless of the rules or the outcome.
-
Tips and Tricks for Using 8 Ball Pool Hile 2022 Apk
-
Now that you know how to play 8 Ball Pool with 8 Ball Pool Hile 2022 Apk, you might be wondering how to make the most of this cheat. Here are some tips and tricks that will help you enjoy the game more and improve your skills:
-
How to customize your cue and pool table
-
One of the fun aspects of 8 Ball Pool is that you can customize your cue and pool table with different styles and themes. You can choose from hundreds of cues and tables, each with different looks and features. You can also create your own cue and table by mixing and matching different parts and colors.
-
With 8 Ball Pool Hile 2022 Apk, you can unlock all the cues and tables for free, without having to spend any coins or cash. You can also access exclusive cues and tables that are not available in the original game, such as the VIP cue, the Galaxy cue, and the Halloween table. You can change your cue and table anytime you want, by tapping on the gear icon on the top right corner of the screen.
-
How to earn more coins and cash
-
Coins and cash are the main currencies in 8 Ball Pool. You need coins to enter matches, buy cues, and upgrade your skills. You need cash to buy premium items, such as surprise boxes, scratch cards, and chat packs. You can earn coins and cash by winning matches, completing missions, watching ads, spinning the wheel, and opening chests.
-
With 8 Ball Pool Hile 2022 Apk, you don't have to worry about earning coins and cash, because you have unlimited amounts of them. You can also use them to buy anything you want in the game, without any restrictions or limitations. You can also use them to tip your opponents or send gifts to your friends.
-
How to unlock exclusive items and cues
-
Another way to spice up your 8 Ball Pool experience is to unlock exclusive items and cues that are not available in the regular game. These items and cues have special designs, effects, and abilities that make them stand out from the rest. Some examples of these items and cues are:
-
-
-
| Item/Cue | Description |
| --- | --- |
| The King Cue | A golden cue that has a crown on its tip. It has high stats and a royal aura. |
| The Firestorm Cue | A fiery cue that has flames on its shaft. It has high power and a burning effect. |
| The Ice Cue | A frosty cue that has ice crystals on its butt. It has high spin and a freezing effect. |
| The Dragon Cue | A mythical cue that has a dragon head on its tip. It has high accuracy and a dragon breath effect. |
| The Legendary Cue Collection | A collection of 20 cues that have unique designs and abilities. They also have a chance to recharge themselves after every shot. |
| The VIP Cue Collection | A collection of 10 cues that are only available for VIP members. They have high stats and a VIP badge. |
| The Surprise Box | A box that contains a random item or cue. It can be opened with cash or keys. |
| The Scratch Card | A card that can be scratched to reveal a prize. It can be bought with cash or earned by playing matches. |
| The Chat Pack | A pack of chat messages that can be used to communicate with other players. It can be bought with cash or earned by playing matches. |
-
With 8 Ball Pool Hile 2022 Apk, you can unlock all these items and cues for free, without having to spend any coins, cash, or keys. You can also access them anytime you want, by tapping on the shop icon on the top left corner of the screen.
-
How to challenge your friends and other players online
-
One of the best features of 8 Ball Pool is that you can play with your friends and other players online, in real-time. You can challenge anyone you want, from anywhere in the world, and show off your skills and style. You can also chat with your opponents, send them emojis, and tip them coins.
-
With 8 Ball Pool Hile 2022 Apk, you can challenge anyone you want, without any restrictions or limitations. You can also use extended guidelines and aim assist to help you win every match easily. You can also enable auto-win mode or instant win option, which will automatically make you win the game after making any shot, regardless of the rules or the outcome.
-
How to avoid getting banned or detected by Miniclip
-
The only downside of using 8 Ball Pool Hile 2022 Apk is that you might get banned or detected by Miniclip, the developer of the original game. Miniclip has a strict policy against cheating and hacking, and they use various methods to detect and ban users who use cheats or hacks. If you get banned or detected, you might lose your account, your progress, and your items.
-
However, with 8 Ball Pool Hile 2022 Apk, you don't have to worry about getting banned or detected, because this cheat has a built-in anti-cheat detection and ban protection system. This system prevents Miniclip from detecting your cheat usage and banning your account. It also hides your IP address and encrypts your data, making it impossible for Miniclip to trace your identity or location.
-
Conclusion
-
In conclusion, 8 Ball Pool Hile 2022 Apk is the best cheat for 8 Ball Pool that you can find online. It gives you unlimited access to all the features and resources of the game, such as coins, cash, cues, tables, items, and modes. It also gives you extra features and options, such as extended guidelines, aim assist, auto-win mode, instant win option, anti-cheat detection, and ban protection. It is easy to download and install, safe and virus-free, compatible with any device or platform, and free of charge.
-
If you love playing 8 Ball Pool and want to have more fun and excitement in your pool games, then you should definitely try out 8 Ball Pool Hile 2022 Apk. You will not regret it. You will enjoy the game more than ever before, and you will become a pool master in no time.
-
So what are you waiting for? Download 8 Ball Pool Hile 2022 Apk now and start playing like a pro!
What is the difference between 8 Ball Pool Hile 2022 Apk and other cheats?
-
8 Ball Pool Hile 2022 Apk is different from other cheats because it is a modified version of the original game that gives you unlimited access to all the features and resources of the game. Other cheats are usually external tools or apps that require you to run them separately from the game or inject them into the game. These cheats are more risky and less effective than 8 Ball Pool Hile 2022 Apk.
-
Is 8 Ball Pool Hile 2022 Apk safe and virus-free?
-
8 Ball Pool Hile 2022 Apk is safe and virus-free because it is developed by a team of professional programmers who have tested it thoroughly before releasing it to the public. It does not contain any malware or spyware that could harm your device or steal your personal information. It also does not require any permissions or access to your device's functions or data.
-
Can I use 8 Ball Pool Hile 2022 Apk on any device or platform?
-
8 Ball Pool Hile 2022 Apk can be used on any device or platform that supports Android applications. You can use it on your smartphone, tablet, laptop, or desktop, as long as they have an Android operating system. You can also use it on other platforms, such as iOS, Windows, or Mac, by using an Android emulator, such as BlueStacks or Nox Player.
-
Do I need to root or jailbreak my device to use 8 Ball Pool Hile 2022 Apk?
-
No, you do not need to root or jailbreak your device to use 8 Ball Pool Hile 2022 Apk. This cheat does not require any modifications or alterations to your device's system or firmware. It works perfectly fine on any device, regardless of its root or jailbreak status.
-
How can I contact the developers of 8 Ball Pool Hile 2022 Apk if I have any questions or issues?
-
If you have any questions or issues regarding 8 Ball Pool Hile 2022 Apk, you can contact the developers of this cheat by visiting their official website or social media pages. You can also leave a comment or a review on the download page of this cheat. The developers are very responsive and helpful, and they will try to solve your problems as soon as possible.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dawn Awakening The Ultimate Guide to Surviving the Post-Apocalyptic World.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dawn Awakening The Ultimate Guide to Surviving the Post-Apocalyptic World.md
deleted file mode 100644
index 5e1562a27a47d5f021b5670e0740523b3f93d041..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dawn Awakening The Ultimate Guide to Surviving the Post-Apocalyptic World.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
What is Dawn Awakening and Why is it Important?
-
Dawn awakening is a practice of waking up naturally with the sunrise, without the use of an alarm clock or other artificial means. It is a way of aligning our sleep cycle with the natural rhythm of light and darkness, which has many benefits for our physical and mental health.
Improved sleep quality: Waking up with the light signals our brain to stop producing melatonin, the hormone that regulates sleep. This helps us feel more refreshed and alert in the morning.
-
Enhanced mood: Exposure to natural light in the morning boosts our serotonin levels, the neurotransmitter that regulates mood, happiness, and well-being.
-
Reduced stress: Waking up gradually and peacefully reduces the cortisol levels, the hormone that triggers stress, anxiety, and inflammation.
-
Better immunity: Waking up with the sun strengthens our immune system by stimulating the production of natural killer cells, which fight infections and diseases.
-
Increased energy: Waking up with the light increases our metabolism and blood circulation, which provide us with more energy throughout the day.
-
-
How to practice dawn awakening:
-
Practicing dawn awakening is not as difficult as it may seem. Here are some tips and techniques to help you wake up naturally with the sunrise:
-
-
Go to bed early: The first step to wake up early is to go to bed early. Aim for at least seven to eight hours of sleep per night, and avoid caffeine, alcohol, and screens before bedtime.
-
Use curtains or blinds: If you live in an area where there is too much artificial light at night, use curtains or blinds to block it out. This will help you fall asleep faster and deeper.
-
Open your windows: If possible, open your windows before you go to bed or when you wake up. This will allow fresh air and natural light to enter your room, which will help you wake up more easily.
-
Avoid snoozing: When you wake up with the light, resist the temptation to snooze or go back to sleep. Snoozing disrupts your sleep cycle and makes you feel more groggy and tired.
-
Have a morning routine: Having a morning routine can motivate you to get out of bed and start your day. You can do some stretching, meditation, journaling, or anything that makes you feel good.
-
-
The Science Behind Dawn Awakening
-
Dawn awakening is not only a spiritual practice but also a scientific one. There is a lot of research that supports the benefits of waking up with the sun for our health and well-being.
-
dawn awakening game
-dawn awakening tencent
-dawn awakening release date
-dawn awakening apk
-dawn awakening android
-dawn awakening ios
-dawn awakening download
-dawn awakening gameplay
-dawn awakening beta
-dawn awakening bluestacks
-dawn awakening open world survival
-dawn awakening zombie survival game
-dawn awakening unreal engine 4
-dawn awakening pre register
-dawn awakening official website
-dawn awakening english version
-dawn awakening system requirements
-dawn awakening trailer
-dawn awakening review
-dawn awakening reddit
-dawn awakening discord
-dawn awakening wiki
-dawn awakening mod apk
-dawn awakening cheats
-dawn awakening tips and tricks
-dawn awakening best weapons
-dawn awakening best skills
-dawn awakening best class
-dawn awakening character creation
-dawn awakening crafting guide
-dawn awakening base building
-dawn awakening coop mode
-dawn awakening multiplayer mode
-dawn awakening pvp mode
-dawn awakening online mode
-dawn awakening offline mode
-dawn awakening emulator
-dawn awakening pc version
-dawn awakening mac version
-dawn awakening windows version
-dawn awakening linux version
-dawn awakening steam version
-dawn awakening google play store
-dawn awakening app store
-dawn awakening facebook page
-dawn awakening youtube channel
-dawn awakening twitter account
-dawn awakening instagram account
-dawn awakening tiktok account
-
The main reason why dawn awakening works is because it affects our circadian rhythm, which is our internal clock that regulates our sleep-wake cycle. Our circadian rhythm is influenced by external cues, such as light and temperature, which tell us when to sleep and when to wake up.
-
Light is the most powerful cue for our circadian rhythm. When we are exposed to natural light in the morning, it activates a part of our brain called the suprachiasmatic nucleus (SCN), which sends signals to other parts of our body to regulate our hormones, metabolism, temperature, and mood.
-
However, when we are exposed to artificial light at night, such as from screens, lamps, or streetlights, it confuses our circadian rhythm and disrupts our sleep quality. Artificial light suppresses the production of melatonin, which makes it harder for us to fall asleep and stay asleep. It also affects our serotonin levels, which can lead to depression, anxiety, and mood disorders.
-
One way to overcome the negative effects of artificial light is to use dawn simulation devices, which are special lamps that mimic the natural sunrise in your bedroom. These devices gradually increase the brightness and color temperature of the light in the morning, which helps you wake up more naturally and comfortably.
-
Dawn simulation devices have been shown to have many advantages over conventional alarm clocks, such as:
-
-
Improving sleep quality and duration: Studies have found that dawn simulation devices can improve the quality and duration of sleep by reducing the number of awakenings and increasing the amount of deep sleep.
-
Enhancing mood and cognitive performance: Studies have also found that dawn simulation devices can enhance mood and cognitive performance by increasing alertness, attention, memory, and executive function.
-
Reducing seasonal affective disorder (SAD): Studies have also found that dawn simulation devices can reduce the symptoms of seasonal affective disorder (SAD), which is a type of depression that occurs during the winter months due to lack of sunlight.
-
-
The Spiritual Meaning of Dawn Awakening
-
Dawn awakening is not only a scientific practice but also a spiritual one. Waking up with the sun can help us connect with nature and the divine, and inspire us to live more creatively, gratefully, and optimistically.
-
Waking up with the sun can help us connect with nature and the divine by:
-
-
Aligning ourselves with the natural cycle of life: Waking up with the sun reminds us that we are part of nature and that we follow the same cycle of birth, growth, decay, and death. It helps us appreciate the beauty and wonder of creation and feel more in harmony with ourselves and the world.
-
Acknowledging the presence and power of a higher force: Waking up with the sun also reminds us that there is a higher force that governs the universe and that we are not alone. It helps us feel more humble, grateful, and trusting in the divine plan and guidance.
-
-
Waking up with the sun can also inspire us to live more creatively, gratefully, and optimistically by:
-
-
Stimulating our imagination and expression: Waking up with the sun can stimulate our imagination and expression by exposing us to different colors, shapes, sounds, and sensations. It can help us see things from a fresh perspective and express ourselves more authentically and artistically.
-
Cultivating our gratitude and appreciation: Waking up with the sun can also cultivate our gratitude and appreciation by making us aware of the gifts and opportunities that each day brings. It can help us focus on what we have rather than what we lack, and on what we can do rather than what we can't.
-
Fostering our optimism and hope: Waking up with the sun can also foster our optimism and hope by showing us that every day is a new beginning and a chance to start over. It can help us overcome our fears and challenges, and embrace our dreams and possibilities.
-
-
The Challenges and Solutions of Dawn Awakening
-
Dawn awakening is a rewarding practice but it also comes with some challenges. Some of the obstacles that may prevent us from waking up with the sun are:
-
-
The modern lifestyle: Our modern lifestyle is often incompatible with dawn awakening. We tend to stay up late, work long hours, use artificial light, consume stimulants, and live in noisy environments. These factors interfere with our natural sleep cycle and make it harder for us to wake up early.
-
The different seasons: The different seasons also affect our ability to wake up with the sun. In winter, the days are shorter and darker, which makes it harder for us to get enough light exposure in the morning. In summer, the days are longer and brighter, which makes it harder for us to fall asleep at night.
-
The different climates: The different climates also influence our sleep-wake cycle. In hot climates, we may feel more uncomfortable sleeping at night due to high temperatures and humidity. In cold climates, we may feel more reluctant to get out of bed in the morning due to low temperatures and frost.
-
The different time zones: The different time zones also pose a challenge for dawn awakening. When we travel across different time zones, we may experience jet lag, which is a disruption of our circadian rhythm
caused by the mismatch between our internal clock and the external time. This can make us feel tired, irritable, and confused.
-
How to overcome the challenges of dawn awakening:
-
Despite these challenges, there are some solutions that can help us practice dawn awakening more easily and consistently. Here are some suggestions:
-
-
Adjust your schedule: The best way to overcome the modern lifestyle is to adjust your schedule to fit your natural sleep cycle. Try to go to bed and wake up at the same time every day, and avoid activities that can disrupt your sleep, such as working, watching TV, or using your phone at night.
-
Use light therapy: The best way to overcome the different seasons is to use light therapy, which is a treatment that involves exposing yourself to artificial light that mimics the natural sunlight. You can use a light therapy device in the morning to help you wake up, or in the evening to help you fall asleep.
-
Use temperature regulation: The best way to overcome the different climates is to use temperature regulation, which is a method that involves adjusting the temperature of your bedroom and your body to optimize your sleep quality. You can use a fan, an air conditioner, a heater, or a humidifier to create a comfortable sleeping environment. You can also use a warm or cold shower, a hot or cold drink, or a heating pad or ice pack to regulate your body temperature.
-
Use melatonin supplements: The best way to overcome the different time zones is to use melatonin supplements, which are pills that contain the hormone that regulates sleep. You can take melatonin before you travel to help you adjust to the new time zone, or after you arrive to help you fall asleep.
-
-
Conclusion
-
Dawn awakening is a practice that can improve our health, happiness, and spirituality. By waking up naturally with the sunrise, we can align ourselves with the natural rhythm of life, connect with nature and the divine, and inspire ourselves to live more creatively, gratefully, and optimistically.
-
Dawn awakening is not without its challenges, but they can be overcome with some adjustments and techniques. By following some simple tips and using some helpful tools, we can make dawn awakening a part of our daily routine and enjoy its benefits.
-
If you are interested in learning more about dawn awakening and how to practice it, here are some resources and recommendations for further reading:
-
FAQs
-
Q: What is dawn awakening?
A: Dawn awakening is a practice of waking up naturally with the sunrise, without the use of an alarm clock or other artificial means.
-
Q: What are the benefits of dawn awakening?
A: Some of the benefits of dawn awakening are improved sleep quality, enhanced mood, reduced stress, better immunity, and increased energy.
-
Q: How do I practice dawn awakening?
A: Some of the tips and techniques for practicing dawn awakening are going to bed early, using curtains or blinds, opening your windows, avoiding snoozing, and having a morning routine.
-
Q: What are some of the challenges of dawn awakening?
A: Some of the challenges of dawn awakening are the modern lifestyle, the different seasons, the different climates, and the different time zones.
-
Q: How do I overcome the challenges of dawn awakening?
A: Some of the solutions for overcoming the challenges of dawn awakening are adjusting your schedule, using light therapy, using temperature regulation, and using melatonin supplements.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/APKPure The Ultimate App Store for Android Users.md b/spaces/1phancelerku/anime-remove-background/APKPure The Ultimate App Store for Android Users.md
deleted file mode 100644
index 5ae70003a437f61698720cc95f59893e3cf06fe2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/APKPure The Ultimate App Store for Android Users.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
What is Apukpure and why you should use it
-
If you are an Android user, you might have heard of Apukpure, an alternative app store that allows you to download all sorts of applications that you can't find in Google Play Store. But what is Apukpure exactly and what makes it different from other app stores? In this article, we will answer these questions and show you how to use Apukpure to download apps and games on your Android device.
Apukpure is an online platform that provides APK files for Android users. APK files are the installation packages for Android applications, similar to EXE files for Windows. By downloading APK files from Apukpure, you can install apps and games that are not available in your country, region, or device. You can also access older versions of apps and games that have been removed or updated in Google Play Store.
-
How does Apukpure work?
-
Apukpure works by scanning and verifying the APK files from various sources on the internet. It then uploads them to its own servers and provides a download link for users. Apukpure also has an app that you can install on your Android device, which acts as a browser and downloader for the APK files. You can use the app to search, download, and install apps and games from Apukpure.
-
What are the benefits of Apukpure?
-
There are many benefits of using Apukpure, such as:
-
-
You can access apps and games that are not available in Google Play Store due to geo-restrictions, compatibility issues, or censorship.
-
You can download apps and games faster and easier than using Google Play Store.
-
You can update apps and games without waiting for Google Play Store to release them.
-
You can downgrade apps and games to older versions if you don't like the new updates.
-
You can discover new and interesting apps and games that are not featured in Google Play Store.
-
-
How to download and install Apukpure on your Android device
-
To use Apukpure, you need to download and install its app on your Android device. Here are the steps to do so:
-
Step 1: Enable unknown sources
-
Before you can install any APK file on your Android device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than Google Play Store. To enable unknown sources, follow these steps:
-
APKPure app download for Android
-APKPure apk file free download
-APKPure alternative app store
-APKPure games download
-APKPure app store review
-APKPure for PC Windows
-APKPure mod apk download
-APKPure app not working
-APKPure vs Aptoide
-APKPure app update
-APKPure app install
-APKPure app lock
-APKPure app manager
-APKPure app backup
-APKPure app uninstaller
-APKPure app downloader
-APKPure app categories
-APKPure app ratings
-APKPure app recommendations
-APKPure app search
-APKPure app history
-APKPure app settings
-APKPure app notifications
-APKPure app permissions
-APKPure app security
-APKPure app size
-APKPure app speed
-APKPure app language
-APKPure app region
-APKPure app support
-APKPure app feedback
-APKPure app features
-APKPure app benefits
-APKPure app disadvantages
-APKPure app pros and cons
-APKPure app comparison
-APKPure app alternatives
-APKPure app competitors
-APKPure app advantages
-APKPure app disadvantages
-
-
Go to your device's settings.
-
Tap on security or privacy.
-
Find and toggle on unknown sources or allow installation from unknown sources.
-
-
Step 2: Download the Apukpure APK file
-
Next, you need to download the Apukpure APK file from its official website and save it to your device, remembering where the file is stored.
-
Step 3: Install the Apukpure APK file
-
Finally, you need to install the Apukpure app on your device. To do so, follow these steps:
-
-
Locate the Apukpure APK file in your downloads folder or notification bar.
-
Tap on the file to open it.
-
Tap on install and wait for the installation to complete.
-
Tap on open to launch the Apukpure app.
-
-
How to use Apukpure to download apps and games
-
Now that you have installed the Apukpure app, you can use it to download apps and games on your device. Here are the steps to do so:
-
Step 1: Open the Apukpure app
-
Open the Apukpure app from your app drawer or home screen. You will see a simple and user-friendly interface with various categories and tabs. You can browse the featured, popular, new, and updated apps and games on the home page. You can also use the menu button on the top left corner to access more options and settings.
-
Step 2: Search for the app or game you want
-
If you have a specific app or game in mind, you can use the search bar on the top right corner to find it. Just type in the name of the app or game and tap on the magnifying glass icon. You will see a list of results that match your query. You can also filter the results by category, rating, price, size, and more.
-
Step 3: Download and install the app or game
-
Once you have found the app or game you want, tap on it to open its details page. You will see a brief description, screenshots, ratings, reviews, and more information about the app or game. You will also see a green download button at the bottom of the page. Tap on it to start downloading the APK file. You will see a progress bar and a notification on your screen. When the download is finished, tap on install to install the app or game on your device. You can then open it from your app drawer or home screen.
-
How to update apps and games with Apukpure
-
One of the advantages of using Apukpure is that you can update apps and games without waiting for Google Play Store to release them. Here are the steps to do so:
-
Step 1: Check for updates in the Apukpure app
-
To check for updates, open the Apukpure app and tap on the menu button on the top left corner. Then tap on update. You will see a list of apps and games that have new versions available. You can also tap on check all to scan all your installed apps and games for updates.
-
Step 2: Download and install the updates
-
To download and install the updates, tap on update all or select the apps and games you want to update individually. Then tap on download. You will see a progress bar and a notification on your screen. When the download is finished, tap on install to install the updates on your device. You can then open them from your app drawer or home screen.
-
Conclusion
-
In conclusion, APKPure is an alternative app store that allows you to download apps and games that are not available on Google Play Store. It also lets you update apps and games faster and more easily than Google Play Store. To use APKPure, you need to download and install its app on your Android device. Then you can use it to search for, download, and install apps and games from its platform.
-
We hope this article has helped you understand what APKPure is and how to use it. If you have any questions or feedback, please feel free to leave a comment below.
-
Frequently Asked Questions
-
-
Is APKPure safe?
-
APKPure claims that it scans and verifies all the APK files before uploading them to its servers. However, there is always a risk in downloading APK files from unknown sources, as they may contain malware or viruses. Therefore, we recommend that you use reliable antivirus software on your device and only download APK files from trusted sources.
-
Is APKPure legal?
-
APKPure does not host any pirated or illegal content on its platform. It only provides links to APK files that are freely available on the internet. However, some of these APK files may violate the terms and conditions of Google Play Store or other app developers. Therefore, we advise that you use APKPure at your own discretion and respect the intellectual property rights of others.
-
-
-
How to uninstall APKPure?
-
If you want to uninstall APKPure from your device, you can do so by following these steps:
-
-
Go to your device's settings.
-
Tap on apps or applications.
-
Find and tap on APKPure.
-
Tap on uninstall and confirm.
-
-
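Programmatically, the same uninstall flow can be triggered with a standard intent. This Kotlin sketch opens the system's uninstall confirmation dialog; the package id is a placeholder you would need to verify under Settings > Apps, not APKPure's confirmed identifier:

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Opens the system uninstall dialog for a given package -- the same
// flow as Settings > Apps > Uninstall, just triggered from code.
// The user still has to confirm; nothing is removed silently.
fun requestUninstall(context: Context, packageName: String) {
    val intent = Intent(Intent.ACTION_DELETE, Uri.parse("package:$packageName"))
    context.startActivity(intent)
}

// Usage with a hypothetical placeholder id -- check the real one
// in Settings > Apps on your own device first:
// requestUninstall(context, "com.example.apkpure")
```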
How to contact APKPure?
-
If you have any questions, suggestions, or complaints about APKPure, you can contact the team using the following methods:
-
-
Email: support@apkpure.com
-
Facebook: https://www.facebook.com/apkpure
-
Twitter: https://twitter.com/apkpure
-
-
What are some alternatives to APKPure?
-
If you are looking for some alternatives to APKPure, you can try these app stores:
-
-
Aptoide: A decentralized app store that allows users to create and manage their own app stores.
-
Uptodown: A multi-platform app store that offers apps and games for Android, Windows, Mac, Linux, and more.
-
APKMirror: A website that hosts APK files for popular apps and games that are updated frequently.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Android Oyun Club Car Parking Son Srm The Most Popular Parking Game on Google Play.md b/spaces/1phancelerku/anime-remove-background/Android Oyun Club Car Parking Son Srm The Most Popular Parking Game on Google Play.md
deleted file mode 100644
index 34b4ecd861dca77be98bc384647f1a9926065f45..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Android Oyun Club Car Parking Son Srm The Most Popular Parking Game on Google Play.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
Android Oyun Club Car Parking Son Sürüm: A Review
-
If you are looking for a new and exciting game to play on your Android device, you might want to check out Android Oyun Club Car Parking Son Sürüm. This is the latest version of the popular Car Parking Multiplayer game, which is available for free on the Android Oyun Club platform. In this article, we will review what Android Oyun Club is, what Car Parking Multiplayer is, and what the latest version of the game offers. We will also provide you with the download link and instructions on how to install and play the game.
-
What is Android Oyun Club?
-
Android Oyun Club is a platform for sharing and downloading Android games. It is a community of gamers and developers who love playing and creating games for Android devices. On Android Oyun Club, you can find thousands of games in various genres, such as action, adventure, racing, simulation, puzzle, and more. You can also find games that are not available on Google Play Store, or that are paid on Google Play Store but free on Android Oyun Club. You can also interact with other users by leaving comments, ratings, and tips on the games.
A platform for sharing and downloading Android games
-
Android Oyun Club allows you to download and play any game you want for free. You can browse through the categories or search for your favorite game by name or keyword. You can also see the ratings, reviews, screenshots, and videos of the games before downloading them. You can also upload your own games or mods to share with other users. You can also request games that are not available on the platform, and the developers will try to find them for you.
-
A community of gamers and developers
-
Android Oyun Club is not just a platform for downloading games, but also a community of gamers and developers who love Android games. You can join the forums and chat rooms to discuss games, share tips, ask questions, or make friends with other users. You can also follow the news and updates about the latest games, events, contests, and giveaways on the platform. You can also support the developers by donating or buying their premium games.
-
android oyun club car parking multiplayer indir
-android oyun club car parking son sürüm apk
-android oyun club car parking mod menu
-android oyun club car parking hileli
-android oyun club car parking 3d
-android oyun club car parking pro
-android oyun club car parking yeni sürüm
-android oyun club car parking online
-android oyun club car parking simulator
-android oyun club car parking 2023
-android oyun club car parking hack
-android oyun club car parking premium
-android oyun club car parking güncel sürüm
-android oyun club car parking para hilesi
-android oyun club car parking real
-android oyun club car parking vip
-android oyun club car parking full sürüm
-android oyun club car parking mod apk indir
-android oyun club car parking update
-android oyun club car parking free download
-android oyun club car parking unlimited money
-android oyun club car parking latest version
-android oyun club car parking cheats
-android oyun club car parking cracked
-android oyun club car parking offline
-android oyun club car parking classic
-android oyun club car parking mod menu indir
-android oyun club car parking yeni araçlar
-android oyun club car parking beta sürümü
-android oyun club car parking drift mode
-android oyun club car parking extreme
-android oyun club car parking gold hilesi
-android oyun club car parking ios indir
-android oyun club car parking keyifli oyuncu
-android oyun club car parking lite sürümü
-android oyun club car parking modifiye hilesi
-android oyun club car parking no ads
-android oyun club car parking oyuncu tv
-android oyun club car parking premium apk indir
-android oyun club car parking quizlet answers
-
What is Car Parking Multiplayer?
-
Car Parking Multiplayer is one of the most popular games on Android Oyun Club. It is a realistic and fun parking game that challenges you to park your car in various scenarios. You can also enjoy a multiplayer open world mode where you can explore, race, chat, and trade with other players. You can also customize your car with different parts, colors, stickers, and accessories.
-
A realistic and fun parking game
-
Car Parking Multiplayer offers you 82 real-life parking and driving challenges. You can choose from 100 cars with real interiors and physics. You can park cars, trucks, buses, or any other vehicle you want. You have to follow the traffic rules, avoid obstacles, use indicators, mirrors, cameras, and sensors to park your car correctly. You can also adjust the difficulty level, camera angle, steering mode, weather condition, time of day, and more to suit your preference.
-
A multiplayer open world mode
-
Car Parking Multiplayer also lets you enjoy a multiplayer open world mode where you can free roam in a large map with real gas stations and car services. You can compete against real players in multiplayer racing or join them in cooperative missions. You can also chat with them using voice or text messages. You can also exchange cars with other players or buy and sell them in the market.
-
A car customization feature
-
Car Parking Multiplayer also lets you customize your car with different parts, colors, stickers, and accessories. You can change the engine, suspension, wheels, tires, brakes, exhaust, turbo, transmission, and more to improve the performance of your car. You can also paint your car with any color you want, or apply decals and vinyls to make it look unique. You can also add spoilers, bumpers, hoods, grills, lights, horns, and more to enhance the appearance of your car.
-
What is the latest version of Car Parking Multiplayer?
-
The latest version of Car Parking Multiplayer is 4.8.4.1, which was released on June 15, 2023. This version has some new features and improvements that make the game more enjoyable and realistic. Here are some of the highlights of the latest version:
-
The new features and improvements
-
-
A new map with a city, a desert, and a highway.
-
A new garage with more space and options for car customization.
-
A new car wash system that lets you clean your car and earn money.
-
A new police mode that lets you chase or be chased by the cops.
-
A new chat system that lets you send emojis and stickers to other players.
-
A new radio system that lets you listen to music from your device or online stations.
-
A new weather system that changes the climate and the lighting of the map.
-
A new traffic system that adds more cars and pedestrians to the map.
-
A new damage system that shows the effects of collisions and accidents on your car.
-
A new physics system that makes the car handling more realistic and responsive.
-
-
The download link and instructions
-
If you want to download and play the latest version of Car Parking Multiplayer, you can follow these steps:
-
-
Go to the Android Oyun Club website and search for Car Parking Multiplayer.
-
Click on the download button and wait for the file to be downloaded.
-
Open the file manager on your device and locate the downloaded file.
-
Tap on the file and allow the installation of unknown sources if prompted.
-
Wait for the installation to be completed and launch the game.
-
-
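If a sideloaded APK refuses to install, the file is often truncated or mislabeled. As a quick sanity check, this Kotlin sketch (runnable on any JVM) verifies that the file is a readable zip archive containing the entries every ordinary APK ships with; the download path is a hypothetical example:

```kotlin
import java.io.File
import java.util.zip.ZipFile

// A valid APK is a zip archive that contains AndroidManifest.xml and,
// for ordinary apps, at least one classes.dex. A file failing this
// check is damaged, incomplete, or not an APK at all.
fun looksLikeValidApk(path: String): Boolean =
    try {
        ZipFile(File(path)).use { zip ->
            zip.getEntry("AndroidManifest.xml") != null &&
                zip.getEntry("classes.dex") != null
        }
    } catch (e: Exception) {
        false // not a readable zip archive
    }

fun main() {
    // Hypothetical download location; adjust to where your file landed.
    val path = "/sdcard/Download/car-parking-multiplayer.apk"
    println(if (looksLikeValidApk(path)) "Looks like a valid APK" else "File is damaged or not an APK")
}
```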
Conclusion
-
Android Oyun Club Car Parking Son Sürüm is a great game for anyone who loves parking games or driving games. It offers a realistic and fun parking experience with a variety of cars, challenges, and modes. It also has a multiplayer open world mode where you can explore, race, chat, and trade with other players. You can also customize your car with different parts, colors, stickers, and accessories. The latest version of the game has some new features and improvements that make it even better. You can download it for free from Android Oyun Club and enjoy it on your Android device.
-
Why you should try Android Oyun Club Car Parking Son Sürüm
-
You should try Android Oyun Club Car Parking Son Sürüm because:
-
-
It is free to download and play.
-
It is realistic and fun to play.
-
It has a lot of cars, challenges, and modes to choose from.
-
It has a multiplayer open world mode where you can interact with other players.
-
It has a car customization feature where you can make your car unique.
-
It has a new version with new features and improvements.
-
-
FAQs
-
Here are some frequently asked questions about Android Oyun Club Car Parking Son Sürüm:
-
-
Question
Answer
-
Is Android Oyun Club safe to use?
Yes, Android Oyun Club is safe to use. It does not contain any viruses or malware. However, you should always be careful when downloading files from unknown sources and scan them with an antivirus before opening them.
-
Is Car Parking Multiplayer online or offline?
Car Parking Multiplayer can be played both online and offline. You can play the parking challenges offline without an internet connection. You can also play the multiplayer open world mode online with other players. However, you will need an internet connection to download the game and update it to the latest version.
-
How can I get more money in Car Parking Multiplayer?
You can get more money in Car Parking Multiplayer by completing the parking challenges, washing your car, selling or exchanging your car with other players, or buying premium cars with real money.
-
How can I play Car Parking Multiplayer with my friends?
You can play Car Parking Multiplayer with your friends by joining the same server and map. You can also create your own private server and invite your friends to join. You can also add your friends as contacts and chat with them in the game.
-
How can I update Car Parking Multiplayer to the latest version?
You can update Car Parking Multiplayer to the latest version by downloading it from Android Oyun Club. You can also check for updates in the game settings and download them from there. You should always update the game to the latest version to enjoy the new features and improvements.
-
-
I hope this article has helped you learn more about Android Oyun Club Car Parking Son Sürüm. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy parking!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Animal Tycoon - Zoo Craft Game Mod Apk The Ultimate Idle Zoo Simulation.md b/spaces/1phancelerku/anime-remove-background/Animal Tycoon - Zoo Craft Game Mod Apk The Ultimate Idle Zoo Simulation.md
deleted file mode 100644
index c636b31dfe9b2ce9aba6350272856d6cf7fceea9..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Animal Tycoon - Zoo Craft Game Mod Apk The Ultimate Idle Zoo Simulation.md
+++ /dev/null
@@ -1,190 +0,0 @@
-
-
Animal Tycoon - Zoo Craft Game Mod Apk: A Fun and Creative Way to Build Your Own Zoo
-
Do you love animals and dream of running your own zoo? If so, you might want to check out Animal Tycoon - Zoo Craft Game, a simulation game that lets you build and manage a wildlife park full of exotic creatures. And if you want to make the game even more fun and easy, you can download the mod apk version of Animal Tycoon - Zoo Craft Game, which gives you unlimited money, gems, and access to all the animals and items in the game. In this article, we will tell you more about this game, why you should download the mod apk, how to install it, and some tips and tricks to play it.
What is Animal Tycoon - Zoo Craft Game?
-
Animal Tycoon - Zoo Craft Game is a simulation game developed by Mini Games Inc. It was released in March 2021 and has over 50,000 downloads on Google Play. The game is rated 4.1 out of 5 stars by the users.
-
A simulation game that lets you create and manage a wildlife park
-
In Animal Tycoon - Zoo Craft Game, you are the owner and zookeeper of a wildlife park. Your goal is to create a beautiful and profitable zoo that attracts visitors from all over the world. You can choose from hundreds of animals, habitats, decorations, and attractions to design your park according to your preferences. You can also breed new animals, feed them, play with them, and watch them interact with each other.
-
A game that features hundreds of animals, habitats, decorations, and attractions
-
Animal Tycoon - Zoo Craft Game has a huge variety of animals to choose from. You can find common animals like lions, tigers, elephants, giraffes, zebras, pandas, monkeys, bears, penguins, flamingos, etc. You can also find rare animals like unicorns, dragons, dinosaurs, phoenixes, etc. Each animal has its own personality, needs, and preferences. You can also customize their habitats with different types of fences, plants, rocks, water features, etc. You can also decorate your park with various items like statues, fountains, benches, lamps, signs, etc. You can also add attractions like roller coasters, ferris wheels, carousels, etc. to make your park more fun and exciting.
-
A game that challenges you to satisfy your visitors and earn money
-
Animal Tycoon - Zoo Craft Game is not just about building your zoo. You also have to manage it well. You have to make sure that your animals are happy and healthy. You have to provide them with food, water, toys, medicine, etc. You also have to keep your park clean and safe. You have to hire staff like cleaners, vets, security guards, etc. You also have to attract visitors to your park by setting ticket prices, advertising campaigns, etc. You have to satisfy their needs and wants by providing them with food, drinks, restrooms, souvenirs, etc. You have to earn money from your visitors and use it to expand and improve your park. You can also earn money from completing quests and achievements.
-
Why download the mod apk version of Animal Tycoon - Zoo Craft Game?
-
Animal Tycoon - Zoo Craft Game is a free game to download and play, but it also has some limitations and drawbacks. For example, you have to wait for a long time to collect money and gems, which are the main currencies in the game. You also have to watch ads or spend real money to get more money and gems. You also have to unlock the animals and items by reaching certain levels or paying with money and gems. If you want to enjoy the game without these restrictions and annoyances, you can download the mod apk version of Animal Tycoon - Zoo Craft Game, which offers you many benefits and advantages.
-
To enjoy unlimited money and gems
-
The mod apk version of Animal Tycoon - Zoo Craft Game gives you unlimited money and gems from the start. You don't have to wait or watch ads or spend real money to get more money and gems. You can use them to buy anything you want in the game, such as animals, habitats, decorations, attractions, etc. You can also use them to speed up the processes, such as breeding, building, upgrading, etc. You can also use them to skip the quests and achievements if you don't feel like doing them.
-
To unlock all the animals and items
-
The mod apk version of Animal Tycoon - Zoo Craft Game also unlocks all the animals and items in the game. You don't have to reach certain levels or pay with money and gems to unlock them. You can access them from the beginning and use them to create your dream zoo. You can also mix and match different animals and items to create unique combinations and designs.
-
animal tycoon zoo craft game hack apk
-animal tycoon zoo craft game unlimited money apk
-animal tycoon zoo craft game mod apk download
-animal tycoon zoo craft game latest version mod apk
-animal tycoon zoo craft game offline mod apk
-animal tycoon zoo craft game cheats apk
-animal tycoon zoo craft game premium mod apk
-animal tycoon zoo craft game free shopping mod apk
-animal tycoon zoo craft game unlocked mod apk
-animal tycoon zoo craft game pro mod apk
-animal tycoon zoo craft game full mod apk
-animal tycoon zoo craft game cracked mod apk
-animal tycoon zoo craft game mega mod apk
-animal tycoon zoo craft game vip mod apk
-animal tycoon zoo craft game no ads mod apk
-animal tycoon zoo craft game android mod apk
-animal tycoon zoo craft game ios mod apk
-animal tycoon zoo craft game pc mod apk
-animal tycoon zoo craft game online mod apk
-animal tycoon zoo craft game 3d mod apk
-animal tycoon zoo craft game simulation mod apk
-animal tycoon zoo craft game idle mod apk
-animal tycoon zoo craft game strategy mod apk
-animal tycoon zoo craft game management mod apk
-animal tycoon zoo craft game adventure mod apk
-animal tycoon zoo craft game fun mod apk
-animal tycoon zoo craft game cute mod apk
-animal tycoon zoo craft game realistic mod apk
-animal tycoon zoo craft game wild mod apk
-animal tycoon zoo craft game exotic mod apk
-animal tycoon zoo craft game rare mod apk
-animal tycoon zoo craft game endangered mod apk
-animal tycoon zoo craft game rescue mod apk
-animal tycoon zoo craft game breeding mod apk
-animal tycoon zoo craft game evolution mod apk
-animal tycoon zoo craft game genetics mod apk
-animal tycoon zoo craft game hybrid mod apk
-animal tycoon zoo craft game mutation mod apk
-animal tycoon zoo craft game customization mod apk
-animal tycoon zoo craft game decoration mod apk
-
To remove ads and in-app purchases
-
The mod apk version of Animal Tycoon - Zoo Craft Game also removes all the ads and in-app purchases in the game. You don't have to watch ads or spend real money to enjoy the game. You can play the game without any interruptions or distractions. You can also play the game offline without any internet connection.
-
How to download and install Animal Tycoon - Zoo Craft Game mod apk?
-
If you are interested in downloading and installing Animal Tycoon - Zoo Craft Game mod apk, you can follow these simple steps:
-
Find a reliable source for the mod apk file
-
The first step is to find a reliable source for the mod apk file of Animal Tycoon - Zoo Craft Game. There are many websites that offer mod apk files for various games, but not all of them are safe and trustworthy. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some of them may also provide fake or outdated mod apk files that don't work or cause problems in the game. Therefore, you should be careful and do some research before downloading any mod apk file from any website. You should check the reviews, ratings, comments, and feedback from other users who have downloaded the mod apk file from that website. You should also scan the mod apk file with an antivirus or anti-malware program before installing it on your device.
-
Enable unknown sources on your device settings
-
The second step is to enable unknown sources on your device settings. This is because your device may not allow you to install apps from sources other than Google Play Store by default. To enable unknown sources, you can go to your device settings > security > unknown sources > toggle on. This will allow you to install apps from sources other than Google Play Store.
-
Download and install the mod apk file
-
The third step is to download and install the mod apk file of Animal Tycoon - Zoo Craft Game on your device. To do this, you can go to the website where you found the mod apk file and click on the download button. The mod apk file will be downloaded to your device storage. Then, you can go to your device storage > downloads > find the mod apk file > tap on it > install. The installation process may take a few seconds or minutes depending on your device performance.
-
Launch the game and enjoy
-
The final step is to launch the game and enjoy it with unlimited money, gems, animals, items, etc. To do this, you can go to your device home screen > find the game icon > tap on it > start playing. You will see that you have unlimited money and gems in your account. You will also see that all the animals and items are unlocked in the game. You can also enjoy the game without any ads or in-app purchases. You can also play the game offline without any internet connection.
-
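For completeness, launching an installed game from code works the same way as tapping its icon: you ask the package manager for the app's launcher intent. A minimal Kotlin sketch, with a placeholder package id you would replace with the game's real one:

```kotlin
import android.content.Context
import android.content.Intent

// Launches an installed app by package name, returning false if the
// app is missing or has no launcher activity. The default id below is
// a hypothetical placeholder -- look up the real one in Settings > Apps.
fun launchGame(context: Context, packageName: String = "com.example.animaltycoon"): Boolean {
    val intent: Intent? = context.packageManager.getLaunchIntentForPackage(packageName)
    return if (intent != null) {
        context.startActivity(intent)
        true
    } else {
        false
    }
}
```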
What are some tips and tricks to play Animal Tycoon - Zoo Craft Game?
-
Animal Tycoon - Zoo Craft Game is a fun and creative game, but it can also be challenging and complex. If you want to play the game well and achieve your goals, you can follow these tips and tricks:
-
Plan your park layout carefully
-
One of the most important aspects of Animal Tycoon - Zoo Craft Game is the park layout. You have to plan your park layout carefully to make the best use of the space and resources. You have to consider the needs and preferences of your animals, visitors, and staff. You have to balance the aesthetics and functionality of your park. You have to create a park that is attractive, comfortable, convenient, and profitable. Here are some tips to plan your park layout:
-
-
Use different types of fences to separate different habitats and zones.
-
Use paths to connect different habitats and attractions.
-
Use signs to guide your visitors and staff.
-
Use plants, rocks, water features, etc. to decorate your habitats and park.
-
Use statues, fountains, benches, lamps, etc. to add more charm and beauty to your park.
-
Use roller coasters, ferris wheels, carousels, etc. to add more fun and excitement to your park.
-
Use food stalls, drink stands, restrooms, souvenir shops, etc. to provide services and amenities to your visitors.
-
The table below compares the pros and cons of the different types of fences, paths, signs, plants, rocks, water features, statues, fountains, benches, lamps, roller coasters, ferris wheels, carousels, food stalls, drink stands, restrooms, and souvenir shops.
-
-
-
-
Type
-
Pros
-
Cons
-
-
-
Fences
-
- Different types of fences have different durability, cost, and appearance. - Fences can keep your animals safe and secure. - Fences can create different habitats and zones in your park.
-
- Fences can block the view of your animals and attractions. - Fences can be damaged by animals or visitors. - Fences can take up space in your park.
-
-
-
Paths
-
- Paths can connect different habitats and attractions in your park. - Paths can make your park more accessible and convenient for your visitors and staff. - Paths can enhance the look of your park with different colors and patterns.
-
- Paths can be expensive to build and maintain. - Paths can be crowded by visitors and staff. - Paths can limit the space for your animals and attractions.
-
-
-
Signs
-
- Signs can guide your visitors and staff to different habitats and attractions in your park. - Signs can inform your visitors and staff about the names, facts, and rules of your animals and attractions. - Signs can add more personality and style to your park with different fonts and designs.
-
- Signs can be costly to buy and install. - Signs can be vandalized by visitors or staff. - Signs can clutter your park with too many words and images.
-
-
-
Plants
-
- Plants can beautify your habitats and park with different colors and shapes. - Plants can provide shade, oxygen, and food for your animals and visitors. - Plants can attract more wildlife and biodiversity to your park.
-
- Plants can be expensive to buy and plant. - Plants can require watering, pruning, fertilizing, etc. - Plants can grow out of control or die if not cared for properly.
-
-
Rocks
-
- Rocks can decorate your habitats and park with different textures and shapes. - Rocks can provide shelter, hiding places, and basking spots for your animals. - Rocks can create natural barriers and boundaries in your park.
-
- Rocks can be heavy and hard to move and place. - Rocks can be sharp and dangerous for your animals and visitors. - Rocks can erode or crack over time.
-
-
-
Water features
-
- Water features can beautify your habitats and park with different sounds and movements. - Water features can provide water, humidity, and cooling for your animals and visitors. - Water features can attract more aquatic and amphibious animals to your park.
-
- Water features can be costly to build and operate. - Water features can require filtering, pumping, cleaning, etc. - Water features can leak, overflow, or freeze if not maintained properly.
-
-
-
Statues
-
- Statues can beautify your habitats and park with different artistic and cultural expressions. - Statues can represent your animals, attractions, or themes in your park. - Statues can add more prestige and value to your park.
-
- Statues can be expensive to buy and install. - Statues can be damaged by weather, animals, or visitors. - Statues can take up space in your habitats and park.
-
-
-
Fountains
-
- Fountains can beautify your habitats and park with different water effects and lights. - Fountains can provide water, humidity, and cooling for your animals and visitors. - Fountains can create a relaxing and soothing atmosphere in your park.
-
- Fountains can be costly to build and operate. - Fountains can require filtering, pumping, cleaning, etc. - Fountains can leak, overflow, or freeze if not maintained properly.
-
-
-
Benches
-
- Benches can provide seating and resting places for your visitors and staff. - Benches can enhance the comfort and convenience of your park. - Benches can come in different styles and materials to suit your park theme.
-
- Benches can be expensive to buy and install. - Benches can be damaged by weather, animals, or visitors. - Benches can be occupied by unwanted guests or littered with trash.
-
-
-
Lamps
-
- Lamps can provide lighting and visibility for your habitats and park at night. - Lamps can enhance the beauty and ambiance of your park at night. - Lamps can come in different colors and shapes to suit your park theme.
-
- Lamps can be expensive to buy and install. - Lamps can consume electricity and generate heat. - Lamps can break or malfunction if not maintained properly.
-
-
-
Roller coasters
-
- Roller coasters can provide thrill and excitement for your visitors. - Roller coasters can attract more visitors to your park. - Roller coasters can come in different types, sizes, and designs to suit your park theme.
-
- Roller coasters can be very expensive to build and operate. - Roller coasters can require safety inspections, maintenance, repairs, etc. - Roller coasters can cause noise, pollution, or accidents if not managed properly.
-
-
Ferris wheels
-
- Ferris wheels can provide a panoramic view of your habitats and park for your visitors. - Ferris wheels can attract more visitors to your park. - Ferris wheels can come in different heights, diameters, and designs to suit your park theme.
-
- Ferris wheels can be very expensive to build and operate. - Ferris wheels can require safety inspections, maintenance, repairs, etc. - Ferris wheels can cause noise, pollution, or accidents if not managed properly.
-
-
-
Carousels
-
- Carousels can provide a fun and nostalgic ride for your visitors. - Carousels can attract more visitors to your park. - Carousels can come in different themes, styles, and animals to suit your park theme.
-
- Carousels can be expensive to build and operate. - Carousels can require safety inspections, maintenance, repairs, etc. - Carousels can cause noise, pollution, or accidents if not managed properly.
-
-
-
Food stalls
-
- Food stalls can provide food and drinks for your visitors and staff. - Food stalls can enhance the satisfaction and loyalty of your visitors and staff. - Food stalls can come in different cuisines, menus, and prices to suit your park theme.
-
- Food stalls can be expensive to buy and install. - Food stalls can require food safety inspections, hygiene standards, inventory management, etc. - Food stalls can cause waste, litter, or pests if not cleaned properly.
-
-
-
Drink stands
-
- Drink stands can provide drinks for your visitors and staff. - Drink stands can enhance the satisfaction and loyalty of your visitors and staff. - Drink stands can come in different types, flavors, and prices to suit your park theme.
-
- Drink stands can be expensive to buy and install. - Drink stands can require food safety inspections, hygiene standards, inventory management, etc. - Drink stands can cause waste, litter, or pests if not cleaned properly.
-
-
-
Restrooms
-
- Restrooms can provide sanitary facilities for your visitors and staff. - Restrooms can enhance the comfort and convenience of your visitors and staff. - Restrooms can come in different sizes, locations, and designs to suit your park theme.
-
- Restrooms can be expensive to build and maintain. - Restrooms can require plumbing, ventilation, cleaning, etc. - Restrooms can cause odor, pollution, or vandalism if not managed properly.
-
-
-
Souvenir shops
-
- Souvenir shops can provide souvenirs for your visitors and staff. - Souvenir shops can enhance the memory and loyalty of your visitors and staff. - Souvenir shops can come in different types, items, and prices to suit your park theme.
-
- Souvenir shops can be expensive to buy and install. - Souvenir shops can require inventory management, marketing, sales, etc. - Souvenir shops can cause waste, litter, or theft if not managed properly.
-
-
-
Conclusion
-
Animal Tycoon - Zoo Craft Game is a fun and creative game that lets you build and manage your own zoo. You can choose from hundreds of animals, habitats, decorations, and attractions to create your dream zoo. You can also download the mod apk version of Animal Tycoon - Zoo Craft Game to enjoy unlimited money, gems, animals, items, etc. You can also follow some tips and tricks to play the game well and achieve your goals. If you love animals and zoos, you should definitely try Animal Tycoon - Zoo Craft Game mod apk.
-
FAQs
-
Here are some frequently asked questions about Animal Tycoon - Zoo Craft Game mod apk:
-
Q: Is Animal Tycoon - Zoo Craft Game mod apk safe to download and install?
-
A: Yes, Animal Tycoon - Zoo Craft Game mod apk is safe to download and install if you find a reliable source for the mod apk file. You should also scan the mod apk file with an antivirus or anti-malware program before installing it on your device. You should also enable unknown sources on your device settings to allow the installation of apps from sources other than Google Play Store.
-
Q: Is Animal Tycoon - Zoo Craft Game mod apk compatible with my device?
-
A: Animal Tycoon - Zoo Craft Game mod apk is compatible with most Android devices that have Android 4.4 or higher. However, some devices may have different specifications or features that may affect the performance or compatibility of the game. You should check the compatibility of your device with the game before downloading and installing it. You can also contact the developer of the game for more information or support.
-
Q: How can I update Animal Tycoon - Zoo Craft Game mod apk?
-
A: Animal Tycoon - Zoo Craft Game mod apk may not update automatically like the original version of the game. You may have to download and install the latest version of the mod apk file manually whenever there is a new update available. You can check the website where you downloaded the mod apk file for any updates or notifications. You can also backup your game data before updating to avoid losing your progress or settings.
-
Q: How can I uninstall Animal Tycoon - Zoo Craft Game mod apk?
-
A: Animal Tycoon - Zoo Craft Game mod apk can be uninstalled like any other app on your device. You can go to your device settings > apps > find Animal Tycoon - Zoo Craft Game > tap on it > uninstall. You can also delete the mod apk file from your device storage if you don't need it anymore.
-
Q: Can I play Animal Tycoon - Zoo Craft Game mod apk with my friends?
-
A: Animal Tycoon - Zoo Craft Game mod apk does not have a multiplayer or online mode. You can only play the game solo on your device. However, you can still share your park creations and achievements with your friends through social media or screenshots. You can also compare your park ratings and rankings with other players around the world.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Fifa Street 4 PC Download - Enjoy Street Soccer in High Resolution.md b/spaces/1phancelerku/anime-remove-background/Fifa Street 4 PC Download - Enjoy Street Soccer in High Resolution.md
deleted file mode 100644
index 624a04cb1b4fd2f6eee733a3cea5d5434f3b8889..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Fifa Street 4 PC Download - Enjoy Street Soccer in High Resolution.md
+++ /dev/null
@@ -1,76 +0,0 @@
-
Download FIFA Street 4 PC: How to Play the Ultimate Street Soccer Game on Your Computer
-
If you are a fan of soccer games, you have probably heard of or played FIFA Street 4, also known as FIFA Street 2012. It is a spin-off of the popular FIFA series that focuses on street soccer, where you can showcase your skills, style, and creativity in various urban locations around the world. It features over 50 teams, over 35 locations, over 500 players, and a variety of modes, such as World Tour, Hit the Streets, Freestyle, Last Man Standing, Panna Rules, Futsal, and Custom Match.
FIFA Street 4 was originally released for PlayStation 3 and Xbox 360 in March 2012. It received positive reviews from critics and players alike, who praised its gameplay, graphics, sound, customization, and online features. It was also a commercial success, selling over one million copies worldwide.
-
But what if you want to play FIFA Street 4 on your PC? Unfortunately, there is no official PC version of the game. However, there are some ways to download and install FIFA Street 4 on your computer using different methods. In this article, we will show you how to do that step by step. We will also give you some tips and tricks on how to optimize your FIFA Street 4 experience on PC.
-
Requirements and Recommendations
-
Before you download and install FIFA Street 4 on your PC, you need to make sure that your computer meets the minimum and recommended system requirements for running the game smoothly. You also need to choose the best emulator for playing FIFA Street 4 on your PC. And finally, you need to decide which controller you want to use for enjoying FIFA Street 4 gameplay.
-
Minimum and Recommended System Requirements
-
The minimum and recommended system requirements for running FIFA Street 4 on your PC are as follows:
-
Minimum System Requirements:
-
CPU: Intel Core 2 Duo 2.0 GHz or higher
RAM: 2 GB
VIDEO CARD: DirectX 9.0c / Shader 3.0 or higher compatible, NVIDIA GeForce 8600 series or higher, ATI Radeon X1900 or higher, VRAM: 512 MB or higher
-
Recommended System Requirements:
-
CPU: Intel Core i5 or higher
RAM: 4 GB or more
VIDEO CARD: DirectX 11 or higher compatible, NVIDIA GeForce GTX 1050 or higher, AMD Radeon RX 560 or higher, VRAM: 2 GB or more
-
These system requirements are based on the ones for Street Fighter IV, which is a similar game in terms of graphics and gameplay. However, since FIFA Street 4 is not officially supported on PC, you may encounter some issues or errors depending on your hardware and software configuration.
Best Emulators for FIFA Street 4
An emulator is a piece of software that allows you to run games or applications designed for a different platform on your PC. For example, you can use an emulator to play PlayStation 3 or Xbox 360 games on your PC. There are many emulators available, but not all of them are compatible with FIFA Street 4. Here are some of the best emulators that can run FIFA Street 4 on PC:
-
RPCS3: This is an open-source emulator for PlayStation 3 games. It is one of the most advanced and stable emulators for PS3, and it can run many games at high resolution and frame rate. It also supports online features, custom settings, and controller mapping. You can download RPCS3 from its official website or from its GitHub page. You will also need the official PS3 system firmware and a FIFA Street 4 ISO file to run the game on RPCS3.
-
Xenia: This is an open-source emulator for Xbox 360 games. It is still in development and has some limitations and bugs, but it can run some games at decent performance and quality. It also supports custom settings and controller mapping. You can download Xenia from its official website or from its GitHub page. You will also need a FIFA Street 4 ISO file to run the game on Xenia.
-
PCSX2: This is an open-source emulator for PlayStation 2 games. It is one of the most popular and reliable emulators for PS2, and it can run many games at high resolution and frame rate. It also supports custom settings and controller mapping. You can download PCSX2 from its official website or from its GitHub page. You will need a PS2 BIOS file and an ISO of the original FIFA Street, since FIFA Street 4 itself was released only for PS3 and Xbox 360 and cannot run on a PS2 emulator.
-
Each emulator has its own advantages and disadvantages, so you may want to try them out and see which one works best for you. You can also find YouTube videos that show how to download, install, and configure each emulator for FIFA Street 4.
Best Controllers for FIFA Street 4
A controller is a device that lets you control the game using buttons, sticks, triggers, and vibration. A controller can enhance your FIFA Street 4 gameplay by giving you more precision, comfort, and feedback. There are many controllers available, but not all of them work well with FIFA Street 4 on PC. Here are some of the best controllers for playing FIFA Street 4 on PC:
-
Xbox One Controller: The official controller for Xbox One consoles. It is one of the most widely used and supported controllers for PC gaming, with native compatibility with Windows 10 and many games and emulators. It has a sleek design, an ergonomic grip, responsive buttons, analog sticks, triggers, bumpers, a D-pad, and vibration motors, plus a headphone jack and a micro USB port. You can connect it to your PC via USB cable or Bluetooth.
-
DualShock 4 Controller: The official controller for PlayStation 4 consoles. It is another popular and versatile controller for PC gaming, with native support in Steam and some games and emulators. It adds a touchpad, a light bar, a built-in speaker, and a headphone jack on top of the usual buttons, sticks, triggers, and vibration motors. You can connect it to your PC via USB cable or Bluetooth.
-
Logitech F310 Controller: A wired controller for PC gaming. It is one of the most affordable and reliable options, with native Windows compatibility. It has a classic design, a durable grip, responsive buttons, analog sticks, triggers, bumpers, a D-pad, a mode switch, a back button, and a start button (note that, unlike the F510 and F710, the F310 has no vibration motors). You can connect it to your PC via USB cable.
-
8BitDo SN30 Pro Controller: A wireless controller with a retro design. It works with Windows, Android, macOS, Steam, Switch, and Raspberry Pi, and it adds a turbo function, a screenshot button, a home button, and a star button to the usual buttons, sticks, triggers, and vibration motors. You can connect it to your PC via USB cable or Bluetooth.
-
Each controller has its own advantages and disadvantages, so you may want to try them out and see which one suits your preference and budget. You can also find YouTube videos that show how to connect and configure each controller for FIFA Street 4 on PC.
Methods and Steps
-
Now that you have checked the system requirements, chosen the emulator, and decided the controller for playing FIFA Street 4 on PC, you are ready to download and install the game on your computer. There are several methods to do that using different sources and options. In this section, we will show you how to download and install FIFA Street 4 on PC using five different methods:
-
How to download fifa street 4 pc free full version highly compressed
-Fifa street 4 pc performance test on rpcs3 ps3 emulator
-Fifa street 4 installer v.3.1 pc download from 4shared
-Fifa street 4 pc system requirements and installation guide
-Fifa street 4 pc gameplay and review
-Fifa street 4 pc cheats and tips
-Fifa street 4 pc best teams and players
-Fifa street 4 pc online multiplayer mode
-Fifa street 4 pc mods and patches
-Fifa street 4 pc vs ps3 vs xbox 360 comparison
-Fifa street 4 pc download torrent link
-Fifa street 4 pc crack and serial key
-Fifa street 4 pc controller support and settings
-Fifa street 4 pc graphics and sound quality
-Fifa street 4 pc features and modes
-Fifa street 4 pc download size and speed
-Fifa street 4 pc problems and solutions
-Fifa street 4 pc demo and trial version
-Fifa street 4 pc release date and price
-Fifa street 4 pc official website and forum
-Fifa street 4 pc screenshots and videos
-Fifa street 4 pc news and updates
-Fifa street 4 pc ratings and reviews
-Fifa street 4 pc download from steam or origin
-Fifa street 4 pc minimum and recommended specs
-Fifa street 4 pc world tour and tournaments
-Fifa street 4 pc legends and unlockables
-Fifa street 4 pc customizations and options
-Fifa street 4 pc tricks and skills
-Fifa street 4 pc fun and challenges
Method 1: Download FIFA Street 4 from Steam
-
Method 2: Download FIFA Street 4 from Origin
-
Method 3: Download FIFA Street 4 from a Torrent Site
-
Method 4: Download FIFA Street 4 from a ROM Site
-
Method 5: Download FIFA Street 4 from a Mod Site
Each method has its own pros and cons, so you may want to choose the one that works best for you. However, we recommend that you use the official and legal methods (Method 1 and Method 2) as much as possible to avoid any potential risks or issues.
-
Method 1: Download FIFA Street 4 from Steam
-
Steam is the most popular and convenient platform for downloading and playing PC games. It offers thousands of games across various genres and categories at reasonable prices, along with features such as cloud saving, achievements, friends, reviews, and community. You can download FIFA Street 4 from Steam by following these steps:
-
Step 1: Create a Steam account or log in to your existing account. You can do this by visiting the Steam website or by downloading and installing the Steam client on your PC.
-
Step 2: Search for FIFA Street 4 on the Steam store to find the game page. You will see the game details, screenshots, videos, reviews, and system requirements.
-
Step 3: Click on the "Add to Cart" button to purchase the game. You will need to pay $19.99 USD or the equivalent in your currency, using a payment method such as credit card, debit card, PayPal, Steam Wallet, or a gift card.
-
Step 4: After completing the payment, click on the "Library" tab in the Steam client. You will see FIFA Street 4 in your list of games. Click on it to start downloading and installing the game on your PC.
-
Step 5: Once the download and installation are finished, click on the "Play" button to launch the game. You will need to log in to your EA account or create a new one to access the online features of the game.
-
This method is the easiest and most convenient way to get FIFA Street 4 on PC. However, it also has some drawbacks:
-
You need a stable internet connection and enough disk space to download and install the game.
-
You need to pay for the game and agree to the terms and conditions of Steam and EA.
-
You need a compatible emulator and controller to play the game on PC.
-
You may encounter compatibility or performance issues depending on your system configuration and emulator settings.
Method 2: Download FIFA Street 4 from Origin
Origin is another popular and reliable platform for downloading and playing PC games. It is owned by EA, the publisher of FIFA Street 4, and offers many EA games at reasonable prices, along with features such as cloud saving, achievements, friends, reviews, and community. You can download FIFA Street 4 from Origin by following these steps:
-
Step 1: Create an Origin account or log in to your existing account. You can do this by visiting the Origin website or by downloading and installing the Origin client on your PC.
-
Step 2: Search for FIFA Street 4 on the Origin store to find the game page. You will see the game details, screenshots, videos, reviews, and system requirements.
-
Step 3: Click on the "Buy Now" button to purchase the game. You will need to pay $19.99 USD or the equivalent in your currency, using a payment method such as credit card, debit card, PayPal, Origin Wallet, or a gift card.
-
Step 4: After completing the payment, click on the "My Game Library" tab in the Origin client. You will see FIFA Street 4 in your list of games. Click on it to start downloading and installing the game on your PC.
-
Step 5: Once the download and installation are finished, click on the "Play" button to launch the game. You will need to log in to your EA account or create a new one to access the online features of the game.
-
This method is another official and reliable way to get FIFA Street 4 on PC, but it has the same drawbacks as the Steam method: you need a stable internet connection, enough disk space, a paid copy of the game, and a compatible emulator and controller, and you may still run into compatibility or performance issues.
Method 3: Download FIFA Street 4 from a Torrent Site
A torrent site is a website that allows you to download files from other users over a peer-to-peer network. A torrent file is a small file that contains information about the files you want to download, such as the name, size, type, and location, and a torrent client is the software that opens the torrent file and downloads the actual files from other peers. There are many torrent sites and clients available, but not all of them are safe and legal. Two of the best known are:
-
The Pirate Bay: One of the most popular and notorious torrent sites in the world. It offers millions of torrent files across various categories, including games, movies, music, and software, with a simple interface, a search engine, a comment section, and a rating system. You may need a proxy or a VPN service to access it.
-
uTorrent: One of the most widely used torrent clients for PC. It is a lightweight program that lets you download and manage torrent files, with support for magnet links, streaming, bandwidth control, encryption, and remote access. You can download it from its official website.
-
You can download FIFA Street 4 from a torrent site by following these steps:
-
Step 1: Open your web browser and go to The Pirate Bay, using a proxy or VPN service if necessary.
-
Step 2: Search for FIFA Street 4 and open the game page to see the details, comments, and ratings.
-
Step 3: Click on the "Get this torrent" button to download the torrent file of FIFA Street 4 and choose where to save it on your PC.
-
Step 4: Open your torrent client and add the torrent file. You will see the game files to download, such as the ISO file, the crack file, and the readme file.
-
Step 5: Start downloading the game files. You will need a stable internet connection and enough disk space.
-
Step 6: Once the download is finished, open the ISO file of FIFA Street 4 with a program such as WinRAR or Daemon Tools. You will see the game folder containing the setup file, the crack file, and the readme file.
-
Step 7: Run the setup file and follow the instructions to install the game, choosing an installation location on your PC.
-
Step 8: Copy the crack file from the ISO and paste it into the folder where you installed the game, overwriting the original game file.
-
Step 9: Run the game from the installation folder. You will need to log in to your EA account or create a new one to access the online features of the game.
-
This method is a risky way to get FIFA Street 4 on PC for free, and it has serious drawbacks:
-
You may run into legal or ethical issues for downloading and using pirated or cracked games.
-
You may expose your PC to viruses, malware, or spyware that can harm your system or data.
-
You may face compatibility or performance issues depending on your system configuration and emulator settings.
Method 4: Download FIFA Street 4 from a ROM Site
A ROM site is a website that offers ROM files of games designed for other platforms. A ROM file contains the data of a game in a form that an emulator can read. There are many ROM sites available, but not all of them are safe and legal. Two well-known examples are CoolROM and EmuParadise, which host ROM files for platforms such as PlayStation 3, Xbox 360, and PlayStation 2, with simple interfaces, search engines, comment sections, and rating systems. You may need a proxy or a VPN service to access them.
-
You can download FIFA Street 4 from a ROM site by following these steps:
-
Step 1: Open your web browser and go to the CoolROM or EmuParadise website, using a proxy or VPN service if necessary.
-
Step 2: Search for FIFA Street 4 and open the game page to see the details, comments, and ratings.
-
Step 3: Click on the "Download Now" button to download the ROM file of FIFA Street 4 and choose where to save it on your PC.
-
Step 4: Open your emulator and load the ROM file.
-
Step 5: Start playing the game from your emulator. You will need to log in to your EA account or create a new one to access the online features of the game.
-
This method is similar to the torrent method but somewhat safer. It still has drawbacks: you need a stable internet connection and enough disk space, you may run into legal or ethical issues for downloading ROM files of games you do not own, you may expose your PC to malware, and you may face compatibility or performance issues depending on your system and emulator settings.
Method 5: Download FIFA Street 4 from a Mod Site
- A mod site is a website that lets you download mod files: game files that have been modified or enhanced by other users and that an emulator or other software can load. There are many mod sites for different platforms, but not all of them are safe or legal. Here are two of the best-known mod sites for finding FIFA Street 4 content:
- Mod Site: Nexus Mods: one of the most popular and trusted mod sites. It offers thousands of mod files across platforms and genres, including PC, PlayStation, Xbox, and Nintendo, with a simple interface, a search engine, comments, and ratings. You can reach Nexus Mods through its official website or via a proxy or VPN service.
- Mod Site: Mod DB: another popular and reliable mod site with thousands of mod files across the same platforms and genres, and a similarly simple interface with search, comments, and ratings. You can reach Mod DB through its official website or via a proxy or VPN service.
You can download FIFA Street 4 from a mod site by following these steps:
- Step 1: Open your web browser and go to the Nexus Mods or Mod DB website, or use a proxy or VPN service to reach it.
- Step 2: Search for FIFA Street 4 in the search bar, or follow the direct link for Nexus Mods or Mod DB to the game page, where you will see the game details, screenshots, videos, comments, and ratings.
- Step 3: Click the "Download" button to download the mod file of FIFA Street 4 and choose where to save it on your PC.
- Step 4: Open your emulator or other software and add the mod file. You will see the game files, such as the ISO file and the readme file.
- Step 5: Start playing the game in your emulator or software. You will need to log in to your EA account, or create one, to access the online features.
This method is a creative and fun way to get FIFA Street 4 on PC with extra features. However, it also has some drawbacks:
- You need a stable internet connection and enough disk space to download and play the game files.
- You may encounter legal or ethical issues for downloading and using mod files that are not authorized or approved by the original developers or publishers.
- You may expose your PC to viruses, malware, or spyware that can harm your system or data.
- You may face compatibility or performance issues depending on your system configuration and emulator or software settings.
Tips and Tricks
-
Now that you have downloaded and installed FIFA Street 4 on your PC using one of the methods above, you are ready to enjoy the ultimate street soccer game on your computer. However, you may want to optimize your FIFA Street 4 experience on PC by adjusting some settings and customizing some features. In this section, we will give you some tips and tricks on how to do that:
- How to Configure Your Emulator Settings for FIFA Street 4
- How to Customize Your Controller Settings for FIFA Street 4
- How to Access the Online Features of FIFA Street 4 on PC
How to Configure Your Emulator Settings for FIFA Street 4
-
An emulator is software that runs games or applications designed for a different platform on your PC; for example, you can use an emulator to play PlayStation 3 or Xbox 360 games on your computer. However, an emulator may not run a game perfectly by default, and you may need to configure some settings to improve the game's graphics, audio, input, and performance. Here is how to configure your emulator settings for FIFA Street 4:
- Step 1: Open your emulator and go to the settings menu. You will see options and tabs for various settings, such as graphics, audio, input, and performance.
- Step 2: Adjust the graphics settings to your preference and your system's capability. You can change the resolution, aspect ratio, frame rate, anti-aliasing, texture filtering, shaders, and more, and enable or disable fullscreen mode, vsync, and windowed mode. Higher graphics settings make the game look better but are more demanding on your system.
- Step 3: Adjust the audio settings to your preference and your system's capability. You can change the volume, output device, sample rate, latency, and more, and enable or disable sound effects, music, voice, and subtitles. Higher audio settings make the game sound better but are more demanding on your system.
- Step 4: Adjust the input settings to your preference and controller type. You can map your controller's buttons, sticks, triggers, and vibration to the game controls, and enable or disable analog mode, dead zone, sensitivity, and rumble. More accurate input settings make the game respond better but take longer to set up.
- Step 5: Adjust the performance settings to your preference and your system's capability. You can change the emulation speed, CPU mode, GPU mode, cache size, and more, and enable or disable hacks, cheats, patches, and logs. Higher performance settings make the game run faster but can make it less stable.
You can also use presets or profiles already configured for FIFA Street 4 by other users or developers; you can find them on the emulator's website, forum, or wiki. You can also save your own settings and load them later.
How to Customize Your Controller Settings for FIFA Street 4
-
A controller is a device that allows you to control the game using buttons, sticks, triggers, and vibration. A controller can enhance your FIFA Street 4 gameplay by giving you more precision, comfort, and feedback. However, a controller may not work well with the game by default, and you may need to customize some settings to improve the mapping, sensitivity, and vibration of the controller. Here are some steps on how to customize your controller settings for FIFA Street 4:
- Step 1: Open your emulator and go to the input menu. You will see options and tabs for various input devices, such as keyboard, mouse, joystick, and gamepad.
- Step 2: Choose the input device you want to use for FIFA Street 4. A keyboard and mouse work, but we recommend a controller for better gameplay.
- Step 3: Configure your controller's mapping to your preference and the game controls. You can assign the buttons, sticks, triggers, and vibration of your controller to the game's actions, movements, skills, and feedback, and enable or disable analog mode, dead zone, sensitivity, and rumble.
- Step 4: Test your controller settings in a practice match or a tutorial in FIFA Street 4. Check that the controller works properly and that you are comfortable with the settings, adjusting them as you play until you find the best configuration for you.
- Step 5: Save your controller settings so you can load them later. You can also use presets or profiles already configured for FIFA Street 4 by other users or developers, available on the emulator's website, forum, or wiki.
How to Access the Online Features of FIFA Street 4 on PC
-
One of the best features of FIFA Street 4 is its online mode, where you can connect to the EA servers, play online matches, join tournaments, and unlock rewards. However, accessing the online features of FIFA Street 4 on PC may not be easy or possible depending on your method and emulator. Here are some tips on how to access the online features of FIFA Street 4 on PC:
- Tip 1: Use an official and legal method (Method 1 or Method 2) to download and install FIFA Street 4 on PC. This gives you a valid copy of the game and lets you log in to your EA account without issues.
- Tip 2: Use a compatible and stable emulator (RPCS3 or Xenia) to play FIFA Street 4 on PC, so the game runs smoothly and can connect to the EA servers without errors.
- Tip 3: Use a reliable and secure internet connection, so you can play online matches without lag or disconnections.
- Tip 4: Use a VPN service to bypass any regional or network restrictions that may block the game's online features.
- Tip 5: Use a mod file or a patch file, which may unlock extra features or fix bugs that affect the game's online mode.
Conclusion
-
FIFA Street 4 is one of the best soccer games ever made. It offers a unique and exciting street soccer experience that showcases your skills, style, and creativity in various urban locations around the world. It features over 50 teams, over 35 locations, over 500 players, and a variety of modes, such as World Tour, Hit the Streets, Freestyle, Last Man Standing, Panna Rules, Futsal, and Custom Match.
-
However, if you want to play FIFA Street 4 on your PC, you may face some challenges, as there is no official PC version of the game. But don't worry, we have got you covered. In this article, we have shown you how to download and install FIFA Street 4 on PC using five different methods:
- Method 1: Download FIFA Street 4 from Steam
- Method 2: Download FIFA Street 4 from Origin
- Method 3: Download FIFA Street 4 from a Torrent Site
- Method 4: Download FIFA Street 4 from a ROM Site
- Method 5: Download FIFA Street 4 from a Mod Site
We have also given you some tips and tricks on how to optimize your FIFA Street 4 experience on PC by configuring your emulator settings, customizing your controller settings, and accessing the online features of the game.
-
We hope that this article has helped you to play FIFA Street 4 on PC and enjoy the ultimate street soccer game on your computer. If you have any questions or feedback, please feel free to leave a comment below. And if you liked this article, please share it with your friends and family who are also fans of soccer games.
-
FAQs
-
Here are some of the frequently asked questions about FIFA Street 4 on PC:
-
Q: Is FIFA Street 4 available for PC?
-
A: No, FIFA Street 4 is not officially available for PC. It was only released for PlayStation 3 and Xbox 360 in March 2012. However, you can use some methods and emulators to play FIFA Street 4 on PC.
-
Q: What is the best method to download and install FIFA Street 4 on PC?
-
A: The best method to download and install FIFA Street 4 on PC depends on your preference and situation. However, we recommend that you use the official and legal methods (Method 1 and Method 2) as much as possible to avoid any potential risks or issues.
-
Q: What is the best emulator to play FIFA Street 4 on PC?
-
A: The best emulator to play FIFA Street 4 on PC depends on your system configuration and performance. However, we recommend that you use RPCS3 or Xenia as they are the most compatible and stable emulators for PlayStation 3 and Xbox 360 games.
-
Q: What is the best controller to play FIFA Street 4 on PC?
-
A: The best controller to play FIFA Street 4 on PC depends on your preference and budget. However, we recommend that you use Xbox One Controller or DualShock 4 Controller as they are the most widely used and supported controllers for PC gaming.
-
Q: How can I access the online features of FIFA Street 4 on PC?
-
A: To access the online features of FIFA Street 4 on PC: use an official and legal method (Method 1 or Method 2) to download and install the game; use a compatible and stable emulator (RPCS3 or Xenia) to play it; use a reliable and secure internet connection to reach the EA servers; use a VPN service to bypass any regional or network restrictions; and, if needed, use a mod file or a patch file to unlock extra features or fix bugs.
-
-
\ No newline at end of file
diff --git a/spaces/3B-Group/ConvRe-Leaderboard/app.py b/spaces/3B-Group/ConvRe-Leaderboard/app.py
deleted file mode 100644
index 4016501d624e0225c6fe4d30053b6bb9e0206456..0000000000000000000000000000000000000000
--- a/spaces/3B-Group/ConvRe-Leaderboard/app.py
+++ /dev/null
@@ -1,237 +0,0 @@
-import gradio as gr
-import pandas as pd
-
-from src.css_html import custom_css
-from src.utils import (
- AutoEvalColumn,
- fields,
- make_clickable_names,
- make_plot_data
-)
-from src.demo import (
- generate,
- random_examples,
- return_ground_truth,
-)
-
-
-DEFAULT_SYSTEM_PROMPT = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
-MAX_MAX_NEW_TOKENS = 1024
-DEFAULT_MAX_NEW_TOKENS = 512
-
-
-df = pd.read_csv("data/eval_board.csv")
-
-COLS = [c.name for c in fields(AutoEvalColumn)]
-TYPES = [c.type for c in fields(AutoEvalColumn)]
-COLS_LITE = [c.name for c in fields(AutoEvalColumn) if c.displayed_by_default and not c.hidden]
-TYPES_LITE = [c.type for c in fields(AutoEvalColumn) if c.displayed_by_default and not c.hidden]
-
-
-def add_new_eval(
- model: str,
- re2text_easy_precision: str,
- re2text_hard_precision: str,
- text2re_easy_precision: str,
- text2re_hard_precision: str,
- links: str,
-):
- print("adding new eval")
-
- eval_entry = {
- "model": model,
- "re2text_easy": re2text_easy_precision,
- "re2text_hard": re2text_hard_precision,
- "text2re_easy": text2re_easy_precision,
- "text2re_hard": text2re_hard_precision,
- "link": links
- }
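- # NOTE: as written, this function only builds eval_entry and discards it;
- # persisting the entry (e.g. appending it to data/eval_board.csv) is not implemented here.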
-
-
-
-def select_columns(df, columns):
- always_here_cols = [
- AutoEvalColumn.model.name
- ]
- # We use COLS to maintain sorting
- filtered_df = df[
- always_here_cols + [c for c in COLS if c in df.columns and c in columns]
- ]
- return filtered_df
-
-
-df["pure_name"] = df['Models']
-df = make_clickable_names(df)
-demo = gr.Blocks(css=custom_css)
-
-with demo:
- with gr.Row():
- gr.Markdown(
- """
-
-🤖 ConvRe 🤯 Leaderboard
-
-
-""",
- elem_classes="markdown-text",
- )
-
- gr.Markdown("""🤖**ConvRe**🤯 is the benchmark proposed in our EMNLP 2023 main conference paper: [An Investigation of LLMs’ Inefficacy in Understanding Converse Relations](https://arxiv.org/abs/2310.05163).
-It aims to evaluate LLMs' ability to understand converse relations.
-Converse relation is defined as the opposite of semantic relation while keeping the surface form of the triple unchanged.
-For example, the triple `(x, has part, y)` is interpreted as "x has a part called y" in normal relation, while "y has a part called x" in converse relation 🔁.
-
-The experiments in our paper suggested that LLMs often resort to shortcut learning (or superficial correlations) and still face challenges on our 🤖ConvRe🤯 benchmark even for powerful models like GPT-4.
- """, elem_classes="markdown-text")
-
- with gr.Tabs(elem_classes="tab-buttons") as tabs:
- with gr.TabItem("🔢 Data", id=0):
- with gr.Accordion("➡️ See All Columns", open=False):
- shown_columns = gr.CheckboxGroup(
- choices=[
- c for c in COLS if c not in [AutoEvalColumn.model.name]
- ],
- value=[
- c for c in COLS_LITE if c not in [AutoEvalColumn.model.name]
- ],
- label="",
- elem_id="column-select",
- interactive=True
- )
- leaderboard_df_re2text = gr.components.Dataframe(
- value=df[
- [
- AutoEvalColumn.model.name,
- ] + shown_columns.value
- ],
- headers=[
- AutoEvalColumn.model.name,
- ] + shown_columns.value,
- datatype=TYPES,
- elem_id="leaderboard-table",
- interactive=False,
- )
-
- hidden_leaderboard_df_re2text = gr.components.DataFrame(
- value=df,
- headers=COLS,
- datatype=["str" for _ in range(len(COLS))],
- visible=False,
- )
-
- shown_columns.change(
- select_columns,
- [hidden_leaderboard_df_re2text, shown_columns],
- leaderboard_df_re2text
- )
-
- with gr.TabItem("📊 Plot", id=1):
- with gr.Row():
- with gr.Column():
- gr.LinePlot(
- make_plot_data(df, task="Re2Text"),
- x="Setting",
- y="Accuracy",
- color="Symbol",
- title="Re2Text",
- y_lim=[0, 100],
- x_label_angle=0,
- height=400,
- width=500,
- )
-
- with gr.Column():
- gr.LinePlot(
- make_plot_data(df, task="Text2Re"),
- x="Setting",
- y="Accuracy",
- color="Symbol",
- title="Text2Re",
- y_lim=[0, 100],
- x_label_angle=0,
- height=400,
- width=500,
- )
-
- with gr.TabItem("Submit results 🚀", id=3):
- gr.Markdown("""
-
-Coming Soon ❤️
-
-
-""")
-
- with gr.Column():
- gr.Markdown(
- """
🤖ConvRe🤯 Demo (Llama-2-Chat-7B🦙)
\
- \
- """,
- elem_classes="markdown-text",
- )
-
- output_box = gr.Textbox(lines=10, max_lines=10, label="Llama-2-Chat-7B Answer", interactive=False)
-
- input_box = gr.Textbox(lines=12, max_lines=12, label="User Input")
-
- ground_truth_display = gr.Textbox("", lines=1, max_lines=1, label="😊Correct Answer😊", interactive=False)
-
- with gr.Column():
-
-
- with gr.Accordion("Additional Inputs", open=False):
- sys_prompt = gr.Textbox(label="System prompt", value=DEFAULT_SYSTEM_PROMPT, lines=6)
-
- max_new_tokens=gr.Slider(
- label="Max new tokens",
- minimum=1,
- maximum=MAX_MAX_NEW_TOKENS,
- step=1,
- value=DEFAULT_MAX_NEW_TOKENS,
- )
-
- temperature = gr.Slider(
- label="Temperature",
- minimum=0.1,
- maximum=4.0,
- step=0.1,
- value=0.1,
- )
-
-
- with gr.Row():
- re2text_easy_btn = gr.Button("Random Re2Text Easy Example 😄")
- re2text_easy_btn.click(
- fn=random_examples,
- inputs=gr.Text("re2text-easy", visible=False),
- outputs = input_box,
- )
-
- re2text_hard_btn = gr.Button("Random Re2Text Hard Example 🤯")
- re2text_hard_btn.click(
- fn=random_examples,
- inputs=gr.Text("re2text-hard", visible=False),
- outputs=input_box,
- )
-
- text2re_easy_btn = gr.Button("Random Text2Re Easy Example 😄")
- text2re_easy_btn.click(
- fn=random_examples,
- inputs=gr.Text("text2re-easy", visible=False),
- outputs = input_box,
- )
-
- text2re_hard_btn = gr.Button("Random Text2Re Hard Example 🤯")
- text2re_hard_btn.click(
- fn=random_examples,
- inputs=gr.Text("text2re-hard", visible=False),
- outputs = input_box,
- )
-
- with gr.Row():
- gr.ClearButton([input_box, output_box])
- submit_btn = gr.Button("Submit🏃")
- submit_btn.click(generate, inputs=[input_box, sys_prompt, temperature, max_new_tokens], outputs=[output_box])
-
- answer_btn = gr.Button("Answer🤔")
- answer_btn.click(return_ground_truth, inputs=[], outputs=[ground_truth_display])
-
-
-demo.queue(max_size=32).launch()  # .queue() already enables queueing; the deprecated enable_queue flag is unnecessary
\ No newline at end of file
diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_verification_dataset.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_verification_dataset.py
deleted file mode 100644
index cecd8ed8ac100b80d5087fa47f22f92c84fea032..0000000000000000000000000000000000000000
--- a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker_verification_dataset.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from speaker_encoder.data_objects.random_cycler import RandomCycler
-from speaker_encoder.data_objects.speaker_batch import SpeakerBatch
-from speaker_encoder.data_objects.speaker import Speaker
-from speaker_encoder.params_data import partials_n_frames
-from torch.utils.data import Dataset, DataLoader
-from pathlib import Path
-
-# TODO: improve with a pool of speakers for data efficiency
-
-class SpeakerVerificationDataset(Dataset):
- def __init__(self, datasets_root: Path):
- self.root = datasets_root
- speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()]
- if len(speaker_dirs) == 0:
- raise Exception("No speakers found. Make sure you are pointing to the directory "
- "containing all preprocessed speaker directories.")
- self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs]
- self.speaker_cycler = RandomCycler(self.speakers)
-
- def __len__(self):
- return int(1e10)
-
- def __getitem__(self, index):
- return next(self.speaker_cycler)
-
- def get_logs(self):
- log_string = ""
- for log_fpath in self.root.glob("*.txt"):
- with log_fpath.open("r") as log_file:
- log_string += "".join(log_file.readlines())
- return log_string
-
-
-class SpeakerVerificationDataLoader(DataLoader):
- def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None,
- batch_sampler=None, num_workers=0, pin_memory=False, timeout=0,
- worker_init_fn=None):
- self.utterances_per_speaker = utterances_per_speaker
-
- super().__init__(
- dataset=dataset,
- batch_size=speakers_per_batch,
- shuffle=False,
- sampler=sampler,
- batch_sampler=batch_sampler,
- num_workers=num_workers,
- collate_fn=self.collate,
- pin_memory=pin_memory,
- drop_last=False,
- timeout=timeout,
- worker_init_fn=worker_init_fn
- )
-
- def collate(self, speakers):
- return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
-
\ No newline at end of file
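For orientation, here is a minimal usage sketch of the two classes above. The dataset root is a hypothetical path; it must contain one preprocessed sub-directory per speaker, and the batch shape is an assumption based on how `SpeakerBatch` is typically built:

```python
from pathlib import Path

dataset = SpeakerVerificationDataset(Path("data/encoder_preprocessed"))  # hypothetical root
loader = SpeakerVerificationDataLoader(
    dataset,
    speakers_per_batch=4,
    utterances_per_speaker=5,
)

batch = next(iter(loader))  # a SpeakerBatch of 4 speakers x 5 partial utterances
print(batch.data.shape)     # assumed: (20, partials_n_frames, n_mels)
```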
diff --git a/spaces/801artistry/RVC801/LazyImport.py b/spaces/801artistry/RVC801/LazyImport.py
deleted file mode 100644
index 5bdb05ddd5a546a43adba7274b4c3465bb77f2f5..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/LazyImport.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from importlib.util import find_spec, LazyLoader, module_from_spec
-from sys import modules
-
-def lazyload(name):
- if name in modules:
- return modules[name]
- else:
- spec = find_spec(name)
- loader = LazyLoader(spec.loader)
- spec.loader = loader  # per the importlib docs recipe, record the lazy loader on the spec
- module = module_from_spec(spec)
- modules[name] = module
- loader.exec_module(module)
- return module
\ No newline at end of file
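A quick usage sketch of `lazyload`: the returned module object is registered in `sys.modules` immediately, but the real import work is deferred until the first attribute access (assuming the named package, here numpy, is installed):

```python
np = lazyload("numpy")  # cheap: nothing is executed yet
print(np.pi)            # numpy is actually imported here, on first attribute access
```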
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/docs/faq.md b/spaces/AI-Hobbyist/Hoyo-RVC/docs/faq.md
deleted file mode 100644
index 74eff82d9e4f96f50ad0aed628c253d08e16a426..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/docs/faq.md
+++ /dev/null
@@ -1,89 +0,0 @@
-## Q1: ffmpeg error / utf8 error.
-
-Most likely this is not an ffmpeg problem but an audio path problem;
-ffmpeg can fail when reading paths that contain spaces, parentheses, or other special characters, and a utf8 error can occur when training-set audio paths contain Chinese characters at the moment filelist.txt is written.
-
-## Q2: No index file after one-click training
-
-If "Training is done. The program is closed." is displayed, the model trained successfully; the errors printed immediately after it are spurious.
-
-If no index file whose name starts with "added" exists after one-click training, it may be that the training set is too large and the index-building step stalled; the excessive memory demand of building the index in one pass has been solved by building it in batches. As a temporary workaround, try clicking the "Train index" button again.
-
-## Q3: The training set's timbre does not appear at inference after training
-Click "Refresh timbre list" and look again; if it is still missing, check whether training errored out. Screenshots of the console and the WebUI, plus the log under logs/experiment_name, can all be sent to the developers for inspection.
-
-## Q4: How to share a model
- The pth files stored under rvc_root/logs/experiment_name are not meant for sharing or inference; they store the experiment state for reproducibility and for resuming training. The model to share is the 60+ MB pth file in the weights folder;
- In the future, weights/exp_name.pth and logs/exp_name/added_xxx.index will be bundled into a single weights/exp_name.zip so the index no longer has to be entered manually; share the zip, not the pth, unless you want to continue training on a different machine;
- If you copy/share the several-hundred-MB pth files from the logs folder into the weights folder and force them to be used for inference, you may get errors about missing keys such as f0 and tgt_sr. You need to use the options at the bottom of the ckpt tab to choose, manually or automatically (it is automatic if the information can be found under the local logs folder), whether to include pitch and the target audio sample rate, and then extract the small ckpt model (enter the path of the file whose name starts with G). After extraction, a 60+ MB pth file appears in the weights folder, and after refreshing the timbre list you can select and use it.
-
-## Q5: Connection Error.
-You may have closed the console (the black window).
-
-## Q6: The WebUI pops up "Expecting value: line 1 column 1 (char 0)".
-Please disable the system LAN proxy / global proxy.
-
-This concerns not only the client-side proxy but also the server-side one (for example, if you set http_proxy and https_proxy on autodl for academic acceleration, you also need to unset them while using the WebUI).
-
-## Q7: How to train and infer from the command line, without the WebUI
-Training script:
-Run the WebUI once first; its message window prints the command lines used for dataset processing and training.
-
-Inference script:
-https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/myinfer.py
-
-Example:
-
-runtime\python.exe myinfer.py 0 "E:\codes\py39\RVC-beta\todo-songs\1111.wav" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "test.wav" "weights/mi-test.pth" 0.6 cuda:0 True
-
-f0up_key=sys.argv[1]
-input_path=sys.argv[2]
-index_path=sys.argv[3]
-f0method=sys.argv[4]  # harvest or pm
-opt_path=sys.argv[5]
-model_path=sys.argv[6]
-index_rate=float(sys.argv[7])
-device=sys.argv[8]
-is_half=bool(sys.argv[9])
-
-## Q8: Cuda error / Cuda out of memory.
-There is a small chance it is a CUDA configuration problem or an unsupported device; most likely there is simply not enough VRAM (out of memory).
-
-For training, reduce the batch size (if batch size 1 is still not enough, you can only switch to a GPU with more VRAM); for inference, reduce x_pad, x_query, x_center, and x_max at the end of config.py as needed. Cards with less than 4 GB of VRAM (e.g. the 1060 3GB and various 2 GB cards) are a lost cause; 4 GB cards can still be saved.
-
-## Q9: What is a good value for total_epoch?
-
-If the training set has poor audio quality and a high noise floor, 20-30 is enough; setting it higher will not let the base model's audio quality lift your low-quality training set.
-If the training set is high quality, low noise, and long, you can raise it; 200 is fine (training is fast, and if you can prepare a high-quality training set your GPU is presumably decent, so a bit of extra training time is hardly a concern).
-
-## Q10: How much training data is needed?
- 10 to 50 minutes is recommended.
- Given high audio quality and a low noise floor, more is better as long as the timbre is consistent and distinctive.
- For a high-grade training set (trimmed + distinctive timbre), 5 to 10 minutes is also fine; the repository author himself often trains this way.
- Some people have trained successfully with 1-2 minutes of data, but that success is not reproducible by others and has little reference value. It requires a training set with a very distinctive timbre (say, an airy high-frequency girlish voice) and high audio quality;
- No one has yet been seen to try (and succeed) with under 1 minute of data, and this is not recommended.
-
-## Q11: What is the index rate for, and how should it be set?
- If the base model and the inference source have better audio quality than the training set, they can raise the audio quality of the result, but at the cost of the timbre drifting toward that of the base model / inference source. This is called "timbre leakage";
- The index rate reduces/resolves timbre leakage. At 1, there is theoretically no leakage from the inference source and the timbre fully favors the training set; but if the training set has lower audio quality than the inference source, a high index rate may lower the audio quality. At 0, retrieval blending is not used to protect the training-set timbre at all;
- If the training set is high quality and long, raise total_epoch: the model then refers little to the inference source or base model, "timbre leakage" rarely occurs, the index_rate matters little, and you may even skip building/sharing the index file.
-
-## Q12: How to choose the GPU for inference
-In config.py, set the card number after device cuda:;
-The mapping between card numbers and GPUs is shown in the GPU information panel of the training tab.
-
-## Q13: How to run inference with a pth saved mid-training
-Extract the small model via the options at the bottom of the ckpt tab.
-
-
-## Q14: How to stop and resume training
-At this stage you can only close the WebUI console and double-click go-web.bat to restart the program, re-entering the web page parameters;
-To resume, click "Train model" with the same web page parameters, and training will continue from the last checkpoint.
-
-## Q15: Page file / memory errors during training
-Too many processes were spawned and memory blew up. You can try the following:
-1. Reduce "Number of CPU processes used for pitch extraction and data processing";
-2. Manually cut the training-set audio into shorter clips.
-
-
-
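For readers who prefer scripting the Q7 example, here is a minimal Python launcher that shells out to myinfer.py. The paths are the FAQ's own example values, not real files; replace them with yours:

```python
import subprocess

# Positional arguments of myinfer.py, in the order the script reads them
# (see the sys.argv listing in Q7 above).
cmd = [
    r"runtime\python.exe", "myinfer.py",
    "0",                                                             # f0up_key
    r"E:\codes\py39\RVC-beta\todo-songs\1111.wav",                   # input_path
    r"E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index",  # index_path
    "harvest",                                                       # f0method (harvest or pm)
    "test.wav",                                                      # opt_path
    "weights/mi-test.pth",                                           # model_path
    "0.6",                                                           # index_rate
    "cuda:0",                                                        # device
    "True",                                                          # is_half
]
subprocess.run(cmd, check=True)
```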
diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_meshes.py b/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_meshes.py
deleted file mode 100644
index 7070b01171c97069fa013c6eba8eee217017f08e..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_meshes.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import numpy as np
-import pytest
-import trimesh
-
-from pyrender import (Mesh, Primitive)
-
-
-def test_meshes():
-
- with pytest.raises(TypeError):
- x = Mesh()
- with pytest.raises(TypeError):
- x = Primitive()
- with pytest.raises(ValueError):
- x = Primitive([], mode=10)
-
- # Basics
- x = Mesh([])
- assert x.name is None
- assert x.is_visible
- assert x.weights is None
-
- x.name = 'str'
-
- # From Trimesh
- x = Mesh.from_trimesh(trimesh.creation.box())
- assert isinstance(x, Mesh)
- assert len(x.primitives) == 1
- assert x.is_visible
- assert np.allclose(x.bounds, np.array([
- [-0.5, -0.5, -0.5],
- [0.5, 0.5, 0.5]
- ]))
- assert np.allclose(x.centroid, np.zeros(3))
- assert np.allclose(x.extents, np.ones(3))
- assert np.allclose(x.scale, np.sqrt(3))
- assert not x.is_transparent
-
- # Test some primitive functions
- x = x.primitives[0]
- with pytest.raises(ValueError):
- x.normals = np.zeros(10)
- with pytest.raises(ValueError):
- x.tangents = np.zeros(10)
- with pytest.raises(ValueError):
- x.texcoord_0 = np.zeros(10)
- with pytest.raises(ValueError):
- x.texcoord_1 = np.zeros(10)
- with pytest.raises(TypeError):
- x.material = np.zeros(10)
- assert x.targets is None
- assert np.allclose(x.bounds, np.array([
- [-0.5, -0.5, -0.5],
- [0.5, 0.5, 0.5]
- ]))
- assert np.allclose(x.centroid, np.zeros(3))
- assert np.allclose(x.extents, np.ones(3))
- assert np.allclose(x.scale, np.sqrt(3))
- x.material.baseColorFactor = np.array([0.0, 0.0, 0.0, 0.0])
- assert x.is_transparent
-
- # From two trimeshes
- x = Mesh.from_trimesh([trimesh.creation.box(),
- trimesh.creation.cylinder(radius=0.1, height=2.0)],
- smooth=False)
- assert isinstance(x, Mesh)
- assert len(x.primitives) == 2
- assert x.is_visible
- assert np.allclose(x.bounds, np.array([
- [-0.5, -0.5, -1.0],
- [0.5, 0.5, 1.0]
- ]))
- assert np.allclose(x.centroid, np.zeros(3))
- assert np.allclose(x.extents, [1.0, 1.0, 2.0])
- assert np.allclose(x.scale, np.sqrt(6))
- assert not x.is_transparent
-
- # From bad data
- with pytest.raises(TypeError):
- x = Mesh.from_trimesh(None)
-
- # With instancing
- poses = np.tile(np.eye(4), (5,1,1))
- poses[:,0,3] = np.array([0,1,2,3,4])
- x = Mesh.from_trimesh(trimesh.creation.box(), poses=poses)
- assert np.allclose(x.bounds, np.array([
- [-0.5, -0.5, -0.5],
- [4.5, 0.5, 0.5]
- ]))
- poses = np.eye(4)
- x = Mesh.from_trimesh(trimesh.creation.box(), poses=poses)
- poses = np.eye(3)
- with pytest.raises(ValueError):
- x = Mesh.from_trimesh(trimesh.creation.box(), poses=poses)
-
- # From textured meshes
- fm = trimesh.load('tests/data/fuze.obj')
- x = Mesh.from_trimesh(fm)
- assert isinstance(x, Mesh)
- assert len(x.primitives) == 1
- assert x.is_visible
- assert not x.is_transparent
- assert x.primitives[0].material.baseColorTexture is not None
-
- x = Mesh.from_trimesh(fm, smooth=False)
- fm.visual = fm.visual.to_color()
- fm.visual.face_colors = np.array([1.0, 0.0, 0.0, 1.0])
- x = Mesh.from_trimesh(fm, smooth=False)
- with pytest.raises(ValueError):
- x = Mesh.from_trimesh(fm, smooth=True)
-
- fm.visual.vertex_colors = np.array([1.0, 0.0, 0.0, 0.5])
- x = Mesh.from_trimesh(fm, smooth=False)
- x = Mesh.from_trimesh(fm, smooth=True)
- assert x.primitives[0].color_0 is not None
- assert x.is_transparent
-
- bm = trimesh.load('tests/data/WaterBottle.glb').dump()[0]
- x = Mesh.from_trimesh(bm)
- assert x.primitives[0].material.baseColorTexture is not None
- assert x.primitives[0].material.emissiveTexture is not None
- assert x.primitives[0].material.metallicRoughnessTexture is not None
-
- # From point cloud
- x = Mesh.from_points(fm.vertices)
-
-# def test_duck():
-# bm = trimesh.load('tests/data/Duck.glb').dump()[0]
-# x = Mesh.from_trimesh(bm)
-# assert x.primitives[0].material.baseColorTexture is not None
-# pixel = x.primitives[0].material.baseColorTexture.source[100, 100]
-# yellowish = np.array([1.0, 0.7411765, 0.0, 1.0])
-# assert np.allclose(pixel, yellowish)
diff --git a/spaces/Abhay1210/prompt-generator_V1/app.py b/spaces/Abhay1210/prompt-generator_V1/app.py
deleted file mode 100644
index 736b3892146a2e0b94d03891a7d2d1edec33791a..0000000000000000000000000000000000000000
--- a/spaces/Abhay1210/prompt-generator_V1/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-import gradio as gr
-
-tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompts-bart-long")
-model = AutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompts-bart-long", from_tf=True)
-
-def generate(prompt):
-
- batch = tokenizer(prompt, return_tensors="pt")
- generated_ids = model.generate(batch["input_ids"], max_new_tokens=150)
- output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
- return output[0]
-
-input_component = gr.Textbox(label = "Input a persona, e.g. photographer", value = "photographer")
-output_component = gr.Textbox(label = "Prompt")
-examples = [["photographer"], ["Linux Admin"]]
-description = "This app generates ChatGPT prompts, it's based on a BART model trained on [this dataset](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts). Simply enter a persona that you want the prompt to be generated based on."
-gr.Interface(generate, inputs = input_component, outputs=output_component, examples=examples, title = "👨 ChatGPT Prompt Generator 👨", description=description).launch()
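The `generate` function above can also be exercised without the Gradio UI; a quick programmatic smoke test (the first call downloads the model from the Hub):

```python
for persona in ["photographer", "Linux Admin"]:
    print(persona, "->", generate(persona))
```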
diff --git a/spaces/Abubakari/Sales_Prediction/app.py b/spaces/Abubakari/Sales_Prediction/app.py
deleted file mode 100644
index c1cd67695b9c1cb2ef73fdd5f48006cd0f608f52..0000000000000000000000000000000000000000
--- a/spaces/Abubakari/Sales_Prediction/app.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import pandas as pd
-import streamlit as st
-import numpy as np
-from matplotlib import pyplot as plt
-import pickle
-import sklearn
-import joblib
-from PIL import Image
-import base64
-
-
-num_imputer = joblib.load('numerical_imputer.joblib')
-cat_imputer = joblib.load('categorical_imputer.joblib')
-encoder = joblib.load('encoder.joblib')
-scaler = joblib.load('scaler.joblib')
-dt_model = joblib.load('Final_model.joblib')
-
-# Add a title and subtitle
-st.write("
Sales Prediction App
", unsafe_allow_html=True)
-
-#image = Image.open("grocery_shopping_woman.png")
-
-# Display the image
-#st.image(image, width=600)
-
-# Load the image
-image = Image.open("grocery_shopping_woman.png")
-
-# Set up the layout
-col1, col2, col3 = st.columns([1, 3, 3])
-col2.image(image, width=600)
-
-
-#st.image("https://www.example.com/logo.png", width=200)
-# Add a subtitle or description
-st.write("This app uses machine learning to predict sales based on certain input parameters. Simply enter the required information and click 'Predict' to get a sales prediction!")
-
-st.subheader("Enter the details to predict sales")
-
-# Add some text
-#st.write("Enter some data for Prediction.")
-
- # Create the input fields
-input_data = {}
-col1,col2 = st.columns(2)
-with col1:
- input_data['store_nbr'] = st.slider("store_nbr",0,54)
- input_data['products'] = st.selectbox("products", ['AUTOMOTIVE', 'CLEANING', 'BEAUTY', 'FOODS', 'STATIONERY',
- 'CELEBRATION', 'GROCERY', 'HARDWARE', 'HOME', 'LADIESWEAR',
- 'LAWN AND GARDEN', 'CLOTHING', 'LIQUOR,WINE,BEER', 'PET SUPPLIES'])
- input_data['onpromotion'] =st.number_input("onpromotion",step=1)
- input_data['state'] = st.selectbox("state", ['Pichincha', 'Cotopaxi', 'Chimborazo', 'Imbabura',
- 'Santo Domingo de los Tsachilas', 'Bolivar', 'Pastaza',
- 'Tungurahua', 'Guayas', 'Santa Elena', 'Los Rios', 'Azuay', 'Loja',
- 'El Oro', 'Esmeraldas', 'Manabi'])
- input_data['store_type'] = st.selectbox("store_type",['D', 'C', 'B', 'E', 'A'])
- input_data['cluster'] = st.number_input("cluster",step=1)
-
-with col2:
- input_data['dcoilwtico'] = st.number_input("dcoilwtico",step=1)
- input_data['year'] = st.number_input("year",step=1)
- input_data['month'] = st.slider("month",1,12)
- input_data['day'] = st.slider("day",1,31)
- input_data['dayofweek'] = st.number_input("dayofweek,0=Sun and 6=Sat",step=1)
- input_data['end_month'] = st.selectbox("end_month",['True','False'])
-
-
-# Define CSS style for the download button
-# Define the custom CSS
-predict_button_css = """
-
-"""
-
-download_button_css = """
-
-"""
-
-# Display the custom CSS
-st.markdown(predict_button_css + download_button_css, unsafe_allow_html=True)
-
-
- # Create a button to make a prediction
-
-if st.button("Predict", key="predict_button", help="Click to make a prediction."):
- # Convert the input data to a pandas DataFrame
- input_df = pd.DataFrame([input_data])
-
-
-# Selecting categorical and numerical columns separately
- cat_columns = [col for col in input_df.columns if input_df[col].dtype == 'object']
- num_columns = [col for col in input_df.columns if input_df[col].dtype != 'object']
-
-
-# Apply the imputers
- input_df_imputed_cat = cat_imputer.transform(input_df[cat_columns])
- input_df_imputed_num = num_imputer.transform(input_df[num_columns])
-
-
- # Encode the categorical columns
- input_encoded_df = pd.DataFrame(encoder.transform(input_df_imputed_cat).toarray(),
- columns=encoder.get_feature_names(cat_columns))
-
-# Scale the numerical columns
- input_df_scaled = scaler.transform(input_df_imputed_num)
- input_scaled_df = pd.DataFrame(input_df_scaled , columns = num_columns)
-
-#joining the cat encoded and num scaled
- final_df = pd.concat([input_encoded_df, input_scaled_df], axis=1)
-
-# Make a prediction
- prediction = dt_model.predict(final_df)[0]
-
-
-# Display the prediction
- st.write(f"The predicted sales are: {prediction}.")
- input_df.to_csv("data.csv", index=False)
- st.table(input_df)
-
-# Define custom CSS
-css = """
-table {
- background-color: #f2f2f2;
- color: #333333;
-}
-"""
-
-# Set custom CSS
-st.write(f'<style>{css}</style>', unsafe_allow_html=True)
-
-
-# Add the download button
-def download_csv():
- with open("data.csv", "r") as f:
- csv = f.read()
- b64 = base64.b64encode(csv.encode()).decode()
- button = f'<a href="data:file/csv;base64,{b64}" download="data.csv">Download CSV</a>'
- return button
-
-st.markdown(
-    f'<div>{download_csv()}</div>',
- unsafe_allow_html=True
-)
diff --git a/spaces/Adr740/SmartHadithFR/app.py b/spaces/Adr740/SmartHadithFR/app.py
deleted file mode 100644
index 058d9f011669de121ece41a138f2a99de76696ce..0000000000000000000000000000000000000000
--- a/spaces/Adr740/SmartHadithFR/app.py
+++ /dev/null
@@ -1,41 +0,0 @@
-
-import gradio as gr
-from functools import partial
-from get_similar_hadiths import search_hadiths
-import pandas as pd
-
-title = "Smart Hadith (Version Française) -> [English version here](https://huggingface.co/spaces/Adr740/Hadith_AI_Explorer)"
-desc = "Bienvenue dans Smart Hadith. Smart Hadith est un outil de recherche sémantique de hadith utilisant l'intelligence artificelle. Contact suggestions/questions: [hdthaiexplorer@gmail.com](mailto:hdthaiexplorer@gmail.com)"
-
-# "This is a tool that helps you find quickly relevant hadiths on a topic or a problem you have. Just type in plain English what you are looking for in the box below.\n\n"
-warning = "\n\n**AVERTISSEMENT (seulement environ 3000 hadiths sont présents)**\nCET OUTIL EST DESTINÉ À DES FINS DE RÉFÉRENCE AFIN DE FACILITER LA RECHERCHE SUR LES HADITHS (PAROLES ET ACTES PROPHÉTIQUES), IL N'EST PAS DESTINÉ À ÊTRE UTILISÉ COMME OUTIL DE GUIDANCE OU DANS TOUT AUTRE BUT. LES UTILISATEURS SONT RESPONSABLES DE CONDUIRE LEURS PROPRES RECHERCHES ET DE DEMANDER DES CONSEILS AUX SAVANTS RELIGIEUX.\nVEUILLEZ NOTER QUE LE CONTENU AFFICHÉ PAR CET OUTIL N'EST PAS GARANTI COMME ÉTANT PRÉCIS, COMPLET OU À JOUR, ET N'EST PAS DESTINÉ À ÊTRE UTILISÉ COMME SOURCE RELIGIEUSE UNIQUE.\nLES DÉVELOPPEURS DE CET OUTIL NE SERONT PAS TENUS RESPONSABLES DE TOUTE DÉCISION OU UTILISATION FAITE PAR LES UTILISATEURS DE CET OUTIL."
-disclaimer = "\n## DISCLAIMER\n\nTHIS TOOL IS INTENDED FOR REFERENCE PURPOSES ONLY AND IS NOT INTENDED TO BE TAKEN AS RELIGIOUS ADVICE. THE HADITHS DISPLAYED BY THIS TOOL ARE NOT INTENDED TO BE USED AS A SOLE SOURCE OF RELIGIOUS GUIDANCE. USERS ARE RESPONSIBLE FOR CONDUCTING THEIR OWN RESEARCH AND SEEKING GUIDANCE FROM RELIGIOUS SCHOLARS.\n\nPLEASE NOTE THAT THE CONTENT DISPLAYED BY THIS TOOL IS NOT GUARANTEED TO BE ACCURATE, COMPLETE, OR UP-TO-DATE.\n\nTHE DEVELOPERS OF THIS TOOL WILL NOT BE HELD RESPONSIBLE FOR ANY DECISIONS MADE BY THE USERS OF THIS TOOL THAT ARE BASED ON THE CONTENT DISPLAYED BY THIS TOOL.\n\nHadiths gathered from this repository: https:\/\/www.kaggle.com\/datasets\/fahd09\/hadith-dataset"
-def iter_grid(n_rows, n_cols):
- for _ in range(n_rows):
- with gr.Row():
- for _ in range(n_cols):
- with gr.Column():
- yield
-with gr.Blocks(title=title) as demo:
- gr.Markdown(f"## {title}")
- gr.Markdown(desc+warning)
- # gr.Markdown(warning)
- with gr.Row():
- with gr.Column(scale=4):
- text_area = gr.Textbox(placeholder="Écrivez ici... Exemple: 'Hadiths sur le bon comportement et la nourriture'", lines=3, label="Décrivez avec vos mots ce que vous recherchez (mot-clé, sujet etc...)")
- with gr.Column(scale=1):
- number_to_display = gr.Number(value=10,label = "Nombre de hadiths à afficher")
- submit_button = gr.Button(value="Trouver des hadiths")
- pass
-
- fn = partial(search_hadiths)
-
- with gr.Accordion("Tous les résultats:"):
- ll = gr.Markdown("Vide")
-
-
- submit_button.click(fn=fn, inputs=[text_area,number_to_display], outputs=[ll])
-
-
-
-demo.launch(enable_queue=True, max_threads=40)
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/ResolveChildrenWidth.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/ResolveChildrenWidth.js
deleted file mode 100644
index 77e4f6b28113ac390185701f6a1782fb4331494e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/ResolveChildrenWidth.js
+++ /dev/null
@@ -1,16 +0,0 @@
-var ResolveChildrenWidth = function (parentWidth) {
- // Resolve width of sizer children
- var child, childWidth;
- var colWidth;
- for (var i in this.sizerChildren) {
- child = this.sizerChildren[i];
- if (child && child.isRexSizer && !child.ignoreLayout) {
- colWidth = this.getColumnWidth(parseInt(i) % this.columnCount);
- childWidth = this.getExpandedChildWidth(child, colWidth);
- childWidth = child.resolveWidth(childWidth);
- child.resolveChildrenWidth(childWidth);
- }
- }
-}
-
-export default ResolveChildrenWidth;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Press.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Press.js
deleted file mode 100644
index 1cf74edb5aaef00b93bb199769cc55fe86df1e84..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/press/Press.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import { Press } from '../../../plugins/gestures.js';
-export default Press;
\ No newline at end of file
diff --git a/spaces/Ajay07pandey/Netfilx_Movie_Recommendation_System/README.md b/spaces/Ajay07pandey/Netfilx_Movie_Recommendation_System/README.md
deleted file mode 100644
index 1f8339747ca190a8c30f6034293943a3a056ee10..0000000000000000000000000000000000000000
--- a/spaces/Ajay07pandey/Netfilx_Movie_Recommendation_System/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Netflix Movie Recommendation System
-emoji: 👀
-colorFrom: pink
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/utils.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/utils.py
deleted file mode 100644
index 9794e0fc3463a5e8fad05c037cce64683059a6d3..0000000000000000000000000000000000000000
--- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/utils.py
+++ /dev/null
@@ -1,226 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.ERROR)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
- except KeyError:  # parameter missing from the checkpoint; keep the freshly initialized value
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)  # np.fromstring is deprecated for binary data
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)  # np.fromstring is deprecated for binary data
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r", encoding="utf-8") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
\ No newline at end of file
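For reference, a minimal sketch of how the `HParams` class above behaves (the config values here are hypothetical):

```python
hps = HParams(train={"batch_size": 16, "learning_rate": 2e-4},
              model={"hidden_channels": 192})

print(hps.train.batch_size)          # 16  (nested dicts become nested HParams)
print(hps["model"].hidden_channels)  # 192 (dict-style access also works)
print("train" in hps, len(hps))      # True 2
```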
diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/waiter.h b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/waiter.h
deleted file mode 100644
index ee45fe3517be95ac1688a3e3540189edeb0d860c..0000000000000000000000000000000000000000
--- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/waiter.h
+++ /dev/null
@@ -1,83 +0,0 @@
-#pragma once
-
-#include <utility>
-#include <string>
-#include <mutex>
-#include <atomic>
-
-#include "libipc/def.h"
-#include "libipc/mutex.h"
-#include "libipc/condition.h"
-#include "libipc/platform/detail.h"
-
-namespace ipc {
-namespace detail {
-
-class waiter {
- ipc::sync::condition cond_;
- ipc::sync::mutex lock_;
- std::atomic<bool> quit_ {false};
-
-public:
- static void init();
-
- waiter() = default;
- waiter(char const *name) {
- open(name);
- }
-
- ~waiter() {
- close();
- }
-
- bool valid() const noexcept {
- return cond_.valid() && lock_.valid();
- }
-
- bool open(char const *name) noexcept {
- quit_.store(false, std::memory_order_relaxed);
- if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) {
- return false;
- }
- if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) {
- cond_.close();
- return false;
- }
- return valid();
- }
-
- void close() noexcept {
- cond_.close();
- lock_.close();
- }
-
- template <typename F>
- bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept {
- IPC_UNUSED_ std::lock_guard<ipc::sync::mutex> guard {lock_};
- while ([this, &pred] {
- return !quit_.load(std::memory_order_relaxed)
- && std::forward(pred)();
- }()) {
- if (!cond_.wait(lock_, tm)) return false;
- }
- return true;
- }
-
- bool notify() noexcept {
- std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
- return cond_.notify(lock_);
- }
-
- bool broadcast() noexcept {
- std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
- return cond_.broadcast(lock_);
- }
-
- bool quit_waiting() {
- quit_.store(true, std::memory_order_release);
- return broadcast();
- }
-};
-
-} // namespace detail
-} // namespace ipc
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/training/coaches/__init__.py b/spaces/Amrrs/DragGan-Inversion/PTI/training/coaches/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/iadb.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/iadb.py
deleted file mode 100644
index 1f421ee0ea4c21c66d94d7ba27ab1aeaac80de7d..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/iadb.py
+++ /dev/null
@@ -1,149 +0,0 @@
-from typing import List, Optional, Tuple, Union
-
-import torch
-
-from diffusers import DiffusionPipeline
-from diffusers.configuration_utils import ConfigMixin
-from diffusers.pipeline_utils import ImagePipelineOutput
-from diffusers.schedulers.scheduling_utils import SchedulerMixin
-
-
-class IADBScheduler(SchedulerMixin, ConfigMixin):
- """
- IADBScheduler is a scheduler for the Iterative α-(de)Blending denoising method. It is simple and minimalist.
-
- For more details, see the original paper: https://arxiv.org/abs/2305.03486 and the blog post: https://ggx-research.github.io/publication/2023/05/10/publication-iadb.html
- """
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- x_alpha: torch.FloatTensor,
- ) -> torch.FloatTensor:
- """
- Predict the sample at the previous timestep by reversing the ODE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model. It is the direction from x0 to x1.
- timestep (`float`): current timestep in the diffusion chain.
- x_alpha (`torch.FloatTensor`): x_alpha sample for the current timestep
-
- Returns:
- `torch.FloatTensor`: the sample at the previous timestep
-
- """
- if self.num_inference_steps is None:
- raise ValueError(
- "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
- )
-
- alpha = timestep / self.num_inference_steps
- alpha_next = (timestep + 1) / self.num_inference_steps
-
- d = model_output
-
- x_alpha = x_alpha + (alpha_next - alpha) * d
-
- return x_alpha
-
- def set_timesteps(self, num_inference_steps: int):
- self.num_inference_steps = num_inference_steps
-
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- alpha: torch.FloatTensor,
- ) -> torch.FloatTensor:
- return original_samples * alpha + noise * (1 - alpha)
-
- def __len__(self):
- return self.config.num_train_timesteps
-
-
-class IADBPipeline(DiffusionPipeline):
- r"""
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Parameters:
- unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
- [`DDPMScheduler`], or [`DDIMScheduler`].
- """
-
- def __init__(self, unet, scheduler):
- super().__init__()
-
- self.register_modules(unet=unet, scheduler=scheduler)
-
- @torch.no_grad()
- def __call__(
- self,
- batch_size: int = 1,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- num_inference_steps: int = 50,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- ) -> Union[ImagePipelineOutput, Tuple]:
- r"""
- Args:
- batch_size (`int`, *optional*, defaults to 1):
- The number of images to generate.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generate image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
-
- Returns:
- [`~pipelines.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if `return_dict` is
- True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.
- """
-
- # Sample gaussian noise to begin loop
- if isinstance(self.unet.config.sample_size, int):
- image_shape = (
- batch_size,
- self.unet.config.in_channels,
- self.unet.config.sample_size,
- self.unet.config.sample_size,
- )
- else:
- image_shape = (batch_size, self.unet.config.in_channels, *self.unet.config.sample_size)
-
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- image = torch.randn(image_shape, generator=generator, device=self.device, dtype=self.unet.dtype)
-
- # set step values
- self.scheduler.set_timesteps(num_inference_steps)
- x_alpha = image.clone()
- for t in self.progress_bar(range(num_inference_steps)):
- alpha = t / num_inference_steps
-
- # 1. predict noise model_output
- model_output = self.unet(x_alpha, torch.tensor(alpha, device=x_alpha.device)).sample
-
- # 2. step
- x_alpha = self.scheduler.step(model_output, t, x_alpha)
-
- image = (x_alpha * 0.5 + 0.5).clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).numpy()
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
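For orientation, a minimal usage sketch of the pipeline above. The UNet here is a freshly initialized toy model so the snippet is self-contained; a real run would load weights trained with the IADB objective (predicting the x0 -> x1 direction), otherwise the output is noise:

```python
import torch
from diffusers import UNet2DModel

# Untrained toy UNet, only to make the sketch runnable end to end.
unet = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)
scheduler = IADBScheduler()
pipe = IADBPipeline(unet=unet, scheduler=scheduler)

result = pipe(batch_size=1, num_inference_steps=50)  # ImagePipelineOutput
result.images[0].save("iadb_sample.png")
```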
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vae_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vae_flax.py
deleted file mode 100644
index b8f5b1d0e399ab8e58d81d396d19b6f082192f5a..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vae_flax.py
+++ /dev/null
@@ -1,869 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# JAX implementation of VQGAN from taming-transformers https://github.com/CompVis/taming-transformers
-
-import math
-from functools import partial
-from typing import Tuple
-
-import flax
-import flax.linen as nn
-import jax
-import jax.numpy as jnp
-from flax.core.frozen_dict import FrozenDict
-
-from ..configuration_utils import ConfigMixin, flax_register_to_config
-from ..utils import BaseOutput
-from .modeling_flax_utils import FlaxModelMixin
-
-
-@flax.struct.dataclass
-class FlaxDecoderOutput(BaseOutput):
- """
- Output of decoding method.
-
- Args:
- sample (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)`):
- The decoded output sample from the last layer of the model.
- dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`):
- The `dtype` of the parameters.
- """
-
- sample: jnp.ndarray
-
-
-@flax.struct.dataclass
-class FlaxAutoencoderKLOutput(BaseOutput):
- """
- Output of AutoencoderKL encoding method.
-
- Args:
- latent_dist (`FlaxDiagonalGaussianDistribution`):
- Encoded outputs of `Encoder` represented as the mean and logvar of `FlaxDiagonalGaussianDistribution`.
- `FlaxDiagonalGaussianDistribution` allows for sampling latents from the distribution.
- """
-
- latent_dist: "FlaxDiagonalGaussianDistribution"
-
-
-class FlaxUpsample2D(nn.Module):
- """
- Flax implementation of 2D Upsample layer
-
- Args:
- in_channels (`int`):
- Input channels
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
-
- in_channels: int
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.conv = nn.Conv(
- self.in_channels,
- kernel_size=(3, 3),
- strides=(1, 1),
- padding=((1, 1), (1, 1)),
- dtype=self.dtype,
- )
-
- def __call__(self, hidden_states):
- batch, height, width, channels = hidden_states.shape
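-        # NHWC layout: double the spatial dims with a nearest-neighbor resize, then refine with a 3x3 conv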
- hidden_states = jax.image.resize(
- hidden_states,
- shape=(batch, height * 2, width * 2, channels),
- method="nearest",
- )
- hidden_states = self.conv(hidden_states)
- return hidden_states
-
-
-class FlaxDownsample2D(nn.Module):
- """
- Flax implementation of 2D Downsample layer
-
- Args:
- in_channels (`int`):
- Input channels
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
-
- in_channels: int
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.conv = nn.Conv(
- self.in_channels,
- kernel_size=(3, 3),
- strides=(2, 2),
- padding="VALID",
- dtype=self.dtype,
- )
-
- def __call__(self, hidden_states):
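-        # pad only on the bottom/right so the stride-2 VALID conv halves even spatial dims,
-        # mirroring the asymmetric padding of the PyTorch implementation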
- pad = ((0, 0), (0, 1), (0, 1), (0, 0)) # pad height and width dim
- hidden_states = jnp.pad(hidden_states, pad_width=pad)
- hidden_states = self.conv(hidden_states)
- return hidden_states
-
-
-class FlaxResnetBlock2D(nn.Module):
- """
- Flax implementation of 2D Resnet Block.
-
- Args:
- in_channels (`int`):
- Input channels
- out_channels (`int`):
- Output channels
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- groups (:obj:`int`, *optional*, defaults to `32`):
- The number of groups to use for group norm.
-        use_nin_shortcut (:obj:`bool`, *optional*, defaults to `None`):
-            Whether to use a `nin_shortcut` (1x1 convolution) on the residual path. If `None`, it is enabled
-            automatically when the input and output channel counts differ.
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
-
- in_channels: int
- out_channels: int = None
- dropout: float = 0.0
- groups: int = 32
- use_nin_shortcut: bool = None
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- out_channels = self.in_channels if self.out_channels is None else self.out_channels
-
- self.norm1 = nn.GroupNorm(num_groups=self.groups, epsilon=1e-6)
- self.conv1 = nn.Conv(
- out_channels,
- kernel_size=(3, 3),
- strides=(1, 1),
- padding=((1, 1), (1, 1)),
- dtype=self.dtype,
- )
-
- self.norm2 = nn.GroupNorm(num_groups=self.groups, epsilon=1e-6)
- self.dropout_layer = nn.Dropout(self.dropout)
- self.conv2 = nn.Conv(
- out_channels,
- kernel_size=(3, 3),
- strides=(1, 1),
- padding=((1, 1), (1, 1)),
- dtype=self.dtype,
- )
-
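-        # when channel counts differ, project the residual with a 1x1 ("network-in-network") conv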
- use_nin_shortcut = self.in_channels != out_channels if self.use_nin_shortcut is None else self.use_nin_shortcut
-
- self.conv_shortcut = None
- if use_nin_shortcut:
- self.conv_shortcut = nn.Conv(
- out_channels,
- kernel_size=(1, 1),
- strides=(1, 1),
- padding="VALID",
- dtype=self.dtype,
- )
-
- def __call__(self, hidden_states, deterministic=True):
- residual = hidden_states
- hidden_states = self.norm1(hidden_states)
- hidden_states = nn.swish(hidden_states)
- hidden_states = self.conv1(hidden_states)
-
- hidden_states = self.norm2(hidden_states)
- hidden_states = nn.swish(hidden_states)
- hidden_states = self.dropout_layer(hidden_states, deterministic)
- hidden_states = self.conv2(hidden_states)
-
- if self.conv_shortcut is not None:
- residual = self.conv_shortcut(residual)
-
- return hidden_states + residual
-
-
-class FlaxAttentionBlock(nn.Module):
- r"""
- Flax Convolutional based multi-head attention block for diffusion-based VAE.
-
- Parameters:
- channels (:obj:`int`):
- Input channels
-        num_head_channels (:obj:`int`, *optional*, defaults to `None`):
-            Number of channels per attention head; the number of heads is `channels // num_head_channels`.
-            If `None`, a single head is used.
- num_groups (:obj:`int`, *optional*, defaults to `32`):
- The number of groups to use for group norm
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
-
- """
- channels: int
- num_head_channels: int = None
- num_groups: int = 32
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.num_heads = self.channels // self.num_head_channels if self.num_head_channels is not None else 1
-
- dense = partial(nn.Dense, self.channels, dtype=self.dtype)
-
- self.group_norm = nn.GroupNorm(num_groups=self.num_groups, epsilon=1e-6)
- self.query, self.key, self.value = dense(), dense(), dense()
- self.proj_attn = dense()
-
- def transpose_for_scores(self, projection):
- new_projection_shape = projection.shape[:-1] + (self.num_heads, -1)
-        # split the hidden dim into heads: (B, T, H * D) -> (B, T, H, D)
- new_projection = projection.reshape(new_projection_shape)
- # (B, T, H, D) -> (B, H, T, D)
- new_projection = jnp.transpose(new_projection, (0, 2, 1, 3))
- return new_projection
-
- def __call__(self, hidden_states):
- residual = hidden_states
- batch, height, width, channels = hidden_states.shape
-
- hidden_states = self.group_norm(hidden_states)
-
- hidden_states = hidden_states.reshape((batch, height * width, channels))
-
- query = self.query(hidden_states)
- key = self.key(hidden_states)
- value = self.value(hidden_states)
-
- # transpose
- query = self.transpose_for_scores(query)
- key = self.transpose_for_scores(key)
- value = self.transpose_for_scores(value)
-
- # compute attentions
- scale = 1 / math.sqrt(math.sqrt(self.channels / self.num_heads))
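-        # scaling q and k each by d_head^(-1/4) yields the standard 1/sqrt(d_head) attention scaling
-        # while keeping intermediate values in a safer numerical range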
- attn_weights = jnp.einsum("...qc,...kc->...qk", query * scale, key * scale)
- attn_weights = nn.softmax(attn_weights, axis=-1)
-
- # attend to values
- hidden_states = jnp.einsum("...kc,...qk->...qc", value, attn_weights)
-
- hidden_states = jnp.transpose(hidden_states, (0, 2, 1, 3))
- new_hidden_states_shape = hidden_states.shape[:-2] + (self.channels,)
- hidden_states = hidden_states.reshape(new_hidden_states_shape)
-
- hidden_states = self.proj_attn(hidden_states)
- hidden_states = hidden_states.reshape((batch, height, width, channels))
- hidden_states = hidden_states + residual
- return hidden_states
-
-
-class FlaxDownEncoderBlock2D(nn.Module):
- r"""
- Flax Resnet blocks-based Encoder block for diffusion-based VAE.
-
- Parameters:
- in_channels (:obj:`int`):
- Input channels
- out_channels (:obj:`int`):
- Output channels
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- num_layers (:obj:`int`, *optional*, defaults to 1):
- Number of Resnet layer block
- resnet_groups (:obj:`int`, *optional*, defaults to `32`):
- The number of groups to use for the Resnet block group norm
- add_downsample (:obj:`bool`, *optional*, defaults to `True`):
- Whether to add downsample layer
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- out_channels: int
- dropout: float = 0.0
- num_layers: int = 1
- resnet_groups: int = 32
- add_downsample: bool = True
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- resnets = []
- for i in range(self.num_layers):
- in_channels = self.in_channels if i == 0 else self.out_channels
-
- res_block = FlaxResnetBlock2D(
- in_channels=in_channels,
- out_channels=self.out_channels,
- dropout=self.dropout,
- groups=self.resnet_groups,
- dtype=self.dtype,
- )
- resnets.append(res_block)
- self.resnets = resnets
-
- if self.add_downsample:
- self.downsamplers_0 = FlaxDownsample2D(self.out_channels, dtype=self.dtype)
-
- def __call__(self, hidden_states, deterministic=True):
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states, deterministic=deterministic)
-
- if self.add_downsample:
- hidden_states = self.downsamplers_0(hidden_states)
-
- return hidden_states
-
-
-class FlaxUpDecoderBlock2D(nn.Module):
- r"""
- Flax Resnet blocks-based Decoder block for diffusion-based VAE.
-
- Parameters:
- in_channels (:obj:`int`):
- Input channels
- out_channels (:obj:`int`):
- Output channels
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- num_layers (:obj:`int`, *optional*, defaults to 1):
- Number of Resnet layer block
- resnet_groups (:obj:`int`, *optional*, defaults to `32`):
- The number of groups to use for the Resnet block group norm
- add_upsample (:obj:`bool`, *optional*, defaults to `True`):
- Whether to add upsample layer
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- out_channels: int
- dropout: float = 0.0
- num_layers: int = 1
- resnet_groups: int = 32
- add_upsample: bool = True
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- resnets = []
- for i in range(self.num_layers):
- in_channels = self.in_channels if i == 0 else self.out_channels
- res_block = FlaxResnetBlock2D(
- in_channels=in_channels,
- out_channels=self.out_channels,
- dropout=self.dropout,
- groups=self.resnet_groups,
- dtype=self.dtype,
- )
- resnets.append(res_block)
-
- self.resnets = resnets
-
- if self.add_upsample:
- self.upsamplers_0 = FlaxUpsample2D(self.out_channels, dtype=self.dtype)
-
- def __call__(self, hidden_states, deterministic=True):
- for resnet in self.resnets:
- hidden_states = resnet(hidden_states, deterministic=deterministic)
-
- if self.add_upsample:
- hidden_states = self.upsamplers_0(hidden_states)
-
- return hidden_states
-
-
-class FlaxUNetMidBlock2D(nn.Module):
- r"""
- Flax Unet Mid-Block module.
-
- Parameters:
- in_channels (:obj:`int`):
- Input channels
- dropout (:obj:`float`, *optional*, defaults to 0.0):
- Dropout rate
- num_layers (:obj:`int`, *optional*, defaults to 1):
- Number of Resnet layer block
- resnet_groups (:obj:`int`, *optional*, defaults to `32`):
- The number of groups to use for the Resnet and Attention block group norm
-        num_attention_heads (:obj:`int`, *optional*, defaults to `1`):
-            Number of attention heads for each attention block (forwarded to `FlaxAttentionBlock` as
-            `num_head_channels`)
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int
- dropout: float = 0.0
- num_layers: int = 1
- resnet_groups: int = 32
- num_attention_heads: int = 1
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- resnet_groups = self.resnet_groups if self.resnet_groups is not None else min(self.in_channels // 4, 32)
-
- # there is always at least one resnet
- resnets = [
- FlaxResnetBlock2D(
- in_channels=self.in_channels,
- out_channels=self.in_channels,
- dropout=self.dropout,
- groups=resnet_groups,
- dtype=self.dtype,
- )
- ]
-
- attentions = []
-
- for _ in range(self.num_layers):
- attn_block = FlaxAttentionBlock(
- channels=self.in_channels,
- num_head_channels=self.num_attention_heads,
- num_groups=resnet_groups,
- dtype=self.dtype,
- )
- attentions.append(attn_block)
-
- res_block = FlaxResnetBlock2D(
- in_channels=self.in_channels,
- out_channels=self.in_channels,
- dropout=self.dropout,
- groups=resnet_groups,
- dtype=self.dtype,
- )
- resnets.append(res_block)
-
- self.resnets = resnets
- self.attentions = attentions
-
- def __call__(self, hidden_states, deterministic=True):
- hidden_states = self.resnets[0](hidden_states, deterministic=deterministic)
- for attn, resnet in zip(self.attentions, self.resnets[1:]):
- hidden_states = attn(hidden_states)
- hidden_states = resnet(hidden_states, deterministic=deterministic)
-
- return hidden_states
-
-
-class FlaxEncoder(nn.Module):
- r"""
- Flax Implementation of VAE Encoder.
-
- This model is a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module)
- subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to
- general usage and behavior.
-
- Finally, this model supports inherent JAX features such as:
- - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
-
- Parameters:
- in_channels (:obj:`int`, *optional*, defaults to 3):
- Input channels
- out_channels (:obj:`int`, *optional*, defaults to 3):
- Output channels
- down_block_types (:obj:`Tuple[str]`, *optional*, defaults to `(DownEncoderBlock2D)`):
- DownEncoder block type
-        block_out_channels (:obj:`Tuple[int]`, *optional*, defaults to `(64,)`):
- Tuple containing the number of output channels for each block
- layers_per_block (:obj:`int`, *optional*, defaults to `2`):
- Number of Resnet layer for each block
-        norm_num_groups (:obj:`int`, *optional*, defaults to `32`):
-            The number of groups to use for group norm
- act_fn (:obj:`str`, *optional*, defaults to `silu`):
- Activation function
- double_z (:obj:`bool`, *optional*, defaults to `False`):
- Whether to double the last output channels
- dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
- Parameters `dtype`
- """
- in_channels: int = 3
- out_channels: int = 3
- down_block_types: Tuple[str] = ("DownEncoderBlock2D",)
- block_out_channels: Tuple[int] = (64,)
- layers_per_block: int = 2
- norm_num_groups: int = 32
- act_fn: str = "silu"
- double_z: bool = False
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- block_out_channels = self.block_out_channels
- # in
- self.conv_in = nn.Conv(
- block_out_channels[0],
- kernel_size=(3, 3),
- strides=(1, 1),
- padding=((1, 1), (1, 1)),
- dtype=self.dtype,
- )
-
- # downsampling
- down_blocks = []
- output_channel = block_out_channels[0]
- for i, _ in enumerate(self.down_block_types):
- input_channel = output_channel
- output_channel = block_out_channels[i]
- is_final_block = i == len(block_out_channels) - 1
-
- down_block = FlaxDownEncoderBlock2D(
- in_channels=input_channel,
- out_channels=output_channel,
- num_layers=self.layers_per_block,
- resnet_groups=self.norm_num_groups,
- add_downsample=not is_final_block,
- dtype=self.dtype,
- )
- down_blocks.append(down_block)
- self.down_blocks = down_blocks
-
- # middle
- self.mid_block = FlaxUNetMidBlock2D(
- in_channels=block_out_channels[-1],
- resnet_groups=self.norm_num_groups,
- num_attention_heads=None,
- dtype=self.dtype,
- )
-
- # end
- conv_out_channels = 2 * self.out_channels if self.double_z else self.out_channels
- self.conv_norm_out = nn.GroupNorm(num_groups=self.norm_num_groups, epsilon=1e-6)
- self.conv_out = nn.Conv(
- conv_out_channels,
- kernel_size=(3, 3),
- strides=(1, 1),
- padding=((1, 1), (1, 1)),
- dtype=self.dtype,
- )
-
- def __call__(self, sample, deterministic: bool = True):
- # in
- sample = self.conv_in(sample)
-
- # downsampling
- for block in self.down_blocks:
- sample = block(sample, deterministic=deterministic)
-
- # middle
- sample = self.mid_block(sample, deterministic=deterministic)
-
- # end
- sample = self.conv_norm_out(sample)
- sample = nn.swish(sample)
- sample = self.conv_out(sample)
-
- return sample
-
-
-class FlaxDecoder(nn.Module):
- r"""
- Flax Implementation of VAE Decoder.
-
- This model is a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module)
- subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to
- general usage and behavior.
-
- Finally, this model supports inherent JAX features such as:
- - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
-
- Parameters:
- in_channels (:obj:`int`, *optional*, defaults to 3):
- Input channels
- out_channels (:obj:`int`, *optional*, defaults to 3):
- Output channels
- up_block_types (:obj:`Tuple[str]`, *optional*, defaults to `(UpDecoderBlock2D)`):
- UpDecoder block type
-        block_out_channels (:obj:`Tuple[int]`, *optional*, defaults to `(64,)`):
- Tuple containing the number of output channels for each block
- layers_per_block (:obj:`int`, *optional*, defaults to `2`):
- Number of Resnet layer for each block
-        norm_num_groups (:obj:`int`, *optional*, defaults to `32`):
-            The number of groups to use for group norm
- act_fn (:obj:`str`, *optional*, defaults to `silu`):
- Activation function
-        dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
-            Parameters `dtype`
- """
- in_channels: int = 3
- out_channels: int = 3
- up_block_types: Tuple[str] = ("UpDecoderBlock2D",)
-    block_out_channels: Tuple[int] = (64,)
- layers_per_block: int = 2
- norm_num_groups: int = 32
- act_fn: str = "silu"
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- block_out_channels = self.block_out_channels
-
- # z to block_in
- self.conv_in = nn.Conv(
- block_out_channels[-1],
- kernel_size=(3, 3),
- strides=(1, 1),
- padding=((1, 1), (1, 1)),
- dtype=self.dtype,
- )
-
- # middle
- self.mid_block = FlaxUNetMidBlock2D(
- in_channels=block_out_channels[-1],
- resnet_groups=self.norm_num_groups,
- num_attention_heads=None,
- dtype=self.dtype,
- )
-
- # upsampling
- reversed_block_out_channels = list(reversed(block_out_channels))
- output_channel = reversed_block_out_channels[0]
- up_blocks = []
- for i, _ in enumerate(self.up_block_types):
- prev_output_channel = output_channel
- output_channel = reversed_block_out_channels[i]
-
- is_final_block = i == len(block_out_channels) - 1
-
- up_block = FlaxUpDecoderBlock2D(
- in_channels=prev_output_channel,
- out_channels=output_channel,
- num_layers=self.layers_per_block + 1,
- resnet_groups=self.norm_num_groups,
- add_upsample=not is_final_block,
- dtype=self.dtype,
- )
- up_blocks.append(up_block)
- prev_output_channel = output_channel
-
- self.up_blocks = up_blocks
-
- # end
- self.conv_norm_out = nn.GroupNorm(num_groups=self.norm_num_groups, epsilon=1e-6)
- self.conv_out = nn.Conv(
- self.out_channels,
- kernel_size=(3, 3),
- strides=(1, 1),
- padding=((1, 1), (1, 1)),
- dtype=self.dtype,
- )
-
- def __call__(self, sample, deterministic: bool = True):
- # z to block_in
- sample = self.conv_in(sample)
-
- # middle
- sample = self.mid_block(sample, deterministic=deterministic)
-
- # upsampling
- for block in self.up_blocks:
- sample = block(sample, deterministic=deterministic)
-
- sample = self.conv_norm_out(sample)
- sample = nn.swish(sample)
- sample = self.conv_out(sample)
-
- return sample
-
-
-class FlaxDiagonalGaussianDistribution(object):
- def __init__(self, parameters, deterministic=False):
- # Last axis to account for channels-last
- self.mean, self.logvar = jnp.split(parameters, 2, axis=-1)
- self.logvar = jnp.clip(self.logvar, -30.0, 20.0)
- self.deterministic = deterministic
- self.std = jnp.exp(0.5 * self.logvar)
- self.var = jnp.exp(self.logvar)
- if self.deterministic:
- self.var = self.std = jnp.zeros_like(self.mean)
-
- def sample(self, key):
- return self.mean + self.std * jax.random.normal(key, self.mean.shape)
-
- def kl(self, other=None):
- if self.deterministic:
- return jnp.array([0.0])
-
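-        # KL(N(mean, var) || N(0, I)) = 0.5 * sum(mean^2 + var - 1 - log var)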
- if other is None:
- return 0.5 * jnp.sum(self.mean**2 + self.var - 1.0 - self.logvar, axis=[1, 2, 3])
-
- return 0.5 * jnp.sum(
- jnp.square(self.mean - other.mean) / other.var + self.var / other.var - 1.0 - self.logvar + other.logvar,
- axis=[1, 2, 3],
- )
-
- def nll(self, sample, axis=[1, 2, 3]):
- if self.deterministic:
- return jnp.array([0.0])
-
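-        # NLL of a diagonal Gaussian: 0.5 * sum(log(2*pi) + log var + (sample - mean)^2 / var)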
- logtwopi = jnp.log(2.0 * jnp.pi)
- return 0.5 * jnp.sum(logtwopi + self.logvar + jnp.square(sample - self.mean) / self.var, axis=axis)
-
- def mode(self):
- return self.mean
-
-
-@flax_register_to_config
-class FlaxAutoencoderKL(nn.Module, FlaxModelMixin, ConfigMixin):
- r"""
- Flax implementation of a VAE model with KL loss for decoding latent representations.
-
-    This model inherits from [`FlaxModelMixin`]. Check the superclass documentation for its generic methods
- implemented for all models (such as downloading or saving).
-
- This model is a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module)
- subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matter related to its
- general usage and behavior.
-
- Inherent JAX features such as the following are supported:
-
- - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
- - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
- - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
- - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
-
- Parameters:
- in_channels (`int`, *optional*, defaults to 3):
- Number of channels in the input image.
- out_channels (`int`, *optional*, defaults to 3):
- Number of channels in the output.
- down_block_types (`Tuple[str]`, *optional*, defaults to `(DownEncoderBlock2D)`):
- Tuple of downsample block types.
- up_block_types (`Tuple[str]`, *optional*, defaults to `(UpDecoderBlock2D)`):
- Tuple of upsample block types.
-        block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`):
- Tuple of block output channels.
- layers_per_block (`int`, *optional*, defaults to `2`):
- Number of ResNet layer for each block.
- act_fn (`str`, *optional*, defaults to `silu`):
- The activation function to use.
- latent_channels (`int`, *optional*, defaults to `4`):
- Number of channels in the latent space.
- norm_num_groups (`int`, *optional*, defaults to `32`):
- The number of groups for normalization.
- sample_size (`int`, *optional*, defaults to 32):
- Sample input size.
- scaling_factor (`float`, *optional*, defaults to 0.18215):
- The component-wise standard deviation of the trained latent space computed using the first batch of the
- training set. This is used to scale the latent space to have unit variance when training the diffusion
- model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
- diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
- / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
- Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper.
- dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`):
- The `dtype` of the parameters.
- """
- in_channels: int = 3
- out_channels: int = 3
- down_block_types: Tuple[str] = ("DownEncoderBlock2D",)
- up_block_types: Tuple[str] = ("UpDecoderBlock2D",)
- block_out_channels: Tuple[int] = (64,)
- layers_per_block: int = 1
- act_fn: str = "silu"
- latent_channels: int = 4
- norm_num_groups: int = 32
- sample_size: int = 32
- scaling_factor: float = 0.18215
- dtype: jnp.dtype = jnp.float32
-
- def setup(self):
- self.encoder = FlaxEncoder(
- in_channels=self.config.in_channels,
- out_channels=self.config.latent_channels,
- down_block_types=self.config.down_block_types,
- block_out_channels=self.config.block_out_channels,
- layers_per_block=self.config.layers_per_block,
- act_fn=self.config.act_fn,
- norm_num_groups=self.config.norm_num_groups,
- double_z=True,
- dtype=self.dtype,
- )
- self.decoder = FlaxDecoder(
- in_channels=self.config.latent_channels,
- out_channels=self.config.out_channels,
- up_block_types=self.config.up_block_types,
- block_out_channels=self.config.block_out_channels,
- layers_per_block=self.config.layers_per_block,
- norm_num_groups=self.config.norm_num_groups,
- act_fn=self.config.act_fn,
- dtype=self.dtype,
- )
- self.quant_conv = nn.Conv(
- 2 * self.config.latent_channels,
- kernel_size=(1, 1),
- strides=(1, 1),
- padding="VALID",
- dtype=self.dtype,
- )
- self.post_quant_conv = nn.Conv(
- self.config.latent_channels,
- kernel_size=(1, 1),
- strides=(1, 1),
- padding="VALID",
- dtype=self.dtype,
- )
-
- def init_weights(self, rng: jax.random.KeyArray) -> FrozenDict:
- # init input tensors
- sample_shape = (1, self.in_channels, self.sample_size, self.sample_size)
- sample = jnp.zeros(sample_shape, dtype=jnp.float32)
-
- params_rng, dropout_rng, gaussian_rng = jax.random.split(rng, 3)
- rngs = {"params": params_rng, "dropout": dropout_rng, "gaussian": gaussian_rng}
-
- return self.init(rngs, sample)["params"]
-
- def encode(self, sample, deterministic: bool = True, return_dict: bool = True):
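-        # inputs follow the PyTorch NCHW convention; Flax convs expect NHWC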
- sample = jnp.transpose(sample, (0, 2, 3, 1))
-
- hidden_states = self.encoder(sample, deterministic=deterministic)
- moments = self.quant_conv(hidden_states)
- posterior = FlaxDiagonalGaussianDistribution(moments)
-
- if not return_dict:
- return (posterior,)
-
- return FlaxAutoencoderKLOutput(latent_dist=posterior)
-
- def decode(self, latents, deterministic: bool = True, return_dict: bool = True):
- if latents.shape[-1] != self.config.latent_channels:
- latents = jnp.transpose(latents, (0, 2, 3, 1))
-
- hidden_states = self.post_quant_conv(latents)
- hidden_states = self.decoder(hidden_states, deterministic=deterministic)
-
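-        # back to NCHW for parity with the PyTorch implementation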
- hidden_states = jnp.transpose(hidden_states, (0, 3, 1, 2))
-
- if not return_dict:
- return (hidden_states,)
-
- return FlaxDecoderOutput(sample=hidden_states)
-
- def __call__(self, sample, sample_posterior=False, deterministic: bool = True, return_dict: bool = True):
-        # request the dataclass internally; `return_dict` only controls this method's own output
-        posterior = self.encode(sample, deterministic=deterministic, return_dict=True)
- if sample_posterior:
- rng = self.make_rng("gaussian")
- hidden_states = posterior.latent_dist.sample(rng)
- else:
- hidden_states = posterior.latent_dist.mode()
-
-        sample = self.decode(hidden_states, return_dict=True).sample
-
- if not return_dict:
- return (sample,)
-
- return FlaxDecoderOutput(sample=sample)
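-
-
-# Minimal usage sketch (illustrative only; the shapes and random initialization below are
-# assumptions, not part of the original file):
-#
-#     model = FlaxAutoencoderKL(sample_size=32)
-#     params = model.init_weights(jax.random.PRNGKey(0))
-#     images = jnp.zeros((1, 3, 32, 32))  # NCHW, as expected by `encode`
-#     out = model.apply({"params": params}, images)  # FlaxDecoderOutput with `.sample` in NCHW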
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint.py
deleted file mode 100644
index 4856babce807214a29d38264fe5479294ff3f1e0..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_inpaint.py
+++ /dev/null
@@ -1,561 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-from transformers import CLIPImageProcessor, CLIPTokenizer
-
-from ...configuration_utils import FrozenDict
-from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from ...utils import PIL_INTERPOLATION, deprecate, logging
-from ..onnx_utils import ORT_TO_NP_TYPE, OnnxRuntimeModel
-from ..pipeline_utils import DiffusionPipeline
-from . import StableDiffusionPipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-NUM_UNET_INPUT_CHANNELS = 9
-NUM_LATENT_CHANNELS = 4
-
-
-def prepare_mask_and_masked_image(image, mask, latents_shape):
- image = np.array(image.convert("RGB").resize((latents_shape[1] * 8, latents_shape[0] * 8)))
- image = image[None].transpose(0, 3, 1, 2)
- image = image.astype(np.float32) / 127.5 - 1.0
-
- image_mask = np.array(mask.convert("L").resize((latents_shape[1] * 8, latents_shape[0] * 8)))
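-    # keep pixels where the mask is dark (< 127.5); zero out the white regions that will be repainted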
- masked_image = image * (image_mask < 127.5)
-
- mask = mask.resize((latents_shape[1], latents_shape[0]), PIL_INTERPOLATION["nearest"])
- mask = np.array(mask.convert("L"))
- mask = mask.astype(np.float32) / 255.0
- mask = mask[None, None]
- mask[mask < 0.5] = 0
- mask[mask >= 0.5] = 1
-
- return mask, masked_image
-
-
-class OnnxStableDiffusionInpaintPipeline(DiffusionPipeline):
- r"""
- Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
-            Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPImageProcessor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
- vae_encoder: OnnxRuntimeModel
- vae_decoder: OnnxRuntimeModel
- text_encoder: OnnxRuntimeModel
- tokenizer: CLIPTokenizer
- unet: OnnxRuntimeModel
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler]
- safety_checker: OnnxRuntimeModel
- feature_extractor: CLIPImageProcessor
-
- _optional_components = ["safety_checker", "feature_extractor"]
- _is_onnx = True
-
- def __init__(
- self,
- vae_encoder: OnnxRuntimeModel,
- vae_decoder: OnnxRuntimeModel,
- text_encoder: OnnxRuntimeModel,
- tokenizer: CLIPTokenizer,
- unet: OnnxRuntimeModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
- safety_checker: OnnxRuntimeModel,
- feature_extractor: CLIPImageProcessor,
- requires_safety_checker: bool = True,
- ):
- super().__init__()
- logger.info("`OnnxStableDiffusionInpaintPipeline` is experimental and will very likely change in the future.")
-
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
- " file"
- )
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["steps_offset"] = 1
- scheduler._internal_dict = FrozenDict(new_config)
-
- if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
- " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
- " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
- " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
- )
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["clip_sample"] = False
- scheduler._internal_dict = FrozenDict(new_config)
-
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
-
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
-
- self.register_modules(
- vae_encoder=vae_encoder,
- vae_decoder=vae_decoder,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_onnx_stable_diffusion.OnnxStableDiffusionPipeline._encode_prompt
- def _encode_prompt(
- self,
- prompt: Union[str, List[str]],
- num_images_per_prompt: Optional[int],
- do_classifier_free_guidance: bool,
- negative_prompt: Optional[str],
- prompt_embeds: Optional[np.ndarray] = None,
- negative_prompt_embeds: Optional[np.ndarray] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`):
- prompt to be encoded
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- prompt_embeds (`np.ndarray`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`np.ndarray`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- """
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if prompt_embeds is None:
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="np",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="np").input_ids
-
- if not np.array_equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- prompt_embeds = self.text_encoder(input_ids=text_input_ids.astype(np.int32))[0]
-
- prompt_embeds = np.repeat(prompt_embeds, num_images_per_prompt, axis=0)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance and negative_prompt_embeds is None:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt] * batch_size
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = prompt_embeds.shape[1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="np",
- )
- negative_prompt_embeds = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int32))[0]
-
- if do_classifier_free_guidance:
- negative_prompt_embeds = np.repeat(negative_prompt_embeds, num_images_per_prompt, axis=0)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = np.concatenate([negative_prompt_embeds, prompt_embeds])
-
- return prompt_embeds
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_onnx_stable_diffusion.OnnxStableDiffusionPipeline.check_inputs
- def check_inputs(
- self,
- prompt: Union[str, List[str]],
- height: Optional[int],
- width: Optional[int],
- callback_steps: int,
- negative_prompt: Optional[str] = None,
- prompt_embeds: Optional[np.ndarray] = None,
- negative_prompt_embeds: Optional[np.ndarray] = None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- image: PIL.Image.Image,
- mask_image: PIL.Image.Image,
- height: Optional[int] = 512,
- width: Optional[int] = 512,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[np.random.RandomState] = None,
- latents: Optional[np.ndarray] = None,
- prompt_embeds: Optional[np.ndarray] = None,
- negative_prompt_embeds: Optional[np.ndarray] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
- callback_steps: int = 1,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- image (`PIL.Image.Image`):
- `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
- be masked out with `mask_image` and repainted according to `prompt`.
- mask_image (`PIL.Image.Image`):
- `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
- repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
- to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
- instead of 3, so the expected shape would be `(B, H, W, 1)`.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the
-                text `prompt`, usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`np.random.RandomState`, *optional*):
- A np.random.RandomState to make generation deterministic.
- latents (`np.ndarray`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`np.ndarray`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`np.ndarray`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
-
- # check inputs. Raise error if not correct
- self.check_inputs(
- prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
- )
-
- # define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if generator is None:
- generator = np.random
-
- # set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
-
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- prompt_embeds = self._encode_prompt(
- prompt,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- )
-
- num_channels_latents = NUM_LATENT_CHANNELS
- latents_shape = (batch_size * num_images_per_prompt, num_channels_latents, height // 8, width // 8)
- latents_dtype = prompt_embeds.dtype
- if latents is None:
- latents = generator.randn(*latents_shape).astype(latents_dtype)
- else:
- if latents.shape != latents_shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
-
- # prepare mask and masked_image
- mask, masked_image = prepare_mask_and_masked_image(image, mask_image, latents_shape[-2:])
- mask = mask.astype(latents.dtype)
- masked_image = masked_image.astype(latents.dtype)
-
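-        # encode the masked image and scale the latents by the Stable Diffusion VAE scaling factor (0.18215)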
- masked_image_latents = self.vae_encoder(sample=masked_image)[0]
- masked_image_latents = 0.18215 * masked_image_latents
-
- # duplicate mask and masked_image_latents for each generation per prompt
- mask = mask.repeat(batch_size * num_images_per_prompt, 0)
- masked_image_latents = masked_image_latents.repeat(batch_size * num_images_per_prompt, 0)
-
- mask = np.concatenate([mask] * 2) if do_classifier_free_guidance else mask
- masked_image_latents = (
- np.concatenate([masked_image_latents] * 2) if do_classifier_free_guidance else masked_image_latents
- )
-
- num_channels_mask = mask.shape[1]
- num_channels_masked_image = masked_image_latents.shape[1]
-
- unet_input_channels = NUM_UNET_INPUT_CHANNELS
- if num_channels_latents + num_channels_mask + num_channels_masked_image != unet_input_channels:
- raise ValueError(
- "Incorrect configuration settings! The config of `pipeline.unet` expects"
- f" {unet_input_channels} but received `num_channels_latents`: {num_channels_latents} +"
- f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}"
- f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of"
- " `pipeline.unet` or your `mask_image` or `image` input."
- )
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * np.float64(self.scheduler.init_noise_sigma)
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
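-        # read the expected timestep dtype from the ONNX graph, defaulting to float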
- timestep_dtype = next(
- (input.type for input in self.unet.model.get_inputs() if input.name == "timestep"), "tensor(float)"
- )
- timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
-
- for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
-            # concat latents, mask, masked_image_latents in the channel dimension
- latent_model_input = self.scheduler.scale_model_input(torch.from_numpy(latent_model_input), t)
- latent_model_input = latent_model_input.cpu().numpy()
- latent_model_input = np.concatenate([latent_model_input, mask, masked_image_latents], axis=1)
-
- # predict the noise residual
- timestep = np.array([t], dtype=timestep_dtype)
- noise_pred = self.unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds)[
- 0
- ]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- scheduler_output = self.scheduler.step(
- torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs
- )
- latents = scheduler_output.prev_sample.numpy()
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- latents = 1 / 0.18215 * latents
-        # image = self.vae_decoder(latent_sample=latents)[0]
-        # the half-precision VAE decoder appears to produce incorrect results for batch sizes > 1,
-        # so decode the latents one sample at a time
- image = np.concatenate(
- [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])]
- )
-
- image = np.clip(image / 2 + 0.5, 0, 1)
- image = image.transpose((0, 2, 3, 1))
-
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(
- self.numpy_to_pil(image), return_tensors="np"
- ).pixel_values.astype(image.dtype)
- # safety_checker does not support batched inputs yet
- images, has_nsfw_concept = [], []
- for i in range(image.shape[0]):
- image_i, has_nsfw_concept_i = self.safety_checker(
- clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
- )
- images.append(image_i)
- has_nsfw_concept.append(has_nsfw_concept_i[0])
- image = np.concatenate(images)
- else:
- has_nsfw_concept = None
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting.py
deleted file mode 100644
index 1317fcb64e81da9bd9f5b3325ee7a4bcc1abac17..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_inpainting.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import random
-import unittest
-
-import torch
-
-from diffusers import IFInpaintingPipeline
-from diffusers.utils import floats_tensor
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import skip_mps, torch_device
-
-from ..pipeline_params import (
- TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS,
- TEXT_GUIDED_IMAGE_INPAINTING_PARAMS,
-)
-from ..test_pipelines_common import PipelineTesterMixin
-from . import IFPipelineTesterMixin
-
-
-@skip_mps
-class IFInpaintingPipelineFastTests(PipelineTesterMixin, IFPipelineTesterMixin, unittest.TestCase):
- pipeline_class = IFInpaintingPipeline
- params = TEXT_GUIDED_IMAGE_INPAINTING_PARAMS - {"width", "height"}
- batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS
- required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"}
-
- def get_dummy_components(self):
- return self._get_dummy_components()
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
-
- image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
- mask_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
-
- inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "image": image,
- "mask_image": mask_image,
- "generator": generator,
- "num_inference_steps": 2,
- "output_type": "numpy",
- }
-
- return inputs
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_xformers_attention_forwardGenerator_pass(self):
- self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=1e-3)
-
- def test_save_load_optional_components(self):
- self._test_save_load_optional_components()
-
- @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
- def test_save_load_float16(self):
- # Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder
- super().test_save_load_float16(expected_max_diff=1e-1)
-
- def test_attention_slicing_forward_pass(self):
- self._test_attention_slicing_forward_pass(expected_max_diff=1e-2)
-
- def test_save_load_local(self):
- self._test_save_load_local()
-
- def test_inference_batch_single_identical(self):
- self._test_inference_batch_single_identical(
- expected_max_diff=1e-2,
- )
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/overwrite_expected_slice.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/overwrite_expected_slice.py
deleted file mode 100644
index 7aa66727150a120241e9e1020acc1d395dc2e5f2..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/utils/overwrite_expected_slice.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import argparse
-from collections import defaultdict
-
-
-def overwrite_file(file, class_name, test_name, correct_line, done_test):
- _id = f"{file}_{class_name}_{test_name}"
- done_test[_id] += 1
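-    # count how many times this (file, class, test) triple has been patched so that repeated
-    # expected lines map onto successive occurrences in the target file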
-
- with open(file, "r") as f:
- lines = f.readlines()
-
- class_regex = f"class {class_name}("
- test_regex = f"{4 * ' '}def {test_name}("
- line_begin_regex = f"{8 * ' '}{correct_line.split()[0]}"
- another_line_begin_regex = f"{16 * ' '}{correct_line.split()[0]}"
- in_class = False
- in_func = False
- in_line = False
- insert_line = False
- count = 0
- spaces = 0
-
- new_lines = []
- for line in lines:
- if line.startswith(class_regex):
- in_class = True
- elif in_class and line.startswith(test_regex):
- in_func = True
- elif in_class and in_func and (line.startswith(line_begin_regex) or line.startswith(another_line_begin_regex)):
- spaces = len(line.split(correct_line.split()[0])[0])
- count += 1
-
- if count == done_test[_id]:
- in_line = True
-
- if in_class and in_func and in_line:
- if ")" not in line:
- continue
- else:
- insert_line = True
-
- if in_class and in_func and in_line and insert_line:
- new_lines.append(f"{spaces * ' '}{correct_line}")
- in_class = in_func = in_line = insert_line = False
- else:
- new_lines.append(line)
-
- with open(file, "w") as f:
- for line in new_lines:
- f.write(line)
-
-
-def main(correct, fail=None):
- if fail is not None:
- with open(fail, "r") as f:
- test_failures = {l.strip() for l in f.readlines()}
- else:
- test_failures = None
-
- with open(correct, "r") as f:
- correct_lines = f.readlines()
-
- done_tests = defaultdict(int)
- for line in correct_lines:
- file, class_name, test_name, correct_line = line.split(";")
- if test_failures is None or "::".join([file, class_name, test_name]) in test_failures:
- overwrite_file(file, class_name, test_name, correct_line, done_tests)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--correct_filename", help="filename of tests with expected result")
- parser.add_argument("--fail_filename", help="filename of test failures", type=str, default=None)
- args = parser.parse_args()
-
- main(args.correct_filename, args.fail_filename)
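
For context, a hedged end-to-end sketch of driving this script: each line of the `--correct_filename` file is `file;class_name;test_name;replacement_line`, and the replacement is spliced in at the matching indentation. All file, class, and test names below are made up.

    import os
    # Assumes the script above is importable, e.g. as overwrite_expected_slice.
    from overwrite_expected_slice import main

    os.makedirs("tests", exist_ok=True)
    with open("tests/test_foo.py", "w") as f:
        f.write(
            "class FooTests(unittest.TestCase):\n"
            "    def test_bar(self):\n"
            "        expected_slice = np.array([0.0])\n"
        )

    # One record per line: file;class;test;replacement line.
    with open("correct.txt", "w") as f:
        f.write("tests/test_foo.py;FooTests;test_bar;expected_slice = np.array([0.1, 0.2])\n")

    main("correct.txt")  # rewrites the expected_slice line in place
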
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/base.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/base.py
deleted file mode 100644
index f845256729458ced821762a1b8ef881e17ff9955..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/hooks/logger/base.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-from abc import ABCMeta, abstractmethod
-
-import numpy as np
-import torch
-
-from ..hook import Hook
-
-
-class LoggerHook(Hook):
- """Base class for logger hooks.
-
- Args:
- interval (int): Logging interval (every k iterations).
- ignore_last (bool): Ignore the log of the last iterations in each epoch
- if their number is less than `interval`.
- reset_flag (bool): Whether to clear the output buffer after logging.
- by_epoch (bool): Whether EpochBasedRunner is used.
- """
-
- __metaclass__ = ABCMeta
-
- def __init__(self,
- interval=10,
- ignore_last=True,
- reset_flag=False,
- by_epoch=True):
- self.interval = interval
- self.ignore_last = ignore_last
- self.reset_flag = reset_flag
- self.by_epoch = by_epoch
-
- @abstractmethod
- def log(self, runner):
- pass
-
- @staticmethod
- def is_scalar(val, include_np=True, include_torch=True):
- """Tell the input variable is a scalar or not.
-
- Args:
- val: Input variable.
- include_np (bool): Whether include 0-d np.ndarray as a scalar.
- include_torch (bool): Whether include 0-d torch.Tensor as a scalar.
-
- Returns:
- bool: True or False.
- """
- if isinstance(val, numbers.Number):
- return True
- elif include_np and isinstance(val, np.ndarray) and val.ndim == 0:
- return True
- elif include_torch and isinstance(val, torch.Tensor) and len(val) == 1:
- return True
- else:
- return False
-
- def get_mode(self, runner):
- if runner.mode == 'train':
- if 'time' in runner.log_buffer.output:
- mode = 'train'
- else:
- mode = 'val'
- elif runner.mode == 'val':
- mode = 'val'
- else:
- raise ValueError(f"runner mode should be 'train' or 'val', "
- f'but got {runner.mode}')
- return mode
-
- def get_epoch(self, runner):
- if runner.mode == 'train':
- epoch = runner.epoch + 1
- elif runner.mode == 'val':
- # normal val mode
- # runner.epoch += 1 has been done before val workflow
- epoch = runner.epoch
- else:
- raise ValueError(f"runner mode should be 'train' or 'val', "
- f'but got {runner.mode}')
- return epoch
-
- def get_iter(self, runner, inner_iter=False):
- """Get the current training iteration step."""
- if self.by_epoch and inner_iter:
- current_iter = runner.inner_iter + 1
- else:
- current_iter = runner.iter + 1
- return current_iter
-
- def get_lr_tags(self, runner):
- tags = {}
- lrs = runner.current_lr()
- if isinstance(lrs, dict):
- for name, value in lrs.items():
- tags[f'learning_rate/{name}'] = value[0]
- else:
- tags['learning_rate'] = lrs[0]
- return tags
-
- def get_momentum_tags(self, runner):
- tags = {}
- momentums = runner.current_momentum()
- if isinstance(momentums, dict):
- for name, value in momentums.items():
- tags[f'momentum/{name}'] = value[0]
- else:
- tags['momentum'] = momentums[0]
- return tags
-
- def get_loggable_tags(self,
- runner,
- allow_scalar=True,
- allow_text=False,
- add_mode=True,
- tags_to_skip=('time', 'data_time')):
- tags = {}
- for var, val in runner.log_buffer.output.items():
- if var in tags_to_skip:
- continue
- if self.is_scalar(val) and not allow_scalar:
- continue
- if isinstance(val, str) and not allow_text:
- continue
- if add_mode:
- var = f'{self.get_mode(runner)}/{var}'
- tags[var] = val
- tags.update(self.get_lr_tags(runner))
- tags.update(self.get_momentum_tags(runner))
- return tags
-
- def before_run(self, runner):
- for hook in runner.hooks[::-1]:
- if isinstance(hook, LoggerHook):
- hook.reset_flag = True
- break
-
- def before_epoch(self, runner):
- runner.log_buffer.clear() # clear logs of last epoch
-
- def after_train_iter(self, runner):
- if self.by_epoch and self.every_n_inner_iters(runner, self.interval):
- runner.log_buffer.average(self.interval)
- elif not self.by_epoch and self.every_n_iters(runner, self.interval):
- runner.log_buffer.average(self.interval)
- elif self.end_of_epoch(runner) and not self.ignore_last:
- # not precise but more stable
- runner.log_buffer.average(self.interval)
-
- if runner.log_buffer.ready:
- self.log(runner)
- if self.reset_flag:
- runner.log_buffer.clear_output()
-
- def after_train_epoch(self, runner):
- if runner.log_buffer.ready:
- self.log(runner)
- if self.reset_flag:
- runner.log_buffer.clear_output()
-
- def after_val_epoch(self, runner):
- runner.log_buffer.average()
- self.log(runner)
- if self.reset_flag:
- runner.log_buffer.clear_output()
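
To make the abstract contract above concrete, here is a minimal sketch of a subclass (assuming the `LoggerHook` above is importable) that only implements `log` and prints whatever `get_loggable_tags` collects:

    class PrintLoggerHook(LoggerHook):
        """Toy logger: print scalar tags plus lr/momentum every `interval` iters."""

        def log(self, runner):
            # get_loggable_tags gathers scalar outputs from the log buffer
            # and adds learning-rate and momentum tags.
            tags = self.get_loggable_tags(runner)
            print(f"iter {self.get_iter(runner)}: {tags}")
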
diff --git a/spaces/Anonymous-sub/Rerender/LICENSE.md b/spaces/Anonymous-sub/Rerender/LICENSE.md
deleted file mode 100644
index 8c866a85fe5a1f47e4781d403a93766b8810d6bc..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/LICENSE.md
+++ /dev/null
@@ -1,201 +0,0 @@
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright [yyyy] [name of copyright owner]
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
\ No newline at end of file
diff --git a/spaces/AntNikYab/NaturalLanguageProcessing/pages/pushkin.py b/spaces/AntNikYab/NaturalLanguageProcessing/pages/pushkin.py
deleted file mode 100644
index d7292ffe3ab18e55271729b74a7d3f7e2c669172..0000000000000000000000000000000000000000
--- a/spaces/AntNikYab/NaturalLanguageProcessing/pages/pushkin.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import streamlit as st
-import textwrap
-import torch
-from transformers import GPT2LMHeadModel, GPT2Tokenizer
-
-DEVICE = torch.device("cpu")
-# Load GPT-2 model and tokenizer
-tokenizer = GPT2Tokenizer.from_pretrained('sberbank-ai/rugpt3small_based_on_gpt2')
-model_finetuned = GPT2LMHeadModel.from_pretrained(
- 'sberbank-ai/rugpt3small_based_on_gpt2',
- output_attentions=False,
- output_hidden_states=False,
-)
-if torch.cuda.is_available():
- model_finetuned.load_state_dict(torch.load('models/model_pushkin.pt'))
-else:
- model_finetuned.load_state_dict(torch.load('models/model_pushkin.pt', map_location=torch.device('cpu')))
-model_finetuned.eval()
-
-# Function to generate text
-def generate_text(prompt, temperature, top_p, max_length, top_k):
- input_ids = tokenizer.encode(prompt, return_tensors="pt")
-
- with torch.no_grad():
- out = model_finetuned.generate(
- input_ids,
- do_sample=True,
- num_beams=5,
- temperature=temperature,
- top_p=top_p,
- max_length=max_length,
- top_k=top_k,
- no_repeat_ngram_size=3,
- num_return_sequences=1,
- )
-
- generated_text = list(map(tokenizer.decode, out))
- return generated_text
-
-# Streamlit app
-def main():
- st.title("Text generation with a GPT model in the style of A. S. Pushkin")
-
- # User inputs
- prompt = st.text_area("Enter the beginning of the text")
- temperature = st.slider("Temperature", min_value=0.2, max_value=2.5, value=1.8, step=0.1)
- top_p = st.slider("Top-p", min_value=0.1, max_value=1.0, value=0.9, step=0.1)
- max_length = st.slider("Max Length", min_value=10, max_value=300, value=100, step=10)
- top_k = st.slider("Top-k", min_value=1, max_value=500, value=500, step=10)
- num_return_sequences = st.slider("Number of Sequences", min_value=1, max_value=5, value=1, step=1)
-
- if st.button("Generate Text"):
- st.subheader("Generated Text:")
- for i in range(num_return_sequences):
- generated_text = generate_text(prompt, temperature, top_p, max_length, top_k)
- st.write(f"Generated Text {i + 1}:")
- wrapped_text = textwrap.fill(generated_text[0], width=80)
- st.write(wrapped_text)
- st.write("------------------")
-
-st.sidebar.image('images/pushkin.jpeg', use_column_width=True)
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/__init__.py
deleted file mode 100644
index 858a41014169b8f0eb1b905fa3bb69c753a1bda5..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/__init__.py
+++ /dev/null
@@ -1,132 +0,0 @@
-"""
-Package containing all pip commands
-"""
-
-import importlib
-from collections import namedtuple
-from typing import Any, Dict, Optional
-
-from pip._internal.cli.base_command import Command
-
-CommandInfo = namedtuple("CommandInfo", "module_path, class_name, summary")
-
-# This dictionary does a bunch of heavy lifting for help output:
-# - Enables avoiding additional (costly) imports for presenting `--help`.
-# - The ordering matters for help display.
-#
-# Even though the module path starts with the same "pip._internal.commands"
-# prefix, the full path makes testing easier (specifically when modifying
-# `commands_dict` in test setup / teardown).
-commands_dict: Dict[str, CommandInfo] = {
- "install": CommandInfo(
- "pip._internal.commands.install",
- "InstallCommand",
- "Install packages.",
- ),
- "download": CommandInfo(
- "pip._internal.commands.download",
- "DownloadCommand",
- "Download packages.",
- ),
- "uninstall": CommandInfo(
- "pip._internal.commands.uninstall",
- "UninstallCommand",
- "Uninstall packages.",
- ),
- "freeze": CommandInfo(
- "pip._internal.commands.freeze",
- "FreezeCommand",
- "Output installed packages in requirements format.",
- ),
- "inspect": CommandInfo(
- "pip._internal.commands.inspect",
- "InspectCommand",
- "Inspect the python environment.",
- ),
- "list": CommandInfo(
- "pip._internal.commands.list",
- "ListCommand",
- "List installed packages.",
- ),
- "show": CommandInfo(
- "pip._internal.commands.show",
- "ShowCommand",
- "Show information about installed packages.",
- ),
- "check": CommandInfo(
- "pip._internal.commands.check",
- "CheckCommand",
- "Verify installed packages have compatible dependencies.",
- ),
- "config": CommandInfo(
- "pip._internal.commands.configuration",
- "ConfigurationCommand",
- "Manage local and global configuration.",
- ),
- "search": CommandInfo(
- "pip._internal.commands.search",
- "SearchCommand",
- "Search PyPI for packages.",
- ),
- "cache": CommandInfo(
- "pip._internal.commands.cache",
- "CacheCommand",
- "Inspect and manage pip's wheel cache.",
- ),
- "index": CommandInfo(
- "pip._internal.commands.index",
- "IndexCommand",
- "Inspect information available from package indexes.",
- ),
- "wheel": CommandInfo(
- "pip._internal.commands.wheel",
- "WheelCommand",
- "Build wheels from your requirements.",
- ),
- "hash": CommandInfo(
- "pip._internal.commands.hash",
- "HashCommand",
- "Compute hashes of package archives.",
- ),
- "completion": CommandInfo(
- "pip._internal.commands.completion",
- "CompletionCommand",
- "A helper command used for command completion.",
- ),
- "debug": CommandInfo(
- "pip._internal.commands.debug",
- "DebugCommand",
- "Show information useful for debugging.",
- ),
- "help": CommandInfo(
- "pip._internal.commands.help",
- "HelpCommand",
- "Show help for commands.",
- ),
-}
-
-
-def create_command(name: str, **kwargs: Any) -> Command:
- """
- Create an instance of the Command class with the given name.
- """
- module_path, class_name, summary = commands_dict[name]
- module = importlib.import_module(module_path)
- command_class = getattr(module, class_name)
- command = command_class(name=name, summary=summary, **kwargs)
-
- return command
-
-
-def get_similar_commands(name: str) -> Optional[str]:
- """Command name auto-correct."""
- from difflib import get_close_matches
-
- name = name.lower()
-
- close_commands = get_close_matches(name, commands_dict.keys())
-
- if close_commands:
- return close_commands[0]
- else:
- return None
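
`pip._internal` is not a stable public API, but purely for illustration, the two helpers above can be exercised like this:

    # Illustration only: pip._internal offers no compatibility guarantees.
    from pip._internal.commands import create_command, get_similar_commands

    print(get_similar_commands("instal"))  # auto-correct suggests "install"

    cmd = create_command("freeze")         # lazily imports and instantiates FreezeCommand
    print(cmd.name, "-", cmd.summary)
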
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/search.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/search.py
deleted file mode 100644
index 03ed925b246dd551ec2ef45095ed6cad00fd2745..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/search.py
+++ /dev/null
@@ -1,174 +0,0 @@
-import logging
-import shutil
-import sys
-import textwrap
-import xmlrpc.client
-from collections import OrderedDict
-from optparse import Values
-from typing import TYPE_CHECKING, Dict, List, Optional
-
-from pip._vendor.packaging.version import parse as parse_version
-
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.req_command import SessionCommandMixin
-from pip._internal.cli.status_codes import NO_MATCHES_FOUND, SUCCESS
-from pip._internal.exceptions import CommandError
-from pip._internal.metadata import get_default_environment
-from pip._internal.models.index import PyPI
-from pip._internal.network.xmlrpc import PipXmlrpcTransport
-from pip._internal.utils.logging import indent_log
-from pip._internal.utils.misc import write_output
-
-if TYPE_CHECKING:
- from typing import TypedDict
-
- class TransformedHit(TypedDict):
- name: str
- summary: str
- versions: List[str]
-
-
-logger = logging.getLogger(__name__)
-
-
-class SearchCommand(Command, SessionCommandMixin):
- """Search for PyPI packages whose name or summary contains ."""
-
- usage = """
- %prog [options] """
- ignore_require_venv = True
-
- def add_options(self) -> None:
- self.cmd_opts.add_option(
- "-i",
- "--index",
- dest="index",
- metavar="URL",
- default=PyPI.pypi_url,
- help="Base URL of Python Package Index (default %default)",
- )
-
- self.parser.insert_option_group(0, self.cmd_opts)
-
- def run(self, options: Values, args: List[str]) -> int:
- if not args:
- raise CommandError("Missing required argument (search query).")
- query = args
- pypi_hits = self.search(query, options)
- hits = transform_hits(pypi_hits)
-
- terminal_width = None
- if sys.stdout.isatty():
- terminal_width = shutil.get_terminal_size()[0]
-
- print_results(hits, terminal_width=terminal_width)
- if pypi_hits:
- return SUCCESS
- return NO_MATCHES_FOUND
-
- def search(self, query: List[str], options: Values) -> List[Dict[str, str]]:
- index_url = options.index
-
- session = self.get_default_session(options)
-
- transport = PipXmlrpcTransport(index_url, session)
- pypi = xmlrpc.client.ServerProxy(index_url, transport)
- try:
- hits = pypi.search({"name": query, "summary": query}, "or")
- except xmlrpc.client.Fault as fault:
- message = "XMLRPC request failed [code: {code}]\n{string}".format(
- code=fault.faultCode,
- string=fault.faultString,
- )
- raise CommandError(message)
- assert isinstance(hits, list)
- return hits
-
-
-def transform_hits(hits: List[Dict[str, str]]) -> List["TransformedHit"]:
- """
- The list from pypi is really a list of versions. We want a list of
- packages with the list of versions stored inline. This converts the
- list from pypi into one we can use.
- """
- packages: Dict[str, "TransformedHit"] = OrderedDict()
- for hit in hits:
- name = hit["name"]
- summary = hit["summary"]
- version = hit["version"]
-
- if name not in packages:
- packages[name] = {
- "name": name,
- "summary": summary,
- "versions": [version],
- }
- else:
- packages[name]["versions"].append(version)
-
- # if this is the highest version, replace the stored summary
- if version == highest_version(packages[name]["versions"]):
- packages[name]["summary"] = summary
-
- return list(packages.values())
-
-
-def print_dist_installation_info(name: str, latest: str) -> None:
- env = get_default_environment()
- dist = env.get_distribution(name)
- if dist is not None:
- with indent_log():
- if dist.version == latest:
- write_output("INSTALLED: %s (latest)", dist.version)
- else:
- write_output("INSTALLED: %s", dist.version)
- if parse_version(latest).pre:
- write_output(
- "LATEST: %s (pre-release; install"
- " with `pip install --pre`)",
- latest,
- )
- else:
- write_output("LATEST: %s", latest)
-
-
-def print_results(
- hits: List["TransformedHit"],
- name_column_width: Optional[int] = None,
- terminal_width: Optional[int] = None,
-) -> None:
- if not hits:
- return
- if name_column_width is None:
- name_column_width = (
- max(
- [
- len(hit["name"]) + len(highest_version(hit.get("versions", ["-"])))
- for hit in hits
- ]
- )
- + 4
- )
-
- for hit in hits:
- name = hit["name"]
- summary = hit["summary"] or ""
- latest = highest_version(hit.get("versions", ["-"]))
- if terminal_width is not None:
- target_width = terminal_width - name_column_width - 5
- if target_width > 10:
- # wrap and indent summary to fit terminal
- summary_lines = textwrap.wrap(summary, target_width)
- summary = ("\n" + " " * (name_column_width + 3)).join(summary_lines)
-
- name_latest = f"{name} ({latest})"
- line = f"{name_latest:{name_column_width}} - {summary}"
- try:
- write_output(line)
- print_dist_installation_info(name, latest)
- except UnicodeEncodeError:
- pass
-
-
-def highest_version(versions: List[str]) -> str:
- return max(versions, key=parse_version)
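
A small sketch of what `transform_hits` and `highest_version` do to the raw per-version records (the sample data is made up; note also that PyPI's XML-RPC search endpoint has been disabled server-side for some time, so `pip search` no longer works against pypi.org):

    # Assumes the module above is importable; it ships with pip.
    from pip._internal.commands.search import highest_version, transform_hits

    hits = [
        {"name": "requests", "summary": "HTTP for Humans", "version": "2.31.0"},
        {"name": "requests", "summary": "old summary", "version": "2.0.0"},
    ]

    packages = transform_hits(hits)  # one entry per package, versions folded in
    assert packages[0]["versions"] == ["2.31.0", "2.0.0"]
    # The summary kept is the one attached to the highest version seen.
    assert packages[0]["summary"] == "HTTP for Humans"
    assert highest_version(packages[0]["versions"]) == "2.31.0"
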
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/engine/hooks.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/engine/hooks.py
deleted file mode 100644
index 52c321f979726b8aa89ba34874bc6729a75b70b4..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/engine/hooks.py
+++ /dev/null
@@ -1,686 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import datetime
-import itertools
-import logging
-import math
-import operator
-import os
-import tempfile
-import time
-import warnings
-from collections import Counter
-import torch
-from fvcore.common.checkpoint import Checkpointer
-from fvcore.common.checkpoint import PeriodicCheckpointer as _PeriodicCheckpointer
-from fvcore.common.param_scheduler import ParamScheduler
-from fvcore.common.timer import Timer
-from fvcore.nn.precise_bn import get_bn_modules, update_bn_stats
-
-import detectron2.utils.comm as comm
-from detectron2.evaluation.testing import flatten_results_dict
-from detectron2.solver import LRMultiplier
-from detectron2.utils.events import EventStorage, EventWriter
-from detectron2.utils.file_io import PathManager
-
-from .train_loop import HookBase
-
-__all__ = [
- "CallbackHook",
- "IterationTimer",
- "PeriodicWriter",
- "PeriodicCheckpointer",
- "BestCheckpointer",
- "LRScheduler",
- "AutogradProfiler",
- "EvalHook",
- "PreciseBN",
- "TorchProfiler",
- "TorchMemoryStats",
-]
-
-
-"""
-Implement some common hooks.
-"""
-
-
-class CallbackHook(HookBase):
- """
- Create a hook using callback functions provided by the user.
- """
-
- def __init__(self, *, before_train=None, after_train=None, before_step=None, after_step=None):
- """
- Each argument is a function that takes one argument: the trainer.
- """
- self._before_train = before_train
- self._before_step = before_step
- self._after_step = after_step
- self._after_train = after_train
-
- def before_train(self):
- if self._before_train:
- self._before_train(self.trainer)
-
- def after_train(self):
- if self._after_train:
- self._after_train(self.trainer)
- # The functions may be closures that hold reference to the trainer
- # Therefore, delete them to avoid circular reference.
- del self._before_train, self._after_train
- del self._before_step, self._after_step
-
- def before_step(self):
- if self._before_step:
- self._before_step(self.trainer)
-
- def after_step(self):
- if self._after_step:
- self._after_step(self.trainer)
-
-
-class IterationTimer(HookBase):
- """
- Track the time spent for each iteration (each run_step call in the trainer).
- Print a summary at the end of training.
-
- This hook uses the time between the call to its :meth:`before_step`
- and :meth:`after_step` methods.
- Under the convention that :meth:`before_step` of all hooks should only
- take a negligible amount of time, the :class:`IterationTimer` hook should be
- placed at the beginning of the list of hooks to obtain accurate timing.
- """
-
- def __init__(self, warmup_iter=3):
- """
- Args:
- warmup_iter (int): the number of iterations at the beginning to exclude
- from timing.
- """
- self._warmup_iter = warmup_iter
- self._step_timer = Timer()
- self._start_time = time.perf_counter()
- self._total_timer = Timer()
-
- def before_train(self):
- self._start_time = time.perf_counter()
- self._total_timer.reset()
- self._total_timer.pause()
-
- def after_train(self):
- logger = logging.getLogger(__name__)
- total_time = time.perf_counter() - self._start_time
- total_time_minus_hooks = self._total_timer.seconds()
- hook_time = total_time - total_time_minus_hooks
-
- num_iter = self.trainer.storage.iter + 1 - self.trainer.start_iter - self._warmup_iter
-
- if num_iter > 0 and total_time_minus_hooks > 0:
- # Speed is meaningful only after warmup
- # NOTE this format is parsed by grep in some scripts
- logger.info(
- "Overall training speed: {} iterations in {} ({:.4f} s / it)".format(
- num_iter,
- str(datetime.timedelta(seconds=int(total_time_minus_hooks))),
- total_time_minus_hooks / num_iter,
- )
- )
-
- logger.info(
- "Total training time: {} ({} on hooks)".format(
- str(datetime.timedelta(seconds=int(total_time))),
- str(datetime.timedelta(seconds=int(hook_time))),
- )
- )
-
- def before_step(self):
- self._step_timer.reset()
- self._total_timer.resume()
-
- def after_step(self):
- # +1 because we're in after_step, the current step is done
- # but not yet counted
- iter_done = self.trainer.storage.iter - self.trainer.start_iter + 1
- if iter_done >= self._warmup_iter:
- sec = self._step_timer.seconds()
- self.trainer.storage.put_scalars(time=sec)
- else:
- self._start_time = time.perf_counter()
- self._total_timer.reset()
-
- self._total_timer.pause()
-
-
-class PeriodicWriter(HookBase):
- """
- Write events to EventStorage (by calling ``writer.write()``) periodically.
-
- It is executed every ``period`` iterations and after the last iteration.
- Note that ``period`` does not affect how data is smoothed by each writer.
- """
-
- def __init__(self, writers, period=20):
- """
- Args:
- writers (list[EventWriter]): a list of EventWriter objects
- period (int):
- """
- self._writers = writers
- for w in writers:
- assert isinstance(w, EventWriter), w
- self._period = period
-
- def after_step(self):
- if (self.trainer.iter + 1) % self._period == 0 or (
- self.trainer.iter == self.trainer.max_iter - 1
- ):
- for writer in self._writers:
- writer.write()
-
- def after_train(self):
- for writer in self._writers:
- # If any new data is found (e.g. produced by other after_train),
- # write them before closing
- writer.write()
- writer.close()
-
-
-class PeriodicCheckpointer(_PeriodicCheckpointer, HookBase):
- """
- Same as :class:`detectron2.checkpoint.PeriodicCheckpointer`, but as a hook.
-
- Note that when used as a hook,
- it is unable to save additional data other than what's defined
- by the given `checkpointer`.
-
- It is executed every ``period`` iterations and after the last iteration.
- """
-
- def before_train(self):
- self.max_iter = self.trainer.max_iter
-
- def after_step(self):
- # No way to use **kwargs
- self.step(self.trainer.iter)
-
-
-class BestCheckpointer(HookBase):
- """
- Checkpoints the best weights based on a given metric.
-
- This hook should be used in conjunction with, and executed after, the hook
- that produces the metric, e.g. `EvalHook`.
- """
-
- def __init__(
- self,
- eval_period: int,
- checkpointer: Checkpointer,
- val_metric: str,
- mode: str = "max",
- file_prefix: str = "model_best",
- ) -> None:
- """
- Args:
- eval_period (int): the period `EvalHook` is set to run.
- checkpointer: the checkpointer object used to save checkpoints.
- val_metric (str): validation metric to track for best checkpoint, e.g. "bbox/AP50"
- mode (str): one of {'max', 'min'}. controls whether the chosen val metric should be
- maximized or minimized, e.g. for "bbox/AP50" it should be "max"
- file_prefix (str): the prefix of checkpoint's filename, defaults to "model_best"
- """
- self._logger = logging.getLogger(__name__)
- self._period = eval_period
- self._val_metric = val_metric
- assert mode in [
- "max",
- "min",
- ], f'Mode "{mode}" to `BestCheckpointer` is unknown. It should be one of {"max", "min"}.'
- if mode == "max":
- self._compare = operator.gt
- else:
- self._compare = operator.lt
- self._checkpointer = checkpointer
- self._file_prefix = file_prefix
- self.best_metric = None
- self.best_iter = None
-
- def _update_best(self, val, iteration):
- if math.isnan(val) or math.isinf(val):
- return False
- self.best_metric = val
- self.best_iter = iteration
- return True
-
- def _best_checking(self):
- metric_tuple = self.trainer.storage.latest().get(self._val_metric)
- if metric_tuple is None:
- self._logger.warning(
- f"Given val metric {self._val_metric} does not seem to be computed/stored."
- "Will not be checkpointing based on it."
- )
- return
- else:
- latest_metric, metric_iter = metric_tuple
-
- if self.best_metric is None:
- if self._update_best(latest_metric, metric_iter):
- additional_state = {"iteration": metric_iter}
- self._checkpointer.save(f"{self._file_prefix}", **additional_state)
- self._logger.info(
- f"Saved first model at {self.best_metric:0.5f} @ {self.best_iter} steps"
- )
- elif self._compare(latest_metric, self.best_metric):
- additional_state = {"iteration": metric_iter}
- self._checkpointer.save(f"{self._file_prefix}", **additional_state)
- self._logger.info(
- f"Saved best model as latest eval score for {self._val_metric} is "
- f"{latest_metric:0.5f}, better than last best score "
- f"{self.best_metric:0.5f} @ iteration {self.best_iter}."
- )
- self._update_best(latest_metric, metric_iter)
- else:
- self._logger.info(
- f"Not saving as latest eval score for {self._val_metric} is {latest_metric:0.5f}, "
- f"not better than best score {self.best_metric:0.5f} @ iteration {self.best_iter}."
- )
-
- def after_step(self):
- # same conditions as `EvalHook`
- next_iter = self.trainer.iter + 1
- if (
- self._period > 0
- and next_iter % self._period == 0
- and next_iter != self.trainer.max_iter
- ):
- self._best_checking()
-
- def after_train(self):
- # same conditions as `EvalHook`
- if self.trainer.iter + 1 >= self.trainer.max_iter:
- self._best_checking()
-
-
-class LRScheduler(HookBase):
- """
- A hook which executes a torch builtin LR scheduler and summarizes the LR.
- It is executed after every iteration.
- """
-
- def __init__(self, optimizer=None, scheduler=None):
- """
- Args:
- optimizer (torch.optim.Optimizer):
- scheduler (torch.optim.LRScheduler or fvcore.common.param_scheduler.ParamScheduler):
- if a :class:`ParamScheduler` object, it defines the multiplier over the base LR
- in the optimizer.
-
- If any argument is not given, will try to obtain it from the trainer.
- """
- self._optimizer = optimizer
- self._scheduler = scheduler
-
- def before_train(self):
- self._optimizer = self._optimizer or self.trainer.optimizer
- if isinstance(self.scheduler, ParamScheduler):
- self._scheduler = LRMultiplier(
- self._optimizer,
- self.scheduler,
- self.trainer.max_iter,
- last_iter=self.trainer.iter - 1,
- )
- self._best_param_group_id = LRScheduler.get_best_param_group_id(self._optimizer)
-
- @staticmethod
- def get_best_param_group_id(optimizer):
- # NOTE: some heuristics on what LR to summarize
- # summarize the param group with most parameters
- largest_group = max(len(g["params"]) for g in optimizer.param_groups)
-
- if largest_group == 1:
- # If all groups have one parameter,
- # then find the most common initial LR, and use it for summary
- lr_count = Counter([g["lr"] for g in optimizer.param_groups])
- lr = lr_count.most_common()[0][0]
- for i, g in enumerate(optimizer.param_groups):
- if g["lr"] == lr:
- return i
- else:
- for i, g in enumerate(optimizer.param_groups):
- if len(g["params"]) == largest_group:
- return i
-
- def after_step(self):
- lr = self._optimizer.param_groups[self._best_param_group_id]["lr"]
- self.trainer.storage.put_scalar("lr", lr, smoothing_hint=False)
- self.scheduler.step()
-
- @property
- def scheduler(self):
- return self._scheduler or self.trainer.scheduler
-
- def state_dict(self):
- if isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler):
- return self.scheduler.state_dict()
- return {}
-
- def load_state_dict(self, state_dict):
- if isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler):
- logger = logging.getLogger(__name__)
- logger.info("Loading scheduler from state_dict ...")
- self.scheduler.load_state_dict(state_dict)
-
-
-class TorchProfiler(HookBase):
- """
- A hook which runs `torch.profiler.profile`.
-
- Examples:
- ::
- hooks.TorchProfiler(
- lambda trainer: 10 < trainer.iter < 20, self.cfg.OUTPUT_DIR
- )
-
- The above example will run the profiler for iteration 10~20 and dump
- results to ``OUTPUT_DIR``. We did not profile the first few iterations
- because they are typically slower than the rest.
- The result files can be loaded in the ``chrome://tracing`` page in chrome browser,
- and the tensorboard visualizations can be visualized using
- ``tensorboard --logdir OUTPUT_DIR/log``
- """
-
- def __init__(self, enable_predicate, output_dir, *, activities=None, save_tensorboard=True):
- """
- Args:
- enable_predicate (callable[trainer -> bool]): a function which takes a trainer,
- and returns whether to enable the profiler.
- It will be called once every step, and can be used to select which steps to profile.
- output_dir (str): the output directory to dump tracing files.
- activities (iterable): same as in `torch.profiler.profile`.
- save_tensorboard (bool): whether to save tensorboard visualizations at (output_dir)/log/
- """
- self._enable_predicate = enable_predicate
- self._activities = activities
- self._output_dir = output_dir
- self._save_tensorboard = save_tensorboard
-
- def before_step(self):
- if self._enable_predicate(self.trainer):
- if self._save_tensorboard:
- on_trace_ready = torch.profiler.tensorboard_trace_handler(
- os.path.join(
- self._output_dir,
- "log",
- "profiler-tensorboard-iter{}".format(self.trainer.iter),
- ),
- f"worker{comm.get_rank()}",
- )
- else:
- on_trace_ready = None
- self._profiler = torch.profiler.profile(
- activities=self._activities,
- on_trace_ready=on_trace_ready,
- record_shapes=True,
- profile_memory=True,
- with_stack=True,
- with_flops=True,
- )
- self._profiler.__enter__()
- else:
- self._profiler = None
-
- def after_step(self):
- if self._profiler is None:
- return
- self._profiler.__exit__(None, None, None)
- if not self._save_tensorboard:
- PathManager.mkdirs(self._output_dir)
- out_file = os.path.join(
- self._output_dir, "profiler-trace-iter{}.json".format(self.trainer.iter)
- )
- if "://" not in out_file:
- self._profiler.export_chrome_trace(out_file)
- else:
- # Support non-posix filesystems
- with tempfile.TemporaryDirectory(prefix="detectron2_profiler") as d:
- tmp_file = os.path.join(d, "tmp.json")
- self._profiler.export_chrome_trace(tmp_file)
- with open(tmp_file) as f:
- content = f.read()
- with PathManager.open(out_file, "w") as f:
- f.write(content)
-
-
-class AutogradProfiler(TorchProfiler):
- """
- A hook which runs `torch.autograd.profiler.profile`.
-
- Examples:
- ::
- hooks.AutogradProfiler(
- lambda trainer: 10 < trainer.iter < 20, self.cfg.OUTPUT_DIR
- )
-
- The above example will run the profiler for iteration 10~20 and dump
- results to ``OUTPUT_DIR``. We did not profile the first few iterations
- because they are typically slower than the rest.
- The result files can be loaded in the ``chrome://tracing`` page in chrome browser.
-
- Note:
- When used together with NCCL on older version of GPUs,
- autograd profiler may cause deadlock because it unnecessarily allocates
- memory on every device it sees. The memory management calls, if
- interleaved with NCCL calls, lead to deadlock on GPUs that do not
- support ``cudaLaunchCooperativeKernelMultiDevice``.
- """
-
- def __init__(self, enable_predicate, output_dir, *, use_cuda=True):
- """
- Args:
- enable_predicate (callable[trainer -> bool]): a function which takes a trainer,
- and returns whether to enable the profiler.
- It will be called once every step, and can be used to select which steps to profile.
- output_dir (str): the output directory to dump tracing files.
- use_cuda (bool): same as in `torch.autograd.profiler.profile`.
- """
- warnings.warn("AutogradProfiler has been deprecated in favor of TorchProfiler.")
- self._enable_predicate = enable_predicate
- self._use_cuda = use_cuda
- self._output_dir = output_dir
-
- def before_step(self):
- if self._enable_predicate(self.trainer):
- self._profiler = torch.autograd.profiler.profile(use_cuda=self._use_cuda)
- self._profiler.__enter__()
- else:
- self._profiler = None
-
-
-class EvalHook(HookBase):
- """
- Run an evaluation function periodically, and at the end of training.
-
- It is executed every ``eval_period`` iterations and after the last iteration.
- """
-
- def __init__(self, eval_period, eval_function):
- """
- Args:
- eval_period (int): the period to run `eval_function`. Set to 0 to
- not evaluate periodically (but still after the last iteration).
- eval_function (callable): a function which takes no arguments, and
- returns a nested dict of evaluation metrics.
-
- Note:
- This hook must be enabled in all or none workers.
- If you would like only certain workers to perform evaluation,
- give other workers a no-op function (`eval_function=lambda: None`).
- """
- self._period = eval_period
- self._func = eval_function
-
- def _do_eval(self):
- results = self._func()
-
- if results:
- assert isinstance(
- results, dict
- ), "Eval function must return a dict. Got {} instead.".format(results)
-
- flattened_results = flatten_results_dict(results)
- for k, v in flattened_results.items():
- try:
- v = float(v)
- except Exception as e:
- raise ValueError(
- "[EvalHook] eval_function should return a nested dict of float. "
- "Got '{}: {}' instead.".format(k, v)
- ) from e
- self.trainer.storage.put_scalars(**flattened_results, smoothing_hint=False)
-
- # Evaluation may take different time among workers.
- # A barrier make them start the next iteration together.
- comm.synchronize()
-
- def after_step(self):
- next_iter = self.trainer.iter + 1
- if self._period > 0 and next_iter % self._period == 0:
- # do the last eval in after_train
- if next_iter != self.trainer.max_iter:
- self._do_eval()
-
- def after_train(self):
- # This condition is to prevent the eval from running after a failed training
- if self.trainer.iter + 1 >= self.trainer.max_iter:
- self._do_eval()
- # func is likely a closure that holds reference to the trainer
- # therefore we clean it to avoid circular reference in the end
- del self._func
-
-
-class PreciseBN(HookBase):
- """
- The standard implementation of BatchNorm uses EMA in inference, which is
- sometimes suboptimal.
- This class computes the true average of statistics rather than the moving average,
- and puts the true averages into every BN layer in the given model.
-
- It is executed every ``period`` iterations and after the last iteration.
- """
-
- def __init__(self, period, model, data_loader, num_iter):
- """
- Args:
- period (int): the period this hook is run, or 0 to not run during training.
- The hook will always run in the end of training.
- model (nn.Module): a module whose all BN layers in training mode will be
- updated by precise BN.
- Note that user is responsible for ensuring the BN layers to be
- updated are in training mode when this hook is triggered.
- data_loader (iterable): it will produce data to be run by `model(data)`.
- num_iter (int): number of iterations used to compute the precise
- statistics.
- """
- self._logger = logging.getLogger(__name__)
- if len(get_bn_modules(model)) == 0:
- self._logger.info(
- "PreciseBN is disabled because model does not contain BN layers in training mode."
- )
- self._disabled = True
- return
-
- self._model = model
- self._data_loader = data_loader
- self._num_iter = num_iter
- self._period = period
- self._disabled = False
-
- self._data_iter = None
-
- def after_step(self):
- next_iter = self.trainer.iter + 1
- is_final = next_iter == self.trainer.max_iter
- if is_final or (self._period > 0 and next_iter % self._period == 0):
- self.update_stats()
-
- def update_stats(self):
- """
- Update the model with precise statistics. Users can manually call this method.
- """
- if self._disabled:
- return
-
- if self._data_iter is None:
- self._data_iter = iter(self._data_loader)
-
- def data_loader():
- for num_iter in itertools.count(1):
- if num_iter % 100 == 0:
- self._logger.info(
- "Running precise-BN ... {}/{} iterations.".format(num_iter, self._num_iter)
- )
- # This way we can reuse the same iterator
- yield next(self._data_iter)
-
- with EventStorage(): # capture events in a new storage to discard them
- self._logger.info(
- "Running precise-BN for {} iterations... ".format(self._num_iter)
- + "Note that this could produce different statistics every time."
- )
- update_bn_stats(self._model, data_loader(), self._num_iter)
-
-
-class TorchMemoryStats(HookBase):
- """
- Writes pytorch's cuda memory statistics periodically.
- """
-
- def __init__(self, period=20, max_runs=10):
- """
- Args:
- period (int): Output stats every 'period' iterations
- max_runs (int): Stop logging after 'max_runs' outputs
- """
-
- self._logger = logging.getLogger(__name__)
- self._period = period
- self._max_runs = max_runs
- self._runs = 0
-
- def after_step(self):
- if self._runs > self._max_runs:
- return
-
- if (self.trainer.iter + 1) % self._period == 0 or (
- self.trainer.iter == self.trainer.max_iter - 1
- ):
- if torch.cuda.is_available():
- max_reserved_mb = torch.cuda.max_memory_reserved() / 1024.0 / 1024.0
- reserved_mb = torch.cuda.memory_reserved() / 1024.0 / 1024.0
- max_allocated_mb = torch.cuda.max_memory_allocated() / 1024.0 / 1024.0
- allocated_mb = torch.cuda.memory_allocated() / 1024.0 / 1024.0
-
- self._logger.info(
- (
- " iter: {} "
- " max_reserved_mem: {:.0f}MB "
- " reserved_mem: {:.0f}MB "
- " max_allocated_mem: {:.0f}MB "
- " allocated_mem: {:.0f}MB "
- ).format(
- self.trainer.iter,
- max_reserved_mb,
- reserved_mb,
- max_allocated_mb,
- allocated_mb,
- )
- )
-
- self._runs += 1
- if self._runs == self._max_runs:
- mem_summary = torch.cuda.memory_summary()
- self._logger.info("\n" + mem_summary)
-
- torch.cuda.reset_peak_memory_stats()
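
As a hedged usage sketch, hooks like the ones above are attached to a trainer via `register_hooks`; the trainer and the eval function below are placeholders:

    from detectron2.engine import hooks

    def dummy_eval():
        # EvalHook expects a (possibly nested) dict of float metrics.
        return {"bbox/AP50": 0.0}

    def attach_hooks(trainer):
        # `trainer` is assumed to be a detectron2 TrainerBase subclass.
        trainer.register_hooks([
            hooks.IterationTimer(warmup_iter=3),
            hooks.EvalHook(1000, dummy_eval),
            hooks.TorchMemoryStats(period=20, max_runs=10),
        ])
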
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/pascal_voc_evaluation.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/pascal_voc_evaluation.py
deleted file mode 100644
index 1d1abcde2f87bb5f103e73cb364aaabbecb6e619..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/pascal_voc_evaluation.py
+++ /dev/null
@@ -1,300 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import logging
-import numpy as np
-import os
-import tempfile
-import xml.etree.ElementTree as ET
-from collections import OrderedDict, defaultdict
-from functools import lru_cache
-import torch
-
-from detectron2.data import MetadataCatalog
-from detectron2.utils import comm
-from detectron2.utils.file_io import PathManager
-
-from .evaluator import DatasetEvaluator
-
-
-class PascalVOCDetectionEvaluator(DatasetEvaluator):
- """
- Evaluate Pascal VOC style AP for Pascal VOC dataset.
- It contains a synchronization step, and therefore has to be called from all ranks.
-
- Note that the concept of AP can be implemented in different ways and may not
- produce identical results. This class mimics the implementation of the official
- Pascal VOC Matlab API, and should produce similar but not identical results to the
- official API.
- """
-
- def __init__(self, dataset_name):
- """
- Args:
- dataset_name (str): name of the dataset, e.g., "voc_2007_test"
- """
- self._dataset_name = dataset_name
- meta = MetadataCatalog.get(dataset_name)
-
- # Too many tiny files, download all to local for speed.
- annotation_dir_local = PathManager.get_local_path(
- os.path.join(meta.dirname, "Annotations/")
- )
- self._anno_file_template = os.path.join(annotation_dir_local, "{}.xml")
- self._image_set_path = os.path.join(meta.dirname, "ImageSets", "Main", meta.split + ".txt")
- self._class_names = meta.thing_classes
- assert meta.year in [2007, 2012], meta.year
- self._is_2007 = meta.year == 2007
- self._cpu_device = torch.device("cpu")
- self._logger = logging.getLogger(__name__)
-
- def reset(self):
- self._predictions = defaultdict(list) # class name -> list of prediction strings
-
- def process(self, inputs, outputs):
- for input, output in zip(inputs, outputs):
- image_id = input["image_id"]
- instances = output["instances"].to(self._cpu_device)
- boxes = instances.pred_boxes.tensor.numpy()
- scores = instances.scores.tolist()
- classes = instances.pred_classes.tolist()
- for box, score, cls in zip(boxes, scores, classes):
- xmin, ymin, xmax, ymax = box
- # The inverse of data loading logic in `datasets/pascal_voc.py`
- xmin += 1
- ymin += 1
- self._predictions[cls].append(
- f"{image_id} {score:.3f} {xmin:.1f} {ymin:.1f} {xmax:.1f} {ymax:.1f}"
- )
-
- def evaluate(self):
- """
- Returns:
- dict: has a key "segm", whose value is a dict of "AP", "AP50", and "AP75".
- """
- all_predictions = comm.gather(self._predictions, dst=0)
- if not comm.is_main_process():
- return
- predictions = defaultdict(list)
- for predictions_per_rank in all_predictions:
- for clsid, lines in predictions_per_rank.items():
- predictions[clsid].extend(lines)
- del all_predictions
-
- self._logger.info(
- "Evaluating {} using {} metric. "
- "Note that results do not use the official Matlab API.".format(
- self._dataset_name, 2007 if self._is_2007 else 2012
- )
- )
-
- with tempfile.TemporaryDirectory(prefix="pascal_voc_eval_") as dirname:
- res_file_template = os.path.join(dirname, "{}.txt")
-
- aps = defaultdict(list) # iou -> ap per class
- for cls_id, cls_name in enumerate(self._class_names):
- lines = predictions.get(cls_id, [""])
-
- with open(res_file_template.format(cls_name), "w") as f:
- f.write("\n".join(lines))
-
- for thresh in range(50, 100, 5):
- rec, prec, ap = voc_eval(
- res_file_template,
- self._anno_file_template,
- self._image_set_path,
- cls_name,
- ovthresh=thresh / 100.0,
- use_07_metric=self._is_2007,
- )
- aps[thresh].append(ap * 100)
-
- ret = OrderedDict()
- mAP = {iou: np.mean(x) for iou, x in aps.items()}
- ret["bbox"] = {"AP": np.mean(list(mAP.values())), "AP50": mAP[50], "AP75": mAP[75]}
- return ret
-
-
-##############################################################################
-#
-# Below code is modified from
-# https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/datasets/voc_eval.py
-# --------------------------------------------------------
-# Fast/er R-CNN
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Bharath Hariharan
-# --------------------------------------------------------
-
-"""Python implementation of the PASCAL VOC devkit's AP evaluation code."""
-
-
-@lru_cache(maxsize=None)
-def parse_rec(filename):
- """Parse a PASCAL VOC xml file."""
- with PathManager.open(filename) as f:
- tree = ET.parse(f)
- objects = []
- for obj in tree.findall("object"):
- obj_struct = {}
- obj_struct["name"] = obj.find("name").text
- obj_struct["pose"] = obj.find("pose").text
- obj_struct["truncated"] = int(obj.find("truncated").text)
- obj_struct["difficult"] = int(obj.find("difficult").text)
- bbox = obj.find("bndbox")
- obj_struct["bbox"] = [
- int(bbox.find("xmin").text),
- int(bbox.find("ymin").text),
- int(bbox.find("xmax").text),
- int(bbox.find("ymax").text),
- ]
- objects.append(obj_struct)
-
- return objects
-
-
-def voc_ap(rec, prec, use_07_metric=False):
- """Compute VOC AP given precision and recall. If use_07_metric is true, uses
- the VOC 07 11-point method (default:False).
- """
- if use_07_metric:
- # 11 point metric
- ap = 0.0
- for t in np.arange(0.0, 1.1, 0.1):
- if np.sum(rec >= t) == 0:
- p = 0
- else:
- p = np.max(prec[rec >= t])
- ap = ap + p / 11.0
- else:
- # correct AP calculation
- # first append sentinel values at the end
- mrec = np.concatenate(([0.0], rec, [1.0]))
- mpre = np.concatenate(([0.0], prec, [0.0]))
-
- # compute the precision envelope
- for i in range(mpre.size - 1, 0, -1):
- mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
-
- # to calculate area under PR curve, look for points
- # where X axis (recall) changes value
- i = np.where(mrec[1:] != mrec[:-1])[0]
-
- # and sum (\Delta recall) * prec
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
- return ap
-
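-# Illustrative sanity check (not part of the original file), assuming NumPy
-# arrays for a tiny two-point PR curve:
-#   rec, prec = np.array([0.5, 1.0]), np.array([1.0, 0.5])
-#   voc_ap(rec, prec)        # exact area under the interpolated curve -> 0.75
-#   voc_ap(rec, prec, True)  # VOC07 11-point approximation -> ~0.77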
-
-def voc_eval(detpath, annopath, imagesetfile, classname, ovthresh=0.5, use_07_metric=False):
- """rec, prec, ap = voc_eval(detpath,
- annopath,
- imagesetfile,
- classname,
- [ovthresh],
- [use_07_metric])
-
- Top level function that does the PASCAL VOC evaluation.
-
- detpath: Path to detections
- detpath.format(classname) should produce the detection results file.
- annopath: Path to annotations
- annopath.format(imagename) should be the xml annotations file.
- imagesetfile: Text file containing the list of images, one image per line.
-    classname: Category name (e.g., "aeroplane")
- [ovthresh]: Overlap threshold (default = 0.5)
- [use_07_metric]: Whether to use VOC07's 11 point AP computation
- (default False)
- """
- # assumes detections are in detpath.format(classname)
- # assumes annotations are in annopath.format(imagename)
- # assumes imagesetfile is a text file with each line an image name
-
- # first load gt
- # read list of images
- with PathManager.open(imagesetfile, "r") as f:
- lines = f.readlines()
- imagenames = [x.strip() for x in lines]
-
- # load annots
- recs = {}
- for imagename in imagenames:
- recs[imagename] = parse_rec(annopath.format(imagename))
-
- # extract gt objects for this class
- class_recs = {}
- npos = 0
- for imagename in imagenames:
- R = [obj for obj in recs[imagename] if obj["name"] == classname]
- bbox = np.array([x["bbox"] for x in R])
-        difficult = np.array([x["difficult"] for x in R]).astype(bool)
-        # difficult = np.array([False for x in R]).astype(bool)  # treat all "difficult" as GT
- det = [False] * len(R)
- npos = npos + sum(~difficult)
- class_recs[imagename] = {"bbox": bbox, "difficult": difficult, "det": det}
-
- # read dets
- detfile = detpath.format(classname)
- with open(detfile, "r") as f:
- lines = f.readlines()
-
- splitlines = [x.strip().split(" ") for x in lines]
- image_ids = [x[0] for x in splitlines]
- confidence = np.array([float(x[1]) for x in splitlines])
- BB = np.array([[float(z) for z in x[2:]] for x in splitlines]).reshape(-1, 4)
-
- # sort by confidence
- sorted_ind = np.argsort(-confidence)
- BB = BB[sorted_ind, :]
- image_ids = [image_ids[x] for x in sorted_ind]
-
- # go down dets and mark TPs and FPs
- nd = len(image_ids)
- tp = np.zeros(nd)
- fp = np.zeros(nd)
- for d in range(nd):
- R = class_recs[image_ids[d]]
- bb = BB[d, :].astype(float)
- ovmax = -np.inf
- BBGT = R["bbox"].astype(float)
-
- if BBGT.size > 0:
- # compute overlaps
- # intersection
- ixmin = np.maximum(BBGT[:, 0], bb[0])
- iymin = np.maximum(BBGT[:, 1], bb[1])
- ixmax = np.minimum(BBGT[:, 2], bb[2])
- iymax = np.minimum(BBGT[:, 3], bb[3])
- iw = np.maximum(ixmax - ixmin + 1.0, 0.0)
- ih = np.maximum(iymax - iymin + 1.0, 0.0)
- inters = iw * ih
-
- # union
- uni = (
- (bb[2] - bb[0] + 1.0) * (bb[3] - bb[1] + 1.0)
- + (BBGT[:, 2] - BBGT[:, 0] + 1.0) * (BBGT[:, 3] - BBGT[:, 1] + 1.0)
- - inters
- )
-
- overlaps = inters / uni
- ovmax = np.max(overlaps)
- jmax = np.argmax(overlaps)
-
- if ovmax > ovthresh:
- if not R["difficult"][jmax]:
- if not R["det"][jmax]:
- tp[d] = 1.0
- R["det"][jmax] = 1
- else:
- fp[d] = 1.0
- else:
- fp[d] = 1.0
-
- # compute precision recall
- fp = np.cumsum(fp)
- tp = np.cumsum(tp)
- rec = tp / float(npos)
- # avoid divide by zero in case the first detection matches a difficult
- # ground truth
- prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)
- ap = voc_ap(rec, prec, use_07_metric)
-
- return rec, prec, ap
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/datasets.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/datasets.md
deleted file mode 100644
index 91103f64264aa6f3059611c5fe06ecd65bcb986f..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/datasets.md
+++ /dev/null
@@ -1,290 +0,0 @@
-# Use Custom Datasets
-
-This document explains how the dataset APIs
-([DatasetCatalog](../modules/data.html#detectron2.data.DatasetCatalog), [MetadataCatalog](../modules/data.html#detectron2.data.MetadataCatalog))
-work, and how to use them to add custom datasets.
-
-Datasets that have builtin support in detectron2 are listed in [builtin datasets](builtin_datasets.md).
-If you want to use a custom dataset while also reusing detectron2's data loaders,
-you will need to:
-
-1. __Register__ your dataset (i.e., tell detectron2 how to obtain your dataset).
-2. Optionally, __register metadata__ for your dataset.
-
-Next, we explain the above two concepts in detail.
-
-The [Colab tutorial](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
-has a live example of how to register and train on a dataset of custom formats.
-
-### Register a Dataset
-
-To let detectron2 know how to obtain a dataset named "my_dataset", users need to implement
-a function that returns the items in your dataset and then tell detectron2 about this
-function:
-```python
-def my_dataset_function():
- ...
-  return data  # a list[dict] in the format described below
-
-from detectron2.data import DatasetCatalog
-DatasetCatalog.register("my_dataset", my_dataset_function)
-# later, to access the data:
-data: List[Dict] = DatasetCatalog.get("my_dataset")
-```
-
-Here, the snippet associates a dataset named "my_dataset" with a function that returns the data.
-The function must return the same data (in the same order) if called multiple times.
-The registration stays effective until the process exits.
-
-The function can do arbitrary things and should return the data in `list[dict]`, each dict in either
-of the following formats:
-1. Detectron2's standard dataset dict, described below. This will make it work with many other builtin
- features in detectron2, so it's recommended to use it when it's sufficient.
-2. Any custom format. You can also return arbitrary dicts in your own format,
- such as adding extra keys for new tasks.
- Then you will need to handle them properly downstream as well.
- See below for more details.
-
-#### Standard Dataset Dicts
-
-For standard tasks
-(instance detection, instance/semantic/panoptic segmentation, keypoint detection),
-we load the original dataset into `list[dict]` with a specification similar to COCO's annotations.
-This is our standard representation for a dataset.
-
-Each dict contains information about one image.
-The dict may have the following fields,
-and the required fields vary based on what the dataloader or the task needs (see more below).
-
-```eval_rst
-.. list-table::
- :header-rows: 1
-
- * - Task
- - Fields
- * - Common
- - file_name, height, width, image_id
-
- * - Instance detection/segmentation
- - annotations
-
- * - Semantic segmentation
- - sem_seg_file_name
-
- * - Panoptic segmentation
- - pan_seg_file_name, segments_info
-```
-
-+ `file_name`: the full path to the image file.
-+ `height`, `width`: integer. The shape of the image.
-+ `image_id` (str or int): a unique id that identifies this image. Required by many
- evaluators to identify the images, but a dataset may use it for different purposes.
-+ `annotations` (list[dict]): Required by __instance detection/segmentation or keypoint detection__ tasks.
- Each dict corresponds to annotations of one instance in this image, and
- may contain the following keys:
- + `bbox` (list[float], required): list of 4 numbers representing the bounding box of the instance.
- + `bbox_mode` (int, required): the format of bbox. It must be a member of
- [structures.BoxMode](../modules/structures.html#detectron2.structures.BoxMode).
- Currently supports: `BoxMode.XYXY_ABS`, `BoxMode.XYWH_ABS`.
- + `category_id` (int, required): an integer in the range [0, num_categories-1] representing the category label.
- The value num_categories is reserved to represent the "background" category, if applicable.
- + `segmentation` (list[list[float]] or dict): the segmentation mask of the instance.
- + If `list[list[float]]`, it represents a list of polygons, one for each connected component
- of the object. Each `list[float]` is one simple polygon in the format of `[x1, y1, ..., xn, yn]` (n≥3).
- The Xs and Ys are absolute coordinates in unit of pixels.
- + If `dict`, it represents the per-pixel segmentation mask in COCO's compressed RLE format.
- The dict should have keys "size" and "counts". You can convert a uint8 segmentation mask of 0s and
- 1s into such dict by `pycocotools.mask.encode(np.asarray(mask, order="F"))`.
- `cfg.INPUT.MASK_FORMAT` must be set to `bitmask` if using the default data loader with such format.
- + `keypoints` (list[float]): in the format of [x1, y1, v1,..., xn, yn, vn].
- v[i] means the [visibility](http://cocodataset.org/#format-data) of this keypoint.
- `n` must be equal to the number of keypoint categories.
- The Xs and Ys are absolute real-value coordinates in range [0, W or H].
-
- (Note that the keypoint coordinates in COCO format are integers in range [0, W-1 or H-1], which is different
- from our standard format. Detectron2 adds 0.5 to COCO keypoint coordinates to convert them from discrete
- pixel indices to floating point coordinates.)
- + `iscrowd`: 0 (default) or 1. Whether this instance is labeled as COCO's "crowd
- region". Don't include this field if you don't know what it means.
-
- If `annotations` is an empty list, it means the image is labeled to have no objects.
- Such images will by default be removed from training,
- but can be included using `DATALOADER.FILTER_EMPTY_ANNOTATIONS`.
-
-+ `sem_seg_file_name` (str):
- The full path to the semantic segmentation ground truth file.
- It should be a grayscale image whose pixel values are integer labels.
-+ `pan_seg_file_name` (str):
- The full path to panoptic segmentation ground truth file.
- It should be an RGB image whose pixel values are integer ids encoded using the
- [panopticapi.utils.id2rgb](https://github.com/cocodataset/panopticapi/) function.
- The ids are defined by `segments_info`.
- If an id does not appear in `segments_info`, the pixel is considered unlabeled
- and is usually ignored in training & evaluation.
-+ `segments_info` (list[dict]): defines the meaning of each id in panoptic segmentation ground truth.
- Each dict has the following keys:
- + `id` (int): integer that appears in the ground truth image.
- + `category_id` (int): an integer in the range [0, num_categories-1] representing the category label.
- + `iscrowd`: 0 (default) or 1. Whether this instance is labeled as COCO's "crowd region".
-
-
-```eval_rst
-
-.. note::
-
- The PanopticFPN model does not use the panoptic segmentation
- format defined here, but a combination of both instance segmentation and semantic segmentation data
- format. See :doc:`builtin_datasets` for instructions on COCO.
-
-```
-
-Fast R-CNN (with pre-computed proposals) models are rarely used today.
-To train a Fast R-CNN, the following extra keys are needed:
-
-+ `proposal_boxes` (array): 2D numpy array with shape (K, 4) representing K precomputed proposal boxes for this image.
-+ `proposal_objectness_logits` (array): numpy array with shape (K, ), which corresponds to the objectness
- logits of proposals in 'proposal_boxes'.
-+ `proposal_bbox_mode` (int): the format of the precomputed proposal bbox.
- It must be a member of
- [structures.BoxMode](../modules/structures.html#detectron2.structures.BoxMode).
- Default is `BoxMode.XYXY_ABS`.
-
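-For concreteness, here is a minimal sketch of one such dict for instance
-detection. The file name, sizes, and box values below are made-up placeholders:
-
-```python
-from detectron2.structures import BoxMode
-
-one_sample = {
-    "file_name": "images/0001.jpg",  # hypothetical path
-    "height": 480,
-    "width": 640,
-    "image_id": 1,
-    "annotations": [
-        {
-            "bbox": [100.0, 120.0, 200.0, 250.0],
-            "bbox_mode": BoxMode.XYXY_ABS,
-            "category_id": 0,
-        }
-    ],
-}
-```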
-
-
-#### Custom Dataset Dicts for New Tasks
-
-In the `list[dict]` that your dataset function returns, the dictionary can also have __arbitrary custom data__.
-This will be useful for a new task that needs extra information not covered
-by the standard dataset dicts. In this case, you need to make sure the downstream code can handle your data
-correctly. Usually this requires writing a new `mapper` for the dataloader (see [Use Custom Dataloaders](./data_loading.md)).
-
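-As an illustration only (the helpers below are real detectron2 utilities, but
-the mapper itself is a made-up minimal sketch), a mapper is just a callable
-that turns one dataset dict into whatever your model consumes:
-
-```python
-import copy
-import torch
-from detectron2.data import detection_utils as utils
-
-def my_mapper(dataset_dict):
-    dataset_dict = copy.deepcopy(dataset_dict)  # the dicts may be reused; don't mutate
-    image = utils.read_image(dataset_dict["file_name"], format="BGR")
-    # HWC uint8 array -> CHW tensor, the layout detectron2 models expect
-    dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).copy())
-    # interpret any custom keys here before returning
-    return dataset_dict
-```
-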
-When designing a custom format, note that all dicts are stored in memory
-(sometimes serialized and with multiple copies).
-To save memory, each dict is meant to contain __small__ but sufficient information
-about each sample, such as file names and annotations.
-Loading full samples typically happens in the data loader.
-
-For attributes shared among the entire dataset, use `Metadata` (see below).
-To avoid extra memory, do not save such information inside each sample.
-
-### "Metadata" for Datasets
-
-Each dataset is associated with some metadata, accessible through
-`MetadataCatalog.get(dataset_name).some_metadata`.
-Metadata is a key-value mapping that contains information that's shared among
-the entire dataset, and usually is used to interpret what's in the dataset, e.g.,
-names of classes, colors of classes, root of files, etc.
-This information will be useful for augmentation, evaluation, visualization, logging, etc.
-The structure of metadata depends on what is needed from the corresponding downstream code.
-
-If you register a new dataset through `DatasetCatalog.register`,
-you may also want to add its corresponding metadata through
-`MetadataCatalog.get(dataset_name).some_key = some_value`, to enable any features that need the metadata.
-You can do it like this (using the metadata key "thing_classes" as an example):
-
-```python
-from detectron2.data import MetadataCatalog
-MetadataCatalog.get("my_dataset").thing_classes = ["person", "dog"]
-```
-
-Here is a list of metadata keys that are used by builtin features in detectron2.
-If you add your own dataset without these metadata, some features may be
-unavailable to you:
-
-* `thing_classes` (list[str]): Used by all instance detection/segmentation tasks.
- A list of names for each instance/thing category.
- If you load a COCO format dataset, it will be automatically set by the function `load_coco_json`.
-
-* `thing_colors` (list[tuple(r, g, b)]): Pre-defined color (in [0, 255]) for each thing category.
- Used for visualization. If not given, random colors will be used.
-
-* `stuff_classes` (list[str]): Used by semantic and panoptic segmentation tasks.
- A list of names for each stuff category.
-
-* `stuff_colors` (list[tuple(r, g, b)]): Pre-defined color (in [0, 255]) for each stuff category.
- Used for visualization. If not given, random colors are used.
-
-* `ignore_label` (int): Used by semantic and panoptic segmentation tasks. Pixels in ground-truth
- annotations with this category label should be ignored in evaluation. Typically these are "unlabeled"
- pixels.
-
-* `keypoint_names` (list[str]): Used by keypoint detection. A list of names for each keypoint.
-
-* `keypoint_flip_map` (list[tuple[str]]): Used by keypoint detection. A list of pairs of names,
- where each pair are the two keypoints that should be flipped if the image is
- flipped horizontally during augmentation.
-* `keypoint_connection_rules`: list[tuple(str, str, (r, g, b))]. Each tuple specifies a pair of keypoints
- that are connected and the color (in [0, 255]) to use for the line between them when visualized.
-
-Some additional metadata that are specific to the evaluation of certain datasets (e.g. COCO):
-
-* `thing_dataset_id_to_contiguous_id` (dict[int->int]): Used by all instance detection/segmentation tasks in the COCO format.
- A mapping from instance class ids in the dataset to contiguous ids in range [0, #class).
- Will be automatically set by the function `load_coco_json`.
-
-* `stuff_dataset_id_to_contiguous_id` (dict[int->int]): Used when generating prediction json files for
- semantic/panoptic segmentation.
- A mapping from semantic segmentation class ids in the dataset
- to contiguous ids in [0, num_categories). It is useful for evaluation only.
-
-* `json_file`: The COCO annotation json file. Used by COCO evaluation for COCO-format datasets.
-* `panoptic_root`, `panoptic_json`: Used by COCO-format panoptic evaluation.
-* `evaluator_type`: Used by the builtin main training script to select
- evaluator. Don't use it in a new training script.
- You can just provide the [DatasetEvaluator](../modules/evaluation.html#detectron2.evaluation.DatasetEvaluator)
- for your dataset directly in your main script.
-
-```eval_rst
-.. note::
-
- In recognition, sometimes we use the term "thing" for instance-level tasks,
- and "stuff" for semantic segmentation tasks.
- Both are used in panoptic segmentation tasks.
- For background on the concept of "thing" and "stuff", see
- `On Seeing Stuff: The Perception of Materials by Humans and Machines
-    <http://persci.mit.edu/pub_pdfs/adelson_spie_01.pdf>`_.
-```
-
-### Register a COCO Format Dataset
-
-If your instance-level (detection, segmentation, keypoint) dataset is already a json file in the COCO format,
-the dataset and its associated metadata can be registered easily with:
-```python
-from detectron2.data.datasets import register_coco_instances
-register_coco_instances("my_dataset", {}, "json_annotation.json", "path/to/image/dir")
-```
-
-If your dataset is in COCO format but needs further processing, or has extra custom per-instance annotations,
-the [load_coco_json](../modules/data.html#detectron2.data.datasets.load_coco_json)
-function might be useful.
-
-### Update the Config for New Datasets
-
-Once you've registered the dataset, you can use the name of the dataset (e.g., "my_dataset" in
-example above) in `cfg.DATASETS.{TRAIN,TEST}`.
-There are other configs you might want to change to train or evaluate on new datasets:
-
-* `MODEL.ROI_HEADS.NUM_CLASSES` and `MODEL.RETINANET.NUM_CLASSES` are the number of thing classes
- for R-CNN and RetinaNet models, respectively.
-* `MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS` sets the number of keypoints for Keypoint R-CNN.
- You'll also need to set [Keypoint OKS](http://cocodataset.org/#keypoints-eval)
- with `TEST.KEYPOINT_OKS_SIGMAS` for evaluation.
-* `MODEL.SEM_SEG_HEAD.NUM_CLASSES` sets the number of stuff classes for Semantic FPN & Panoptic FPN.
-* `TEST.DETECTIONS_PER_IMAGE` controls the maximum number of objects to be detected.
- Set it to a larger number if test images may contain >100 objects.
-* If you're training Fast R-CNN (with precomputed proposals), `DATASETS.PROPOSAL_FILES_{TRAIN,TEST}`
- need to match the datasets. The format of proposal files are documented
- [here](../modules/data.html#detectron2.data.load_proposals_into_dataset).
-
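-Putting the pieces above together, a minimal sketch for a hypothetical
-two-class dataset registered as "my_dataset" (all values are illustrative):
-
-```python
-from detectron2.config import get_cfg
-
-cfg = get_cfg()
-cfg.DATASETS.TRAIN = ("my_dataset",)
-cfg.DATASETS.TEST = ("my_dataset",)  # normally a held-out split
-cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2  # e.g., ["person", "dog"]
-```
-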
-New models
-(e.g. [TensorMask](../../projects/TensorMask),
-[PointRend](../../projects/PointRend))
-often have similar configs of their own that need to be changed as well.
-
-```eval_rst
-.. tip::
-
- After changing the number of classes, certain layers in a pre-trained model will become incompatible
- and therefore cannot be loaded to the new model.
- This is expected, and loading such pre-trained models will produce warnings about such layers.
-```
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/custom_build_augmentation.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/custom_build_augmentation.py
deleted file mode 100644
index 7d91f21edb082c079c5a1e85bdf669c7b55bad9a..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/custom_build_augmentation.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import logging
-import numpy as np
-import pycocotools.mask as mask_util
-import torch
-from fvcore.common.file_io import PathManager
-from PIL import Image
-
-from detectron2.structures import (
- BitMasks,
- Boxes,
- BoxMode,
- Instances,
- Keypoints,
- PolygonMasks,
- RotatedBoxes,
- polygons_to_bitmask,
-)
-
-from detectron2.data import transforms as T
-from .transforms.custom_augmentation_impl import EfficientDetResizeCrop
-
-def build_custom_augmentation(cfg, is_train):
- """
- Create a list of default :class:`Augmentation` from config.
- Now it includes resizing and flipping.
-
- Returns:
- list[Augmentation]
- """
- if cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge':
- if is_train:
- min_size = cfg.INPUT.MIN_SIZE_TRAIN
- max_size = cfg.INPUT.MAX_SIZE_TRAIN
- sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING
- else:
- min_size = cfg.INPUT.MIN_SIZE_TEST
- max_size = cfg.INPUT.MAX_SIZE_TEST
- sample_style = "choice"
- augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)]
- elif cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop':
- if is_train:
- scale = cfg.INPUT.SCALE_RANGE
- size = cfg.INPUT.TRAIN_SIZE
- else:
- scale = (1, 1)
- size = cfg.INPUT.TEST_SIZE
- augmentation = [EfficientDetResizeCrop(size, scale)]
- else:
-        raise ValueError(f'Unknown INPUT.CUSTOM_AUG: {cfg.INPUT.CUSTOM_AUG}')
-
- if is_train:
- augmentation.append(T.RandomFlip())
- return augmentation
-
-
-build_custom_transform_gen = build_custom_augmentation
-"""
-Alias for backward-compatibility.
-"""
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Anime Negro Apk.md b/spaces/Benson/text-generation/Examples/Descargar Anime Negro Apk.md
deleted file mode 100644
index 6124ac871351666c94889809362990a34e6a54f5..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Anime Negro Apk.md
+++ /dev/null
@@ -1,64 +0,0 @@
-
-
-Download Anime Negro APK: The Ultimate App for Anime Fans
-
-If you are an anime lover, you probably know how hard it can be to find a good app for watching your favorite shows on your Android device. There are plenty of apps out there, but most of them are low quality, unreliable, or full of ads. That is why you should download Anime Negro APK, the ultimate app for anime fans.
-Anime Negro APK is an Android app that lets you watch and download thousands of anime episodes across a wide range of genres and categories. You can enjoy anime in HD quality, with fast streaming and smooth playback. You can also download episodes to watch offline, so you can enjoy them anytime, anywhere.
-
-Features of Anime Negro APK
-
-Anime Negro APK has many features that make it one of the best apps for anime lovers. Here are some of them:
-
-Watch thousands of anime episodes in HD quality
-
-Anime Negro APK has a huge library of anime shows, from the latest releases to the classics. You can find anime in different genres, such as action, comedy, romance, horror, sci-fi, fantasy, and more. You can also search for anime by name, genre, or popularity. You can watch anime in HD quality, with clear sound and subtitles, and you can adjust the video quality to match your internet speed and data usage.
-
-Download anime episodes to watch offline
-
-If you want to watch anime without an internet connection, you can download episodes for offline viewing. You can choose the download quality and the storage location on your device. You can also manage your downloads and delete them once you have finished watching.
-
-Access multiple sources and servers
-
-
-Customize your app settings and preferences
-
-Anime Negro APK lets you customize the app's settings and preferences to your taste. You can change the app's theme, language, font size, notification settings, and more. You can also enable or disable ads, autoplay, auto-download, and other features.
-
-
-How do you download and install Anime Negro APK?
-
-Downloading and installing Anime Negro APK is quick and simple. Just follow these steps:
-
-Step 1: Enable unknown sources on your device
-
-Since Anime Negro APK is not available on the Google Play Store, you need to enable unknown sources on your device in order to install it. To do this, go to your device settings > security > unknown sources > enable.
-
-Step 2: Download the APK file from a trusted source
-
-You can download the APK file from a trusted source such as [ANIME NEGRO APK (Android App) - تنزيل مجاني - APKCombo]( 1 ). Make sure you download the latest version of the app.
-
-Step 3: Install the APK file on your device
-
-Once you have downloaded the APK file, locate it on your device and tap it to install. Follow the on-screen instructions to complete the installation.
-
-Step 4: Launch the app and enjoy watching anime
-
-After installing the app, you can launch it and start watching anime. You can browse the app's categories, or use the search feature to find your favorite anime. You can also add anime to your favorites list, or check the latest episodes on the app's home page. The app's updates, news, and feedback sections provide further information and support.
-
-Pros and cons of Anime Negro APK
-
-Like any other app, Anime Negro APK has its pros and cons. Here are some of them:
-
-Pros
-
-
-It is free to download and use.
-
-It has a large collection of anime shows in HD quality.
-
-
-It offers multiple sources and servers for each anime show.
-
-It has a user-friendly interface and customizable settings.
-
-It receives regular updates and bug fixes.
-
-
-Cons
-
-
-It is not available on the Google Play Store.
-
-It may contain some ads and pop-ups.
-
-It may not work on some devices or in some regions.
-
-It may occasionally have bugs or glitches.
-
-
-Conclusion
-
-Anime Negro APK is a great app for anime fans who want to watch and download their favorite shows on their Android devices. It has many features that make it one of the best apps for anime lovers. It is easy to download and install, and it is free to use. However, it also has some drawbacks you should keep in mind before using it. Overall, Anime Negro APK is a must-have app for any anime fan who wants to enjoy anime anytime, anywhere.
-
-If you have any questions or comments about Anime Negro APK, you can leave a comment below or contact the app's developer. You can also share this article with friends who are into anime too. Thanks for reading!
-
-Frequently asked questions
-
-Here are some of the most common questions people ask about Anime Negro APK:
-
-
-Is Anime Negro APK safe to use?
-
-Anime Negro APK is safe to use as long as you download it from a trusted source such as [ANIME NEGRO APK (Android App) - تنزيل مجاني - APKCombo]. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain viruses or malware that can harm your device. You should also scan the APK file with an antivirus app before installing it.
-
-Is Anime Negro APK legal to use?
-
-
-How do I update Anime Negro APK?
-
-Anime Negro APK receives regular updates that fix bugs and add new features. You can check for updates in the app's settings menu, or visit the app's official website or social media pages for more information. You can also download the latest version of the app from [ANIME NEGRO APK (Android App) - تنزيل مجاني - APKCombo] whenever a new update is available.
-
-How do I request an anime show that is not available on Anime Negro APK?
-
-Anime Negro APK has a feedback section where you can request an anime show that is not available in the app. You can also contact the app's developer by email or on social media and suggest an anime you would like to watch or download. However, there is no guarantee that your request will be fulfilled, as it depends on the availability and compatibility of the anime show with the app.
-
-How can I support Anime Negro APK?
-
-Anime Negro APK is a free app that charges no fees or subscriptions for its services. However, if you want to support the app's development and maintenance, you can donate via PayPal or Patreon, or purchase premium features such as ad-free mode or unlimited downloads. You can also support the app by rating it, reviewing it, sharing it, or leaving feedback.
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_scripts.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_scripts.py
deleted file mode 100644
index f09bd644207e5c5a891d3605cb6aff4f00d70c8a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/install_scripts.py
+++ /dev/null
@@ -1,61 +0,0 @@
-"""distutils.command.install_scripts
-
-Implements the Distutils 'install_scripts' command, for installing
-Python scripts."""
-
-# contributed by Bastian Kleineidam
-
-import os
-from distutils.core import Command
-from distutils import log
-from stat import ST_MODE
-
-
-class install_scripts(Command):
-
- description = "install scripts (Python or otherwise)"
-
- user_options = [
- ('install-dir=', 'd', "directory to install scripts to"),
- ('build-dir=', 'b', "build directory (where to install from)"),
- ('force', 'f', "force installation (overwrite existing files)"),
- ('skip-build', None, "skip the build steps"),
- ]
-
- boolean_options = ['force', 'skip-build']
-
- def initialize_options(self):
- self.install_dir = None
- self.force = 0
- self.build_dir = None
- self.skip_build = None
-
- def finalize_options(self):
- self.set_undefined_options('build', ('build_scripts', 'build_dir'))
- self.set_undefined_options(
- 'install',
- ('install_scripts', 'install_dir'),
- ('force', 'force'),
- ('skip_build', 'skip_build'),
- )
-
- def run(self):
- if not self.skip_build:
- self.run_command('build_scripts')
- self.outfiles = self.copy_tree(self.build_dir, self.install_dir)
- if os.name == 'posix':
- # Set the executable bits (owner, group, and world) on
- # all the scripts we just installed.
- for file in self.get_outputs():
- if self.dry_run:
- log.info("changing mode of %s", file)
- else:
- mode = ((os.stat(file)[ST_MODE]) | 0o555) & 0o7777
- log.info("changing mode of %s to %o", file, mode)
- os.chmod(file, mode)
-
- def get_inputs(self):
- return self.distribution.scripts or []
-
- def get_outputs(self):
- return self.outfiles or []
diff --git a/spaces/Boadiwaa/Recipes/openai/util.py b/spaces/Boadiwaa/Recipes/openai/util.py
deleted file mode 100644
index becd7d14db63d7671b570cd2e9778b91ca96bce2..0000000000000000000000000000000000000000
--- a/spaces/Boadiwaa/Recipes/openai/util.py
+++ /dev/null
@@ -1,185 +0,0 @@
-import logging
-import os
-import re
-import sys
-from enum import Enum
-from typing import Optional
-
-import openai
-
-OPENAI_LOG = os.environ.get("OPENAI_LOG")
-
-logger = logging.getLogger("openai")
-
-__all__ = [
- "log_info",
- "log_debug",
- "log_warn",
- "logfmt",
-]
-
-api_key_to_header = (
- lambda api, key: {"Authorization": f"Bearer {key}"}
- if api == ApiType.OPEN_AI
- else {"api-key": f"{key}"}
-)
-
-
-class ApiType(Enum):
- AZURE = 1
- OPEN_AI = 2
-
- @staticmethod
- def from_str(label):
- if label.lower() == "azure":
- return ApiType.AZURE
- elif label.lower() in ("open_ai", "openai"):
- return ApiType.OPEN_AI
- else:
- raise openai.error.InvalidAPIType(
- "The API type provided in invalid. Please select one of the supported API types: 'azure', 'open_ai'"
- )
-
-
-def _console_log_level():
- if openai.log in ["debug", "info"]:
- return openai.log
- elif OPENAI_LOG in ["debug", "info"]:
- return OPENAI_LOG
- else:
- return None
-
-
-def log_debug(message, **params):
- msg = logfmt(dict(message=message, **params))
- if _console_log_level() == "debug":
- print(msg, file=sys.stderr)
- logger.debug(msg)
-
-
-def log_info(message, **params):
- msg = logfmt(dict(message=message, **params))
- if _console_log_level() in ["debug", "info"]:
- print(msg, file=sys.stderr)
- logger.info(msg)
-
-
-def log_warn(message, **params):
- msg = logfmt(dict(message=message, **params))
- print(msg, file=sys.stderr)
-    logger.warning(msg)
-
-
-def logfmt(props):
- def fmt(key, val):
-        # Handle case where val is a bytes or bytearray
- if hasattr(val, "decode"):
- val = val.decode("utf-8")
- # Check if val is already a string to avoid re-encoding into ascii.
- if not isinstance(val, str):
- val = str(val)
- if re.search(r"\s", val):
- val = repr(val)
- # key should already be a string
- if re.search(r"\s", key):
- key = repr(key)
- return "{key}={val}".format(key=key, val=val)
-
- return " ".join([fmt(key, val) for key, val in sorted(props.items())])
-
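-# Illustrative usage (not part of the original file):
-#   logfmt({"message": "hi there", "n": 2})  ->  "message='hi there' n=2"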
-
-def get_object_classes():
- # This is here to avoid a circular dependency
- from openai.object_classes import OBJECT_CLASSES
-
- return OBJECT_CLASSES
-
-
-def convert_to_openai_object(
- resp,
- api_key=None,
- api_version=None,
- organization=None,
- engine=None,
- plain_old_data=False,
-):
-    # If we get an OpenAIResponse, we'll want to return an OpenAIObject.
-
- response_ms: Optional[int] = None
- if isinstance(resp, openai.openai_response.OpenAIResponse):
- organization = resp.organization
- response_ms = resp.response_ms
- resp = resp.data
-
- if plain_old_data:
- return resp
- elif isinstance(resp, list):
- return [
- convert_to_openai_object(
- i, api_key, api_version, organization, engine=engine
- )
- for i in resp
- ]
- elif isinstance(resp, dict) and not isinstance(
- resp, openai.openai_object.OpenAIObject
- ):
- resp = resp.copy()
- klass_name = resp.get("object")
- if isinstance(klass_name, str):
- klass = get_object_classes().get(
- klass_name, openai.openai_object.OpenAIObject
- )
- else:
- klass = openai.openai_object.OpenAIObject
-
- return klass.construct_from(
- resp,
- api_key=api_key,
- api_version=api_version,
- organization=organization,
- response_ms=response_ms,
- engine=engine,
- )
- else:
- return resp
-
-
-def convert_to_dict(obj):
- """Converts a OpenAIObject back to a regular dict.
-
- Nested OpenAIObjects are also converted back to regular dicts.
-
- :param obj: The OpenAIObject to convert.
-
- :returns: The OpenAIObject as a dict.
- """
- if isinstance(obj, list):
- return [convert_to_dict(i) for i in obj]
- # This works by virtue of the fact that OpenAIObjects _are_ dicts. The dict
- # comprehension returns a regular dict and recursively applies the
- # conversion to each value.
- elif isinstance(obj, dict):
- return {k: convert_to_dict(v) for k, v in obj.items()}
- else:
- return obj
-
-
-def merge_dicts(x, y):
- z = x.copy()
- z.update(y)
- return z
-
-
-def default_api_key() -> str:
- if openai.api_key_path:
- with open(openai.api_key_path, "rt") as k:
- api_key = k.read().strip()
- if not api_key.startswith("sk-"):
- raise ValueError(f"Malformed API key in {openai.api_key_path}.")
- return api_key
- elif openai.api_key is not None:
- return openai.api_key
- else:
- raise openai.error.AuthenticationError(
- "No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://onboard.openai.com for details, or email support@openai.com if you have any questions."
- )
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/swap.h b/spaces/CVPR/LIVE/thrust/thrust/detail/swap.h
deleted file mode 100644
index 96783c762bd9c2ebae3ca5318fe04f15457c545f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/swap.h
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-namespace thrust
-{
-
-__thrust_exec_check_disable__
-template<typename Assignable1, typename Assignable2>
-__host__ __device__
-inline void swap(Assignable1 &a, Assignable2 &b)
-{
- Assignable1 temp = a;
- a = b;
- b = temp;
-} // end swap()
-
-} // end namespace thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/zip_iterator_base.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/zip_iterator_base.h
deleted file mode 100644
index b1603aed4d209bbe3d8b8f211857c250b0bb7c3e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/zip_iterator_base.h
+++ /dev/null
@@ -1,405 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/iterator/iterator_traits.h>
-#include <thrust/iterator/iterator_facade.h>
-#include <thrust/iterator/iterator_categories.h>
-#include <thrust/iterator/detail/minimum_category.h>
-#include <thrust/iterator/detail/minimum_system.h>
-#include <thrust/detail/type_traits.h>
-#include <thrust/detail/tuple_meta_transform.h>
-#include <thrust/detail/tuple_transform.h>
-#include <thrust/tuple.h>
-#include <thrust/detail/tuple_of_iterator_references.h>
-
-namespace thrust
-{
-
-// forward declare zip_iterator for zip_iterator_base
-template <typename IteratorTuple> class zip_iterator;
-
-namespace detail
-{
-
-
-// Functors to be used with tuple algorithms
-//
-template<typename DiffType>
-class advance_iterator
-{
-public:
- inline __host__ __device__
- advance_iterator(DiffType step) : m_step(step) {}
-
- __thrust_exec_check_disable__
-  template<typename Iterator>
- inline __host__ __device__
- void operator()(Iterator& it) const
- { thrust::advance(it, m_step); }
-
-private:
- DiffType m_step;
-}; // end advance_iterator
-
-
-struct increment_iterator
-{
- __thrust_exec_check_disable__
-  template<typename Iterator>
- inline __host__ __device__
- void operator()(Iterator& it)
- { ++it; }
-}; // end increment_iterator
-
-
-struct decrement_iterator
-{
- __thrust_exec_check_disable__
-  template<typename Iterator>
- inline __host__ __device__
- void operator()(Iterator& it)
- { --it; }
-}; // end decrement_iterator
-
-
-struct dereference_iterator
-{
-  template<typename Iterator>
- struct apply
- {
- typedef typename
-      iterator_traits<Iterator>::reference
- type;
- }; // end apply
-
- // XXX silence warnings of the form "calling a __host__ function from a __host__ __device__ function is not allowed
- __thrust_exec_check_disable__
-  template<typename Iterator>
- __host__ __device__
-  typename apply<Iterator>::type operator()(Iterator const& it)
- {
- return *it;
- }
-}; // end dereference_iterator
-
-
-// The namespace tuple_impl_specific provides two meta-
-// algorithms and two algorithms for tuples.
-namespace tuple_impl_specific
-{
-
-// define apply1 for tuple_meta_transform_impl
-template<class UnaryMetaFunctionClass, class Arg>
-  struct apply1
-    : UnaryMetaFunctionClass::template apply<Arg>
-{
-}; // end apply1
-
-
-// define apply2 for tuple_meta_accumulate_impl
-template<class UnaryMetaFunctionClass, class Arg1, class Arg2>
-  struct apply2
-    : UnaryMetaFunctionClass::template apply<Arg1,Arg2>
-{
-}; // end apply2
-
-
-// Meta-accumulate algorithm for tuples. Note: The template
-// parameter StartType corresponds to the initial value in
-// ordinary accumulation.
-//
-template<typename Tuple, class BinaryMetaFun, typename StartType>
- struct tuple_meta_accumulate;
-
-template<
- typename Tuple
- , class BinaryMetaFun
- , typename StartType
->
- struct tuple_meta_accumulate_impl
-{
- typedef typename apply2<
- BinaryMetaFun
- , typename Tuple::head_type
- , typename tuple_meta_accumulate<
- typename Tuple::tail_type
- , BinaryMetaFun
- , StartType
- >::type
- >::type type;
-};
-
-
-template<
- typename Tuple
- , class BinaryMetaFun
- , typename StartType
->
-struct tuple_meta_accumulate
- : thrust::detail::eval_if<
-      thrust::detail::is_same<Tuple, thrust::null_type>::value
-    , thrust::detail::identity_<StartType>
- , tuple_meta_accumulate_impl<
- Tuple
- , BinaryMetaFun
- , StartType
- >
- > // end eval_if
-{
-}; // end tuple_meta_accumulate
-
-
-// transform algorithm for tuples. The template parameter Fun
-// must be a unary functor which is also a unary metafunction
-// class that computes its return type based on its argument
-// type. For example:
-//
-// struct to_ptr
-// {
-// template<class Arg>
-// struct apply
-// {
-// typedef Arg* type;
-// }
-//
-// template<class Arg>
-// Arg* operator()(Arg x);
-// };
-
-
-
-// for_each algorithm for tuples.
-template<typename Fun>
-inline __host__ __device__
-Fun tuple_for_each(thrust::null_type, Fun f)
-{
- return f;
-} // end tuple_for_each()
-
-
-template<typename Tuple, typename Fun>
-inline __host__ __device__
-Fun tuple_for_each(Tuple& t, Fun f)
-{
- f( t.get_head() );
- return tuple_for_each(t.get_tail(), f);
-} // end tuple_for_each()
-
-
-// Equality of tuples. NOTE: "==" for tuples currently (7/2003)
-// has problems under some compilers, so I just do my own.
-// No point in bringing in a bunch of #ifdefs here. This is
-// going to go away with the next tuple implementation anyway.
-//
-__host__ __device__
-inline bool tuple_equal(thrust::null_type, thrust::null_type)
-{ return true; }
-
-
-template<typename Tuple1, typename Tuple2>
-__host__ __device__
-bool tuple_equal(Tuple1 const& t1, Tuple2 const& t2)
-{
- return t1.get_head() == t2.get_head() &&
- tuple_equal(t1.get_tail(), t2.get_tail());
-} // end tuple_equal()
-
-} // end tuple_impl_specific
-
-
-// Metafunction to obtain the type of the tuple whose element types
-// are the value_types of an iterator tuple.
-//
-template<typename IteratorTuple>
- struct tuple_of_value_types
- : tuple_meta_transform<
- IteratorTuple,
- iterator_value
- >
-{
-}; // end tuple_of_value_types
-
-
-struct minimum_category_lambda
-{
-  template<typename T1, typename T2>
-  struct apply : minimum_category<T1,T2>
- {};
-};
-
-
-
-// Metafunction to obtain the minimal traversal tag in a tuple
-// of iterators.
-//
-template<typename IteratorTuple>
-struct minimum_traversal_category_in_iterator_tuple
-{
- typedef typename tuple_meta_transform<
- IteratorTuple
- , thrust::iterator_traversal
- >::type tuple_of_traversal_tags;
-
- typedef typename tuple_impl_specific::tuple_meta_accumulate<
- tuple_of_traversal_tags
- , minimum_category_lambda
- , thrust::random_access_traversal_tag
- >::type type;
-};
-
-
-struct minimum_system_lambda
-{
-  template<typename T1, typename T2>
-  struct apply : minimum_system<T1,T2>
- {};
-};
-
-
-
-// Metafunction to obtain the minimal system tag in a tuple
-// of iterators.
-template<typename IteratorTuple>
-struct minimum_system_in_iterator_tuple
-{
- typedef typename thrust::detail::tuple_meta_transform<
- IteratorTuple,
- thrust::iterator_system
- >::type tuple_of_system_tags;
-
- typedef typename tuple_impl_specific::tuple_meta_accumulate<
- tuple_of_system_tags,
- minimum_system_lambda,
- thrust::any_system_tag
- >::type type;
-};
-
-namespace zip_iterator_base_ns
-{
-
-
-template<int i, typename Tuple>
-  struct tuple_elements_helper
-    : eval_if<
-        (i < tuple_size<Tuple>::value),
-        tuple_element<i,Tuple>,
-        identity_<thrust::null_type>
- >
-{};
-
-
-template<typename Tuple>
- struct tuple_elements
-{
- typedef typename tuple_elements_helper<0,Tuple>::type T0;
- typedef typename tuple_elements_helper<1,Tuple>::type T1;
- typedef typename tuple_elements_helper<2,Tuple>::type T2;
- typedef typename tuple_elements_helper<3,Tuple>::type T3;
- typedef typename tuple_elements_helper<4,Tuple>::type T4;
- typedef typename tuple_elements_helper<5,Tuple>::type T5;
- typedef typename tuple_elements_helper<6,Tuple>::type T6;
- typedef typename tuple_elements_helper<7,Tuple>::type T7;
- typedef typename tuple_elements_helper<8,Tuple>::type T8;
- typedef typename tuple_elements_helper<9,Tuple>::type T9;
-};
-
-
-template<typename IteratorTuple>
- struct tuple_of_iterator_references
-{
- // get a thrust::tuple of the iterators' references
- typedef typename tuple_meta_transform<
- IteratorTuple,
- iterator_reference
- >::type tuple_of_references;
-
- // get at the individual tuple element types by name
-  typedef tuple_elements<tuple_of_references> elements;
-
- // map thrust::tuple to tuple_of_iterator_references
- typedef thrust::detail::tuple_of_iterator_references<
- typename elements::T0,
- typename elements::T1,
- typename elements::T2,
- typename elements::T3,
- typename elements::T4,
- typename elements::T5,
- typename elements::T6,
- typename elements::T7,
- typename elements::T8,
- typename elements::T9
- > type;
-};
-
-
-} // end zip_iterator_base_ns
-
-///////////////////////////////////////////////////////////////////
-//
-// Class zip_iterator_base
-//
-// Builds and exposes the iterator facade type from which the zip
-// iterator will be derived.
-//
-template<typename IteratorTuple>
- struct zip_iterator_base
-{
- //private:
- // reference type is the type of the tuple obtained from the
- // iterators' reference types.
-  typedef typename zip_iterator_base_ns::tuple_of_iterator_references<IteratorTuple>::type reference;
-
- // Boost's Value type is the same as reference type.
- //typedef reference value_type;
-  typedef typename tuple_of_value_types<IteratorTuple>::type value_type;
-
- // Difference type is the first iterator's difference type
- typedef typename thrust::iterator_traits<
- typename thrust::tuple_element<0, IteratorTuple>::type
- >::difference_type difference_type;
-
- // Iterator system is the minimum system tag in the
- // iterator tuple
- typedef typename
-    minimum_system_in_iterator_tuple<IteratorTuple>::type system;
-
- // Traversal category is the minimum traversal category in the
- // iterator tuple
- typedef typename
-    minimum_traversal_category_in_iterator_tuple<IteratorTuple>::type traversal_category;
-
- public:
-
- // The iterator facade type from which the zip iterator will
- // be derived.
- typedef thrust::iterator_facade<
-    zip_iterator<IteratorTuple>,
- value_type,
- system,
- traversal_category,
- reference,
- difference_type
- > type;
-}; // end zip_iterator_base
-
-} // end detail
-
-} // end thrust
-
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/malloc_and_free.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/malloc_and_free.h
deleted file mode 100644
index b1ad7a7341bf701b7f333033059c18b98096c2d7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/malloc_and_free.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits malloc & free
-#include <thrust/system/detail/sequential/malloc_and_free.h>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/uninitialized_copy.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/uninitialized_copy.h
deleted file mode 100644
index 8d916e33ba2a09662839b0ef97277c5e1a671adb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/uninitialized_copy.h
+++ /dev/null
@@ -1,116 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include <thrust/detail/config.h>
-#include <thrust/system/cuda/detail/execution_policy.h>
-#include <thrust/system/cuda/detail/parallel_for.h>
-#include <thrust/system/cuda/detail/util.h>
-#include <thrust/distance.h>
-
-namespace thrust
-{
-
-namespace cuda_cub {
-
-namespace __uninitialized_copy {
-
-  template <class InputIt, class OutputIt>
- struct functor
- {
- InputIt input;
- OutputIt output;
-
-    typedef typename iterator_traits<InputIt>::value_type InputType;
-    typedef typename iterator_traits<OutputIt>::value_type OutputType;
-
- THRUST_FUNCTION
- functor(InputIt input_, OutputIt output_)
- : input(input_), output(output_) {}
-
-    template <class Size>
- void THRUST_DEVICE_FUNCTION operator()(Size idx)
- {
- InputType const &in = raw_reference_cast(input[idx]);
- OutputType & out = raw_reference_cast(output[idx]);
-
-#if defined(__CUDA__) && defined(__clang__)
-          // XXX unsafe, but clang is seemingly unable to call in-place new
- out = in;
-#else
- ::new (static_cast(&out)) OutputType(in);
-#endif
- }
- }; // struct functor
-
-} // namespace __uninitialized_copy
-
-template <class Derived, class InputIt, class Size, class OutputIt>
-OutputIt __host__ __device__
-uninitialized_copy_n(execution_policy<Derived> &policy,
- InputIt first,
- Size count,
- OutputIt result)
-{
-  typedef __uninitialized_copy::functor<InputIt, OutputIt> functor_t;
-
- cuda_cub::parallel_for(policy,
- functor_t(first, result),
- count);
-
- cuda_cub::throw_on_error(
- cuda_cub::synchronize(policy)
- , "uninitialized_copy_n: failed to synchronize"
- );
-
- return result + count;
-}
-
-template <class Derived, class InputIt, class OutputIt>
-OutputIt __host__ __device__
-uninitialized_copy(execution_policy<Derived>& policy,
- InputIt first,
- InputIt last,
- OutputIt result)
-{
- return cuda_cub::uninitialized_copy_n(policy,
- first,
- thrust::distance(first, last),
- result);
-}
-
-} // namespace cuda_cub
-
-} // end namespace thrust
-#endif
diff --git a/spaces/CVPR/WALT/mmdet/datasets/custom.py b/spaces/CVPR/WALT/mmdet/datasets/custom.py
deleted file mode 100644
index 356f01ede6456312920b6fe8fa618258d8898075..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/datasets/custom.py
+++ /dev/null
@@ -1,334 +0,0 @@
-import os.path as osp
-import warnings
-from collections import OrderedDict
-
-import mmcv
-import numpy as np
-from mmcv.utils import print_log
-from torch.utils.data import Dataset
-
-from mmdet.core import eval_map, eval_recalls
-from .builder import DATASETS
-from .pipelines import Compose
-
-
-@DATASETS.register_module()
-class CustomDataset(Dataset):
- """Custom dataset for detection.
-
- The annotation format is shown as follows. The `ann` field is optional for
- testing.
-
- .. code-block:: none
-
- [
- {
- 'filename': 'a.jpg',
- 'width': 1280,
- 'height': 720,
- 'ann': {
- 'bboxes': (n, 4) in (x1, y1, x2, y2) order.
- 'labels': (n, ),
- 'bboxes_ignore': (k, 4), (optional field)
-                'labels_ignore': (k, ) (optional field)
- }
- },
- ...
- ]
-
- Args:
- ann_file (str): Annotation file path.
- pipeline (list[dict]): Processing pipeline.
- classes (str | Sequence[str], optional): Specify classes to load.
- If is None, ``cls.CLASSES`` will be used. Default: None.
- data_root (str, optional): Data root for ``ann_file``,
- ``img_prefix``, ``seg_prefix``, ``proposal_file`` if specified.
- test_mode (bool, optional): If set True, annotation will not be loaded.
- filter_empty_gt (bool, optional): If set true, images without bounding
- boxes of the dataset's classes will be filtered out. This option
- only works when `test_mode=False`, i.e., we never filter images
- during tests.
- """
-
- CLASSES = None
-
- def __init__(self,
- ann_file,
- pipeline,
- classes=None,
- data_root=None,
- img_prefix='',
- seg_prefix=None,
- proposal_file=None,
- test_mode=False,
- filter_empty_gt=True):
- self.ann_file = ann_file
- self.data_root = data_root
- self.img_prefix = img_prefix
- self.seg_prefix = seg_prefix
- self.proposal_file = proposal_file
- self.test_mode = test_mode
- self.filter_empty_gt = filter_empty_gt
- self.CLASSES = self.get_classes(classes)
-
- # join paths if data_root is specified
- if self.data_root is not None:
- if not osp.isabs(self.ann_file):
- self.ann_file = osp.join(self.data_root, self.ann_file)
- if not (self.img_prefix is None or osp.isabs(self.img_prefix)):
- self.img_prefix = osp.join(self.data_root, self.img_prefix)
- if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)):
- self.seg_prefix = osp.join(self.data_root, self.seg_prefix)
- if not (self.proposal_file is None
- or osp.isabs(self.proposal_file)):
- self.proposal_file = osp.join(self.data_root,
- self.proposal_file)
- # load annotations (and proposals)
- self.data_infos = self.load_annotations(self.ann_file)
-
- if self.proposal_file is not None:
- self.proposals = self.load_proposals(self.proposal_file)
- else:
- self.proposals = None
-
- # filter images too small and containing no annotations
- if not test_mode:
- valid_inds = self._filter_imgs()
- self.data_infos = [self.data_infos[i] for i in valid_inds]
- if self.proposals is not None:
- self.proposals = [self.proposals[i] for i in valid_inds]
- # set group flag for the sampler
- self._set_group_flag()
-
- # processing pipeline
- self.pipeline = Compose(pipeline)
-
- def __len__(self):
- """Total number of samples of data."""
- return len(self.data_infos)
-
- def load_annotations(self, ann_file):
- """Load annotation from annotation file."""
- return mmcv.load(ann_file)
-
- def load_proposals(self, proposal_file):
- """Load proposal from proposal file."""
- return mmcv.load(proposal_file)
-
- def get_ann_info(self, idx):
- """Get annotation by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Annotation info of specified index.
- """
-
- return self.data_infos[idx]['ann']
-
- def get_cat_ids(self, idx):
- """Get category ids by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- list[int]: All categories in the image of specified index.
- """
-
-        return self.data_infos[idx]['ann']['labels'].astype(np.int64).tolist()
-
- def pre_pipeline(self, results):
- """Prepare results dict for pipeline."""
- results['img_prefix'] = self.img_prefix
- results['seg_prefix'] = self.seg_prefix
- results['proposal_file'] = self.proposal_file
- results['bbox_fields'] = []
- results['mask_fields'] = []
- results['seg_fields'] = []
-
- def _filter_imgs(self, min_size=32):
- """Filter images too small."""
- if self.filter_empty_gt:
- warnings.warn(
- 'CustomDataset does not support filtering empty gt images.')
- valid_inds = []
- for i, img_info in enumerate(self.data_infos):
- if min(img_info['width'], img_info['height']) >= min_size:
- valid_inds.append(i)
- return valid_inds
-
- def _set_group_flag(self):
- """Set flag according to image aspect ratio.
-
- Images with aspect ratio greater than 1 will be set as group 1,
- otherwise group 0.
- """
- self.flag = np.zeros(len(self), dtype=np.uint8)
- for i in range(len(self)):
- img_info = self.data_infos[i]
- if img_info['width'] / img_info['height'] > 1:
- self.flag[i] = 1
-
- def _rand_another(self, idx):
- """Get another random index from the same group as the given index."""
- pool = np.where(self.flag == self.flag[idx])[0]
- return np.random.choice(pool)
-
- def __getitem__(self, idx):
- """Get training/test data after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Training/test data (with annotation if `test_mode` is set \
- True).
- """
-
-        if self.test_mode:
-            # If a test sample fails to load, fall back to the next index
-            # (wrapping around) instead of aborting the whole evaluation.
-            while True:
-                try:
-                    return self.prepare_test_img(idx)
-                except Exception:
-                    idx = (idx + 1) % len(self)
-        while True:
-            try:
-                data = self.prepare_train_img(idx)
-            except Exception:
-                data = None
-            if data is None:
-                # Loading failed or the sample is invalid; retry with a random
-                # index from the same aspect-ratio group.
-                idx = self._rand_another(idx)
-                continue
-            return data
-
- def prepare_train_img(self, idx):
- """Get training data and annotations after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Training data and annotation after pipeline with new keys \
- introduced by pipeline.
- """
-
- img_info = self.data_infos[idx]
- ann_info = self.get_ann_info(idx)
- results = dict(img_info=img_info, ann_info=ann_info)
- if self.proposals is not None:
- results['proposals'] = self.proposals[idx]
- self.pre_pipeline(results)
- return self.pipeline(results)
-
- def prepare_test_img(self, idx):
- """Get testing data after pipeline.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- dict: Testing data after pipeline with new keys introduced by \
- pipeline.
- """
-
- img_info = self.data_infos[idx]
- results = dict(img_info=img_info)
- if self.proposals is not None:
- results['proposals'] = self.proposals[idx]
- self.pre_pipeline(results)
- return self.pipeline(results)
-
- @classmethod
- def get_classes(cls, classes=None):
- """Get class names of current dataset.
-
- Args:
- classes (Sequence[str] | str | None): If classes is None, use
- default CLASSES defined by builtin dataset. If classes is a
- string, take it as a file name. The file contains the name of
- classes where each line contains one class name. If classes is
- a tuple or list, override the CLASSES defined by the dataset.
-
- Returns:
- tuple[str] or list[str]: Names of categories of the dataset.
- """
- if classes is None:
- return cls.CLASSES
-
- if isinstance(classes, str):
- # take it as a file path
- class_names = mmcv.list_from_file(classes)
- elif isinstance(classes, (tuple, list)):
- class_names = classes
- else:
- raise ValueError(f'Unsupported type {type(classes)} of classes.')
-
- return class_names
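-    # Illustrative usage (not part of the original file), assuming a subclass
-    # that defines CLASSES = ('cat', 'dog'):
-    #
-    #   MyDataset.get_classes(None)          # -> ('cat', 'dog')
-    #   MyDataset.get_classes(['person'])    # -> ['person']
-    #   MyDataset.get_classes('names.txt')   # -> one class name per line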
-
- def format_results(self, results, **kwargs):
-        """Placeholder to format results into dataset-specific outputs."""
-
- def evaluate(self,
- results,
- metric='mAP',
- logger=None,
- proposal_nums=(100, 300, 1000),
- iou_thr=0.5,
- scale_ranges=None):
- """Evaluate the dataset.
-
- Args:
- results (list): Testing results of the dataset.
- metric (str | list[str]): Metrics to be evaluated.
- logger (logging.Logger | None | str): Logger used for printing
- related information during evaluation. Default: None.
- proposal_nums (Sequence[int]): Proposal number used for evaluating
- recalls, such as recall@100, recall@1000.
- Default: (100, 300, 1000).
- iou_thr (float | list[float]): IoU threshold. Default: 0.5.
- scale_ranges (list[tuple] | None): Scale ranges for evaluating mAP.
-                Default: None.
-
-        Returns:
-            OrderedDict: Evaluation results keyed by metric name.
-        """
-
- if not isinstance(metric, str):
- assert len(metric) == 1
- metric = metric[0]
- allowed_metrics = ['mAP', 'recall']
- if metric not in allowed_metrics:
- raise KeyError(f'metric {metric} is not supported')
- annotations = [self.get_ann_info(i) for i in range(len(self))]
- eval_results = OrderedDict()
- iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr
- if metric == 'mAP':
- assert isinstance(iou_thrs, list)
- mean_aps = []
- for iou_thr in iou_thrs:
-                print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}', logger=logger)
- mean_ap, _ = eval_map(
- results,
- annotations,
- scale_ranges=scale_ranges,
- iou_thr=iou_thr,
- dataset=self.CLASSES,
- logger=logger)
- mean_aps.append(mean_ap)
- eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3)
- eval_results['mAP'] = sum(mean_aps) / len(mean_aps)
- elif metric == 'recall':
- gt_bboxes = [ann['bboxes'] for ann in annotations]
- recalls = eval_recalls(
- gt_bboxes, results, proposal_nums, iou_thr, logger=logger)
- for i, num in enumerate(proposal_nums):
- for j, iou in enumerate(iou_thrs):
- eval_results[f'recall@{num}@{iou}'] = recalls[i, j]
- if recalls.shape[1] > 1:
- ar = recalls.mean(axis=1)
- for i, num in enumerate(proposal_nums):
- eval_results[f'AR@{num}'] = ar[i]
- return eval_results
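-
-# Illustrative usage (not part of the original file), assuming `dataset` is a
-# CustomDataset subclass and `results` comes from a detector's test loop:
-#
-#   metrics = dataset.evaluate(results, metric='mAP', iou_thr=[0.5, 0.75])
-#   # -> OrderedDict with 'AP50', 'AP75' and their mean under 'mAP'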
diff --git a/spaces/Chaitanya01/InvestingPlatform/notifier.py b/spaces/Chaitanya01/InvestingPlatform/notifier.py
deleted file mode 100644
index 3d691e9a1abb2ac354eec22068e0e2b2a0581593..0000000000000000000000000000000000000000
--- a/spaces/Chaitanya01/InvestingPlatform/notifier.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import threading
-from config import *
-import requests
-import slack
-import json
-from datetime import datetime
-import time
-arr = []  # per-symbol alert benchmarks, in percent
-
-def symbol_info(req_params, i):
-    url = "https://api.binance.com/api/v3/ticker/24hr"
-    try:
-        val = requests.get(url, params=req_params)
-        data = json.loads(val.text)
-        x = arr[i]
-        if float(data["priceChangePercent"]) >= x:
-            client = slack.WebClient(token=SLACK_TOKEN)
-            client.chat_postMessage(
-                channel="#bot_alerts",
-                text=(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} "
-                      f"{data['symbol']} 24Hchange={float(data['priceChangePercent'])}% "
-                      f"new benchmark {x + 5}%"))
-            # Raise the benchmark so the next alert only fires after a further 5% move.
-            arr[i] = arr[i] + 5
-    except Exception:
-        print("Could not connect")
-
-# Start every symbol at a 20% benchmark.
-for i in range(len(crypto_symbols)):
-    arr.append(20)
-
-while True:
-    for i in range(len(crypto_symbols)):
-        today = datetime.now()
-        # Reset all benchmarks at midnight; use a separate loop variable so
-        # the outer index `i` is not clobbered.
-        if today.hour == 0 and today.minute == 0 and today.second == 0:
-            for j in range(len(crypto_symbols)):
-                arr[j] = 20
-        req_params = dict(symbol=crypto_symbols[i] + "USDT")
-        thread = threading.Thread(target=symbol_info, args=(req_params, i))
-        thread.start()
-        time.sleep(15)
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/friend.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/friend.js
deleted file mode 100644
index ce3c17b40efbf828ac9cea4589bea90795583c73..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/friend.js
+++ /dev/null
@@ -1,22 +0,0 @@
-import cfg from '../../lib/config/config.js'
-import common from '../../lib/common/common.js'
-
-export class friend extends plugin {
- constructor () {
- super({
- name: 'autoFriend',
-      dsc: 'Auto-accept friend requests',
- event: 'request.friend'
- })
- }
-
- async accept() {
-    if (this.e.sub_type === 'add' || this.e.sub_type === 'single') {
-      if (cfg.other.autoFriend == 1) {
-        logger.mark(`[auto-accept][friend request] ${this.e.user_id}`)
- await common.sleep(2000)
- this.e.approve(true)
- }
- }
- }
-}
\ No newline at end of file
diff --git a/spaces/Codecooker/rvcapi/src/infer_pack/commons.py b/spaces/Codecooker/rvcapi/src/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/Codecooker/rvcapi/src/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
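-# Example (illustrative, not part of the original file): F.pad expects padding
-# for the last dimension first, so
-#   convert_pad_shape([[0, 0], [0, 0], [1, 0]]) -> [1, 0, 0, 0, 0, 0]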
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
-    """KL(P || Q) between two diagonal Gaussians given means m_* and log-stds logs_*."""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
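-# Example (illustrative, not part of the original file):
-#   get_timing_signal_1d(100, 64) -> a [1, 64, 100] Transformer-style
-#   sinusoidal positional encoding, sine channels stacked before cosine.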
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
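-# Example (illustrative, not part of the original file): subsequent_mask(3)
-# has shape [1, 1, 3, 3] and equals [[1,0,0],[1,1,0],[1,1,1]], so position t
-# may only attend to positions <= t.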
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
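-# Example (illustrative, not part of the original file):
-#   sequence_mask(torch.tensor([1, 3]), max_length=4)
-#   -> [[True, False, False, False],
-#       [True, True,  True,  False]]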
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
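-# Example (illustrative, not part of the original file): duration [[[2, 1]]]
-# with an all-ones mask yields a [b, 1, t_y, t_x] path in which output frames
-# 0-1 align to input token 0 and frame 2 aligns to token 1.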
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
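-
-# Illustrative usage (not part of the original file): clamp gradients in place
-# after backward() and obtain the pre-clipping gradient norm, assuming `model`
-# is any nn.Module:
-#
-#   grad_norm = clip_grad_value_(model.parameters(), clip_value=1.0)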
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/boundary_head/roi_boundary_predictors.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/boundary_head/roi_boundary_predictors.py
deleted file mode 100644
index 9727592b5ca4d6280a4c017d5501f40f6a0d16d5..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/roi_heads/boundary_head/roi_boundary_predictors.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from maskrcnn_benchmark.layers import Conv2d
-from maskrcnn_benchmark.layers import ConvTranspose2d
-
-from maskrcnn_benchmark import layers
-
-class BOUNDARYRCNNC4Predictor(nn.Module):
- def __init__(self, cfg):
- super(BOUNDARYRCNNC4Predictor, self).__init__()
- dim_reduced = cfg.MODEL.ROI_BOUNDARY_HEAD.CONV_LAYERS[-1]
- self.resol = cfg.MODEL.ROI_BOUNDARY_HEAD.RESOLUTION # 56
-
- if cfg.MODEL.ROI_HEADS.USE_FPN:
- num_inputs = dim_reduced
- else:
- stage_index = 4
- stage2_relative_factor = 2 ** (stage_index - 1)
- res2_out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS #256
- num_inputs = res2_out_channels * stage2_relative_factor
-
- self.bo_input_xy = Conv2d(num_inputs, num_inputs, 1, 1, 0)
- nn.init.kaiming_normal_(self.bo_input_xy.weight,
- mode='fan_out', nonlinearity='relu')
- nn.init.constant_(self.bo_input_xy.bias, 0)
-
- self.conv5_bo_xy = ConvTranspose2d(num_inputs, dim_reduced, 2, 2, 0)
- nn.init.kaiming_normal_(self.conv5_bo_xy.weight,
- mode='fan_out', nonlinearity='relu')
- nn.init.constant_(self.conv5_bo_xy.bias, 0)
-
- self.bo_input_1_1 = Conv2d(dim_reduced, dim_reduced, 1, 1, 0)
- nn.init.kaiming_normal_(self.bo_input_1_1.weight,
- mode='fan_out', nonlinearity='relu')
- nn.init.constant_(self.bo_input_1_1.bias, 0)
-
- self.bo_input_2_1 = Conv2d(dim_reduced, dim_reduced, 1, 1, 0)
- nn.init.kaiming_normal_(self.bo_input_2_1.weight,
- mode='fan_out', nonlinearity='relu')
- nn.init.constant_(self.bo_input_2_1.bias, 0)
-
- self.conv5_bo_x = Conv2d(dim_reduced, 1, (3, 1), 1, (1,0)) # H W
- nn.init.kaiming_normal_(self.conv5_bo_x.weight,
- mode='fan_out', nonlinearity='relu') # 'relu'
- nn.init.constant_(self.conv5_bo_x.bias, 0)
-
- self.conv5_bo_y = Conv2d(dim_reduced, 1, (1, 3), 1, (0,1)) # H W
- nn.init.kaiming_normal_(self.conv5_bo_y.weight,
- mode='fan_out', nonlinearity='relu')
- nn.init.constant_(self.conv5_bo_y.bias, 0)
- self.up_scale=2
-
-
- def forward(self, ft):
- ft = self.bo_input_xy(ft)
- ft_2x = self.conv5_bo_xy(ft)
-
- ft_2x = layers.interpolate(ft_2x, size = (48,48), mode='bilinear', align_corners=True)
-
- x = self.bo_input_1_1(ft_2x)
- y = self.bo_input_2_1(ft_2x)
-
- x = self.conv5_bo_x(x)
- y = self.conv5_bo_y(y)
-
- return x, y
-
-
-
-_ROI_KE_PREDICTOR = {"BoundaryRCNNC4Predictor": BOUNDARYRCNNC4Predictor}
-
-
-def make_roi_boundary_predictor(cfg):
- func = _ROI_KE_PREDICTOR[cfg.MODEL.ROI_BOUNDARY_HEAD.PREDICTOR]
- return func(cfg)
diff --git a/spaces/DHEIVER/Kidney_Image_Classifier/README.md b/spaces/DHEIVER/Kidney_Image_Classifier/README.md
deleted file mode 100644
index 3431572d70b55040ed838e932804d8784cf9df8f..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/Kidney_Image_Classifier/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Kidney Disease Classification CT Scan
-emoji: 🏢
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-duplicated_from: ahmedxeno/kidney_disease_classification_CT_scan
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-390bcf9f.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-390bcf9f.js
deleted file mode 100644
index eaa2c0d41ad677012f6b49b3e086e84e92e0db5f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-390bcf9f.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as v,e as g,s as d,a9 as r,N as q,K as o,as as h,U as f,p as b,ab as w,ac as R,ad as j,z as C,v as S,A as z}from"./index-1d65707a.js";function A(i){let e,_,s;const u=i[6].default,a=r(u,i,i[5],null);return{c(){e=q("div"),a&&a.c(),o(e,"id",i[1]),o(e,"class",_=h(i[2].join(" "))+" svelte-15lo0d8"),f(e,"compact",i[4]==="compact"),f(e,"panel",i[4]==="panel"),f(e,"unequal-height",i[0]===!1),f(e,"stretch",i[0]),f(e,"hide",!i[3])},m(l,t){b(l,e,t),a&&a.m(e,null),s=!0},p(l,[t]){a&&a.p&&(!s||t&32)&&w(a,u,l,l[5],s?j(u,l[5],t,null):R(l[5]),null),(!s||t&2)&&o(e,"id",l[1]),(!s||t&4&&_!==(_=h(l[2].join(" "))+" svelte-15lo0d8"))&&o(e,"class",_),(!s||t&20)&&f(e,"compact",l[4]==="compact"),(!s||t&20)&&f(e,"panel",l[4]==="panel"),(!s||t&5)&&f(e,"unequal-height",l[0]===!1),(!s||t&5)&&f(e,"stretch",l[0]),(!s||t&12)&&f(e,"hide",!l[3])},i(l){s||(C(a,l),s=!0)},o(l){S(a,l),s=!1},d(l){l&&z(e),a&&a.d(l)}}}function K(i,e,_){let{$$slots:s={},$$scope:u}=e,{equal_height:a=!0}=e,{elem_id:l}=e,{elem_classes:t=[]}=e,{visible:m=!0}=e,{variant:c="default"}=e;return i.$$set=n=>{"equal_height"in n&&_(0,a=n.equal_height),"elem_id"in n&&_(1,l=n.elem_id),"elem_classes"in n&&_(2,t=n.elem_classes),"visible"in n&&_(3,m=n.visible),"variant"in n&&_(4,c=n.variant),"$$scope"in n&&_(5,u=n.$$scope)},[a,l,t,m,c,u,s]}class N extends v{constructor(e){super(),g(this,e,K,A,d,{equal_height:0,elem_id:1,elem_classes:2,visible:3,variant:4})}}const k=N,B=["static"];export{k as Component,B as modes};
-//# sourceMappingURL=index-390bcf9f.js.map
diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/routes/settings/+page.server.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/routes/settings/+page.server.ts
deleted file mode 100644
index 315d474f5f893140ed8a2948f50087f6c1f6dc09..0000000000000000000000000000000000000000
--- a/spaces/DaFujaTyping/hf-Chat-ui/src/routes/settings/+page.server.ts
+++ /dev/null
@@ -1,45 +0,0 @@
-import { base } from "$app/paths";
-import { collections } from "$lib/server/database";
-import { redirect } from "@sveltejs/kit";
-import { z } from "zod";
-import { defaultModel, models } from "$lib/server/models";
-import { validateModel } from "$lib/utils/models.js";
-
-export const actions = {
- default: async function ({ request, locals }) {
- const formData = await request.formData();
-
- const { ethicsModalAccepted, ...settings } = z
- .object({
- shareConversationsWithModelAuthors: z.boolean({ coerce: true }).default(true),
- ethicsModalAccepted: z.boolean({ coerce: true }).optional(),
- activeModel: validateModel(models),
- })
- .parse({
- shareConversationsWithModelAuthors: formData.get("shareConversationsWithModelAuthors"),
- ethicsModalAccepted: formData.get("ethicsModalAccepted"),
- activeModel: formData.get("activeModel") ?? defaultModel.id,
- });
-
- await collections.settings.updateOne(
- {
- sessionId: locals.sessionId,
- },
- {
- $set: {
- ...settings,
- ...(ethicsModalAccepted && { ethicsModalAcceptedAt: new Date() }),
- updatedAt: new Date(),
- },
- $setOnInsert: {
- createdAt: new Date(),
- },
- },
- {
- upsert: true,
- }
- );
-
- throw redirect(303, request.headers.get("referer") || base || "/");
- },
-};
diff --git a/spaces/DanLeBossDeESGI/Musica/README.md b/spaces/DanLeBossDeESGI/Musica/README.md
deleted file mode 100644
index 6357123bb119a1f427dd178f11881f2d174b8aa1..0000000000000000000000000000000000000000
--- a/spaces/DanLeBossDeESGI/Musica/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Musica
-emoji: 😻
-colorFrom: gray
-colorTo: red
-sdk: streamlit
-sdk_version: 1.26.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Detomo/ai-avatar-backend/helpers/blendshapeNames.js b/spaces/Detomo/ai-avatar-backend/helpers/blendshapeNames.js
deleted file mode 100644
index 96432a4e821cf0ee4838bc25aae66d46bc253cb6..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-avatar-backend/helpers/blendshapeNames.js
+++ /dev/null
@@ -1,55 +0,0 @@
-module.exports = ["eyeBlinkLeft",
-"eyeLookDownLeft",
-"eyeLookInLeft",
-"eyeLookOutLeft",
-"eyeLookUpLeft",
-"eyeSquintLeft",
-"eyeWideLeft",
-"eyeBlinkRight",
-"eyeLookDownRight",
-"eyeLookInRight",
-"eyeLookOutRight",
-"eyeLookUpRight",
-"eyeSquintRight",
-"eyeWideRight",
-"jawForward",
-"jawLeft",
-"jawRight",
-"jawOpen",
-"mouthClose",
-"mouthFunnel",
-"mouthPucker",
-"mouthLeft",
-"mouthRight",
-"mouthSmileLeft",
-"mouthSmileRight",
-"mouthFrownLeft",
-"mouthFrownRight",
-"mouthDimpleLeft",
-"mouthDimpleRight",
-"mouthStretchLeft",
-"mouthStretchRight",
-"mouthRollLower",
-"mouthRollUpper",
-"mouthShrugLower",
-"mouthShrugUpper",
-"mouthPressLeft",
-"mouthPressRight",
-"mouthLowerDownLeft",
-"mouthLowerDownRight",
-"mouthUpperUpLeft",
-"mouthUpperUpRight",
-"browDownLeft",
-"browDownRight",
-"browInnerUp",
-"browOuterUpLeft",
-"browOuterUpRight",
-"cheekPuff",
-"cheekSquintLeft",
-"cheekSquintRight",
-"noseSneerLeft",
-"noseSneerRight",
-"tongueOut",
-"headRoll",
-"leftEyeRoll",
-"rightEyeRoll"]
\ No newline at end of file
diff --git a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/filtered_lrelu.py b/spaces/DragGan/DragGan-Inversion/torch_utils/ops/filtered_lrelu.py
deleted file mode 100644
index 1a77f7e7a0ed0e951435cf6c7171d1baac8cf834..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/filtered_lrelu.py
+++ /dev/null
@@ -1,307 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import numpy as np
-import torch
-import warnings
-
-from .. import custom_ops
-from .. import misc
-from . import upfirdn2d
-from . import bias_act
-
-# ----------------------------------------------------------------------------
-
-_plugin = None
-
-
-def _init():
- global _plugin
- if _plugin is None:
- _plugin = custom_ops.get_plugin(
- module_name='filtered_lrelu_plugin',
- sources=['filtered_lrelu.cpp', 'filtered_lrelu_wr.cu',
- 'filtered_lrelu_rd.cu', 'filtered_lrelu_ns.cu'],
- headers=['filtered_lrelu.h', 'filtered_lrelu.cu'],
- source_dir=os.path.dirname(__file__),
- extra_cuda_cflags=['--use_fast_math',
- '--allow-unsupported-compiler'],
- )
- return True
-
-
-def _get_filter_size(f):
- if f is None:
- return 1, 1
- assert isinstance(f, torch.Tensor)
- assert 1 <= f.ndim <= 2
- return f.shape[-1], f.shape[0] # width, height
-
-
-def _parse_padding(padding):
- if isinstance(padding, int):
- padding = [padding, padding]
- assert isinstance(padding, (list, tuple))
- assert all(isinstance(x, (int, np.integer)) for x in padding)
- padding = [int(x) for x in padding]
- if len(padding) == 2:
- px, py = padding
- padding = [px, px, py, py]
- px0, px1, py0, py1 = padding
- return px0, px1, py0, py1
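-# Examples (illustrative, not part of the original file):
-#   _parse_padding(1)      -> (1, 1, 1, 1)
-#   _parse_padding([2, 3]) -> (2, 2, 3, 3)   # [x, y] -> [x0, x1, y0, y1]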
-
-# ----------------------------------------------------------------------------
-
-
-def filtered_lrelu(x, fu=None, fd=None, b=None, up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False, impl='cuda'):
- r"""Filtered leaky ReLU for a batch of 2D images.
-
- Performs the following sequence of operations for each channel:
-
- 1. Add channel-specific bias if provided (`b`).
-
- 2. Upsample the image by inserting N-1 zeros after each pixel (`up`).
-
- 3. Pad the image with the specified number of zeros on each side (`padding`).
- Negative padding corresponds to cropping the image.
-
- 4. Convolve the image with the specified upsampling FIR filter (`fu`), shrinking it
- so that the footprint of all output pixels lies within the input image.
-
- 5. Multiply each value by the provided gain factor (`gain`).
-
- 6. Apply leaky ReLU activation function to each value.
-
- 7. Clamp each value between -clamp and +clamp, if `clamp` parameter is provided.
-
- 8. Convolve the image with the specified downsampling FIR filter (`fd`), shrinking
- it so that the footprint of all output pixels lies within the input image.
-
- 9. Downsample the image by keeping every Nth pixel (`down`).
-
- The fused op is considerably more efficient than performing the same calculation
- using standard PyTorch ops. It supports gradients of arbitrary order.
-
- Args:
- x: Float32/float16/float64 input tensor of the shape
- `[batch_size, num_channels, in_height, in_width]`.
- fu: Float32 upsampling FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- fd: Float32 downsampling FIR filter of the shape
- `[filter_height, filter_width]` (non-separable),
- `[filter_taps]` (separable), or
- `None` (identity).
- b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type
-                     as `x`. The length of the vector must match the channel dimension of `x`.
- up: Integer upsampling factor (default: 1).
-        down:        Integer downsampling factor (default: 1).
- padding: Padding with respect to the upsampled image. Can be a single number
- or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- gain: Overall scaling factor for signal magnitude (default: sqrt(2)).
- slope: Slope on the negative side of leaky ReLU (default: 0.2).
- clamp: Maximum magnitude for leaky ReLU output (default: None).
- flip_filter: False = convolution, True = correlation (default: False).
- impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- assert isinstance(x, torch.Tensor)
- assert impl in ['ref', 'cuda']
- if impl == 'cuda' and x.device.type == 'cuda' and _init():
- return _filtered_lrelu_cuda(up=up, down=down, padding=padding, gain=gain, slope=slope, clamp=clamp, flip_filter=flip_filter).apply(x, fu, fd, b, None, 0, 0)
- return _filtered_lrelu_ref(x, fu=fu, fd=fd, b=b, up=up, down=down, padding=padding, gain=gain, slope=slope, clamp=clamp, flip_filter=flip_filter)
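-
-# Illustrative usage (not part of the original file); `setup_filter` is assumed
-# to be available in the sibling `upfirdn2d` module, and `x` to be an NCHW
-# float tensor. `impl='ref'` avoids the compiled CUDA plugin:
-#
-#   f = upfirdn2d.setup_filter([1, 3, 3, 1])
-#   y = filtered_lrelu(x, fu=f, fd=f, up=2, down=2, padding=1, impl='ref')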
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def _filtered_lrelu_ref(x, fu=None, fd=None, b=None, up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False):
- """Slow and memory-inefficient reference implementation of `filtered_lrelu()` using
-    existing `upfirdn2d()` and `bias_act()` ops.
- """
- assert isinstance(x, torch.Tensor) and x.ndim == 4
- fu_w, fu_h = _get_filter_size(fu)
- fd_w, fd_h = _get_filter_size(fd)
- if b is not None:
- assert isinstance(b, torch.Tensor) and b.dtype == x.dtype
- misc.assert_shape(b, [x.shape[1]])
- assert isinstance(up, int) and up >= 1
- assert isinstance(down, int) and down >= 1
- px0, px1, py0, py1 = _parse_padding(padding)
- assert gain == float(gain) and gain > 0
- assert slope == float(slope) and slope >= 0
- assert clamp is None or (clamp == float(clamp) and clamp >= 0)
-
- # Calculate output size.
- batch_size, channels, in_h, in_w = x.shape
- in_dtype = x.dtype
- out_w = (in_w * up + (px0 + px1) - (fu_w - 1) -
- (fd_w - 1) + (down - 1)) // down
- out_h = (in_h * up + (py0 + py1) - (fu_h - 1) -
- (fd_h - 1) + (down - 1)) // down
-
- # Compute using existing ops.
- x = bias_act.bias_act(x=x, b=b) # Apply bias.
- # Upsample.
- x = upfirdn2d.upfirdn2d(x=x, f=fu, up=up, padding=[
- px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter)
- # Bias, leaky ReLU, clamp.
- x = bias_act.bias_act(x=x, act='lrelu', alpha=slope,
- gain=gain, clamp=clamp)
- # Downsample.
- x = upfirdn2d.upfirdn2d(x=x, f=fd, down=down, flip_filter=flip_filter)
-
- # Check output shape & dtype.
- misc.assert_shape(x, [batch_size, channels, out_h, out_w])
- assert x.dtype == in_dtype
- return x
-
-# ----------------------------------------------------------------------------
-
-
-_filtered_lrelu_cuda_cache = dict()
-
-
-def _filtered_lrelu_cuda(up=1, down=1, padding=0, gain=np.sqrt(2), slope=0.2, clamp=None, flip_filter=False):
- """Fast CUDA implementation of `filtered_lrelu()` using custom ops.
- """
- assert isinstance(up, int) and up >= 1
- assert isinstance(down, int) and down >= 1
- px0, px1, py0, py1 = _parse_padding(padding)
- assert gain == float(gain) and gain > 0
- gain = float(gain)
- assert slope == float(slope) and slope >= 0
- slope = float(slope)
- assert clamp is None or (clamp == float(clamp) and clamp >= 0)
- clamp = float(clamp if clamp is not None else 'inf')
-
- # Lookup from cache.
- key = (up, down, px0, px1, py0, py1, gain, slope, clamp, flip_filter)
- if key in _filtered_lrelu_cuda_cache:
- return _filtered_lrelu_cuda_cache[key]
-
- # Forward op.
- class FilteredLReluCuda(torch.autograd.Function):
- @staticmethod
- def forward(ctx, x, fu, fd, b, si, sx, sy): # pylint: disable=arguments-differ
- assert isinstance(x, torch.Tensor) and x.ndim == 4
-
- # Replace empty up/downsample kernels with full 1x1 kernels (faster than separable).
- if fu is None:
- fu = torch.ones([1, 1], dtype=torch.float32, device=x.device)
- if fd is None:
- fd = torch.ones([1, 1], dtype=torch.float32, device=x.device)
- assert 1 <= fu.ndim <= 2
- assert 1 <= fd.ndim <= 2
-
- # Replace separable 1x1 kernels with full 1x1 kernels when scale factor is 1.
- if up == 1 and fu.ndim == 1 and fu.shape[0] == 1:
- fu = fu.square()[None]
- if down == 1 and fd.ndim == 1 and fd.shape[0] == 1:
- fd = fd.square()[None]
-
- # Missing sign input tensor.
- if si is None:
- si = torch.empty([0])
-
- # Missing bias tensor.
- if b is None:
- b = torch.zeros([x.shape[1]], dtype=x.dtype, device=x.device)
-
- # Construct internal sign tensor only if gradients are needed.
- write_signs = (si.numel() == 0) and (
- x.requires_grad or b.requires_grad)
-
- # Warn if input storage strides are not in decreasing order due to e.g. channels-last layout.
- strides = [x.stride(i) for i in range(x.ndim) if x.size(i) > 1]
- if any(a < b for a, b in zip(strides[:-1], strides[1:])):
- warnings.warn(
- "low-performance memory layout detected in filtered_lrelu input", RuntimeWarning)
-
- # Call C++/Cuda plugin if datatype is supported.
- if x.dtype in [torch.float16, torch.float32]:
- if torch.cuda.current_stream(x.device) != torch.cuda.default_stream(x.device):
- warnings.warn(
- "filtered_lrelu called with non-default cuda stream but concurrent execution is not supported", RuntimeWarning)
- y, so, return_code = _plugin.filtered_lrelu(
- x, fu, fd, b, si, up, down, px0, px1, py0, py1, sx, sy, gain, slope, clamp, flip_filter, write_signs)
- else:
- return_code = -1
-
- # No Cuda kernel found? Fall back to generic implementation. Still more memory efficient than the reference implementation because
- # only the bit-packed sign tensor is retained for gradient computation.
- if return_code < 0:
- warnings.warn(
- "filtered_lrelu called with parameters that have no optimized CUDA kernel, using generic fallback", RuntimeWarning)
-
- y = x.add(b.unsqueeze(-1).unsqueeze(-1)) # Add bias.
- # Upsample.
- y = upfirdn2d.upfirdn2d(x=y, f=fu, up=up, padding=[
- px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter)
- # Activation function and sign handling. Modifies y in-place.
- so = _plugin.filtered_lrelu_act_(
- y, si, sx, sy, gain, slope, clamp, write_signs)
- # Downsample.
- y = upfirdn2d.upfirdn2d(
- x=y, f=fd, down=down, flip_filter=flip_filter)
-
- # Prepare for gradient computation.
- ctx.save_for_backward(fu, fd, (si if si.numel() else so))
- ctx.x_shape = x.shape
- ctx.y_shape = y.shape
- ctx.s_ofs = sx, sy
- return y
-
- @staticmethod
- def backward(ctx, dy): # pylint: disable=arguments-differ
- fu, fd, si = ctx.saved_tensors
- _, _, xh, xw = ctx.x_shape
- _, _, yh, yw = ctx.y_shape
- sx, sy = ctx.s_ofs
- dx = None # 0
- dfu = None
- assert not ctx.needs_input_grad[1]
- dfd = None
- assert not ctx.needs_input_grad[2]
- db = None # 3
- dsi = None
- assert not ctx.needs_input_grad[4]
- dsx = None
- assert not ctx.needs_input_grad[5]
- dsy = None
- assert not ctx.needs_input_grad[6]
-
- if ctx.needs_input_grad[0] or ctx.needs_input_grad[3]:
- pp = [
- (fu.shape[-1] - 1) + (fd.shape[-1] - 1) - px0,
- xw * up - yw * down + px0 - (up - 1),
- (fu.shape[0] - 1) + (fd.shape[0] - 1) - py0,
- xh * up - yh * down + py0 - (up - 1),
- ]
- gg = gain * (up ** 2) / (down ** 2)
- ff = (not flip_filter)
- sx = sx - (fu.shape[-1] - 1) + px0
- sy = sy - (fu.shape[0] - 1) + py0
- dx = _filtered_lrelu_cuda(up=down, down=up, padding=pp, gain=gg, slope=slope,
- clamp=None, flip_filter=ff).apply(dy, fd, fu, None, si, sx, sy)
-
- if ctx.needs_input_grad[3]:
- db = dx.sum([0, 2, 3])
-
- return dx, dfu, dfd, db, dsi, dsx, dsy
-
- # Add to cache.
- _filtered_lrelu_cuda_cache[key] = FilteredLReluCuda
- return FilteredLReluCuda
-
-# ----------------------------------------------------------------------------
diff --git a/spaces/ElainaFanBoy/MusicGen/tests/data/test_audio_utils.py b/spaces/ElainaFanBoy/MusicGen/tests/data/test_audio_utils.py
deleted file mode 100644
index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/tests/data/test_audio_utils.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import julius
-import torch
-import pytest
-
-from audiocraft.data.audio_utils import (
- _clip_wav,
- convert_audio_channels,
- convert_audio,
- normalize_audio
-)
-from ..common_utils import get_batch_white_noise
-
-
-class TestConvertAudioChannels:
-
- def test_convert_audio_channels_downmix(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=2)
- assert list(mixed.shape) == [b, 2, t]
-
- def test_convert_audio_channels_nochange(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=c)
- assert list(mixed.shape) == list(audio.shape)
-
- def test_convert_audio_channels_upmix(self):
- b, c, t = 2, 1, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=3)
- assert list(mixed.shape) == [b, 3, t]
-
- def test_convert_audio_channels_upmix_error(self):
- b, c, t = 2, 2, 100
- audio = get_batch_white_noise(b, c, t)
- with pytest.raises(ValueError):
- convert_audio_channels(audio, channels=3)
-
-
-class TestConvertAudio:
-
- def test_convert_audio_channels_downmix(self):
- b, c, dur = 2, 3, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2)
- assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]]
-
- def test_convert_audio_channels_upmix(self):
- b, c, dur = 2, 1, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3)
- assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]]
-
- def test_convert_audio_upsample(self):
- b, c, dur = 2, 1, 4.
- sr = 2
- new_sr = 3
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
- def test_convert_audio_resample(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- new_sr = 2
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
-
-class TestNormalizeAudio:
-
- def test_clip_wav(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- _clip_wav(audio)
- assert audio.abs().max() <= 1
-
- def test_normalize_audio_clip(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='clip')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_rms(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='rms')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_peak(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='peak')
- assert norm_audio.abs().max() <= 1
diff --git a/spaces/EyanAn/vits-uma-genshin-honkai/attentions.py b/spaces/EyanAn/vits-uma-genshin-honkai/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/EyanAn/vits-uma-genshin-honkai/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-        # Concat extra elements so that it adds up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/FantasticGNU/AnomalyGPT/model/agent.py b/spaces/FantasticGNU/AnomalyGPT/model/agent.py
deleted file mode 100644
index a5199b6497f739f5800dbd38daade944459457e8..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/model/agent.py
+++ /dev/null
@@ -1,81 +0,0 @@
-from header import *
-
-class DeepSpeedAgent:
-
- def __init__(self, model, args):
- super(DeepSpeedAgent, self).__init__()
- self.args = args
- self.model = model
- self.load_stage_1_parameters(args["delta_ckpt_path"])
-
- for name, param in self.model.named_parameters():
- param.requires_grad = False
-
- for name, param in self.model.image_decoder.named_parameters():
- param.requires_grad = True
-
- for name, param in self.model.prompt_learner.named_parameters():
- param.requires_grad = True
-
- # load config parameters of deepspeed
- ds_params = json.load(open(self.args['ds_config_path']))
- ds_params['scheduler']['params']['total_num_steps'] = self.args['total_steps']
- ds_params['scheduler']['params']['warmup_num_steps'] = max(10, int(self.args['total_steps'] * self.args['warmup_rate']))
- self.ds_engine, self.optimizer, _ , _ = deepspeed.initialize(
- model=self.model,
- model_parameters=self.model.parameters(),
- config_params=ds_params,
- dist_init_required=True,
- args=types.SimpleNamespace(**args)
- )
-
- @torch.no_grad()
- def predict(self, batch):
- self.model.eval()
- string = self.model.generate_one_sample(batch)
- return string
-
- def train_model(self, batch, current_step=0, pbar=None):
- self.ds_engine.module.train()
- loss, mle_acc = self.ds_engine(batch)
-
- self.ds_engine.backward(loss)
- self.ds_engine.step()
- pbar.set_description(f'[!] loss: {round(loss.item(), 4)}; token_acc: {round(mle_acc*100, 2)}')
- pbar.update(1)
- if self.args['local_rank'] == 0 and self.args['log_path'] and current_step % self.args['logging_step'] == 0:
- elapsed = pbar.format_dict['elapsed']
- rate = pbar.format_dict['rate']
- remaining = (pbar.total - pbar.n) / rate if rate and pbar.total else 0
- remaining = str(datetime.timedelta(seconds=remaining))
- logging.info(f'[!] progress: {round(pbar.n/pbar.total, 5)}; remaining time: {remaining}; loss: {round(loss.item(), 4)}; token_acc: {round(mle_acc*100, 2)}')
-
- mle_acc *= 100
- return mle_acc
-
- def save_model(self, path, current_step):
-        # Only save trainable model parameters.
-        checkpoint = OrderedDict()
-        for k, v in self.ds_engine.module.named_parameters():
-            if v.requires_grad:
-                checkpoint[k] = v
-        torch.save(checkpoint, f'{path}/pytorch_model.pt')
- # save tokenizer
- self.model.llama_tokenizer.save_pretrained(path)
- # save configuration
- self.model.llama_model.config.save_pretrained(path)
- print(f'[!] save model into {path}')
-
- def load_stage_1_parameters(self, path):
- delta_ckpt = torch.load(path, map_location=torch.device('cpu'))
- self.model.load_state_dict(delta_ckpt, strict=False)
diff --git a/spaces/Frankapp/bingai/Dockerfile b/spaces/Frankapp/bingai/Dockerfile
deleted file mode 100644
index c9e1a0107fd8a62a46a78796ab01e4109eb683e8..0000000000000000000000000000000000000000
--- a/spaces/Frankapp/bingai/Dockerfile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Add git so the project can be cloned from GitHub
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the cloned project directory
-WORKDIR /workspace/app
-
-# Build the Go project. -ldflags="-s -w" shrinks the compiled binary
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set the environment variable. This is a random string that only enables chat;
-# to use image generation, replace it with your own token
-ENV Go_Proxy_BingAI_USER_TOKEN_1="1rd3l3wy1i_ONyrF_wE1fXCWb9_T1NgqppNOmY77Xz0W6okats_7RK_mOO_qZ3rcWWf0qUkED7TkyzAwV7pwXBN1SIxzqY07S8us6LgCGx7MeI0WBJ5HatLQpZ02uJb_ZMRgzxarl_K_LFkAQwbIeYreMdVRd96F96us_hwVjB6wgQDK6eLhCKBQ8awUgY45s2rlUjQcf-LRGWPExgYyG7wjjjj"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/attentions.py b/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/attentions.py
deleted file mode 100644
index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from lib.infer_pack import commons
-from lib.infer_pack import modules
-from lib.infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=10,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
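Stripped of masking, dropout, and the relative-position terms, the attention above reduces to standard scaled dot-product attention:

```python
# Scaled dot-product attention reduced to its core, matching the
# [b, n_h, t, d_k] shapes used in attention() above.
import math
import torch
import torch.nn.functional as F

b, n_h, t, d_k = 2, 4, 10, 16
q, k, v = (torch.randn(b, n_h, t, d_k) for _ in range(3))

scores = torch.matmul(q / math.sqrt(d_k), k.transpose(-2, -1))  # [b, n_h, t, t]
output = torch.matmul(F.softmax(scores, dim=-1), v)             # [b, n_h, t, d_k]
print(output.shape)
```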
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
-        # Concat extra elements so that it adds up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
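The pad-and-reshape trick is easiest to follow on a tiny tensor; this sketch replays it with plain `F.pad` size tuples in place of `commons.convert_pad_shape`:

```python
# Replaying _relative_position_to_absolute_position on a tiny tensor,
# with plain F.pad size tuples instead of commons.convert_pad_shape.
import torch
import torch.nn.functional as F

b, h, l = 1, 1, 3
x = torch.arange(b * h * l * (2 * l - 1), dtype=torch.float32).view(b, h, l, 2 * l - 1)

x = F.pad(x, (0, 1))                    # pad a column: [b, h, l, 2*l]
x_flat = x.view(b, h, l * 2 * l)        # flatten the last two dims
x_flat = F.pad(x_flat, (0, l - 1))      # tail padding induces the shift
x_final = x_flat.view(b, h, l + 1, 2 * l - 1)[:, :, :l, l - 1:]
print(x_final.shape)                    # torch.Size([1, 1, 3, 3])
```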
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
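Evaluating the bias for a length of 3 makes its shape concrete:

```python
# The proximal bias for length 3: zero on the diagonal and increasingly
# negative with distance, so attention favors nearby positions.
import torch

r = torch.arange(3, dtype=torch.float32)
diff = r.unsqueeze(0) - r.unsqueeze(1)
print(-torch.log1p(torch.abs(diff)))
# tensor([[ 0.0000, -0.6931, -1.0986],
#         [-0.6931,  0.0000, -0.6931],
#         [-1.0986, -0.6931,  0.0000]])
```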
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
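Both padding schemes preserve sequence length through a width-k convolution; they differ only in where the k-1 padding elements go:

```python
# Causal vs. same padding for a width-k Conv1d: both keep the output
# length equal to the input length, but causal padding never lets a
# position see future timesteps.
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 10)  # [batch, channels, time]
k = 5
conv = torch.nn.Conv1d(8, 8, k)

causal = conv(F.pad(x, (k - 1, 0)))            # all padding on the left
same = conv(F.pad(x, ((k - 1) // 2, k // 2)))  # split across both sides
print(causal.shape, same.shape)                # both torch.Size([1, 8, 10])
```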
diff --git a/spaces/GXSA/bingo/src/components/chat-list.tsx b/spaces/GXSA/bingo/src/components/chat-list.tsx
deleted file mode 100644
index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/components/chat-list.tsx
+++ /dev/null
@@ -1,28 +0,0 @@
-import React from 'react'
-
-import { Separator } from '@/components/ui/separator'
-import { ChatMessage } from '@/components/chat-message'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-
-export interface ChatList {
- messages: ChatMessageModel[]
-}
-
-export function ChatList({ messages }: ChatList) {
- if (!messages.length) {
- return null
- }
-
- return (
-
-This is a demo showing the capabilities of the OctoCoder model: it generates code by following the instructions provided in the input.
-
-OctoCoder is an instruction-tuned model with 15.5B parameters, created by finetuning StarCoder on CommitPackFT & OASST.
-
-"""
-disclaimer = """⚠️Any use or sharing of this demo constitutes your acceptance of the BigCode [OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) License Agreement and the use restrictions included within.\
- **Intended Use**: this app and its [supporting model](https://huggingface.co/bigcode) are provided for demonstration purposes; not as a replacement for human expertise. For more details on the model's limitations in terms of factuality and biases, see the [model card.](https://huggingface.co/bigcode)"""
-
-examples = [
- ['Please write a function in Python that performs bubble sort.', 256],
- ['''Explain the following piece of code
-def count_unique(s):
- s = s.lower()
- s_split = list(s)
- valid_chars = [char for char in s_split if char.isalpha() or char == " "]
- valid_sentence = "".join(valid_chars)
- uniques = set(valid_sentence.split(" "))
- return len(uniques)''', 512],
- [
- 'Write an efficient Python function that takes a given text and returns its Morse code equivalent without using any third party library',
- 512],
-    ['Write HTML and CSS code to render a clock', 8000],
-]
-
-with gr.Blocks(theme=theme, analytics_enabled=False, css=css) as demo:
- with gr.Column():
- gr.Markdown(description)
- with gr.Row():
- with gr.Column():
- with gr.Accordion("Settings", open=True):
- with gr.Row():
- column_1, column_2 = gr.Column(), gr.Column()
- with column_1:
- temperature = gr.Slider(
- label="Temperature",
- value=0.2,
- minimum=0.0,
- maximum=1.0,
- step=0.05,
- interactive=True,
- info="Higher values produce more diverse outputs",
- )
- max_new_tokens = gr.Slider(
- label="Max new tokens",
- value=256,
- minimum=0,
- maximum=8192,
- step=64,
- interactive=True,
-                        info="The maximum number of new tokens",
- )
- with column_2:
- top_p = gr.Slider(
- label="Top-p (nucleus sampling)",
- value=0.90,
- minimum=0.0,
- maximum=1,
- step=0.05,
- interactive=True,
- info="Higher values sample more low-probability tokens",
- )
- repetition_penalty = gr.Slider(
- label="Repetition penalty",
- value=1.2,
- minimum=1.0,
- maximum=2.0,
- step=0.05,
- interactive=True,
- info="Penalize repeated tokens",
- )
-
- with gr.Row():
- with gr.Column():
- instruction = gr.Textbox(
- placeholder="Enter your query here",
- lines=5,
- label="Input",
- elem_id="q-input",
- )
- submit = gr.Button("Generate", variant="primary")
- output = gr.Code(elem_id="q-output", lines=30, label="Output")
- gr.Markdown(disclaimer)
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html, visible=True)
- loading_icon = gr.HTML(loading_icon_html, visible=True)
- share_button = gr.Button(
- "Share to community", elem_id="share-btn", visible=True
- )
- gr.Examples(
- examples=examples,
- inputs=[instruction, max_new_tokens],
- cache_examples=False,
- fn=process_example,
- outputs=[output],
- )
-
- submit.click(
- generate,
- inputs=[instruction, temperature, max_new_tokens, top_p, repetition_penalty],
- outputs=[output],
- )
- share_button.click(None, [], [], _js=share_js)
-demo.queue(concurrency_count=16).launch(debug=True)
diff --git a/spaces/binly/ChatGPT4/README.md b/spaces/binly/ChatGPT4/README.md
deleted file mode 100644
index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000
--- a/spaces/binly/ChatGPT4/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Chat-with-GPT4
-emoji: 🚀
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: ysharma/ChatGPT4
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bioriAsaeru/text-to-voice/Free Download Final Destination 4 Full Movie In Hindi.md b/spaces/bioriAsaeru/text-to-voice/Free Download Final Destination 4 Full Movie In Hindi.md
deleted file mode 100644
index 31c6d840f85692d890d0fb24493ce16cf8220330..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Free Download Final Destination 4 Full Movie In Hindi.md
+++ /dev/null
@@ -1,69 +0,0 @@
-
-
Free Download Final Destination 4 Full Movie In Hindi
-
If you are a fan of horror and thriller movies, you might be interested in downloading Final Destination 4 full movie in Hindi. This is the fourth installment of the popular Final Destination franchise, which revolves around the concept of death's design and how some people manage to cheat it, only to face its wrath later.
-
In this article, we will tell you how you can download Final Destination 4 full movie in Hindi for free and enjoy watching it on your device. We will also give you a brief overview of the movie plot and the cast, as well as some interesting facts about the movie.
-
How to Download Final Destination 4 Full Movie In Hindi for Free
-
There are many websites that offer free download of Final Destination 4 full movie in Hindi, but not all of them are safe and legal. Some of them may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy. Therefore, you should be careful while choosing a website to download the movie from.
-
One of the best ways to download Final Destination 4 full movie in Hindi for free is to use a torrent site. Torrent sites are platforms that allow users to share files with each other using peer-to-peer technology. You can find many torrent sites that have Final Destination 4 full movie in Hindi available for download in different qualities and sizes.
-
-
To download Final Destination 4 full movie in Hindi from a torrent site, you will need a torrent client software that can download the file from the torrent link. Some of the popular torrent clients are uTorrent, BitTorrent, qBittorrent, etc. You can download any of them from their official websites and install them on your device.
-
After installing the torrent client, you can search for Final Destination 4 full movie in Hindi on any torrent site that you prefer. Some of the popular torrent sites are The Pirate Bay, 1337x, RARBG, YTS, etc. You can use the search bar or browse through the categories to find the movie. Once you find it, you can click on the torrent link and open it with your torrent client. Then, you can choose the destination folder and start downloading the movie.
-
However, you should be aware that downloading movies from torrent sites is illegal in many countries and may result in legal action or fines. Therefore, you should always use a VPN service to hide your IP address and encrypt your traffic while downloading movies from torrent sites. A VPN service will also help you bypass any geo-restrictions or censorship that may prevent you from accessing some torrent sites.
-
Final Destination 4 Movie Plot and Cast
-
Final Destination 4 is a 2009 American horror thriller film directed by David R. Ellis and written by Eric Bress. It stars Bobby Campo, Shantel VanSanten, Nick Zano, Haley Webb, Krista Allen, Andrew Fiscella, Mykelti Williamson, and Richard T. Jones.
-
The movie follows Nick O'Bannon (Campo), who has a premonition of a deadly car race at McKinley Speedway that will result in many casualties, including several people that are in the audience. He convinces his girlfriend Lori (VanSanten), along with his friends Hunt (Zano) and Janet (Webb) to leave. A security guard named George Lanter (Williamson), along with a racist named Carter (Fiscella), a mother and her two sons, and several other people follow Nick out.
-
Shortly after they leave, Nick's premonition comes true and a massive pile-up occurs on the track, killing many spectators and drivers. The survivors soon realize that they have cheated death's design and that death will come after them one by one in a series of gruesome accidents. Nick tries to use his visions to save the remaining survivors and stop death's plan before it is too late.
-
Interesting Facts About Final Destination 4
-
-
Final Destination 4 is also known as The Final Destination because it was initially intended to be the last film of the franchise. However, due to its commercial success, a fifth film was made in 2011.
-
Final Destination 4 is the first film of the franchise to be shot in 3D and the second one to be directed by David R. Ellis, who also directed Final Destination 2.
-
Final Destination 4 has the shortest runtime of all the films in the franchise, with only 82 minutes.
-
Final Destination 4 has the highest body count of all the films in the franchise, with 52 deaths.
-
Final Destination 4 features several references and homages to previous films in the franchise, such as the number 180, which is associated with death's design; the song "Dust in the Wind" by Kansas, which plays during several death scenes; and the character of William Bludworth (Tony Todd), who appears as a voice on a phone.
-
-
Conclusion
-
If you are looking for a thrilling and gory movie to watch, you can download Final Destination 4 full movie in Hindi for free from any torrent site that you trust. However, you should always use a VPN service to protect your privacy and security while downloading movies from torrent sites. You should also be aware of the legal risks involved in downloading movies from illegal sources.
-
We hope this article has helped you learn how to download Final Destination 4 full movie in Hindi for free and enjoy watching it on your device. If you have any questions or feedback, feel free to leave a comment below.
-
Why You Should Watch Final Destination 4 Full Movie In Hindi
-
Final Destination 4 is a movie that will keep you on the edge of your seat with its thrilling and gory scenes. The movie is a perfect blend of action, horror, and suspense, as it shows how death can strike at any moment and in any way. The movie also has some dark humor and clever twists that will surprise you.
-
If you are a fan of the Final Destination franchise, you will enjoy watching Final Destination 4 full movie in Hindi, as it continues the legacy of the previous films and adds some new elements to the story. The movie also features some impressive 3D effects that enhance the experience of watching the movie.
-
If you are looking for a movie that will make you scream, laugh, and gasp, you should watch Final Destination 4 full movie in Hindi. The movie is a roller coaster ride of emotions that will leave you breathless and entertained.
-
Where to Watch Final Destination 4 Full Movie In Hindi
-
If you want to watch Final Destination 4 full movie in Hindi, you have several options to choose from. You can either stream it online or download it offline from various platforms. However, you should be careful about the quality and legality of the platforms that you use.
-
One of the best ways to watch Final Destination 4 full movie in Hindi is to use a streaming service that offers the movie in high quality and with subtitles. Some of the popular streaming services that have Final Destination 4 full movie in Hindi are Netflix, Amazon Prime Video, Disney+ Hotstar, etc. You can subscribe to any of these services and enjoy watching the movie on your device.
-
Another way to watch Final Destination 4 full movie in Hindi is to use a legal download site that allows you to download the movie in different formats and sizes. Some of the legal download sites that have Final Destination 4 full movie in Hindi are iTunes, Google Play Movies, YouTube Movies, etc. You can purchase or rent the movie from any of these sites and download it on your device.
-
However, you should avoid using illegal or pirated sites that offer free download of Final Destination 4 full movie in Hindi. These sites may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy. They may also violate the copyright laws and result in legal action or fines. Therefore, you should always use legal and safe platforms to watch Final Destination 4 full movie in Hindi.
-
Conclusion
-
Final Destination 4 is a movie that will thrill you with its action-packed and gruesome scenes. The movie is a must-watch for horror and thriller fans, as it shows how death can be unpredictable and unstoppable. The movie also has some witty dialogues and unexpected twists that will keep you engaged.
-
If you want to watch Final Destination 4 full movie in Hindi, you can either stream it online or download it offline from various platforms. However, you should always use legal and safe platforms that offer high quality and subtitles for the movie. You should also avoid using illegal or pirated sites that may harm your device or privacy.
-
We hope this article has helped you learn how to watch Final Destination 4 full movie in Hindi and enjoy watching it on your device. If you have any questions or feedback, feel free to leave a comment below.
-
What is the Final Destination Franchise
-
Final Destination 4 is the fourth movie of the Final Destination franchise, which is a series of horror films that are based on the idea that death has a plan for everyone and that it cannot be cheated. The franchise was created by Jeffrey Reddick, who wrote the original screenplay for the first movie.
-
The franchise consists of five movies that were released between 2000 and 2011. The movies are not direct sequels to each other, but they share the same premise and some recurring elements. The movies are as follows:
-
-
Final Destination (2000): The first movie introduces the concept of death's design and follows a group of teenagers who survive a plane explosion after one of them has a premonition.
-
Final Destination 2 (2003): The second movie expands the scope of death's design and follows a group of strangers who survive a highway pile-up after one of them has a premonition.
-
Final Destination 3 (2006): The third movie adds the element of photographs that foreshadow the deaths and follows a group of high school students who survive a roller coaster derailment after one of them has a premonition.
-
Final Destination 4 (2009): The fourth movie is the first one to be shot in 3D and follows a group of friends who survive a car race crash after one of them has a premonition.
-
Final Destination 5 (2011): The fifth movie introduces the concept of kill or be killed and follows a group of co-workers who survive a bridge collapse after one of them has a premonition.
-
-
The franchise has been praised for its originality, creativity, and suspense, as well as its graphic and inventive death scenes. The franchise has also been criticized for its lack of character development, plot holes, and repetitive formula. The franchise has been a commercial success, grossing over $665 million worldwide.
-
How to Download Other Movies in the Final Destination Franchise
-
If you enjoyed watching Final Destination 4 full movie in Hindi, you might want to watch the other movies in the Final Destination franchise as well. You can download them for free from various torrent sites, just like you did for Final Destination 4. However, you should always use a VPN service to protect your privacy and security while downloading movies from torrent sites.
-
To download other movies in the Final Destination franchise, you can follow these steps:
-
-
Choose a torrent site that you trust and search for the movie that you want to download. For example, you can search for Final Destination 5 full movie in Hindi.
-
Find the torrent link that matches your preferences in terms of quality, size, and language. For example, you can choose a torrent link that offers Final Destination 5 full movie in Hindi in 1080p BluRay quality.
-
Click on the torrent link and open it with your torrent client. For example, you can use uTorrent to download Final Destination 5 full movie in Hindi.
-
Select the destination folder and start downloading the movie.
-
-
However, you should be aware that downloading movies from torrent sites is illegal in many countries and may result in legal action or fines. Therefore, you should always use legal and safe platforms to watch other movies in the Final Destination franchise.
-
Conclusion
-
In this article, we have shown you how to download Final Destination 4 full movie in Hindi for free from torrent sites. We have also given you an overview of the movie plot and cast, as well as some interesting facts about the movie. We have also explained what the Final Destination franchise is and how to download the other movies in the franchise.
-
We hope this article has helped you learn how to download Final Destination 4 full movie in Hindi and enjoy watching it on your device. If you have any questions or feedback, feel free to leave a comment below.
-
-
\ No newline at end of file
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/embed.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/embed.py
deleted file mode 100644
index 163eebe9a663f4d46adbbd66af0546a16f32b200..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/embed.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-from typing import Any, Dict, List
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.config import CfgNode
-from detectron2.structures import Instances
-
-from densepose.data.meshes.catalog import MeshCatalog
-from densepose.modeling.cse.utils import normalize_embeddings, squared_euclidean_distance_matrix
-
-from .embed_utils import PackedCseAnnotations
-from .utils import BilinearInterpolationHelper
-
-
-class EmbeddingLoss:
- """
- Computes losses for estimated embeddings given annotated vertices.
- Instances in a minibatch that correspond to the same mesh are grouped
- together. For each group, loss is computed as cross-entropy for
- unnormalized scores given ground truth mesh vertex ids.
- Scores are based on squared distances between estimated vertex embeddings
- and mesh vertex embeddings.
- """
-
- def __init__(self, cfg: CfgNode):
- """
- Initialize embedding loss from config
- """
- self.embdist_gauss_sigma = cfg.MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBEDDING_DIST_GAUSS_SIGMA
-
- def __call__(
- self,
- proposals_with_gt: List[Instances],
- densepose_predictor_outputs: Any,
- packed_annotations: PackedCseAnnotations,
- interpolator: BilinearInterpolationHelper,
- embedder: nn.Module,
-    ) -> Dict[str, torch.Tensor]:
- """
- Produces losses for estimated embeddings given annotated vertices.
- Embeddings for all the vertices of a mesh are computed by the embedder.
- Embeddings for observed pixels are estimated by a predictor.
- Losses are computed as cross-entropy for squared distances between
- observed vertex embeddings and all mesh vertex embeddings given
- ground truth vertex IDs.
-
- Args:
- proposals_with_gt (list of Instances): detections with associated
- ground truth data; each item corresponds to instances detected
- on 1 image; the number of items corresponds to the number of
- images in a batch
- densepose_predictor_outputs: an object of a dataclass that contains predictor
- outputs with estimated values; assumed to have the following attributes:
- * embedding - embedding estimates, tensor of shape [N, D, S, S], where
- N = number of instances (= sum N_i, where N_i is the number of
- instances on image i)
- D = embedding space dimensionality (MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBED_SIZE)
- S = output size (width and height)
- packed_annotations (PackedCseAnnotations): contains various data useful
- for loss computation, each data is packed into a single tensor
- interpolator (BilinearInterpolationHelper): bilinear interpolation helper
- embedder (nn.Module): module that computes vertex embeddings for different meshes
- Return:
-            dict(str -> tensor): losses for different mesh names
- """
- losses = {}
- for mesh_id_tensor in packed_annotations.vertex_mesh_ids_gt.unique():
- mesh_id = mesh_id_tensor.item()
- mesh_name = MeshCatalog.get_mesh_name(mesh_id)
- # valid points are those that fall into estimated bbox
- # and correspond to the current mesh
- j_valid = interpolator.j_valid * ( # pyre-ignore[16]
- packed_annotations.vertex_mesh_ids_gt == mesh_id
- )
- if not torch.any(j_valid):
- continue
- # extract estimated embeddings for valid points
- # -> tensor [J, D]
- vertex_embeddings_i = normalize_embeddings(
- interpolator.extract_at_points(
- densepose_predictor_outputs.embedding,
- slice_fine_segm=slice(None),
- w_ylo_xlo=interpolator.w_ylo_xlo[:, None], # pyre-ignore[16]
- w_ylo_xhi=interpolator.w_ylo_xhi[:, None], # pyre-ignore[16]
- w_yhi_xlo=interpolator.w_yhi_xlo[:, None], # pyre-ignore[16]
- w_yhi_xhi=interpolator.w_yhi_xhi[:, None], # pyre-ignore[16]
- )[j_valid, :]
- )
- # extract vertex ids for valid points
- # -> tensor [J]
- vertex_indices_i = packed_annotations.vertex_ids_gt[j_valid]
- # embeddings for all mesh vertices
- # -> tensor [K, D]
- mesh_vertex_embeddings = embedder(mesh_name)
- # unnormalized scores for valid points
- # -> tensor [J, K]
- scores = squared_euclidean_distance_matrix(
- vertex_embeddings_i, mesh_vertex_embeddings
- ) / (-self.embdist_gauss_sigma)
- losses[mesh_name] = F.cross_entropy(scores, vertex_indices_i, ignore_index=-1)
-
- # pyre-fixme[29]:
- # `Union[BoundMethod[typing.Callable(torch.Tensor.__iter__)[[Named(self,
- # torch.Tensor)], typing.Iterator[typing.Any]], torch.Tensor], nn.Module,
- # torch.Tensor]` is not a function.
- for mesh_name in embedder.mesh_names:
- if mesh_name not in losses:
- losses[mesh_name] = self.fake_value(
- densepose_predictor_outputs, embedder, mesh_name
- )
- return losses
-
- def fake_values(self, densepose_predictor_outputs: Any, embedder: nn.Module):
- losses = {}
- # pyre-fixme[29]:
- # `Union[BoundMethod[typing.Callable(torch.Tensor.__iter__)[[Named(self,
- # torch.Tensor)], typing.Iterator[typing.Any]], torch.Tensor], nn.Module,
- # torch.Tensor]` is not a function.
- for mesh_name in embedder.mesh_names:
- losses[mesh_name] = self.fake_value(densepose_predictor_outputs, embedder, mesh_name)
- return losses
-
- def fake_value(self, densepose_predictor_outputs: Any, embedder: nn.Module, mesh_name: str):
- return densepose_predictor_outputs.embedding.sum() * 0 + embedder(mesh_name).sum() * 0
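Reduced to plain tensors, the scoring scheme described in the docstring is a cross-entropy over negative scaled squared distances; a standalone sketch with arbitrary shapes and sigma (`torch.cdist` stands in for `squared_euclidean_distance_matrix`):

```python
# Standalone sketch of the loss: cross-entropy over negative scaled
# squared distances between J pixel embeddings and K mesh vertex
# embeddings. Shapes and sigma are arbitrary placeholders.
import torch
import torch.nn.functional as F

J, K, D, sigma = 5, 100, 16, 0.1
pixel_emb = F.normalize(torch.randn(J, D), dim=1)   # [J, D], as in normalize_embeddings
vertex_emb = torch.randn(K, D)                      # [K, D], from the embedder
scores = (torch.cdist(pixel_emb, vertex_emb) ** 2) / (-sigma)  # [J, K]
gt_vertex_ids = torch.randint(0, K, (J,))
print(F.cross_entropy(scores, gt_vertex_ids).item())
```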
diff --git a/spaces/caffeinum/VToonify/vtoonify/style_transfer.py b/spaces/caffeinum/VToonify/vtoonify/style_transfer.py
deleted file mode 100644
index 3e6ba13ca84dc595dfa9eb9ef85a638889d8cdd3..0000000000000000000000000000000000000000
--- a/spaces/caffeinum/VToonify/vtoonify/style_transfer.py
+++ /dev/null
@@ -1,232 +0,0 @@
-import os
-#os.environ['CUDA_VISIBLE_DEVICES'] = "0"
-import argparse
-import numpy as np
-import cv2
-import dlib
-import torch
-from torchvision import transforms
-import torch.nn.functional as F
-from tqdm import tqdm
-from model.vtoonify import VToonify
-from model.bisenet.model import BiSeNet
-from model.encoder.align_all_parallel import align_face
-from util import save_image, load_image, visualize, load_psp_standalone, get_video_crop_parameter, tensor2cv2
-
-
-class TestOptions():
- def __init__(self):
-
- self.parser = argparse.ArgumentParser(description="Style Transfer")
- self.parser.add_argument("--content", type=str, default='./data/077436.jpg', help="path of the content image/video")
- self.parser.add_argument("--style_id", type=int, default=26, help="the id of the style image")
- self.parser.add_argument("--style_degree", type=float, default=0.5, help="style degree for VToonify-D")
- self.parser.add_argument("--color_transfer", action="store_true", help="transfer the color of the style")
- self.parser.add_argument("--ckpt", type=str, default='./checkpoint/vtoonify_d_cartoon/vtoonify_s_d.pt', help="path of the saved model")
- self.parser.add_argument("--output_path", type=str, default='./output/', help="path of the output images")
- self.parser.add_argument("--scale_image", action="store_true", help="resize and crop the image to best fit the model")
- self.parser.add_argument("--style_encoder_path", type=str, default='./checkpoint/encoder.pt', help="path of the style encoder")
- self.parser.add_argument("--exstyle_path", type=str, default=None, help="path of the extrinsic style code")
- self.parser.add_argument("--faceparsing_path", type=str, default='./checkpoint/faceparsing.pth', help="path of the face parsing model")
- self.parser.add_argument("--video", action="store_true", help="if true, video stylization; if false, image stylization")
- self.parser.add_argument("--cpu", action="store_true", help="if true, only use cpu")
- self.parser.add_argument("--backbone", type=str, default='dualstylegan', help="dualstylegan | toonify")
- self.parser.add_argument("--padding", type=int, nargs=4, default=[200,200,200,200], help="left, right, top, bottom paddings to the face center")
- self.parser.add_argument("--batch_size", type=int, default=4, help="batch size of frames when processing video")
- self.parser.add_argument("--parsing_map_path", type=str, default=None, help="path of the refined parsing map of the target video")
-
- def parse(self):
- self.opt = self.parser.parse_args()
- if self.opt.exstyle_path is None:
- self.opt.exstyle_path = os.path.join(os.path.dirname(self.opt.ckpt), 'exstyle_code.npy')
- args = vars(self.opt)
- print('Load options')
- for name, value in sorted(args.items()):
- print('%s: %s' % (str(name), str(value)))
- return self.opt
-
-if __name__ == "__main__":
-
- parser = TestOptions()
- args = parser.parse()
- print('*'*98)
-
-
- device = "cpu" if args.cpu else "cuda"
-
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- vtoonify = VToonify(backbone = args.backbone)
- vtoonify.load_state_dict(torch.load(args.ckpt, map_location=lambda storage, loc: storage)['g_ema'])
- vtoonify.to(device)
-
- parsingpredictor = BiSeNet(n_classes=19)
- parsingpredictor.load_state_dict(torch.load(args.faceparsing_path, map_location=lambda storage, loc: storage))
- parsingpredictor.to(device).eval()
-
- modelname = './checkpoint/shape_predictor_68_face_landmarks.dat'
- if not os.path.exists(modelname):
- import wget, bz2
- wget.download('http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2', modelname+'.bz2')
- zipfile = bz2.BZ2File(modelname+'.bz2')
- data = zipfile.read()
- open(modelname, 'wb').write(data)
- landmarkpredictor = dlib.shape_predictor(modelname)
-
- pspencoder = load_psp_standalone(args.style_encoder_path, device)
-
- if args.backbone == 'dualstylegan':
- exstyles = np.load(args.exstyle_path, allow_pickle='TRUE').item()
- stylename = list(exstyles.keys())[args.style_id]
- exstyle = torch.tensor(exstyles[stylename]).to(device)
- with torch.no_grad():
- exstyle = vtoonify.zplus2wplus(exstyle)
-
- if args.video and args.parsing_map_path is not None:
- x_p_hat = torch.tensor(np.load(args.parsing_map_path))
-
- print('Load models successfully!')
-
-
- filename = args.content
- basename = os.path.basename(filename).split('.')[0]
- scale = 1
- kernel_1d = np.array([[0.125],[0.375],[0.375],[0.125]])
- print('Processing ' + os.path.basename(filename) + ' with vtoonify_' + args.backbone[0])
- if args.video:
- cropname = os.path.join(args.output_path, basename + '_input.mp4')
- savename = os.path.join(args.output_path, basename + '_vtoonify_' + args.backbone[0] + '.mp4')
-
- video_cap = cv2.VideoCapture(filename)
-        num = int(video_cap.get(7))  # property 7 is cv2.CAP_PROP_FRAME_COUNT
-
- first_valid_frame = True
- batch_frames = []
- for i in tqdm(range(num)):
- success, frame = video_cap.read()
-            if not success:
-                raise RuntimeError('load video frames error')
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-            # We preprocess the video by detecting the face in the first frame,
- # and resizing the frame so that the eye distance is 64 pixels.
- # Centered on the eyes, we crop the first frame to almost 400x400 (based on args.padding).
- # All other frames use the same resizing and cropping parameters as the first frame.
- if first_valid_frame:
- if args.scale_image:
- paras = get_video_crop_parameter(frame, landmarkpredictor, args.padding)
- if paras is None:
- continue
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR video, we apply gaussian blur to the frames to avoid flickers caused by bilinear downsampling
- # this can also prevent over-sharp stylization results.
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
- else:
- H, W = frame.shape[0], frame.shape[1]
-
- fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- videoWriter = cv2.VideoWriter(cropname, fourcc, video_cap.get(5), (W, H))
- videoWriter2 = cv2.VideoWriter(savename, fourcc, video_cap.get(5), (4*W, 4*H))
-
- # For each video, we detect and align the face in the first frame for pSp to obtain the style code.
- # This style code is used for all other frames.
- with torch.no_grad():
- I = align_face(frame, landmarkpredictor)
- I = transform(I).unsqueeze(dim=0).to(device)
- s_w = pspencoder(I)
- s_w = vtoonify.zplus2wplus(s_w)
- if vtoonify.backbone == 'dualstylegan':
- if args.color_transfer:
- s_w = exstyle
- else:
- s_w[:,:7] = exstyle[:,:7]
- first_valid_frame = False
- elif args.scale_image:
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
-
- videoWriter.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
-
- batch_frames += [transform(frame).unsqueeze(dim=0).to(device)]
-
- if len(batch_frames) == args.batch_size or (i+1) == num:
- x = torch.cat(batch_frames, dim=0)
- batch_frames = []
- with torch.no_grad():
-                    # parsing network works best on 512x512 images, so we predict parsing maps on upsampled frames
- # followed by downsampling the parsing maps
- if args.video and args.parsing_map_path is not None:
- x_p = x_p_hat[i+1-x.size(0):i+1].to(device)
- else:
- x_p = F.interpolate(parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- # we give parsing maps lower weight (1/16)
- inputs = torch.cat((x, x_p/16.), dim=1)
- # d_s has no effect when backbone is toonify
- y_tilde = vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = args.style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
- for k in range(y_tilde.size(0)):
- videoWriter2.write(tensor2cv2(y_tilde[k].cpu()))
-
- videoWriter.release()
- videoWriter2.release()
- video_cap.release()
-
-
- else:
- cropname = os.path.join(args.output_path, basename + '_input.jpg')
- savename = os.path.join(args.output_path, basename + '_vtoonify_' + args.backbone[0] + '.jpg')
-
- frame = cv2.imread(filename)
-        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-
- # We detect the face in the image, and resize the image so that the eye distance is 64 pixels.
- # Centered on the eyes, we crop the image to almost 400x400 (based on args.padding).
- if args.scale_image:
- paras = get_video_crop_parameter(frame, landmarkpredictor, args.padding)
- if paras is not None:
- h,w,top,bottom,left,right,scale = paras
- H, W = int(bottom-top), int(right-left)
- # for HR image, we apply gaussian blur to it to avoid over-sharp stylization results
- if scale <= 0.75:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- if scale <= 0.375:
- frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)
- frame = cv2.resize(frame, (w, h))[top:bottom, left:right]
-
- with torch.no_grad():
- I = align_face(frame, landmarkpredictor)
- I = transform(I).unsqueeze(dim=0).to(device)
- s_w = pspencoder(I)
- s_w = vtoonify.zplus2wplus(s_w)
- if vtoonify.backbone == 'dualstylegan':
- if args.color_transfer:
- s_w = exstyle
- else:
- s_w[:,:7] = exstyle[:,:7]
-
- x = transform(frame).unsqueeze(dim=0).to(device)
-        # parsing network works best on 512x512 images, so we predict parsing maps on upsampled frames
- # followed by downsampling the parsing maps
- x_p = F.interpolate(parsingpredictor(2*(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)))[0],
- scale_factor=0.5, recompute_scale_factor=False).detach()
- # we give parsing maps lower weight (1/16)
- inputs = torch.cat((x, x_p/16.), dim=1)
- # d_s has no effect when backbone is toonify
- y_tilde = vtoonify(inputs, s_w.repeat(inputs.size(0), 1, 1), d_s = args.style_degree)
- y_tilde = torch.clamp(y_tilde, -1, 1)
-
- cv2.imwrite(cropname, cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
- save_image(y_tilde[0].cpu(), savename)
-
- print('Transfer style successfully!')
\ No newline at end of file
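The anti-flicker preprocessing used above is a separable binomial low-pass filter applied before each downscale; a standalone sketch on a random image:

```python
# Separable binomial low-pass filter before downscaling, as used above
# to avoid aliasing/flicker from bilinear downsampling (a random image
# stands in for a video frame).
import cv2
import numpy as np

kernel_1d = np.array([[0.125], [0.375], [0.375], [0.125]])
frame = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)

frame = cv2.sepFilter2D(frame, -1, kernel_1d, kernel_1d)  # blur first
frame = cv2.resize(frame, (512, 512))                     # then downscale
print(frame.shape)  # (512, 512, 3)
```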
diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/__init__.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/projects/README.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/projects/README.md
deleted file mode 100644
index 95afe7ff8c8a9bd2f56621fcc3c1bdac11c256a9..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/projects/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-
-Projects live in the [`projects` directory](../../projects) under the root of this repository, but not here.
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/tracking/vanilla_hungarian_bbox_iou_tracker.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/tracking/vanilla_hungarian_bbox_iou_tracker.py
deleted file mode 100644
index 5629f7383adcafeaa1ebdae1f38f968437149652..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/tracking/vanilla_hungarian_bbox_iou_tracker.py
+++ /dev/null
@@ -1,129 +0,0 @@
-#!/usr/bin/env python3
-# Copyright 2004-present Facebook. All Rights Reserved.
-
-import numpy as np
-from typing import List
-
-from detectron2.config import CfgNode as CfgNode_
-from detectron2.config import configurable
-from detectron2.structures import Instances
-from detectron2.structures.boxes import pairwise_iou
-from detectron2.tracking.utils import LARGE_COST_VALUE, create_prediction_pairs
-
-from .base_tracker import TRACKER_HEADS_REGISTRY
-from .hungarian_tracker import BaseHungarianTracker
-
-
-@TRACKER_HEADS_REGISTRY.register()
-class VanillaHungarianBBoxIOUTracker(BaseHungarianTracker):
- """
- Hungarian algo based tracker using bbox iou as metric
- """
-
- @configurable
- def __init__(
- self,
- *,
- video_height: int,
- video_width: int,
- max_num_instances: int = 200,
- max_lost_frame_count: int = 0,
- min_box_rel_dim: float = 0.02,
- min_instance_period: int = 1,
- track_iou_threshold: float = 0.5,
- **kwargs,
- ):
- """
- Args:
-            video_height: height of the video frame
-            video_width: width of the video frame
-            max_num_instances: maximum number of ids allowed to be tracked
-            max_lost_frame_count: maximum number of frames an id can lose tracking;
-                                  beyond this number, the id is considered lost
-                                  forever
-            min_box_rel_dim: a percentage; a bbox smaller than this relative
-                             dimension is removed from tracking
-            min_instance_period: an instance will only be shown after this number
-                                 of frames since it first appears in the video
-            track_iou_threshold: iou threshold; a bbox pair below this value is
-                                 removed from tracking
- """
- super().__init__(
- video_height=video_height,
- video_width=video_width,
- max_num_instances=max_num_instances,
- max_lost_frame_count=max_lost_frame_count,
- min_box_rel_dim=min_box_rel_dim,
- min_instance_period=min_instance_period,
- )
- self._track_iou_threshold = track_iou_threshold
-
- @classmethod
- def from_config(cls, cfg: CfgNode_):
- """
- Old style initialization using CfgNode
-
- Args:
- cfg: D2 CfgNode, config file
- Return:
- dictionary storing arguments for __init__ method
- """
- assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS
- assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS
- video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT")
- video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH")
- max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200)
- max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0)
- min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02)
- min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1)
- track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5)
- return {
- "_target_": "detectron2.tracking.vanilla_hungarian_bbox_iou_tracker.VanillaHungarianBBoxIOUTracker", # noqa
- "video_height": video_height,
- "video_width": video_width,
- "max_num_instances": max_num_instances,
- "max_lost_frame_count": max_lost_frame_count,
- "min_box_rel_dim": min_box_rel_dim,
- "min_instance_period": min_instance_period,
- "track_iou_threshold": track_iou_threshold,
- }
-
- def build_cost_matrix(self, instances: Instances, prev_instances: Instances) -> np.ndarray:
- """
- Build the cost matrix for assignment problem
- (https://en.wikipedia.org/wiki/Assignment_problem)
-
- Args:
- instances: D2 Instances, for current frame predictions
- prev_instances: D2 Instances, for previous frame predictions
-
- Return:
- the cost matrix in numpy array
- """
- assert instances is not None and prev_instances is not None
- # calculate IoU of all bbox pairs
- iou_all = pairwise_iou(
- boxes1=instances.pred_boxes,
- boxes2=self._prev_instances.pred_boxes,
- )
- bbox_pairs = create_prediction_pairs(
- instances, self._prev_instances, iou_all, threshold=self._track_iou_threshold
- )
- # assign large cost value to make sure pair below IoU threshold won't be matched
-        # assign a large cost value to make sure pairs below the IoU threshold won't be matched
- return self.assign_cost_matrix_values(cost_matrix, bbox_pairs)
-
- def assign_cost_matrix_values(self, cost_matrix: np.ndarray, bbox_pairs: List) -> np.ndarray:
- """
- Based on IoU for each pair of bbox, assign the associated value in cost matrix
-
- Args:
- cost_matrix: np.ndarray, initialized 2D array with target dimensions
- bbox_pairs: list of bbox pair, in each pair, iou value is stored
- Return:
- np.ndarray, cost_matrix with assigned values
- """
- for pair in bbox_pairs:
-            # assign -1 to pairs with IoU above the threshold; the algorithm minimizes cost
- cost_matrix[pair["idx"]][pair["prev_idx"]] = -1
- return cost_matrix
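A toy sketch of the matching this cost matrix sets up, solved here with scipy's Hungarian solver (the tracker's own solve step lives in `BaseHungarianTracker`, which is not part of this diff; the `LARGE_COST_VALUE` below is a stand-in, not the library's actual constant):

```python
# Toy version of the matching: matched pairs cost -1, everything else a
# large cost; the Hungarian solver picks the minimum-cost assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

LARGE_COST_VALUE = 100000  # stand-in value, not detectron2's actual constant
cost = np.full((3, 3), LARGE_COST_VALUE, dtype=np.float64)
cost[0, 1] = -1  # current box 0 overlaps previous box 1 above threshold
cost[2, 2] = -1  # current box 2 overlaps previous box 2 above threshold

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))
# -> [(0, 1), (1, 0), (2, 2)]; the (1, 0) pair keeps its large cost and
#    would be treated as "no real match" downstream
```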
diff --git a/spaces/chendl/compositional_test/transformers/docs/TRANSLATING.md b/spaces/chendl/compositional_test/transformers/docs/TRANSLATING.md
deleted file mode 100644
index c6f5c45baf029146c061113dae7f42a1bdb14b3a..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/docs/TRANSLATING.md
+++ /dev/null
@@ -1,57 +0,0 @@
-### Translating the Transformers documentation into your language
-
-As part of our mission to democratize machine learning, we'd love to make the Transformers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏.
-
-**🗞️ Open an issue**
-
-To get started, navigate to the [Issues](https://github.com/huggingface/transformers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "Translation template" from the "New issue" button.
-
-Once an issue exists, post a comment to indicate which chapters you'd like to work on, and we'll add your name to the list.
-
-
-**🍴 Fork the repository**
-
-First, you'll need to [fork the Transformers repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo). You can do this by clicking on the **Fork** button on the top-right corner of this repo's page.
-
-Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:
-
-```bash
-git clone https://github.com/YOUR-USERNAME/transformers.git
-```
-
-**📋 Copy-paste the English version with a new language code**
-
-The documentation files all live under a single top-level directory:
-
-- [`docs/source`](https://github.com/huggingface/transformers/tree/main/docs/source): All the documentation materials are organized here by language.
-
-You'll only need to copy the files in the [`docs/source/en`](https://github.com/huggingface/transformers/tree/main/docs/source/en) directory, so first navigate to your fork of the repo and run the following:
-
-```bash
-cd ~/path/to/transformers/docs
-cp -r source/en source/LANG-ID
-```
-
-Here, `LANG-ID` should be one of the ISO 639-1 or ISO 639-2 language codes -- see [here](https://www.loc.gov/standards/iso639-2/php/code_list.php) for a handy table.
-
-**✍️ Start translating**
-
-The fun part comes - translating the text!
-
-The first thing we recommend is translating the part of the `_toctree.yml` file that corresponds to your doc chapter. This file is used to render the table of contents on the website.
-
-> 🙋 If the `_toctree.yml` file doesn't yet exist for your language, you can create one by copy-pasting from the English version and deleting the sections unrelated to your chapter. Just make sure it exists in the `docs/source/LANG-ID/` directory!
-
-The fields you should add are `local` (with the name of the file containing the translation; e.g. `autoclass_tutorial`), and `title` (with the title of the doc in your language; e.g. `Load pretrained instances with an AutoClass`) -- as a reference, here is the `_toctree.yml` for [English](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml):
-
-```yaml
-- sections:
- - local: pipeline_tutorial # Do not change this! Use the same name for your .md file
- title: Pipelines for inference # Translate this!
- ...
- title: Tutorials # Translate this!
-```
-
-Once you have translated the `_toctree.yml` file, you can start translating the [MDX](https://mdxjs.com/) files associated with your docs chapter.
-
-> 🙋 If you'd like others to help you with the translation, you should [open an issue](https://github.com/huggingface/transformers/issues) and tag @sgugger.
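If you prefer scripting the copy step shown earlier, a minimal Python equivalent (with a hypothetical `es` target) is:

```python
# Python equivalent of the `cp -r` step above; "es" is only an example
# LANG-ID, and the target directory must not already exist.
import shutil

shutil.copytree("docs/source/en", "docs/source/es")
```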
diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/text-generation/README.md b/spaces/chendl/compositional_test/transformers/examples/pytorch/text-generation/README.md
deleted file mode 100644
index 2177c45c3b884a50d59146af96016361b6988738..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/pytorch/text-generation/README.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
-## Language generation
-
-Based on the script [`run_generation.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-generation/run_generation.py).
-
-Conditional text generation using the auto-regressive models of the library: GPT, GPT-2, Transformer-XL, XLNet, CTRL.
-A similar script is used for our official demo [Write With Transformer](https://transformer.huggingface.co), where you
-can try out the different models available in the library.
-
-Example usage:
-
-```bash
-python run_generation.py \
- --model_type=gpt2 \
- --model_name_or_path=gpt2
-```
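As a rough point of comparison rather than part of the script, the same model can also be driven through the `pipeline` API:

```python
# Quick alternative sketch using the pipeline API instead of
# run_generation.py; the model name matches the example above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Once upon a time,", max_length=30, num_return_sequences=1))
```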
diff --git a/spaces/chrisclark1016/Untappd_Predictor/README.md b/spaces/chrisclark1016/Untappd_Predictor/README.md
deleted file mode 100644
index a12fb2cbf749c729a6673713c431a57cb2a1e182..0000000000000000000000000000000000000000
--- a/spaces/chrisclark1016/Untappd_Predictor/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Untappd Predictor
-emoji: 🚀
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cihyFjudo/fairness-paper-search/Chyby Se Staly (ale Ne Mou Vinou) Epub.md b/spaces/cihyFjudo/fairness-paper-search/Chyby Se Staly (ale Ne Mou Vinou) Epub.md
deleted file mode 100644
index 1b922bfadf936c9dc0781cd30b077f83747fe7c9..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Chyby Se Staly (ale Ne Mou Vinou) Epub.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Descarga la version estable del nuevo navegador Microsoft Edge (Microsoft Edge Chromium) y disfruta de sus ventajas.md b/spaces/cihyFjudo/fairness-paper-search/Descarga la version estable del nuevo navegador Microsoft Edge (Microsoft Edge Chromium) y disfruta de sus ventajas.md
deleted file mode 100644
index cd6d603ed214d2a9c67bfdc686fd51bb406f54e3..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Descarga la version estable del nuevo navegador Microsoft Edge (Microsoft Edge Chromium) y disfruta de sus ventajas.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
El nuevo Edge se puede descargar desde este enlace dentro de los canales de prueba, mientras que para hacerte con la actualización del nuevo Edge puedes hacerlo accediendo desde el navegador en el menú "Configuración > Acerca de" como ya vimos en el tutorial que vimos en su día.
-
Descarga la version estable del nuevo navegador Microsoft Edge (Microsoft Edge Chromium)
Componente de protocolos de inicio automático de Microsoft Edge. Microsoft Edge 96 presenta el componente de protocolos de inicio automático que contiene listas de diccionarios de origen de esquema para permitir o bloquear automáticamente. Esto protege a los clientes de esquemas peligrosos (por ejemplo, un controlador de protocolo con 0 días) al tiempo que elimina las solicitudes de emparejamientos seguros conocidos (por ejemplo, el sitio web de Teams puede abrir la aplicación cliente de Teams). Si, por algún motivo, no quiere que Microsoft Edge bloquee los controladores de protocolo vulnerables y permita emparejamientos seguros conocidos, use el botón de alternancia en edge://settings/content/applicationLinks o establezca la directiva AutoLaunchProtocolsComponentEnabled en False.
-
Microsoft Edge ha completado el paso a una cadencia de 4 semanas para actualizaciones. Hemos adoptado un nuevo ciclo de lanzamiento de 4 semanas para las versiones principales. Más información aquí: -release-cycles-microsoft-edge-extended-stable/
-
Los usuarios pueden establecer Microsoft Edge como su explorador predeterminado directamente desde la Configuración de Microsoft Edge. Esto hace que sea más fácil para los usuarios cambiar el explorador predeterminado, dentro del contexto del explorador, en lugar de tener que buscar en la configuración del sistema operativo. Para usar esta característica, ve a edge://settings/defaultBrowser y haz clic en Establecer como predeterminado.
-
-
Actualmente, Microsoft todavía no ha lanzado ninguna versión del canal Beta más estable, sin embargo, ya puedes empezar a descargar las versiones de Microsoft Edge con Chromium en sus canales Dev y Canary. Para ello, el primer paso es ir a la web de Microsoft Edge Insider Channels entrando en microsoftedgeinsider.com. Ahí dentro, pulsa en el botón Download Dev Channel para bajar la versión Dev, o pulsa en More platforms and channels para elegir otro canal.
-
El nuevo Edge de Microsoft ha optado por Chromium como motor en lugar de Edge HTML y el cambio le ha sentado muy bien. Cada vez son más los usuarios que deciden darle una oportunidad al navegador de Microsoft, ya sea en la versión estable o por medio de alguna de las tres opciones de desarrollo que se pueden descargar AQUI..
-
Edge Canary se encuentra en Google Play; y para descargarlo al dispositivo Android basta con pulsar en el pertinente botón de descarga. La versión de Chromium de este navegador en pruebas es la número 91, por encima de la versión estable de los sistemas de escritorio (versión 90) y sobresaliendo en gran medida de la versión que Microsoft distribuye en Android de manera estable (la número 77).
-
Even if you have uninstalled it, Microsoft may try to install Edge on your computer again through Windows updates. By following these steps, you can prevent Windows from reinstalling Edge with the next update. You can use the Microsoft Edge Chromium Blocker Toolkit, available on the web, to set a registry key to a new value (you can do this with the following instructions, or manually, that is, without downloading anything).
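As a sketch of the manual route (no download), the Blocker Toolkit is commonly documented to set a single DWORD value; the key and value names below (EdgeUpdate, DoNotUpdateToEdgeWithChromium) should be treated as assumptions rather than something stated in this article.

    import winreg  # Python standard library; Windows only

    # Create/open the EdgeUpdate key and set the blocker value to 1 so
    # Windows Update stops reinstalling Edge. Requires an elevated shell.
    key_path = r"SOFTWARE\Microsoft\EdgeUpdate"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "DoNotUpdateToEdgeWithChromium", 0,
                          winreg.REG_DWORD, 1)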
-
The beta of the new Chromium-based browser had been available from the Edge page for a few months. This is the first official release, although, technically, it is the "stable" version 79. Edge 80 is expected to arrive in February, and periodic updates have also been announced, arriving every six weeks starting with update 80.
-
Over the coming weeks, the new Edge browser will be rolled out automatically to Windows 10 users, although it can be downloaded manually from its page starting today. Users who have the previous version of Edge installed will see it update automatically, keeping all of their personalized data.
-
With competition between browsers on the rise, this improvement is being rolled out to Windows and macOS machines via a server-side activation, and it is now live in the stable version of Edge (for all users), which can be downloaded from both the App Store for iOS and the Google Play Store for Android.
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Facehacker For Mac Dmg Free 5 Temporada Wally Au.md b/spaces/cihyFjudo/fairness-paper-search/Download Facehacker For Mac Dmg Free 5 Temporada Wally Au.md
deleted file mode 100644
index dc41b1e206f5b4bd06acc05bd8c6c002dfb5245f..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Download Facehacker For Mac Dmg Free 5 Temporada Wally Au.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Download Facehacker For Mac Dmg Free 5 temporada wally au
-
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Gail Gotti Gailsta EP Download How to Get the Full Album for Free.md b/spaces/cihyFjudo/fairness-paper-search/Gail Gotti Gailsta EP Download How to Get the Full Album for Free.md
deleted file mode 100644
index 8dd07554268d270b59d7ea794516dd56bac173c3..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Gail Gotti Gailsta EP Download How to Get the Full Album for Free.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Simcity 4 Deluxe Edition For Mac Download How to Install and Play.md b/spaces/cihyFjudo/fairness-paper-search/Simcity 4 Deluxe Edition For Mac Download How to Install and Play.md
deleted file mode 100644
index e9b3baaebb25c42a38a68ccd96c1094a5d978b41..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Simcity 4 Deluxe Edition For Mac Download How to Install and Play.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/bertwarper.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/bertwarper.py
deleted file mode 100644
index f0cf9779b270e1aead32845006f8b881fcba37ad..0000000000000000000000000000000000000000
--- a/spaces/codelion/Grounding_DINO_demo/groundingdino/models/GroundingDINO/bertwarper.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from torch import Tensor, nn
-from torchvision.ops.boxes import nms
-from transformers import BertConfig, BertModel, BertPreTrainedModel
-from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions
-
-
-class BertModelWarper(nn.Module):
- def __init__(self, bert_model):
- super().__init__()
-        # self.bert = bert_model
-
- self.config = bert_model.config
- self.embeddings = bert_model.embeddings
- self.encoder = bert_model.encoder
- self.pooler = bert_model.pooler
-
- self.get_extended_attention_mask = bert_model.get_extended_attention_mask
- self.invert_attention_mask = bert_model.invert_attention_mask
- self.get_head_mask = bert_model.get_head_mask
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
- """
- output_attentions = (
- output_attentions if output_attentions is not None else self.config.output_attentions
- )
- output_hidden_states = (
- output_hidden_states
- if output_hidden_states is not None
- else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if self.config.is_decoder:
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- else:
- use_cache = False
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- batch_size, seq_length = input_shape
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- batch_size, seq_length = input_shape
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- # past_key_values_length
- past_key_values_length = (
- past_key_values[0][0].shape[2] if past_key_values is not None else 0
- )
-
- if attention_mask is None:
- attention_mask = torch.ones(
- ((batch_size, seq_length + past_key_values_length)), device=device
- )
- if token_type_ids is None:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(
- attention_mask, input_shape, device
- )
-
- # If a 2D or 3D attention mask is provided for the cross-attention
-        # we need to make it broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if self.config.is_decoder and encoder_hidden_states is not None:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
- if encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = None
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
- # import ipdb; ipdb.set_trace()
-
- # Prepare head mask if needed
- # 1.0 in head_mask indicate we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- embedding_output = self.embeddings(
- input_ids=input_ids,
- position_ids=position_ids,
- token_type_ids=token_type_ids,
- inputs_embeds=inputs_embeds,
- past_key_values_length=past_key_values_length,
- )
-
- encoder_outputs = self.encoder(
- embedding_output,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_output = encoder_outputs[0]
- pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPoolingAndCrossAttentions(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- past_key_values=encoder_outputs.past_key_values,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- cross_attentions=encoder_outputs.cross_attentions,
- )
-
-
-class TextEncoderShell(nn.Module):
- def __init__(self, text_encoder):
- super().__init__()
- self.text_encoder = text_encoder
- self.config = self.text_encoder.config
-
- def forward(self, **kw):
- # feed into text encoder
- return self.text_encoder(**kw)
-
-
-def generate_masks_with_special_tokens(tokenized, special_tokens_list, tokenizer):
- """Generate attention mask between each pair of special tokens
- Args:
-        tokenized (dict): tokenizer output; "input_ids" has shape [bs, num_token].
-        special_tokens_list (list): token ids treated as segment delimiters.
-    Returns:
-        torch.Tensor: attention mask between each pair of special tokens.
- """
- input_ids = tokenized["input_ids"]
- bs, num_token = input_ids.shape
- # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens
- special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool()
- for special_token in special_tokens_list:
- special_tokens_mask |= input_ids == special_token
-
- # idxs: each row is a list of indices of special tokens
- idxs = torch.nonzero(special_tokens_mask)
-
- # generate attention mask and positional ids
- attention_mask = (
- torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1)
- )
- position_ids = torch.zeros((bs, num_token), device=input_ids.device)
- previous_col = 0
- for i in range(idxs.shape[0]):
- row, col = idxs[i]
- if (col == 0) or (col == num_token - 1):
- attention_mask[row, col, col] = True
- position_ids[row, col] = 0
- else:
- attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True
- position_ids[row, previous_col + 1 : col + 1] = torch.arange(
- 0, col - previous_col, device=input_ids.device
- )
-
- previous_col = col
-
- # # padding mask
- # padding_mask = tokenized['attention_mask']
- # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool()
-
- return attention_mask, position_ids.to(torch.long)
-
-
-def generate_masks_with_special_tokens_and_transfer_map(tokenized, special_tokens_list, tokenizer):
- """Generate attention mask between each pair of special tokens
- Args:
-        tokenized (dict): tokenizer output; "input_ids" has shape [bs, num_token].
-        special_tokens_list (list): token ids treated as segment delimiters.
-    Returns:
-        torch.Tensor: attention mask between each pair of special tokens,
-            plus per-row position ids and a category-to-token mask list.
- """
- input_ids = tokenized["input_ids"]
- bs, num_token = input_ids.shape
- # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens
- special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool()
- for special_token in special_tokens_list:
- special_tokens_mask |= input_ids == special_token
-
- # idxs: each row is a list of indices of special tokens
- idxs = torch.nonzero(special_tokens_mask)
-
- # generate attention mask and positional ids
- attention_mask = (
- torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1)
- )
- position_ids = torch.zeros((bs, num_token), device=input_ids.device)
- cate_to_token_mask_list = [[] for _ in range(bs)]
- previous_col = 0
- for i in range(idxs.shape[0]):
- row, col = idxs[i]
- if (col == 0) or (col == num_token - 1):
- attention_mask[row, col, col] = True
- position_ids[row, col] = 0
- else:
- attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True
- position_ids[row, previous_col + 1 : col + 1] = torch.arange(
- 0, col - previous_col, device=input_ids.device
- )
- c2t_maski = torch.zeros((num_token), device=input_ids.device).bool()
- c2t_maski[previous_col + 1 : col] = True
- cate_to_token_mask_list[row].append(c2t_maski)
- previous_col = col
-
- cate_to_token_mask_list = [
- torch.stack(cate_to_token_mask_listi, dim=0)
- for cate_to_token_mask_listi in cate_to_token_mask_list
- ]
-
- # # padding mask
- # padding_mask = tokenized['attention_mask']
- # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool()
-
- return attention_mask, position_ids.to(torch.long), cate_to_token_mask_list
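For reference, a minimal sketch (not part of the original file) of how the mask helper above behaves on a toy batch; it assumes the module import path implied by the file header, and the token ids are made up (101/102 standing in for [CLS]/[SEP], 1012 for "."):

    import torch
    from groundingdino.models.GroundingDINO.bertwarper import (
        generate_masks_with_special_tokens,
    )

    # One caption: [CLS] w1 w2 . w3 . [SEP] -> special tokens at positions 0, 3, 5, 6
    tokenized = {"input_ids": torch.tensor([[101, 7, 8, 1012, 9, 1012, 102]])}
    special_ids = [101, 102, 1012]

    # The tokenizer argument is unused by the function, so None is fine here.
    attn, pos = generate_masks_with_special_tokens(tokenized, special_ids, None)

    # attn[0]: tokens 1-3 attend within their phrase and tokens 4-5 within
    # theirs; [CLS] and [SEP] attend only to themselves.
    # pos[0] restarts at 0 in each phrase: tensor([0, 0, 1, 2, 0, 1, 0])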
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dpxenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dpxenc.c
deleted file mode 100644
index e136cc1b9ea9477e30f729bc498c6a12b9abfce2..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dpxenc.c
+++ /dev/null
@@ -1,295 +0,0 @@
-/*
- * DPX (.dpx) image encoder
- * Copyright (c) 2011 Peter Ross
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/common.h"
-#include "libavutil/intreadwrite.h"
-#include "libavutil/imgutils.h"
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "encode.h"
-#include "version.h"
-
-typedef struct DPXContext {
- int big_endian;
- int bits_per_component;
- int num_components;
- int descriptor;
- int planar;
-} DPXContext;
-
-static av_cold int encode_init(AVCodecContext *avctx)
-{
- DPXContext *s = avctx->priv_data;
- const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->pix_fmt);
-
- s->big_endian = !!(desc->flags & AV_PIX_FMT_FLAG_BE);
- s->bits_per_component = desc->comp[0].depth;
- s->num_components = desc->nb_components;
- s->descriptor = (desc->flags & AV_PIX_FMT_FLAG_ALPHA) ? 51 : 50;
- s->planar = !!(desc->flags & AV_PIX_FMT_FLAG_PLANAR);
-
- switch (avctx->pix_fmt) {
- case AV_PIX_FMT_ABGR:
- s->descriptor = 52;
- break;
- case AV_PIX_FMT_GRAY16BE:
- case AV_PIX_FMT_GRAY16LE:
- case AV_PIX_FMT_GRAY8:
- s->descriptor = 6;
- break;
- case AV_PIX_FMT_GBRP10BE:
- case AV_PIX_FMT_GBRP10LE:
- case AV_PIX_FMT_GBRP12BE:
- case AV_PIX_FMT_GBRP12LE:
- case AV_PIX_FMT_RGB24:
- case AV_PIX_FMT_RGBA64BE:
- case AV_PIX_FMT_RGBA64LE:
- case AV_PIX_FMT_RGBA:
- break;
- case AV_PIX_FMT_RGB48LE:
- case AV_PIX_FMT_RGB48BE:
- if (avctx->bits_per_raw_sample)
- s->bits_per_component = avctx->bits_per_raw_sample;
- break;
- }
-
- return 0;
-}
-
-static av_always_inline void write16_internal(int big_endian, void *p, int value)
-{
- if (big_endian) AV_WB16(p, value);
- else AV_WL16(p, value);
-}
-
-static av_always_inline void write32_internal(int big_endian, void *p, int value)
-{
- if (big_endian) AV_WB32(p, value);
- else AV_WL32(p, value);
-}
-
-#define write16(p, value) write16_internal(s->big_endian, p, value)
-#define write32(p, value) write32_internal(s->big_endian, p, value)
-
-static void encode_rgb48_10bit(AVCodecContext *avctx, const AVFrame *pic,
- uint8_t *dst)
-{
- DPXContext *s = avctx->priv_data;
- const uint8_t *src = pic->data[0];
- int x, y;
-
- for (y = 0; y < avctx->height; y++) {
- for (x = 0; x < avctx->width; x++) {
- int value;
- if (s->big_endian) {
- value = ((AV_RB16(src + 6*x + 4) & 0xFFC0U) >> 4)
- | ((AV_RB16(src + 6*x + 2) & 0xFFC0U) << 6)
- | ((AV_RB16(src + 6*x + 0) & 0xFFC0U) << 16);
- } else {
- value = ((AV_RL16(src + 6*x + 4) & 0xFFC0U) >> 4)
- | ((AV_RL16(src + 6*x + 2) & 0xFFC0U) << 6)
- | ((AV_RL16(src + 6*x + 0) & 0xFFC0U) << 16);
- }
- write32(dst, value);
- dst += 4;
- }
- src += pic->linesize[0];
- }
-}
-
-static void encode_gbrp10(AVCodecContext *avctx, const AVFrame *pic, uint8_t *dst)
-{
- DPXContext *s = avctx->priv_data;
- const uint8_t *src[3] = {pic->data[0], pic->data[1], pic->data[2]};
- int x, y, i;
-
- for (y = 0; y < avctx->height; y++) {
- for (x = 0; x < avctx->width; x++) {
- int value;
- if (s->big_endian) {
- value = (AV_RB16(src[0] + 2*x) << 12)
- | (AV_RB16(src[1] + 2*x) << 2)
- | ((unsigned)AV_RB16(src[2] + 2*x) << 22);
- } else {
- value = (AV_RL16(src[0] + 2*x) << 12)
- | (AV_RL16(src[1] + 2*x) << 2)
- | ((unsigned)AV_RL16(src[2] + 2*x) << 22);
- }
- write32(dst, value);
- dst += 4;
- }
- for (i = 0; i < 3; i++)
- src[i] += pic->linesize[i];
- }
-}
-
-static void encode_gbrp12(AVCodecContext *avctx, const AVFrame *pic, uint8_t *dst)
-{
- DPXContext *s = avctx->priv_data;
- const uint16_t *src[3] = {(uint16_t*)pic->data[0],
- (uint16_t*)pic->data[1],
- (uint16_t*)pic->data[2]};
- int x, y, i, pad;
- pad = avctx->width*6;
- pad = (FFALIGN(pad, 4) - pad) >> 1;
- for (y = 0; y < avctx->height; y++) {
- for (x = 0; x < avctx->width; x++) {
- uint16_t value[3];
- if (s->big_endian) {
- value[1] = AV_RB16(src[0] + x) << 4;
- value[2] = AV_RB16(src[1] + x) << 4;
- value[0] = AV_RB16(src[2] + x) << 4;
- } else {
- value[1] = AV_RL16(src[0] + x) << 4;
- value[2] = AV_RL16(src[1] + x) << 4;
- value[0] = AV_RL16(src[2] + x) << 4;
- }
- for (i = 0; i < 3; i++, dst += 2)
- write16(dst, value[i]);
- }
- for (i = 0; i < pad; i++, dst += 2)
- AV_WN16(dst, 0);
- for (i = 0; i < 3; i++)
- src[i] += pic->linesize[i]/2;
- }
-}
-
-static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
- const AVFrame *frame, int *got_packet)
-{
- DPXContext *s = avctx->priv_data;
- int size, ret, need_align, len;
- uint8_t *buf;
-
-#define HEADER_SIZE 1664 /* DPX Generic header */
- if (s->bits_per_component == 10)
- size = avctx->height * avctx->width * 4;
- else if (s->bits_per_component == 12) {
- // 3 components, 12 bits put on 16 bits
- len = avctx->width*6;
- size = FFALIGN(len, 4);
- need_align = size - len;
- size *= avctx->height;
- } else {
- // N components, M bits
- len = avctx->width * s->num_components * s->bits_per_component >> 3;
- size = FFALIGN(len, 4);
- need_align = size - len;
- size *= avctx->height;
- }
- if ((ret = ff_get_encode_buffer(avctx, pkt, size + HEADER_SIZE, 0)) < 0)
- return ret;
- buf = pkt->data;
-
- memset(buf, 0, HEADER_SIZE);
-
- /* File information header */
- write32(buf, MKBETAG('S','D','P','X'));
- write32(buf + 4, HEADER_SIZE);
- memcpy (buf + 8, "V1.0", 4);
- write32(buf + 20, 1); /* new image */
- write32(buf + 24, HEADER_SIZE);
- if (!(avctx->flags & AV_CODEC_FLAG_BITEXACT))
- memcpy (buf + 160, LIBAVCODEC_IDENT, FFMIN(sizeof(LIBAVCODEC_IDENT), 100));
- write32(buf + 660, 0xFFFFFFFF); /* unencrypted */
-
- /* Image information header */
- write16(buf + 768, 0); /* orientation; left to right, top to bottom */
- write16(buf + 770, 1); /* number of elements */
- write32(buf + 772, avctx->width);
- write32(buf + 776, avctx->height);
- buf[800] = s->descriptor;
- buf[801] = 2; /* linear transfer */
- buf[802] = 2; /* linear colorimetric */
- buf[803] = s->bits_per_component;
- write16(buf + 804, (s->bits_per_component == 10 || s->bits_per_component == 12) ?
- 1 : 0); /* packing method */
- write32(buf + 808, HEADER_SIZE); /* data offset */
-
- /* Image source information header */
- write32(buf + 1628, avctx->sample_aspect_ratio.num);
- write32(buf + 1632, avctx->sample_aspect_ratio.den);
-
- switch(s->bits_per_component) {
- case 8:
- case 16:
- if (need_align) {
- int j;
- const uint8_t *src = frame->data[0];
- uint8_t *dst = pkt->data + HEADER_SIZE;
- size = (len + need_align) * avctx->height;
-            for (j=0; j<avctx->height; j++) {
- memcpy(dst, src, len);
- memset(dst + len, 0, need_align);
- dst += len + need_align;
- src += frame->linesize[0];
- }
- } else {
- size = av_image_copy_to_buffer(buf + HEADER_SIZE, pkt->size - HEADER_SIZE,
- (const uint8_t**)frame->data, frame->linesize,
- avctx->pix_fmt,
- avctx->width, avctx->height, 1);
- }
- if (size < 0)
- return size;
- break;
- case 10:
- if (s->planar)
- encode_gbrp10(avctx, frame, buf + HEADER_SIZE);
- else
- encode_rgb48_10bit(avctx, frame, buf + HEADER_SIZE);
- break;
- case 12:
- encode_gbrp12(avctx, frame, buf + HEADER_SIZE);
- break;
- default:
- av_log(avctx, AV_LOG_ERROR, "Unsupported bit depth: %d\n", s->bits_per_component);
- return -1;
- }
-
- size += HEADER_SIZE;
-
- write32(buf + 16, size); /* file size */
-
- *got_packet = 1;
-
- return 0;
-}
-
-const FFCodec ff_dpx_encoder = {
- .p.name = "dpx",
- CODEC_LONG_NAME("DPX (Digital Picture Exchange) image"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_DPX,
- .p.capabilities = AV_CODEC_CAP_DR1,
- .priv_data_size = sizeof(DPXContext),
- .init = encode_init,
- FF_CODEC_ENCODE_CB(encode_frame),
- .p.pix_fmts = (const enum AVPixelFormat[]){
- AV_PIX_FMT_GRAY8,
- AV_PIX_FMT_RGB24, AV_PIX_FMT_RGBA, AV_PIX_FMT_ABGR,
- AV_PIX_FMT_GRAY16LE, AV_PIX_FMT_GRAY16BE,
- AV_PIX_FMT_RGB48LE, AV_PIX_FMT_RGB48BE,
- AV_PIX_FMT_RGBA64LE, AV_PIX_FMT_RGBA64BE,
- AV_PIX_FMT_GBRP10LE, AV_PIX_FMT_GBRP10BE,
- AV_PIX_FMT_GBRP12LE, AV_PIX_FMT_GBRP12BE,
- AV_PIX_FMT_NONE},
-};
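A note on the 10-bit path above: encode_rgb48_10bit packs each pixel into one 32-bit word with two bits of padding. Here is a hedged restatement of that packing in Python for illustration (the function name pack_dpx_10bit is invented, not part of FFmpeg):

    def pack_dpx_10bit(r16: int, g16: int, b16: int) -> int:
        """Pack one RGB pixel (16-bit components) into one 32-bit DPX word.

        Mirrors encode_rgb48_10bit: keep the top 10 bits of each component,
        placing R in bits 22-31, G in bits 12-21, B in bits 2-11 (bits 0-1 pad).
        """
        r10 = (r16 & 0xFFC0) >> 6  # top 10 bits of the 16-bit sample
        g10 = (g16 & 0xFFC0) >> 6
        b10 = (b16 & 0xFFC0) >> 6
        return (r10 << 22) | (g10 << 12) | (b10 << 2)

    # White (0xFFFF per component) saturates all three 10-bit fields,
    # leaving the two pad bits clear.
    assert pack_dpx_10bit(0xFFFF, 0xFFFF, 0xFFFF) == 0xFFFFFFFC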
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dvdsub.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dvdsub.h
deleted file mode 100644
index 3f51c3f80569b526a9825212634ac4a7f62dbc6a..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dvdsub.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_DVDSUB_H
-#define AVCODEC_DVDSUB_H
-
-#include <stdint.h>
-
-void ff_dvdsub_parse_palette(uint32_t *palette, const char *p);
-
-#endif /* AVCODEC_DVDSUB_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopus.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopus.c
deleted file mode 100644
index 3d3b740a83c3251d6e25af50ced2b5680b21e019..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopus.c
+++ /dev/null
@@ -1,47 +0,0 @@
-/*
- * libopus encoder/decoder common code
- * Copyright (c) 2012 Nicolas George
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <opus_defines.h>
-
-#include "libavutil/error.h"
-#include "libopus.h"
-
-int ff_opus_error_to_averror(int err)
-{
- switch (err) {
- case OPUS_BAD_ARG:
- return AVERROR(EINVAL);
- case OPUS_BUFFER_TOO_SMALL:
- return AVERROR_UNKNOWN;
- case OPUS_INTERNAL_ERROR:
- return AVERROR(EFAULT);
- case OPUS_INVALID_PACKET:
- return AVERROR_INVALIDDATA;
- case OPUS_UNIMPLEMENTED:
- return AVERROR(ENOSYS);
- case OPUS_INVALID_STATE:
- return AVERROR_UNKNOWN;
- case OPUS_ALLOC_FAIL:
- return AVERROR(ENOMEM);
- default:
- return AVERROR(EINVAL);
- }
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h264pred_mips.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h264pred_mips.h
deleted file mode 100644
index 136e2912524557c6762941cde7af7ccc034bb49e..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h264pred_mips.h
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Copyright (c) 2015 Zhou Xiaoyong
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_MIPS_H264PRED_MIPS_H
-#define AVCODEC_MIPS_H264PRED_MIPS_H
-
-#include "constants.h"
-#include "libavcodec/h264pred.h"
-
-void ff_pred16x16_vertical_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred16x16_horizontal_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred16x16_dc_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred8x8l_top_dc_8_mmi(uint8_t *src, int has_topleft, int has_topright,
- ptrdiff_t stride);
-void ff_pred8x8l_dc_8_mmi(uint8_t *src, int has_topleft, int has_topright,
- ptrdiff_t stride);
-void ff_pred8x8l_vertical_8_mmi(uint8_t *src, int has_topleft,
- int has_topright, ptrdiff_t stride);
-void ff_pred4x4_dc_8_mmi(uint8_t *src, const uint8_t *topright,
- ptrdiff_t stride);
-void ff_pred8x8_vertical_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred8x8_horizontal_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred16x16_plane_svq3_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred16x16_plane_rv40_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred16x16_plane_h264_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred8x8_top_dc_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred8x8_dc_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred8x16_vertical_8_mmi(uint8_t *src, ptrdiff_t stride);
-void ff_pred8x16_horizontal_8_mmi(uint8_t *src, ptrdiff_t stride);
-
-#endif /* AVCODEC_MIPS_H264PRED_MIPS_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Craftsman Building Craft on PC - Experience the Creative Sandbox and Design Simulator with LDPlayer.md b/spaces/congsaPfin/Manga-OCR/logs/Craftsman Building Craft on PC - Experience the Creative Sandbox and Design Simulator with LDPlayer.md
deleted file mode 100644
index bfe95c76985ae9939da5b17e6340e67c97d055b9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Craftsman Building Craft on PC - Experience the Creative Sandbox and Design Simulator with LDPlayer.md
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
How to Download Craftsman: Building Craft for PC
-
Do you love building games where you can unleash your creativity and design your own world? If so, you might want to check out Craftsman: Building Craft, a free sandbox game that lets you create anything you can imagine with blocks. Whether you want to build a castle, a city, or a forest, you can do it in this game with various tools and resources. You can also explore different game modes, such as survival, creative, or multiplayer, and have fun with your friends.
But what if you want to play this game on a bigger screen with better controls and performance? Well, you're in luck because in this article, we will show you how to download Craftsman: Building Craft for PC using different methods. You can choose from using an Android emulator like BlueStacks or MEmu, or playing online in browser with now.gg. Read on to find out more about these methods and how they work.
-
What is Craftsman: Building Craft?
-
Craftsman: Building Craft is a simulation game that is inspired by Minecraft, one of the most popular sandbox games of all time. In this game, you can build anything you want with blocks of different materials, shapes, and colors. You can also use tools like hammers, axes, and pickaxes to break and place blocks. You can also craft items like doors, windows, ladders, and furniture to decorate your buildings.
-
The game has three modes that you can choose from: survival, creative, and multiplayer. In survival mode, you have to gather resources, craft items, and fight against enemies like zombies and spiders. You also have to manage your health and hunger levels. In creative mode, you have unlimited resources and no enemies. You can just focus on building whatever you want without any restrictions. In multiplayer mode, you can join other players online and collaborate or compete with them. You can chat with them, share your creations, and explore their worlds.
-
Why Play Craftsman: Building Craft on PC?
-
While Craftsman: Building Craft is a fun and addictive game to play on your mobile device, you might want to try playing it on your PC for a better gaming experience. Here are some of the benefits of playing Craftsman: Building Craft on PC:
-
-
Bigger screen: You can enjoy the game's graphics and details more on a larger screen. You can also see more of your surroundings and have a wider view of your creations.
-
Better controls: You can use your keyboard and mouse to control the game more easily and precisely. You can also customize your key mappings and adjust your sensitivity settings according to your preferences.
-
Improved performance: You can run the game faster and smoother on your PC without any lag or glitches. You can also avoid draining your battery or overheating your device.
-
-
So how do you play Craftsman: Building Craft on PC? Well, there are different methods that you can use, depending on what suits you best. In the next section, we will show you how to download and install Craftsman: Building Craft on PC using three different methods: BlueStacks, MEmu, and now.gg.
-
How to Download and Install Craftsman: Building Craft on PC?
-
To play Craftsman: Building Craft on PC, you need to use a software that can run Android apps on your computer. There are two types of software that can do this: Android emulators and cloud gaming platforms. Android emulators are programs that simulate an Android device on your PC, allowing you to install and run Android apps on it. Cloud gaming platforms are websites that stream Android games from their servers to your browser, allowing you to play them online without downloading anything.
-
In this article, we will show you how to use three of the most popular and reliable software that can run Craftsman: Building Craft on PC: BlueStacks, MEmu, and now.gg. Each of these software has its own advantages and disadvantages, so you can choose the one that works best for you. Here are the steps for each method:
-
Method 1: Using BlueStacks
-
BlueStacks is one of the most popular and widely used Android emulators in the world. It has over 500 million users and supports thousands of Android games and apps. It also has many features that enhance your gaming experience, such as multi-instance, macro recorder, game controls, and game center. Here's how to use BlueStacks to play Craftsman: Building Craft on PC:
-
How to play craftsman building craft on pc with memu[^1^]
-Bluestacks app player for craftsman building craft on pc[^2^]
-Craftsman building craft online in browser[^3^]
-LDPlayer emulator for craftsman building craft on pc[^4^]
-Craftsman building craft pc download free full version
-Craftsman building craft simulation game for pc
-Craftsman building craft building games for pc
-Craftsman building craft pixel graphics for pc
-Craftsman building craft creative sandbox for pc
-Craftsman building craft design simulator for pc
-Craftsman building craft multiplayer mode for pc
-Craftsman building craft net energy gain for pc
-Craftsman building craft holy grail fusion experiment for pc
-Craftsman building craft stargame22 developer for pc
-Craftsman building craft android gaming platform for pc
-Craftsman building craft high definition graphics for pc
-Craftsman building craft advanced keymapping function for pc
-Craftsman building craft record feature for pc
-Craftsman building craft minimalistic visuals for pc
-Craftsman building craft realistic sound effects for pc
-Craftsman building craft various game modes for pc
-Craftsman building craft skyscrapers and cabins for pc
-Craftsman building craft tools and resources for pc
-Craftsman building craft imagination and fun for pc
-Craftsman building craft windows 10/8/7 for pc
-Craftsman building craft mac os x for pc
-Craftsman building craft linux ubuntu for pc
-Craftsman building craft chrome os chromebook for pc
-Craftsman building craft nox app player for pc
-Craftsman building craft gameloop tencent gaming buddy for pc
-Craftsman building craft smartgaga android emulator for pc
-Craftsman building craft genymotion virtual device manager for pc
-Craftsman building craft andy android emulator for pc
-Craftsman building craft koplayer android emulator for pc
-Craftsman building craft droid4x android simulator for pc
-Craftsman building craft remix os player android emulator for pc
-Craftsman building craft mumu app player mac version for pc
-Craftsman building craft phoenix os android based operating system for pc
-Craftsman building craft primeos android x86 operating system for pc
-Craftsman building craft open thos android x86 operating system for pc
-
Step 1: Download and Install BlueStacks
-
To download BlueStacks, go to https://www.bluestacks.com/ and click on the "Download BlueStacks" button. This will download the installer file to your PC. Once the download is complete, open the file and follow the instructions to install BlueStacks on your PC.
-
Step 2: Sign in to Google Play Store
-
After installing BlueStacks, launch it from your desktop or start menu. You will see a welcome screen where you need to sign in to Google Play Store using your Google account. If you don't have a Google account, you can create one for free by clicking on the "Create account" link.
-
Step 3: Search for Craftsman: Building Craft
-
Once you sign in to Google Play Store, you will see the home screen of BlueStacks with various apps and games. To search for Craftsman: Building Craft, click on the search icon at the top right corner of the screen and type in "Craftsman: Building Craft" in the
search bar. You will see the game's icon and name in the search results. Click on it to go to the game's page on Google Play Store.
-
Step 4: Install Craftsman: Building Craft
-
On the game's page, you will see an "Install" button. Click on it to start downloading and installing the game on BlueStacks. You will see a progress bar and a notification when the installation is complete.
-
Step 5: Start Playing
-
After installing the game, you can start playing it by clicking on the game's icon on the home screen of BlueStacks. You can also find it in the "My games" tab on the sidebar. You will see the game's loading screen and then the main menu. You can choose your game mode and start building your world.
-
Method 2: Using MEmu
-
MEmu is another Android emulator that you can use to play Craftsman: Building Craft on PC. It is fast, stable, and compatible with many Android games and apps. It also has features like keyboard mapping, screen recording, and multiple instances. Here's how to use MEmu to play Craftsman: Building Craft on PC:
-
Step 1: Download and Install MEmu
-
To download MEmu, go to https://www.memuplay.com/ and click on the "Download" button. This will download the installer file to your PC. Once the download is complete, open the file and follow the instructions to install MEmu on your PC.
-
Step 2: Start MEmu and Open Google Play
-
After installing MEmu, launch it from your desktop or start menu. You will see the MEmu desktop with various icons and widgets. To open Google Play, click on the "Play Store" icon on the bottom right corner of the screen. You will need to sign in to Google Play using your Google account or create one if needed.
-
Step 3: Search for Craftsman: Building Craft
-
Once you sign in to Google Play, you will see the home screen of Google Play with various apps and games. To search for Craftsman: Building Craft, click on the search icon at the top left corner of the screen and type in "Craftsman: Building Craft" in the search bar. You will see the game's icon and name in the search results. Click on it to go to the game's page on Google Play.
-
Step 4: Download and Install Craftsman: Building Craft
-
On the game's page, you will see a "Download" button. Click on it to start downloading and installing the game on MEmu. You will see a progress bar and a notification when the installation is complete.
-
Step 5: Start Playing
-
After installing the game, you can start playing it by clicking on the game's icon on the MEmu desktop. You can also find it in the "My games" folder on the sidebar. You will see the game's loading screen and then the main menu. You can choose your game mode and start building your world.
-
Method 3: Using now.gg
-
now.gg is a cloud gaming platform that allows you to play Android games online in browser without downloading anything. It streams games from its servers to your browser, so you don't need to install any software or emulator on your PC. It also has features like cloud save, social login, and chat. Here's how to use now.gg to play Craftsman: Building Craft online:
-
Step 1: Go to now.gg Website
-
To access now.gg, go to https://now.gg/ using any browser that supports HTML5, such as Chrome, Firefox, or Edge. You will see the now.gg website with various games and categories.
-
Step 2: Search for Craftsman: Building Craft
-
To search for Craftsman: Building Craft, click on the search icon at the top right corner of the website and type in "Craftsman: Building Craft" in
the search bar. You will see the game's icon and name in the search results. Click on it to go to the game's page on now.gg.
-
Step 3: Click and Play Instantly
-
On the game's page, you will see a "Play" button. Click on it to start playing the game online in browser without downloading anything. You will see the game's loading screen and then the main menu. You can choose your game mode and start building your world.
-
Conclusion
-
Craftsman: Building Craft is a fun and creative game that lets you build anything you want with blocks. You can also explore different game modes and play with your friends online. If you want to play this game on PC, you can use one of the three methods we showed you in this article: BlueStacks, MEmu, or now.gg. Each of these methods has its own pros and cons, so you can choose the one that suits you best. We hope you found this article helpful and informative. Now go ahead and download Craftsman: Building Craft for PC and enjoy building your own world.
-
FAQs
-
Here are some of the frequently asked questions and answers about Craftsman: Building Craft and how to play it on PC:
-
-
Q: Is Craftsman: Building Craft free to play?
-
A: Yes, Craftsman: Building Craft is a free-to-play game that you can download from Google Play Store or play online on now.gg. However, the game may contain ads or in-app purchases that require real money.
-
Q: Is Craftsman: Building Craft safe to play?
-
A: Yes, Craftsman: Building Craft is a safe game to play as long as you download it from a trusted source like Google Play Store or now.gg. You should also avoid clicking on any suspicious links or ads that may appear in the game or on the website.
-
Q: Can I play Craftsman: Building Craft offline?
-
A: Yes, you can play Craftsman: Building Craft offline if you download it on your device using an Android emulator like BlueStacks or MEmu. However, you will not be able to access some features like multiplayer mode or cloud save when you are offline.
-
Q: Can I play Craftsman: Building Craft with my friends?
-
A: Yes, you can play Craftsman: Building Craft with your friends online if you have an internet connection and a Google account. You can join other players online and chat with them, share your creations, and explore their worlds.
-
Q: How can I contact the developer of Craftsman: Building Craft?
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World Full Version PC Download and Play the Game with BlueStacks Emulator.md b/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World Full Version PC Download and Play the Game with BlueStacks Emulator.md
deleted file mode 100644
index dd5b4e153183d113757cb25a483dd5fa7b9566fd..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World Full Version PC Download and Play the Game with BlueStacks Emulator.md
+++ /dev/null
@@ -1,196 +0,0 @@
-
-
How to Download Geometry Dash World Full Version PC
-
If you are looking for a challenging and fun game to play on your PC, you might want to check out Geometry Dash World. This is a spin-off of the popular Geometry Dash series, which combines rhythm-based action platforming with colorful graphics and catchy music. In this article, we will explain what Geometry Dash World is, why you should download it, and how to do it in two easy ways.
Geometry Dash World is a game developed by RobTop Games, a Swedish indie game studio. It was released in December 2016 as a free-to-play game for mobile devices, and later ported to Windows PC. It is a spin-off of Geometry Dash, which was first released in 2013 and has become one of the most popular games in the genre.
-
A spin-off of Geometry Dash
-
Geometry Dash is a series of games that feature a simple but addictive gameplay mechanic. You control a geometric shape that can jump, fly, and flip through various obstacles in sync with the music. The game is known for its high difficulty level, as one mistake can send you back to the beginning of the level. The game also allows you to create your own levels and share them online with other players.
-
Geometry Dash World is a spin-off that introduces new features and content to the original game. It has two worlds with five levels each, new enemies, new music, new icons, and new secrets. It also has a shop, a vault, daily quests, rewards, and hidden chests. The game is designed to showcase the new features of Geometry Dash version 2.1, which was released in January 2017.
-
How to download geometry dash world on pc for free
-Geometry dash world pc download windows 10
-Geometry dash world full version free download pc
-Geometry dash world steam download
-Geometry dash world emulator for pc
-Geometry dash world online play on pc
-Geometry dash world pc game download
-Geometry dash world bluestacks download
-Geometry dash world apk download for pc
-Geometry dash world pc requirements
-Geometry dash world mac download
-Geometry dash world pc cheats
-Geometry dash world pc mods
-Geometry dash world pc hack
-Geometry dash world pc controller support
-Geometry dash world pc gameplay
-Geometry dash world pc review
-Geometry dash world pc tips and tricks
-Geometry dash world pc update
-Geometry dash world pc level editor
-Geometry dash world pc custom levels
-Geometry dash world pc achievements
-Geometry dash world pc soundtrack download
-Geometry dash world pc nox player download
-Geometry dash world pc memu play download
-Geometry dash world pc ldplayer download
-Geometry dash world pc gameloop download
-Geometry dash world pc koplayer download
-Geometry dash world pc droid4x download
-Geometry dash world pc andy emulator download
-Geometry dash world windows 7 download
-Geometry dash world windows 8 download
-Geometry dash world windows xp download
-Geometry dash world linux download
-Geometry dash world chromebook download
-Geometry dash world desktop download
-Geometry dash world laptop download
-Download geometry dash subzero for pc full version free
-Download geometry dash meltdown for pc full version free
-Download geometry dash lite for pc full version free
-Download geometry dash for pc full version free 2023 update
-Download geometry dash 2.2 for pc full version free
-Download geometry dash 2.1 for pc full version free
-Download geometry dash 2.0 for pc full version free
-Download geometry dash 1.9 for pc full version free
-Download geometry dash 1.8 for pc full version free
-Download geometry dash 1.7 for pc full version free
-Download geometry dash 1.6 for pc full version free
-Download geometry dash 1.5 for pc full version free
-
A rhythm-based action platformer
-
Geometry Dash World is a game that combines rhythm and action elements in a platformer style. The game is based on timing your jumps and movements to match the beat of the music. The game has various modes, such as normal mode, practice mode, and online mode. In normal mode, you have to complete each level without dying. In practice mode, you can use checkpoints to save your progress and practice difficult parts. In online mode, you can play levels created by other players or upload your own levels.
-
The game has different types of levels, such as normal levels, demon levels, gauntlet levels, map pack levels, and featured levels. Normal levels are the official levels made by the developer, RobTop Games. Demon levels are the hardest levels in the game, and require a lot of skill and practice to complete. Gauntlet levels are collections of five levels with a common theme or difficulty. Map pack levels are collections of three levels with a common theme or difficulty. Featured levels are the best levels created by the community, and are selected by the moderators of the game.
-
A game with two worlds and ten levels
-
Geometry Dash World has two worlds: Dashlands and Toxic Factory. Each world has five levels, which are named as follows:
-
-
-
World           Level            Difficulty   Song
Dashlands       Payload          Easy         Payload by Dex Arson
Dashlands       Beast Mode       Normal       Beast Mode by Dex Arson
Dashlands       Machina          Hard         Machina by Dex Arson
Dashlands       Years            Harder       Years by Dex Arson & Waterflame
Dashlands       Frontlines       Insane       Frontlines by Dex Arson & Waterflame
Toxic Factory   Spike Factory    Easy         Spike Factory by F-777 & Dex Arson
Toxic Factory   Piano Masters    Normal       Piano Masters by F-777 & Dex Arson
Toxic Factory   Space Pirates    Hard         Space Pirates by Waterflame
Toxic Factory   Striker          Harder       Striker by Waterflame
Toxic Factory   Embers           Insane       Embers by Dex Arson & Waterflame
-
To complete each level, you have to avoid the obstacles and reach the end portal. You can collect stars, orbs, diamonds, keys, and coins along the way. You can also unlock achievements and rewards for completing certain tasks or challenges. The game has a colorful and dynamic design, with different themes and backgrounds for each level. The game also has a catchy and energetic soundtrack, with songs from various artists such as Dex Arson, Waterflame, and F-777.
-
Why Download Geometry Dash World Full Version PC?
-
Geometry Dash World is a game that can be enjoyed by anyone who likes rhythm-based action platformers. However, if you want to have the best experience possible, you might want to download the full version of the game for your PC. Here are some reasons why:
-
To enjoy better graphics and performance
-
One of the advantages of playing Geometry Dash World on your PC is that you can enjoy better graphics and performance than on your mobile device. You can adjust the resolution, quality, and frame rate of the game to suit your preferences and capabilities of your PC. You can also play the game in full screen mode, which can enhance your immersion and concentration. Playing on your PC can also reduce the risk of lagging, crashing, or overheating that might occur on your mobile device.
-
To access more features and content
-
Another benefit of downloading Geometry Dash World full version PC is that you can access more features and content than on your mobile device. For example, you can use the level editor to create your own levels and share them online with other players. You can also download and play thousands of online levels created by the community, which can offer you endless variety and challenge. You can also use the practice mode to improve your skills and master difficult levels. You can also customize your character with different icons, colors, trails, and effects.
-
To play online levels created by the community
-
A final reason why you should download Geometry Dash World full version PC is that you can play online levels created by the community. These are levels that are uploaded by other players using the level editor. They can range from easy to demon difficulty, and from simple to complex design. They can also feature different themes, music, gameplay mechanics, and secrets. Playing online levels can give you a new perspective on the game, as well as challenge your skills and creativity. You can also rate, comment, and like the levels that you play, as well as follow your favorite creators.
How to Download Geometry Dash World Full Version PC?
-
Now that you know what Geometry Dash World is and why you should download it, you might be wondering how to do it. Well, there are two easy options that you can choose from. You can either download the game from the official website of RobTop Games, or from a third-party platform such as Steam, Epic Games, or GOG. Here are the steps for each option:
-
Option 1: From the official website
-
This is the simplest and most direct way to download Geometry Dash World full version PC. All you need to do is visit the official website of RobTop Games, which is https://www.robtopgames.com/. There, you will find a download button for Geometry Dash World, as well as other games from the same developer. Here are the steps to follow:
-
Step 1: Visit the website and click on the download button
-
Go to https://www.robtopgames.com/ and look for the Geometry Dash World section. You will see a download button that says "Download for Windows". Click on it and you will be redirected to a page where you can choose your preferred language and location.
-
Step 2: Run the installer and follow the instructions
-
After choosing your language and location, you will see a link that says "Download Geometry Dash World". Click on it and you will start downloading the installer file for the game. The file size is about 50 MB, so it should not take long to download. Once the download is complete, run the installer file and follow the instructions on the screen. You will be asked to agree to the terms and conditions, choose a destination folder, and create a shortcut.
-
Step 3: Launch the game and start playing
-
Once the installation is done, you will see a message that says "Geometry Dash World has been installed successfully". You can either click on "Finish" to close the installer, or on "Play" to launch the game. You can also launch the game from your desktop shortcut or your start menu. The game will open in a windowed mode, but you can switch to full screen mode by pressing F11. You can also adjust the settings of the game from the options menu. Now you can start playing Geometry Dash World on your PC!
-
Option 2: From a third-party platform
-
This is another way to download Geometry Dash World full version PC, but it requires you to use a third-party platform such as Steam, Epic Games, or GOG. These are platforms that allow you to buy, download, and play games from various developers and publishers. They also offer other features such as cloud saving, achievements, social features, and more. Here are the steps to follow:
-
Step 1: Choose a platform such as Steam, Epic Games, or GOG
-
The first step is to choose which platform you want to use to download Geometry Dash World. There are several options available, but some of the most popular ones are Steam, Epic Games, and GOG. Each platform has its own advantages and disadvantages, so you might want to do some research before deciding which one suits you best. Here are some brief descriptions of each platform:
-
-
Steam: This is the most popular and widely used platform for PC gaming. It has over 30,000 games in its library, including Geometry Dash World. It also has a large and active community of players and creators. You can access Steam from your web browser or download its launcher for your PC.
-
Epic Games: This is another popular platform for PC gaming. It has over 500 games in its library, including Geometry Dash World. It also offers free games every week and exclusive deals on some titles. You can access Epic Games from your web browser or download its launcher for your PC.
-
GOG: This is a platform that focuses on DRM-free games for PC gaming. It has over 3,000 games in its library, including Geometry Dash World. It also offers classic games, indie games, and curated collections. You can access GOG from your web browser or download its launcher for your PC.
-
-
You can choose any of these platforms or any other platform that offers Geometry Dash World for PC. Just make sure that the platform is trustworthy and secure before using it.
-
Step 2: Create an account and install the launcher
-
The next step is to create an account on the platform that you chose and install its launcher on your PC. You will need to provide some basic information such as your name, email, password, and username. You might also need to verify your email or phone number. After creating your account, you will be able to access the platform's website or launcher. You will need to download and install the launcher on your PC, which is a program that allows you to manage your games and access other features of the platform.
-
Step 3: Search for Geometry Dash World and purchase or get it for free
-
The third step is to search for Geometry Dash World on the platform that you chose and purchase or get it for free. You can use the search bar or the browse function to find the game. You will see a page that shows the game's details, such as its description, screenshots, videos, reviews, and ratings. You will also see a button that says "Buy", "Get", or something similar. Depending on the platform and the game, you might need to pay a certain amount of money or get it for free. If you need to pay, you will need to provide your payment information and confirm your purchase. If you get it for free, you will just need to click on the button and add the game to your library.
-
Step 4: Download and install the game from your library
-
The fourth step is to download and install Geometry Dash World from your library. You can access your library from the platform's website or launcher. You will see a list of games that you own or have access to. You will find Geometry Dash World among them. You will see a button that says "Download", "Install", or something similar. Click on it and you will start downloading the game files to your PC. The file size is about 50 MB, so it should not take long to download. Once the download is complete, you will see a button that says "Play", "Launch", or something similar. Click on it and you will start installing the game on your PC. The installation process is similar to the one from the official website, so you just need to follow the instructions on the screen.
-
Step 5: Launch the game and start playing
-
The final step is to launch Geometry Dash World and start playing. You can launch the game from your library or from your desktop shortcut or start menu. The game will open in a windowed mode, but you can switch to full screen mode by pressing F11. You can also adjust the settings of the game from the options menu. Now you can start playing Geometry Dash World on your PC!
-
Conclusion
-
Geometry Dash World is a fun and challenging game that can keep you entertained for hours. It is a spin-off of Geometry Dash, which is a popular rhythm-based action platformer series. It has two worlds with ten levels each, as well as online levels created by the community. It also has a level editor, a shop, a vault, daily quests, rewards, and hidden chests.
-
If you want to download Geometry Dash World full version PC, you have two easy options. You can either download it from the official website of RobTop Games, or from a third-party platform such as Steam, Epic Games, or GOG. Both options are simple and straightforward, and only require a few steps to follow.
-
Here are some tips and recommendations for playing Geometry Dash World:
-
-
Practice mode is your friend. Use it to learn the patterns and timings of each level.
-
Collect stars, orbs, diamonds, keys, and coins to unlock new icons, colors, trails, effects, and secrets.
-
Use the level editor to create your own levels and share them online with other players.
-
Play online levels created by the community for more variety and challenge.
-
Have fun and don't give up!
-
-
FAQs
-
Q1. How much does Geometry Dash World cost?
-
A1. Geometry Dash World is free-to-play on mobile devices and PC. However, some features and content might require in-app purchases or subscriptions.
-
Q2. What are the system requirements for Geometry Dash World?
-
A2. Geometry Dash World does not have high system requirements for PC. Here are the minimum requirements (a quick self-check sketch follows the list):
-
-
OS: Windows XP/Vista/7/8/10
-
Processor: 2 GHz
-
Memory: 512 MB RAM
-
Graphics: OpenGL 2.0 support
-
Storage: 100 MB available space
-
-
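If you want to check your PC against these numbers before installing, a small script can do it. The sketch below is illustrative only: it assumes the third-party psutil package is installed, and the thresholds are taken straight from the list above.
import platform
import shutil

import psutil  # assumption: installed via `pip install psutil`

MIN_RAM_MB = 512    # from the minimum requirements above
MIN_DISK_MB = 100

ram_mb = psutil.virtual_memory().total / (1024 ** 2)
free_mb = shutil.disk_usage(".").free / (1024 ** 2)

print(f"OS: {platform.system()} {platform.release()}")
print(f"RAM: {ram_mb:.0f} MB -", "OK" if ram_mb >= MIN_RAM_MB else "below minimum")
print(f"Free disk: {free_mb:.0f} MB -", "OK" if free_mb >= MIN_DISK_MB else "below minimum")
-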
Q3. How can I customize my character in Geometry Dash World?
-
A3. You can customize your character by changing its icon, color, trail, and effect. You can unlock new icons, colors, trails, and effects by collecting stars, orbs, diamonds, keys, and coins. You can also buy them from the shop or get them from the vault. To change your character's appearance, go to the options menu and click on the icon button. There, you can choose from different categories such as cubes, ships, balls, UFOs, waves, robots, and spiders. You can also select your primary and secondary colors, as well as your trail and effect.
-
Q4. How can I create my own levels in Geometry Dash World?
-
A4. You can create your own levels in Geometry Dash World by using the level editor. The level editor is a tool that allows you to design and build your own levels using various blocks, objects, enemies, triggers, and effects. You can also choose the music, background, ground, and color of your level. To access the level editor, go to the main menu and click on the create button. There, you can start a new level or edit an existing one. You can also test your level by clicking on the play button. To upload your level online, go to the online menu and click on the upload button. There, you can name your level, choose its difficulty and description, and submit it for approval.
-
Q5. How can I play Geometry Dash World on other devices?
-
A5. You can play Geometry Dash World on other devices such as Android or iOS smartphones or tablets. To do so, you need to download the game from the Google Play Store or the App Store. The game is compatible with most devices that run Android 4.0 or higher or iOS 8.0 or higher. The game is free-to-play on both platforms, but some features and content might require in-app purchases or subscriptions.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Laughing Minions Wallpapers Funny and Cute Despicable Me Characters.md b/spaces/congsaPfin/Manga-OCR/logs/Laughing Minions Wallpapers Funny and Cute Despicable Me Characters.md
deleted file mode 100644
index e369df46da9b647c1950a7dec2dcdb699d7aea1c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Laughing Minions Wallpapers Funny and Cute Despicable Me Characters.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
Minions Wallpaper: A Fun Way to Decorate Your Device
-
If you are a fan of the cute and hilarious yellow creatures known as minions, you might want to show your love by using a minions wallpaper on your device. Whether it is your desktop, laptop, tablet, or phone, a minions wallpaper can add some color and fun to your screen. In this article, we will tell you what minions are, what minions wallpapers are, what types and benefits minions wallpapers offer, and how to find and download them.
Minions are an all-male species of fictional yellow creatures that appear in Illumination's Despicable Me franchise. They are characterized by their childlike behavior and their language, which is largely unintelligible. They have one or two eyes, wear denim overalls and goggles, and have three fingers on each hand. They are loyal followers of the most despicable villains in history, such as Gru, Scarlet Overkill, and the Vicious 6.
-
Minions are popular because they are adorable, funny, and relatable. They have a sense of humor, curiosity, and adventure that appeals to people of all ages. They also have a unique style and personality that makes them stand out from other animated characters. They have become icons of pop culture and have spawned many merchandise, games, books, spin-offs, and memes.
-
What are Minions Wallpapers and How to Use Them?
-
Minions wallpapers are digital images that feature minions in various poses, settings, or situations. They can be used as backgrounds for your device's screen to make it more attractive and personalized. You can choose a minions wallpaper that matches your mood, preference, or theme.
-
To use a minions wallpaper, you need to download it from a reliable source and save it on your device. Then, you need to go to your device's settings and select the option to change your wallpaper. You can then browse through your saved images and pick the one you want to use. You can also adjust the size, position, or orientation of the image to fit your screen.
-
Types of Minions Wallpapers
-
There are many types of minions wallpapers that you can choose from. Here are some of the most common ones:
-
Minions from Despicable Me Movies
-
These are wallpapers that feature minions from the Despicable Me movies (2010-2017), where they serve as Gru's henchmen and adoptive children's friends. You can find wallpapers that show them in different scenes or activities from the movies, such as stealing the moon, fighting Vector or El Macho, or celebrating Christmas.
-
Minions from Minions Movies
-
These are wallpapers that feature minions from the Minions movies (2015-2022), where they star as the main protagonists and seek out new villains to serve. You can find wallpapers that show them in different locations or eras from the movies, such as ancient Egypt, medieval England, or 1970s New York.
-
-
Minions with Other Characters or Themes
-
These are wallpapers that feature minions with other characters or themes from popular culture or media. You can find wallpapers that show them dressed up as superheroes, pirates, ninjas, or other costumes. You can also find wallpapers that show them interacting with other characters from movies, TV shows, games, or comics.
-
Benefits of Minions Wallpapers
Using a minions wallpaper can have several benefits for you and your device. Here are some of them:
-
Brighten Up Your Mood and Device
-
Minions wallpapers can make your device look more lively and cheerful. The bright colors, funny expressions, and amusing situations of the minions can bring a smile to your face and lighten up your mood. You can also change your wallpaper according to your mood or the season, such as using a festive or spooky one for holidays.
-
Express Your Personality and Fandom
-
Minions wallpapers can also help you express your personality and fandom. You can choose a wallpaper that reflects your interests, hobbies, or preferences. For example, if you like music, you can use a wallpaper that shows the minions playing instruments or singing. You can also show your support and appreciation for the minions and the Despicable Me franchise by using a wallpaper that features them.
-
Support the Creators and Franchise
-
Another benefit of using a minions wallpaper is that you can support the creators and franchise of the minions. By using a wallpaper that is official or licensed, you can help them generate revenue and recognition. You can also spread the word and popularity of the minions by sharing your wallpaper with others or recommending them to watch the movies or buy the merchandise.
-
How to Find and Download Minions Wallpapers
-
If you are looking for minions wallpapers, there are many ways to find and download them. Here are some of the best methods:
-
Use Bing Image Search with Keywords and Filters
-
One of the easiest and fastest ways to find minions wallpapers is to use Bing image search with keywords and filters. You can type in "minions wallpaper" or any other related term in the search box and hit enter. You can then use the filters on the top or side of the page to narrow down your results by size, color, type, layout, or license. You can also use the "Related searches" section at the bottom of the page to find more suggestions.
-
Visit Official or Fan Websites for High-Quality Images
-
Another way to find minions wallpapers is to visit official or fan websites that offer high-quality images. You can go to the official website of Illumination or Minions to find wallpapers that are created by the studio or approved by them. You can also go to fan websites such as Minion Land or Minions Art to find wallpapers that are made by fans or artists who love the minions.
-
Check the License and Resolution Before Downloading
-
Before you download a minions wallpaper, you should check the license and resolution of the image. The license tells you how you can use the image legally and ethically. You should look for images that are free to use, share, or modify, or that require attribution or permission from the owner. The resolution tells you how clear and sharp the image will look on your screen. You should look for images that have a high resolution (at least 1920 x 1080 pixels) or that match your device's screen size.
-
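If you want to automate that resolution check, here is a minimal sketch using the Pillow imaging library (an assumption on my part; "wallpaper.jpg" is a placeholder filename, and 1920 x 1080 is the Full HD threshold recommended above).
from PIL import Image  # assumption: Pillow installed via `pip install Pillow`

MIN_W, MIN_H = 1920, 1080  # Full HD, as recommended above

with Image.open("wallpaper.jpg") as img:  # placeholder filename
    w, h = img.size

if w >= MIN_W and h >= MIN_H:
    print(f"{w}x{h}: sharp enough for a Full HD screen")
else:
    print(f"{w}x{h}: below the recommended 1920 x 1080")
-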
Conclusion
-
Minions wallpapers are a fun way to decorate your device and show your love for the yellow creatures. They come in various types and styles, and they have many benefits for you and your device. You can find and download them easily by using Bing image search, visiting official or fan websites, and checking the license and resolution of the image. So what are you waiting for? Go ahead and choose a minions wallpaper that suits your taste and enjoy!
-
Frequently Asked Questions
-
What are minions?
-
Minions are an all-male species of fictional yellow creatures that appear in Illumination's Despicable Me franchise. They are loyal followers of the most despicable villains in history.
-
What are minions wallpapers?
-
Minions wallpapers are digital images that feature minions in various poses, settings, or situations. They can be used as backgrounds for your device's screen.
-
What are the types of minions wallpapers?
-
There are many types of minions wallpapers, such as minions from Despicable Me movies, minions from Minions movies, or minions with other characters or themes.
-
What are the benefits of minions wallpapers?
-
Some of the benefits of minions wallpapers are that they can brighten up your mood and device, express your personality and fandom, and support the creators and franchise.
-
How to find and download minions wallpapers?
-
You can find and download minions wallpapers by using Bing image search with keywords and filters, visiting official or fan websites, and checking the license and resolution of the image.
-
I hope this article has helped you learn more about minions wallpapers and how to use them. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and have a great day!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Raft Survival Ocean Nomad - Multiplayer Mod APK for Android Devices.md b/spaces/congsaPfin/Manga-OCR/logs/Raft Survival Ocean Nomad - Multiplayer Mod APK for Android Devices.md
deleted file mode 100644
index c1650be34247bdd99b9e09ad99b7cd1cd1b00de0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Raft Survival Ocean Nomad - Multiplayer Mod APK for Android Devices.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
Raft Survival: Ocean Nomad Multiplayer Mod APK - A Thrilling Adventure on the Sea
-
Introduction
-
Do you love survival games? Do you want to experience a challenging and exciting adventure on the ocean? Do you want to play with your friends and other players online? If you answered yes to any of these questions, then you should try Raft Survival: Ocean Nomad Multiplayer Mod APK.
-
What is Raft Survival: Ocean Nomad?
-
Raft Survival: Ocean Nomad is a survival simulator game that puts you in the role of a survivor who has to survive on a raft in the middle of the ocean. You have to look for food and water, craft items, build your raft, fight sharks and other enemies, and explore islands and wrecks. You can also customize your character and your raft with different skins and items.
Multiplayer Mod APK is a modified version of Raft Survival: Ocean Nomad that allows you to play online with other players. You can join or create a server, chat with other players, cooperate or compete with them, and share your resources and items. You can also enjoy unlimited coins, resources, and no ads in this mod.
-
Why You Should Try Raft Survival: Ocean Nomad Multiplayer Mod APK
-
Raft Survival: Ocean Nomad Multiplayer Mod APK is a fun and addictive game that will keep you entertained for hours. You can enjoy the following benefits when you play this mod:
-
Features of Raft Survival: Ocean Nomad Multiplayer Mod APK
-
Unlimited Coins
-
Coins are the currency in Raft Survival: Ocean Nomad that you can use to buy skins, items, and upgrades. With unlimited coins, you can buy anything you want without worrying about running out of money. You can also use coins to revive yourself if you die in the game.
-
Unlimited Resources
-
Resources are the materials that you need to craft items, build your raft, and survive on the ocean. You can find resources by fishing, collecting debris, harvesting plants, and looting islands and wrecks. With unlimited resources, you can craft and build anything you need without having to search for them. You can also use resources to trade with other players.
-
-
No Ads
-
Ads are the annoying pop-ups that interrupt your gameplay and make you watch videos or install apps. With no ads, you can enjoy the game without any distractions or interruptions. You can also save your data and battery by not having to load ads.
-
How to Download and Install Raft Survival: Ocean Nomad Multiplayer Mod APK
-
If you want to try Raft Survival: Ocean Nomad Multiplayer Mod APK, you have to follow these simple steps:
-
Step 1: Enable Unknown Sources
-
Before you can install the mod, you have to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.
-
Step 2: Download the APK File
-
Next, you have to download the APK file of Raft Survival: Ocean Nomad Multiplayer Mod APK. You can find the link to download it here. Make sure you have enough storage space on your device before downloading it.
-
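If you are fetching the file on a computer first, the download itself is easy to script. Here is a minimal sketch with the requests library (an assumption; the URL below is a placeholder, not a real download mirror).
import requests  # assumption: installed via `pip install requests`

URL = "https://example.com/raft-survival-mod.apk"  # placeholder, not a real mirror

with requests.get(URL, stream=True, timeout=30) as resp:
    resp.raise_for_status()
    with open("raft-survival-mod.apk", "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):  # stream to avoid loading it all in RAM
            f.write(chunk)

print("saved raft-survival-mod.apk")
-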
Step 3: Install the APK File
-
After downloading the APK file, you have to install it on your device. To do this, locate the file in your downloads folder and tap on it. Then, follow the instructions on the screen to complete the installation.
-
Step 4: Launch the Game and Enjoy
-
Finally, you can launch the game and enjoy playing Raft Survival: Ocean Nomad Multiplayer Mod APK. You can create or join a server, chat with other players, and survive on the ocean with unlimited coins and resources.
-
Tips and Tricks for Raft Survival: Ocean Nomad Multiplayer Mod APK
-
To make the most out of Raft Survival: Ocean Nomad Multiplayer Mod APK, here are some tips and tricks that you should know:
-
Build Your Raft Wisely
-
Your raft is your home and your lifeline on the ocean. You have to build it wisely and expand it as you go. You can use different items and structures to make your raft more comfortable and functional. For example, you can use a sail to move faster, a bed to sleep and save your progress, a water purifier to get fresh water, a grill to cook food, a chest to store items, a crop plot to grow plants, a research table to unlock new items, and more. You can also decorate your raft with different skins and items to make it look more appealing.
-
Explore the Islands and Wrecks
-
The ocean is full of islands and wrecks that you can explore for loot and resources. You can find different items such as blueprints, weapons, armor, tools, seeds, food, water, and more. You can also encounter different enemies such as pirates, mutants, zombies, and more. You can use a raft or a boat to travel between islands and wrecks. You can also use an anchor to stop your raft from drifting away while you explore.
-
Craft Weapons and Tools
-
You will need weapons and tools to survive on the ocean. You can craft different weapons such as spears, axes, knives, bows, guns, and more. You can use them to fight sharks and other enemies, hunt animals and fish, and chop trees and plants. You can also craft different tools such as hooks, nets, paddles, binoculars, compasses, and more. You can use them to collect debris, catch fish, move your raft, see farther, navigate the ocean, and more.
-
Watch Out for Sharks and Other Dangers
-
The ocean is not a safe place. You have to watch out for sharks and other dangers that can harm you or your raft. Sharks can attack you or your raft and cause damage or death. You can use weapons or shark bait to fend them off or distract them. You can also encounter other dangers such as storms, waves, radiation, hunger, thirst, cold, heat, and more. You have to prepare yourself and your raft for these situations and use items such as clothes, armor, bandages, medicine, food, water, fire, and more.
-
Conclusion
-
Raft Survival: Ocean Nomad Multiplayer Mod APK is a game that will test your survival skills and creativity on the ocean. You can play online with other players and enjoy unlimited coins and resources. You can also explore islands and wrecks, craft weapons and tools, build your raft, and fight sharks and other enemies. If you are looking for a thrilling and fun adventure on the sea, you should download Raft Survival: Ocean Nomad Multiplayer Mod APK today.
-
FAQs
-
Here are some frequently asked questions about Raft Survival: Ocean Nomad Multiplayer Mod APK:
-
-
Is Raft Survival: Ocean Nomad Multiplayer Mod APK safe to download and install?
-
Yes, Raft Survival: Ocean Nomad Multiplayer Mod APK is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and scan it with an antivirus before installing it.
-
Is Raft Survival: Ocean Nomad Multiplayer Mod APK compatible with my device?
-
Raft Survival: Ocean Nomad Multiplayer Mod APK is compatible with most Android devices that have Android 4.1 or higher. However, some devices may experience performance issues or crashes due to hardware limitations or compatibility issues. You should check the minimum requirements of the game before downloading it.
-
Can I play Raft Survival: Ocean Nomad Multiplayer Mod APK offline?
-
No, Raft Survival: Ocean Nomad Multiplayer Mod APK requires an internet connection to play online with other players. You cannot play it offline or without a network connection.
-
Can I play Raft Survival: Ocean Nomad Multiplayer Mod APK with my friends?
-
Yes, you can play Raft Survival: Ocean Nomad Multiplayer Mod APK with your friends and other players online. You can join or create a server, chat with other players, cooperate or compete with them, and share your resources and items.
-
How can I update Raft Survival: Ocean Nomad Multiplayer Mod APK?
-
You can update Raft Survival: Ocean Nomad Multiplayer Mod APK by downloading the latest version of the mod from the same source where you downloaded it before. You should always update the mod to enjoy the latest features and bug fixes.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Rubik 39s Cube Solver 3x3 Mod Apk.md b/spaces/congsaPfin/Manga-OCR/logs/Rubik 39s Cube Solver 3x3 Mod Apk.md
deleted file mode 100644
index 499c0524d54eccb84f74dd80f3536aa8c59f48af..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Rubik 39s Cube Solver 3x3 Mod Apk.md
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
Rubik's Cube Solver 3x3 Mod Apk: How to Solve the Cube in Seconds
-
The Rubik's cube is one of the most popular and challenging puzzles in the world. It was invented by Hungarian professor Erno Rubik in 1974 and has since sold over 350 million units worldwide. The cube consists of six faces, each with nine stickers of one of six colors: white, yellow, green, blue, orange, and red. The goal is to twist and turn the cube until each face has only one color.
-
However, solving a Rubik's cube is not easy. There are over 43 quintillion possible combinations of the cube, but only one correct solution. It takes a lot of skill, patience, and practice to master the cube. Some people can solve it in less than 10 seconds, while others may take hours or even days.
That's why many people use a Rubik's cube solver 3x3 mod apk to help them solve the cube in seconds. A Rubik's cube solver 3x3 mod apk is a modified application that uses your device's camera to scan your cube and then shows you the steps to solve it. You can follow the instructions on your screen and turn your cube accordingly until it is solved.
-
Using a Rubik's cube solver 3x3 mod apk has many benefits. You can save time and frustration by solving your cube quickly and easily. You can also learn from the app and improve your own skills and techniques. You can have fun and impress your friends and family with your amazing cube solving abilities.
-
How to Download and Install a Rubik's Cube Solver 3x3 Mod Apk
-
If you want to use a Rubik's cube solver 3x3 mod apk, you need to download and install it on your device first. Here are the steps to do so:
-
-
Find a reliable and safe Rubik's cube solver 3x3 mod apk online. There are many websites that offer such apps, but some of them may contain viruses or malware that can harm your device. You should always check the reviews and ratings of the app before downloading it. One of the best websites to find a Rubik's cube solver 3x3 mod apk is [Grubiks], which offers a fast, easy, and free online Rubik’s Cube Solver.
-
Download the Rubik's cube solver 3x3 mod apk file from the website. The file size may vary depending on the app, but it should not take too long to download. You may need to enable the option to install apps from unknown sources on your device settings.
-
Install the Rubik's cube solver 3x3 mod apk file on your device. You may need to grant some permissions to the app, such as access to your camera and storage. Follow the instructions on the screen and wait for the installation to complete.
-
Open the Rubik's cube solver 3x3 mod apk app on your device. You should see a camera view that allows you to scan your cube. Make sure that your cube is well-lit and that all the stickers are visible. You may need to adjust the angle and distance of your device to get a clear scan.
-
Scan your cube by following the app's instructions. You may need to scan each face of your cube separately or scan the whole cube at once, depending on the app. The app will analyze your cube and generate a solution for you.
-
Solve your cube by following the app's instructions. The app will show you the steps to solve your cube, either as text, images, or animations. You can follow the instructions on your screen and turn your cube accordingly until it is solved. You can also pause, resume, or restart the solution at any time.
-
-
Tips and Tricks for Solving a Rubik's Cube Faster and Easier
-
Using a Rubik's cube solver 3x3 mod apk can help you solve your cube in seconds, but it can also help you improve your own skills and techniques. Here are some tips and tricks for solving a Rubik's cube faster and easier:
-
-
Improve your cube recognition and finger speed. Cube recognition is the ability to identify the colors and patterns of your cube quickly and accurately. Finger speed is the ability to turn your cube fast and smoothly. You can improve both by practicing with different cubes, using a timer, and doing drills and exercises.
-
Learn and memorize the algorithms for solving a Rubik's cube. Algorithms are sequences of moves that change the position or orientation of some pieces of your cube without affecting others. There are many algorithms for solving a Rubik's cube, but you only need to know a few basic ones to solve any scramble. You can learn and memorize them by using mnemonics, flashcards, or apps.
-
Practice and challenge yourself with different scrambles and modes. Scrambles are random configurations of your cube that you need to solve. Modes are variations of solving methods that have different rules or goals. You can find fresh ones by using online generators, apps, or books that provide different scrambles and modes for you to try (a tiny scramble generator is sketched after this list).
-
-
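As promised in the last tip, here is a tiny scramble generator sketch in Python. It produces random face turns in standard notation and only avoids turning the same face twice in a row; it is a toy illustration, not an official competition scrambler.
import random

FACES = ["U", "D", "L", "R", "F", "B"]
MODIFIERS = ["", "'", "2"]  # clockwise, counter-clockwise, half turn

def scramble(length=20):
    moves, last_face = [], None
    while len(moves) < length:
        face = random.choice(FACES)
        if face == last_face:
            continue  # avoid trivially cancelling moves on the same face
        moves.append(face + random.choice(MODIFIERS))
        last_face = face
    return " ".join(moves)

print(scramble())  # e.g. R U' F2 D L ...
-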
Conclusion
-
Solving a Rubik's cube is a fun and rewarding activity that can boost your brain power, creativity, and confidence. However, it can also be frustrating and time-consuming if you don't know how to do it. That's why using a Rubik's cube solver 3x3 mod apk can be a great way to help you solve your cube in seconds.
-
A Rubik's cube solver 3x3 mod apk is a modified application that uses your device's camera to scan your cube and then shows you the steps to solve it. You can download and install a Rubik's cube solver 3x3 mod apk from a reliable website, such as [Grubiks], and then use it to scan and solve your cube easily and quickly.
-
You can also use a Rubik's cube solver 3x3 mod apk to learn from the app and improve your own skills and techniques. You can follow some tips and tricks for solving a Rubik's cube faster and easier, such as improving your cube recognition and finger speed, learning and memorizing the algorithms, and practicing and challenging yourself with different scrambles and modes.
-
-
If you want to have fun and impress your friends and family with your amazing cube solving abilities, download a Rubik's cube solver 3x3 mod apk today and see how fast you can solve your cube!
-
FAQs
-
What are some of the best Rubik's cube solver 3x3 mod apks available?
-
Some of the best Rubik's cube solver 3x3 mod apks available are:
-
-
-
[Grubiks]: a fast, easy, and free online Rubik’s Cube Solver. Supports 2x2, 3x3, 4x4, and 5x5 cubes. Shows the solution as text, images, or animations. Lets you customize the cube colors and background.
-
[CubeX]: a powerful and easy-to-use Rubik's Cube Solver. Supports 2x2, 3x3, 4x4, and 5x5 cubes. Shows the solution as text or animations. Lets you adjust the camera settings and the cube size.
-
[Rubik's Cube Solver]: a simple and user-friendly Rubik's Cube Solver. Supports only 3x3 cubes. Shows the solution as text or images. Lets you change the cube colors and the language.
-
-
Is it cheating to use a Rubik's cube solver 3x3 mod apk?
-
It depends on your purpose and perspective. If you use a Rubik's cube solver 3x3 mod apk to solve your cube for fun or learning, then it is not cheating. You can enjoy the satisfaction of solving your cube and also learn from the app how to improve your skills and techniques. However, if you use a Rubik's cube solver 3x3 mod apk to solve your cube for competition or bragging, then it is cheating. You are not showing your true ability and you are not being fair to others who solve their cubes without any help.
-
How can I solve a Rubik's cube without using a mod apk?
-
If you want to solve a Rubik's cube without using a mod apk, you need to learn the basic principles and methods of solving a Rubik's cube. There are many online tutorials, videos, books, and courses that can teach you how to solve a Rubik's cube step by step. You can also join online forums, groups, or clubs where you can interact with other cubers and get tips and advice. The most common method for solving a Rubik's cube is the layer-by-layer method, which involves solving the cube in three stages: the first layer, the second layer, and the last layer.
-
How can I make my own Rubik's cube solver 3x3 mod apk?
-
If you want to make your own Rubik's cube solver 3x3 mod apk, you need some programming skills and knowledge of computer vision and artificial intelligence. You also need a development environment for your target platform, such as Android Studio, Xcode, or Visual Studio, to create and modify the application. You can also use open-source libraries or frameworks that provide functions and features for developing a Rubik's cube solver 3x3 mod apk, such as OpenCV, TensorFlow, or PyTorch.
-
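To give a flavor of the computer-vision part, here is a minimal sketch that classifies the dominant color of one cropped sticker photo using OpenCV's HSV color space. Everything here is an assumption for illustration: "sticker.jpg" is a placeholder, and the hue boundaries are rough uncalibrated values, not anything a real solver app ships with.
import cv2  # assumption: installed via `pip install opencv-python`

img = cv2.imread("sticker.jpg")              # BGR photo of a single sticker
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.mean(hsv)[:3]                  # mean hue / saturation / value

if s < 60 and v > 160:
    color = "white"      # low saturation, bright
elif h < 8 or h > 170:
    color = "red"        # hue wraps around 180 in OpenCV
elif h < 22:
    color = "orange"
elif h < 38:
    color = "yellow"
elif h < 85:
    color = "green"
else:
    color = "blue"

print(color)
-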
How can I share my Rubik's cube solving results with others?
-
If you want to share your Rubik's cube solving results with others, you can use various platforms and methods to do so. You can take screenshots or record videos of your cube solving process and post them on social media platforms such as Facebook, Instagram, Twitter, or YouTube. You can also use online services or apps that allow you to upload and share your cube solving results with other cubers around the world, such as [Cube Timer], [Cubing Time], or [Cube Mania].
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stream Live TV Channels with Oreo TV APK v2.0.4 on Any Device.md b/spaces/congsaPfin/Manga-OCR/logs/Stream Live TV Channels with Oreo TV APK v2.0.4 on Any Device.md
deleted file mode 100644
index 9bc40078469a6be461985b2dd3f855b7342e0c2d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Stream Live TV Channels with Oreo TV APK v2.0.4 on Any Device.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Oreo TV 2.0 8 APK Download: How to Watch Free Live TV on Your Android Device
-
Do you want to watch free live TV on your Android device without paying for any subscription? If yes, then you should try Oreo TV APK, a popular streaming app that offers thousands of live TV channels, movies, shows, sports, and more for free. In this article, we will tell you what Oreo TV APK is, what features it has, and how to download and install it on your Android device.
Oreo TV APK is a streaming app that allows you to watch live TV channels from various countries and genres on your Android device. You can also watch movies, shows, web series, sports, news, and more on this app. Oreo TV APK is not available on the Google Play Store or any other official app store, so you have to download it from a third-party source. However, it is a safe and legal app that does not require any registration or subscription.
-
Features of Oreo TV APK
-
Oreo TV APK has many features that make it one of the best streaming apps for Android users. Here are some of them:
-
Live TV channels
-
Oreo TV APK offers more than 6000 live TV channels from different countries and regions, including India, USA, UK, Canada, Australia, Pakistan, Bangladesh, Nepal, Sri Lanka, and more. You can watch channels from various categories, such as entertainment, news, sports, movies, music, kids, religion, education, and more. You can also watch live IPL matches, cricket, football, basketball, tennis, and other sports events on this app.
-
Categorized catalog
-
Oreo TV APK has a well-organized catalog that makes it easy for you to find your favorite content. You can browse through different sections, such as movies, shows, web series, sports, news, etc. You can also search for any content by name or keyword. You can also filter the content by genre, language, country, quality, etc.
-
HD resolution
-
Oreo TV APK lets you watch all the content in high-definition quality. You can also adjust the video quality according to your internet speed and data usage, choosing from options between 360p and full HD 1080p.
-
Free movies and TV shows
-
Oreo TV APK also provides you with a huge collection of movies and TV shows from various platforms and sources. You can watch the latest movies and shows from Netflix, Amazon Prime Video, Disney+, Hotstar, Zee5, SonyLiv, Voot, AltBalaji, MX Player, and more for free. You can also download any movie or show for offline viewing.
-
User-friendly interface
-
Oreo TV APK has a simple and easy-to-use interface that makes it user-friendly for everyone. You can navigate through the app with ease and access all the features without any hassle. You can also customize the app according to your preferences. You can change the theme, font size, playback speed, etc.
-
How to download and install Oreo TV APK on your Android device?
-
If you want to download and install Oreo TV APK on your Android device, you have to follow these simple steps:
-
-
Step 1: Enable unknown sources
-
Since Oreo TV APK is not available on the Google Play Store, you have to enable the option of unknown sources on your device. This will allow you to install apps from third-party sources. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Step 2: Download the Oreo TV APK file
-
Next, you have to download the Oreo TV APK file from a reliable source. You can use this link to download the latest version of Oreo TV APK (2.0.8) on your device. The file size is about 10 MB and it is compatible with Android 4.4 and above.
-
Step 3: Install the Oreo TV APK file
-
Once you have downloaded the Oreo TV APK file, you have to install it on your device. To do this, locate the file in your file manager and tap on it. You will see a prompt asking for your permission to install the app. Tap on Install and wait for the installation process to complete.
-
Step 4: Launch the Oreo TV app and enjoy free live TV
-
After the installation is done, you can launch the Oreo TV app from your app drawer or home screen. You will see the main interface of the app with different sections and categories. You can choose any content that you want to watch and enjoy free live TV on your Android device.
-
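If you prefer to sideload from a computer instead of browsing on the phone, the same install can be scripted over adb (Android Debug Bridge). This sketch assumes the Android platform-tools are on your PATH and USB debugging is enabled on the device; the filename is a placeholder.
import subprocess

APK = "oreo-tv.apk"  # placeholder filename

subprocess.run(["adb", "devices"], check=True)             # confirm a device is attached
subprocess.run(["adb", "install", "-r", APK], check=True)  # -r replaces an existing install
-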
How to download and install Oreo TV APK on your Firestick or Android TV box?
-
If you want to download and install Oreo TV APK on your Firestick or Android TV box, you have to follow these steps:
-
Step 1: Enable apps from unknown sources
-
Similar to Android devices, you have to enable the option of apps from unknown sources on your Firestick or Android TV box. This will allow you to install apps from third-party sources. To do this, go to Settings > My Fire TV > Developer Options > Apps from Unknown Sources and turn it on.
-
Step 2: Install the Downloader app
-
Next, you have to install the Downloader app on your Firestick or Android TV box. This is a free app that lets you download and install any APK file on your device. To do this, go to the Search option on your home screen and type Downloader. You will see the app icon in the results. Click on it and then click on Download or Get to install the app.
-
Step 3: Download the Oreo TV APK file using the Downloader app
-
Once you have installed the Downloader app, open it and enter this URL in the address bar. This will take you to the download page of Oreo TV APK (2.0.8). Click on Download Now and wait for the download process to finish.
-
Step 4: Install the Oreo TV APK file using the Downloader app
-
After the download is complete, you will see a prompt asking for your permission to install the app. Click on Install and wait for the installation process to complete.
-
Step 5: Launch the Oreo TV app and enjoy free live TV
-
After the installation is done, you can launch the Oreo TV app from your apps list or home screen. You will see the same interface as on Android devices with different sections and categories. You can choose any content that you want to watch and enjoy free live TV on your Firestick or Android TV box.
-
Conclusion
-
Oreo TV APK is a great streaming app that lets you watch free live TV channels, movies, shows, sports, and more on your Android device, Firestick, or Android TV box. It has many features that make it one of the best streaming apps for cord-cutters. You can download and install it easily by following our guide above. However, we recommend that you use a VPN service while using this app to protect your privacy and security online.
-
FAQs
-
Here are some frequently asked questions about Oreo TV APK:
-
-
Is Oreo TV APK safe and legal?
-
Oreo TV APK is safe and legal as long as you use it for personal and non-commercial purposes. However, some of the content on this app may be copyrighted or geo-restricted, so you should use a VPN service to avoid any legal issues.
-
Does Oreo TV APK require any registration or subscription?
-
No, Oreo TV APK does not require any registration or subscription. You can use it for free without creating an account or providing any personal information.
-
Does Oreo TV APK support Chromecast or other casting devices?
-
Yes, Oreo TV APK supports Chromecast and other casting devices. You can cast the content from your Android device to your TV screen using a compatible device.
-
Does Oreo TV APK have any ads or malware?
-
No, Oreo TV APK does not have any ads or malware. It is a clean and safe app that does not interfere with your user experience or device performance.
-
How can I update Oreo TV APK to the latest version?
-
You can update Oreo TV APK to the latest version by following the same steps as downloading and installing it. You can also check for updates within the app settings and download them from there.
-
-
-
\ No newline at end of file
diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/features.py b/spaces/cooelf/Multimodal-CoT/timm/models/features.py
deleted file mode 100644
index b1d6890f3ed07311c5484b4a397c3b1da555880a..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/models/features.py
+++ /dev/null
@@ -1,284 +0,0 @@
-""" PyTorch Feature Extraction Helpers
-
-A collection of classes, functions, modules to help extract features from models
-and provide a common interface for describing them.
-
-The return_layers, module re-writing idea inspired by torchvision IntermediateLayerGetter
-https://github.com/pytorch/vision/blob/d88d8961ae51507d0cb680329d985b1488b1b76b/torchvision/models/_utils.py
-
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-from collections import OrderedDict, defaultdict
-from copy import deepcopy
-from functools import partial
-from typing import Dict, List, Tuple
-
-import torch
-import torch.nn as nn
-
-
-class FeatureInfo:
-
- def __init__(self, feature_info: List[Dict], out_indices: Tuple[int]):
- prev_reduction = 1
- for fi in feature_info:
- # sanity check the mandatory fields, there may be additional fields depending on the model
- assert 'num_chs' in fi and fi['num_chs'] > 0
- assert 'reduction' in fi and fi['reduction'] >= prev_reduction
- prev_reduction = fi['reduction']
- assert 'module' in fi
- self.out_indices = out_indices
- self.info = feature_info
-
- def from_other(self, out_indices: Tuple[int]):
- return FeatureInfo(deepcopy(self.info), out_indices)
-
- def get(self, key, idx=None):
- """ Get value by key at specified index (indices)
- if idx == None, returns value for key at each output index
- if idx is an integer, return value for that feature module index (ignoring output indices)
- if idx is a list/tuple, return value for each module index (ignoring output indices)
- """
- if idx is None:
- return [self.info[i][key] for i in self.out_indices]
- if isinstance(idx, (tuple, list)):
- return [self.info[i][key] for i in idx]
- else:
- return self.info[idx][key]
-
- def get_dicts(self, keys=None, idx=None):
- """ return info dicts for specified keys (or all if None) at specified indices (or out_indices if None)
- """
- if idx is None:
- if keys is None:
- return [self.info[i] for i in self.out_indices]
- else:
- return [{k: self.info[i][k] for k in keys} for i in self.out_indices]
- if isinstance(idx, (tuple, list)):
- return [self.info[i] if keys is None else {k: self.info[i][k] for k in keys} for i in idx]
- else:
- return self.info[idx] if keys is None else {k: self.info[idx][k] for k in keys}
-
- def channels(self, idx=None):
- """ feature channels accessor
- """
- return self.get('num_chs', idx)
-
- def reduction(self, idx=None):
- """ feature reduction (output stride) accessor
- """
- return self.get('reduction', idx)
-
- def module_name(self, idx=None):
- """ feature module name accessor
- """
- return self.get('module', idx)
-
- def __getitem__(self, item):
- return self.info[item]
-
- def __len__(self):
- return len(self.info)
-
-
-class FeatureHooks:
- """ Feature Hook Helper
-
- This module helps with the setup and extraction of hooks for extracting features from
- internal nodes in a model by node name. This works quite well in eager Python but needs
- redesign for torchscript.
- """
-
- def __init__(self, hooks, named_modules, out_map=None, default_hook_type='forward'):
- # setup feature hooks
- modules = {k: v for k, v in named_modules}
- for i, h in enumerate(hooks):
- hook_name = h['module']
- m = modules[hook_name]
- hook_id = out_map[i] if out_map else hook_name
- hook_fn = partial(self._collect_output_hook, hook_id)
- hook_type = h['hook_type'] if 'hook_type' in h else default_hook_type
- if hook_type == 'forward_pre':
- m.register_forward_pre_hook(hook_fn)
- elif hook_type == 'forward':
- m.register_forward_hook(hook_fn)
- else:
- assert False, "Unsupported hook type"
- self._feature_outputs = defaultdict(OrderedDict)
-
- def _collect_output_hook(self, hook_id, *args):
- x = args[-1] # tensor we want is last argument, output for fwd, input for fwd_pre
- if isinstance(x, tuple):
- x = x[0] # unwrap input tuple
- self._feature_outputs[x.device][hook_id] = x
-
- def get_output(self, device) -> Dict[str, torch.Tensor]:
- output = self._feature_outputs[device]
- self._feature_outputs[device] = OrderedDict() # clear after reading
- return output
-
-
-def _module_list(module, flatten_sequential=False):
- # a yield/iter would be better for this but wouldn't be compatible with torchscript
- ml = []
- for name, module in module.named_children():
- if flatten_sequential and isinstance(module, nn.Sequential):
- # first level of Sequential containers is flattened into containing model
- for child_name, child_module in module.named_children():
- combined = [name, child_name]
- ml.append(('_'.join(combined), '.'.join(combined), child_module))
- else:
- ml.append((name, name, module))
- return ml
-
-
-def _get_feature_info(net, out_indices):
- feature_info = getattr(net, 'feature_info')
- if isinstance(feature_info, FeatureInfo):
- return feature_info.from_other(out_indices)
- elif isinstance(feature_info, (list, tuple)):
- return FeatureInfo(net.feature_info, out_indices)
- else:
- assert False, "Provided feature_info is not valid"
-
-
-def _get_return_layers(feature_info, out_map):
- module_names = feature_info.module_name()
- return_layers = {}
- for i, name in enumerate(module_names):
- return_layers[name] = out_map[i] if out_map is not None else feature_info.out_indices[i]
- return return_layers
-
-
-class FeatureDictNet(nn.ModuleDict):
- """ Feature extractor with OrderedDict return
-
- Wrap a model and extract features as specified by the out indices, the network is
- partially re-built from contained modules.
-
- There is a strong assumption that the modules have been registered into the model in the same
- order as they are used. There should be no reuse of the same nn.Module more than once, including
- trivial modules like `self.relu = nn.ReLU`.
-
- Only submodules that are directly assigned to the model class (`model.feature1`) or at most
- one Sequential container deep (`model.features.1`, with flatten_sequential=True) can be captured.
- All Sequential containers that are directly assigned to the original model will have their
- modules assigned to this module with the name `model.features.1` being changed to `model.features_1`
-
- Arguments:
- model (nn.Module): model from which we will extract the features
- out_indices (tuple[int]): model output indices to extract features for
- out_map (sequence): list or tuple specifying desired return id for each out index,
- otherwise str(index) is used
- feature_concat (bool): whether to concatenate intermediate features that are lists or tuples
- vs select element [0]
- flatten_sequential (bool): whether to flatten sequential modules assigned to model
- """
- def __init__(
- self, model,
- out_indices=(0, 1, 2, 3, 4), out_map=None, feature_concat=False, flatten_sequential=False):
- super(FeatureDictNet, self).__init__()
- self.feature_info = _get_feature_info(model, out_indices)
- self.concat = feature_concat
- self.return_layers = {}
- return_layers = _get_return_layers(self.feature_info, out_map)
- modules = _module_list(model, flatten_sequential=flatten_sequential)
- remaining = set(return_layers.keys())
- layers = OrderedDict()
- for new_name, old_name, module in modules:
- layers[new_name] = module
- if old_name in remaining:
- # return id has to be consistently str type for torchscript
- self.return_layers[new_name] = str(return_layers[old_name])
- remaining.remove(old_name)
- if not remaining:
- break
- assert not remaining and len(self.return_layers) == len(return_layers), \
- f'Return layers ({remaining}) are not present in model'
- self.update(layers)
-
- def _collect(self, x) -> (Dict[str, torch.Tensor]):
- out = OrderedDict()
- for name, module in self.items():
- x = module(x)
- if name in self.return_layers:
- out_id = self.return_layers[name]
- if isinstance(x, (tuple, list)):
- # If model tap is a tuple or list, concat or select first element
- # FIXME this may need to be more generic / flexible for some nets
- out[out_id] = torch.cat(x, 1) if self.concat else x[0]
- else:
- out[out_id] = x
- return out
-
- def forward(self, x) -> Dict[str, torch.Tensor]:
- return self._collect(x)
-
-
-class FeatureListNet(FeatureDictNet):
- """ Feature extractor with list return
-
- See docstring for FeatureDictNet above, this class exists only to appease Torchscript typing constraints.
- In eager Python we could have returned List[Tensor] vs Dict[id, Tensor] based on a member bool.
- """
- def __init__(
- self, model,
- out_indices=(0, 1, 2, 3, 4), out_map=None, feature_concat=False, flatten_sequential=False):
- super(FeatureListNet, self).__init__(
- model, out_indices=out_indices, out_map=out_map, feature_concat=feature_concat,
- flatten_sequential=flatten_sequential)
-
- def forward(self, x) -> (List[torch.Tensor]):
- return list(self._collect(x).values())
-
-
-class FeatureHookNet(nn.ModuleDict):
- """ FeatureHookNet
-
- Wrap a model and extract features specified by the out indices using forward/forward-pre hooks.
-
- If `no_rewrite` is True, features are extracted via hooks without modifying the underlying
- network in any way.
-
- If `no_rewrite` is False, the model will be re-written as in the
- FeatureList/FeatureDict case by folding first to second (Sequential only) level modules into this one.
-
- FIXME this does not currently work with Torchscript, see FeatureHooks class
- """
- def __init__(
- self, model,
- out_indices=(0, 1, 2, 3, 4), out_map=None, out_as_dict=False, no_rewrite=False,
- feature_concat=False, flatten_sequential=False, default_hook_type='forward'):
- super(FeatureHookNet, self).__init__()
- assert not torch.jit.is_scripting()
- self.feature_info = _get_feature_info(model, out_indices)
- self.out_as_dict = out_as_dict
- layers = OrderedDict()
- hooks = []
- if no_rewrite:
- assert not flatten_sequential
- if hasattr(model, 'reset_classifier'): # make sure classifier is removed?
- model.reset_classifier(0)
- layers['body'] = model
- hooks.extend(self.feature_info.get_dicts())
- else:
- modules = _module_list(model, flatten_sequential=flatten_sequential)
- remaining = {f['module']: f['hook_type'] if 'hook_type' in f else default_hook_type
- for f in self.feature_info.get_dicts()}
- for new_name, old_name, module in modules:
- layers[new_name] = module
- for fn, fm in module.named_modules(prefix=old_name):
- if fn in remaining:
- hooks.append(dict(module=fn, hook_type=remaining[fn]))
- del remaining[fn]
- if not remaining:
- break
- assert not remaining, f'Return layers ({remaining}) are not present in model'
- self.update(layers)
- self.hooks = FeatureHooks(hooks, model.named_modules(), out_map=out_map)
-
- def forward(self, x):
- for name, module in self.items():
- x = module(x)
- out = self.hooks.get_output(x.device)
- return out if self.out_as_dict else list(out.values())
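For context, these wrappers are normally reached through timm's public entry point rather than instantiated directly. A minimal usage sketch against the upstream timm package (this vendored copy lives under a different import path, so the sketch assumes `pip install timm`):

import timm
import torch

# features_only=True wraps the backbone in FeatureListNet, so forward()
# returns a list of intermediate feature maps instead of logits.
model = timm.create_model("resnet18", pretrained=False, features_only=True,
                          out_indices=(1, 2, 3))
print(model.feature_info.channels())   # channel count at each selected stage
print(model.feature_info.reduction())  # output stride at each selected stage

feats = model(torch.randn(1, 3, 224, 224))
for f in feats:
    print(f.shape)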
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/build.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/build.py
deleted file mode 100644
index 98a08f06ac58b21f3a738132b8c3ac62f51fa538..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/build.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import torch
-
-from annotator.oneformer.detectron2.utils.logger import _log_api_usage
-from annotator.oneformer.detectron2.utils.registry import Registry
-
-META_ARCH_REGISTRY = Registry("META_ARCH") # noqa F401 isort:skip
-META_ARCH_REGISTRY.__doc__ = """
-Registry for meta-architectures, i.e. the whole model.
-
-The registered object will be called with `obj(cfg)`
-and expected to return a `nn.Module` object.
-"""
-
-
-def build_model(cfg):
- """
- Build the whole model architecture, defined by ``cfg.MODEL.META_ARCHITECTURE``.
- Note that it does not load any weights from ``cfg``.
- """
- meta_arch = cfg.MODEL.META_ARCHITECTURE
- model = META_ARCH_REGISTRY.get(meta_arch)(cfg)
- model.to(torch.device(cfg.MODEL.DEVICE))
- _log_api_usage("modeling.meta_arch." + meta_arch)
- return model
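
The registry pattern used by build_model() is simple enough to sketch in full; the toy Registry below is illustrative, not detectron2's actual implementation:

    import torch.nn as nn

    class Registry:
        # minimal name -> class mapping with a decorator-based register()
        def __init__(self, name):
            self._name, self._map = name, {}

        def register(self):
            def deco(cls):
                self._map[cls.__name__] = cls
                return cls
            return deco

        def get(self, name):
            return self._map[name]

    META_ARCH = Registry("META_ARCH")

    @META_ARCH.register()
    class TinyNet(nn.Module):
        def __init__(self, cfg):
            super().__init__()
            self.fc = nn.Linear(cfg["in_dim"], cfg["out_dim"])

    # build_model() above boils down to this lookup-and-call:
    model = META_ARCH.get("TinyNet")({"in_dim": 8, "out_dim": 2})
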
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/base_module.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/base_module.py
deleted file mode 100644
index 617fad9bb89f10a9a0911d962dfb3bc8f3a3628c..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/base_module.py
+++ /dev/null
@@ -1,195 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import warnings
-from abc import ABCMeta
-from collections import defaultdict
-from logging import FileHandler
-
-import torch.nn as nn
-
-from annotator.uniformer.mmcv.runner.dist_utils import master_only
-from annotator.uniformer.mmcv.utils.logging import get_logger, logger_initialized, print_log
-
-
-class BaseModule(nn.Module, metaclass=ABCMeta):
- """Base module for all modules in openmmlab.
-
- ``BaseModule`` is a wrapper of ``torch.nn.Module`` with additional
- functionality of parameter initialization. Compared with
- ``torch.nn.Module``, ``BaseModule`` mainly adds three attributes.
-
- - ``init_cfg``: the config to control the initialization.
- - ``init_weights``: The function of parameter
- initialization and recording initialization
- information.
- - ``_params_init_info``: Used to track the parameter
- initialization information. This attribute only
- exists during executing the ``init_weights``.
-
- Args:
- init_cfg (dict, optional): Initialization config dict.
- """
-
- def __init__(self, init_cfg=None):
- """Initialize BaseModule, inherited from `torch.nn.Module`"""
-
- # NOTE init_cfg can be defined in different levels, but init_cfg
- # in low levels has a higher priority.
-
- super(BaseModule, self).__init__()
- # define default value of init_cfg instead of hard code
- # in init_weights() function
- self._is_init = False
-
- self.init_cfg = copy.deepcopy(init_cfg)
-
- # Backward compatibility in derived classes
- # if pretrained is not None:
- # warnings.warn('DeprecationWarning: pretrained is a deprecated \
- # key, please consider using init_cfg')
- # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained)
-
- @property
- def is_init(self):
- return self._is_init
-
- def init_weights(self):
- """Initialize the weights."""
-
- is_top_level_module = False
- # check if it is top-level module
- if not hasattr(self, '_params_init_info'):
- # The `_params_init_info` is used to record the initialization
- # information of the parameters
- # the key should be the obj:`nn.Parameter` of model and the value
- # should be a dict containing
- # - init_info (str): The string that describes the initialization.
- # - tmp_mean_value (FloatTensor): The mean of the parameter,
- # which indicates whether the parameter has been modified.
- # this attribute would be deleted after all parameters
- # is initialized.
- self._params_init_info = defaultdict(dict)
- is_top_level_module = True
-
- # Initialize the `_params_init_info`,
- # When detecting the `tmp_mean_value` of
- # the corresponding parameter is changed, update related
- # initialization information
- for name, param in self.named_parameters():
- self._params_init_info[param][
- 'init_info'] = f'The value is the same before and ' \
- f'after calling `init_weights` ' \
- f'of {self.__class__.__name__} '
- self._params_init_info[param][
- 'tmp_mean_value'] = param.data.mean()
-
- # pass `params_init_info` to all submodules
- # All submodules share the same `params_init_info`,
- # so it will be updated when parameters are
- # modified at any level of the model.
- for sub_module in self.modules():
- sub_module._params_init_info = self._params_init_info
-
- # Get the initialized logger, if not exist,
- # create a logger named `mmcv`
- logger_names = list(logger_initialized.keys())
- logger_name = logger_names[0] if logger_names else 'mmcv'
-
- from ..cnn import initialize
- from ..cnn.utils.weight_init import update_init_info
- module_name = self.__class__.__name__
- if not self._is_init:
- if self.init_cfg:
- print_log(
- f'initialize {module_name} with init_cfg {self.init_cfg}',
- logger=logger_name)
- initialize(self, self.init_cfg)
- if isinstance(self.init_cfg, dict):
- # prevent the parameters of
- # the pre-trained model
- # from being overwritten by
- # the `init_weights`
- if self.init_cfg['type'] == 'Pretrained':
- return
-
- for m in self.children():
- if hasattr(m, 'init_weights'):
- m.init_weights()
- # users may overload the `init_weights`
- update_init_info(
- m,
- init_info=f'Initialized by '
- f'user-defined `init_weights`'
- f' in {m.__class__.__name__} ')
-
- self._is_init = True
- else:
- warnings.warn(f'init_weights of {self.__class__.__name__} has '
- f'been called more than once.')
-
- if is_top_level_module:
- self._dump_init_info(logger_name)
-
- for sub_module in self.modules():
- del sub_module._params_init_info
-
- @master_only
- def _dump_init_info(self, logger_name):
- """Dump the initialization information to a file named
- `initialization.log.json` in workdir.
-
- Args:
- logger_name (str): The name of logger.
- """
-
- logger = get_logger(logger_name)
-
- with_file_handler = False
- # dump the information to the logger file if there is a `FileHandler`
- for handler in logger.handlers:
- if isinstance(handler, FileHandler):
- handler.stream.write(
- 'Name of parameter - Initialization information\n')
- for name, param in self.named_parameters():
- handler.stream.write(
- f'\n{name} - {param.shape}: '
- f"\n{self._params_init_info[param]['init_info']} \n")
- handler.stream.flush()
- with_file_handler = True
- if not with_file_handler:
- for name, param in self.named_parameters():
- print_log(
- f'\n{name} - {param.shape}: '
- f"\n{self._params_init_info[param]['init_info']} \n ",
- logger=logger_name)
-
- def __repr__(self):
- s = super().__repr__()
- if self.init_cfg:
- s += f'\ninit_cfg={self.init_cfg}'
- return s
-
-
-class Sequential(BaseModule, nn.Sequential):
- """Sequential module in openmmlab.
-
- Args:
- init_cfg (dict, optional): Initialization config dict.
- """
-
- def __init__(self, *args, init_cfg=None):
- BaseModule.__init__(self, init_cfg)
- nn.Sequential.__init__(self, *args)
-
-
-class ModuleList(BaseModule, nn.ModuleList):
- """ModuleList in openmmlab.
-
- Args:
- modules (iterable, optional): an iterable of modules to add.
- init_cfg (dict, optional): Initialization config dict.
- """
-
- def __init__(self, modules=None, init_cfg=None):
- BaseModule.__init__(self, init_cfg)
- nn.ModuleList.__init__(self, modules)
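
A sketch of how init_cfg is typically consumed by a BaseModule subclass, assuming a pre-2.0 mmcv is installed (the vendored copy above mirrors it); the config values are illustrative:

    import torch.nn as nn
    from mmcv.runner import BaseModule

    class ConvBlock(BaseModule):
        def __init__(self, init_cfg=None):
            super().__init__(init_cfg=init_cfg)
            self.conv = nn.Conv2d(3, 16, 3, padding=1)

    # init_cfg drives init_weights(): here, Kaiming init for every Conv2d
    block = ConvBlock(init_cfg=dict(type='Kaiming', layer='Conv2d'))
    block.init_weights()   # applies the config and records init info per parameter
    print(block.is_init)   # True
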
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/utils/geometry.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/utils/geometry.py
deleted file mode 100644
index e3da8c75b5a8e39b4b58a4dcd827b84d79b9115c..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/utils/geometry.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# MIT License
-
-# Copyright (c) 2022 Intelligent Systems Lab Org
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-# File author: Shariq Farooq Bhat
-
-import numpy as np
-
-def get_intrinsics(H,W):
- """
- Intrinsics for a pinhole camera model.
- Assume fov of 55 degrees and central principal point.
- """
- f = 0.5 * W / np.tan(0.5 * 55 * np.pi / 180.0)
- cx = 0.5 * W
- cy = 0.5 * H
- return np.array([[f, 0, cx],
- [0, f, cy],
- [0, 0, 1]])
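
As a quick sanity check of the focal-length formula above, f = W / (2 * tan(fov / 2)) with fov = 55 degrees:

    import numpy as np

    W = 512
    f = 0.5 * W / np.tan(0.5 * 55 * np.pi / 180.0)
    print(round(f, 1))  # ~491.8 pixels for a 512-pixel-wide image
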
-
-def depth_to_points(depth, R=None, t=None):
-
- K = get_intrinsics(depth.shape[1], depth.shape[2])
- Kinv = np.linalg.inv(K)
- if R is None:
- R = np.eye(3)
- if t is None:
- t = np.zeros(3)
-
- # M converts from your coordinate to PyTorch3D's coordinate system
- M = np.eye(3)
- M[0, 0] = -1.0
- M[1, 1] = -1.0
-
- height, width = depth.shape[1:3]
-
- x = np.arange(width)
- y = np.arange(height)
- coord = np.stack(np.meshgrid(x, y), -1)
- coord = np.concatenate((coord, np.ones_like(coord)[:, :, [0]]), -1) # z=1
- coord = coord.astype(np.float32)
- # coord = torch.as_tensor(coord, dtype=torch.float32, device=device)
- coord = coord[None] # bs, h, w, 3
-
- D = depth[:, :, :, None, None]
- # print(D.shape, Kinv[None, None, None, ...].shape, coord[:, :, :, :, None].shape )
- pts3D_1 = D * Kinv[None, None, None, ...] @ coord[:, :, :, :, None]
- # pts3D_1 live in your coordinate system. Convert them to Py3D's
- pts3D_1 = M[None, None, None, ...] @ pts3D_1
-    # from reference to target viewpoint
- pts3D_2 = R[None, None, None, ...] @ pts3D_1 + t[None, None, None, :, None]
- # pts3D_2 = pts3D_1
- # depth_2 = pts3D_2[:, :, :, 2, :] # b,1,h,w
- return pts3D_2[:, :, :, :3, 0][0]
-
-
-def create_triangles(h, w, mask=None):
- """
- Reference: https://github.com/google-research/google-research/blob/e96197de06613f1b027d20328e06d69829fa5a89/infinite_nature/render_utils.py#L68
- Creates mesh triangle indices from a given pixel grid size.
- This function is not and need not be differentiable as triangle indices are
- fixed.
- Args:
- h: (int) denoting the height of the image.
-      w: (int) denoting the width of the image.
-      mask: (bool array of shape (h, w), optional) keep-mask; triangles that
-        touch a masked-out pixel are dropped.
-    Returns:
- triangles: 2D numpy array of indices (int) with shape (2(W-1)(H-1) x 3)
- """
- x, y = np.meshgrid(range(w - 1), range(h - 1))
- tl = y * w + x
- tr = y * w + x + 1
- bl = (y + 1) * w + x
- br = (y + 1) * w + x + 1
- triangles = np.array([tl, bl, tr, br, tr, bl])
- triangles = np.transpose(triangles, (1, 2, 0)).reshape(
- ((w - 1) * (h - 1) * 2, 3))
- if mask is not None:
- mask = mask.reshape(-1)
- triangles = triangles[mask[triangles].all(1)]
- return triangles
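
Putting the two helpers together: a sketch that turns a batched depth map into a point cloud plus a triangle list, assuming get_intrinsics, depth_to_points and create_triangles above are in scope and using random values in place of a real depth prediction:

    import numpy as np

    depth = np.random.uniform(1.0, 5.0, size=(1, 48, 64)).astype(np.float32)

    points = depth_to_points(depth)   # (48, 64, 3) back-projected points
    tris = create_triangles(48, 64)   # (2*(64-1)*(48-1), 3) vertex indices

    verts = points.reshape(-1, 3)     # flatten the pixel grid into a vertex list
    print(verts.shape, tris.shape, tris.max())  # tris index into verts
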
diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/train.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/train.py
deleted file mode 100644
index a01c0dfccdb8b02283100ec5b792c33afaf22f5e..0000000000000000000000000000000000000000
--- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/train.py
+++ /dev/null
@@ -1,225 +0,0 @@
-import argparse
-import datetime
-import logging
-import math
-import copy
-import random
-import time
-import torch
-from os import path as osp
-
-from basicsr.data import build_dataloader, build_dataset
-from basicsr.data.data_sampler import EnlargedSampler
-from basicsr.data.prefetch_dataloader import CPUPrefetcher, CUDAPrefetcher
-from basicsr.models import build_model
-from basicsr.utils import (MessageLogger, check_resume, get_env_info, get_root_logger, init_tb_logger,
- init_wandb_logger, make_exp_dirs, mkdir_and_rename, set_random_seed)
-from basicsr.utils.dist_util import get_dist_info, init_dist
-from basicsr.utils.options import dict2str, parse
-
-import warnings
-# ignore UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`.
-warnings.filterwarnings("ignore", category=UserWarning)
-
-def parse_options(root_path, is_train=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-opt', type=str, required=True, help='Path to option YAML file.')
- parser.add_argument('--launcher', choices=['none', 'pytorch', 'slurm'], default='none', help='job launcher')
- parser.add_argument('--local_rank', type=int, default=0)
- args = parser.parse_args()
- opt = parse(args.opt, root_path, is_train=is_train)
-
- # distributed settings
- if args.launcher == 'none':
- opt['dist'] = False
- print('Disable distributed.', flush=True)
- else:
- opt['dist'] = True
- if args.launcher == 'slurm' and 'dist_params' in opt:
- init_dist(args.launcher, **opt['dist_params'])
- else:
- init_dist(args.launcher)
-
- opt['rank'], opt['world_size'] = get_dist_info()
-
- # random seed
- seed = opt.get('manual_seed')
- if seed is None:
- seed = random.randint(1, 10000)
- opt['manual_seed'] = seed
- set_random_seed(seed + opt['rank'])
-
- return opt
-
-
-def init_loggers(opt):
- log_file = osp.join(opt['path']['log'], f"train_{opt['name']}.log")
- logger = get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=log_file)
- logger.info(get_env_info())
- logger.info(dict2str(opt))
-
- # initialize wandb logger before tensorboard logger to allow proper sync:
- if (opt['logger'].get('wandb') is not None) and (opt['logger']['wandb'].get('project') is not None):
- assert opt['logger'].get('use_tb_logger') is True, ('should turn on tensorboard when using wandb')
- init_wandb_logger(opt)
- tb_logger = None
- if opt['logger'].get('use_tb_logger'):
- tb_logger = init_tb_logger(log_dir=osp.join('tb_logger', opt['name']))
- return logger, tb_logger
-
-
-def create_train_val_dataloader(opt, logger):
- # create train and val dataloaders
- train_loader, val_loader = None, None
- for phase, dataset_opt in opt['datasets'].items():
- if phase == 'train':
- dataset_enlarge_ratio = dataset_opt.get('dataset_enlarge_ratio', 1)
- train_set = build_dataset(dataset_opt)
- train_sampler = EnlargedSampler(train_set, opt['world_size'], opt['rank'], dataset_enlarge_ratio)
- train_loader = build_dataloader(
- train_set,
- dataset_opt,
- num_gpu=opt['num_gpu'],
- dist=opt['dist'],
- sampler=train_sampler,
- seed=opt['manual_seed'])
-
- num_iter_per_epoch = math.ceil(
- len(train_set) * dataset_enlarge_ratio / (dataset_opt['batch_size_per_gpu'] * opt['world_size']))
- total_iters = int(opt['train']['total_iter'])
- total_epochs = math.ceil(total_iters / (num_iter_per_epoch))
- logger.info('Training statistics:'
- f'\n\tNumber of train images: {len(train_set)}'
- f'\n\tDataset enlarge ratio: {dataset_enlarge_ratio}'
- f'\n\tBatch size per gpu: {dataset_opt["batch_size_per_gpu"]}'
- f'\n\tWorld size (gpu number): {opt["world_size"]}'
- f'\n\tRequire iter number per epoch: {num_iter_per_epoch}'
- f'\n\tTotal epochs: {total_epochs}; iters: {total_iters}.')
-
- elif phase == 'val':
- val_set = build_dataset(dataset_opt)
- val_loader = build_dataloader(
- val_set, dataset_opt, num_gpu=opt['num_gpu'], dist=opt['dist'], sampler=None, seed=opt['manual_seed'])
- logger.info(f'Number of val images/folders in {dataset_opt["name"]}: ' f'{len(val_set)}')
- else:
- raise ValueError(f'Dataset phase {phase} is not recognized.')
-
- return train_loader, train_sampler, val_loader, total_epochs, total_iters
-
-
-def train_pipeline(root_path):
-    # parse options, set distributed setting, set random seed
- opt = parse_options(root_path, is_train=True)
-
- torch.backends.cudnn.benchmark = True
- # torch.backends.cudnn.deterministic = True
-
- # load resume states if necessary
- if opt['path'].get('resume_state'):
- device_id = torch.cuda.current_device()
- resume_state = torch.load(
- opt['path']['resume_state'], map_location=lambda storage, loc: storage.cuda(device_id))
- else:
- resume_state = None
-
- # mkdir for experiments and logger
- if resume_state is None:
- make_exp_dirs(opt)
- if opt['logger'].get('use_tb_logger') and opt['rank'] == 0:
- mkdir_and_rename(osp.join('tb_logger', opt['name']))
-
- # initialize loggers
- logger, tb_logger = init_loggers(opt)
-
- # create train and validation dataloaders
- result = create_train_val_dataloader(opt, logger)
- train_loader, train_sampler, val_loader, total_epochs, total_iters = result
-
- # create model
- if resume_state: # resume training
- check_resume(opt, resume_state['iter'])
- model = build_model(opt)
- model.resume_training(resume_state) # handle optimizers and schedulers
- logger.info(f"Resuming training from epoch: {resume_state['epoch']}, " f"iter: {resume_state['iter']}.")
- start_epoch = resume_state['epoch']
- current_iter = resume_state['iter']
- else:
- model = build_model(opt)
- start_epoch = 0
- current_iter = 0
-
- # create message logger (formatted outputs)
- msg_logger = MessageLogger(opt, current_iter, tb_logger)
-
- # dataloader prefetcher
- prefetch_mode = opt['datasets']['train'].get('prefetch_mode')
- if prefetch_mode is None or prefetch_mode == 'cpu':
- prefetcher = CPUPrefetcher(train_loader)
- elif prefetch_mode == 'cuda':
- prefetcher = CUDAPrefetcher(train_loader, opt)
- logger.info(f'Use {prefetch_mode} prefetch dataloader')
- if opt['datasets']['train'].get('pin_memory') is not True:
- raise ValueError('Please set pin_memory=True for CUDAPrefetcher.')
- else:
-        raise ValueError(f"Wrong prefetch_mode {prefetch_mode}. Supported ones are: None, 'cuda', 'cpu'.")
-
- # training
- logger.info(f'Start training from epoch: {start_epoch}, iter: {current_iter+1}')
- data_time, iter_time = time.time(), time.time()
- start_time = time.time()
-
- for epoch in range(start_epoch, total_epochs + 1):
- train_sampler.set_epoch(epoch)
- prefetcher.reset()
- train_data = prefetcher.next()
-
- while train_data is not None:
- data_time = time.time() - data_time
-
- current_iter += 1
- if current_iter > total_iters:
- break
- # update learning rate
- model.update_learning_rate(current_iter, warmup_iter=opt['train'].get('warmup_iter', -1))
- # training
- model.feed_data(train_data)
- model.optimize_parameters(current_iter)
- iter_time = time.time() - iter_time
- # log
- if current_iter % opt['logger']['print_freq'] == 0:
- log_vars = {'epoch': epoch, 'iter': current_iter}
- log_vars.update({'lrs': model.get_current_learning_rate()})
- log_vars.update({'time': iter_time, 'data_time': data_time})
- log_vars.update(model.get_current_log())
- msg_logger(log_vars)
-
- # save models and training states
- if current_iter % opt['logger']['save_checkpoint_freq'] == 0:
- logger.info('Saving models and training states.')
- model.save(epoch, current_iter)
-
- # validation
- if opt.get('val') is not None and opt['datasets'].get('val') is not None \
- and (current_iter % opt['val']['val_freq'] == 0):
- model.validation(val_loader, current_iter, tb_logger, opt['val']['save_img'])
-
- data_time = time.time()
- iter_time = time.time()
- train_data = prefetcher.next()
- # end of iter
-
- # end of epoch
-
- consumed_time = str(datetime.timedelta(seconds=int(time.time() - start_time)))
- logger.info(f'End of training. Time consumed: {consumed_time}')
- logger.info('Save the latest model.')
- model.save(epoch=-1, current_iter=-1) # -1 stands for the latest
- if opt.get('val') is not None and opt['datasets'].get('val'):
- model.validation(val_loader, current_iter, tb_logger, opt['val']['save_img'])
- if tb_logger:
- tb_logger.close()
-
-
-if __name__ == '__main__':
- root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir))
- train_pipeline(root_path)
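
The prefetcher contract that the training loop above relies on is just reset() followed by next() until None. A minimal stand-in that satisfies it (not basicsr's actual CPUPrefetcher, which wraps a DataLoader):

    class SimplePrefetcher:
        # same reset()/next() protocol as CPUPrefetcher/CUDAPrefetcher above
        def __init__(self, loader):
            self.loader = loader

        def reset(self):
            self.it = iter(self.loader)

        def next(self):
            try:
                return next(self.it)
            except StopIteration:
                return None

    prefetcher = SimplePrefetcher([{"lq": i} for i in range(3)])  # any iterable of batches
    prefetcher.reset()
    batch = prefetcher.next()
    while batch is not None:
        print(batch)
        batch = prefetcher.next()
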
diff --git a/spaces/csuer/vits/commons.py b/spaces/csuer/vits/commons.py
deleted file mode 100644
index db17cf0914ba6e445fe613e3ec3411b3a74b28aa..0000000000000000000000000000000000000000
--- a/spaces/csuer/vits/commons.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- try:
- ret[i] = x[i, :, idx_str:idx_end]
-        except RuntimeError:
-            print(f"slice_segments: slice [{idx_str}:{idx_end}] is out of range for input of length {x.size(2)}")
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
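
For intuition, generate_path expands per-token durations into a hard monotonic alignment. With durations [[2, 3]] and an all-ones mask over 5 frames, the first two frames map to token 0 and the remaining three to token 1 (assumes the functions above are in scope):

    import torch

    duration = torch.tensor([[[2., 3.]]])  # [b=1, 1, t_x=2]
    mask = torch.ones(1, 1, 5, 2)          # [b, 1, t_y=5, t_x=2]
    path = generate_path(duration, mask)
    print(path[0, 0])                      # rows = frames, columns = tokens
    # -> [[1, 0], [1, 0], [0, 1], [0, 1], [0, 1]]
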
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
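
A quick demonstration of the masking and slicing utilities above on toy tensors (assumes sequence_mask and rand_slice_segments are in scope):

    import torch

    lengths = torch.tensor([3, 5])
    print(sequence_mask(lengths))  # (2, 5) bool mask, True inside each sequence

    x = torch.randn(2, 8, 10)      # batch, channels, time
    segments, ids = rand_slice_segments(x, x_lengths=torch.tensor([10, 10]), segment_size=4)
    print(segments.shape)          # torch.Size([2, 8, 4]), one random window per item
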
diff --git a/spaces/cvlab/zero123-live/taming-transformers/scripts/sample_conditional.py b/spaces/cvlab/zero123-live/taming-transformers/scripts/sample_conditional.py
deleted file mode 100644
index 174cf2af07c1a1ca4e6c35fc0e4f8d6e53591b56..0000000000000000000000000000000000000000
--- a/spaces/cvlab/zero123-live/taming-transformers/scripts/sample_conditional.py
+++ /dev/null
@@ -1,355 +0,0 @@
-import argparse, os, sys, glob, math, time
-import torch
-import numpy as np
-from omegaconf import OmegaConf
-import streamlit as st
-from streamlit import caching
-from PIL import Image
-from main import instantiate_from_config, DataModuleFromConfig
-from torch.utils.data import DataLoader
-from torch.utils.data.dataloader import default_collate
-
-
-rescale = lambda x: (x + 1.) / 2.
-
-
-def bchw_to_st(x):
- return rescale(x.detach().cpu().numpy().transpose(0,2,3,1))
-
-def save_img(xstart, fname):
- I = (xstart.clip(0,1)[0]*255).astype(np.uint8)
- Image.fromarray(I).save(fname)
-
-
-
-def get_interactive_image(resize=False):
- image = st.file_uploader("Input", type=["jpg", "JPEG", "png"])
- if image is not None:
- image = Image.open(image)
- if not image.mode == "RGB":
- image = image.convert("RGB")
- image = np.array(image).astype(np.uint8)
- print("upload image shape: {}".format(image.shape))
- img = Image.fromarray(image)
- if resize:
- img = img.resize((256, 256))
- image = np.array(img)
- return image
-
-
-def single_image_to_torch(x, permute=True):
- assert x is not None, "Please provide an image through the upload function"
- x = np.array(x)
- x = torch.FloatTensor(x/255.*2. - 1.)[None,...]
- if permute:
- x = x.permute(0, 3, 1, 2)
- return x
-
-
-def pad_to_M(x, M):
- hp = math.ceil(x.shape[2]/M)*M-x.shape[2]
- wp = math.ceil(x.shape[3]/M)*M-x.shape[3]
- x = torch.nn.functional.pad(x, (0,wp,0,hp,0,0,0,0))
- return x
-
-@torch.no_grad()
-def run_conditional(model, dsets):
- if len(dsets.datasets) > 1:
- split = st.sidebar.radio("Split", sorted(dsets.datasets.keys()))
- dset = dsets.datasets[split]
- else:
- dset = next(iter(dsets.datasets.values()))
- batch_size = 1
- start_index = st.sidebar.number_input("Example Index (Size: {})".format(len(dset)), value=0,
- min_value=0,
- max_value=len(dset)-batch_size)
- indices = list(range(start_index, start_index+batch_size))
-
- example = default_collate([dset[i] for i in indices])
-
- x = model.get_input("image", example).to(model.device)
-
- cond_key = model.cond_stage_key
- c = model.get_input(cond_key, example).to(model.device)
-
- scale_factor = st.sidebar.slider("Scale Factor", min_value=0.5, max_value=4.0, step=0.25, value=1.00)
- if scale_factor != 1.0:
- x = torch.nn.functional.interpolate(x, scale_factor=scale_factor, mode="bicubic")
- c = torch.nn.functional.interpolate(c, scale_factor=scale_factor, mode="bicubic")
-
- quant_z, z_indices = model.encode_to_z(x)
- quant_c, c_indices = model.encode_to_c(c)
-
- cshape = quant_z.shape
-
- xrec = model.first_stage_model.decode(quant_z)
- st.write("image: {}".format(x.shape))
- st.image(bchw_to_st(x), clamp=True, output_format="PNG")
- st.write("image reconstruction: {}".format(xrec.shape))
- st.image(bchw_to_st(xrec), clamp=True, output_format="PNG")
-
- if cond_key == "segmentation":
- # get image from segmentation mask
- num_classes = c.shape[1]
- c = torch.argmax(c, dim=1, keepdim=True)
- c = torch.nn.functional.one_hot(c, num_classes=num_classes)
- c = c.squeeze(1).permute(0, 3, 1, 2).float()
- c = model.cond_stage_model.to_rgb(c)
-
- st.write(f"{cond_key}: {tuple(c.shape)}")
- st.image(bchw_to_st(c), clamp=True, output_format="PNG")
-
- idx = z_indices
-
- half_sample = st.sidebar.checkbox("Image Completion", value=False)
- if half_sample:
- start = idx.shape[1]//2
- else:
- start = 0
-
- idx[:,start:] = 0
- idx = idx.reshape(cshape[0],cshape[2],cshape[3])
- start_i = start//cshape[3]
- start_j = start %cshape[3]
-
- if not half_sample and quant_z.shape == quant_c.shape:
- st.info("Setting idx to c_indices")
- idx = c_indices.clone().reshape(cshape[0],cshape[2],cshape[3])
-
- cidx = c_indices
- cidx = cidx.reshape(quant_c.shape[0],quant_c.shape[2],quant_c.shape[3])
-
- xstart = model.decode_to_img(idx[:,:cshape[2],:cshape[3]], cshape)
- st.image(bchw_to_st(xstart), clamp=True, output_format="PNG")
-
- temperature = st.number_input("Temperature", value=1.0)
- top_k = st.number_input("Top k", value=100)
- sample = st.checkbox("Sample", value=True)
- update_every = st.number_input("Update every", value=75)
-
- st.text(f"Sampling shape ({cshape[2]},{cshape[3]})")
-
- animate = st.checkbox("animate")
- if animate:
- import imageio
- outvid = "sampling.mp4"
- writer = imageio.get_writer(outvid, fps=25)
- elapsed_t = st.empty()
- info = st.empty()
- st.text("Sampled")
- if st.button("Sample"):
- output = st.empty()
- start_t = time.time()
-        for i in range(start_i, cshape[2]):
- if i <= 8:
- local_i = i
- elif cshape[2]-i < 8:
- local_i = 16-(cshape[2]-i)
- else:
- local_i = 8
-            for j in range(start_j, cshape[3]):
- if j <= 8:
- local_j = j
- elif cshape[3]-j < 8:
- local_j = 16-(cshape[3]-j)
- else:
- local_j = 8
-
- i_start = i-local_i
- i_end = i_start+16
- j_start = j-local_j
- j_end = j_start+16
- elapsed_t.text(f"Time: {time.time() - start_t} seconds")
- info.text(f"Step: ({i},{j}) | Local: ({local_i},{local_j}) | Crop: ({i_start}:{i_end},{j_start}:{j_end})")
- patch = idx[:,i_start:i_end,j_start:j_end]
- patch = patch.reshape(patch.shape[0],-1)
- cpatch = cidx[:, i_start:i_end, j_start:j_end]
- cpatch = cpatch.reshape(cpatch.shape[0], -1)
- patch = torch.cat((cpatch, patch), dim=1)
- logits,_ = model.transformer(patch[:,:-1])
- logits = logits[:, -256:, :]
- logits = logits.reshape(cshape[0],16,16,-1)
- logits = logits[:,local_i,local_j,:]
-
- logits = logits/temperature
-
- if top_k is not None:
- logits = model.top_k_logits(logits, top_k)
- # apply softmax to convert to probabilities
- probs = torch.nn.functional.softmax(logits, dim=-1)
- # sample from the distribution or take the most likely
- if sample:
- ix = torch.multinomial(probs, num_samples=1)
- else:
- _, ix = torch.topk(probs, k=1, dim=-1)
- idx[:,i,j] = ix
-
- if (i*cshape[3]+j)%update_every==0:
- xstart = model.decode_to_img(idx[:, :cshape[2], :cshape[3]], cshape,)
-
- xstart = bchw_to_st(xstart)
- output.image(xstart, clamp=True, output_format="PNG")
-
- if animate:
- writer.append_data((xstart[0]*255).clip(0, 255).astype(np.uint8))
-
- xstart = model.decode_to_img(idx[:,:cshape[2],:cshape[3]], cshape)
- xstart = bchw_to_st(xstart)
- output.image(xstart, clamp=True, output_format="PNG")
- #save_img(xstart, "full_res_sample.png")
- if animate:
- writer.close()
- st.video(outvid)
-
-
-def get_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-r",
- "--resume",
- type=str,
- nargs="?",
- help="load from logdir or checkpoint in logdir",
- )
- parser.add_argument(
- "-b",
- "--base",
- nargs="*",
- metavar="base_config.yaml",
- help="paths to base configs. Loaded from left-to-right. "
- "Parameters can be overwritten or added with command-line options of the form `--key value`.",
- default=list(),
- )
- parser.add_argument(
- "-c",
- "--config",
- nargs="?",
- metavar="single_config.yaml",
- help="path to single config. If specified, base configs will be ignored "
- "(except for the last one if left unspecified).",
- const=True,
- default="",
- )
- parser.add_argument(
- "--ignore_base_data",
- action="store_true",
- help="Ignore data specification from base configs. Useful if you want "
- "to specify a custom datasets on the command line.",
- )
- return parser
-
-
-def load_model_from_config(config, sd, gpu=True, eval_mode=True):
- if "ckpt_path" in config.params:
- st.warning("Deleting the restore-ckpt path from the config...")
- config.params.ckpt_path = None
- if "downsample_cond_size" in config.params:
- st.warning("Deleting downsample-cond-size from the config and setting factor=0.5 instead...")
- config.params.downsample_cond_size = -1
- config.params["downsample_cond_factor"] = 0.5
- try:
- if "ckpt_path" in config.params.first_stage_config.params:
- config.params.first_stage_config.params.ckpt_path = None
- st.warning("Deleting the first-stage restore-ckpt path from the config...")
- if "ckpt_path" in config.params.cond_stage_config.params:
- config.params.cond_stage_config.params.ckpt_path = None
- st.warning("Deleting the cond-stage restore-ckpt path from the config...")
-    except Exception:
-        # not all configs define first/cond-stage params; ignore missing keys
-        pass
-
- model = instantiate_from_config(config)
- if sd is not None:
- missing, unexpected = model.load_state_dict(sd, strict=False)
- st.info(f"Missing Keys in State Dict: {missing}")
- st.info(f"Unexpected Keys in State Dict: {unexpected}")
- if gpu:
- model.cuda()
- if eval_mode:
- model.eval()
- return {"model": model}
-
-
-def get_data(config):
- # get data
- data = instantiate_from_config(config.data)
- data.prepare_data()
- data.setup()
- return data
-
-
-@st.cache(allow_output_mutation=True, suppress_st_warning=True)
-def load_model_and_dset(config, ckpt, gpu, eval_mode):
- # get data
- dsets = get_data(config) # calls data.config ...
-
- # now load the specified checkpoint
- if ckpt:
- pl_sd = torch.load(ckpt, map_location="cpu")
- global_step = pl_sd["global_step"]
- else:
- pl_sd = {"state_dict": None}
- global_step = None
- model = load_model_from_config(config.model,
- pl_sd["state_dict"],
- gpu=gpu,
- eval_mode=eval_mode)["model"]
- return dsets, model, global_step
-
-
-if __name__ == "__main__":
- sys.path.append(os.getcwd())
-
- parser = get_parser()
-
- opt, unknown = parser.parse_known_args()
-
- ckpt = None
- if opt.resume:
- if not os.path.exists(opt.resume):
- raise ValueError("Cannot find {}".format(opt.resume))
- if os.path.isfile(opt.resume):
- paths = opt.resume.split("/")
- try:
- idx = len(paths)-paths[::-1].index("logs")+1
- except ValueError:
- idx = -2 # take a guess: path/to/logdir/checkpoints/model.ckpt
- logdir = "/".join(paths[:idx])
- ckpt = opt.resume
- else:
- assert os.path.isdir(opt.resume), opt.resume
- logdir = opt.resume.rstrip("/")
- ckpt = os.path.join(logdir, "checkpoints", "last.ckpt")
- print(f"logdir:{logdir}")
- base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*-project.yaml")))
- opt.base = base_configs+opt.base
-
- if opt.config:
- if type(opt.config) == str:
- opt.base = [opt.config]
- else:
- opt.base = [opt.base[-1]]
-
- configs = [OmegaConf.load(cfg) for cfg in opt.base]
- cli = OmegaConf.from_dotlist(unknown)
- if opt.ignore_base_data:
- for config in configs:
- if hasattr(config, "data"): del config["data"]
- config = OmegaConf.merge(*configs, cli)
-
- st.sidebar.text(ckpt)
- gs = st.sidebar.empty()
-    gs.text("Global step: ?")
- st.sidebar.text("Options")
- #gpu = st.sidebar.checkbox("GPU", value=True)
- gpu = True
- #eval_mode = st.sidebar.checkbox("Eval Mode", value=True)
- eval_mode = True
- #show_config = st.sidebar.checkbox("Show Config", value=False)
- show_config = False
- if show_config:
- st.info("Checkpoint: {}".format(ckpt))
- st.json(OmegaConf.to_container(config))
-
- dsets, model, global_step = load_model_and_dset(config, ckpt, gpu, eval_mode)
- gs.text(f"Global step: {global_step}")
- run_conditional(model, dsets)
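
The sampling step in the inner loop above (temperature scaling, optional top-k filtering, softmax, multinomial draw) in isolation; top_k_logits is reimplemented here since only plain PyTorch is assumed:

    import torch

    def top_k_logits(logits, k):
        # keep the k largest logits, push everything else to -inf
        v, _ = torch.topk(logits, k)
        out = logits.clone()
        out[out < v[..., [-1]]] = -float('inf')
        return out

    logits = torch.randn(1, 1024)   # scores over a 1024-entry codebook
    logits = logits / 0.9           # temperature < 1 sharpens the distribution
    logits = top_k_logits(logits, 100)
    probs = torch.nn.functional.softmax(logits, dim=-1)
    ix = torch.multinomial(probs, num_samples=1)
    print(ix.item())                # sampled codebook index
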
diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/ui.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/ui.py
deleted file mode 100644
index acb36289511c190aea371c4f98511fcc44482599..0000000000000000000000000000000000000000
--- a/spaces/cymic/Waifu_Diffusion_Webui/modules/ui.py
+++ /dev/null
@@ -1,1414 +0,0 @@
-import base64
-import html
-import io
-import json
-import math
-import mimetypes
-import os
-import random
-import sys
-import time
-import traceback
-import platform
-import subprocess as sp
-from functools import reduce
-
-import numpy as np
-import torch
-from PIL import Image, PngImagePlugin
-import piexif
-
-import gradio as gr
-import gradio.utils
-import gradio.routes
-
-from modules import sd_hijack
-from modules.paths import script_path
-from modules.shared import opts, cmd_opts
-import modules.shared as shared
-from modules.sd_samplers import samplers, samplers_for_img2img
-from modules.sd_hijack import model_hijack
-import modules.ldsr_model
-import modules.scripts
-import modules.gfpgan_model
-import modules.codeformer_model
-import modules.styles
-import modules.generation_parameters_copypaste
-from modules import prompt_parser
-from modules.images import save_image
-import modules.textual_inversion.ui
-
-# this is a fix for Windows users. Without it, JavaScript files will be served with the text/html content-type and the browser will not show any UI
-mimetypes.init()
-mimetypes.add_type('application/javascript', '.js')
-
-
-if not cmd_opts.share and not cmd_opts.listen:
- # fix gradio phoning home
- gradio.utils.version_check = lambda: None
- gradio.utils.get_local_ip_address = lambda: '127.0.0.1'
-
-
-def gr_show(visible=True):
- return {"visible": visible, "__type__": "update"}
-
-
-sample_img2img = "assets/stable-samples/img2img/sketch-mountains-input.jpg"
-sample_img2img = sample_img2img if os.path.exists(sample_img2img) else None
-
-css_hide_progressbar = """
-.wrap .m-12 svg { display:none!important; }
-.wrap .m-12::before { content:"Loading..." }
-.progress-bar { display:none!important; }
-.meta-text { display:none!important; }
-"""
-
-# Using constants for these since the variation selector isn't visible.
-# Important that they exactly match script.js for tooltip to work.
-random_symbol = '\U0001f3b2\ufe0f' # 🎲️
-reuse_symbol = '\u267b\ufe0f' # ♻️
-art_symbol = '\U0001f3a8' # 🎨
-paste_symbol = '\u2199\ufe0f' # ↙
-folder_symbol = '\U0001f4c2' # 📂
-
-def plaintext_to_html(text):
-    text = "<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>"
- return text
-
-
-def image_from_url_text(filedata):
- if type(filedata) == list:
- if len(filedata) == 0:
- return None
-
- filedata = filedata[0]
-
- if filedata.startswith("data:image/png;base64,"):
- filedata = filedata[len("data:image/png;base64,"):]
-
- filedata = base64.decodebytes(filedata.encode('utf-8'))
- image = Image.open(io.BytesIO(filedata))
- return image
-
-
-def send_gradio_gallery_to_image(x):
- if len(x) == 0:
- return None
-
- return image_from_url_text(x[0])
-
-
-def save_files(js_data, images, index):
- import csv
- filenames = []
-
-    # quick dictionary-to-class-object conversion; necessary because apply_filename_pattern requires an object
- class MyObject:
- def __init__(self, d=None):
- if d is not None:
- for key, value in d.items():
- setattr(self, key, value)
-
- data = json.loads(js_data)
-
- p = MyObject(data)
- path = opts.outdir_save
- save_to_dirs = opts.use_save_to_dirs_for_ui
- extension: str = opts.samples_format
- start_index = 0
-
- if index > -1 and opts.save_selected_only and (index >= data["index_of_first_image"]): # ensures we are looking at a specific non-grid picture, and we have save_selected_only
-
- images = [images[index]]
- start_index = index
-
- with open(os.path.join(opts.outdir_save, "log.csv"), "a", encoding="utf8", newline='') as file:
- at_start = file.tell() == 0
- writer = csv.writer(file)
- if at_start:
- writer.writerow(["prompt", "seed", "width", "height", "sampler", "cfgs", "steps", "filename", "negative_prompt"])
-
- for image_index, filedata in enumerate(images, start_index):
- if filedata.startswith("data:image/png;base64,"):
- filedata = filedata[len("data:image/png;base64,"):]
-
- image = Image.open(io.BytesIO(base64.decodebytes(filedata.encode('utf-8'))))
-
- is_grid = image_index < p.index_of_first_image
- i = 0 if is_grid else (image_index - p.index_of_first_image)
-
- fullfn = save_image(image, path, "", seed=p.all_seeds[i], prompt=p.all_prompts[i], extension=extension, info=p.infotexts[image_index], grid=is_grid, p=p, save_to_dirs=save_to_dirs)
-
- filename = os.path.relpath(fullfn, path)
- filenames.append(filename)
-
- writer.writerow([data["prompt"], data["seed"], data["width"], data["height"], data["sampler"], data["cfg_scale"], data["steps"], filenames[0], data["negative_prompt"]])
-
- return '', '', plaintext_to_html(f"Saved: {filenames[0]}")
-
-
-def wrap_gradio_call(func, extra_outputs=None):
- def f(*args, extra_outputs_array=extra_outputs, **kwargs):
- run_memmon = opts.memmon_poll_rate > 0 and not shared.mem_mon.disabled
- if run_memmon:
- shared.mem_mon.monitor()
- t = time.perf_counter()
-
- try:
- res = list(func(*args, **kwargs))
- except Exception as e:
- print("Error completing request", file=sys.stderr)
- print("Arguments:", args, kwargs, file=sys.stderr)
- print(traceback.format_exc(), file=sys.stderr)
-
- shared.state.job = ""
- shared.state.job_count = 0
-
- if extra_outputs_array is None:
- extra_outputs_array = [None, '']
-
-            res = extra_outputs_array + [f"<div class='error'>{plaintext_to_html(type(e).__name__ + ': ' + str(e))}</div>"]
-        progressbar = f"""<div class='progressDiv'><div class='progress' style="width:{progress * 100}%">{str(int(progress*100))+"%" if progress > 0.01 else ""}</div></div>"""
-
- image = gr_show(False)
- preview_visibility = gr_show(False)
-
- if opts.show_progress_every_n_steps > 0:
- if shared.parallel_processing_allowed:
-
- if shared.state.sampling_step - shared.state.current_image_sampling_step >= opts.show_progress_every_n_steps and shared.state.current_latent is not None:
- shared.state.current_image = modules.sd_samplers.sample_to_image(shared.state.current_latent)
- shared.state.current_image_sampling_step = shared.state.sampling_step
-
- image = shared.state.current_image
-
- if image is None:
- image = gr.update(value=None)
- else:
- preview_visibility = gr_show(True)
-
- if shared.state.textinfo is not None:
- textinfo_result = gr.HTML.update(value=shared.state.textinfo, visible=True)
- else:
- textinfo_result = gr_show(False)
-
-    return f"<span id='{id_part}_progress_span' style='display: none'>{time.time()}</span><p>{progressbar}</p>", preview_visibility, image, textinfo_result
-
-
-def check_progress_call_initial(id_part):
- shared.state.job_count = -1
- shared.state.current_latent = None
- shared.state.current_image = None
- shared.state.textinfo = None
-
- return check_progress_call(id_part)
-
-
-def roll_artist(prompt):
- allowed_cats = set([x for x in shared.artist_db.categories() if len(opts.random_artist_categories)==0 or x in opts.random_artist_categories])
- artist = random.choice([x for x in shared.artist_db.artists if x.category in allowed_cats])
-
- return prompt + ", " + artist.name if prompt != '' else artist.name
-
-
-def visit(x, func, path=""):
- if hasattr(x, 'children'):
- for c in x.children:
- visit(c, func, path)
- elif x.label is not None:
- func(path + "/" + str(x.label), x)
-
-
-def add_style(name: str, prompt: str, negative_prompt: str):
- if name is None:
- return [gr_show(), gr_show()]
-
- style = modules.styles.PromptStyle(name, prompt, negative_prompt)
- shared.prompt_styles.styles[style.name] = style
- # Save all loaded prompt styles: this allows us to update the storage format in the future more easily, because we
- # reserialize all styles every time we save them
- shared.prompt_styles.save_styles(shared.styles_filename)
-
- return [gr.Dropdown.update(visible=True, choices=list(shared.prompt_styles.styles)) for _ in range(4)]
-
-
-def apply_styles(prompt, prompt_neg, style1_name, style2_name):
- prompt = shared.prompt_styles.apply_styles_to_prompt(prompt, [style1_name, style2_name])
- prompt_neg = shared.prompt_styles.apply_negative_styles_to_prompt(prompt_neg, [style1_name, style2_name])
-
- return [gr.Textbox.update(value=prompt), gr.Textbox.update(value=prompt_neg), gr.Dropdown.update(value="None"), gr.Dropdown.update(value="None")]
-
-
-def interrogate(image):
- prompt = shared.interrogator.interrogate(image)
-
- return gr_show(True) if prompt is None else prompt
-
-
-def create_seed_inputs():
- with gr.Row():
- with gr.Box():
- with gr.Row(elem_id='seed_row'):
- seed = (gr.Textbox if cmd_opts.use_textbox_seed else gr.Number)(label='Seed', value=-1)
- seed.style(container=False)
- random_seed = gr.Button(random_symbol, elem_id='random_seed')
- reuse_seed = gr.Button(reuse_symbol, elem_id='reuse_seed')
-
- with gr.Box(elem_id='subseed_show_box'):
- seed_checkbox = gr.Checkbox(label='Extra', elem_id='subseed_show', value=False)
-
- # Components to show/hide based on the 'Extra' checkbox
- seed_extras = []
-
- with gr.Row(visible=False) as seed_extra_row_1:
- seed_extras.append(seed_extra_row_1)
- with gr.Box():
- with gr.Row(elem_id='subseed_row'):
- subseed = gr.Number(label='Variation seed', value=-1)
- subseed.style(container=False)
- random_subseed = gr.Button(random_symbol, elem_id='random_subseed')
- reuse_subseed = gr.Button(reuse_symbol, elem_id='reuse_subseed')
- subseed_strength = gr.Slider(label='Variation strength', value=0.0, minimum=0, maximum=1, step=0.01)
-
- with gr.Row(visible=False) as seed_extra_row_2:
- seed_extras.append(seed_extra_row_2)
- seed_resize_from_w = gr.Slider(minimum=0, maximum=2048, step=64, label="Resize seed from width", value=0)
- seed_resize_from_h = gr.Slider(minimum=0, maximum=2048, step=64, label="Resize seed from height", value=0)
-
- random_seed.click(fn=lambda: -1, show_progress=False, inputs=[], outputs=[seed])
- random_subseed.click(fn=lambda: -1, show_progress=False, inputs=[], outputs=[subseed])
-
- def change_visibility(show):
- return {comp: gr_show(show) for comp in seed_extras}
-
- seed_checkbox.change(change_visibility, show_progress=False, inputs=[seed_checkbox], outputs=seed_extras)
-
- return seed, reuse_seed, subseed, reuse_subseed, subseed_strength, seed_resize_from_h, seed_resize_from_w, seed_checkbox
-
-
-def connect_reuse_seed(seed: gr.Number, reuse_seed: gr.Button, generation_info: gr.Textbox, dummy_component, is_subseed):
-    """ Connects a 'reuse (sub)seed' button's click event so that it copies the last used
-    (sub)seed value from the generation info to the seed field. If copying the subseed and the
-    subseed strength was 0, i.e. no variation seed was used, it copies the normal seed value instead."""
- def copy_seed(gen_info_string: str, index):
- res = -1
-
- try:
- gen_info = json.loads(gen_info_string)
- index -= gen_info.get('index_of_first_image', 0)
-
- if is_subseed and gen_info.get('subseed_strength', 0) > 0:
- all_subseeds = gen_info.get('all_subseeds', [-1])
- res = all_subseeds[index if 0 <= index < len(all_subseeds) else 0]
- else:
- all_seeds = gen_info.get('all_seeds', [-1])
- res = all_seeds[index if 0 <= index < len(all_seeds) else 0]
-
- except json.decoder.JSONDecodeError as e:
- if gen_info_string != '':
- print("Error parsing JSON generation info:", file=sys.stderr)
- print(gen_info_string, file=sys.stderr)
-
- return [res, gr_show(False)]
-
- reuse_seed.click(
- fn=copy_seed,
- _js="(x, y) => [x, selected_gallery_index()]",
- show_progress=False,
- inputs=[generation_info, dummy_component],
- outputs=[seed, dummy_component]
- )
-
-
-def update_token_counter(text, steps):
- try:
- _, prompt_flat_list, _ = prompt_parser.get_multicond_prompt_list([text])
- prompt_schedules = prompt_parser.get_learned_conditioning_prompt_schedules(prompt_flat_list, steps)
-
- except Exception:
- # a parsing error can happen here during typing, and we don't want to bother the user with
- # messages related to it in console
- prompt_schedules = [[[steps, text]]]
-
- flat_prompts = reduce(lambda list1, list2: list1+list2, prompt_schedules)
- prompts = [prompt_text for step, prompt_text in flat_prompts]
- tokens, token_count, max_length = max([model_hijack.tokenize(prompt) for prompt in prompts], key=lambda args: args[1])
- style_class = ' class="red"' if (token_count > max_length) else ""
-    return f"<span{style_class}>{token_count}/{max_length}</span>"
-
-
-def create_toprow(is_img2img):
- id_part = "img2img" if is_img2img else "txt2img"
-
- with gr.Row(elem_id="toprow"):
- with gr.Column(scale=4):
- with gr.Row():
- with gr.Column(scale=80):
- with gr.Row():
- prompt = gr.Textbox(label="Prompt", elem_id=f"{id_part}_prompt", show_label=False, placeholder="Prompt", lines=2)
-
- with gr.Column(scale=1, elem_id="roll_col"):
- roll = gr.Button(value=art_symbol, elem_id="roll", visible=len(shared.artist_db.artists) > 0)
- paste = gr.Button(value=paste_symbol, elem_id="paste")
- token_counter = gr.HTML(value="", elem_id=f"{id_part}_token_counter")
- token_button = gr.Button(visible=False, elem_id=f"{id_part}_token_button")
-
- with gr.Column(scale=10, elem_id="style_pos_col"):
- prompt_style = gr.Dropdown(label="Style 1", elem_id=f"{id_part}_style_index", choices=[k for k, v in shared.prompt_styles.styles.items()], value=next(iter(shared.prompt_styles.styles.keys())), visible=len(shared.prompt_styles.styles) > 1)
-
- with gr.Row():
- with gr.Column(scale=8):
- negative_prompt = gr.Textbox(label="Negative prompt", elem_id="negative_prompt", show_label=False, placeholder="Negative prompt", lines=2)
-
- with gr.Column(scale=1, elem_id="style_neg_col"):
- prompt_style2 = gr.Dropdown(label="Style 2", elem_id=f"{id_part}_style2_index", choices=[k for k, v in shared.prompt_styles.styles.items()], value=next(iter(shared.prompt_styles.styles.keys())), visible=len(shared.prompt_styles.styles) > 1)
-
- with gr.Column(scale=1):
- with gr.Row():
- interrupt = gr.Button('Interrupt', elem_id=f"{id_part}_interrupt")
- submit = gr.Button('Generate', elem_id=f"{id_part}_generate", variant='primary')
-
- interrupt.click(
- fn=lambda: shared.state.interrupt(),
- inputs=[],
- outputs=[],
- )
-
- with gr.Row():
- if is_img2img:
- interrogate = gr.Button('Interrogate', elem_id="interrogate")
- else:
- interrogate = None
- prompt_style_apply = gr.Button('Apply style', elem_id="style_apply")
- save_style = gr.Button('Create style', elem_id="style_create")
-
- return prompt, roll, prompt_style, negative_prompt, prompt_style2, submit, interrogate, prompt_style_apply, save_style, paste, token_counter, token_button
-
-
-def setup_progressbar(progressbar, preview, id_part, textinfo=None):
- if textinfo is None:
- textinfo = gr.HTML(visible=False)
-
- check_progress = gr.Button('Check progress', elem_id=f"{id_part}_check_progress", visible=False)
- check_progress.click(
- fn=lambda: check_progress_call(id_part),
- show_progress=False,
- inputs=[],
- outputs=[progressbar, preview, preview, textinfo],
- )
-
- check_progress_initial = gr.Button('Check progress (first)', elem_id=f"{id_part}_check_progress_initial", visible=False)
- check_progress_initial.click(
- fn=lambda: check_progress_call_initial(id_part),
- show_progress=False,
- inputs=[],
- outputs=[progressbar, preview, preview, textinfo],
- )
-
-
-def create_ui(wrap_gradio_gpu_call):
- import modules.img2img
- import modules.txt2img
-
- with gr.Blocks(analytics_enabled=False) as txt2img_interface:
- txt2img_prompt, roll, txt2img_prompt_style, txt2img_negative_prompt, txt2img_prompt_style2, submit, _, txt2img_prompt_style_apply, txt2img_save_style, paste, token_counter, token_button = create_toprow(is_img2img=False)
- dummy_component = gr.Label(visible=False)
-
- with gr.Row(elem_id='txt2img_progress_row'):
- with gr.Column(scale=1):
- pass
-
- with gr.Column(scale=1):
- progressbar = gr.HTML(elem_id="txt2img_progressbar")
- txt2img_preview = gr.Image(elem_id='txt2img_preview', visible=False)
- setup_progressbar(progressbar, txt2img_preview, 'txt2img')
-
- with gr.Row().style(equal_height=False):
- with gr.Column(variant='panel'):
- steps = gr.Slider(minimum=1, maximum=150, step=1, label="Sampling Steps", value=20)
- sampler_index = gr.Radio(label='Sampling method', elem_id="txt2img_sampling", choices=[x.name for x in samplers], value=samplers[0].name, type="index")
-
- with gr.Group():
- width = gr.Slider(minimum=64, maximum=2048, step=64, label="Width", value=512)
- height = gr.Slider(minimum=64, maximum=2048, step=64, label="Height", value=512)
-
- with gr.Row():
- restore_faces = gr.Checkbox(label='Restore faces', value=False, visible=len(shared.face_restorers) > 1)
- tiling = gr.Checkbox(label='Tiling', value=False)
- enable_hr = gr.Checkbox(label='Highres. fix', value=False)
-
- with gr.Row(visible=False) as hr_options:
- scale_latent = gr.Checkbox(label='Scale latent', value=False)
- denoising_strength = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Denoising strength', value=0.7)
-
- with gr.Row():
- batch_count = gr.Slider(minimum=1, maximum=cmd_opts.max_batch_count, step=1, label='Batch count', value=1)
- batch_size = gr.Slider(minimum=1, maximum=8, step=1, label='Batch size', value=1)
-
- cfg_scale = gr.Slider(minimum=1.0, maximum=30.0, step=0.5, label='CFG Scale', value=7.0)
-
- seed, reuse_seed, subseed, reuse_subseed, subseed_strength, seed_resize_from_h, seed_resize_from_w, seed_checkbox = create_seed_inputs()
-
- with gr.Group():
- custom_inputs = modules.scripts.scripts_txt2img.setup_ui(is_img2img=False)
-
- with gr.Column(variant='panel'):
-
- with gr.Group():
- txt2img_preview = gr.Image(elem_id='txt2img_preview', visible=False)
- txt2img_gallery = gr.Gallery(label='Output', show_label=False, elem_id='txt2img_gallery').style(grid=4)
-
- with gr.Group():
- with gr.Row():
- save = gr.Button('Save')
- send_to_img2img = gr.Button('Send to img2img')
- send_to_inpaint = gr.Button('Send to inpaint')
- send_to_extras = gr.Button('Send to extras')
- button_id = "hidden_element" if shared.cmd_opts.hide_ui_dir_config else 'open_folder'
- open_txt2img_folder = gr.Button(folder_symbol, elem_id=button_id)
-
- with gr.Group():
- html_info = gr.HTML()
- generation_info = gr.Textbox(visible=False)
-
- connect_reuse_seed(seed, reuse_seed, generation_info, dummy_component, is_subseed=False)
- connect_reuse_seed(subseed, reuse_subseed, generation_info, dummy_component, is_subseed=True)
-
- txt2img_args = dict(
- fn=wrap_gradio_gpu_call(modules.txt2img.txt2img),
- _js="submit",
- inputs=[
- txt2img_prompt,
- txt2img_negative_prompt,
- txt2img_prompt_style,
- txt2img_prompt_style2,
- steps,
- sampler_index,
- restore_faces,
- tiling,
- batch_count,
- batch_size,
- cfg_scale,
- seed,
- subseed, subseed_strength, seed_resize_from_h, seed_resize_from_w, seed_checkbox,
- height,
- width,
- enable_hr,
- scale_latent,
- denoising_strength,
- ] + custom_inputs,
- outputs=[
- txt2img_gallery,
- generation_info,
- html_info
- ],
- show_progress=False,
- )
-
- txt2img_prompt.submit(**txt2img_args)
- submit.click(**txt2img_args)
-
- enable_hr.change(
- fn=lambda x: gr_show(x),
- inputs=[enable_hr],
- outputs=[hr_options],
- )
-
- save.click(
- fn=wrap_gradio_call(save_files),
- _js="(x, y, z) => [x, y, selected_gallery_index()]",
- inputs=[
- generation_info,
- txt2img_gallery,
- html_info,
- ],
- outputs=[
- html_info,
- html_info,
- html_info,
- ]
- )
-
- roll.click(
- fn=roll_artist,
- _js="update_txt2img_tokens",
- inputs=[
- txt2img_prompt,
- ],
- outputs=[
- txt2img_prompt,
- ]
- )
-
- txt2img_paste_fields = [
- (txt2img_prompt, "Prompt"),
- (txt2img_negative_prompt, "Negative prompt"),
- (steps, "Steps"),
- (sampler_index, "Sampler"),
- (restore_faces, "Face restoration"),
- (cfg_scale, "CFG scale"),
- (seed, "Seed"),
- (width, "Size-1"),
- (height, "Size-2"),
- (batch_size, "Batch size"),
- (subseed, "Variation seed"),
- (subseed_strength, "Variation seed strength"),
- (seed_resize_from_w, "Seed resize from-1"),
- (seed_resize_from_h, "Seed resize from-2"),
- (denoising_strength, "Denoising strength"),
- (enable_hr, lambda d: "Denoising strength" in d),
- (hr_options, lambda d: gr.Row.update(visible="Denoising strength" in d)),
- ]
- modules.generation_parameters_copypaste.connect_paste(paste, txt2img_paste_fields, txt2img_prompt)
- token_button.click(fn=update_token_counter, inputs=[txt2img_prompt, steps], outputs=[token_counter])
-
- with gr.Blocks(analytics_enabled=False) as img2img_interface:
- img2img_prompt, roll, img2img_prompt_style, img2img_negative_prompt, img2img_prompt_style2, submit, img2img_interrogate, img2img_prompt_style_apply, img2img_save_style, paste, token_counter, token_button = create_toprow(is_img2img=True)
-
- with gr.Row(elem_id='img2img_progress_row'):
- with gr.Column(scale=1):
- pass
-
- with gr.Column(scale=1):
- progressbar = gr.HTML(elem_id="img2img_progressbar")
- img2img_preview = gr.Image(elem_id='img2img_preview', visible=False)
- setup_progressbar(progressbar, img2img_preview, 'img2img')
-
- with gr.Row().style(equal_height=False):
- with gr.Column(variant='panel'):
-
- with gr.Tabs(elem_id="mode_img2img") as tabs_img2img_mode:
- with gr.TabItem('img2img', id='img2img'):
- init_img = gr.Image(label="Image for img2img", elem_id="img2img_image", show_label=False, source="upload", interactive=True, type="pil", tool=cmd_opts.gradio_img2img_tool)
-
- with gr.TabItem('Inpaint', id='inpaint'):
- init_img_with_mask = gr.Image(label="Image for inpainting with mask", show_label=False, elem_id="img2maskimg", source="upload", interactive=True, type="pil", tool="sketch", image_mode="RGBA")
-
- init_img_inpaint = gr.Image(label="Image for img2img", show_label=False, source="upload", interactive=True, type="pil", visible=False, elem_id="img_inpaint_base")
- init_mask_inpaint = gr.Image(label="Mask", source="upload", interactive=True, type="pil", visible=False, elem_id="img_inpaint_mask")
-
- mask_blur = gr.Slider(label='Mask blur', minimum=0, maximum=64, step=1, value=4)
-
- with gr.Row():
- mask_mode = gr.Radio(label="Mask mode", show_label=False, choices=["Draw mask", "Upload mask"], type="index", value="Draw mask", elem_id="mask_mode")
- inpainting_mask_invert = gr.Radio(label='Masking mode', show_label=False, choices=['Inpaint masked', 'Inpaint not masked'], value='Inpaint masked', type="index")
-
- inpainting_fill = gr.Radio(label='Masked content', choices=['fill', 'original', 'latent noise', 'latent nothing'], value='original', type="index")
-
- with gr.Row():
- inpaint_full_res = gr.Checkbox(label='Inpaint at full resolution', value=False)
- inpaint_full_res_padding = gr.Slider(label='Inpaint at full resolution padding, pixels', minimum=0, maximum=256, step=4, value=32)
-
- with gr.TabItem('Batch img2img', id='batch'):
- hidden = ' Disabled when launched with --hide-ui-dir-config.' if shared.cmd_opts.hide_ui_dir_config else ''
-                            gr.HTML(f"<p>Process images in a directory on the same machine where the server is running. Use an empty output directory to save pictures normally instead of writing to the output directory.{hidden}</p>")
-
-# Sidebar
-index = None
-doc = None
-with st.sidebar:
- user_secret = st.text_input(
- "OpenAI API Key",
- type="password",
- placeholder="Paste your OpenAI API key here (sk-...)",
- help="You can get your API key from https://platform.openai.com/account/api-keys.",
- value=st.session_state.get("OPENAI_API_KEY", ""),
- )
- if user_secret:
- set_openai_api_key(user_secret)
-
- uploaded_file = st.file_uploader(
- "Upload a pdf, docx, or txt file",
- type=["pdf", "docx", "txt", "csv", "pptx", "js", "py", "json", "html", "css", "md"],
- help="Scanned documents are not supported yet!",
- on_change=clear_submit,
- )
-
- if uploaded_file is not None:
- if uploaded_file.name.endswith(".pdf"):
- doc = parse_pdf(uploaded_file)
- elif uploaded_file.name.endswith(".docx"):
- doc = parse_docx(uploaded_file)
- elif uploaded_file.name.endswith(".csv"):
- doc = parse_csv(uploaded_file)
- elif uploaded_file.name.endswith(".txt"):
- doc = parse_txt(uploaded_file)
- elif uploaded_file.name.endswith(".pptx"):
- doc = parse_pptx(uploaded_file)
- else:
- doc = parse_any(uploaded_file)
- #st.error("File type not supported")
- #doc = None
- text = text_to_docs(doc)
- st.write(text)
- try:
- with st.spinner("Indexing document... This may take a while⏳"):
- index = embed_docs(text)
- st.session_state["api_key_configured"] = True
- except OpenAIError as e:
- st.error(e._message)
-
-tab1, tab2 = st.tabs(["Intro", "Chat with the File"])
-with tab1:
- st.markdown("### How does it work?")
-    st.write("File GPT lets you ask questions about a document and get answers drawn from it. It embeds the document with the OpenAI Embeddings API, retrieves the passages most similar to your question, and then uses LangChain to compose the answer from those passages.")
- st.write("The tool is currently in beta and is not perfect. It is recommended to use it with short documents.")
- st.write("""---""")
- st.markdown("### How to use it?")
- st.write("To use the tool you must first add your OpenAI API Key and then upload a document. The tool currently supports the following file types: pdf, docx, txt, csv, pptx. Once the document is uploaded, the tool will index the document and embed it. This may take a while depending on the size of the document. Once the document is indexed, you can ask questions about the document. The tool will return the answer to the question and the source of the answer.")
- st.markdown('
Darr The Mall 2: How to Download the Movie in DVDRip Quality
-
If you are a fan of horror movies, you might be interested in watching Darr The Mall 2, the sequel to the 2014 film Darr The Mall. The movie follows a group of friends who get trapped in a haunted mall and have to face their worst fears. The movie stars Jimmy Sheirgill, Nushrat Bharucha, Arif Zakaria, and Asif Basra.
-
But how can you watch Darr The Mall 2 online? Is there a way to download the movie in DVDRip quality? In this article, we will answer these questions and provide you with some tips and tricks to enjoy the movie at home.
What is DVDRip Quality?
-
DVDRip is a term that refers to a video file that has been ripped from a DVD. This means that the file has been copied and compressed to reduce its size and make it easier to download and stream. DVDRip quality is usually better than CAM or TS quality, which are recorded from a cinema screen or a TV broadcast. However, DVDRip quality is not as good as BluRay or WEB-DL quality, which are ripped from high-definition sources.
-
How to Download Darr The Mall 2 in DVDRip Quality?
-
There are several ways to download Darr The Mall 2 in DVDRip quality, but not all of them are legal or safe. Here are some of the options you can consider:
-
-
Torrent sites: Torrent sites are platforms that allow users to share files using peer-to-peer technology. You can find many torrent files for Darr The Mall 2 in DVDRip quality on sites like The Pirate Bay, 1337x, or RARBG. However, downloading from torrent sites is illegal in many countries and can expose you to malware, viruses, or legal issues. You should always use a VPN (virtual private network) to protect your identity and data when using torrent sites.
-
Streaming sites: Streaming sites are websites that host video files and allow users to watch them online. You can find many streaming links for Darr The Mall 2 in DVDRip quality on sites like Putlocker, Fmovies, or Gomovies. However, streaming from these sites is also illegal in many countries and can expose you to pop-up ads, redirects, or phishing scams. You should always use an ad-blocker and a VPN when using streaming sites.
-
Legal platforms: Legal platforms are services that offer movies and shows for a fee or a subscription. You can find many legal platforms that have Darr The Mall 2 in DVDRip quality or better on sites like Amazon Prime Video, Netflix, or Hotstar. However, these platforms may not be available in your region or may require a payment method that you don't have. You should always check the availability and pricing of these platforms before signing up.
-
-
Conclusion
-
Darr The Mall 2 is a horror movie that you can watch online or download in DVDRip quality. However, you should be careful about the sources you use and the risks you take when downloading or streaming the movie. We hope this article has helped you find the best way to enjoy Darr The Mall 2 at home.
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Epubor Ultimate Converter 3.0.9.222 Key BETTER.md b/spaces/diacanFperku/AutoGPT/Epubor Ultimate Converter 3.0.9.222 Key BETTER.md
deleted file mode 100644
index 90baf2e942468af40aa1ca891cf1ba9726b815c8..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Epubor Ultimate Converter 3.0.9.222 Key BETTER.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Epubor Ultimate Converter 3.0.9.222 Key: How to Convert and Remove DRM from eBooks
-
-
If you are an avid eBook reader, you may have encountered some problems with different eBook formats and DRM protection. Some eBooks are only compatible with certain devices or apps, while others are locked by DRM that prevents you from copying, sharing, or printing them. How can you enjoy your eBooks without any limitations? The answer is Epubor Ultimate Converter 3.0.9.222 Key.
Epubor Ultimate Converter 3.0.9.222 Key is a powerful eBook converter and DRM remover that can convert any eBook format to EPUB, MOBI, PDF, AZW3, TXT, and other formats that can be read on any device or app. It can also remove DRM from eBooks purchased from Kindle, Google Play, Kobo, Barnes & Noble, and other platforms.
-
-
In this article, we will show you how to download and install Epubor Ultimate Converter 3.0.9.222 Key on your Windows PC. We will also explain the features and benefits of Epubor Ultimate Converter 3.0.9.222 Key and how to use it to convert and remove DRM from eBooks.
-
-
How to Download and Install Epubor Ultimate Converter 3.0.9.222 Key on Your Windows PC
-
-
To download and install Epubor Ultimate Converter 3.0.9.222 Key on your Windows PC, you need to follow these steps:
-
-
-
Click on this link to download Epubor Ultimate Converter 3.0.9.222 Key.
-
Extract the zip file and run the setup.exe file to install Epubor Ultimate Converter 3.0.9.222 on your PC.
-
Copy the key file from the key folder and paste it into the installation directory of Epubor Ultimate Converter 3.0.9.222.
-
Run Epubor Ultimate Converter 3.0.9.222 and enter the key from the key.txt file to activate it.
-
Enjoy Epubor Ultimate Converter 3.0.9.222 Key on your PC.
-
-
-
Note: You need to disable your antivirus software before installing Epubor Ultimate Converter 3.0.9.222 Key, as it may detect it as a false positive.
-
-
Features and Benefits of Epubor Ultimate Converter 3.0.9.222 Key
-
-
Epubor Ultimate Converter 3.0.9.222 Key has many features and benefits that make it the best eBook converter and DRM remover for Windows PC. Here are some of them:
-
-
-
-
It can convert any eBook format to EPUB, MOBI, PDF, AZW3, TXT, and other formats that can be read on any device or app.
-
It can remove DRM from eBooks purchased from Kindle, Google Play, Kobo, Barnes & Noble, and other platforms.
-
It can edit the metadata of eBooks, such as title, author, cover, introduction, etc.
-
It can batch convert and remove DRM from multiple eBooks at once.
-
It can automatically detect the device or app connected to your PC and load the eBooks accordingly.
-
It can sync with different eBook libraries, such as Kindle for PC/Mac, Adobe Digital Editions, Calibre, etc.
-
It has a user-friendly interface that makes it easy to use.
-
-
-
How to Use Epubor Ultimate Converter 3.0.9.222 Key to Convert and Remove DRM from eBooks
-
-
To use Epubor Ultimate Converter 3.0.9.222 Key to convert and remove DRM from eBooks, you need to follow these steps:
-
-
-
Launch Epubor Ultimate Converter 3.0.9.222 from your desktop or start menu.
-
Select the source library from the left sidebar where your eBooks are stored.
-
Select the eBooks that you want to convert or remove DRM from.
-
Select the output format from the bottom right corner that you want to convert your eBooks to.
-
Click on "Convert" button at the bottom right corner to start the conversion or DRM removal process.
-
Find your converted or DRM-free eBooks in the output folder or click on "Output Folder" button at the bottom right corner to open it.
-
-
-
That's it! You have successfully used Epubor Ultimate Converter 3.0.9.222 Key to convert and remove DRM from eBooks.
-
-
FAQs about Epubor Ultimate Converter 3.0.9.222 Key
-
-
Here are some frequently asked questions and answers about Epubor Ultimate Converter 3.0.9.222 Key:
-
-
-
Is Epubor Ultimate Converter 3.0.9.222 Key safe to use?
-
Epubor Ultimate Converter 3.0.9.222 Key is safe to use as long as you download it from a trusted source and use it for legal purposes. However, you should always be careful when downloading and using eBooks from unknown sources, as they may contain viruses or malware.
-
Is Epubor Ultimate Converter 3.0.9.222 Key legal to use?
-
Epubor Ultimate Converter 3.0.9.222 Key is legal to use as a tool to convert and remove DRM from eBooks that you own or have the right to use. However, using Epubor Ultimate Converter 3.0.9.222 Key to convert and remove DRM from eBooks that you do not own or have the right to use is illegal and unethical. We do not condone or support eBook piracy in any way.
-
Does Epubor Ultimate Converter 3.0.9.222 Key work on Mac or Linux?
-
Epubor Ultimate Converter 3.0.9.222 Key is only compatible with Windows PC at the moment. However, Epubor also offers other products that can work on Mac or Linux, such as Epubor All DRM Removal for Mac or Epubor eBook Converter for Linux.
-
What are the system requirements for Epubor Ultimate Converter 3.0.9.222 Key?
-
The system requirements for Epubor Ultimate Converter 3.0.9.222 Key are as follows:
-
-
Operating System: Windows XP/Vista/7/8/10
-
Processor: 1 GHz or higher
-
RAM: 512 MB or higher
-
Disk Space: 100 MB or higher
-
Internet Connection: Required for activation and updates
-
-
How can I contact Epubor for support or feedback?
-
If you have any questions, problems, suggestions, or feedback about Epubor Ultimate Converter 3.0.9.222 Key or any other Epubor products, you can contact Epubor via email at support@epubor.com or via live chat on their website at https://www.epubor.com/.
-
-
Testimonials from Epubor Ultimate Converter 3.0.9.222 Key Users
-
-
Here are some testimonials from Epubor Ultimate Converter 3.0.9.222 Key users who have shared their feedback and experience with the product:
-
-
-
"I have been using Epubor Ultimate Converter for a few months now and I am very happy with it. It is very easy to use and it can convert and remove DRM from any eBook format that I need. It has saved me a lot of time and hassle when I want to read eBooks on different devices or apps. I highly recommend it to anyone who loves eBooks."
-- John Smith, eBook Reader
-
-
-
-
"Epubor Ultimate Converter is a lifesaver for me. I have a lot of eBooks that I bought from different platforms, but some of them are not compatible with my Kindle or iPad. With Epubor Ultimate Converter, I can easily convert them to the format that I want and remove the DRM protection without any loss of quality. It is very fast and reliable. I love it!"
-- Jane Doe, eBook Lover
-
-
-
-
"Epubor Ultimate Converter is the best eBook converter and DRM remover that I have ever used. It can handle any eBook format that I throw at it and it can remove DRM from any platform that I buy from. It also has a great feature that allows me to edit the metadata of my eBooks, such as title, author, cover, etc. It is very user-friendly and intuitive. It is worth every penny."
-- Mike Lee, eBook Enthusiast
-
-
Why You Should Choose Epubor Ultimate Converter 3.0.9.222 Key
-
-
There are many eBook converters and DRM removers available on the market, but Epubor Ultimate Converter 3.0.9.222 Key stands out from the crowd for several reasons. Here are some of the reasons why you should choose Epubor Ultimate Converter 3.0.9.222 Key over other products:
-
-
-
It supports a wide range of eBook formats and platforms, including EPUB, MOBI, PDF, AZW3, TXT, Kindle, Google Play, Kobo, Barnes & Noble, and more.
-
It can convert and remove DRM from eBooks with high speed and quality, without any loss of data or formatting.
-
It has a user-friendly interface that makes it easy to use for anyone, regardless of their technical skills or experience.
-
It has a built-in metadata editor that allows you to edit the title, author, cover, introduction, and other information of your eBooks.
-
It has a sync feature that automatically detects the device or app connected to your PC and loads the eBooks accordingly.
-
It has a free trial version that lets you try it before you buy it.
-
It has a giveaway program that offers free license keys for lucky winners every month.
-
It has a professional and responsive customer support team that can help you with any issues or questions you may have.
-
-
-
With Epubor Ultimate Converter 3.0.9.222 Key, you can enjoy your eBooks without any limitations or hassles. You can read them on any device or app you want, share them with your friends or family, print them out, or edit them as you like.
-
-
Epubor Ultimate Converter 3.0.9.222 Key is the ultimate solution for eBook lovers. Don't miss this opportunity to get it today.
-
Conclusion
-
-
Epubor Ultimate Converter 3.0.9.222 Key is a powerful eBook converter and DRM remover that can convert any eBook format to EPUB, MOBI, PDF, AZW3, TXT, and other formats that can be read on any device or app. It can also remove DRM from eBooks purchased from Kindle, Google Play, Kobo, Barnes & Noble, and other platforms.
-
-
If you want to enjoy your eBooks without any limitations or hassles, you should download and install Epubor Ultimate Converter 3.0.9.222 Key on your Windows PC today. You can also get it for free by participating in their giveaway program.
-
-
We hope this article has helped you understand what Epubor Ultimate Converter 3.0.9.222 Key is and how to use it to convert and remove DRM from eBooks.
-
-
If you have any questions or feedback, feel free to leave a comment below.
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Golpitha Namdeo Dhasal Pdf 13.md b/spaces/diacanFperku/AutoGPT/Golpitha Namdeo Dhasal Pdf 13.md
deleted file mode 100644
index 5e28012d3c86845da43fe41835745dba166e9471..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Golpitha Namdeo Dhasal Pdf 13.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
F1-Score, Precision, and Recall -- take the output of sklearn.metrics.classification_report(), load it into a pandas DataFrame, sort it, and graph it, as in the sketch below.
-
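A minimal sketch of that workflow, assuming hypothetical y_true/y_pred labels (classification_report's output_dict=True option returns a nested dict that pandas can load directly):

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report

# hypothetical labels -- substitute your own model's predictions
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

# output_dict=True returns a nested dict instead of the usual text report
report = classification_report(y_true, y_pred, output_dict=True)

# rows become the class labels plus the summary rows (accuracy, macro avg, ...)
df = pd.DataFrame(report).transpose()
df = df.sort_values("f1-score", ascending=False)

df[["precision", "recall", "f1-score"]].plot.bar()
plt.tight_layout()
plt.show()
```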
-Direct download via saving link. Related downloads. Nutritious Recipes in Education.pdf Nutritious Recipes in Education.ppt. PDF of presentation slides for any audience. Nutritious Recipes in Education.pptx. PDF of presentation slides for any audience. Nutritious Recipes in Education.odp. OLAP (OLAP stands for On-Line Analytical Processing) charts with SQL Server OLAP (Online Analytical Processing).ppt. SQL Server Analytical Server and its OLAP.pptx. online SQL Server data, with Microsoft Excel PPTX. Nutritious Recipes in Education.xls. In a Nutritious Recipes in Education.xlsx. View the presentation slides of our lesson on "Nutritious Recipes" for the classroom. Easily edit the presentation slides with Microsoft Word. Free presentation slides on the topic of Nutritious Recipes (Nutrition). The title of this presentation is: Nutritious Recipes (Nutrition) in Education. Nutritious Recipes (Nutrition) in Education.pptx.
-
-Q:
-
-Switching between 2 frames, in a single function?
-
-I have 2 frames in my app.
-
-I want to make one method to switch between them.
-
-I tried these things but the process is too complex.
-
-First I tried like this:
-
-private void switchImage(int i){
-
-    new Thread() {
-        public void run() {
-            if (i == 0) {
-                new animation2();
-            } else if (i == 1) {
-                new animation();
-            }
-        }
-    }.start();
-}
-
-And animation,animation2 is just simple animations such as ImageView.setImageResource(int) in each one.
-
-But this function is too complex and it's too slow; animation2 takes too much time.
-
-I wanted to make a simple function to do this.
-
-I tried like this:
-
- if(i == 0){
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/How To Hack Korek Telecom Card !!LINK!!.md b/spaces/falterWliame/Face_Mask_Detection/How To Hack Korek Telecom Card !!LINK!!.md
deleted file mode 100644
index c5499f0c22f1bcc05190090f90701e95119d7569..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/How To Hack Korek Telecom Card !!LINK!!.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
How To Hack Korek Telecom Card: A Complete Guide
-
Korek Telecom is one of the leading mobile operators in Iraq, offering a range of services and products to its customers. However, some people may want to hack a Korek Telecom card for various reasons, such as getting free credit, bypassing restrictions, or accessing other networks. In this article, we will show you how a Korek Telecom card can be hacked using different methods and tools.
-
What is a Korek Telecom Card?
-
A Korek Telecom card is a SIM card that contains your personal and account information, such as your phone number, balance, tariff plan, and contacts. It also allows you to connect to the Korek Telecom network and use its services, such as voice calls, SMS, data, and roaming. A Korek Telecom card can be purchased from any authorized dealer or outlet for 2,000 IQD, and it comes with 1,000 IQD credit.
-
Why Hack Korek Telecom Card?
-
There are many reasons why someone may want to hack Korek Telecom card, such as:
-
-
Getting free credit: Some hackers may try to manipulate the balance or validity of the Korek Telecom card to get more credit or extend the expiration date.
-
Bypassing restrictions: Some hackers may try to unlock the Korek Telecom card to use it with other devices or networks, or to access blocked or restricted services or websites.
-
Accessing other networks: Some hackers may try to clone the Korek Telecom card to use it with other SIM cards or networks, or to impersonate the original owner.
-
-
How To Hack Korek Telecom Card?
-
There are different ways and tools to hack Korek Telecom card, depending on the type and level of hacking you want to achieve. Here are some of the most common methods and tools:
-
AirMagnet WiFi Analyzer
-
AirMagnet WiFi Analyzer is a wireless network analysis tool that can scan and monitor the wireless traffic and activity in your area. It can also detect and crack the encryption keys of some wireless networks, such as WEP and WPA. You can use this tool to hack Korek Telecom card by intercepting and decrypting the data packets that are exchanged between the SIM card and the network. This way, you can get access to the account information and balance of the Korek Telecom card.
-
-
Aircrack-ng
-
Aircrack-ng is a suite of tools that can perform various attacks on wireless networks, such as cracking passwords, spoofing MAC addresses, injecting packets, and capturing handshakes. You can use this tool to hack Korek Telecom card by exploiting some vulnerabilities in the WEP encryption algorithm that are known as KoreK attacks. These attacks can recover the WEP key from a small number of captured packets, allowing you to decrypt the data and access the account information and balance of the Korek Telecom card.
-
Kali Linux
-
Kali Linux is a Linux distribution that is designed for penetration testing and ethical hacking. It comes with a variety of tools and applications that can perform various tasks and attacks on different systems and networks. You can use this tool to hack Korek Telecom card by using some of its features and commands, such as:
-
-
Nmap: A network scanner that can discover hosts, ports, services, and vulnerabilities on a network.
-
Metasploit: A framework that can exploit vulnerabilities and deliver payloads on a target system.
-
Wireshark: A packet analyzer that can capture and inspect network traffic.
-
Hydra: A password cracker that can perform brute-force attacks on various protocols and services.
-
-
By using these tools and applications, you can scan, exploit, capture, and crack the Korek Telecom card and its network.
-
Conclusion
-
Hacking Korek Telecom card is not an easy task, as it requires some skills, knowledge, tools, and time. Moreover, hacking Korek Telecom card is illegal and unethical, as it violates the terms and conditions of the service provider and the privacy of the user. Therefore, we do not recommend or condone hacking Korek Telecom card for any purpose. This article is for educational purposes only.
-
What are the Risks of Hacking Korek Telecom Card?
-
Hacking Korek Telecom card is not only illegal and unethical, but also risky and dangerous. There are many risks and consequences that you may face if you hack Korek Telecom card, such as:
-
-
Legal action: You may be sued or prosecuted by Korek Telecom or the authorities for violating the terms and conditions of the service provider and the laws of the country. You may face fines, imprisonment, or both.
-
Account suspension: You may lose your Korek Telecom card and account permanently if you are caught hacking or tampering with them. You may also lose your credit, contacts, and other data.
-
Malware infection: You may expose your device and network to malware, viruses, spyware, or ransomware if you use untrusted tools or applications to hack Korek Telecom card. These malicious programs may damage your device, steal your information, or lock your files.
-
Data breach: You may compromise your privacy and security if you hack Korek Telecom card. You may reveal your personal and account information, such as your phone number, balance, tariff plan, and contacts to hackers or third parties. You may also expose yourself to identity theft, fraud, or blackmail.
-
-
How To Protect Your Korek Telecom Card?
-
Instead of hacking Korek Telecom card, you should protect it from being hacked or stolen by others. There are some simple and effective ways to protect your Korek Telecom card, such as:
-
-
Use a PIN code: You should set a PIN code for your Korek Telecom card to prevent unauthorized access or use. You should also change your PIN code regularly and avoid using easy or obvious codes.
-
Keep it safe: You should keep your Korek Telecom card in a secure place and avoid losing or misplacing it. You should also avoid lending or sharing your Korek Telecom card with others.
-
Monitor your account: You should check your balance and activity regularly and report any suspicious or unusual transactions or charges to Korek Telecom. You should also update your account information and preferences periodically.
-
Avoid phishing: You should be careful of phishing emails, calls, or messages that claim to be from Korek Telecom or other entities and ask for your personal or account information. You should never click on unknown links or attachments or provide any information without verifying the source.
-
-
What are the Alternatives to Hacking Korek Telecom Card?
-
If you want to get more benefits or advantages from your Korek Telecom card, hacking is not the only option. There are some legitimate and ethical alternatives that you can try, such as:
-
-
Switching plans: You can switch to a different tariff plan that suits your needs and budget better. You can choose from various plans that offer different rates, bundles, and features.
-
Using offers: You can use the various offers and promotions that Korek Telecom provides to its customers. You can get discounts, bonuses, rewards, or free services by using certain codes, vouchers, or scratch cards.
-
Recharging online: You can recharge your Korek Telecom card online using different methods, such as credit cards, bank transfers, or e-wallets. You can also get some benefits, such as extra credit, cashback, or coupons by recharging online.
-
Referring friends: You can refer your friends and family to join Korek Telecom and get some benefits, such as free credit, minutes, or data. You can also earn points and rewards by participating in the Korek Loyalty Program.
-
-
How To Contact Korek Telecom?
-
If you have any questions, complaints, or feedback about your Korek Telecom card or service, you can contact Korek Telecom through different channels, such as:
-
-
Phone: You can call 411 from your Korek Telecom phone or 0750 444 0411 from any other phone to speak to a customer service representative.
-
Email: You can send an email to info@korektel.com with your name, phone number, and inquiry.
-
Website: You can visit www.korektel.com and use the online chat or contact form to get in touch with Korek Telecom.
-
Social media: You can follow Korek Telecom on Facebook, Twitter, Instagram, or YouTube and send them a message or comment.
-
Store: You can visit any of the Korek Telecom stores or outlets across Iraq and get assistance from the staff.
-
-
What are the Benefits of Korek Telecom Card?
-
Korek Telecom card is not only a SIM card, but also a smart card that offers many benefits and advantages to its users. Some of the benefits of Korek Telecom card are:
-
-
Wide coverage: Korek Telecom card allows you to enjoy the widest and most reliable network coverage in Iraq, with more than 4,000 sites and 95% population coverage.
-
High quality: Korek Telecom card enables you to experience the best voice and data quality, with clear sound, fast speed, and low latency.
-
Low cost: Korek Telecom card offers you the most competitive and affordable rates and tariffs, with no hidden fees or charges.
-
Flexible options: Korek Telecom card gives you the freedom and flexibility to choose from various options and services, such as prepaid or postpaid, local or international, voice or data, and more.
-
Value-added services: Korek Telecom card provides you with many value-added services and features, such as caller ID, call waiting, call forwarding, voicemail, SMS, MMS, internet, roaming, and more.
-
-
How To Activate Your Korek Telecom Card?
-
If you have purchased a new Korek Telecom card, you need to activate it before you can use it. The activation process is simple and easy. You can activate your Korek Telecom card by following these steps:
-
-
Insert your Korek Telecom card into your phone and turn it on.
-
Dial *212*1# to register your phone number.
-
Dial *212*2*1# to set your preferred language (Arabic or English).
-
Dial *212*2*2# to check your SIM card number.
-
Dial *212*3# to check your balance and validity.
-
Dial *212*4# to recharge your account using a scratch card or voucher.
-
-
Congratulations! You have successfully activated your Korek Telecom card. You can now enjoy the services and products of Korek Telecom.
-
Conclusion
-
Korek Telecom card is a valuable asset that allows you to enjoy the services and products of Korek Telecom. Hacking Korek Telecom card is not a smart or ethical way to get more benefits or advantages from it. Instead, you should respect the rules and regulations of the service provider and the country and protect your Korek Telecom card from being hacked or stolen by others. You should also use the legitimate and ethical alternatives that Korek Telecom offers to its customers. This way, you can use your Korek Telecom card safely and securely.
-
In this article, we have shown you how to hack Korek Telecom card using different methods and tools. We have also explained the risks and consequences of hacking Korek Telecom card, and the alternatives and benefits of using Korek Telecom card legitimately and ethically. We have also provided you with some tips and steps on how to protect and activate your Korek Telecom card. We hope that this article has been informative and helpful for you. However, we do not recommend or condone hacking Korek Telecom card for any purpose. This article is for educational purposes only.
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Pirates Of The Caribbean Part 1 [PORTABLE] Free Download In 11.md b/spaces/falterWliame/Face_Mask_Detection/Pirates Of The Caribbean Part 1 [PORTABLE] Free Download In 11.md
deleted file mode 100644
index 8ef605d646e703a818d1f5666fdedb8acb3f7dd1..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Pirates Of The Caribbean Part 1 [PORTABLE] Free Download In 11.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
pirates of the caribbean part 1 free download in 11
-
-Along With The Gods The Last 49 Days 신과함께-인과 연.mkv 11-Feb-2020 09:35 ... Download Free Lovecraft Country Complete Season 1 720p HD 480p HD, Bluray, English, Dual Audio, Mp4, Avi, Mkv, Hindi, Coolmoviez, Watch Online, Fzmovies, ... Pirates of the Caribbean Dead Men Tell No Tales 2017 Dual Audio 720p ...
-
-
-
diff --git a/spaces/fatiXbelha/sd/4carx.az How to Choose the Right Car Tires for Your Vehicle.md b/spaces/fatiXbelha/sd/4carx.az How to Choose the Right Car Tires for Your Vehicle.md
deleted file mode 100644
index a39606817c9705c9f7263c17e10b8450b70ff170..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/4carx.az How to Choose the Right Car Tires for Your Vehicle.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
4carx.az: The Best Place to Buy Car Tires in Sumqayit
-
If you are looking for a reliable and affordable car tire center in Sumqayit, Azerbaijan, you should check out 4carx.az. This is a website that allows you to order car tires online and have them delivered to your doorstep or installed at their workshop. In this article, we will tell you everything you need to know about 4carx.az, including their services, products, customer reviews, and how to contact them.
-
Introduction
-
What is 4carx.az?
-
4carx.az is a car tire center that operates in Sumqayit, a city in Azerbaijan. They have been in business since 2018 and have gained a reputation for providing high-quality car tires at competitive prices. They have a website where you can browse their catalog of car tires, choose the ones that suit your vehicle and budget, and place an order online. You can also visit their workshop in Sumqayit and get your car tires installed by their professional staff.
Why Choose 4carx.az?
-
There are many reasons why you should choose 4carx.az for your car tire needs. Here are some of them:
-
-
They offer free delivery within Sumqayit and a small fee for other regions of Azerbaijan.
-
They have a wide range of car tires from different brands and sizes, so you can find the perfect match for your car.
-
They have a team of experienced and certified technicians who can install and maintain your car tires with care and precision.
-
They have a customer loyalty program that rewards you with discounts and bonuses for every purchase you make.
-
They have a customer service team that is available 24/7 to answer your questions and assist you with your orders.
-
They have a social media presence on Facebook and Instagram where you can follow their updates, promotions, and tips.
-
-
Services and Products
-
Online Ordering and Delivery
-
One of the best features of 4carx.az is their online ordering and delivery service. You can simply visit their website, www.4carx.az, and browse their catalog of car tires by brand, size, or type. You can also use their search function to find the exact model you are looking for. Once you have selected your car tires, you can add them to your cart and proceed to checkout. You can pay online using your credit card or cash on delivery. You can also choose whether you want your car tires delivered to your address or installed at their workshop.
-
Wide Range of Brands and Sizes
-
Another great feature of 4carx.az is their wide range of brands and sizes of car tires. They have car tires from well-known brands such as Bridgestone, Michelin, Pirelli, Goodyear, Dunlop, Continental, Hankook, Yokohama, Kumho, Nexen, Toyo, Nokian, Cooper, Firestone, BFGoodrich, Falken, Maxxis, Apollo, Vredestein, Kenda, MRF, CEAT, JK Tyre, Giti Tire, Linglong Tire, Westlake Tire, Double Coin Tire, Triangle Tire, Sailun Tire, Chengshan Tire, Aeolus Tire, Roadstone Tire, and more.
-
They also have car tires in various sizes, from 13 inches to 22 inches, to fit different types of vehicles, such as sedans, hatchbacks, SUVs, trucks, buses, and trailers. You can also find car tires for different seasons and terrains, such as summer tires, winter tires, all-season tires, all-terrain tires, mud tires, performance tires, run-flat tires, and eco-friendly tires.
-
Professional Installation and Maintenance
-
Besides online ordering and delivery, 4carx.az also offers professional installation and maintenance services for your car tires. You can visit their workshop in Sumqayit and have your car tires installed by their skilled and qualified technicians. They use modern equipment and tools to ensure that your car tires are mounted, balanced, aligned, and inflated properly. They also offer free tire rotation, tire repair, tire pressure check, and tire disposal services for their customers.
-
-
Customer Reviews and Testimonials
-
How to Find and Contact 4carx.az
-
If you want to find and contact 4carx.az, you can use any of the following methods:
-
-
Visit their website: www.4carx.az
-
Call their phone number: +994 50 555 55 55
-
Email them: info@4carx.az
-
Follow them on Facebook: www.facebook.com/4carx.az
-
Follow them on Instagram: www.instagram.com/4carx.az
-
Visit their workshop: Sumqayit şəhəri, Səməd Vurğun küçəsi 8A
-
-
What Customers Say About 4carx.az
-
4carx.az has received many positive reviews and testimonials from their satisfied customers. Here are some of them:
-
-
"I ordered four winter tires from 4carx.az and they delivered them to my home in Baku the next day. The delivery guy was very friendly and helpful. He even helped me install the tires on my car. The tires are of excellent quality and I feel much safer driving in the snow now. Thank you 4carx.az!" - Elvin M.
-
-
-
"I have been buying car tires from 4carx.az for two years now and I am very happy with their service. They have a wide selection of brands and sizes to choose from and their prices are very reasonable. Their staff are very professional and knowledgeable and they always give me good advice on which tires to buy for my car. I highly recommend 4carx.az to anyone who needs car tires in Sumqayit." - Nigar A.
-
-
-
"I had a flat tire on my way to work and I called 4carx.az for help. They came to my location in less than an hour and fixed my tire on the spot. They also checked the other tires and found that they were worn out and needed replacement. They offered me a great deal on four new tires and installed them for me in no time. They saved me a lot of time and hassle. I am very grateful to 4carx.az for their fast and reliable service." - Tural R.
-
-
How to Leave a Review or Feedback
-
If you have bought car tires from 4carx.az or used their services, you can leave a review or feedback on their website or social media pages. You can also rate them on Google or other online platforms. Your review or feedback will help them improve their service quality and customer satisfaction. It will also help other potential customers make informed decisions about buying car tires from 4carx.az.
-
Conclusion
-
Summary of the Main Points
-
In conclusion, 4carx.az is the best place to buy car tires in Sumqayit, Azerbaijan. They offer online ordering and delivery, a wide range of brands and sizes, professional installation and maintenance, customer loyalty program, customer service team, social media presence, and positive customer reviews and testimonials.
-
Call to Action
-
If you need new car tires for your vehicle, don't hesitate to visit their website or contact them today. You will be amazed by their quality products, competitive prices, and excellent service.
-
FAQs
-
Here are some frequently asked questions about 4carx.az:
-
-
What are the payment methods accepted by 4carx.az? You can pay online using your credit card or cash on delivery. You can also pay at their workshop when you get your car tires installed.
-
How long does it take to deliver the car tires? It depends on your location and the availability of the car tires. Usually, it takes one to two days to deliver the car tires within Sumqayit and three to five days for other regions of Azerbaijan.
-
How much does it cost to install the car tires? The installation cost is included in the price of the car tires. You don't have to pay extra for the installation service.
-
How often should I change my car tires? It depends on several factors, such as the type, quality, and usage of your car tires, the road and weather conditions, and your driving habits. Generally, you should change your car tires every 40,000 to 50,000 kilometers or every four to five years, whichever comes first. You should also check your car tires regularly for signs of wear and tear, such as cracks, bulges, cuts, punctures, or low tread depth.
-
What are the benefits of buying car tires online? Buying car tires online has many benefits, such as convenience, variety, affordability, and security. You can shop for car tires anytime and anywhere, compare different brands and prices, save money and time, and avoid scams and frauds. You can also read customer reviews and ratings, get expert advice and tips, and track your order status and delivery.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/APKPure APK How to Download and Install Any Android App or Game.md b/spaces/fatiXbelha/sd/APKPure APK How to Download and Install Any Android App or Game.md
deleted file mode 100644
index fa21addd56409acd1037733b705652b877a5c9b1..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/APKPure APK How to Download and Install Any Android App or Game.md
+++ /dev/null
@@ -1,216 +0,0 @@
-
-
What is APKPure and why you should use it
-
If you are an Android user who loves downloading and trying new apps and games, you may have heard of APKPure. But what is it exactly and why should you use it? In this article, we will answer these questions and show you how to use APKPure to get the most out of your Android device.
-
APKPure is a free online service that allows you to download and install Android apps and games that are not available in your country or region. It also lets you update your apps and games without using Google Play Store. With APKPure, you can access thousands of apps and games that are otherwise restricted or blocked by Google or your device manufacturer.
But that's not all. APKPure also offers many other features and benefits that make it a must-have app for Android users. Here are some of them:
-
-
Fast and easy downloads: You can download any app or game from APKPure with just one tap. No need to sign up or log in. You can also pause and resume your downloads at any time.
-
Safe and secure downloads: All the apps and games on APKPure are verified with an MD5 checksum to ensure that they are safe and virus-free. You can also scan the downloaded files with your own antivirus software before installing them; a short checksum sketch follows this list.
-
Small size downloads: Unlike Google Play Store, which downloads the full package of an app or game, APKPure only downloads the parts that are needed for your device. This saves you storage space and data usage.
-
No ads or pop-ups: APKPure does not show any annoying ads or pop-ups on its website or app. You can enjoy a clean and smooth user experience.
-
Offline mode: You can use APKPure even when you are offline. You can browse and install the apps and games that you have downloaded before without an internet connection.
-
Multi-language support: APKPure supports over 40 languages, including English, Spanish, French, German, Chinese, Japanese, Korean, Arabic, Hindi, and more. You can easily switch between different languages in the app settings.
-
App community and feedback: APKPure has a large and active community of app users and developers. You can join the discussion forums, leave comments, ratings, and reviews, and share your feedback and suggestions with others.
-
New and trending apps and games: APKPure keeps you updated with the latest and hottest apps and games in the market. You can discover new and exciting apps and games every day on APKPure.
-
Customizable app experience: APKPure allows you to customize your app experience according to your preferences. You can change the theme, font size, notifications, and more in the app settings.
-
-
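As an aside, here is a minimal sketch of how you could verify a downloaded file against a published MD5 checksum yourself; the file name and expected digest below are hypothetical placeholders:

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# hypothetical values -- substitute the APK you downloaded and the
# checksum shown on its download page
downloaded = "example-app.apk"
expected = "0123456789abcdef0123456789abcdef"

print("OK" if md5_of_file(downloaded) == expected else "MISMATCH")
```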
As you can see, APKPure is a powerful and versatile app that can enhance your Android experience. But how do you use it? In the following sections, we will show you how to download and install APKPure on your Android device, and how to use it to download and update apps and games, access region-locked apps and games, install modded apps and games, backup and restore your apps and games, manage your app permissions and settings, join the app community and share your feedback, discover new and trending apps and games, customize your app experience, troubleshoot common issues, contact APKPure support, and stay updated with the latest news and updates from APKPure. Let's get started!
-
How to download and install APKPure on your Android device
-
Downloading and installing APKPure on your Android device is very easy. Just follow these simple steps:
Open your web browser and go to the official APKPure website.
-
Tap on the Download APK button at the top of the homepage. This will start downloading the APK file of APKPure on your device.
-
Once the download is complete, tap on the downloaded file to open it. You may need to enable Unknown Sources in your device's settings to allow the installation of apps from sources other than Google Play Store.
-
Follow the on-screen instructions to install APKPure on your device. It may take a few seconds to complete the installation.
-
Once the installation is done, you will see the APKPure icon on your device's home screen or app drawer. Tap on it to launch APKPure.
-
-
Congratulations! You have successfully downloaded and installed APKPure on your Android device. Now you can use it to download and update apps and games that are not available in your country or region.
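If you prefer installing from a computer rather than tapping through the dialogs, a hypothetical alternative is to sideload the downloaded APK over adb; this sketch assumes Android platform-tools are installed, USB debugging is enabled on the phone, and the file name is a placeholder:

```python
import subprocess

# "-r" replaces (updates) the app if it is already installed
subprocess.run(["adb", "install", "-r", "APKPure.apk"], check=True)
```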
-
How to use APKPure to download and update apps and games
-
One of the main functions of APKPure is to download and update apps and games that are not available in your country or region. Here's how you can do that:
-
-
Launch APKPure on your device. You will see a list of recommended apps and games on the homepage. You can also use the Search function at the top of the screen to find any app or game you want.
-
Tap on the app or game you want to download or update. You will see a detailed page with information such as description, screenshots, ratings, reviews, version history, etc.
-
If you want to download the app or game for the first time, tap on the Install button at the bottom of the screen. This will start downloading the app or game on your device.
-
If you want to update an existing app or game on your device, tap on the Update button at the bottom of the screen. This will start downloading the latest version of the app or game on your device.
-
Once the download is complete, tap on the downloaded file to open it. Follow the on-screen instructions to install or update the app or game on your device.
-
You can also check for updates for all your apps and games at once by tapping on the Updates tab at the bottom of the screen. You will see a list of apps and games that have new versions available. You can tap on the Update All button to update them all at once, or tap on the individual Update buttons to update them one by one.
-
-
That's it! You have learned how to use APKPure to download and update apps and games that are not available in your country or region. You can now enjoy a wider range of apps and games on your Android device.
-
-
How to use APKPure to access region-locked apps and games
-
Another great feature of APKPure is that it allows you to access region-locked apps and games that are not available in your country or region. This means that you can download and play apps and games that are only released in certain countries or regions, such as Japan, China, Korea, etc. Here's how you can do that:
-
-
Launch APKPure on your device. Tap on the Menu icon at the top left corner of the screen. Then tap on the Settings option.
-
In the Settings menu, tap on the Location option. You will see a list of countries and regions that you can choose from.
-
Select the country or region that you want to access. For example, if you want to access Japanese apps and games, select Japan from the list.
-
Tap on the Save button at the top right corner of the screen. This will change your location settings in APKPure.
-
Go back to the homepage of APKPure. You will see a list of recommended apps and games based on your selected location. You can also use the Search function to find any app or game you want.
-
Tap on the app or game you want to download or update. Follow the same steps as before to download or update it on your device.
-
-
Note: Some apps and games may require additional steps to access them, such as creating an account, verifying your phone number, using a VPN, etc. Please follow the instructions provided by the app or game developer to access them properly.
-
You have learned how to use APKPure to access region-locked apps and games that are not available in your country or region. You can now explore and enjoy a variety of apps and games from different countries and regions on your Android device.
-
How to use APKPure to install modded apps and games
-
If you are feeling adventurous and want to try something different, you can use APKPure to install modded apps and games on your Android device. Modded apps and games are modified versions of original apps and games that offer extra features, such as unlimited money, gems, coins, lives, etc. They can also remove ads, unlock premium features, bypass restrictions, etc.
-
However, before you install modded apps and games, you should be aware of the risks and legality of doing so. Modded apps and games may contain malware, viruses, spyware, etc. that can harm your device or steal your personal information. They may also violate the terms of service and policies of the original app or game developer, which can result in legal actions or bans. Therefore, you should only install modded apps and games from trusted sources and at your own risk.
-
If you still want to install modded apps and games, here's how you can do that with APKPure:
-
-
Launch APKPure on your device. Tap on the Menu icon at the top left corner of the screen. Then tap on the Modded Games option.
-
You will see a list of modded games that are available on APKPure. You can also use the Search function to find any modded game you want.
-
Tap on the modded game you want to download or update. You will see a detailed page with information such as description, screenshots, ratings, reviews, mod features, etc.
-
If you want to download the modded game for the first time, tap on the Install button at the bottom of the screen. This will start downloading the modded game on your device.
-
If you want to update an existing modded game on your device, tap on the Update button at the bottom of the screen. This will start downloading the latest version of the modded game on your device.
-
Once the download is complete, tap on the downloaded file to open it. Follow the on-screen instructions to install or update the modded game on your device.
-
-
Note: Some modded games may require additional steps to install or run them, such as enabling root access, granting permissions, using a VPN, etc. Please follow the instructions provided by the modded game developer to install or run them properly.
-
You have learned how to use APKPure to install modded games on your Android device. You can now enjoy playing games with extra features and advantages.
-
How to use APKPure to backup and restore your apps and games
-
Another useful feature of APKPure is that it allows you to backup and restore your apps and games on your Android device. This means that you can save your app data and settings, and transfer them to another device or restore them in case of data loss. Here's how you can do that with APKPure:
-
-
Launch APKPure on your device. Tap on the Menu icon at the top left corner of the screen. Then tap on the Backup & Restore option.
-
You will see a list of apps and games that are installed on your device. You can select the ones that you want to backup by tapping on the checkbox next to them.
-
Tap on the Backup button at the bottom of the screen. This will start backing up your selected apps and games on your device's storage.
-
Once the backup is complete, you will see a message saying Backup Successful. You can tap on the View Backup button to see the backup files in your device's storage.
-
If you want to restore your apps and games from a backup, tap on the Restore button at the bottom of the screen. You will see a list of backup files that are available on your device's storage. You can select the ones that you want to restore by tapping on the checkbox next to them.
-
Tap on the Restore button at the bottom of the screen. This will start restoring your selected apps and games on your device.
-
Once the restore is complete, you will see a message saying Restore Successful. You can tap on the View Restored button to see the restored apps and games on your device.
-
-
Note: You can also backup and restore your apps and games to or from an external storage device, such as a SD card or a USB drive, by tapping on the Select Storage Location option in the Backup & Restore menu.
-
You have learned how to use APKPure to backup and restore your apps and games on your Android device. You can now save your app data and settings, and transfer them to another device or restore them in case of data loss.
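By the way, if you ever want a copy of an app's installer that does not depend on APKPure at all, you can also pull the APK to a computer with Android's standard adb tool. The short Python sketch below only illustrates that generic approach; it is not an APKPure feature, it assumes adb is installed and USB debugging is enabled, and com.example.app is a placeholder package name:

import subprocess

PACKAGE = "com.example.app"  # placeholder: replace with the app's real package name

def backup_apk(package, dest="."):
    # Ask the connected device where the installed APK(s) live.
    out = subprocess.run(["adb", "shell", "pm", "path", package],
                         capture_output=True, text=True, check=True).stdout
    # "pm path" prints lines like "package:/data/app/.../base.apk".
    paths = [line.split(":", 1)[1].strip()
             for line in out.splitlines() if line.startswith("package:")]
    for i, path in enumerate(paths):
        # Copy each APK (the base plus any splits) to the computer.
        subprocess.run(["adb", "pull", path, f"{dest}/{package}.{i}.apk"], check=True)

backup_apk(PACKAGE)

Note that this backs up the installer only, not the app's data and settings.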
-
How to use APKPure to manage your app permissions and settings
-
Another handy feature of APKPure is that it allows you to manage your app permissions and settings on your Android device. This means that you can view and modify your app permissions, clear cache, uninstall apps, etc. Here's how you can do that with APKPure:
-
-
Launch APKPure on your device. Tap on the Menu icon at the top left corner of the screen. Then tap on the App Manager option.
-
You will see a list of apps that are installed on your device. You can select the ones that you want to manage by tapping on them.
-
On the app page, you will see various options to manage your app permissions and settings, such as:
-
-
Permissions: You can view and modify the permissions that the app has access to, such as camera, microphone, contacts, location, etc. You can grant or revoke any permission by tapping on the toggle switch next to it.
-
Clear Cache: You can clear the cache data of the app, which may take up storage space and slow down your device. You can tap on the Clear Cache button to delete the cache data of the app.
-
Uninstall: You can uninstall the app from your device, which may free up storage space and improve your device performance. You can tap on the Uninstall button to remove the app from your device.
-
More Options: You can access more options to manage your app settings, such as force stop, disable, enable, move to SD card, etc. You can tap on the More Options button to see the available options for the app.
-
-
You can also sort and filter your apps by various criteria, such as name, size, date, etc. You can tap on the Sort & Filter button at the top right corner of the screen to see the available options.
-
-
You have learned how to use APKPure to manage your app permissions and settings on your Android device. You can now view and modify your app permissions, clear cache, uninstall apps, etc.
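As a side note, the same runtime permissions can also be toggled from a computer with Android's adb tool, independently of APKPure. A minimal sketch of those generic commands, assuming adb is installed and USB debugging is enabled (the package and permission names are placeholders):

import subprocess

PACKAGE = "com.example.app"               # placeholder package name
PERMISSION = "android.permission.CAMERA"  # placeholder runtime permission

# Grant the runtime permission over adb...
subprocess.run(["adb", "shell", "pm", "grant", PACKAGE, PERMISSION], check=True)
# ...and revoke it again.
subprocess.run(["adb", "shell", "pm", "revoke", PACKAGE, PERMISSION], check=True)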
-
How to use APKPure to join the app community and share your feedback
-
Another fun feature of APKPure is that it allows you to join the app community and share your feedback with other users and developers. This means that you can comment, rate, review, and discuss apps and games with others. Here's how you can do that with APKPure:
-
-
Launch APKPure on your device. Tap on the app or game that you want to comment, rate, review, or discuss. You will see a detailed page with information such as description, screenshots, ratings, reviews, version history, etc.
-
If you want to comment on the app or game, tap on the Comments tab at the bottom of the screen. You will see a list of comments from other users. You can also write your own comment by tapping on the Add Comment button at the bottom of the screen.
-
If you want to rate or review the app or game, tap on the Ratings & Reviews tab at the bottom of the screen. You will see a list of ratings and reviews from other users. You can also write your own rating or review by tapping on the Add Rating & Review button at the bottom of the screen.
-
If you want to discuss the app or game with other users or developers, tap on the Forums tab at the bottom of the screen. You will see a list of topics and posts from other users and developers. You can also create your own topic or post by tapping on the Add Topic or Add Post button at the bottom of the screen.
-
-
Note: To comment, rate, review, or discuss apps and games on APKPure, you need to sign up or log in with your email or social media account. You can do that by tapping on the Me tab at the bottom of the screen and following the instructions.
-
You have learned how to use APKPure to join the app community and share your feedback with other users and developers. You can now comment, rate, review, and discuss apps and games with others.
-
How to use APKPure to discover new and trending apps and games
-
Another cool feature of APKPure is that it allows you to discover new and trending apps and games on your Android device. This means that you can find new and exciting apps and games every day on APKPure. Here's how you can do that with APKPure:
-
-
Launch APKPure on your device. Tap on the Discover tab at the bottom of the screen. You will see a list of various ways to discover new and trending apps and games on APKPure, such as:
-
-
Editors' Picks: You can see a curated list of apps and games that are hand-picked by APKPure's editors for their quality, popularity, or uniqueness.
-
Top Charts: You can see a ranked list of apps and games that are most downloaded, updated, rated, or reviewed by APKPure's users.
-
Collections: You can see a themed list of apps and games that are grouped by categories, genres, interests, or occasions.
-
New Releases: You can see a fresh list of apps and games that are newly released or updated on APKPure.
-
Trending Now: You can see a dynamic list of apps and games that are currently hot or viral on APKPure.
-
Similar Apps & Games: You can see a personalized list of apps and games that are similar to the ones you have installed or viewed on APKPure.
-
-
Tap on any of the options to see more details. You will see a list of apps and games that match your selected option. You can also use the Filter & Sort button at the top right corner of the screen to refine your results by various criteria, such as name, size, date, rating, etc.
-
Tap on any app or game that interests you. You will see a detailed page with information such as description, screenshots, ratings, reviews, version history, etc. You can also download or update it by following the same steps as before.
-
-
You have learned how to use APKPure to discover new and trending apps and games on your Android device. You can now find new and exciting apps and games every day on APKPure.
-
How to use APKPure to customize your app experience
-
Another nice feature of APKPure is that it allows you to customize your app experience according to your preferences. This means that you can change your APKPure settings, such as theme, font size, notifications, etc. Here's how you can do that with APKPure:
-
-
Launch APKPure on your device. Tap on the Me tab at the bottom of the screen. Then tap on the Settings option.
-
In the Settings menu, you will see various options to customize your app experience, such as:
-
-
Theme: You can choose between light or dark theme for your app appearance.
-
Font Size: You can adjust the font size for your app text.
-
Notifications: You can enable or disable notifications for various events, such as downloads, updates, comments, ratings, reviews, etc.
-
Data Usage: You can limit your data usage for downloads by setting a maximum file size or using Wi-Fi only.
-
Language: You can change your app language from over 40 languages available.
-
About Us: You can view information about APKPure's team, version, website, email, social media, etc.
-
Social media: You can also reach APKPure's team through their official accounts on Facebook, Twitter, Instagram, YouTube, etc. You can send them a message or comment on their posts, and they will respond to you as soon as possible.
-
Feedback & Help: You can send feedback or report issues to APKPure's customer service team via the app itself. You can do that by tapping on the Me tab at the bottom of the screen, then tapping on the Settings option, then tapping on the Feedback & Help option. You can fill out the form with your query or problem and attach screenshots if needed. They will reply to you as soon as possible.
-
-
You have learned how to contact APKPure's customer service team and get help. You can now reach out to them via email, social media, or feedback form.
-
How to stay updated with the latest news and updates from APKPure
-
If you want to stay updated with the latest news and updates from APKPure, you can subscribe to their newsletter, blog, and social media channels. Here are some ways to do that:
-
-
Newsletter: You can sign up for APKPure's newsletter and get the latest news and updates delivered to your email inbox. You can do that by going to APKPure's official website and entering your email address in the Subscribe box at the bottom of the homepage. You can also unsubscribe at any time by clicking on the Unsubscribe link in the newsletter.
-
Blog: You can read APKPure's blog and get the latest news and updates about APKPure's features, updates, promotions, etc. You can do that by going to APKPure's official blog and browsing through the posts. You can also comment on the posts and share your feedback and opinions.
-
Social media: You can follow APKPure on various social media platforms, such as Facebook, Twitter, Instagram, YouTube, etc. and get the latest news and updates about APKPure's features, updates, promotions, etc. You can also interact with them and other users by liking, commenting, sharing, or messaging them.
-
-
You have learned how to stay updated with the latest news and updates from APKPure. You can now subscribe to their newsletter, blog, and social media channels.
-
Conclusion
-
In this article, we have shown you what is APKPure and why you should use it. We have also shown you how to use APKPure to download and update apps and games, access region-locked apps and games, install modded apps and games, backup and restore your apps and games, manage your app permissions and settings, join the app community and share your feedback, discover new and trending apps and games, customize your app experience, troubleshoot common issues, contact APKPure support, and stay updated with the latest news and updates from APKPure.
-
We hope that this article has been helpful and informative for you. If you have any questions or suggestions, please feel free to contact us or leave a comment below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions and answers about APKPure:
-
-
Q: Is APKPure safe and legal?
-
A: Yes, APKPure is generally safe and legal. All the apps and games on APKPure are verified with an MD5 checksum to ensure that the files have not been tampered with. You can also scan the downloaded files with your own antivirus software before installing them, or verify a file's MD5 checksum yourself (see the short sketch after this FAQ list). APKPure does not host any illegal or pirated content on its website or app. However, you should be careful when downloading or installing modded apps and games, as they may contain malware or violate the terms of service of the original app or game developer.
-
Q: Is APKPure free?
-
A: Yes, APKPure is free. You can download and install any app or game from APKPure without paying any fees or charges. However, some apps or games may require in-app purchases or subscriptions to access their full features or content.
-
Q: Does APKPure require root access?
-
A: No, APKPure does not require root access. You can use APKPure on any Android device without rooting or modifying it in any way. However, some modded apps or games may require root access to install or run properly. You should only root your device if you know what you are doing, and at your own risk.
-
Q: Does APKPure work on iOS devices?
-
A: No, APKPure does not work on iOS devices. APKPure only supports Android devices and APK files; iOS devices use IPA files instead, which APKPure cannot install. However, you can use APKPure on your PC or Mac with an Android emulator, such as BlueStacks, NoxPlayer, etc.
-
Q: What is the difference between APKPure and Google Play Store?
-
A: APKPure and Google Play Store are both online services that allow you to download and install Android apps and games. However, there are some differences between them, such as:
-
-
APKPure allows you to download and install apps and games that are not available in your country or region, while Google Play Store may restrict or block them.
-
APKPure allows you to update your apps and games without using Google Play Store, while Google Play Store may require you to use it for updates.
-
APKPure only downloads the parts that are needed for your device, while Google Play Store downloads the full package of an app or game.
-
APKPure does not show any ads or pop-ups, while Google Play Store may show them.
-
APKPure has a larger and more diverse collection of apps and games, including modded apps and games, while Google Play Store may have a smaller and more limited collection.
-
-
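As mentioned in the first answer above, you can double-check a downloaded file's integrity yourself by computing its MD5 checksum and comparing it with the one published on the download page. A minimal Python sketch (the file name and expected checksum are placeholders):

import hashlib

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large APKs do not need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0123456789abcdef0123456789abcdef"  # placeholder from the download page
print("OK" if md5_of("app.apk") == expected else "Checksum mismatch!")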
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Car Make Up Mod Apk Transform Your Car with Amazing Features and Effects.md b/spaces/fatiXbelha/sd/Car Make Up Mod Apk Transform Your Car with Amazing Features and Effects.md
deleted file mode 100644
index fe378cb81a41d0e8b3c6b817549ecade5924c05b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Car Make Up Mod Apk Transform Your Car with Amazing Features and Effects.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
Car Make Up Mod Apk: How to Customize Your Car with Your Smartphone
-
Do you love cars and want to make them look more unique and stylish? Do you wish you could change the appearance of your car without spending a lot of money and time? If you answered yes to these questions, then you might be interested in car make up mod apk.
-
What is car make up mod apk?
-
Car make up mod apk is a term that refers to modified applications that allow you to customize your car with your smartphone. These apps let you change various aspects of your car, such as the color, wheels, bumper, headlights, stickers, decals, logos, and more. You can also save and share your creations with other users or use them in games.
Car make up mod apk has many benefits for car enthusiasts. You can express your creativity and personality by making your car look different from others. You can also experiment with different styles and designs without risking any damage or loss. You can also have fun and enjoy yourself by playing with different options and features.
-
How to download and install car make up mod apk?
-
If you want to try car make up mod apk, you will need an Android device and an internet connection. Here are the steps to download and install car make up mod apk on your smartphone:
-
-
Find a reliable source for downloading car make up mod apk. You can search online or use one of the links provided in this article.
-
Select the app that you want to download and click on the download button.
-
Wait for the download to finish and then open the file.
-
Allow the installation of unknown sources if prompted by your device.
-
Follow the instructions on the screen to complete the installation.
-
Launch the app and enjoy customizing your car.
-
-
How to use car make up mod apk?
-
Once you have installed car make up mod apk on your device, you can start using it to customize your car. Here are some tips on how to use car make up mod apk:
-
How to change the color, wheels, bumper, headlights, and more of your car
-
To change the basic features of your car, such as the color, wheels, bumper, headlights, and more, you can use the following steps:
-
-
Select a car model that you want to customize from the app's library. You can choose from hundreds of different cars from various brands and countries.
-
Tap on the feature that you want to change. You can swipe left or right to see more options.
-
Pick a color or a design that you like from the palette or the gallery. You can also use the slider or the color picker tool to adjust the hue, saturation, brightness, and contrast.
-
Apply the changes and see how they look on your car. You can zoom in or out, or rotate your view, to see your car from different angles.
How to add stickers, decals, logos, and other accessories to your car
-
To add some extra flair and personality to your car, you can use stickers, decals, logos, and other accessories. You can use the following steps to do so:
-
-
Tap on the accessory icon on the bottom of the screen. You can choose from various categories, such as flags, flames, skulls, stars, and more.
-
Select the accessory that you want to add to your car. You can resize, rotate, move, or delete it as you wish.
-
Place the accessory on the desired spot on your car. You can also adjust the opacity and the blending mode to make it look more realistic.
-
Repeat the process for any other accessories that you want to add. You can also use the layer tool to arrange the order of the accessories.
-
Apply the changes and admire your car's new look.
-
-
How to save and share your car make up creations with others
-
After you have customized your car to your liking, you might want to save and share it with others. You can use the following steps to do so:
-
-
Tap on the save icon on the top right corner of the screen. You can choose to save your creation as an image or a project file.
-
Select a name and a location for your file. You can also add a description or a tag if you want.
-
Tap on the share icon on the top right corner of the screen. You can choose to share your creation via social media, email, or other apps.
-
Select the platform or the app that you want to use. You can also add a caption or a message if you want.
-
Send or post your creation and wait for the feedback from your friends or other users.
-
-
What are the best car make up mod apk apps?
-
There are many car make up mod apk apps available on the market, but not all of them are equally good. Some of them might have more features, better graphics, or easier interface than others. Here are some of the best car make up mod apk apps that you can try:
-
3DTuning: The most realistic and comprehensive car make up app
-
If you are looking for a car make up app that offers realistic and detailed graphics, then 3DTuning is the app for you. This app has over 1000 cars from 80 manufacturers and over 1000 parts and accessories to choose from. You can customize every aspect of your car, from the exterior to the interior, and even the engine and suspension. You can also view your car in different environments and lighting conditions. You can download 3DTuning from here.
-
FormaCar: The most social and interactive car make up app
-
If you are looking for a car make up app that allows you to interact with other users and join a community of car lovers, then FormaCar is the app for you. This app lets you create your own profile and showcase your creations to other users. You can also browse, like, comment, and follow other users' creations. You can also join clubs, participate in contests, and chat with other users. You can download FormaCar from here.
-
Tuning Car Racing: The most fun and exciting car make up app
-
If you are looking for a car make up app that combines customization with gaming, then Tuning Car Racing is the app for you. This app lets you customize your car and then use it in racing games against other players or AI opponents. You can also upgrade your car's performance and unlock new cars and parts as you progress. You can download Tuning Car Racing from here.
-
Conclusion
-
Car make up mod apk is a great way to customize your car with your smartphone. You can change various features of your car, such as the color, wheels, bumper, headlights, stickers, decals, logos, and more. You can also save and share your creations with others or use them in games. There are many car make up mod apk apps available on the market, but some of the best ones are 3DTuning, FormaCar, and Tuning Car Racing. So what are you waiting for? Download one of these apps today and start making your car look awesome!
-
Frequently Asked Questions
-
-
What is car make up mod apk?
-Car make up mod apk is a term that refers to modified applications that allow you to customize your car with your smartphone.
-
How to download and install car make up mod apk?
-To download and install car make up mod apk, you need to find a reliable source, select the app that you want, download the file, allow the installation of unknown sources, follow the instructions, and launch the app.
-
How to use car make up mod apk?
-To use car make up mod apk, you need to select a car model that you want to customize, tap on the feature that you want to change, pick a color or a design that you like, apply the changes, and repeat the process for any other features or accessories that you want to add. You can also save and share your creations with others or use them in games.
-
What are the best car make up mod apk apps?
-Some of the best car make up mod apk apps are 3DTuning, FormaCar, and Tuning Car Racing. These apps offer realistic and detailed graphics, social and interactive features, and fun and exciting gaming modes.
-
Is car make up mod apk safe and legal?
-Car make up mod apk is generally safe and legal as long as you download it from a trusted source and use it for personal and non-commercial purposes. However, you should always be careful about the permissions and data that the app requires and avoid any malicious or illegal activities.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download NEW STATE MOBILE and Join the Extreme Battle Royale Action on a 4x4 Desert Map.md b/spaces/fatiXbelha/sd/Download NEW STATE MOBILE and Join the Extreme Battle Royale Action on a 4x4 Desert Map.md
deleted file mode 100644
index ea75a6e42c7bbbbd092e7409b160b339c031f41b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download NEW STATE MOBILE and Join the Extreme Battle Royale Action on a 4x4 Desert Map.md
+++ /dev/null
@@ -1,159 +0,0 @@
-
-
Download Game New State Mobile: The Ultimate Guide
-
If you are a fan of battle royale games, you might have heard of New State Mobile, the latest game from PUBG Studios, the company behind the popular PlayerUnknown's Battlegrounds (PUBG). But what is New State Mobile, and how can you download it on your mobile device? In this article, we will answer these questions and more, as we guide you through everything you need to know about New State Mobile, including its features, gameplay, and tips and tricks. Let's get started!
New State Mobile is a new battle royale game that takes place in the year 2051, decades after the original PUBG. In this dystopian future, new factions emerge in an anarchic world, and the survival game evolves into a new battleground. As a player, you will dive into expansive maps, search for weapons, vehicles, and consumables, and fight for survival in an ever-shrinking play area. You will also experience new mechanics such as dodging, drone calls, and support requests, as well as new vehicles and weapons that are only available in New State Mobile.
-
A new battle royale game from PUBG Studios
-
New State Mobile is developed by PUBG Studios, the same company that created PUBG, one of the most popular and influential battle royale games of all time. PUBG Studios has years of experience and expertise in developing realistic and immersive battle royale games, and they have applied their knowledge and skills to create New State Mobile. New State Mobile is not a sequel or a spin-off of PUBG, but rather a new game that expands the PUBG universe and offers a fresh and unique experience to players.
-
Features and gameplay of New State Mobile
-
New State Mobile has many features and gameplay elements that make it stand out from other battle royale games. Here are some of them:
-
-
New mode Bounty Royale: This is a new mode that allows you to earn rewards by hunting down specific targets or by surviving as long as possible. You can also use special items such as bounties, decoys, traps, and shields to gain an edge over your enemies.
-
New map Lagna: This is a 4x4 desert map that offers dynamic and tactical gameplay with limited cover. You can use the various ridge heights to set up your attack or defense, or use the sandstorms to conceal your movements.
-
Akinta: This is a fast-paced mode that takes place in a 4x4 map with a playzone that starts shrinking as soon as the match begins. You will have to loot quickly and fight aggressively to survive in this mode.
-
Round Deathmatch: This is a best-of-seven, 4 vs 4 deathmatch series that tests your skills and teamwork. You will have to eliminate the opposing team or survive till the end of each round to win.
-
Survivor Pass: This is a feature that allows you to earn rewards by completing missions and leveling up your pass. You can also unlock exclusive skins, emotes, outfits, and more by purchasing the premium pass.
-
Various events: These are limited-time events that offer different challenges and rewards to players. You can participate in these events to earn coins, crates, skins, vouchers, and more.
-
-
How to download New State Mobile on Android and iOS devices
-
New State Mobile is available for both Android and iOS devices, and you can download it for free from the Google Play Store or the App Store. However, you will need a compatible device that meets the minimum requirements to run the game smoothly. Here are the minimum requirements for each platform:
-
-
Android: OS Android 6.0 or higher; RAM 2 GB or higher; CPU Snapdragon 625 or higher; GPU Adreno 506 or higher; Storage 3 GB or more
-
iOS: OS iOS 11.0 or higher; Device iPhone 6s or higher; Storage 3 GB or more
-
To download New State Mobile on your device, follow these steps:
-
-
Go to the Google Play Store or the App Store and search for New State Mobile.
-
Tap on the Install button and wait for the game to download and install on your device.
-
Launch the game and accept the terms and conditions.
-
Create your account and choose your nickname.
-
Select your region and server.
-
Enjoy the game!
-
-
Why should you play New State Mobile?
-
New State Mobile is not just another battle royale game. It is a game that offers a new and exciting experience that will keep you hooked for hours. Here are some of the reasons why you should play New State Mobile:
-
-
Next-generation graphics and performance
-
New State Mobile uses Unreal Engine 4, one of the most advanced game engines in the world, to deliver stunning graphics and realistic physics. You will be amazed by the detailed textures, lighting, shadows, reflections, and animations that make the game look like a console or PC game. You will also enjoy smooth and responsive gameplay with minimal lag and loading times, thanks to the optimization and customization options that let you adjust the graphics settings according to your device and preference.
-
Dynamic and strategic combat
-
New State Mobile offers a variety of combat options that allow you to fight in different ways and situations. You can use the new mechanics such as dodging, drone calls, and support requests to gain an edge over your enemies. You can also use different weapons and attachments that suit your playstyle and strategy. You can choose from assault rifles, sniper rifles, shotguns, SMGs, pistols, melee weapons, grenades, and more. You can also customize your weapons with scopes, silencers, magazines, stocks, grips, skins, and more.
-
Expansive and immersive map
-
New State Mobile features a large and diverse map that offers a variety of terrains, locations, and landmarks. You can explore urban areas, rural areas, industrial zones, military bases, deserts, forests, mountains, lakes, rivers, bridges, tunnels, and more. You can also interact with the environment by breaking windows, doors, fences, walls, vehicles, and more. You can also use vehicles such as cars, bikes, trucks, boats, helicopters, and more to travel faster and escape danger.
-
Customizable weapons and vehicles
-
New State Mobile allows you to customize your weapons and vehicles with different parts and skins that change their appearance and performance. You can find these parts and skins in crates or buy them with coins or vouchers. You can also upgrade your parts and skins with materials that enhance their stats. You can also create your own unique weapons and vehicles by combining different parts and skins in the workshop.
-
Tips and tricks for New State Mobile
-
If you want to improve your skills and win more matches in New State Mobile, you will need some tips and tricks that will help you survive and thrive in the game. Here are some of them:
-
Choose your landing spot wisely
-
The first thing you need to do when you start a match is to choose where to land on the map. This is a crucial decision that can determine your chances of survival and victory. You should consider several factors when choosing your landing spot, such as:
-
-
The distance from the plane's path: The closer you are to the plane's path, the faster you will land on the ground. However, this also means that more players will land near you, which increases the risk of early combat.
-
The loot quality: The loot quality varies depending on the location. Some locations have more loot than others, but they also attract more players. You should balance between finding enough loot for yourself and avoiding too much competition.
-
The circle position: The circle is the play area that shrinks over time, forcing players to move closer to each other. You should try to land near the center of the circle, or at least within its range, to avoid being caught outside and taking damage from the blue zone.
-
The vehicle availability: Vehicles are useful for traveling faster and escaping danger, but they also make noise and attract attention. You should look for locations that have vehicle spawns nearby, but not too close to your landing spot.
-
-
You can use the map to scout the locations and plan your landing spot before you jump off the plane. You can also mark your landing spot with a pin and communicate with your teammates if you are playing in a squad.
-
Loot and equip the best gear
-
Once you land on the ground, you should look for loot as soon as possible. Loot includes weapons, attachments, armor, helmets, backpacks, consumables, and other items that can help you survive and fight. You should prioritize finding a weapon and some ammo first, then look for other items that can improve your defense and utility. You should also loot quickly and efficiently, by using the quick pick-up button or the auto pick-up feature that automatically loots items for you based on your preferences.
-
You should also equip the best gear that you can find or create. You should aim to have at least a level 2 armor and helmet, a level 2 or 3 backpack, and a variety of weapons and attachments that suit your playstyle and strategy. You should also upgrade your gear with materials that you can find or buy in the game. You can use the inventory menu to manage your gear and switch between different loadouts.
-
Use the environment to your advantage
-
New State Mobile has a dynamic and interactive environment that can affect your gameplay in various ways. You should use the environment to your advantage by doing the following:
-
-
Use cover and concealment: Cover is anything that can protect you from enemy fire, such as walls, rocks, trees, vehicles, etc. Concealment is anything that can hide you from enemy sight, such as bushes, grass, smoke, etc. You should use cover and concealment to avoid being exposed and vulnerable to enemy attacks.
-
Use height and angles: Height and angles can give you a better view and position over your enemies. You should use height and angles to spot enemies, snipe them from afar, or flank them from behind. You can use buildings, hills, bridges, towers, etc. to gain height and angles.
-
Use sound and vision: Sound and vision are important for detecting enemies and their movements. You should use sound and vision to locate enemies, track their direction, and anticipate their actions. You can use headphones, sound settings, mini-map, compass, indicators, etc. to enhance your sound and vision.
-
Use weather and time: Weather and time can change the atmosphere and visibility of the game. You should use weather and time to adapt your strategy and tactics accordingly. You can use weather and time to create diversions, ambushes, stealth attacks, etc.
-
-
Communicate and cooperate with your teammates
-
If you are playing in a squad mode with other players, you will need to communicate and cooperate with your teammates to increase your chances of survival and victory. You should communicate and cooperate with your teammates by doing the following:
-
-
Use voice chat or text chat: Voice chat or text chat are the main ways of communicating with your teammates in the game. You should use voice chat or text chat to share information, coordinate actions, ask for help, give commands, etc. You can also use quick chat or gestures to communicate with preset messages or expressions.
-
Use markers or pings: Markers or pings are visual aids that can help you communicate with your teammates without using voice or text chat. You can use markers or pings to mark locations, enemies, items, vehicles, etc. on the map or on the screen.
-
Use roles or strategies: Roles or strategies are predefined plans that can help you cooperate with your teammates more effectively. You can use roles or strategies to assign tasks, responsibilities, and goals for your team. You can also use roles or strategies to adapt to different situations and challenges in the game.
-
Use teamwork and synergy: Teamwork and synergy are the keys to success in a squad mode. You should use teamwork and synergy to support, protect, and assist your teammates in any way possible. You should also use teamwork and synergy to combine your skills, abilities, and resources to create a stronger and more effective team.
-
-
Survive till the end and claim victory
-
The ultimate goal of New State Mobile is to survive till the end and claim victory. You should do everything you can to achieve this goal by doing the following:
-
-
Stay in the circle: The circle is the play area that shrinks over time, forcing players to move closer to each other. You should stay in the circle as much as possible, or at least within its range, to avoid being caught outside and taking damage from the blue zone. You should also pay attention to the circle timer, direction, and speed, and plan your movements accordingly.
-
Avoid unnecessary fights: Fights are inevitable in a battle royale game, but not all fights are worth taking. You should avoid unnecessary fights that can expose you to danger, waste your resources, or distract you from your objective. You should only engage in fights that are necessary, advantageous, or unavoidable.
-
Play smart and safe: Playing smart and safe means using your brain and common sense to make the best decisions in the game. You should play smart and safe by observing your surroundings, analyzing your situation, choosing your actions, and anticipating the consequences. You should also play smart and safe by avoiding risks, mistakes, or traps that can cost you your life.
-
Be ready for anything: Anything can happen in a battle royale game, and you should be ready for anything that comes your way. You should be ready for anything by being prepared, flexible, and adaptable. You should also be ready for anything by being alert, aware, and responsive.
-
-
By following these tips and tricks, you will increase your chances of surviving till the end and claiming victory in New State Mobile.
-
Conclusion
-
New State Mobile is a new battle royale game that offers a new and exciting experience to players. It has next-generation graphics and performance, dynamic and strategic combat, expansive and immersive map, customizable weapons and vehicles, and various modes and events. It also has a simple and easy way to download it on Android and iOS devices. If you are looking for a new and fun game to play on your mobile device, you should definitely try New State Mobile. You will not regret it!
-
FAQs
-
Here are some of the frequently asked questions about New State Mobile:
-
-
Is New State Mobile free to play?
-
Yes, New State Mobile is free to play. However, it also has optional in-game purchases that can enhance your gameplay or appearance.
-
Is New State Mobile related to PUBG?
-
New State Mobile is developed by PUBG Studios, the same company that created PUBG. However, New State Mobile is not a sequel or a spin-off of PUBG, but rather a new game that expands the PUBG universe and offers a fresh and unique experience to players.
-
Is New State Mobile cross-platform?
-
No, New State Mobile is not cross-platform. It is only available for Android and iOS devices.
-
How can I play with my friends in New State Mobile?
-
You can play with your friends in New State Mobile by inviting them to join your squad or clan. You can also add them as friends or send them messages in the game.
-
How can I contact the customer service of New State Mobile?
-
You can contact the customer service of New State Mobile by using the in-game feedback system or by visiting their official website or social media channels.
-
-
\ No newline at end of file
diff --git a/spaces/fatimahhussain/workoutwizard/pages/plank.py b/spaces/fatimahhussain/workoutwizard/pages/plank.py
deleted file mode 100644
index 936158dc6ce82b486b179c2c60ffdfb4605fa1b6..0000000000000000000000000000000000000000
--- a/spaces/fatimahhussain/workoutwizard/pages/plank.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import mediapipe as mp
-import av
-import cv2
-import numpy as np
-import streamlit as st
-from streamlit_webrtc import WebRtcMode, webrtc_streamer
-
-from sample_utils.turn import get_ice_servers
-
-
-# st.set_page_config(page_title="Plank Exercise")
-
-
-def calculate_angle(a, b, c):
-    """Return the angle at point b, in degrees (0-180), formed by the segments b->a and b->c."""
-    a = np.array(a)
-    b = np.array(b)
-    c = np.array(c)
-
-    # The difference of the two segments' polar angles gives the signed angle at b.
-    radians = np.arctan2(c[1]-b[1], c[0]-b[0]) - np.arctan2(a[1]-b[1], a[0]-b[0])
-    angle = np.abs(radians*180.0/np.pi)
-
-    # Fold reflex angles back into the 0-180 range.
-    if angle > 180.0:
-        angle = 360 - angle
-
-    return angle
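-
-# Quick worked example (hypothetical points): with a=[0, 1], b=[0, 0] and
-# c=[1, 0] the two segments are perpendicular, so calculate_angle(a, b, c)
-# returns 90.0.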
-
-
-
-def video_frame_callback(frame: av.VideoFrame) -> av.VideoFrame:
-    # MediaPipe Pose expects RGB input, so run inference on the RGB frame first
-    # and only convert to BGR afterwards, for the cv2 drawing calls and the
-    # format="bgr24" output below.
-    image = frame.to_ndarray(format="rgb24")
-    # image = image[:,::-1,:]  # optional horizontal mirror (disabled)
-    mp_pose = mp.solutions.pose
-
-    # Note: constructing a new Pose instance per frame keeps the callback
-    # stateless but is slower than reusing one across frames.
-    with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose1:
-        results = pose1.process(image)
-        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
-
- if results.pose_landmarks:
- landmarks1 = results.pose_landmarks.landmark
- ankle1 = [int(landmarks1[mp_pose.PoseLandmark.LEFT_ANKLE.value].x * image.shape[1]),
- int(landmarks1[mp_pose.PoseLandmark.LEFT_ANKLE.value].y * image.shape[0])]
- knee1 = [int(landmarks1[mp_pose.PoseLandmark.LEFT_KNEE.value].x * image.shape[1]),
- int(landmarks1[mp_pose.PoseLandmark.LEFT_KNEE.value].y * image.shape[0])]
- hip1 = [int(landmarks1[mp_pose.PoseLandmark.LEFT_HIP.value].x * image.shape[1]),
- int(landmarks1[mp_pose.PoseLandmark.LEFT_HIP.value].y * image.shape[0])]
- shoulder1 = [int(landmarks1[mp_pose.PoseLandmark.LEFT_SHOULDER.value].x * image.shape[1]),
- int(landmarks1[mp_pose.PoseLandmark.LEFT_SHOULDER.value].y * image.shape[0])]
-
-
- angle1 = calculate_angle(shoulder1, hip1, knee1)
- cv2.putText(image, f'Angle: {round(angle1, 2)}', (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
- cv2.circle(image, tuple(shoulder1), 10, (255, 255, 255), -1)
- cv2.circle(image, tuple(hip1), 10, (255, 255, 255), -1)
- cv2.circle(image, tuple(knee1), 10, (255, 255, 255), -1)
-
-            # A roughly straight shoulder-hip-knee line (at least 150 degrees)
-            # is treated as correct plank form here.
-            if angle1 >= 150:
-                cv2.putText(image, 'YES', (50, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
-            else:
-                cv2.putText(image, 'INCORRECT FORM', (50, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 1)
-
-
- return av.VideoFrame.from_ndarray(image, format="bgr24")
-
-
-
-def main():
- st.title("Plank Exercise")
-
- with st.sidebar:
-        st.caption("This is for a plank. It will track the LEFT side of your body (the code reads the left-side landmarks), so please make sure that you are in full view on your webcam.")
- st.caption("Please reference the video if you're confused!")
-
-
-
- # st.caption("This is for bicep curls. It will track your LEFT arm, so please position yourself at a slight angle (note: the camera is also flipped). See image below")
- # st.caption("Your ENTIRE body needs to be in the webcam screen")
- # st.caption("Slow and steady wins the race!")
-
- # stframe = st.empty()
- webrtc_streamer(
- key="object-detection",
- mode= WebRtcMode.SENDRECV,
- rtc_configuration={
- "iceServers": get_ice_servers(),
- "iceTransportPolicy": "relay",
- },
- video_frame_callback=video_frame_callback,
- media_stream_constraints={"video": True, "audio": False},
- async_processing=True,
- )
-
- # st_player("bicepcurl.mp4", playing=True, muted=True)
- st.video("videos/plank.mp4")
-
-
-if __name__ == '__main__':
- main()
-
-
-
\ No newline at end of file
diff --git "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py" "b/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py"
deleted file mode 100644
index b4bcd56109b42d3023f24eade7c0cd5671d3c5a4..0000000000000000000000000000000000000000
--- "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\247\243\346\236\220JupyterNotebook.py"
+++ /dev/null
@@ -1,146 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-fast_debug = True
-
-
-class PaperFileGroup():
- def __init__(self):
- self.file_paths = []
- self.file_contents = []
- self.sp_file_contents = []
- self.sp_file_index = []
- self.sp_file_tag = []
-
- # count_token
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_num(txt): return len(
- enc.encode(txt, disallowed_special=()))
- self.get_token_num = get_token_num
-
- def run_file_split(self, max_token_limit=1900):
- """
-        Split long file contents into segments that fit within the token limit.
- """
- for index, file_content in enumerate(self.file_contents):
- if self.get_token_num(file_content) < max_token_limit:
- self.sp_file_contents.append(file_content)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index])
- else:
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- segments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- file_content, self.get_token_num, max_token_limit)
- for j, segment in enumerate(segments):
- self.sp_file_contents.append(segment)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(
- self.file_paths[index] + f".part-{j}.txt")
-
-
-
-def parseNotebook(filename, enable_markdown=1):
- import json
-
- CodeBlocks = []
- with open(filename, 'r', encoding='utf-8', errors='replace') as f:
- notebook = json.load(f)
- for cell in notebook['cells']:
- if cell['cell_type'] == 'code' and cell['source']:
- # remove blank lines
- cell['source'] = [line for line in cell['source'] if line.strip()
- != '']
- CodeBlocks.append("".join(cell['source']))
- elif enable_markdown and cell['cell_type'] == 'markdown' and cell['source']:
- cell['source'] = [line for line in cell['source'] if line.strip()
- != '']
- CodeBlocks.append("Markdown:"+"".join(cell['source']))
-
- Code = ""
- for idx, code in enumerate(CodeBlocks):
-        Code += f"This is code block {idx+1}: \n"
- Code += code+"\n"
-
- return Code
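-
-# Minimal usage sketch (hypothetical path): flatten a notebook's code and
-# markdown cells into one prompt-ready string.
-# text = parseNotebook("example.ipynb", enable_markdown=1)
-# print(text[:200])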
-
-
-def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-
- if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
- enable_markdown = plugin_kwargs.get("advanced_arg", "1")
- try:
- enable_markdown = int(enable_markdown)
- except ValueError:
- enable_markdown = 1
-
- pfg = PaperFileGroup()
-
- for fp in file_manifest:
- file_content = parseNotebook(fp, enable_markdown=enable_markdown)
- pfg.file_paths.append(fp)
- pfg.file_contents.append(file_content)
-
-    # <-------- Split overly long ipynb files ---------->
- pfg.run_file_split(max_token_limit=1024)
- n_split = len(pfg.sp_file_contents)
-
- inputs_array = [r"This is a Jupyter Notebook file, tell me about Each Block in Chinese. Focus Just On Code." +
-                    r"If a block starts with `Markdown`, it means it's a markdown block in the ipynb. " +
- r"Start a new line for a block and block num use Chinese." +
- f"\n\n{frag}" for frag in pfg.sp_file_contents]
- inputs_show_user_array = [f"{f}的分析如下" for f in pfg.sp_file_tag]
- sys_prompt_array = ["You are a professional programmer."] * n_split
-
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array=inputs_array,
- inputs_show_user_array=inputs_show_user_array,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history_array=[[""] for _ in range(n_split)],
- sys_prompt_array=sys_prompt_array,
-        # max_workers=5, # the maximum number of parallel requests allowed by OpenAI
- scroller_max_len=80
- )
-
-    # <-------- Collect the results and finish ---------->
-    block_result = " \n".join(gpt_response_collection)
-    chatbot.append(("解析的结果如下", block_result))
-    history.extend(["解析的结果如下", block_result])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # <-------- Write the results to a file and finish ---------->
-    res = write_results_to_file(history)
-    chatbot.append(("完成了吗?", res))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-@CatchException
-def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- chatbot.append([
- "函数插件功能?",
- "对IPynb文件进行解析。Contributor: codycjy."])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    history = [] # clear the history
- import glob
- import os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "":
- txt = '空空如也的输入栏'
- report_execption(chatbot, history,
- a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- if txt.endswith('.ipynb'):
- file_manifest = [txt]
- else:
- file_manifest = [f for f in glob.glob(
- f'{project_folder}/**/*.ipynb', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}", b=f"找不到任何.ipynb文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, )
diff --git a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_msra.sh b/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_msra.sh
deleted file mode 100644
index 397c3ea6adc3d9f275389509aa41d0e4050b3c14..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_msra.sh
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=zen2_base_msra # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=1 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH --mail-type=ALL # send email when the job begins, ends, or fails
-#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_base_msra/%x-%j.log # output and error file name (%x=job name, %j=job id)
-
-
-# export CUDA_VISIBLE_DEVICES='2'
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-
-MODEL_NAME=zen2_base
-
-TASK=msra
-
-ZERO_STAGE=1
-STRATEGY=deepspeed_stage_${ZERO_STAGE}
-
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK}
-if [ ! -d ${ROOT_DIR} ];then
- mkdir -p ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/MSRA/
-PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_base_2.0
-
-CHECKPOINT_PATH=${ROOT_DIR}/ckpt/
-OUTPUT_PATH=${ROOT_DIR}/predict.json
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train_dev.char.bmes \
- --valid_data test.char.bmes \
- --test_data test.char.bmes \
- --train_batchsize 32 \
- --valid_batchsize 16 \
- --max_seq_length 256 \
- --task_name msra \
- "
-
-MODEL_ARGS="\
- --learning_rate 3e-5 \
- --weight_decay 0.1 \
- --warmup_ratio 0.01 \
- --markup bioes \
- --middle_prefix M- \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_f1 \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 800 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_f1:.4f} \
- "
-
-TRAINER_ARGS="\
- --max_epochs 30 \
- --gpus 1 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 800 \
- --default_root_dir $ROOT_DIR \
- "
-
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \
- --do_lower_case \
- --output_save_path $OUTPUT_PATH \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
-"
-SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py
-/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# python3 $SCRIPT_PATH $options
-# source activate base
-# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Angry Birds Classic for Android - Download APK and Play Offline.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Angry Birds Classic for Android - Download APK and Play Offline.md
deleted file mode 100644
index af481f01d38959be1cf9c6f9f290824466647119..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Angry Birds Classic for Android - Download APK and Play Offline.md
+++ /dev/null
@@ -1,220 +0,0 @@
-
-
How to Download and Play Angry Birds Classic on Android
-
Angry Birds Classic is one of the most popular mobile games of all time. It has been downloaded over 100 million times on Google Play Store alone, and has received countless awards and positive reviews from critics and players alike. If you are looking for a fun, addictive, and challenging game that will keep you entertained for hours, you should definitely try out Angry Birds Classic on your Android device.
-
Introduction
-
In this article, we will show you how to download and play Angry Birds Classic on Android. We will cover the following topics:
What is Angry Birds Classic and why is it popular?
-
What are the main features and gameplay of Angry Birds Classic?
-
How to download Angry Birds Classic apk file from Google Play Store or other sources?
-
-
What is Angry Birds Classic and why is it popular?
-
Angry Birds Classic is a puzzle game that was released by Rovio Entertainment in 2009. The game features a flock of angry birds who are trying to get back their eggs from a group of greedy pigs who stole them. The game involves using a slingshot to launch the birds at various structures made of wood, stone, glass, and other materials, where the pigs are hiding. The goal is to destroy all the pigs and their defenses in each level using as few birds as possible.
-
Angry Birds Classic is popular because it has a simple yet addictive gameplay that appeals to players of all ages and backgrounds. The game also has a colorful and cartoonish graphics style, a catchy and humorous sound design, and a variety of levels, episodes, birds, pigs, power-ups, and other elements that keep the game fresh and exciting. The game also has a social aspect, as players can compete against each other in the Mighty League mode, where they can earn trophies, coins, and bragging rights.
-
What are the main features and gameplay of Angry Birds Classic?
-
Angry Birds Classic has many features that make it a fun and satisfying game to play. Some of these features are:
-
-
Fun and satisfying slingshot gameplay. The game uses a simple yet intuitive control scheme that involves dragging your finger on the screen to aim the slingshot, releasing it to launch the bird, and tapping again to activate its special power. The game also has a realistic physics engine that simulates the effects of gravity, friction, collision, explosion, etc., making the game more challenging and rewarding.
-
Diverse and delightful birds and pigs. The game features a variety of birds and pigs, each with their own personality, appearance, and ability. For example, the red bird is the basic bird that can deal moderate damage, the yellow bird can speed up and pierce through materials, the black bird can explode and cause massive destruction, the green bird can boomerang back and hit targets from behind, and so on. The pigs also have different types, such as the helmet pig, the mustache pig, the king pig, etc., each with their own level of durability and difficulty.
-
Challenging and creative levels and episodes. The game has hundreds of levels and episodes, each with a different theme, setting, and objective. For example, the first episode is called "Poached Eggs", where the birds discover that their eggs have been stolen by the pigs. The second episode is called "Mighty Hoax", where the pigs use cardboard cutouts of their own kind to trick the birds. The third episode is called "Danger Above", where the birds chase the pigs in the sky using balloons and planes. The levels also have different layouts, obstacles, hazards, and hidden items that require strategy and skill to overcome.
-
Exciting and rewarding power-ups and boosters. The game also has various power-ups and boosters that can help the players in their quest to defeat the pigs. For example, the Mighty Eagle is a powerful bird that can be summoned once per hour to destroy all the pigs in a level. The power-ups include the Sling Scope, which shows the trajectory of the bird before launching it; the King Sling, which increases the speed and power of the bird; the Birdquake, which shakes the ground and causes the structures to collapse; and the Super Seeds, which enlarges the bird and makes it more destructive.
-
Competitive and social Mighty League mode. The game also has a Mighty League mode, where players can compete against other players from around the world in daily tournaments. The players can earn trophies, coins, and feathers by completing levels and ranking higher on the leaderboards. The players can also chat with each other, send gifts, and challenge their friends in friendly matches.
-
How to Download Angry Birds Classic apk file from Google Play Store or other sources?
-
Downloading Angry Birds Classic apk file is easy and fast. There are two main ways to do it: from Google Play Store or from other sources.
-
How to Download Angry Birds Classic apk file from Google Play Store?
-
The easiest way to download Angry Birds Classic apk file is from Google Play Store, which is the official app store for Android devices. Here are the steps to do it:
-
-
Open Google Play Store on your Android device.
-
Search for "Angry Birds Classic" in the search bar.
-
Select "Angry Birds Classic" from the list of results.
-
Tap on "Install" button to start downloading Angry Birds Classic apk file.
-
Wait for the download to finish and then tap on the "Open" button to launch Angry Birds Classic on your Android device.
-
-
You can also download Angry Birds Classic apk file from Google Play Store using your web browser on your computer or other devices. Here are the steps to do it:
-
-
Open your web browser, go to play.google.com, and sign in with the same Google account that you use on your Android device.
-
Search for "Angry Birds Classic" and select it from the list of results.
-
Click on the "Install" button to start downloading Angry Birds Classic apk file.
-
Select your Android device from the list of devices that are connected to your Google account.
-
Wait for the download to finish and then open Angry Birds Classic on your Android device.
-
-
How to Download Angry Birds Classic apk file from other sources?
-
If you cannot access Google Play Store or prefer to download Angry Birds Classic apk file from other sources, you can also do that. However, you need to be careful when downloading Angry Birds Classic apk file from other sources, as some of them may contain viruses, malware, or other harmful content that can damage your device or compromise your privacy. Here are some tips to download Angry Birds Classic apk file from other sources safely:
-
-
Only download Angry Birds Classic apk file from trusted and reputable sources, such as official websites of Rovio Entertainment or other well-known app stores.
-
Avoid downloading Angry Birds Classic apk file from unknown or suspicious sources, such as pop-up ads, spam emails, or unverified links.
-
Check the reviews, ratings, comments, and feedback of other users who have downloaded Angry Birds Classic apk file from the same source before you download it.
-
Scan the Angry Birds Classic apk file with reliable antivirus or anti-malware software before you install it on your device.
-
Backup your data and settings before you install Angry Birds Classic apk file on your device, in case something goes wrong or you need to uninstall it later.
-
-
Here are the steps to download Angry Birds Classic apk file from other sources:
-
-
Find a trusted and reputable source that offers Angry Birds Classic apk file for download.
-
Click on the download link or button to start downloading Angry Birds Classic apk file.
-
Save the Angry Birds Classic apk file to a location that you can easily access on your device, such as your downloads folder or your SD card.
-
Scan the Angry Birds Classic apk file with your antivirus or anti-malware software to make sure it is safe and clean (see the checksum sketch after these steps for another integrity check).
-
Open the Angry Birds Classic apk file on your device and follow the instructions to install it.
-
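As a complement to the antivirus scan in the steps above, you can also verify a download's integrity when the source publishes a checksum. Below is a minimal Kotlin sketch of that check; the file path and hash are hypothetical placeholders, not real values from Rovio.
```kotlin
import java.io.File
import java.security.MessageDigest

// Compute the SHA-256 checksum of a downloaded APK so it can be compared
// against a hash published by the site you downloaded it from.
fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(8192)
        var read = input.read(buffer)
        while (read != -1) {
            digest.update(buffer, 0, read)
            read = input.read(buffer)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

fun main() {
    // Hypothetical path and hash -- substitute the real values for your download.
    val apk = File("/sdcard/Download/angry-birds-classic.apk")
    val publishedHash = "replace-with-the-hash-the-source-publishes"
    val actualHash = sha256Of(apk)
    println(if (actualHash == publishedHash) "Checksum matches" else "Checksum mismatch: $actualHash")
}
```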
-
How to Install Angry Birds Classic apk file on Android
-
Installing Angry Birds Classic apk file on Android is also easy and fast. However, you need to make sure that your device meets the requirements and that you have enabled the unknown sources option for apk file installation. Here are the details and steps to install Angry Birds Classic apk file on Android:
-
What are the requirements and steps to install Angry Birds Classic apk file on Android devices?
-
The requirements and steps to install Angry Birds Classic apk file on Android devices are:
-
-
Requirements: Your device must have Android 4.1 or higher, at least 100 MB of free storage space, and a stable internet connection.
-
Steps:
-
-
Locate the Angry Birds Classic apk file that you have downloaded on your device.
-
Tap on the Angry Birds Classic apk file to open it.
-
You may see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source". Tap on "Settings" to go to the security settings of your device.
-
Find and enable the option that says "Allow from this source" or "Unknown sources". This will allow you to install Angry Birds Classic apk file on your device (the code sketch after these steps shows the same flow programmatically).
-
Go back to the Angry Birds Classic apk file and tap on "Install" to start the installation process.
-
Wait for the installation to finish and then tap on "Done" or "Open" to launch Angry Birds Classic on your device.
-
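For readers curious what the steps above look like in code, here is a hedged Kotlin sketch of how an Android app can hand a downloaded APK to the system installer, including the Android 8.0+ unknown-sources check. It assumes a FileProvider declared in the app's manifest under the authority shown; it is an illustration of the flow, not code from the game itself.
```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings
import androidx.core.content.FileProvider
import java.io.File

// Illustrative only: hands a downloaded APK to the system package installer.
// Assumes a FileProvider with the authority "<packageName>.fileprovider"
// is declared in AndroidManifest.xml (an assumption for this sketch).
fun installApk(context: Context, apkFile: File) {
    // On Android 8.0+ the user must first allow installs from this app,
    // which corresponds to the "Allow from this source" step above.
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
        !context.packageManager.canRequestPackageInstalls()
    ) {
        context.startActivity(
            Intent(
                Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
                Uri.parse("package:${context.packageName}")
            )
        )
        return
    }
    val uri: Uri = FileProvider.getUriForFile(
        context, "${context.packageName}.fileprovider", apkFile
    )
    // Hand the APK to the system installer with the standard MIME type.
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION or Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent)
}
```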
-
How to Troubleshoot Common Installation Issues and Errors for Angry Birds Classic apk file?
-
Sometimes, you may encounter some issues or errors when installing Angry Birds Classic apk file on your device. These issues or errors may prevent you from installing or launching the game successfully. Here are some common issues or errors and how to fix them:
-
Insufficient storage space
-
This issue occurs when your device does not have enough free storage space to install Angry Birds Classic apk file. To fix this issue, you need to free up some storage space on your device by deleting some unwanted files, apps, photos, videos, etc. You can also use a cleaning app or a file manager app to help you with this task. Alternatively, you can move some files or apps to your SD card if your device supports it.
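If you want to check programmatically whether a device has room before installing, a minimal Kotlin sketch using Android's StatFs might look like the following; the 100 MB threshold simply mirrors the requirement stated earlier.
```kotlin
import android.os.Environment
import android.os.StatFs

// Checks whether internal storage has room for the ~100 MB install
// mentioned in the requirements above (the threshold is an assumption).
fun hasRoomForInstall(requiredBytes: Long = 100L * 1024 * 1024): Boolean {
    val stat = StatFs(Environment.getDataDirectory().path)
    return stat.availableBytes >= requiredBytes
}
```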
-
Invalid or corrupted apk file
-
This issue occurs when the Angry Birds Classic apk file that you have downloaded is invalid or corrupted. This may happen for various reasons, such as an incomplete or interrupted download or a virus infection. To fix this issue, you need to delete the Angry Birds Classic apk file that you have downloaded and download it again from a trusted and reputable source. You can also use a different browser or a download manager app to help you with this task.
-
App not installed
-
This issue occurs when the installation process of the Angry Birds Classic apk file fails or is interrupted. This may happen for various reasons, such as an incompatible device, low battery, or insufficient permissions. To fix this issue, you need to check the following things:
-
-
Make sure your device meets the requirements and is compatible with Angry Birds Classic apk file.
-
Make sure your device has enough battery power and is not in power-saving mode.
-
Make sure you have enabled the unknown sources option and granted all the necessary permissions for Angry Birds Classic apk file installation.
-
Make sure you have closed all other apps and processes that may interfere with the installation process.
-
Make sure you have a stable internet connection and are not using a VPN or a proxy server.
-
-
If none of these things work, you can try to restart your device and install Angry Birds Classic apk file again.
-
How to Play Angry Birds Classic on Android
-
Playing Angry Birds Classic on Android is fun and easy. Once you have installed Angry Birds Classic apk file on your device, you can launch and start the game by following these steps:
-
-
Tap on the Angry Birds Classic icon on your home screen or app drawer to open the game.
-
You will see the main menu of the game, where you can choose from different options, such as Play, Mighty League, Settings, etc.
-
Tap on Play to start playing the game. You will see a map of different episodes and levels that you can play. You can swipe left or right to navigate through the map.
-
Tap on an episode that you want to play. You will see a list of levels that you can play in that episode. You can swipe up or down to navigate through the list.
-
Tap on a level that you want to play. You will see the gameplay screen of that level, where you can see the slingshot, the birds, the pigs, and their structures.
-
To play the level, you need to use the slingshot to launch the birds at the pigs and their structures. To do this, follow these steps:
-
-
Drag your finger on the screen to aim the slingshot. You will see a dotted line that shows the trajectory of the bird.
-
Release your finger to launch the bird. You will see the bird fly towards the pigs and their structures.
-
Tap on the screen again to activate the special power of the bird. Each bird has a different power that can help you destroy more pigs and structures.
-
-
Your goal is to destroy all the pigs and their structures in each level using as few birds as possible. You will earn stars based on how well you perform in each level (see the scoring sketch after these steps). You will also earn coins that you can use to buy power-ups and boosters.
-
If you fail to destroy all the pigs in a level, you will lose a life and have to retry the level. You have five lives in total, which regenerate over time or can be refilled by watching ads or spending coins.
-
If you succeed in destroying all the pigs in a level, you will complete the level and unlock the next one. You will also see your score and rank for that level.
-
-
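To make the star mechanic concrete, here is a small Kotlin sketch of how score-to-star cutoffs could work. Rovio does not publish per-level thresholds and they vary from level to level, so the numbers below are purely hypothetical.
```kotlin
// Hypothetical star scoring: real thresholds vary per level and are not
// published, so these cutoffs are illustrative only.
fun starsFor(score: Int, twoStarCutoff: Int = 40_000, threeStarCutoff: Int = 60_000): Int =
    when {
        score >= threeStarCutoff -> 3
        score >= twoStarCutoff -> 2
        score > 0 -> 1
        else -> 0
    }
```
-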
Tips and Tricks for Angry Birds Classic on Android
-
Angry Birds Classic is a game that requires skill, strategy, and luck to master. If you want to improve your performance and enjoy the game more, you can follow these tips and tricks:
-
How to master the unique traits and abilities of each bird in Angry Birds Classic?
-
Each bird in Angry Birds Classic has a unique trait and ability that can help you destroy more pigs and structures. You need to know how to use them effectively and wisely. Here are some examples:
-
-
Red bird: The red bird is the basic bird that can deal moderate damage. It has no special power, but it can be useful for hitting targets directly or breaking weak materials.
-
Yellow bird: The yellow bird can speed up and pierce through materials. It has a special power that allows it to accelerate when you tap on the screen. You can use it to hit hard-to-reach targets or break through strong materials.
-
Black bird: The black bird can explode and cause massive destruction. It has a special power that allows it to detonate when you tap on the screen or when it hits something. You can use it to blow up large structures or groups of pigs.
-
Green bird: The green bird can boomerang back and hit targets from behind. It has a special power that allows it to change direction when you tap on the screen. You can use it to hit targets that are hidden or protected by other structures.
-
White bird: The white bird can drop an egg bomb and fly away. It has a special power that allows it to drop an egg when you tap on the screen. You can use it to hit targets below or behind the white bird.
-
Blue bird: The blue bird can split into three smaller birds and cover a wider area. It has a special power that allows it to split when you tap on the screen. You can use it to hit multiple targets or break glass materials.
-
Big red bird: The big red bird is a larger version of the red bird that can deal more damage. It has no special power, but it can be useful for hitting large targets or breaking heavy materials.
-
-
How to plan your strategy and aim your shots in Angry Birds Classic?
-
Aiming your shots in Angry Birds Classic is not only about accuracy, but also about strategy. You need to consider the following factors when aiming your shots:
-
-
The type of bird: You need to choose the right bird for the right situation. For example, you may want to use the yellow bird to hit a target that is far away or behind a strong material, or you may want to use the black bird to blow up a large structure or a group of pigs.
-
The angle of the shot: You need to adjust the angle of the shot according to the trajectory of the bird and the gravity of the level (see the sketch after this list). For example, you may want to aim higher or lower depending on how far or close the target is, or you may want to aim differently depending on whether the level is in space or underwater.
-
The timing of the shot: You need to time your shot according to the movement of the target and the activation of the special power. For example, you may want to wait for the right moment to launch the bird when the target is exposed or vulnerable, or you may want to tap on the screen at the right moment to activate the special power when it is most effective.
-
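Since the aiming factors above reduce to ordinary projectile motion, a small Kotlin sketch can make the angle-versus-distance trade-off concrete. The launch speed and gravity values below are made-up illustrative constants, not the game's actual physics parameters.
```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Illustrative projectile motion for slingshot aiming; the speed and
// gravity constants are assumptions, not the game's real values.
fun trajectory(
    angleDegrees: Double,
    launchSpeed: Double = 20.0,   // assumed units per second
    gravity: Double = 9.8,        // assumed downward acceleration
    steps: Int = 10,
    dt: Double = 0.1
): List<Pair<Double, Double>> {
    val angle = Math.toRadians(angleDegrees)
    val vx = launchSpeed * cos(angle)
    val vy = launchSpeed * sin(angle)
    return (0..steps).map { i ->
        val t = i * dt
        // Horizontal motion is uniform; vertical motion decelerates under gravity.
        Pair(vx * t, vy * t - 0.5 * gravity * t * t)
    }
}

fun main() {
    // A steeper angle trades horizontal reach for arc height.
    trajectory(45.0).forEach { (x, y) -> println("x=%.1f y=%.1f".format(x, y)) }
}
```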
-
How to find hidden golden eggs and unlock bonus levels in Angry Birds Classic?
-
Angry Birds Classic has many hidden golden eggs that can unlock bonus levels with different themes and challenges. These golden eggs are usually hidden in secret locations or require certain actions to reveal them. Here are some examples of how to find some of them:
-
-
In level 1-8: Tap on the treasure chest in the background to open it and reveal a golden egg.
-
In level 2-2: Break the beach ball in the foreground to reveal a golden egg.
-
In level 4-7: Zoom out and tap on the yellow balloon in the top right corner of the screen to pop it and reveal a golden egg.
-
In level 5-19: Launch a white bird backwards and drop an egg on top of the cliff behind the slingshot to reveal a golden egg.
-
In level 6-14: Launch a yellow bird at the bottom right corner of the screen and activate its power to break the wooden block and reveal a golden egg.
-
In level 8-15: Launch a black bird at the TNT box in the bottom left corner of the screen and detonate it to blow up the structure and reveal a golden egg.
-
-
You can also find some golden eggs by tapping on certain objects or icons in the main menu or the episode selection screen. For example, you can tap on the sun in the main menu, the boomerang bird in the episode selection screen, or the Facebook icon in the settings menu to reveal some golden eggs.
-
How to save your progress and sync your game across devices in Angry Birds Classic?
-
Angry Birds Classic allows you to save your progress and sync your game across devices using your Rovio Account or your Facebook Account. This way, you can continue playing the game on different devices without losing your data or achievements. Here are the steps to do it:
-
-
Using Rovio Account:
-
-
Create a Rovio Account by tapping on the Rovio icon in the main menu or the settings menu of the game.
-
Enter your email address and password and tap on "Create Account".
-
Verify your email address by clicking on the link that is sent to your email.
-
Login to your Rovio Account by tapping on the Rovio icon in the main menu or the settings menu of the game.
-
Select "Sync" to sync your game data with your Rovio Account.
-
To sync your game data on another device, login to your Rovio Account on that device and select "Sync".
-
-
Using Facebook Account:
-
-
Login to your Facebook Account by tapping on the Facebook icon in the main menu or the settings menu of the game.
-
Grant permission for Angry Birds Classic to access your Facebook information.
-
Select "Sync" to sync your game data with your Facebook Account.
-
To sync your game data on another device, login to your Facebook Account on that device and select "Sync".
-
-
-
How to avoid ads, in-app purchases, and data charges in Angry Birds Classic?
-
Angry Birds Classic is a free-to-play game that supports ads and in-app purchases. However, some players may find these features annoying or distracting. If you want to avoid ads, in-app purchases, and data charges in Angry Birds Classic, you can follow these tips:
-
-
To avoid ads: You can turn off your internet connection or switch to airplane mode before launching Angry Birds Classic. This will prevent the game from showing any ads. However, it will also disable some features that require an internet connection, such as Mighty League, power-ups, etc.
-
To avoid in-app purchases: You can disable in-app purchases by going to the settings of your device and turning off the option that allows apps to make purchases. This will prevent you from accidentally or intentionally buying any coins, power-ups, or other items in Angry Birds Classic.
-
To avoid data charges: You can use a Wi-Fi connection instead of a mobile data connection when playing Angry Birds Classic. This will reduce or eliminate any data charges that may occur due to downloading or updating Angry Birds Classic or its content.
-
-
Conclusion
-
Angry Birds Classic is a fun, addictive, and challenging game that will keep you entertained for hours. It has a simple yet satisfying slingshot gameplay, a diverse and delightful cast of birds and pigs, a challenging and creative set of levels and episodes, an exciting and rewarding set of power-ups and boosters, and a competitive and social Mighty League mode. You can download and play Angry Birds Classic on Android by following our guide above. We hope you enjoy playing Angry Birds Classic on Android as much as we do!
-
FAQs
-
Here are some common questions and answers about Angry Birds Classic on Android:
-
Q: How many levels are there in Angry Birds Classic?
-
A: There are over 800 levels in Angry Birds Classic, spread across 15 episodes. Each episode has a different theme, setting, and objective. You can unlock new episodes by completing previous ones or by finding hidden golden eggs.
-
Q: How do I get more coins in Angry Birds Classic?
-
A: Coins are the currency of Angry Birds Classic that you can use to buy power-ups and boosters. You can get more coins by completing levels, earning stars, ranking higher in the Mighty League, watching ads, or buying them with real money.
-
Q: How do I use the Mighty Eagle in Angry Birds Classic?
-
A: The Mighty Eagle is a powerful bird that can be summoned once per hour to destroy all the pigs in a level. You can use the Mighty Eagle by tapping on the eagle icon in the top left corner of the screen and then launching a can of sardines at the pigs. The Mighty Eagle will then swoop down and wipe out everything in its path. You can earn feathers by using the Mighty Eagle and achieving a certain percentage of destruction in each level.
-
Q: How do I update Angry Birds Classic on Android?
-
A: You can update Angry Birds Classic on Android by going to Google Play Store and tapping on the update button. You can also enable the auto-update option in the settings of Google Play Store to update Angry Birds Classic automatically whenever a new version is available. Updating Angry Birds Classic will give you access to new features, levels, episodes, bug fixes, and improvements.
-
Q: Is Angry Birds Classic safe and secure to play on Android?
-
A: Yes, Angry Birds Classic is safe and secure to play on Android. The game does not contain any viruses, malware, or other harmful content that can damage your device or compromise your privacy. The game also does not collect or share any personal or sensitive information from you or your device without your consent. The game only requires some permissions and access to your device's features, such as storage, internet, etc., to function properly and enhance your gaming experience.