diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Pacific Physics Volume 1 PDF and Ace Your A Level Physics Exams.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Pacific Physics Volume 1 PDF and Ace Your A Level Physics Exams.md
deleted file mode 100644
index 9a4519881298471ac255526086161c0d0b68fcbd..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Pacific Physics Volume 1 PDF and Ace Your A Level Physics Exams.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Pacific Physics Volume 1: A Comprehensive Guide for A Level Students
-
If you are an A level student who is looking for a reliable and comprehensive physics textbook, you might have heard of Pacific Physics Volume 1. This book is widely used by students and teachers in Singapore and other countries as a reference and study guide for physics. But what exactly is Pacific Physics Volume 1, and why should you read it? In this article, we will answer these questions and more. We will also show you how to download Pacific Physics Volume 1 in PDF format, so you can access it anytime and anywhere.
-
What is Pacific Physics Volume 1?
-
Pacific Physics Volume 1 is a physics textbook written by Poh Liong Yong, a former lecturer at Nanyang Technological University in Singapore. It was first published in 1996 by Pan Pacific, a leading educational publisher in Asia. It is designed to cover the topics required for the A level physics examination, which is taken by students who wish to pursue higher education in science, engineering, or medicine.
Poh Liong Yong is a well-known physics educator and author in Singapore. He has over 30 years of experience in teaching physics at various levels, from secondary school to university. He has also written several other physics books, such as Essential Concepts of Physics, Advanced Level Practical Work for Physics, and Understanding Mechanics.
-
What are the main topics covered in Pacific Physics Volume 1?
-
Pacific Physics Volume 1 covers the following topics:
-
-
Units and Dimensions
-
Measurements
-
Vectors
-
Static Equilibrium
-
Kinematics
-
Dynamics
-
Pressure in Liquids and Archimedes' Principle
-
Circular Motion
-
Universal Gravitation
-
Simple Harmonic Motion
-
Elasticity
-
Thermometry
-
Heat Capacity and Latent Heat
-
The Gas Laws
-
Thermodynamics
-
Thermal Conduction
-
Convection and Radiation
-
-
How is Pacific Physics Volume 1 different from other physics textbooks?
-
Pacific Physics Volume 1 is different from other physics textbooks in several ways:
-
-
It follows the latest syllabus and exam format of the A level physics examination, which is based on the Cambridge International AS and A Level Physics syllabus.
-
It provides clear and concise explanations of each concept, with diagrams, graphs, tables, and formulas to illustrate them.
-
It includes numerous examples and worked solutions to demonstrate how to apply the concepts to solve problems.
-
It offers a variety of exercises at the end of each chapter, ranging from multiple-choice questions to structured questions to essay questions. The answers and solutions are also provided at the end of the book.
-
It contains review questions and summary points at the end of each topic, to help students revise and consolidate their learning.
-
It features practical work sections that introduce students to the experimental aspects of physics, with instructions on how to perform experiments and record observations.
-
-
Why should you read Pacific Physics Volume 1?
-
Pacific Physics Volume 1 is a valuable resource for anyone who wants to learn physics at an advanced level. Here are some reasons why you should read it:
-
-
Pacific Physics Volume 1 is aligned with the latest syllabus and exam requirements
-
If you are preparing for the A level physics examination, you need a textbook that covers every topic you are expected to know. Pacific Physics Volume 1 follows the latest syllabus and exam format of the A level physics examination, which is based on the Cambridge International AS and A Level Physics syllabus, so you can be confident that you are learning the right content and skills for your exam.
-
Pacific Physics Volume 1 provides clear explanations and examples for each concept
-
If you want to understand physics concepts deeply and thoroughly, you need explanations that are clear and concise. Pacific Physics Volume 1 explains each concept with supporting diagrams, graphs, tables, and formulas, and its numerous worked examples demonstrate how to apply the concepts to solve problems, so you can grasp them easily and effectively.
-
Pacific Physics Volume 1 offers plenty of exercises and solutions for practice and revision
-
If you want to master physics concepts fully and confidently, you need plenty of practice material. Pacific Physics Volume 1 offers a variety of exercises at the end of each chapter, ranging from multiple-choice to structured to essay questions, with answers and solutions provided at the end of the book. It also includes review questions and summary points at the end of each topic to help you revise and consolidate your learning, so you can practice your skills and knowledge regularly and effectively.
-
How can you download Pacific Physics Volume 1 in PDF format?
-
If you want to access Pacific Physics Volume 1 anytime and anywhere, you might want to download it in PDF format. However, before you do so, you should be aware of the benefits and drawbacks of downloading it in PDF format. You should also know where to find reliable sources to download it in PDF format.
-
The benefits of downloading Pacific Physics Volume 1 in PDF format
-
Downloading Pacific Physics Volume 1 in PDF format has some benefits:
-
-
You can save money by not buying a physical copy of the book.
-
You can save space by not storing a bulky book on your shelf or bag.
-
You can access it anytime and anywhere on your computer or mobile device.
-
You can zoom in or out on any page or section of the book.
-
You can search for any word or phrase within the book.
-
You can highlight or annotate any part of the book.
-
-
The drawbacks of downloading Pacific Physics Volume 1 in PDF format
-
Downloading Pacific Physics Volume 1 in PDF format also has some drawbacks:
-
-
You might violate the copyright laws if you download it illegally or share it with others without permission.
-
You might expose your device to viruses or malware if you download it from untrustworthy sources.
-
You might compromise your reading experience if you download it with poor quality or formatting.
-
You might strain your eyes or battery if you read it on a screen for too long.
-
-
The best sources to download Pacific Physics Volume 1 in PDF format
-
If you decide to download Pacific Physics Volume 1 in PDF format, you should do so from reputable sources that offer high-quality downloads legally. Here are some sources that we recommend:
-
-
Google Books: a service that allows users to preview or buy books online.
-
Internet Archive: a non-profit library that offers free access to millions of books, movies, music, websites, and more.
-
Goodreads: a social networking site that allows users to discover, rate, and review books.
-
-
These sources are reliable and legal, but they might not have the latest edition or the complete content of Pacific Physics Volume 1. Therefore, you should always check the quality and validity of the PDF file before downloading it. You should also respect the author's rights and not distribute or reproduce the PDF file without permission.
-
Conclusion
-
Pacific Physics Volume 1 is a physics textbook written by Poh Liong Yong, a former lecturer at Nanyang Technological University in Singapore. It is designed to cover the topics required for the A level physics examination, which is taken by students who wish to pursue higher education in science, engineering, or medicine. It provides clear explanations and examples for each concept, and offers plenty of exercises and solutions for practice and revision. It also introduces students to the experimental aspects of physics, with practical work sections. Pacific Physics Volume 1 is a valuable resource for anyone who wants to learn physics at an advanced level. You can download it in PDF format from reputable sources such as Google Books, Internet Archive, or Goodreads. However, you should be aware of the benefits and drawbacks of downloading it in PDF format, and respect the author's rights and not distribute or reproduce the PDF file without permission.
-
FAQs
-
Here are some frequently asked questions about Pacific Physics Volume 1:
-
-
What is the difference between Pacific Physics Volume 1 and Volume 2?
-
Pacific Physics Volume 1 covers the topics required for the AS level physics examination, while Pacific Physics Volume 2 covers the topics required for the A level physics examination. The AS level physics examination is taken at the end of the first year of A level studies, while the A level physics examination is taken at the end of the second year of A level studies.
-
How many pages does Pacific Physics Volume 1 have?
-
Pacific Physics Volume 1 has 560 pages in total.
-
How much does Pacific Physics Volume 1 cost?
-
The price of Pacific Physics Volume 1 varies depending on the source and edition. The latest edition (2019) costs S$39.90 on Pan Pacific's website.
-
Is Pacific Physics Volume 1 suitable for self-study?
-
Pacific Physics Volume 1 is suitable for self-study, as it provides clear explanations and examples for each concept, and offers plenty of exercises and solutions for practice and revision. However, it is advisable to consult a teacher or tutor if you encounter any difficulties or doubts while studying.
-
Is Pacific Physics Volume 1 available in other languages?
-
Pacific Physics Volume 1 is only available in English.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Elsa 3.5 Audi Vw Data Serial Key.md b/spaces/1gistliPinn/ChatGPT4/Examples/Elsa 3.5 Audi Vw Data Serial Key.md
deleted file mode 100644
index 39083b94e87c0ab43d7782dfa8b8b28ea0b13bcf..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Elsa 3.5 Audi Vw Data Serial Key.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
The article claims that a hacker known as Victor actively follows the car manufacturers' software and keeps Elsawin updated with their newest data, and that whenever he releases an update, an activation guide explaining how to apply it is shared as well. On that basis, the article offers an Elsawin 5.2 "final code" keygen and an offline installer that requires no computer, and claims these tools can be used to update Elsawin 5.2 to the Elsawin 5.3 final code for free.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AN1 Presents My Talking Tom Friends MOD APK - Download and Have Fun.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AN1 Presents My Talking Tom Friends MOD APK - Download and Have Fun.md
deleted file mode 100644
index 19c180cca4396c1373c13e93b36abc2c818e2e11..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AN1 Presents My Talking Tom Friends MOD APK - Download and Have Fun.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Download My Talking Tom Friends Mod APK An1: A Fun and Interactive Game for All Ages
-
Do you love playing with cute and funny animals? Do you want to have a virtual pet that you can take care of, dress up, and play with? If you answered yes, then you will love My Talking Tom Friends, a popular game from Outfit7 Limited. In this game, you can join Tom, Angela, Hank, Ben, Ginger, and Becca as they live together in a cozy house. You can interact with them, feed them, bathe them, play with them, and watch them grow. You can also customize their appearance and their house, play mini-games, and chat with other players online. Sounds fun, right?
-
But what if you want to enjoy the game without any limitations or interruptions? What if you want to have unlimited money and diamonds, unlock all the characters and outfits, and remove all the ads? Well, there is a way to do that. You can download My Talking Tom Friends Mod APK An1, a modified version of the game that gives you access to all these features and more. In this article, we will tell you everything you need to know about My Talking Tom Friends Mod APK An1, including what it is, what are its benefits, how to download and install it, and what precautions to take before doing so. Let's get started!
What is My Talking Tom Friends?
-
My Talking Tom Friends is a casual simulation game that was released in June 2020 by Outfit7 Limited, the same developer behind the famous My Talking Tom series. The game has over 100 million downloads on Google Play Store and has received positive reviews from critics and players alike. The game is suitable for all ages and is available in multiple languages.
-
The gameplay of My Talking Tom Friends
-
The gameplay of My Talking Tom Friends is simple and intuitive. You start by choosing one of the six characters: Tom, Angela, Hank, Ben, Ginger, or Becca. Each character has its own personality, voice, and style. You can then move into a house with your chosen character and the rest of the gang. You can explore the house and interact with different objects and items. You can also interact with your characters by tapping on them, dragging them around, or speaking to them. They will respond to your actions and voice with cute expressions and sounds.
-
Your main goal in the game is to take care of your characters' needs and wants. You have to feed them when they are hungry, bathe them when they are dirty, put them to bed when they are sleepy, heal them when they are sick, and entertain them when they are bored. You can also fulfill their wishes by giving them gifts or taking them to different places. By doing so, you will increase their happiness level and earn coins.
-
The features of My Talking Tom Friends
-
My Talking Tom Friends has many features that make it fun and engaging. Here are some of them:
-
Customize your characters and house
-
You can customize your characters' appearance by changing their clothes, accessories, hairstyles, eye colors, skin tones, etc. You can also customize your house by changing the furniture, wallpaper, floor tiles, etc. You can buy new items from the shop using coins or diamonds.
-
Play mini-games and earn coins
-
You can play mini-games with your characters and have fun. There are many mini-games to choose from, such as Bus Jump, Flappy Tom, Planet Hop, etc. You can earn coins by playing these games and use them to buy more items or gifts.
-
Interact with your friends and other players
-
You can interact with your friends and other players online by visiting their houses, sending them messages, or giving them likes. You can also join clubs and chat with other club members. You can also compete with other players in leaderboards and events.
-
What is My Talking Tom Friends Mod APK An1?
-
My Talking Tom Friends Mod APK An1 is a modified version of the original game that gives you some extra features and advantages. It is not an official app from Outfit7 Limited, but a third-party app created by some developers. You can download it from various websites that offer modded apps and games.
-
-
The benefits of My Talking Tom Friends Mod APK An1
-
My Talking Tom Friends Mod APK An1 has many benefits that make it more enjoyable and convenient than the original game. Here are some of them:
-
Unlimited money and diamonds
-
With My Talking Tom Friends Mod APK An1, you will have unlimited money and diamonds in your account. You can use them to buy anything you want from the shop, such as clothes, furniture, gifts, etc. You can also use them to unlock new characters and outfits. You don't have to worry about running out of money or diamonds ever again.
-
Unlocked all characters and outfits
-
With My Talking Tom Friends Mod APK An1, you will have access to all the characters and outfits in the game. You don't have to wait for them to be unlocked or pay for them with real money. You can choose any character you like and dress them up in any outfit you want. You can also switch between characters anytime you want.
-
No ads and no root required
-
With My Talking Tom Friends Mod APK An1, you will not see any ads in the game. You can play the game without any interruptions or distractions. You can also enjoy the game without rooting your device. You don't have to risk damaging your device or losing your warranty by rooting it.
-
How to download and install My Talking Tom Friends Mod APK An1?
-
If you want to download and install My Talking Tom Friends Mod APK An1, you need to follow some simple steps. Here they are:
-
The steps to download and install My Talking Tom Friends Mod APK An1
-
-
Go to a website that offers My Talking Tom Friends Mod APK An1, such as [an1.com] or [apkdone.com].
-
Find the download link for My Talking Tom Friends Mod APK An1 and click on it.
-
Wait for the download to finish and then locate the file on your device.
-
Tap on the file and allow the installation from unknown sources if prompted.
-
Wait for the installation to complete and then launch the game.
-
Enjoy playing My Talking Tom Friends Mod APK An1 with unlimited money and diamonds, unlocked all characters and outfits, no ads, and no root required.
-
-
The precautions to take before downloading and installing My Talking Tom Friends Mod APK An1
-
Before downloading and installing My Talking Tom Friends Mod APK An1, you need to take some precautions to avoid any problems or risks. Here are some of them:
-
-
Make sure you have enough space on your device for the file size of My Talking Tom Friends Mod APK An1.
-
Make sure you have a stable internet connection for the download process.
-
Make sure you download My Talking Tom Friends Mod APK An1 from a reliable and trusted website that does not contain any viruses or malware.
-
Make sure you backup your original game data before installing My Talking Tom Friends Mod APK An1 in case you want to restore it later.
-
Make sure you do not use your real account or personal information when playing My Talking Tom Friends Mod APK An1, as your account may be banned or hacked by the game developers or other players.
-
-
Conclusion
-
My Talking Tom Friends is a fun and interactive game that lets you play with cute and funny animals in a cozy house. You can take care of them, dress them up, play with them, and watch them grow. You can also customize your house, play mini-games, and chat with other players online.
-
If you want to enjoy the game without any limitations or interruptions, you can download My Talking Tom Friends Mod APK An1, a modified version of the game that gives you unlimited money and diamonds, unlocks all characters and outfits, removes ads, and requires no root. You can download it from various websites that offer modded apps and games, such as [an1.com] or [apkdone.com]. Before doing so, however, take some precautions: make sure you have enough space on your device and a stable internet connection, download only from a reliable and trusted website, back up your original game data, and use a throwaway account instead of your real personal information.
-
We hope this article has helped you learn more about My Talking Tom Friends Mod APK An1 and how to download and install it. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading and have fun playing My Talking Tom Friends Mod APK An1!
-
FAQs
-
Here are some frequently asked questions about My Talking Tom Friends Mod APK An1:
-
-
What is the difference between My Talking Tom Friends and My Talking Tom Friends Mod APK An1?
-
My Talking Tom Friends is the original game from Outfit7 Limited that lets you play with cute and funny animals in a cozy house. My Talking Tom Friends Mod APK An1 is a modified version of the game that gives you unlimited money and diamonds, unlocked all characters and outfits, no ads, and no root required.
-
Is My Talking Tom Friends Mod APK An1 safe to download and install?
-
My Talking Tom Friends Mod APK An1 is generally safe to download and install if you follow the precautions mentioned above: enough free space, a stable internet connection, a reliable and trusted website, a backup of your original game data, and a throwaway account instead of your real personal information. However, downloading and installing any modded app or game always carries some risk, so proceed at your own discretion and responsibility.
-
Will I get banned or hacked by playing My Talking Tom Friends Mod APK An1?
-
There is a possibility that you may get banned or hacked by playing My Talking Tom Friends Mod APK An1, especially if you use your real account or personal information. The game developers or other players may detect that you are using a modded version of the game and take action against you. Therefore, we recommend that you use a fake account or personal information when playing My Talking Tom Friends Mod APK An1.
-
Can I play My Talking Tom Friends Mod APK An1 offline?
-
Yes, you can play My Talking Tom Friends Mod APK An1 offline without any internet connection. However, some features of the game may not work properly offline, such as visiting other players' houses, joining clubs, chatting with other club members, competing in leaderboards and events, etc. Therefore, we suggest that you play My Talking Tom Friends Mod APK An1 online for the best experience.
-
Can I update My Talking Tom Friends Mod APK An1 to the latest version?
-
No, you cannot update My Talking Tom Friends Mod APK An1 to the latest version from the Google Play Store or the official website of Outfit7 Limited. If you do so, you will lose all the modded features and advantages of My Talking Tom Friends Mod APK An1. Instead, you need to download and install the latest version of My Talking Tom Friends Mod APK An1 from the same website where you downloaded it before.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Black Duck and Get Complete Visibility into Your Application and Container Composition.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Black Duck and Get Complete Visibility into Your Application and Container Composition.md
deleted file mode 100644
index 99b136904ddf30ec409dc0ac7f0e9befe5b1353d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Black Duck and Get Complete Visibility into Your Application and Container Composition.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-
How to Download Black Duck: A Guide for Open Source Security and Compliance
-
Open source software is widely used in modern applications and containers, but it also comes with some risks that need to be managed. These include security vulnerabilities, license compliance issues, and operational challenges. How can developers and organizations ensure that they are using open source safely and effectively?
One solution is Black Duck, a software composition analysis (SCA) tool that helps teams identify and manage the open source components in their codebase. Black Duck provides complete visibility into the open source usage, detects and prioritizes vulnerabilities, enforces license policies, and generates software bill of materials (SBOM). In this article, we will show you how to download and install Black Duck using Docker or Kubernetes, and highlight some of the benefits and alternatives of this tool.
-
How to Download Black Duck
-
Black Duck is deployed as a set of Docker containers, which together comprise the application. Each container fulfills a different role, such as processing UI requests, acting as an enterprise search platform, or storing data. To download and install Black Duck, you will need to meet some hardware and software requirements, such as:
-
-
A 64-bit, 5-core processor
-
20 GB of RAM
-
250 GB of free space for the database and other containers
-
Docker 18.03.x or newer
-
An orchestration tool such as Docker Swarm or Kubernetes
-
A supported operating system such as CentOS 7.3 or Ubuntu 16.04.x
There are two main methods for installing Black Duck: using Docker Swarm or using Kubernetes. We will briefly describe each method below.
-
Using Docker Swarm
-
Docker Swarm is a native clustering tool for Docker that allows you to create and manage a group of Docker nodes as a single virtual system. To install Black Duck using Docker Swarm, you will need to follow these steps:
-
-
Install Docker CE on your host machine.
-
Initialize a swarm by running docker swarm init.
-
Create a new directory for Black Duck orchestration files and download them from GitHub.
-
Edit the docker-compose.local-overrides.yml file to customize your installation settings.
-
Run docker stack deploy -c docker-compose.yml -c docker-compose.local-overrides.yml blackduck to deploy the stack.
-
Wait for the containers to start up and check their status by running docker service ls.
-
Access the Black Duck UI by opening https://<host> in your browser.
-
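For reference, here is a condensed, hedged sketch of those Swarm commands in one place. The GitHub repository URL is an assumption for illustration; use whatever source and version the official install guide points you to.

```bash
# Step 2: initialize a single-node swarm on the host
docker swarm init

# Step 3: create a directory and fetch the orchestration files
# (repository URL assumed for illustration)
mkdir blackduck && cd blackduck
git clone https://github.com/blackducksoftware/hub.git
cd hub/docker-swarm

# Step 4: edit docker-compose.local-overrides.yml to customize settings, then...

# Step 5: deploy the stack
docker stack deploy \
  -c docker-compose.yml \
  -c docker-compose.local-overrides.yml \
  blackduck

# Step 6: watch the services come up
docker service ls

# Step 7: open https://<host> in a browser once all replicas are running
```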
-
Using Kubernetes
-
Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. To install Black Duck using Kubernetes, you will need to follow these steps:
-
-
Install Kubernetes on your host machine.
-
Create a namespace for Black Duck by running kubectl create namespace blackduck.
-
Create a persistent volume claim (PVC) for the database by running kubectl create -f pvc.json -n blackduck.
-
Create a secret for the certificate by running kubectl create secret generic blackduck-webserver-certificate -n blackduck --from-file=WEBSERVER_CUSTOM_CERT_FILE --from-file=WEBSERVER_CUSTOM_KEY_FILE.
-
Create a secret for the proxy by running kubectl create secret generic blackduck-proxy -n blackduck --from-file=HUB_PROXY_HOST --from-file=HUB_PROXY_PORT --from-file=HUB_PROXY_USERNAME --from-file=HUB_PROXY_PASSWORD.
-
Download the Black Duck Helm chart from GitHub and extract it.
-
Edit the values.yaml file to customize your installation settings.
-
Run helm install ./blackduck -n blackduck --namespace blackduck to install the chart.
-
Wait for the pods to start up and check their status by running kubectl get pods -n blackduck.
-
Access the Black Duck UI by opening https://<host> in your browser.
-
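The same steps, gathered into a single hedged sketch. The kubectl and helm commands are exactly those listed above; the secret file names are placeholders for files you supply, and the `-n blackduck` on the helm line follows the article's Helm 2 syntax, where `-n` names the release rather than the namespace.

```bash
# Step 2: create a dedicated namespace
kubectl create namespace blackduck

# Step 3: persistent volume claim for the database
kubectl create -f pvc.json -n blackduck

# Step 4: certificate secret (the two files hold your cert and private key)
kubectl create secret generic blackduck-webserver-certificate -n blackduck \
  --from-file=WEBSERVER_CUSTOM_CERT_FILE \
  --from-file=WEBSERVER_CUSTOM_KEY_FILE

# Step 5: proxy secret (only needed when running behind a proxy)
kubectl create secret generic blackduck-proxy -n blackduck \
  --from-file=HUB_PROXY_HOST --from-file=HUB_PROXY_PORT \
  --from-file=HUB_PROXY_USERNAME --from-file=HUB_PROXY_PASSWORD

# Steps 6-8: download and extract the chart, edit values.yaml, then install
# (Helm 2 syntax, as given in the steps above: -n is the release name)
helm install ./blackduck -n blackduck --namespace blackduck

# Step 9: watch the pods start, then open https://<host> in a browser
kubectl get pods -n blackduck
```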
-
Benefits of Black Duck
-
Black Duck is a powerful and comprehensive tool that helps teams manage their open source usage and mitigate the associated risks. Some of the benefits of using Black Duck are:
-
-
Visibility: Black Duck scans your codebase and identifies all the open source components, versions, licenses, and dependencies. It also creates a software bill of materials (SBOM) that documents the composition of your application.
-
Security: Black Duck monitors the open source components for known vulnerabilities and alerts you when new ones are discovered. It also provides remediation guidance and patch suggestions to help you fix the issues quickly and efficiently.
-
Compliance: Black Duck analyzes the licenses of the open source components and checks for any conflicts or obligations. It also helps you enforce your own license policies and generate reports for audits and due diligence.
-
Integration: Black Duck integrates with various tools and platforms that you use in your development lifecycle, such as IDEs, code repositories, build systems, CI/CD pipelines, and container registries. This enables you to scan your code at any stage and automate your workflows.
-
-
Alternatives to Black Duck
-
Black Duck is not the only tool that offers software composition analysis (SCA) functionality. There are some other tools that you can consider as alternatives or complements to Black Duck, such as:
WhiteSource: a cloud-based SCA tool that helps teams manage their open source security, compliance, and quality. It also provides a unified dashboard for all your projects and integrations with various tools.
-
Snyk: a developer-focused SCA tool that helps teams find and fix vulnerabilities in their open source dependencies. It also provides a CLI tool, a GitHub bot, and a vulnerability database.
-
FOSSA: a modern SCA tool that helps teams automate their open source compliance and license management. It also provides a web app, a CLI tool, and a GitHub integration.
-
Dependabot: a GitHub-native SCA tool that helps teams keep their dependencies up to date and secure. It also provides automated pull requests, security alerts, and configuration options.
-
-
Conclusion
-
In this article, we have shown you how to download and install Black Duck using Docker Swarm or Kubernetes, and highlighted some of the benefits and alternatives of this tool. Black Duck is a software composition analysis (SCA) tool that helps teams identify and manage the open source components in their codebase. It provides complete visibility into the open source usage, detects and prioritizes vulnerabilities, enforces license policies, and generates software bill of materials (SBOM). If you are looking for a solution to manage your open source security and compliance, you should give Black Duck a try.
-
FAQs
-
What is the difference between Black Duck and Synopsys?
-
Synopsys is the company that owns Black Duck. Synopsys is a leader in software security and quality solutions, offering a range of products and services for various industries and domains. Black Duck is one of the products under Synopsys' portfolio.
-
How much does Black Duck cost?
-
The pricing of Black Duck depends on various factors, such as the number of users, projects, scans, integrations, etc. You can request a quote from Synopsys through the quote request form on its website.
-
How can I get support for Black Duck?
-
You can get support for Black Duck by contacting Synopsys through various channels, such as email, phone, chat, or web portal. You can also access the online documentation, knowledge base, community forum, and training resources for Black Duck.
-
What are the system requirements for Black Duck?
-
The system requirements for Black Duck vary depending on the deployment method and the scale of your application. However, some of the common requirements are:
-
-
A 64-bit, 5-core processor
-
20 GB of RAM
-
250 GB of free space for the database and other containers
-
Docker 18.03.x or newer
-
An orchestration tool such as Docker Swarm or Kubernetes
-
A supported operating system such as CentOS 7.3 or Ubuntu 16.04.x
-
-
How can I update Black Duck?
-
You can update Black Duck by downloading the latest version of the orchestration files and running the appropriate commands for your deployment method. For example, if you are using Docker Swarm, you can run docker stack rm blackduck to remove the existing stack, and then run docker stack deploy -c docker-compose.yml -c docker-compose.local-overrides.yml blackduck to deploy the new version. You can find more details on how to update Black Duck in the Black Duck Docker Install Guide.
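As a minimal sketch of that Swarm update flow, assuming the new version's orchestration files are already in your working directory:

```bash
# Remove the running stack; data stored in named volumes is preserved
docker stack rm blackduck

# Redeploy using the new version's orchestration files
docker stack deploy \
  -c docker-compose.yml \
  -c docker-compose.local-overrides.yml \
  blackduck
```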
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/All Songs Jukebox The Ultimate Music Streaming and Downloading Service.md b/spaces/1phancelerku/anime-remove-background/All Songs Jukebox The Ultimate Music Streaming and Downloading Service.md
deleted file mode 100644
index 99bfef5ca88280eb31195b3aca558cb71f3b8ce1..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/All Songs Jukebox The Ultimate Music Streaming and Downloading Service.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
All Songs Download Jukebox: How to Enjoy Unlimited Music for Free
-
Do you love listening to music? Do you want to have access to thousands of songs from different genres, artists, and eras? Do you want to create your own playlists and share them with your friends? If you answered yes to any of these questions, then you might be interested in learning more about all songs download jukebox. A jukebox is a device that can play music from various sources, such as CDs, vinyl records, digital files, or online streaming services. You can use a jukebox to enjoy unlimited music for free, as long as you know how to download and play all songs on it. In this article, we will explain what a jukebox is, how to download all songs for jukebox, and how to play all songs on jukebox. Let's get started!
What Is a Jukebox?
-
A jukebox is a machine that can play music from different media, such as CDs, vinyl records, digital files, or online streaming services. It usually has a coin-operated mechanism that allows users to select and play songs from a catalog or a playlist, and it can also have speakers, amplifiers, lights, and other features that enhance the musical experience. Jukeboxes can be found in various places, such as bars, restaurants, cafes, arcades, or homes.
-
A Brief History of Jukeboxes
-
The first jukeboxes were invented in the late 19th century, when phonographs and gramophones were used to play music in public places. The term "jukebox" comes from the word "juke", which means a disorderly or rowdy place. The first coin-operated phonographs were introduced in 1889 by Louis Glass and William Arnold in San Francisco. They were called "nickel-in-the-slot machines" and could play one song per coin. The popularity of jukeboxes increased in the 1930s and 1940s, when they became more sophisticated and could play multiple songs from different records. The golden age of jukeboxes was in the 1950s and 1960s, when rock and roll music dominated the charts and jukeboxes became symbols of youth culture and rebellion. The decline of jukeboxes began in the 1970s and 1980s, when cassette tapes, CDs, and digital music players replaced vinyl records as the main source of music. However, jukeboxes never disappeared completely and are still used today by music lovers and collectors.
-
Types of Jukeboxes
-
There are many types of jukeboxes that can play different kinds of music from different sources. Some of the most common types are:
-
-
Vintage jukeboxes: These are the old-fashioned jukeboxes that use vinyl records or CDs to play music. They have a nostalgic appeal and a retro style that can add charm and character to any place. They usually have a limited number of songs and require manual selection and loading.
-
Modern jukeboxes: These are the new-generation jukeboxes that use digital files or online streaming services to play music. They have a sleek design and a touch screen interface that allows users to browse and select songs from a large catalog or a customized playlist. They usually have unlimited access to music and require internet connection.
-
Hybrid jukeboxes: These are the combination of vintage and modern jukeboxes that can play music from both vinyl records or CDs and digital files or online streaming services. They have a versatile and adaptable functionality that can suit different preferences and needs. They usually have a wide range of songs and require both manual and digital operation.
-
-
Benefits of Jukeboxes
-
Jukeboxes are not only fun and entertaining devices, but also have many benefits that can enhance your musical enjoyment. Some of the benefits are:
-
-
-
Variety: Jukeboxes can play music from different genres, artists, and eras, giving you a diverse and eclectic musical experience. You can discover new songs, revisit old favorites, or mix and match different styles and moods.
-
Customization: Jukeboxes can allow you to create your own playlists and share them with your friends, giving you a personalized and social musical experience. You can choose the songs that suit your taste, mood, or occasion, or let your friends join in the fun and suggest their own songs.
-
Quality: Jukeboxes can deliver high-quality sound and performance, giving you a satisfying and immersive musical experience. You can enjoy the crisp and clear sound of vinyl records, the convenience and portability of digital files, or the freshness and diversity of online streaming services.
-
-
How to Download All Songs for Jukebox
-
If you want to enjoy unlimited music for free on your jukebox, you need to know how to download all songs for jukebox. There are two main ways to do this: using online streaming services or using free music download sites. Let's take a look at each option in more detail.
-
Online Streaming Services
-
Online streaming services are platforms that allow you to listen to music online without downloading it to your device. You can access millions of songs from various artists, genres, and eras, as well as create your own playlists and discover new music. Some of the most popular online streaming services are Spotify, Apple Music, and Amazon Music. Here are some features and tips for each service:
-
Spotify
-
Spotify is one of the most widely used online streaming services in the world, with over 350 million users. It offers a free plan that allows you to listen to music with ads, or a premium plan that allows you to listen to music without ads, download songs for offline listening, and enjoy other benefits. To use Spotify on your jukebox, you need to:
-
-
Create an account on Spotify.com or download the Spotify app on your device.
-
Search for the songs, albums, artists, or playlists that you want to listen to.
-
If you have a premium plan, you can download the songs by toggling the "Download" switch on the top right corner of the screen.
-
Connect your device to your jukebox using a cable or Bluetooth.
-
Play the songs on your device and enjoy them on your jukebox.
-
-
Apple Music
-
Apple Music is another popular online streaming service that has over 60 million users. It offers a free trial for three months, after which you need to pay a monthly fee to continue using it. It allows you to listen to music without ads, download songs for offline listening, and access exclusive content and features. To use Apple Music on your jukebox, you need to:
-
-
Create an account on Apple.com or download the Apple Music app on your device.
-
Search for the songs, albums, artists, or playlists that you want to listen to.
-
Download the songs by tapping the cloud icon next to each song.
-
Connect your device to your jukebox using a cable or Bluetooth.
-
Play the songs on your device and enjoy them on your jukebox.
-
-
Amazon Music
-
Amazon Music is another online streaming service that has over 55 million users. It offers a free plan that allows you to listen to music with ads, or a paid plan that allows you to listen to music without ads, download songs for offline listening, and access more songs and features. To use Amazon Music on your jukebox, you need to:
-
-
Create an account on Amazon.com or download the Amazon Music app on your device.
-
Search for the songs, albums, artists, or playlists that you want to listen to.
-
If you have a paid plan, you can download the songs by tapping the "More Options" icon next to each song and selecting "Download".
-
Connect your device to your jukebox using a cable or Bluetooth.
-
Play the songs on your device and enjoy them on your jukebox.
-
Enjoy the Music and Have Fun
-
The final step is to enjoy the music and have fun. You can adjust the volume, the bass, the treble, and other settings on your jukebox to suit your preferences. You can also skip, pause, or repeat songs as you wish. You can sing along, dance, or just relax and listen to the music. You can also invite your friends and family to join you and share your musical taste. You can have a party, a karaoke night, or a chill session with your jukebox.
-
Conclusion
-
All songs download jukebox is a great way to enjoy unlimited music for free. You can download all songs for jukebox using online streaming services or free music download sites. You can play all songs on jukebox by connecting your device to the jukebox and selecting your playlist or album. You can have fun and create your own musical atmosphere with your jukebox. Whether you prefer vintage or modern jukeboxes, you can find the one that suits your style and budget. All you need is a love for music and a desire to have a good time.
-
FAQs
-
Here are some frequently asked questions about all songs download jukebox:
-
-
Q: How much does a jukebox cost?
-
A: The price of a jukebox depends on its type, model, condition, and features. A vintage jukebox can cost anywhere from $500 to $10,000 or more, depending on its rarity and quality. A modern jukebox can cost anywhere from $200 to $2,000 or more, depending on its brand and functionality. A hybrid jukebox can cost anywhere from $300 to $3,000 or more, depending on its versatility and design.
-
Q: Where can I buy a jukebox?
-
A: You can buy a jukebox from various sources, such as online retailers, physical stores, auctions, or private sellers. Some of the best online retailers for jukeboxes are Amazon, eBay, Walmart, and Best Buy. Some of the best physical stores for jukeboxes are Target, Sears, Home Depot, and Lowe's. Some of the best auctions for jukeboxes are Sotheby's, Christie's, Heritage Auctions, and Bonhams. Some of the best private sellers for jukeboxes are Craigslist, Facebook Marketplace, OfferUp, and Letgo.
-
Q: How do I maintain my jukebox?
-
A: To keep your jukebox in good condition, you need to perform some regular maintenance tasks, such as cleaning, lubricating, repairing, and updating. You need to clean your jukebox with a soft cloth and a mild detergent to remove dust and dirt. You need to lubricate your jukebox with oil or grease to prevent rust and friction. You need to repair your jukebox with professional help or DIY tools if it has any damages or malfunctions. You need to update your jukebox with new software or firmware if it has any bugs or glitches.
-
Q: How do I customize my jukebox?
-
A: To make your jukebox more unique and personal, you can customize it with various accessories and decorations. You can add lights, stickers, decals, posters, or paintings to your jukebox to make it more colorful and attractive. You can add speakers, headphones, microphones, or karaoke machines to your jukebox to make it more loud and interactive. You can add coins, tokens, cards , or buttons to your jukebox to make it more fun and authentic. You can also change the color, shape, or size of your jukebox to make it more suitable for your space and style.
-
Q: How do I troubleshoot my jukebox?
-
A: If your jukebox is not working properly, you can try some basic troubleshooting steps, such as checking the power, the connection, the settings, and the songs. You can check the power by plugging and unplugging your jukebox and making sure that it is turned on. You can check the connection by reconnecting your device and your jukebox and making sure that they are paired and synced. You can check the settings by adjusting the volume, the bass, the treble, and other options on your jukebox and making sure that they are correct. You can check the songs by deleting and downloading them again and making sure that they are compatible and playable.
-
-
I hope you enjoyed this article on all songs download jukebox. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy listening!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Archero Hack APK and Enjoy Crossbow Archery with Amazing Features.md b/spaces/1phancelerku/anime-remove-background/Download Archero Hack APK and Enjoy Crossbow Archery with Amazing Features.md
deleted file mode 100644
index fbbe90b2f81bba153c0d4ca49c1ceec57a1ee8ee..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Archero Hack APK and Enjoy Crossbow Archery with Amazing Features.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
Archero Hack APK: How to Download and Play the Modded Version of Archero
-
Are you a fan of Archero, the addictive roguelike game where you control a lone archer who must fight against waves of enemies and obstacles? Do you want to experience the game with unlimited resources, unlocked features, and enhanced gameplay? If yes, then you might be interested in trying out Archero Hack APK, a modded version of the game that offers many advantages over the original one. But before you do that, you should know what Archero Hack APK is, how it works, and what are the risks and precautions involved in using it. In this article, we will tell you everything you need to know about Archero Hack APK, including how to download and install it on your Android device, and how to enjoy it without getting banned or infected by malware. Read on to find out more!
What is Archero?
-
Archero is a mobile game developed by Habby, a Chinese studio that specializes in casual games. It was released in 2019 and has since become one of the most popular games on Google Play Store and App Store, with over 50 million downloads and a 4.3-star rating.
-
The gameplay and features of Archero
-
The gameplay of Archero is simple but challenging. You play as an archer who must survive different stages filled with enemies, traps, and bosses. You can move around by swiping on the screen, but you can only shoot when you stop moving. You can also choose from various skills that appear randomly after each level, such as multishot, ricochet, bouncy wall, etc. These skills can help you deal more damage, dodge attacks, or heal yourself.
-
Archero also has many features that make it fun and engaging. You can unlock different heroes with unique abilities, such as Helix, Meowgik, Sylvan, etc. You can also equip different weapons, armors, rings, pets, and other items that can boost your stats or grant you special effects. You can also upgrade your equipment by fusing duplicates or spending coins or gems. Moreover, you can explore different worlds and maps with different themes and difficulties, such as desert, forest, dungeon, etc.
-
The challenges and rewards of Archero
-
Archero is not an easy game. It requires skill, strategy, and luck to progress through the stages. You have to dodge enemy attacks while aiming at them accurately. You have to choose the best skills that suit your playstyle and situation. You have to manage your health and energy wisely. And most importantly, you have to deal with the randomness of the game. Sometimes you may get lucky and get powerful skills or items that make you unstoppable. Other times you may get unlucky and get useless skills or items that make you vulnerable.
-
-
But despite the challenges, Archero is also very rewarding. It gives you a sense of accomplishment when you clear a stage or defeat a boss. It gives you a thrill when you discover a new skill or item that changes your gameplay. It gives you a satisfaction when you upgrade your equipment or unlock a new hero. And it gives you a motivation when you see your progress on the leaderboard or achievements.
-
What is Archero Hack APK and how does it differ from the original game?
-
Archero Hack APK is a modified version of Archero that has been altered by some third-party developers or hackers to provide some benefits or advantages over the original game. These benefits or advantages may include:
Unlimited coins and gems that can be used to buy or upgrade anything in the game
-
Unlimited energy that allows you to play as long as you want without waiting for the energy bar to refill
-
Unlocked heroes, weapons, items, skills, and maps that are otherwise restricted or require real money to access
-
Enhanced gameplay that gives you more damage, speed, health, and other perks, making you stronger and faster than in the normal game
-
Archero Hack APK differs from the original game in many ways. It gives you everything you need or want in the game without effort or cost, making the game easier and more enjoyable for players who want fun without challenge or limitation. However, it also takes away some of the elements that keep the game fair and engaging for players who want a balanced experience, and it carries risks that you should be aware of before using it.
What are the benefits and drawbacks of using Archero Hack APK?
-
Using Archero Hack APK can have some benefits and drawbacks depending on your perspective and preference. Here are some of them:
-
The benefits of using Archero Hack APK
-
Some of the benefits of using Archero Hack APK are:
-
-
You can save time and money by getting unlimited resources and features without spending any real money or grinding for hours.
-
You can explore and experiment with different combinations of heroes, weapons, items, skills, and maps without any restriction or limitation.
-
You can have more fun and excitement by playing with enhanced gameplay and powerful abilities that make you feel like a god.
-
You can impress your friends or other players by showing off your achievements or progress in the game.
-
-
The drawbacks of using Archero Hack APK
-
Some of the drawbacks of using Archero Hack APK are:
-
-
You can lose the challenge and satisfaction of playing the game as it was intended by the developers. You may get bored or lose interest in the game after a while.
-
You can miss out on the updates and features that are added to the original game regularly. You may also encounter bugs or glitches that are not fixed in the modded version.
-
You can risk getting banned or suspended from the game if you are detected by the anti-cheat system or reported by other players. You may also lose your progress or account if that happens.
-
You can expose your device or data to malware or viruses that may be hidden in the modded APK file. You may also compromise your privacy or security if you grant permissions or access to unknown sources.
-
-
How to download and install Archero Hack APK on your Android device?
-
If you still want to try out Archero Hack APK despite the drawbacks and risks, you will need to follow some steps to download and install it on your Android device. Here are the steps:
The steps to download and install Archero Hack APK
-
The steps to download and install Archero Hack APK are:
-
-
Find a reliable and trustworthy source that provides the latest version of Archero Hack APK. You can search online or ask for recommendations from other users, but always check the reviews and ratings before downloading anything.
-
Download the Archero Hack APK file from the source you have chosen. You may need to enable the option to download from unknown sources in your device settings. You may also need to disable your antivirus or firewall temporarily if they block the download.
-
Locate the Archero Hack APK file in your device storage and tap on it to install it. You may need to grant some permissions or access to the app during the installation process. You may also need to verify your device or account if prompted.
-
Wait for the installation to finish and then launch the app from your home screen or app drawer. You may need to sign in with your Google Play account or create a new one if you don't have one.
-
Enjoy playing Archero Hack APK with unlimited resources, unlocked features, and enhanced gameplay!
-
-
The tips and tricks to enjoy Archero Hack APK
-
Some of the tips and tricks to enjoy Archero Hack APK are:
-
-
Use different heroes, weapons, items, skills, and maps to experiment with different strategies and combinations. You can also switch between them anytime you want without losing your progress.
-
Use the unlimited coins and gems to buy or upgrade anything you want in the game. You can also use them to revive yourself or skip levels if you get stuck or bored.
-
Use the enhanced gameplay to deal more damage, move faster, heal more, and avoid attacks. You can also use it to challenge yourself by increasing the difficulty or playing in different modes.
-
Be careful not to abuse the hack or cheat too much as it may ruin the fun or make the game too easy. You can also try playing the original game occasionally to compare and appreciate the difference.
-
Be respectful and responsible when playing online or with other players. Do not brag or boast about your hack or cheat as it may annoy or offend others. Do not use it to gain an unfair advantage or harm others as it may get you banned or reported.
-
-
Conclusion
-
In conclusion, Archero Hack APK is a modded version of Archero that offers many benefits and advantages over the original game, such as unlimited resources, unlocked features, and enhanced gameplay. However, it also has some drawbacks and risks, such as losing the challenge and satisfaction, missing out on the updates and features, risking getting banned or suspended, and exposing your device or data to malware or viruses. Therefore, you should be careful and cautious when using it, and follow some steps and tips to download and install it safely and enjoy it properly.
-
If you are interested in trying out Archero Hack APK, you can find it online from various sources, but make sure they are reliable and trustworthy. You can also ask for recommendations from other users who have used it before. But remember, use it at your own risk and discretion, and do not forget to have fun!
-
FAQs
-
What is Archero?
-
Archero is a mobile game developed by Habby that lets you play as an archer who must survive different stages filled with enemies, traps, and bosses. You can choose from various skills, heroes, weapons, items, and maps that can help you in your adventure.
-
What is Archero Hack APK?
-
Archero Hack APK is a modified version of Archero that has been altered by some third-party developers or hackers to provide some benefits or advantages over the original game, such as unlimited resources, unlocked features, and enhanced gameplay.
-
How do I download and install Archero Hack APK?
-
You can download and install Archero Hack APK by following these steps:
-
-
Find a reliable and trustworthy source that provides the latest version of Archero Hack APK.
-
Download the Archero Hack APK file from the source you have chosen.
-
Locate the Archero Hack APK file in your device storage and tap on it to install it.
-
Wait for the installation to finish and then launch the app from your home screen or app drawer.
-
Enjoy playing Archero Hack APK with unlimited resources, unlocked features, and enhanced gameplay!
-
-
Is Archero Hack APK safe to use?
Archero Hack APK is not safe to use, as it may violate the terms and conditions of the original game, and expose your device or data to malware or viruses. You may also risk getting banned or suspended from the game if you are detected by the anti-cheat system or reported by other players. Therefore, you should use it at your own risk and discretion, and take some precautions to protect yourself and your account.
-
What are some alternatives to Archero Hack APK?
-
If you are looking for some alternatives to Archero Hack APK that can provide you with similar benefits or advantages without the drawbacks or risks, you can try some of these options:
-
-
You can use some legitimate Archero cheats or tips that can help you improve your gameplay and progress faster in the game. For example, you can learn how to dodge enemy attacks, how to choose the best skills, how to optimize your equipment, etc. You can find some of these cheats or tips online or from other players.
-
You can use some Archero mod APKs that are verified and tested by reputable sources and do not contain any malware or viruses. These mod APKs may offer some features or enhancements that are not available in the original game, such as custom skins, graphics, sounds, etc. However, they may not offer unlimited resources or unlocked features as Archero Hack APK does.
-
You can use an Android emulator to play Archero on your PC or laptop instead of your mobile device. Emulators may offer better performance, graphics, controls, and compatibility, though they also require more storage, memory, and processing power than a phone.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/prior_transformer.py b/spaces/1toTree/lora_test/ppdiffusers/models/prior_transformer.py
deleted file mode 100644
index 8f28c72050dab88e85dfabc37dead90389a2df2f..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/models/prior_transformer.py
+++ /dev/null
@@ -1,220 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import Optional, Union
-
-import paddle
-import paddle.nn as nn
-import paddle.nn.functional as F
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..modeling_utils import ModelMixin
-from ..utils import BaseOutput
-from .attention import BasicTransformerBlock
-from .embeddings import TimestepEmbedding, Timesteps
-
-NEG_INF = -1e4
-
-
-@dataclass
-class PriorTransformerOutput(BaseOutput):
- """
- Args:
- predicted_image_embedding (`paddle.Tensor` of shape `(batch_size, embedding_dim)`):
- The predicted CLIP image embedding conditioned on the CLIP text embedding input.
- """
-
- predicted_image_embedding: paddle.Tensor
-
-
-class PriorTransformer(ModelMixin, ConfigMixin):
- """
- The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the
- transformer predicts the image embeddings through a denoising diffusion process.
-
- This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library
- implements for all the models (such as downloading or saving, etc.)
-
- For more details, see the original paper: https://arxiv.org/abs/2204.06125
-
- Parameters:
- num_attention_heads (`int`, *optional*, defaults to 32): The number of heads to use for multi-head attention.
- attention_head_dim (`int`, *optional*, defaults to 64): The number of channels in each head.
- num_layers (`int`, *optional*, defaults to 20): The number of layers of Transformer blocks to use.
- embedding_dim (`int`, *optional*, defaults to 768): The dimension of the CLIP embeddings. Note that CLIP
- image embeddings and text embeddings are both the same dimension.
- num_embeddings (`int`, *optional*, defaults to 77): The max number of clip embeddings allowed. I.e. the
- length of the prompt after it has been tokenized.
- additional_embeddings (`int`, *optional*, defaults to 4): The number of additional tokens appended to the
- projected hidden_states. The actual length of the used hidden_states is `num_embeddings +
- additional_embeddings`.
- dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use.
-
- """
-
- @register_to_config
- def __init__(
- self,
- num_attention_heads: int = 32,
- attention_head_dim: int = 64,
- num_layers: int = 20,
- embedding_dim: int = 768,
- num_embeddings=77,
- additional_embeddings=4,
- dropout: float = 0.0,
- ):
- super().__init__()
- self.num_attention_heads = num_attention_heads
- self.attention_head_dim = attention_head_dim
- inner_dim = num_attention_heads * attention_head_dim
- self.additional_embeddings = additional_embeddings
-
- self.time_proj = Timesteps(inner_dim, True, 0)
- self.time_embedding = TimestepEmbedding(inner_dim, inner_dim)
-
- self.proj_in = nn.Linear(embedding_dim, inner_dim)
-
- self.embedding_proj = nn.Linear(embedding_dim, inner_dim)
- self.encoder_hidden_states_proj = nn.Linear(embedding_dim, inner_dim)
-
- self.positional_embedding = self.create_parameter(
- (1, num_embeddings + additional_embeddings, inner_dim),
- dtype=paddle.get_default_dtype(),
- default_initializer=nn.initializer.Constant(0.0),
- )
-
- self.prd_embedding = self.create_parameter(
- (1, 1, inner_dim), dtype=paddle.get_default_dtype(), default_initializer=nn.initializer.Constant(0.0)
- )
-
- self.transformer_blocks = nn.LayerList(
- [
- BasicTransformerBlock(
- inner_dim,
- num_attention_heads,
- attention_head_dim,
- dropout=dropout,
- activation_fn="gelu",
- attention_bias=True,
- )
- for d in range(num_layers)
- ]
- )
-
- self.norm_out = nn.LayerNorm(inner_dim)
- self.proj_to_clip_embeddings = nn.Linear(inner_dim, embedding_dim)
-
- causal_attention_mask = paddle.triu(
- paddle.full([num_embeddings + additional_embeddings, num_embeddings + additional_embeddings], NEG_INF), 1
- )
- causal_attention_mask = causal_attention_mask.unsqueeze(0)
- self.register_buffer("causal_attention_mask", causal_attention_mask, persistable=False)
-
- self.clip_mean = self.create_parameter(
- (1, embedding_dim), dtype=paddle.get_default_dtype(), default_initializer=nn.initializer.Constant(0.0)
- )
- self.clip_std = self.create_parameter(
- (1, embedding_dim), dtype=paddle.get_default_dtype(), default_initializer=nn.initializer.Constant(0.0)
- )
-
- def forward(
- self,
- hidden_states,
- timestep: Union[paddle.Tensor, float, int],
- proj_embedding: paddle.Tensor,
- encoder_hidden_states: paddle.Tensor,
- attention_mask: Optional[paddle.Tensor] = None,
- return_dict: bool = True,
- ):
- """
- Args:
- hidden_states (`paddle.Tensor` of shape `(batch_size, embedding_dim)`):
- x_t, the currently predicted image embeddings.
- timestep (`paddle.Tensor`):
- Current denoising step.
- proj_embedding (`paddle.Tensor` of shape `(batch_size, embedding_dim)`):
- Projected embedding vector the denoising process is conditioned on.
- encoder_hidden_states (`paddle.Tensor` of shape `(batch_size, num_embeddings, embedding_dim)`):
- Hidden states of the text embeddings the denoising process is conditioned on.
- attention_mask (`paddle.Tensor` of shape `(batch_size, num_embeddings)`):
- Text mask for the text embeddings.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`models.prior_transformer.PriorTransformerOutput`] instead of a plain
- tuple.
-
- Returns:
- [`~models.prior_transformer.PriorTransformerOutput`] or `tuple`:
- [`~models.prior_transformer.PriorTransformerOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
- """
- batch_size = hidden_states.shape[0]
-
- timesteps = timestep
- if not paddle.is_tensor(timesteps):
- timesteps = paddle.to_tensor([timesteps], dtype=paddle.int64)
- elif paddle.is_tensor(timesteps) and len(timesteps.shape) == 0:
- timesteps = timesteps[None]
- # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
- timesteps = timesteps * paddle.ones((batch_size,), dtype=timesteps.dtype)
-
- timesteps_projected = self.time_proj(timesteps)
-
- # timesteps does not contain any weights and will always return f32 tensors
- # but time_embedding might be fp16, so we need to cast here.
- timesteps_projected = timesteps_projected.cast(dtype=self.dtype)
- time_embeddings = self.time_embedding(timesteps_projected)
-
- proj_embeddings = self.embedding_proj(proj_embedding)
- encoder_hidden_states = self.encoder_hidden_states_proj(encoder_hidden_states)
- hidden_states = self.proj_in(hidden_states)
- prd_embedding = self.prd_embedding.cast(hidden_states.dtype).expand([batch_size, -1, -1])
- positional_embeddings = self.positional_embedding.cast(hidden_states.dtype)
-
- hidden_states = paddle.concat(
- [
- encoder_hidden_states,
- proj_embeddings[:, None, :],
- time_embeddings[:, None, :],
- hidden_states[:, None, :],
- prd_embedding,
- ],
- axis=1,
- )
-
- hidden_states = hidden_states + positional_embeddings
-
- if attention_mask is not None:
- attention_mask = (1 - attention_mask.cast(hidden_states.dtype)) * -10000.0
- attention_mask = F.pad(
- attention_mask.unsqueeze(0), (0, self.additional_embeddings), value=0.0, data_format="NCL"
- ).squeeze(0)
- attention_mask = (attention_mask[:, None, :] + self.causal_attention_mask).cast(hidden_states.dtype)
- attention_mask = attention_mask.repeat_interleave(self.config.num_attention_heads, axis=0)
-
- for block in self.transformer_blocks:
- hidden_states = block(hidden_states, attention_mask=attention_mask)
-
- hidden_states = self.norm_out(hidden_states)
- hidden_states = hidden_states[:, -1]
- predicted_image_embedding = self.proj_to_clip_embeddings(hidden_states)
-
- if not return_dict:
- return (predicted_image_embedding,)
-
- return PriorTransformerOutput(predicted_image_embedding=predicted_image_embedding)
-
- def post_process_latents(self, prior_latents):
- prior_latents = (prior_latents * self.clip_std) + self.clip_mean
- return prior_latents
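-
-
-# A minimal smoke-test sketch (illustrative, not part of the original module;
-# run it from a context where the package's relative imports resolve):
-#
-#     prior = PriorTransformer(num_layers=2)    # small config for speed
-#     x_t = paddle.randn([2, 768])              # current noisy image-embedding estimate
-#     text_emb = paddle.randn([2, 768])         # pooled CLIP text embedding
-#     text_states = paddle.randn([2, 77, 768])  # per-token CLIP hidden states
-#     out = prior(x_t, timestep=10, proj_embedding=text_emb,
-#                 encoder_hidden_states=text_states)
-#     out.predicted_image_embedding.shape       # [2, 768]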
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/__init__.py
deleted file mode 100644
index 309b32b2d1129f07c0643c6cd1e7e0071ccf2045..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/__init__.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# flake8: noqa
-
-from ...utils import (
- OptionalDependencyNotAvailable,
- is_paddle_available,
- is_paddlenlp_available,
-)
-
-try:
- if not (is_paddlenlp_available() and is_paddle_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from ...utils.dummy_paddle_and_paddlenlp_objects import (
- VersatileDiffusionDualGuidedPipeline,
- VersatileDiffusionImageVariationPipeline,
- VersatileDiffusionPipeline,
- VersatileDiffusionTextToImagePipeline,
- )
-else:
- from .modeling_text_unet import UNetFlatConditionModel
- from .pipeline_versatile_diffusion import VersatileDiffusionPipeline
- from .pipeline_versatile_diffusion_dual_guided import (
- VersatileDiffusionDualGuidedPipeline,
- )
- from .pipeline_versatile_diffusion_image_variation import (
- VersatileDiffusionImageVariationPipeline,
- )
- from .pipeline_versatile_diffusion_text_to_image import (
- VersatileDiffusionTextToImagePipeline,
- )
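-
-# With paddle and paddlenlp installed, the pipelines above are typically
-# re-exported from the package root, so an import looks like this sketch
-# (the repo id is the upstream default and is shown for illustration only):
-#
-#     from ppdiffusers import VersatileDiffusionTextToImagePipeline
-#     pipe = VersatileDiffusionTextToImagePipeline.from_pretrained(
-#         "shi-labs/versatile-diffusion")
-#     image = pipe("an astronaut riding a horse").images[0]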
diff --git a/spaces/7hao/bingo/src/components/chat-attachments.tsx b/spaces/7hao/bingo/src/components/chat-attachments.tsx
deleted file mode 100644
index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/components/chat-attachments.tsx
+++ /dev/null
@@ -1,37 +0,0 @@
-import Image from 'next/image'
-import ClearIcon from '@/assets/images/clear.svg'
-import RefreshIcon from '@/assets/images/refresh.svg'
-import { FileItem } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-import { useBing } from '@/lib/hooks/use-bing'
-
-type ChatAttachmentsProps = Pick<ReturnType<typeof useBing>, 'attachmentList' | 'setAttachmentList' | 'uploadImage'>
-
-export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) {
-  return attachmentList.length ? (
-    <div className={cn('attachment-list')}>
-      {attachmentList.map((file: FileItem) => (
-        <div className="file-item" key={file.url}>
-          {file.status === 'loading' && (
-            // upload still in flight: show a placeholder
-            <div className="loading" />
-          )}
-          {file.status !== 'error' && (
-            // thumbnail of the uploaded image
-            <Image src={file.url} alt="attachment" width={80} height={80} />
-          )}
-          {file.status === 'error' && (
-            // upload failed: clicking retries it
-            <Image src={RefreshIcon} alt="refresh" onClick={() => uploadImage(file.url)} />
-          )}
-          {/* remove this attachment from the list */}
-          <Image
-            src={ClearIcon}
-            alt="clear"
-            onClick={() => setAttachmentList(attachmentList.filter(item => item.url !== file.url))}
-          />
-        </div>
-      ))}
-    </div>
-  ) : null
-}
diff --git a/spaces/801artistry/RVC801/i18n/scan_i18n.py b/spaces/801artistry/RVC801/i18n/scan_i18n.py
deleted file mode 100644
index f3e52cf4f9f06d78877d77d2353f666aa759e36f..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/i18n/scan_i18n.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import ast
-import glob
-import json
-from collections import OrderedDict
-
-
-def extract_i18n_strings(node):
- i18n_strings = []
-
- if (
- isinstance(node, ast.Call)
- and isinstance(node.func, ast.Name)
- and node.func.id == "i18n"
- ):
- for arg in node.args:
- if isinstance(arg, ast.Str):
- i18n_strings.append(arg.s)
-
- for child_node in ast.iter_child_nodes(node):
- i18n_strings.extend(extract_i18n_strings(child_node))
-
- return i18n_strings
-
-
-# scan the directory for all .py files (recursively)
-# for each file, parse the code into an AST
-# for each AST, extract the i18n strings
-
-strings = []
-for filename in glob.iglob("**/*.py", recursive=True):
- with open(filename, "r") as f:
- code = f.read()
- if "I18nAuto" in code:
- tree = ast.parse(code)
- i18n_strings = extract_i18n_strings(tree)
- print(filename, len(i18n_strings))
- strings.extend(i18n_strings)
-code_keys = set(strings)
-"""
-n_i18n.py
-gui_v1.py 26
-app.py 16
-infer-web.py 147
-scan_i18n.py 0
-i18n.py 0
-lib/train/process_ckpt.py 1
-"""
-print()
-print("Total unique:", len(code_keys))
-
-
-standard_file = "i18n/locale/zh_CN.json"
-with open(standard_file, "r", encoding="utf-8") as f:
- standard_data = json.load(f, object_pairs_hook=OrderedDict)
-standard_keys = set(standard_data.keys())
-
-# Compare the keys used in code against the standard locale file
-unused_keys = standard_keys - code_keys
-print("Unused keys:", len(unused_keys))
-for unused_key in unused_keys:
- print("\t", unused_key)
-
-missing_keys = code_keys - standard_keys
-print("Missing keys:", len(missing_keys))
-for missing_key in missing_keys:
- print("\t", missing_key)
-
-code_keys_dict = OrderedDict()
-for s in strings:
- code_keys_dict[s] = s
-
-# write back
-with open(standard_file, "w", encoding="utf-8") as f:
- json.dump(code_keys_dict, f, ensure_ascii=False, indent=4, sort_keys=True)
- f.write("\n")
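-
-# For reference, extract_i18n_strings only collects direct calls to a function
-# literally named `i18n` with string-literal arguments, e.g. (illustrative):
-#
-#     button = gr.Button(i18n("Export audio"))
-#
-# Dynamically built keys (f-strings, concatenation) are not picked up.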
diff --git a/spaces/801artistry/RVC801/tensorlowest.py b/spaces/801artistry/RVC801/tensorlowest.py
deleted file mode 100644
index eccd4dbf3494434e59f7defaae6ab91797263b90..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/tensorlowest.py
+++ /dev/null
@@ -1,123 +0,0 @@
-from tensorboard.backend.event_processing import event_accumulator
-
-import os
-from shutil import copy2
-from re import search as RSearch
-import pandas as pd
-from ast import literal_eval as LEval
-
-weights_dir = 'weights/'
-
-def find_biggest_tensorboard(tensordir):
- try:
- files = [f for f in os.listdir(tensordir) if f.endswith('.0')]
- if not files:
- print("No files with the '.0' extension found!")
- return
-
- max_size = 0
- biggest_file = ""
-
- for file in files:
- file_path = os.path.join(tensordir, file)
- if os.path.isfile(file_path):
- file_size = os.path.getsize(file_path)
- if file_size > max_size:
- max_size = file_size
- biggest_file = file
-
- return biggest_file
-
- except FileNotFoundError:
- print("Couldn't find your model!")
- return
-
-def main(model_name, save_freq, lastmdls):
- global lowestval_weight_dir, scl
-
- tensordir = os.path.join('logs', model_name)
- lowestval_weight_dir = os.path.join(tensordir, "lowestvals")
-
- latest_file = find_biggest_tensorboard(tensordir)
-
- if latest_file is None:
- print("Couldn't find a valid tensorboard file!")
- return
-
- tfile = os.path.join(tensordir, latest_file)
-
- ea = event_accumulator.EventAccumulator(tfile,
- size_guidance={
- event_accumulator.COMPRESSED_HISTOGRAMS: 500,
- event_accumulator.IMAGES: 4,
- event_accumulator.AUDIO: 4,
- event_accumulator.SCALARS: 0,
- event_accumulator.HISTOGRAMS: 1,
- })
-
- ea.Reload()
- ea.Tags()
-
- scl = ea.Scalars('loss/g/total')
-
- listwstep = {}
-
-    steps = {event.step for event in scl}
-    for val in scl:
-        rounded_step = (val.step // save_freq) * save_freq
-        if rounded_step in steps:
-            listwstep[float(val.value)] = rounded_step
-
- lowest_vals = sorted(listwstep.keys())[:lastmdls]
-
- sorted_dict = {value: step for value, step in listwstep.items() if value in lowest_vals}
-
- return sorted_dict
-
-def selectweights(model_name, file_dict, weights_dir, lowestval_weight_dir):
- os.makedirs(lowestval_weight_dir, exist_ok=True)
- logdir = []
- files = []
- lbldict = {
- 'Values': {},
- 'Names': {}
- }
- weights_dir_path = os.path.join(weights_dir, "")
- low_val_path = os.path.join(os.getcwd(), os.path.join(lowestval_weight_dir, ""))
-
- try:
- file_dict = LEval(file_dict)
- except Exception as e:
- print(f"Error! {e}")
- return f"Couldn't load tensorboard file! {e}"
-
- weights = [f for f in os.scandir(weights_dir)]
- for key, value in file_dict.items():
- pattern = fr"^{model_name}_.*_s{value}\.pth$"
- matching_weights = [f.name for f in weights if f.is_file() and RSearch(pattern, f.name)]
- for weight in matching_weights:
- source_path = weights_dir_path + weight
- destination_path = os.path.join(lowestval_weight_dir, weight)
-
- copy2(source_path, destination_path)
-
- logdir.append(f"File = {weight} Value: {key}, Step: {value}")
-
- lbldict['Names'][weight] = weight
- lbldict['Values'][weight] = key
-
- files.append(low_val_path + weight)
-
- print(f"File = {weight} Value: {key}, Step: {value}")
-
- yield ('\n'.join(logdir), files, pd.DataFrame(lbldict))
-
-
- return ''.join(logdir), files, pd.DataFrame(lbldict)
-
-
-if __name__ == "__main__":
-    model = str(input("Enter the name of the model: "))
-    sav_freq = int(input("Enter save frequency of the model: "))
-    lastmdls = int(input("Enter how many lowest-loss checkpoints to keep: "))
-    ds = main(model, sav_freq, lastmdls)  # main() requires all three arguments
-
-    if ds:
-        # selectweights parses its dict argument with literal_eval and is a
-        # generator, so pass a string and iterate to actually copy the files
-        for log, files, frame in selectweights(model, str(ds), weights_dir, lowestval_weight_dir):
-            print(log)
-
\ No newline at end of file
diff --git a/spaces/AIBoy1993/segment_anything_webui/inference.py b/spaces/AIBoy1993/segment_anything_webui/inference.py
deleted file mode 100644
index e96c0fad4ca171aa013c3fb31d91d0b8c45d0fd5..0000000000000000000000000000000000000000
--- a/spaces/AIBoy1993/segment_anything_webui/inference.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import os
-import cv2
-import torch
-import numpy as np
-import gradio as gr
-from PIL import Image, ImageDraw
-from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
-from transformers import OwlViTProcessor, OwlViTForObjectDetection
-import gc
-
-models = {
- 'vit_b': './checkpoints/sam_vit_b_01ec64.pth',
- 'vit_l': './checkpoints/sam_vit_l_0b3195.pth',
- 'vit_h': './checkpoints/sam_vit_h_4b8939.pth'
-}
-
-image_examples = [
- [os.path.join(os.path.dirname(__file__), "./images/53960-scaled.jpg"), 0, []],
- [os.path.join(os.path.dirname(__file__), "./images/2388455-scaled.jpg"), 1, []],
- [os.path.join(os.path.dirname(__file__), "./images/1.jpg"),2,[]],
- [os.path.join(os.path.dirname(__file__), "./images/2.jpg"),3,[]],
- [os.path.join(os.path.dirname(__file__), "./images/3.jpg"),4,[]],
- [os.path.join(os.path.dirname(__file__), "./images/4.jpg"),5,[]],
- [os.path.join(os.path.dirname(__file__), "./images/5.jpg"),6,[]],
- [os.path.join(os.path.dirname(__file__), "./images/6.jpg"),7,[]],
- [os.path.join(os.path.dirname(__file__), "./images/7.jpg"),8,[]],
- [os.path.join(os.path.dirname(__file__), "./images/8.jpg"),9,[]]
-]
-
-
-def plot_boxes(img, boxes):
- img_pil = Image.fromarray(np.uint8(img * 255)).convert('RGB')
- draw = ImageDraw.Draw(img_pil)
- for box in boxes:
- color = tuple(np.random.randint(0, 255, size=3).tolist())
- x0, y0, x1, y1 = box
- x0, y0, x1, y1 = int(x0), int(y0), int(x1), int(y1)
- draw.rectangle([x0, y0, x1, y1], outline=color, width=6)
- return img_pil
-
-
-def segment_one(img, mask_generator, seed=None):
- if seed is not None:
- np.random.seed(seed)
- masks = mask_generator.generate(img)
- sorted_anns = sorted(masks, key=(lambda x: x['area']), reverse=True)
- mask_all = np.ones((img.shape[0], img.shape[1], 3))
- for ann in sorted_anns:
- m = ann['segmentation']
- color_mask = np.random.random((1, 3)).tolist()[0]
- for i in range(3):
- mask_all[m == True, i] = color_mask[i]
- result = img / 255 * 0.3 + mask_all * 0.7
- return result, mask_all
-
-
-def generator_inference(device, model_type, points_per_side, pred_iou_thresh, stability_score_thresh,
- min_mask_region_area, stability_score_offset, box_nms_thresh, crop_n_layers, crop_nms_thresh,
- input_x, progress=gr.Progress()):
- # sam model
- sam = sam_model_registry[model_type](checkpoint=models[model_type]).to(device)
- mask_generator = SamAutomaticMaskGenerator(
- sam,
- points_per_side=points_per_side,
- pred_iou_thresh=pred_iou_thresh,
- stability_score_thresh=stability_score_thresh,
- stability_score_offset=stability_score_offset,
- box_nms_thresh=box_nms_thresh,
- crop_n_layers=crop_n_layers,
- crop_nms_thresh=crop_nms_thresh,
- crop_overlap_ratio=512 / 1500,
- crop_n_points_downscale_factor=1,
- point_grids=None,
- min_mask_region_area=min_mask_region_area,
- output_mode='binary_mask'
- )
-
- # input is image, type: numpy
-    if isinstance(input_x, np.ndarray):
- result, mask_all = segment_one(input_x, mask_generator)
- return result, mask_all
- elif isinstance(input_x, str): # input is video, type: path (str)
- cap = cv2.VideoCapture(input_x) # read video
- frames_num = cap.get(cv2.CAP_PROP_FRAME_COUNT)
- W, H = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- fps = int(cap.get(cv2.CAP_PROP_FPS))
- out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc('x', '2', '6', '4'), fps, (W, H), isColor=True)
- for _ in progress.tqdm(range(int(frames_num)),
- desc='Processing video ({} frames, size {}x{})'.format(int(frames_num), W, H)):
- ret, frame = cap.read() # read a frame
- result, mask_all = segment_one(frame, mask_generator, seed=2023)
- result = (result * 255).astype(np.uint8)
- out.write(result)
- out.release()
- cap.release()
- return 'output.mp4'
-
-
-def predictor_inference(device, model_type, input_x, input_text, selected_points, owl_vit_threshold=0.1):
- # sam model
- sam = sam_model_registry[model_type](checkpoint=models[model_type]).to(device)
- predictor = SamPredictor(sam)
- predictor.set_image(input_x) # Process the image to produce an image embedding
-
- if input_text != '':
- # split input text
- input_text = [input_text.split(',')]
- print(input_text)
- # OWL-ViT model
- processor = OwlViTProcessor.from_pretrained('./checkpoints/models--google--owlvit-base-patch32')
- owlvit_model = OwlViTForObjectDetection.from_pretrained("./checkpoints/models--google--owlvit-base-patch32").to(device)
- # get outputs
- input_text = processor(text=input_text, images=input_x, return_tensors="pt").to(device)
- outputs = owlvit_model(**input_text)
- target_size = torch.Tensor([input_x.shape[:2]]).to(device)
- results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_size,
- threshold=owl_vit_threshold)
-
- # get the box with best score
- scores = torch.sigmoid(outputs.logits)
- # best_scores, best_idxs = torch.topk(scores, k=1, dim=1)
- # best_idxs = best_idxs.squeeze(1).tolist()
-
- i = 0 # Retrieve predictions for the first image for the corresponding text queries
- boxes_tensor = results[i]["boxes"] # [best_idxs]
- boxes = boxes_tensor.cpu().detach().numpy()
- # boxes = boxes[np.newaxis, :, :]
- transformed_boxes = predictor.transform.apply_boxes_torch(torch.Tensor(boxes).to(device),
- input_x.shape[:2]) # apply transform to original boxes
- # transformed_boxes = transformed_boxes.unsqueeze(0)
- print(transformed_boxes.size(), boxes.shape)
- else:
- transformed_boxes = None
-
- # points
- if len(selected_points) != 0:
- points = torch.Tensor([p for p, _ in selected_points]).to(device).unsqueeze(1)
- labels = torch.Tensor([int(l) for _, l in selected_points]).to(device).unsqueeze(1)
- transformed_points = predictor.transform.apply_coords_torch(points, input_x.shape[:2])
- print(points.size(), transformed_points.size(), labels.size(), input_x.shape, points)
- else:
- transformed_points, labels = None, None
-
- # predict segmentation according to the boxes
- masks, scores, logits = predictor.predict_torch(
- point_coords=transformed_points,
- point_labels=labels,
- boxes=transformed_boxes, # only one box
- multimask_output=False,
- )
- masks = masks.cpu().detach().numpy()
- mask_all = np.ones((input_x.shape[0], input_x.shape[1], 3))
- for ann in masks:
- color_mask = np.random.random((1, 3)).tolist()[0]
- for i in range(3):
- mask_all[ann[0] == True, i] = color_mask[i]
- img = input_x / 255 * 0.3 + mask_all * 0.7
- if input_text != '':
- img = plot_boxes(img, boxes_tensor) # image + mask + boxes
-
- # free the memory
- if input_text != '':
- owlvit_model.cpu()
- del owlvit_model
- del input_text
- gc.collect()
- torch.cuda.empty_cache()
-
- return img, mask_all
-
-
-def run_inference(device, model_type, points_per_side, pred_iou_thresh, stability_score_thresh, min_mask_region_area,
- stability_score_offset, box_nms_thresh, crop_n_layers, crop_nms_thresh, owl_vit_threshold, input_x,
- input_text, selected_points):
- # if input_x is int, the image is selected from examples
- if isinstance(input_x, int):
- input_x = cv2.imread(image_examples[input_x][0])
- input_x = cv2.cvtColor(input_x, cv2.COLOR_BGR2RGB)
- if (input_text != '' and not isinstance(input_x, str)) or len(selected_points) != 0: # user input text or points
- print('use predictor_inference')
- print('prompt text: ', input_text)
- print('prompt points length: ', len(selected_points))
- return predictor_inference(device, model_type, input_x, input_text, selected_points, owl_vit_threshold)
- else:
- print('use generator_inference')
- return generator_inference(device, model_type, points_per_side, pred_iou_thresh, stability_score_thresh,
- min_mask_region_area, stability_score_offset, box_nms_thresh, crop_n_layers,
- crop_nms_thresh, input_x)
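-
-
-# A minimal invocation sketch for an image with no text or point prompts, which
-# routes to generator_inference (the argument values are illustrative defaults,
-# not taken from the original app):
-#
-#     img = cv2.cvtColor(cv2.imread('./images/1.jpg'), cv2.COLOR_BGR2RGB)
-#     result, mask = run_inference('cuda', 'vit_b', points_per_side=32,
-#                                  pred_iou_thresh=0.88, stability_score_thresh=0.95,
-#                                  min_mask_region_area=0, stability_score_offset=1.0,
-#                                  box_nms_thresh=0.7, crop_n_layers=0, crop_nms_thresh=0.7,
-#                                  owl_vit_threshold=0.1, input_x=img, input_text='',
-#                                  selected_points=[])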
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/data/audio_utils.py b/spaces/AIConsultant/MusicGen/audiocraft/data/audio_utils.py
deleted file mode 100644
index 565b63a4ef78dcd802dda932b42ebe518ffe7397..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/data/audio_utils.py
+++ /dev/null
@@ -1,177 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""Various utilities for audio convertion (pcm format, sample rate and channels),
-and volume normalization."""
-import sys
-import typing as tp
-
-import julius
-import torch
-import torchaudio
-
-
-def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor:
- """Convert audio to the given number of channels.
-
- Args:
- wav (torch.Tensor): Audio wave of shape [B, C, T].
- channels (int): Expected number of channels as output.
- Returns:
- torch.Tensor: Downmixed or unchanged audio wave [B, C, T].
- """
- *shape, src_channels, length = wav.shape
- if src_channels == channels:
- pass
- elif channels == 1:
- # Case 1:
- # The caller asked 1-channel audio, and the stream has multiple
- # channels, downmix all channels.
- wav = wav.mean(dim=-2, keepdim=True)
- elif src_channels == 1:
- # Case 2:
- # The caller asked for multiple channels, but the input file has
- # a single channel, replicate the audio over all channels.
- wav = wav.expand(*shape, channels, length)
- elif src_channels >= channels:
- # Case 3:
- # The caller asked for multiple channels, and the input file has
- # more channels than requested. In that case return the first channels.
- wav = wav[..., :channels, :]
- else:
- # Case 4: What is a reasonable choice here?
- raise ValueError('The audio file has less channels than requested but is not mono.')
- return wav
-
-
-def convert_audio(wav: torch.Tensor, from_rate: float,
- to_rate: float, to_channels: int) -> torch.Tensor:
- """Convert audio to new sample rate and number of audio channels."""
- wav = julius.resample_frac(wav, int(from_rate), int(to_rate))
- wav = convert_audio_channels(wav, to_channels)
- return wav
-
-
-def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, energy_floor: float = 2e-3):
- """Normalize an input signal to a user loudness in dB LKFS.
- Audio loudness is defined according to the ITU-R BS.1770-4 recommendation.
-
- Args:
- wav (torch.Tensor): Input multichannel audio data.
- sample_rate (int): Sample rate.
- loudness_headroom_db (float): Target loudness of the output in dB LUFS.
- loudness_compressor (bool): Uses tanh for soft clipping.
- energy_floor (float): anything below that RMS level will not be rescaled.
- Returns:
- torch.Tensor: Loudness normalized output data.
- """
- energy = wav.pow(2).mean().sqrt().item()
- if energy < energy_floor:
- return wav
- transform = torchaudio.transforms.Loudness(sample_rate)
- input_loudness_db = transform(wav).item()
- # calculate the gain needed to scale to the desired loudness level
- delta_loudness = -loudness_headroom_db - input_loudness_db
- gain = 10.0 ** (delta_loudness / 20.0)
- output = gain * wav
- if loudness_compressor:
- output = torch.tanh(output)
- assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt())
- return output
-
-
-def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None:
- """Utility function to clip the audio with logging if specified."""
- max_scale = wav.abs().max()
- if log_clipping and max_scale > 1:
- clamp_prob = (wav.abs() > 1).float().mean().item()
- print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):",
- clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr)
-    # clamp in place so the caller's tensor is actually clipped
-    wav.clamp_(-1, 1)
-
-
-def normalize_audio(wav: torch.Tensor, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, log_clipping: bool = False,
- sample_rate: tp.Optional[int] = None,
- stem_name: tp.Optional[str] = None) -> torch.Tensor:
- """Normalize the audio according to the prescribed strategy (see after).
-
- Args:
- wav (torch.Tensor): Audio data.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- loudness_compressor (bool): If True, uses tanh based soft clipping.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- sample_rate (int): Sample rate for the audio data (required for loudness).
- stem_name (str, optional): Stem name for clipping logging.
- Returns:
- torch.Tensor: Normalized audio.
- """
- scale_peak = 10 ** (-peak_clip_headroom_db / 20)
- scale_rms = 10 ** (-rms_headroom_db / 20)
- if strategy == 'peak':
- rescaling = (scale_peak / wav.abs().max())
- if normalize or rescaling < 1:
- wav = wav * rescaling
- elif strategy == 'clip':
- wav = wav.clamp(-scale_peak, scale_peak)
- elif strategy == 'rms':
- mono = wav.mean(dim=0)
- rescaling = scale_rms / mono.pow(2).mean().sqrt()
- if normalize or rescaling < 1:
- wav = wav * rescaling
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- elif strategy == 'loudness':
- assert sample_rate is not None, "Loudness normalization requires sample rate."
- wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor)
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- else:
- assert wav.abs().max() < 1
- assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'"
- return wav
-
-
-def f32_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to float 32 bits PCM format.
- """
- if wav.dtype.is_floating_point:
- return wav
- elif wav.dtype == torch.int16:
- return wav.float() / 2**15
- elif wav.dtype == torch.int32:
- return wav.float() / 2**31
- raise ValueError(f"Unsupported wav dtype: {wav.dtype}")
-
-
-def i16_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to int 16 bits PCM format.
-
- ..Warning:: There exist many formula for doing this conversion. None are perfect
- due to the asymmetry of the int16 range. One either have possible clipping, DC offset,
- or inconsistencies with f32_pcm. If the given wav doesn't have enough headroom,
- it is possible that `i16_pcm(f32_pcm)) != Identity`.
- """
- if wav.dtype.is_floating_point:
- assert wav.abs().max() <= 1
- candidate = (wav * 2 ** 15).round()
- if candidate.max() >= 2 ** 15: # clipping would occur
- candidate = (wav * (2 ** 15 - 1)).round()
- return candidate.short()
- else:
- assert wav.dtype == torch.int16
- return wav
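-
-
-# A small self-check of the PCM helpers (an illustrative sketch, not part of
-# the original module): a float -> int16 -> float round trip stays within one
-# quantization step as long as the input has headroom.
-if __name__ == "__main__":
-    x = torch.linspace(-0.5, 0.5, steps=8)
-    y = f32_pcm(i16_pcm(x))
-    assert (x - y).abs().max() <= 1 / 2 ** 15
-    print("i16/f32 round-trip max error:", (x - y).abs().max().item())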
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/GPT_eval_multi.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/GPT_eval_multi.py
deleted file mode 100644
index b5e3ebcb1199e42cf16748e60863b554a0046f00..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/GPT_eval_multi.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import os
-import torch
-import numpy as np
-from torch.utils.tensorboard import SummaryWriter
-import json
-import clip
-
-import options.option_transformer as option_trans
-import models.vqvae as vqvae
-import utils.utils_model as utils_model
-import utils.eval_trans as eval_trans
-from dataset import dataset_TM_eval
-import models.t2m_trans as trans
-from options.get_eval_option import get_opt
-from models.evaluator_wrapper import EvaluatorModelWrapper
-import warnings
-warnings.filterwarnings('ignore')
-
-##### ---- Exp dirs ---- #####
-args = option_trans.get_args_parser()
-torch.manual_seed(args.seed)
-
-args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}')
-os.makedirs(args.out_dir, exist_ok = True)
-
-##### ---- Logger ---- #####
-logger = utils_model.get_logger(args.out_dir)
-writer = SummaryWriter(args.out_dir)
-logger.info(json.dumps(vars(args), indent=4, sort_keys=True))
-
-from utils.word_vectorizer import WordVectorizer
-w_vectorizer = WordVectorizer('./glove', 'our_vab')
-val_loader = dataset_TM_eval.DATALoader(args.dataname, True, 32, w_vectorizer)
-
-dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' if args.dataname == 'kit' else 'checkpoints/t2m/Comp_v6_KLD005/opt.txt'
-
-wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda'))
-eval_wrapper = EvaluatorModelWrapper(wrapper_opt)
-
-##### ---- Network ---- #####
-
-## load clip model and datasets
-clip_model, clip_preprocess = clip.load("ViT-B/32", device=torch.device('cuda'), jit=False, download_root='/apdcephfs_cq2/share_1290939/maelyszhang/.cache/clip') # Must set jit=False for training
-clip.model.convert_weights(clip_model)  # strictly speaking unnecessary: clip.load already returns the model in float16 by default
-clip_model.eval()
-for p in clip_model.parameters():
- p.requires_grad = False
-
-net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers
- args.nb_code,
- args.code_dim,
- args.output_emb_width,
- args.down_t,
- args.stride_t,
- args.width,
- args.depth,
- args.dilation_growth_rate)
-
-
-trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code,
- embed_dim=args.embed_dim_gpt,
- clip_dim=args.clip_dim,
- block_size=args.block_size,
- num_layers=args.num_layers,
- n_head=args.n_head_gpt,
- drop_out_rate=args.drop_out_rate,
- fc_rate=args.ff_rate)
-
-
-print ('loading checkpoint from {}'.format(args.resume_pth))
-ckpt = torch.load(args.resume_pth, map_location='cpu')
-net.load_state_dict(ckpt['net'], strict=True)
-net.eval()
-net.cuda()
-
-if args.resume_trans is not None:
- print ('loading transformer checkpoint from {}'.format(args.resume_trans))
- ckpt = torch.load(args.resume_trans, map_location='cpu')
- trans_encoder.load_state_dict(ckpt['trans'], strict=True)
-trans_encoder.eval()  # evaluation script: keep dropout disabled
-trans_encoder.cuda()
-
-
-fid = []
-div = []
-top1 = []
-top2 = []
-top3 = []
-matching = []
-multi = []
-repeat_time = 20
-
-
-for i in range(repeat_time):
- best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, best_multi, writer, logger = eval_trans.evaluation_transformer_test(args.out_dir, val_loader, net, trans_encoder, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, best_multi=0, clip_model=clip_model, eval_wrapper=eval_wrapper, draw=False, savegif=False, save=False, savenpy=(i==0))
- fid.append(best_fid)
- div.append(best_div)
- top1.append(best_top1)
- top2.append(best_top2)
- top3.append(best_top3)
- matching.append(best_matching)
- multi.append(best_multi)
-
-print('final result:')
-print('fid: ', sum(fid)/repeat_time)
-print('div: ', sum(div)/repeat_time)
-print('top1: ', sum(top1)/repeat_time)
-print('top2: ', sum(top2)/repeat_time)
-print('top3: ', sum(top3)/repeat_time)
-print('matching: ', sum(matching)/repeat_time)
-print('multi: ', sum(multi)/repeat_time)
-
-fid = np.array(fid)
-div = np.array(div)
-top1 = np.array(top1)
-top2 = np.array(top2)
-top3 = np.array(top3)
-matching = np.array(matching)
-multi = np.array(multi)
-msg_final = f"FID. {np.mean(fid):.3f}, conf. {np.std(fid)*1.96/np.sqrt(repeat_time):.3f}, Diversity. {np.mean(div):.3f}, conf. {np.std(div)*1.96/np.sqrt(repeat_time):.3f}, TOP1. {np.mean(top1):.3f}, conf. {np.std(top1)*1.96/np.sqrt(repeat_time):.3f}, TOP2. {np.mean(top2):.3f}, conf. {np.std(top2)*1.96/np.sqrt(repeat_time):.3f}, TOP3. {np.mean(top3):.3f}, conf. {np.std(top3)*1.96/np.sqrt(repeat_time):.3f}, Matching. {np.mean(matching):.3f}, conf. {np.std(matching)*1.96/np.sqrt(repeat_time):.3f}, Multi. {np.mean(multi):.3f}, conf. {np.std(multi)*1.96/np.sqrt(repeat_time):.3f}"
-logger.info(msg_final)
\ No newline at end of file
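-
-# The "conf." values in msg_final are 95% confidence half-widths over the 20
-# evaluation repeats, i.e. mean +/- 1.96 * std / sqrt(n); for example:
-#
-#     ci = np.std(fid) * 1.96 / np.sqrt(repeat_time)
-#     print(f"FID: {np.mean(fid):.3f} +/- {ci:.3f}")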
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/smplify.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/smplify.py
deleted file mode 100644
index 580efef98dfdcf6e7486b7f5c5436820edfb6c4b..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/smplify.py
+++ /dev/null
@@ -1,279 +0,0 @@
-import torch
-import os, sys
-import pickle
-import smplx
-import numpy as np
-
-sys.path.append(os.path.dirname(__file__))
-from customloss import (camera_fitting_loss,
- body_fitting_loss,
- camera_fitting_loss_3d,
- body_fitting_loss_3d,
- )
-from prior import MaxMixturePrior
-from visualize.joints2smpl.src import config
-
-
-
-@torch.no_grad()
-def guess_init_3d(model_joints,
- j3d,
- joints_category="orig"):
- """Initialize the camera translation via triangle similarity, by using the torso joints .
- :param model_joints: SMPL model with pre joints
- :param j3d: 25x3 array of Kinect Joints
- :returns: 3D vector corresponding to the estimated camera translation
- """
- # get the indexed four
- gt_joints = ['RHip', 'LHip', 'RShoulder', 'LShoulder']
- gt_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints]
-
- if joints_category=="orig":
- joints_ind_category = [config.JOINT_MAP[joint] for joint in gt_joints]
- elif joints_category=="AMASS":
- joints_ind_category = [config.AMASS_JOINT_MAP[joint] for joint in gt_joints]
-    else:
-        raise ValueError(f"Unknown joints category: {joints_category}")
-
- sum_init_t = (j3d[:, joints_ind_category] - model_joints[:, gt_joints_ind]).sum(dim=1)
- init_t = sum_init_t / 4.0
- return init_t
-
-
-# SMPLIfy 3D
-class SMPLify3D():
- """Implementation of SMPLify, use 3D joints."""
-
- def __init__(self,
- smplxmodel,
- step_size=1e-2,
- batch_size=1,
- num_iters=100,
- use_collision=False,
- use_lbfgs=True,
- joints_category="orig",
- device=torch.device('cuda:0'),
- ):
-
- # Store options
- self.batch_size = batch_size
- self.device = device
- self.step_size = step_size
-
- self.num_iters = num_iters
- # --- choose optimizer
- self.use_lbfgs = use_lbfgs
- # GMM pose prior
- self.pose_prior = MaxMixturePrior(prior_folder=config.GMM_MODEL_DIR,
- num_gaussians=8,
- dtype=torch.float32).to(device)
- # collision part
- self.use_collision = use_collision
- if self.use_collision:
- self.part_segm_fn = config.Part_Seg_DIR
-
- # reLoad SMPL-X model
- self.smpl = smplxmodel
-
- self.model_faces = smplxmodel.faces_tensor.view(-1)
-
- # select joint joint_category
- self.joints_category = joints_category
-
- if joints_category=="orig":
- self.smpl_index = config.full_smpl_idx
- self.corr_index = config.full_smpl_idx
- elif joints_category=="AMASS":
- self.smpl_index = config.amass_smpl_idx
- self.corr_index = config.amass_idx
-        else:
-            raise ValueError(f"Unknown joints category: {joints_category}")
-
- # ---- get the man function here ------
- def __call__(self, init_pose, init_betas, init_cam_t, j3d, conf_3d=1.0, seq_ind=0):
- """Perform body fitting.
- Input:
- init_pose: SMPL pose estimate
- init_betas: SMPL betas estimate
- init_cam_t: Camera translation estimate
- j3d: joints 3d aka keypoints
- conf_3d: confidence for 3d joints
- seq_ind: index of the sequence
- Returns:
- vertices: Vertices of optimized shape
- joints: 3D joints of optimized shape
- pose: SMPL pose parameters of optimized shape
- betas: SMPL beta parameters of optimized shape
- camera_translation: Camera translation
- """
-
- # # # add the mesh inter-section to avoid
- search_tree = None
- pen_distance = None
- filter_faces = None
-
- if self.use_collision:
- from mesh_intersection.bvh_search_tree import BVH
- import mesh_intersection.loss as collisions_loss
- from mesh_intersection.filter_faces import FilterFaces
-
- search_tree = BVH(max_collisions=8)
-
- pen_distance = collisions_loss.DistanceFieldPenetrationLoss(
- sigma=0.5, point2plane=False, vectorized=True, penalize_outside=True)
-
- if self.part_segm_fn:
- # Read the part segmentation
- part_segm_fn = os.path.expandvars(self.part_segm_fn)
- with open(part_segm_fn, 'rb') as faces_parents_file:
- face_segm_data = pickle.load(faces_parents_file, encoding='latin1')
- faces_segm = face_segm_data['segm']
- faces_parents = face_segm_data['parents']
- # Create the module used to filter invalid collision pairs
- filter_faces = FilterFaces(
- faces_segm=faces_segm, faces_parents=faces_parents,
- ign_part_pairs=None).to(device=self.device)
-
-
- # Split SMPL pose to body pose and global orientation
- body_pose = init_pose[:, 3:].detach().clone()
- global_orient = init_pose[:, :3].detach().clone()
- betas = init_betas.detach().clone()
-
- # use guess 3d to get the initial
- smpl_output = self.smpl(global_orient=global_orient,
- body_pose=body_pose,
- betas=betas)
- model_joints = smpl_output.joints
-
- init_cam_t = guess_init_3d(model_joints, j3d, self.joints_category).unsqueeze(1).detach()
- camera_translation = init_cam_t.clone()
-
- preserve_pose = init_pose[:, 3:].detach().clone()
- # -------------Step 1: Optimize camera translation and body orientation--------
- # Optimize only camera translation and body orientation
- body_pose.requires_grad = False
- betas.requires_grad = False
- global_orient.requires_grad = True
- camera_translation.requires_grad = True
-
- camera_opt_params = [global_orient, camera_translation]
-
- if self.use_lbfgs:
- camera_optimizer = torch.optim.LBFGS(camera_opt_params, max_iter=self.num_iters,
- lr=self.step_size, line_search_fn='strong_wolfe')
- for i in range(10):
- def closure():
- camera_optimizer.zero_grad()
- smpl_output = self.smpl(global_orient=global_orient,
- body_pose=body_pose,
- betas=betas)
- model_joints = smpl_output.joints
- # print('model_joints', model_joints.shape)
- # print('camera_translation', camera_translation.shape)
- # print('init_cam_t', init_cam_t.shape)
- # print('j3d', j3d.shape)
- loss = camera_fitting_loss_3d(model_joints, camera_translation,
- init_cam_t, j3d, self.joints_category)
- loss.backward()
- return loss
-
- camera_optimizer.step(closure)
- else:
- camera_optimizer = torch.optim.Adam(camera_opt_params, lr=self.step_size, betas=(0.9, 0.999))
-
- for i in range(20):
- smpl_output = self.smpl(global_orient=global_orient,
- body_pose=body_pose,
- betas=betas)
- model_joints = smpl_output.joints
-
- loss = camera_fitting_loss_3d(model_joints[:, self.smpl_index], camera_translation,
- init_cam_t, j3d[:, self.corr_index], self.joints_category)
- camera_optimizer.zero_grad()
- loss.backward()
- camera_optimizer.step()
-
- # Fix camera translation after optimizing camera
- # --------Step 2: Optimize body joints --------------------------
- # Optimize only the body pose and global orientation of the body
- body_pose.requires_grad = True
- global_orient.requires_grad = True
- camera_translation.requires_grad = True
-
- # --- if we use the sequence, fix the shape
- if seq_ind == 0:
- betas.requires_grad = True
- body_opt_params = [body_pose, betas, global_orient, camera_translation]
- else:
- betas.requires_grad = False
- body_opt_params = [body_pose, global_orient, camera_translation]
-
- if self.use_lbfgs:
- body_optimizer = torch.optim.LBFGS(body_opt_params, max_iter=self.num_iters,
- lr=self.step_size, line_search_fn='strong_wolfe')
- for i in range(self.num_iters):
- def closure():
- body_optimizer.zero_grad()
- smpl_output = self.smpl(global_orient=global_orient,
- body_pose=body_pose,
- betas=betas)
- model_joints = smpl_output.joints
- model_vertices = smpl_output.vertices
-
- loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation,
- j3d[:, self.corr_index], self.pose_prior,
- joints3d_conf=conf_3d,
- joint_loss_weight=600.0,
- pose_preserve_weight=5.0,
- use_collision=self.use_collision,
- model_vertices=model_vertices, model_faces=self.model_faces,
- search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces)
- loss.backward()
- return loss
-
- body_optimizer.step(closure)
- else:
- body_optimizer = torch.optim.Adam(body_opt_params, lr=self.step_size, betas=(0.9, 0.999))
-
- for i in range(self.num_iters):
- smpl_output = self.smpl(global_orient=global_orient,
- body_pose=body_pose,
- betas=betas)
- model_joints = smpl_output.joints
- model_vertices = smpl_output.vertices
-
- loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation,
- j3d[:, self.corr_index], self.pose_prior,
- joints3d_conf=conf_3d,
- joint_loss_weight=600.0,
- use_collision=self.use_collision,
- model_vertices=model_vertices, model_faces=self.model_faces,
- search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces)
- body_optimizer.zero_grad()
- loss.backward()
- body_optimizer.step()
-
- # Get final loss value
- with torch.no_grad():
- smpl_output = self.smpl(global_orient=global_orient,
- body_pose=body_pose,
- betas=betas, return_full_pose=True)
- model_joints = smpl_output.joints
- model_vertices = smpl_output.vertices
-
- final_loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation,
- j3d[:, self.corr_index], self.pose_prior,
- joints3d_conf=conf_3d,
- joint_loss_weight=600.0,
- use_collision=self.use_collision, model_vertices=model_vertices, model_faces=self.model_faces,
- search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces)
-
- vertices = smpl_output.vertices.detach()
- joints = smpl_output.joints.detach()
- pose = torch.cat([global_orient, body_pose], dim=-1).detach()
- betas = betas.detach()
-
- return vertices, joints, pose, betas, camera_translation, final_loss
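-
-
-# A minimal calling sketch (illustrative; `smplx_model` is a preloaded SMPL-X
-# model and `j3d` holds target 3D joints in the chosen category's order):
-#
-#     smplify = SMPLify3D(smplx_model, batch_size=1, num_iters=150,
-#                         joints_category="AMASS", device=torch.device('cuda:0'))
-#     vertices, joints, pose, betas, cam_t, loss = smplify(
-#         init_pose, init_betas, init_cam_t, j3d, conf_3d=1.0, seq_ind=0)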
diff --git a/spaces/AIZero2HeroBootcamp/TranscriptAILearnerFromYoutube/TwoTranscriptQuotesFromIlyaSutskever.md b/spaces/AIZero2HeroBootcamp/TranscriptAILearnerFromYoutube/TwoTranscriptQuotesFromIlyaSutskever.md
deleted file mode 100644
index 9dea84c732f631d7d5204fcc65b0c8e0c9b913b8..0000000000000000000000000000000000000000
--- a/spaces/AIZero2HeroBootcamp/TranscriptAILearnerFromYoutube/TwoTranscriptQuotesFromIlyaSutskever.md
+++ /dev/null
@@ -1,71 +0,0 @@
-https://www.youtube.com/watch?v=9EN_HoEk3KY&t=172s
-
-
-(from 1:42)
-
-...program that does very, very well on your data, then you will achieve the best generalization possible. With a little bit of modification you can turn it into a precise theorem. And on a very intuitive level it's easy to see why it should be the case: if you have some data and you're able to find a shorter program which generates this data, then you've essentially extracted all conceivable regularity from this data into your program, and then you can use these objects to make the best predictions possible. If you have data which is so complex that there is no way to express it as a shorter program, then it means that your data is totally random and there is no way to extract any regularity from it whatsoever. Now there is a little-known mathematical theory behind this, and the proofs of these statements are actually not even that hard. But the one minor slight disappointment is that it's actually not possible, at least given today's tools and understanding, to find the best short program that...
-
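-A crude illustration of the idea (my sketch, not from the talk): using a general-purpose
-compressor's output size as a stand-in for "shortest program length", regular data admits a
-much shorter description than random data.
-
-```python
-import os
-import zlib
-
-regular = b"0123456789" * 100  # 1000 bytes of highly regular data
-noise = os.urandom(1000)       # 1000 bytes of incompressible data
-
-# The compressor "extracts the regularity into the program":
-print(len(zlib.compress(regular)))  # a few dozen bytes
-print(len(zlib.compress(noise)))    # ~1000 bytes: totally random, nothing to extract
-```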
-
-
-https://youtu.be/9EN_HoEk3KY?t=442
-[...] to talk a little bit about reinforcement learning. So reinforcement learning is a
-framework: it's a framework for evaluating agents on their ability to achieve goals in
-complicated stochastic environments. You've got an agent which is plugged into an
-environment, as shown in the figure right here, and for any given agent you can simply run
-it many times and compute its average reward. Now, the thing that's interesting about the
-reinforcement learning framework is that there exist interesting, useful reinforcement
-learning algorithms. The framework existed for a long time; it became interesting once we
-realized that good algorithms exist. Now, these aren't perfect algorithms, but they are good
-enough to do interesting things. And the mathematical problem is one where you need to
-maximize the expected reward. Now, one important way in which the reinforcement learning
-framework is not quite complete is that it assumes that the reward is given by the
-environment. You see this picture: the agent sends an action while the environment sends it
-an observation and a reward backwards; that's what the environment communicates back. The
-way in which this is not the case in the real world is that we figure out what the reward is
-from the observation: we reward ourselves. We are not told; the environment doesn't say
-"hey, here's some negative reward." It's our interpretation of our senses that lets us
-determine what the reward is. And there is only one real, true reward in life, and this is
-existence or nonexistence, and everything else is a corollary of that. So what should our
-agent be? You already know the answer: it should be a neural network, because whenever you
-want to do something dense, it's going to be a neural network, and you want the agent to map
-observations to actions, so you let it be parametrized with a neural net and you apply a
-learning algorithm. So I want to explain to you how reinforcement learning works. This is
-model-free reinforcement learning, the kind that has actually been used in practice
-everywhere. But it's [...]
\ No newline at end of file
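-
-The evaluation loop described above, as a minimal sketch (an illustrative toy environment
-and placeholder policy, not from the talk): an agent maps observations to actions, and we
-judge it by running it many times and computing its average reward.
-
-```python
-import random
-
-def random_policy(observation: int) -> int:
-    """Placeholder agent; in practice this would be a neural network
-    mapping observations to actions."""
-    return random.randint(0, 9)
-
-def run_episode(policy) -> float:
-    """One step of a toy stochastic environment: the agent observes a digit
-    and is rewarded if its action matches a uniformly random outcome."""
-    observation = random.randint(0, 9)
-    action = policy(observation)
-    outcome = random.randint(0, 9)
-    return 1.0 if action == outcome else 0.0
-
-# Evaluate the agent: run it many times and compute its average reward.
-episodes = 10_000
-avg = sum(run_episode(random_policy) for _ in range(episodes)) / episodes
-print(f"average reward: {avg:.3f}")  # ~0.100 for the random policy
-```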
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb16_cifar10.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb16_cifar10.py
deleted file mode 100644
index 166a1740b09c5fb74462a0672cd5fef54caae8f7..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb16_cifar10.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/resnet101_cifar.py',
- '../_base_/datasets/cifar10_bs16.py',
- '../_base_/schedules/cifar10_bs128.py', '../_base_/default_runtime.py'
-]
diff --git a/spaces/AchyuthGamer/ImMagician-Image-Generator/README.md b/spaces/AchyuthGamer/ImMagician-Image-Generator/README.md
deleted file mode 100644
index 05f0fcf86606e1e46129b103632826b8406443e5..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/ImMagician-Image-Generator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ImMagician Image
-emoji: 🪄
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/blocks.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/blocks.py
deleted file mode 100644
index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/blocks.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import torch
-import torch.nn as nn
-
-from .vit import (
- _make_pretrained_vitb_rn50_384,
- _make_pretrained_vitl16_384,
- _make_pretrained_vitb16_384,
- forward_vit,
-)
-
-def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False,
-                  exportable=True, hooks=None, use_vit_only=False, use_readout="ignore"):
- if backbone == "vitl16_384":
- pretrained = _make_pretrained_vitl16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [256, 512, 1024, 1024], features, groups=groups, expand=expand
- ) # ViT-L/16 - 85.0% Top1 (backbone)
- elif backbone == "vitb_rn50_384":
- pretrained = _make_pretrained_vitb_rn50_384(
- use_pretrained,
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
- scratch = _make_scratch(
- [256, 512, 768, 768], features, groups=groups, expand=expand
-        ) # ViT-B/16 + ResNet-50 hybrid (backbone)
- elif backbone == "vitb16_384":
- pretrained = _make_pretrained_vitb16_384(
- use_pretrained, hooks=hooks, use_readout=use_readout
- )
- scratch = _make_scratch(
- [96, 192, 384, 768], features, groups=groups, expand=expand
- ) # ViT-B/16 - 84.6% Top1 (backbone)
- elif backbone == "resnext101_wsl":
- pretrained = _make_pretrained_resnext101_wsl(use_pretrained)
-        scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # ResNeXt-101 WSL (backbone)
- elif backbone == "efficientnet_lite3":
- pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable)
- scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3
-    else:
-        raise NotImplementedError(f"Backbone '{backbone}' not implemented")
-
- return pretrained, scratch
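-
-# Example (illustrative): build a ViT-L/16 encoder plus matching refinement convs
-# with 256 output channels.
-#   pretrained, scratch = _make_encoder("vitl16_384", features=256, use_pretrained=True)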
-
-
-def _make_scratch(in_shape, out_shape, groups=1, expand=False):
- scratch = nn.Module()
-
-    out_shape1 = out_shape
-    out_shape2 = out_shape
-    out_shape3 = out_shape
-    out_shape4 = out_shape
-    if expand:
-        out_shape2 = out_shape * 2
-        out_shape3 = out_shape * 4
-        out_shape4 = out_shape * 8
-
- scratch.layer1_rn = nn.Conv2d(
- in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer2_rn = nn.Conv2d(
- in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer3_rn = nn.Conv2d(
- in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
- scratch.layer4_rn = nn.Conv2d(
- in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
- )
-
- return scratch
-
-
-def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False):
- efficientnet = torch.hub.load(
- "rwightman/gen-efficientnet-pytorch",
- "tf_efficientnet_lite3",
- pretrained=use_pretrained,
- exportable=exportable
- )
- return _make_efficientnet_backbone(efficientnet)
-
-
-def _make_efficientnet_backbone(effnet):
- pretrained = nn.Module()
-
- pretrained.layer1 = nn.Sequential(
- effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2]
- )
- pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3])
- pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5])
- pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9])
-
- return pretrained
-
-
-def _make_resnet_backbone(resnet):
- pretrained = nn.Module()
- pretrained.layer1 = nn.Sequential(
- resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1
- )
-
- pretrained.layer2 = resnet.layer2
- pretrained.layer3 = resnet.layer3
- pretrained.layer4 = resnet.layer4
-
- return pretrained
-
-
-def _make_pretrained_resnext101_wsl(use_pretrained):
- resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
- return _make_resnet_backbone(resnet)
-
-
-
-class Interpolate(nn.Module):
- """Interpolation module.
- """
-
- def __init__(self, scale_factor, mode, align_corners=False):
- """Init.
-
- Args:
- scale_factor (float): scaling
-            mode (str): interpolation mode
-            align_corners (bool): passed through to nn.functional.interpolate
-        """
- super(Interpolate, self).__init__()
-
- self.interp = nn.functional.interpolate
- self.scale_factor = scale_factor
- self.mode = mode
- self.align_corners = align_corners
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: interpolated data
- """
-
- x = self.interp(
- x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners
- )
-
- return x
-
-
-class ResidualConvUnit(nn.Module):
- """Residual convolution module.
- """
-
- def __init__(self, features):
- """Init.
-
- Args:
- features (int): number of features
- """
- super().__init__()
-
- self.conv1 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True
- )
-
- self.conv2 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True
- )
-
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: output
- """
- out = self.relu(x)
- out = self.conv1(out)
- out = self.relu(out)
- out = self.conv2(out)
-
- return out + x
-
-
-class FeatureFusionBlock(nn.Module):
- """Feature fusion block.
- """
-
- def __init__(self, features):
- """Init.
-
- Args:
- features (int): number of features
- """
- super(FeatureFusionBlock, self).__init__()
-
- self.resConfUnit1 = ResidualConvUnit(features)
- self.resConfUnit2 = ResidualConvUnit(features)
-
- def forward(self, *xs):
- """Forward pass.
-
- Returns:
- tensor: output
- """
- output = xs[0]
-
- if len(xs) == 2:
- output += self.resConfUnit1(xs[1])
-
- output = self.resConfUnit2(output)
-
- output = nn.functional.interpolate(
- output, scale_factor=2, mode="bilinear", align_corners=True
- )
-
- return output
-
-
-
-
-class ResidualConvUnit_custom(nn.Module):
- """Residual convolution module.
- """
-
- def __init__(self, features, activation, bn):
- """Init.
-
- Args:
-            features (int): number of features
-            activation: activation module applied before each convolution
-            bn (bool): whether to apply batch normalization after each convolution
-        """
- super().__init__()
-
- self.bn = bn
-
- self.groups=1
-
- self.conv1 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
- )
-
- self.conv2 = nn.Conv2d(
- features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
- )
-
-        if self.bn:
- self.bn1 = nn.BatchNorm2d(features)
- self.bn2 = nn.BatchNorm2d(features)
-
- self.activation = activation
-
- self.skip_add = nn.quantized.FloatFunctional()
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input
-
- Returns:
- tensor: output
- """
-
- out = self.activation(x)
- out = self.conv1(out)
-        if self.bn:
- out = self.bn1(out)
-
- out = self.activation(out)
- out = self.conv2(out)
-        if self.bn:
- out = self.bn2(out)
-
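-        # NOTE: conv_merge is never defined on this module; since self.groups is
-        # fixed to 1 in __init__, this branch is effectively unreachable.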
- if self.groups > 1:
- out = self.conv_merge(out)
-
- return self.skip_add.add(out, x)
-
- # return out + x
-
-
-class FeatureFusionBlock_custom(nn.Module):
- """Feature fusion block.
- """
-
- def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True):
- """Init.
-
- Args:
-            features (int): number of features
-            activation: activation module for the residual units
-            deconv (bool): flag stored on the module
-            bn (bool): whether the residual units use batch normalization
-            expand (bool): if True, halve the number of output features
-            align_corners (bool): passed to nn.functional.interpolate
-        """
- super(FeatureFusionBlock_custom, self).__init__()
-
- self.deconv = deconv
- self.align_corners = align_corners
-
- self.groups=1
-
- self.expand = expand
- out_features = features
-        if self.expand:
- out_features = features//2
-
- self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1)
-
- self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn)
- self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn)
-
- self.skip_add = nn.quantized.FloatFunctional()
-
- def forward(self, *xs):
- """Forward pass.
-
- Returns:
- tensor: output
- """
- output = xs[0]
-
- if len(xs) == 2:
- res = self.resConfUnit1(xs[1])
- output = self.skip_add.add(output, res)
- # output += res
-
- output = self.resConfUnit2(output)
-
- output = nn.functional.interpolate(
- output, scale_factor=2, mode="bilinear", align_corners=self.align_corners
- )
-
- output = self.out_conv(output)
-
- return output
-
diff --git a/spaces/Adapting/TrendFlow/README.md b/spaces/Adapting/TrendFlow/README.md
deleted file mode 100644
index 940664486ea87ca563e543ea053a1cacca5e353b..0000000000000000000000000000000000000000
--- a/spaces/Adapting/TrendFlow/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: TrendFlow
-emoji: 📉
-colorFrom: indigo
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: mit
----
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Factory.js
deleted file mode 100644
index ec3147a0eaf6f16320c6017af7c319cc01c11fe3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import Buttons from './Buttons.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('buttons', function (config) {
- var gameObject = new Buttons(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.Buttons', Buttons);
-
-export default Buttons;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AlignMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AlignMethods.js
deleted file mode 100644
index 84755c9e14ebb090cba057d066f224f4b10b29c3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AlignMethods.js
+++ /dev/null
@@ -1,17 +0,0 @@
-import ALIGNMODE from '../utils/AlignConst.js';
-
-export default {
- getChildAlign(gameObject) {
- return this.getSizerConfig(gameObject).align;
- },
-
- setChildAlign(gameObject, align) {
- if (typeof (align) === 'string') {
- align = ALIGNMODE[align];
- }
-
- this.getSizerConfig(gameObject).align = align;
- return this;
- },
-
-}
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/outputs.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/outputs.md
deleted file mode 100644
index ec64d36498ee0eccf9f8b7955aef9c69fd151bd3..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/outputs.md
+++ /dev/null
@@ -1,67 +0,0 @@
-
-
-# Outputs
-
-All model outputs are subclasses of [`~utils.BaseOutput`], data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries.
-
-For example:
-
-```python
-from diffusers import DDIMPipeline
-
-pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
-outputs = pipeline()
-```
-
-The `outputs` object is a [`~pipelines.ImagePipelineOutput`], which means it has an `images` attribute.
-
-You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get `None`:
-
-```python
-outputs.images
-outputs["images"]
-```
-
-When the `outputs` object is treated as a tuple, only the attributes that don't have `None` values are considered.
-For instance, retrieving an image by indexing into it returns the tuple `(outputs.images,)`:
-
-```python
-outputs[:1]
-```
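-
-You can also convert the output to a plain tuple explicitly (a small sketch; `to_tuple`,
-documented below, drops `None`-valued fields, matching the indexing behavior above):
-
-```python
-assert outputs.to_tuple()[0] is outputs.images
-```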
-
-
-
-To check a specific pipeline or model output, refer to its corresponding API documentation.
-
-
-
-## BaseOutput
-
-[[autodoc]] utils.BaseOutput
- - to_tuple
-
-## ImagePipelineOutput
-
-[[autodoc]] pipelines.ImagePipelineOutput
-
-## FlaxImagePipelineOutput
-
-[[autodoc]] pipelines.pipeline_flax_utils.FlaxImagePipelineOutput
-
-## AudioPipelineOutput
-
-[[autodoc]] pipelines.AudioPipelineOutput
-
-## ImageTextPipelineOutput
-
-[[autodoc]] ImageTextPipelineOutput
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/utils/util_random.py b/spaces/Andy1621/uniformer_image_detection/mmdet/utils/util_random.py
deleted file mode 100644
index e313e9947bb3232a9458878fd219e1594ab93d57..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/utils/util_random.py
+++ /dev/null
@@ -1,33 +0,0 @@
-"""Helpers for random number generators."""
-import numpy as np
-
-
-def ensure_rng(rng=None):
- """Coerces input into a random number generator.
-
- If the input is None, then a global random state is returned.
-
- If the input is a numeric value, then that is used as a seed to construct a
- random state. Otherwise the input is returned as-is.
-
- Adapted from [1]_.
-
- Args:
- rng (int | numpy.random.RandomState | None):
- if None, then defaults to the global rng. Otherwise this can be an
- integer or a RandomState class
- Returns:
- (numpy.random.RandomState) : rng -
- a numpy random number generator
-
- References:
- .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501
- """
-
- if rng is None:
- rng = np.random.mtrand._rand
- elif isinstance(rng, int):
- rng = np.random.RandomState(rng)
-    return rng
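-
-# Example (illustrative):
-#   ensure_rng(0).randint(10)                    # seeded, reproducible RandomState
-#   ensure_rng(None) is np.random.mtrand._rand   # True: the global state is returned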
diff --git a/spaces/Andy1621/uniformer_image_detection/tools/slurm_test.sh b/spaces/Andy1621/uniformer_image_detection/tools/slurm_test.sh
deleted file mode 100644
index 6dd67e57442b741fc30f26102eb5afe16139edb1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/tools/slurm_test.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/env bash
-
-set -x
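-
-# Usage (illustrative):
-#   GPUS=16 ./tools/slurm_test.sh <partition> <job_name> <config_file> <checkpoint> --eval bbox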
-
-PARTITION=$1
-JOB_NAME=$2
-CONFIG=$3
-CHECKPOINT=$4
-GPUS=${GPUS:-8}
-GPUS_PER_NODE=${GPUS_PER_NODE:-8}
-CPUS_PER_TASK=${CPUS_PER_TASK:-5}
-PY_ARGS=${@:5}
-SRUN_ARGS=${SRUN_ARGS:-""}
-
-PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
-srun -p ${PARTITION} \
- --job-name=${JOB_NAME} \
- --gres=gpu:${GPUS_PER_NODE} \
- --ntasks=${GPUS} \
- --ntasks-per-node=${GPUS_PER_NODE} \
- --cpus-per-task=${CPUS_PER_TASK} \
- --kill-on-bad-exit=1 \
- ${SRUN_ARGS} \
- python -u tools/test.py ${CONFIG} ${CHECKPOINT} --launcher="slurm" ${PY_ARGS}
diff --git a/spaces/AnnasBlackHat/Image-Similarity/src/similarity/model_implements/vit_base.py b/spaces/AnnasBlackHat/Image-Similarity/src/similarity/model_implements/vit_base.py
deleted file mode 100644
index bdf66c16548834e61affb8e205d14eb3335b6735..0000000000000000000000000000000000000000
--- a/spaces/AnnasBlackHat/Image-Similarity/src/similarity/model_implements/vit_base.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from transformers import ViTFeatureExtractor, ViTModel
-from PIL import Image
-import numpy as np
-import torch
-
-class VitBase():
-
- def __init__(self):
- self.feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
- self.model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
-
- def extract_feature(self, imgs):
- features = []
- for img in imgs:
- inputs = self.feature_extractor(images=img, return_tensors="pt")
- with torch.no_grad():
- outputs = self.model(**inputs)
- last_hidden_states = outputs.last_hidden_state
- features.append(np.squeeze(last_hidden_states.numpy()).flatten())
- return features
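-
-# Example usage (illustrative; assumes two local image files):
-#   imgs = [Image.open('a.jpg'), Image.open('b.jpg')]
-#   feats = VitBase().extract_feature(imgs)
-#   cos = float(np.dot(feats[0], feats[1]) /
-#               (np.linalg.norm(feats[0]) * np.linalg.norm(feats[1])))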
diff --git a/spaces/Apex-X/nono/README.md b/spaces/Apex-X/nono/README.md
deleted file mode 100644
index 281d35c27bd39c7813abc0a0cc46f8a4e6d11d86..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/nono/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: nono
-emoji: 🚀
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Apex-X/nono/roop/ui.py b/spaces/Apex-X/nono/roop/ui.py
deleted file mode 100644
index ba693dac116bd416b91518734fa550e9dfb95c7b..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/nono/roop/ui.py
+++ /dev/null
@@ -1,231 +0,0 @@
-import os
-import webbrowser
-import customtkinter as ctk
-from typing import Callable, Tuple
-import cv2
-from PIL import Image, ImageOps
-
-import roop.globals
-import roop.metadata
-from roop.face_analyser import get_one_face
-from roop.capturer import get_video_frame, get_video_frame_total
-from roop.predicter import predict_frame
-from roop.processors.frame.core import get_frame_processors_modules
-from roop.utilities import is_image, is_video, resolve_relative_path
-
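-# Module-level UI state (window handles and widget references), populated by
-# init() / create_root() below.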
-ROOT = None
-ROOT_HEIGHT = 700
-ROOT_WIDTH = 600
-
-PREVIEW = None
-PREVIEW_MAX_HEIGHT = 700
-PREVIEW_MAX_WIDTH = 1200
-
-RECENT_DIRECTORY_SOURCE = None
-RECENT_DIRECTORY_TARGET = None
-RECENT_DIRECTORY_OUTPUT = None
-
-preview_label = None
-preview_slider = None
-source_label = None
-target_label = None
-status_label = None
-
-
-def init(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
- global ROOT, PREVIEW
-
- ROOT = create_root(start, destroy)
- PREVIEW = create_preview(ROOT)
-
- return ROOT
-
-
-def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
- global source_label, target_label, status_label
-
- ctk.deactivate_automatic_dpi_awareness()
- ctk.set_appearance_mode('system')
- ctk.set_default_color_theme(resolve_relative_path('ui.json'))
-
- root = ctk.CTk()
- root.minsize(ROOT_WIDTH, ROOT_HEIGHT)
- root.title(f'{roop.metadata.name} {roop.metadata.version}')
- root.configure()
- root.protocol('WM_DELETE_WINDOW', lambda: destroy())
-
- source_label = ctk.CTkLabel(root, text=None)
- source_label.place(relx=0.1, rely=0.1, relwidth=0.3, relheight=0.25)
-
- target_label = ctk.CTkLabel(root, text=None)
- target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25)
-
- source_button = ctk.CTkButton(root, text='Select a face', cursor='hand2', command=lambda: select_source_path())
- source_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1)
-
- target_button = ctk.CTkButton(root, text='Select a target', cursor='hand2', command=lambda: select_target_path())
- target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1)
-
- keep_fps_value = ctk.BooleanVar(value=roop.globals.keep_fps)
-    keep_fps_switch = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_fps', keep_fps_value.get()))
-    keep_fps_switch.place(relx=0.1, rely=0.6)
-
- keep_frames_value = ctk.BooleanVar(value=roop.globals.keep_frames)
- keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_frames', keep_frames_value.get()))
- keep_frames_switch.place(relx=0.1, rely=0.65)
-
- keep_audio_value = ctk.BooleanVar(value=roop.globals.keep_audio)
- keep_audio_switch = ctk.CTkSwitch(root, text='Keep audio', variable=keep_audio_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_audio', keep_audio_value.get()))
- keep_audio_switch.place(relx=0.6, rely=0.6)
-
- many_faces_value = ctk.BooleanVar(value=roop.globals.many_faces)
- many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, cursor='hand2', command=lambda: setattr(roop.globals, 'many_faces', many_faces_value.get()))
- many_faces_switch.place(relx=0.6, rely=0.65)
-
- start_button = ctk.CTkButton(root, text='Start', cursor='hand2', command=lambda: select_output_path(start))
- start_button.place(relx=0.15, rely=0.75, relwidth=0.2, relheight=0.05)
-
- stop_button = ctk.CTkButton(root, text='Destroy', cursor='hand2', command=lambda: destroy())
- stop_button.place(relx=0.4, rely=0.75, relwidth=0.2, relheight=0.05)
-
- preview_button = ctk.CTkButton(root, text='Preview', cursor='hand2', command=lambda: toggle_preview())
- preview_button.place(relx=0.65, rely=0.75, relwidth=0.2, relheight=0.05)
-
- status_label = ctk.CTkLabel(root, text=None, justify='center')
- status_label.place(relx=0.1, rely=0.9, relwidth=0.8)
-
- donate_label = ctk.CTkLabel(root, text='^_^ Donate to project ^_^', justify='center', cursor='hand2')
- donate_label.place(relx=0.1, rely=0.95, relwidth=0.8)
- donate_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color'))
- donate_label.bind('