diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Electrotechnique Industrielle Guy Seguier Pdf Download A Modern and Comprehensive Treatment of Industrial Electrical Technology.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Electrotechnique Industrielle Guy Seguier Pdf Download A Modern and Comprehensive Treatment of Industrial Electrical Technology.md
deleted file mode 100644
index beae6c54dd31539971bac0207965f85d67871431..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Electrotechnique Industrielle Guy Seguier Pdf Download A Modern and Comprehensive Treatment of Industrial Electrical Technology.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Electrotechnique Industrielle Guy Seguier Pdf Download
-
Are you interested in learning more about industrial electrical engineering? Do you want to read a comprehensive and authoritative book on this subject? If so, you might want to download Electrotechnique Industrielle by Guy Seguier in PDF format. In this article, we will tell you what this book is about, who the author is, why it is important, and how you can get it for free. Let's get started!
-
What is Electrotechnique Industrielle?
-
Electrotechnique Industrielle is the French term for industrial electrical engineering. It is a branch of engineering that deals with the design, installation, operation, and maintenance of electrical systems and equipment used in industrial settings. Some of the topics covered by this field include electrical machines, power electronics, electrical networks, control and automation, and renewable energy sources.
-
Industrial electrical engineering is essential for the development and improvement of various industries, such as manufacturing, transportation, communication, energy, and more. It also contributes to the safety, efficiency, and sustainability of industrial processes and products.
-
Who is Guy Seguier?
-
Guy Seguier was a French engineer and professor who specialized in industrial electrical engineering. He was born in 1925 and died in 2013. He obtained his engineering degree from the Ecole Centrale de Paris in 1948 and his doctorate from the University of Paris in 1956. He worked as a research engineer at the French National Center for Scientific Research (CNRS) from 1949 to 1964. He then became a professor at the Ecole Nationale Supérieure d'Electricité et de Mécanique (ENSEM) in Nancy, where he taught until his retirement in 1990. He also served as the director of the Laboratory of Electrical Engineering and Industrial Electronics (LGEP) from 1970 to 1985.
-
Guy Seguier was a prolific author who wrote several books and articles on various aspects of industrial electrical engineering. He was also a respected expert who participated in many national and international committees and projects related to his field. He received several awards and honors for his contributions, such as the Grand Prix de l'Académie des Sciences in 1987 and the Legion of Honor in 1994.
-
Why is his book important?
-
One of his most famous books is Electrotechnique Industrielle, which he co-authored with Francis Notelet. This book was first published in 1977 by Technique et Documentation and has been revised and updated several times since then. The latest edition was published in 1994 by TEC et Doc and has 484 pages.
-
This book is considered to be one of the most comprehensive and authoritative references on industrial electrical engineering. It covers all the fundamental concepts and principles, as well as the practical applications and examples, of this field. It also includes many diagrams, tables, formulas, exercises, and solutions to help the readers understand and apply the theory. The book is written in a clear and concise style that makes it accessible to both students and professionals.
-
The book is divided into six parts:
-
-
-
Generalities: This part introduces the basic notions of electrical engineering, such as voltage, current, power, energy, resistance, capacitance, inductance, impedance, etc.
-
Electrical machines: This part covers the different types of electrical machines used in industrial settings, such as transformers, generators, motors, alternators, etc.
-
Power electronics: This part deals with the devices and circuits that convert and control electrical power, such as rectifiers, inverters, choppers, cycloconverters, etc.
-
Electrical networks: This part explains how electrical power is transmitted and distributed through various types of networks, such as AC or DC networks, single-phase or three-phase networks, balanced or unbalanced networks, etc.
-
Control and automation: This part describes how electrical systems are regulated and automated using various methods and tools, such as feedback control, PID control, state-space control, PLCs, SCADA systems, etc.
-
Renewable energy sources: This part discusses how electrical power can be generated from renewable sources, such as solar energy, wind energy, hydroelectric energy, biomass energy, etc.
-
-
How to download his book in PDF format?
-
If you want to download Electrotechnique Industrielle by Guy Seguier in PDF format, you have several options:
-
-
You can buy the book online from various websites, such as Amazon, Google Books, or AbeBooks, and then download it to your device.
-
You can borrow the book from a library or a friend who has it, and then scan it or take photos of it with your smartphone or camera, and then convert them to PDF using an app or a website.
-
You can search for a free PDF version of the book on the internet, but be careful about the quality, the legality, and the security of the sources you use. Some websites that claim to offer free PDF downloads may be fraudulent, infringing, or infected with malware.
-
-
Conclusion
-
In conclusion, Electrotechnique Industrielle by Guy Seguier is a great book for anyone who wants to learn more about industrial electrical engineering. It covers all the essential topics, from theory to practice, in a clear and comprehensive way. It is suitable for both students and professionals who want to improve their knowledge and skills in this field. If you want to download this book in PDF format, you can either buy it online, borrow it from a library or a friend, or search for a free version on the internet. However, you should always be careful about the quality, the legality, and the security of the sources you use.
-
FAQs
-
-
What is industrial electrical engineering?
-Industrial electrical engineering is a branch of engineering that deals with the design, installation, operation, and maintenance of electrical systems and equipment used in industrial settings.
-
Who is Guy Seguier?
-Guy Seguier was a French engineer and professor who specialized in industrial electrical engineering. He wrote several books and articles on this subject, including Electrotechnique Industrielle.
-
Why is Electrotechnique Industrielle important?
-Electrotechnique Industrielle is one of the most comprehensive and authoritative references on industrial electrical engineering. It covers all the fundamental concepts and principles, as well as the practical applications and examples, of this field.
-
How many pages does Electrotechnique Industrielle have?
-Electrotechnique Industrielle has 484 pages in its latest edition published in 1994 by TEC et Doc.
-
How can I download Electrotechnique Industrielle in PDF format?
-You can download Electrotechnique Industrielle in PDF format by buying it online, borrowing it from a library or a friend, or searching for a free version on the internet. However, you should always be careful about the quality, the legality, and the security of the sources you use.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cyberpunk delivers free upgrade to Xbox owners Everything you need to know.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cyberpunk delivers free upgrade to Xbox owners Everything you need to know.md
deleted file mode 100644
index 94f3864005f6a4737e8650f6512b8763371511e6..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Cyberpunk delivers free upgrade to Xbox owners Everything you need to know.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
Cyberpunk 2077 on PS5 and Xbox Series X/S will be a free upgrade for everyone who purchased a copy of the respective PS4 and Xbox One editions. It was originally planned for release this year but was understandably delayed considering how bugged and broken the last-gen versions were at launch.
-
You can also upgrade to the PS5 version if you have a physical PS4 game, as long as your PS5 has a disc drive. You'll always need to use the PS4 disc to play the PS5 version; upgrading doesn't get you a free digital copy of the game. You'll still download the PS5 update from the PSN, but you won't need a PS5-specific disc -- your PS4 one will become an authenticator.
Sony initially said 2022 exclusive Horizon Forbidden West wouldn't let you upgrade from the PS4 to the PS5 version for free unless you bought the more expensive Digital Deluxe, Collector's or Regalla Edition. It later reversed course, saying anyone who bought the PS4 version would be entitled to a free PS5 upgrade.
-
Patch 1.5 adds ray-traced local light shadows, smooth gameplay at 60fps with dynamic 4K scaling and DualSense haptic feedback to the game for PS5 and Xbox Series X gamers, as well as platform-agnostic improvements like "various improvements to the game, numerous quest and gameplay fixes, as well as a number of free DLC."
-
It's worth noting that the Cyberpunk 2077 next-gen upgrade will be free if you already own the game on last-gen consoles. When originally confirming the Xbox Series X Cyberpunk 2077 upgrade, CD Projekt Red said (via Twitter (opens in new tab)) that "gamers should never be forced to purchase the same game twice or pay for upgrades," and we've seen nothing to indicate that's going to change.
-
"Earlier in the year we announced that if you pick up Cyberpunk 2077 on Xbox One you'll be able to play it on Xbox Series X when the console launches," the stream states. "If you pick up Cyberpunk 2077 on PS4, you'll also be able to play it on PS5 when the console launches. And that's not all. There will be a free upgrade for Xbox Series X and PS5, but we'll have more details on that soon."
-
CD Projekt Red announced via Twitter that it has an Xbox Series X upgrade of Cyberpunk 2077 in the works. It also said that when it's ready, gamers who already purchased the title for Xbox One will get it for free. "Gamers should never be forced to purchase the same game twice or pay for upgrades," the developer said. "Owners of Cyberpunk 2077 will receive the Xbox Series X upgrade for free when it is available."
-
Quick, everyone act surprised! CD Projekt Red has confirmed that Cyberpunk 2077's free Xbox Series X and Xbox Series S upgrade is available TODAY, and you can start downloading it right now. It clocks in at around a whopping 62GB.
-
"Xbox One players will be able to upgrade to the next-gen version of this completely original open-world survival adventure game for free. Xbox Series X users will be able to choose between 4K or Ray Tracing functions (Ray Tracing unavailable on Xbox Series S)."
-
I bought The Witcher 3 GOTY edition for a shockingly low £6.99 in anticipation of the upgrade. I completed the base game on release, but the GOTY edition gives me extra incentive because it has separate achievements as well.
-
-
Cue a lot of disgruntled customers that cannot access the shiny new version of the game on their new-gen consoles because they can't find the upgrade option on the PlayStation or Xbox storefronts in their region. For those affected, the upgrade is either locked or showing up as a paid upgrade (when the new-gen versions should be free to anyone who already owns the game).
-
For players on Xbox Series X|S and PlayStation 5, Patch 1.5 marks the introduction of a dedicated next-gen version of the game featuring enhancements like dynamic 4K scaling and ray-tracing features on Xbox Series X and PlayStation 5, faster loading times, and better reflections, among others. All of this, fueled by the extra power of next-gen hardware, is available to owners of the PlayStation 4 and Xbox One version of Cyberpunk 2077 via a free upgrade.
-
Furthermore, this latest update also comes with new pieces of free additional content that expands what Cyberpunk 2077 has to offer gamers: rentable apartments featuring unique player character interactions, fresh weapons and gear, new customization options, and more.
-
But what happens when developers release a game for the Xbox One X? Well, the Smart Delivery feature means you can enjoy games like Cyberpunk 2077 on the Xbox One X, as well as a free upgrade to the Xbox Series X. Whether you have a physical or digital copy of the game, all you need to do is launch it on your Xbox One or Series X|S console, and the best version will be installed for you. When the optimized version is released, the backward-compatible version will automatically be replaced.
-
Tying in with the latest Xbox Series X details, Cyberpunk 2077 developer CD Projekt Red has confirmed that the game will be coming to next-gen systems -- in a way, at least. The gist of it is that if you buy Cyberpunk 2077 on Xbox One, you'll be able to upgrade the title for free on Xbox Series X. Based on the company's tweet, we assume that the same will apply to the PlayStation 4 version of the release once the PlayStation 5 hits later this year.
-
"Gamers should never be forced to purchase the same game twice or pay for upgrades," writes the official Cyberpunk 2077 Twitter account. "Owners of #Cyberpunk2077 for Xbox One will receive the Xbox Series X upgrade for free when available."
-
@3Above But you are comparing an upgrade from PS4 to PS5 with different versions on different platforms, where the Switch port was made by a different studio. Of course Nintendo won't accept the game being given away for free on their console, since they didn't get a cut of the other platform's sale. Try buying a game on Steam and asking GOG or the Epic store for a free key; I doubt it will work, and that has nothing to do with the developer.
-
This is entirely different, since it will be the first time a console with an eShop is backward compatible. This opens up a whole new range of possibilities for developers, and CDPR is the very first studio talking about a free upgrade across console generations.
-
CD Projekt Red has announced that gamers who own the Xbox One version of the highly-anticipated title Cyberpunk 2077 will receive the Xbox Series X upgrade for free when it becomes available. You can check out the Twitter announcement below!
-
Owners of The Witcher 3: Wild Hunt on PlayStation 4 and Xbox One will receive a free "next-gen" upgrade to the current-gen PS5 and Xbox Series X/S consoles in 2022. Fans have been awaiting the opportunity to see every detail of the grizzled White Wolf since the enhancement was first announced back in 2020. PC players do not have to worry, as the new features coming with the update will also hit the PC version. The enhanced edition of The Witcher 3 was scheduled to be released in the back half of 2021, then later delayed until the second quarter of 2022. Unfortunately, no word was given as to why this setback occurred, but the rough launch of Cyberpunk 2077 is a likely suspect.
-
The Witcher 3: Wild Hunt was released on May 18, 2015, and has received two expansions. Players were immediately drawn in by the vast open world, topped with stunning visuals and exciting combat. The game lives on in 2022 as fans continue to make mods for The Witcher 3. These fun changes add replayability, by enhancing Geralt's combat capabilities or altering characters in various ways. The game reached a new audience when players got their hands on the Nintendo Switch release, in October 2019. Currently, CD Projekt Red has yet to give an exact date for the next-gen upgrade to The Witcher 3.
-
The reason given for the new delay was that the decision was, "Based on recommendations supplied by teams supervising the development of both games." Most likely, CD Projekt Red does not want to repeat the disastrous launch of Cyberpunk 2077 and is making sure the upgrades are as polished as possible. Based on the details given for the new version, Witcher 3 fans will be able to experience the riveting open-world game like never before.
-
Based on reports, the next-generation upgrade may feature enhancements from one notable modder who goes by the name Halk Hogan. In an article by Kotaku, they reported on Halk's announcement that his creation of The Witcher 3 HD Reworked Project may be officially implemented in the new release. CD Projekt Red has not yet confirmed this collaboration, but Witcher 3 has gone through many changes since its launch, and given that Halk already made major improvements to the graphics of the base game, a prolific modder officially working with the developers could make for the best overall upgrade. Whether the collaboration happens or not, players can expect to enjoy The Witcher 3 at 60 FPS and 4K resolution for PC, Xbox Series X/S, and PlayStation 5 sometime in the second quarter of 2022.
-
As expected the PlayStation 5 and Xbox Series X|S upgrades for Cyberpunk 2077 have been announced and they are available to download today alongside a huge Patch 1.5 update! Hoping to convince people that the game is now ready for prime time, a free trial is also available, giving you the first five hours of the game.
-
While Microsoft is making aggressive moves to ensure buyers grab the upcoming Xbox Series X, Sony is sort of taking a "wait and see" approach with the PS5. This lackadaisical attitude is putting developer CD Projekt Red (the studio behind The Witcher and Cyberpunk 2077) in a bind as it can't confirm if its upcoming open-world RPG will be able to offer a free upgrade to PS5 users.
-
One of the key selling points of the Xbox One version of Cyberpunk is that Microsoft will be offering a free upgrade to the Series X version when its next console launches. This means that players don't have to wait around for a "better" version of the game. They can simply buy Cyberpunk on release, begin playing the title, then get all of the visual enhancements if they decide on upgrading their console.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Drawings 6 Pro Crack REPACK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Drawings 6 Pro Crack REPACK.md
deleted file mode 100644
index a3a5c9caed756e06d98f31a5c3982dc616318cce..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Drawings 6 Pro Crack REPACK.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
-In addition, for the QuickEmbroidery product, there are "Text", "Complex", "Designer", and "Keyboards" modules as well.
-
-Wings 5 and Wings 6 are not compatible with each other.
-
-References
-
-External links
-
-Wings
-
-Category:Embroidery software
-
-Posts Tagged ‘clarion west’
-
-Wow, it’s been quite a while since I’ve posted. I’m well aware that it’s been quite a while since I’ve posted. I should make sure to share some of the great work I’ve been doing as well. But, mostly what I want to share is a song that has been keeping me company during the last month or so of being on my own for the first time in about 8 years, staying in a place that had lots of room and was relatively quiet and still.
-
-My name is Ross Farrar and I’m the singer, songwriter, and guitarist for the trio Clarity West. We have been around for a few years now, but I’m only just starting to understand what we do a little more clearly as we begin to play more shows. This is my first time posting anything I’ve written. I hope you enjoy it and that you can find a way to come see us sometime.
-
-Shepard Fairey, in another powerful remix, gives us the track “Losing Tomorrow” from the self-titled debut album from Portland, Oregon’s Clarion West. In addition to the track that originally appeared on the record, this remix also features remixes by The Weeping Choir, P.O.S., and Fatty Gainz.
-
-We’ve been playing some of our stuff lately at the North Shore Music Festival in Chicago. Check out a couple of videos and see what we’ve been doing and what we’re about. Hope you enjoy and get to see us out on a big stage soon.
-
-Q:
-
-What kind of tax does a head of household pay?
-
-I'm currently working as a software engineer and planning to file as self-employed. My earnings are going to come from two sources: direct contract, and consulting/contracting.
-
-What I'm confused about is:
-
-I can't charge more than a standard rate set by my state, so a freelance engineer will 4fefd39f24
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Webcam Bright Sn 21162510905.md b/spaces/1gistliPinn/ChatGPT4/Examples/Driver Webcam Bright Sn 21162510905.md
deleted file mode 100644
index 0034812dc96ab729f274f3b4d7e6e0b9e5f6d53f..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Webcam Bright Sn 21162510905.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-ESET.NOD32.OnDemand.Scanner.17.03.2.rar free download java e book of khalid mugal scjp1 6 driver webcam bright sn 21162510905.rar 1fdad05405
-
-
-
diff --git a/spaces/1line/AutoGPT/autogpt/memory/no_memory.py b/spaces/1line/AutoGPT/autogpt/memory/no_memory.py
deleted file mode 100644
index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/memory/no_memory.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""A class that does not store any data. This is the default memory provider."""
-from __future__ import annotations
-
-from typing import Any
-
-from autogpt.memory.base import MemoryProviderSingleton
-
-
-class NoMemory(MemoryProviderSingleton):
- """
- A class that does not store any data. This is the default memory provider.
- """
-
- def __init__(self, cfg):
- """
- Initializes the NoMemory provider.
-
- Args:
- cfg: The config object.
-
- Returns: None
- """
- pass
-
- def add(self, data: str) -> str:
- """
- Adds a data point to the memory. No action is taken in NoMemory.
-
- Args:
- data: The data to add.
-
- Returns: An empty string.
- """
- return ""
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
-
- Returns: None
- """
- return None
-
- def clear(self) -> str:
- """
- Clears the memory. No action is taken in NoMemory.
-
- Returns: An empty string.
- """
- return ""
-
- def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
- """
- Returns all the data in the memory that is relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
- num_relevant: The number of relevant data to return.
-
- Returns: None
- """
- return None
-
- def get_stats(self):
- """
- Returns: An empty dictionary as there are no stats in NoMemory.
- """
- return {}
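
The `NoMemory` provider above is essentially a null-object implementation of the memory interface: every method accepts the usual arguments but nothing is ever stored. A minimal sketch of what a caller sees follows; it is an illustration only, assuming the `autogpt.memory.no_memory` module path implied by the file path in the diff header, and is not part of the deleted file.

```python
# Sketch only: exercising the no-op NoMemory provider defined above.
from autogpt.memory.no_memory import NoMemory

memory = NoMemory(cfg=None)             # cfg is accepted but ignored
print(memory.add("remember this"))      # -> ""   (nothing is stored)
print(memory.get("remember this"))      # -> None (nothing can be retrieved)
print(memory.get_relevant("query", 5))  # -> None
print(memory.get_stats())               # -> {}   (no stats to report)
print(memory.clear())                   # -> ""
```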
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Ultima Version APK The Most Realistic Pool Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Ultima Version APK The Most Realistic Pool Game Ever.md
deleted file mode 100644
index f1e911fa26db0d3789539dc332e57cc182e9b9c7..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Ultima Version APK The Most Realistic Pool Game Ever.md
+++ /dev/null
@@ -1,155 +0,0 @@
-
-
8 Ball Pool Ultima Version APK: Everything You Need to Know
-
If you are a fan of pool games, you might have heard of 8 Ball Pool, one of the most popular and addictive online multiplayer games for Android devices. But do you know what is 8 Ball Pool Ultima Version APK and why you should download it? In this article, we will tell you everything you need to know about this amazing game, how to play it, and what benefits it can bring to you.
8 Ball Pool is a pool game developed by Miniclip.com that allows you to play with millions of players from around the world. You can choose from different game modes, such as 1-on-1 matches, tournaments, or practice mode. You can also customize your cue and pool table with various items that you can buy with coins or cash. Coins are the main currency of the game that you can earn by winning matches or spinning the wheel. Cash is the premium currency that you can use to buy exclusive cues, chat packs, or mini-games.
-
The difference between 8 Ball Pool and other pool games
-
Unlike other pool games that follow different rules and formats, 8 Ball Pool is based on the American style of eight-ball pool. This means that there are 15 object balls on the table, divided into two groups: solids (numbered 1-7) and stripes (numbered 9-15). The goal of the game is to pocket all the balls from your assigned group (either solids or stripes) and then pocket the black 8 ball in a called pocket. You have to do this before your opponent does or before you commit a foul. A foul occurs when you fail to hit any ball with your cue ball, hit the wrong ball first, pocket the cue ball or the 8 ball prematurely, or scratch (pocket the cue ball after hitting another ball).
-
What is 8 Ball Pool Ultima Version APK?
-
A description of the latest version of the game and its benefits
-
8 Ball Pool Ultima Version APK is a modified version of the original game that offers some extra features and advantages. Some of these features are:
-
-
You can get unlimited coins and cash without spending real money.
-
You can unlock all the cues and pool tables without waiting for levels or achievements.
-
You can access all the game modes and features without any restrictions.
-
You can play with any player from any country without any lag or connection issues.
-
You can enjoy the game without any ads or pop-ups.
-
-
8 Ball Pool Ultima Version APK is updated regularly to match the latest version of the original game, so you don't have to worry about missing out on any new content or updates. You can also play the game on any Android device, regardless of the model or specifications.
-
How to download and install the APK file on your device
-
Downloading and installing 8 Ball Pool Ultima Version APK is very easy and simple. Just follow these steps:
-
-
Go to [this link] and click on the download button to get the APK file.
-
Once the download is complete, go to your device settings and enable the option to install apps from unknown sources.
-
Locate the APK file in your device storage and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy playing 8 Ball Pool Ultima Version APK with unlimited coins and cash.
-
-
How to Play 8 Ball Pool Ultima Version APK?
-
A step-by-step guide on how to start a game and choose a table
-
Playing 8 Ball Pool Ultima Version APK is very similar to playing the original game. Here is how you can start a game and choose a table:
-
-
Open the game and sign in with your Facebook account or Miniclip ID. You can also play as a guest if you don't have an account.
-
Select the game mode you want to play. You can choose from 1-on-1 matches, tournaments, or practice mode.
-
Select the table you want to play on. You can choose from different locations, such as London, Sydney, Moscow, Tokyo, Las Vegas, etc. Each location has a different entry fee and prize pool.
-
Select your cue and pool table from the shop. You can use coins or cash to buy different cues and tables with different attributes, such as power, aim, spin, time, etc.
-
Tap on the play button and wait for an opponent to join. You can also invite your friends to play with you by tapping on the friends button.
-
-
Some tips and tricks to improve your skills and win more coins
-
If you want to become a better player and win more coins in 8 Ball Pool Ultima Version APK, here are some tips and tricks that you should keep in mind:
-
-
Use the guideline to aim your shots. The guideline shows you where your cue ball will hit the object ball and where it will go after that. You can adjust the angle and power of your shot by dragging your finger on the screen.
-
Use spin to control your cue ball. Spin allows you to change the direction and speed of your cue ball after it hits another ball. You can apply spin by tapping on the cue ball icon on the bottom right corner of the screen and moving it around.
-
Plan your shots ahead. Don't just hit any ball that you see. Think about which ball you want to hit next and where you want your cue ball to end up. Try to clear your group of balls as soon as possible and leave yourself an easy shot for the 8 ball.
-
Avoid fouls and scratches. Fouls and scratches give your opponent a free ball in hand, which means they can place their cue ball anywhere on the table. This gives them a huge advantage over you. To avoid fouls and scratches, make sure you hit your assigned ball first, don't pocket the cue ball or the 8 ball prematurely, and don't hit any other balls off the table.
-
Practice regularly. The best way to improve your skills is to practice as much as you can. Play against different opponents, try different cues and tables, and learn from your mistakes. You can also watch replays of your matches or other players' matches to see what they did right or wrong.
-
-
Why You Should Play 8 Ball Pool Ultima Version APK?
-
A list of the advantages of playing this game for your mental and physical health
-
Playing 8 Ball Pool Ultima Version APK is not only fun but also beneficial for your mental and physical health. Here are some of the advantages of playing this game:
-
-
It improves your concentration and focus. Playing pool requires you to pay attention to the details, such as the angle, power, spin, and position of your shots. This helps you to sharpen your concentration and focus skills, which can benefit you in other aspects of life, such as work, study, or driving.
-
It enhances your hand-eye coordination and motor skills. Playing pool involves using your hands, eyes, and brain to coordinate your movements and aim your shots. This helps you to improve your hand-eye coordination and motor skills, which can improve your physical performance and prevent injuries.
-
It reduces your stress and anxiety. Playing pool is a great way to relax and have fun with your friends or strangers. You can chat, laugh, and compete with them, which can boost your mood and reduce your stress and anxiety levels. Playing pool can also distract you from your worries and problems, and help you to cope with negative emotions.
-
It stimulates your brain and memory. Playing pool requires you to think strategically and creatively, as well as remember the rules and the score. This helps you to stimulate your brain and memory functions, which can prevent cognitive decline and dementia in the long run.
-
It increases your social skills and confidence. Playing pool allows you to meet new people and make new friends from different backgrounds and cultures. You can also learn from them and share your experiences with them, which can increase your social skills and confidence. Playing pool can also help you to overcome shyness and social anxiety, as well as improve your communication and teamwork skills.
-
-
A table comparing 8 Ball Pool Ultima Version APK with other pool games
-
| Features | 8 Ball Pool Ultima Version APK | Other Pool Games |
| --- | --- | --- |
| Coins and Cash | Unlimited | Limited |
| Cues and Tables | All Unlocked | Some Locked |
| Game Modes and Features | All Accessible | Some Restricted |
| Players and Locations | All Available | Some Unavailable |
| Ads and Pop-ups | None | Some |
| Updates and Content | Frequent | Infrequent |
| Compatibility and Performance | High | Low |
-
Conclusion
-
In conclusion, 8 Ball Pool Ultima Version APK is a fantastic game that you should definitely try if you love pool games. It offers you unlimited coins and cash, all cues and tables unlocked, all game modes and features accessible, all players and locations available, no ads or pop-ups, frequent updates and content, high compatibility and performance, and many more benefits. It also improves your concentration, focus, hand-eye coordination, motor skills, stress relief, brain stimulation, memory function, social skills, confidence, etc. So what are you waiting for? Download 8 Ball Pool Ultima Version APK today and enjoy playing the best pool game ever!
-
FAQs
-
Q1: Is 8 Ball Pool Ultima Version APK safe to download?
-
A1: Yes, 8 Ball Pool Ultima Version APK is safe to download. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source like [this link] to avoid any fake or corrupted files.
-
Q2: Can I play 8 Ball Pool Ultima Version APK offline?
-
A2: No, 8 Ball Pool Ultima Version APK is an online game that requires an internet connection to play. You cannot play it offline or without wifi. However, you can play it on any network speed or quality without any lag or connection issues.
-
Q3: How can I customize my cue and pool table in 8 Ball Pool Ultima Version APK?
-
A3: You can customize your cue and pool table in 8 Ball Pool Ultima Version APK by going to the shop section of the game. There you can find a variety of cues and tables with different designs, colors, patterns, attributes, etc. You can buy them with coins or cash that you have unlimited in this version of the game. You can also change your cue or table anytime during the game by tapping on the gear icon on the top right corner of the screen.
-
Q4: How can I challenge my friends in 8 Ball Pool Ultima Version APK?
-
A4: You can challenge your friends in 8 Ball Pool Ultima Version APK by tapping on the friends button on the bottom left corner of the screen. There you can see a list of your Facebook friends or Miniclip friends who are online or offline. You can also search for a friend by their name or ID. To challenge a friend, just tap on their name and select the table you want to play on. You can also chat with them before or during the game by tapping on the chat button on the top left corner of the screen.
-
Q5: How can I get more coins and cash in 8 Ball Pool Ultima Version APK?
-
A5: You don't need to worry about getting more coins and cash in 8 Ball Pool Ultima Version APK because you have unlimited amounts of them in this version of the game. You can use them to buy anything you want from the shop, play any game mode or feature, or enter any tournament. However, if you want to earn more coins and cash in the original game, you can do so by winning matches, spinning the wheel, playing mini-games, watching videos, completing offers, or inviting friends.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Disney 39s Aladdin 1994 Video Game Apk.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Disney 39s Aladdin 1994 Video Game Apk.md
deleted file mode 100644
index b4780082008b28e922962920931d9d080cb08c19..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Disney 39s Aladdin 1994 Video Game Apk.md
+++ /dev/null
@@ -1,74 +0,0 @@
-
-
Disney's Aladdin 1994 Video Game APK: A Retro Classic on Your Smartphone
-
Introduction
-
If you grew up in the 90s, chances are you have fond memories of playing Disney's Aladdin video game on your Sega Genesis, Game Gear, or Master System. Based on the animated film of the same name, this side-scrolling platformer was one of the best-selling and most acclaimed games of its time. It featured stunning graphics, catchy music, and addictive gameplay that captured the magic and adventure of the movie.
-
But what if you want to relive those memories on your smartphone? Is there a way to play Disney's Aladdin 1994 video game on your Android or iOS device? The answer is yes, thanks to a special APK file that allows you to run the game on your phone or tablet. In this article, we will tell you everything you need to know about Disney's Aladdin 1994 video game APK, including its features, how to download and install it, and some tips and tricks to optimize your experience.
Disney's Aladdin 1994 video game is a side-scrolling platformer in which you control Aladdin, the street-smart hero who falls in love with Princess Jasmine. You have to navigate through various levels inspired by the movie, such as the streets of Agrabah, the Cave of Wonders, and the Sultan's Palace. Along the way, you have to avoid enemies and obstacles, collect gems and apples, and use your scimitar and throwing skills to defeat foes.
-
The game has two difficulty settings: normal and hard. The normal mode has six levels, while the hard mode has seven levels. The hard mode also has more enemies, traps, and hazards. The game also has a password system that allows you to resume your progress from any level.
-
The controls are simple and intuitive. You can use the virtual buttons on the screen or tilt your device to move Aladdin left or right. You can also swipe up or down to jump or crouch. To attack with your scimitar, tap the sword button. To throw an apple, tap the apple button. You can also use the magic lamp button to summon Genie for help in certain situations.
-
Graphics and sound
-
One of the most impressive aspects of Disney's Aladdin 1994 video game is its graphics. The game features colorful and detailed sprites and backgrounds that faithfully recreate the look and feel of the movie. The animations are smooth and fluid, and the characters have expressive facial expressions. The game also has some cinematic cutscenes that tell the story between levels.
-
The sound is equally impressive. The game features a high-quality soundtrack that includes songs from the movie, such as "A Whole New World", "Prince Ali", and "Friend Like Me". The sound effects are also realistic and immersive, such as the clashing of swords, the roaring of tigers, and the cheering of crowds.
-
-
Levels and challenges
-
Disney's Aladdin 1994 video game has a variety of levels that offer different challenges and surprises. Some of the levels are:
-
-
Agrabah Market: The first level where you have to escape from the guards and meet Jasmine.
-
The Desert: The second level where you have to ride a magic carpet through a sandstorm.
-
Cave of Wonders: The third level, where you have to escape from a lava-filled chamber.
-
The Escape: The fourth level where you have to fly on a magic carpet and dodge falling rocks and lava.
-
Rooftops: The fifth level where you have to climb and jump across the rooftops of Agrabah and fight Jafar's minions.
-
Sultan's Dungeon: The sixth level where you have to rescue Abu from the dungeon and fight Iago, Jafar's parrot.
-
Jafar's Palace: The seventh and final level where you have to confront Jafar in his palace and defeat him in his snake form.
-
-
Each level has its own challenges and secrets, such as hidden items, bonus stages, and mini-games. For example, in the Cave of Wonders, you can find a magic flute that lets you play a snake-charming mini-game. In the Escape, you can find a magic carpet that lets you play a flying mini-game. In the Rooftops, you can find a scarab that lets you enter a bonus stage where you can collect extra lives and gems.
-
Bonus content and secrets
-
Disney's Aladdin 1994 video game also has some bonus content and secrets that add more fun and replay value to the game. Some of them are:
-
-
Cheat codes: You can enter some cheat codes to unlock different features, such as invincibility, infinite lives, infinite apples, level select, and debug mode.
-
Easter eggs: You can find some Easter eggs that reference other Disney movies, such as The Lion King, The Little Mermaid, and Beauty and the Beast.
-
Alternate endings: You can get different endings depending on how many gems you collect throughout the game. The best ending is achieved by collecting 70 gems or more.
-
-
How to download and install Disney's Aladdin 1994 video game APK
-
Requirements and compatibility
-
To download and install Disney's Aladdin 1994 video game APK, you need to have an Android or iOS device that meets the following requirements:
-
| Operating system | Version |
| --- | --- |
| Android | 4.0 or higher |
| iOS | 8.0 or higher |
-
You also need to have enough storage space on your device to install the APK file, which is about 50 MB in size.
-
Steps to download and install
-
To download and install Disney's Aladdin 1994 video game APK, follow these steps:
-
-
Go to the official website of Disney's Aladdin 1994 video game APK (link here) and click on the download button.
-
Wait for the download to finish and locate the APK file on your device.
-
If you are using an Android device, you need to enable the installation of apps from unknown sources. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
If you are using an iOS device, you need to trust the developer of the app. To do this, go to Settings > General > Device Management > Trust Developer Name and tap on Trust.
-
Tap on the APK file and follow the instructions to install it on your device.
-
Launch the app and enjoy playing Disney's Aladdin 1994 video game on your smartphone.
-
-
Tips and tricks to optimize your experience
-
To optimize your experience while playing Disney's Aladdin 1994 video game on your smartphone, here are some tips and tricks:
-
-
Adjust the settings according to your preference. You can change the sound volume, screen size, language, and controller layout in the options menu.
-
Save your progress frequently. You can use the password system or the save state feature to save your progress at any point in the game.
-
Use Genie wisely. You can use Genie to help you in certain situations, such as finding hidden items, skipping levels, or getting extra lives. However, you can only use Genie once per level, so use him wisely.
-
Collect as many gems as possible. Gems are useful for unlocking bonus content, getting alternate endings, and buying items from Peddler shops.
-
Explore every level thoroughly. There are many secrets and hidden areas in each level that can reward you with extra items, bonus stages, or Easter eggs.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Sniper 3D Mod Apk and Unlock Unlimited Money and Diamonds.md b/spaces/1phancelerku/anime-remove-background/Download Sniper 3D Mod Apk and Unlock Unlimited Money and Diamonds.md
deleted file mode 100644
index b82b586dd587653c7d29b19e0edfa16954a942f1..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Sniper 3D Mod Apk and Unlock Unlimited Money and Diamonds.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
Sniper 3D Mod APK Unlimited Money and Diamonds 2021: A Review
-
If you are a fan of shooting games and want to experience the thrill of being a professional sniper, then you should try Sniper 3D Mod APK. This is a modified version of the popular game Sniper 3D, which gives you access to unlimited money and diamonds, as well as all the weapons and upgrades in the game. In this article, we will review the features, benefits, and drawbacks of Sniper 3D Mod APK, as well as provide some tips and tricks on how to play it.
-
What is Sniper 3D?
-
Sniper 3D is a free-to-play action game developed by Fun Games For Free. It is available for Android and iOS devices, as well as Windows and Mac computers. The game puts you in the role of a deadly assassin who has to complete various missions around the world. You can choose from over 180+ authentic weapons, customize them with different attachments, and upgrade them to improve their performance. You can also play in different modes, such as offline missions, online PvP battles, squad wars, and special ops.
-
Sniper 3D Action: Experience the thrill of being a professional sniper in this stunning 3D gun game. Enjoy intuitive controls and realistic ballistics that'll make you feel like a real shooter.
-
Variety of Guns: Unlock a vast arsenal of sniper rifles, assault rifles, and other powerful guns. There are 180+ authentic weapons in the game. Upgrade your weapons and become the ultimate sniper 3D assassin.
-
Offline Gameplay: No internet connection? No problem! Enjoy Sniper 3D's offline mode and complete various challenging missions without the need for Wi-Fi or data.
-
PVE and PVP mode: Complete missions or play against other assassins in real time - whatever you like.
-
Diverse Locations: Travel to different parts of the world, taking on unique missions in various environments. Eliminate high-profile targets and show them who's the master shooter in town.
-
Free to Play: Join the action-packed world of Sniper 3D for free! This incredible shooting game offers hours of entertainment without costing a dime.
-
-
How to download and install Sniper 3D Mod APK?
-
If you want to enjoy the benefits of Sniper 3D Mod APK, you will need to download and install it on your device. Here are the steps to do so:
-
-
Go to a trusted website that offers Sniper 3D Mod APK download link, such as [APK TRIGGER](^1^) or [GoogleModAPK](^2^).
-
Click on the download button and wait for the file to be downloaded.
-
Once the file is downloaded, go to your device settings and enable unknown sources installation.
-
Locate the downloaded file in your file manager and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy Sniper 3D Mod APK unlimited money and diamonds 2021!
-
-
Why use Sniper 3D Mod APK?
-
Sniper 3D Mod APK is a modified version of the original game that gives you some extra advantages and features that are not available in the official version. Here are some of the reasons why you should use Sniper 3D Mod APK:
-
Unlimited money and diamonds
-
One of the main benefits of Sniper 3D Mod APK is that it gives you unlimited money and diamonds, which are the two main currencies in the game. You can use them to buy new weapons, upgrade them, unlock new skins, and more. You don't have to worry about running out of money or diamonds, or spending real money to get them. With Sniper 3D Mod APK, you can enjoy the game without any limitations.
-
Unlock all weapons and upgrades
-
Another benefit of Sniper 3D Mod APK is that it unlocks all the weapons and upgrades in the game. You can access over 180+ authentic weapons, from sniper rifles to assault rifles, and customize them with different attachments and scopes. You can also upgrade your weapons to increase their damage, accuracy, range, stability, and more. You don't have to complete missions or level up to unlock them. With Sniper 3D Mod APK, you can have the best weapons in the game at your disposal.
-
-
Enjoy offline and online modes
-
A third benefit of Sniper 3D Mod APK is that it allows you to enjoy both offline and online modes of the game. You can play offline missions without internet connection, or join online PvP battles and squad wars with other players around the world. You can also play special ops missions that require teamwork and strategy. You don't have to choose between offline and online modes. With Sniper 3D Mod APK, you can have the best of both worlds.
-
Tips and tricks for Sniper 3D Mod APK
-
Sniper 3D Mod APK is a fun and addictive game that will test your skills as a sniper. However, it can also be challenging and frustrating at times. Here are some tips and tricks that will help you improve your gameplay and become a master shooter:
-
Aim for headshots and moving targets
-
One of the most important tips for Sniper 3D Mod APK is to aim for headshots and moving targets. Headshots will deal more damage and earn you more points than body shots. Moving targets will also give you more points than stationary ones. However, they are also harder to hit, so you need to be patient and precise. Use your scope to zoom in on your target, wait for the right moment, and pull the trigger. Don't forget to account for wind direction and bullet drop as well.
-
Choose the right weapon for each mission
-
Another tip for Sniper 3D Mod APK is to choose the right weapon for each mission. Different missions will require different weapons, depending on the distance, environment, number of enemies, and other factors. For example, if you need to shoot from a long range, you should use a sniper rifle with a high magnification scope. If you need to shoot in a crowded area, you should use an assault rifle with a silencer. If you need to shoot in a dark place, you should use a weapon with a night vision scope. You can switch between different weapons before starting each mission.
-
Use the environment to your advantage
-
A third tip for Sniper 3D Mod APK is to use the environment to your advantage. The game features various locations with different elements that can help or hinder your shooting. For example, you can use buildings, cars, barrels, crates, and other objects as cover or distractions. You can also shoot explosive objects to cause chain reactions and eliminate multiple enemies at once. You can also shoot glass windows, lights, cameras, alarms, and other devices to create noise or confusion. Be creative and observant when using the environment.
-
Conclusion
-
Sniper 3D Mod APK is a great game for anyone who loves shooting games and wants to experience the thrill of being a professional sniper. It offers unlimited money and diamonds, as well as all the weapons and upgrades in the game. It also allows you to enjoy both offline and online modes of the game. However, it also requires skill, patience, precision, and strategy to complete various missions and challenges. If you follow our tips and tricks, you will be able to improve your gameplay and become a master shooter.
- FAQs
-
Here are some of the frequently asked questions about Sniper 3D Mod APK:
-
Q1: Is Sniper 3D Mod APK safe to use?
-
A1: Sniper 3D Mod APK is generally safe to use, as long as you download it from a trusted website and scan it with an antivirus program. However, you should be aware that using a modded version of the game may violate the terms and conditions of the original game, and may result in your account being banned or suspended. Therefore, you should use Sniper 3D Mod APK at your own risk.
-
Q2: Can I play Sniper 3D Mod APK with my friends?
-
A2: Yes, you can play Sniper 3D Mod APK with your friends, as long as they also have the same version of the game installed on their devices. You can join online PvP battles and squad wars with your friends, or compete against them in leaderboards and rankings. You can also chat with them in the game and share your achievements and tips.
-
Q3: What are the minimum requirements for Sniper 3D Mod APK?
-
A3: The minimum requirements for Sniper 3D Mod APK are:
Android version: 4.4 or higher
RAM: 2 GB or more
Storage: 100 MB or more
Internet connection: required for online modes
-
Q4: How can I update Sniper 3D Mod APK?
-
A4: To update Sniper 3D Mod APK, you will need to download and install the latest version of the game from the same website that you downloaded it from. You may also need to uninstall the previous version of the game before installing the new one. However, you should be careful when updating Sniper 3D Mod APK, as some updates may not be compatible with the modded version of the game, and may cause errors or crashes.
-
Q5: Where can I get more information about Sniper 3D Mod APK?
-
A5: If you want to get more information about Sniper 3D Mod APK, you can visit the official website of the original game at [Sniper 3D], or follow their social media accounts on [Facebook], [Twitter], [Instagram], and [YouTube]. You can also check out some online forums and blogs that discuss Sniper 3D Mod APK, such as [Reddit] and [Quora].
-
-
\ No newline at end of file
diff --git a/spaces/4Taps/SadTalker/modules/gfpgan_inference.py b/spaces/4Taps/SadTalker/modules/gfpgan_inference.py
deleted file mode 100644
index f4e7dc80eac012906b797843aa6019c2c4a39b3b..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/modules/gfpgan_inference.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import os,sys
-
-def gfpgan(scale, origin_mp4_path):
- current_code_path = sys.argv[0]
- current_root_path = os.path.split(current_code_path)[0]
- print(current_root_path)
- gfpgan_code_path = current_root_path+'/repositories/GFPGAN/inference_gfpgan.py'
- print(gfpgan_code_path)
-
- #video2pic
- result_dir = os.path.split(origin_mp4_path)[0]
- video_name = os.path.split(origin_mp4_path)[1]
- video_name = video_name.split('.')[0]
- print(video_name)
- str_scale = str(scale).replace('.', '_')
- output_mp4_path = os.path.join(result_dir, video_name+'##'+str_scale+'.mp4')
- temp_output_mp4_path = os.path.join(result_dir, 'temp_'+video_name+'##'+str_scale+'.mp4')
-
- audio_name = video_name.split('##')[-1]
- audio_path = os.path.join(result_dir, audio_name+'.wav')
- temp_pic_dir1 = os.path.join(result_dir, video_name)
- temp_pic_dir2 = os.path.join(result_dir, video_name+'##'+str_scale)
- os.makedirs(temp_pic_dir1, exist_ok=True)
- os.makedirs(temp_pic_dir2, exist_ok=True)
- cmd1 = 'ffmpeg -i \"{}\" -start_number 0 \"{}\"/%06d.png -loglevel error -y'.format(origin_mp4_path, temp_pic_dir1)
- os.system(cmd1)
- cmd2 = f'python {gfpgan_code_path} -i {temp_pic_dir1} -o {temp_pic_dir2} -s {scale}'
- os.system(cmd2)
- cmd3 = f'ffmpeg -r 25 -f image2 -i {temp_pic_dir2}/%06d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {temp_output_mp4_path}'
- os.system(cmd3)
- cmd4 = f'ffmpeg -y -i {temp_output_mp4_path} -i {audio_path} -vcodec copy {output_mp4_path}'
- os.system(cmd4)
- #shutil.rmtree(temp_pic_dir1)
- #shutil.rmtree(temp_pic_dir2)
-
- return output_mp4_path
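-# Rough usage sketch (the paths below are hypothetical, not taken from the repo):
-# gfpgan(2, 'results/face##audio.mp4')
-# would dump the frames to results/face##audio/, enhance them with
-# repositories/GFPGAN/inference_gfpgan.py at scale 2, re-encode them at 25 fps,
-# remux results/audio.wav back in, and return 'results/face##audio##2.mp4'.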
diff --git a/spaces/801artistry/RVC801/demucs/__init__.py b/spaces/801artistry/RVC801/demucs/__init__.py
deleted file mode 100644
index d4182e356427e1b05a79f8da641c70bb732514fa..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/demucs/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-__version__ = "2.0.3"
diff --git a/spaces/A00001/bingothoo/src/lib/bots/bing/tts.ts b/spaces/A00001/bingothoo/src/lib/bots/bing/tts.ts
deleted file mode 100644
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/lib/bots/bing/tts.ts
+++ /dev/null
@@ -1,82 +0,0 @@
-import { sleep } from './utils'
-
-const synth = window.speechSynthesis
-
-export class TTS {
- currentText = ''
- speakText = ''
- private controller = new AbortController()
- speaking = false
- get isSpeaking() {
- return this.speaking
- }
- finished = false
- constructor() {}
- abort = () => {
- this.controller.abort()
- }
-
- reset = () => {
- this.speaking = false
- this.finished = true
- this.currentText = ''
- this.speakText = ''
- this.abort()
- }
-
- speak = (text: string) => {
- if (!synth || text?.trim()?.length < 2) {
- return
- }
- this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '')
- this.finished = false
- this.loop()
- }
-
- private async doSpeak() {
- return new Promise((resolve) => {
- const endIndex = this.finished ? this.currentText.length :
- Math.max(
- this.currentText.lastIndexOf('。'),
- this.currentText.lastIndexOf(';'),
- this.currentText.lastIndexOf('、'),
- this.currentText.lastIndexOf('?'),
- this.currentText.lastIndexOf('\n')
- )
- const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0
-
- if (startIndex >= endIndex) {
- return resolve(true)
- }
- const text = this.currentText.slice(startIndex, endIndex)
- this.speakText = text
- const utterThis = new SpeechSynthesisUtterance(text)
- this.controller.signal.onabort = () => {
- synth.cancel()
- this.finished = true
- resolve(false)
- }
-
- utterThis.onend = function (event) {
- resolve(true)
- }
-
- utterThis.onerror = function (event) {
- resolve(false)
- }
-
- const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null
- utterThis.voice = voice
- synth.speak(utterThis)
- })
- }
-
- private async loop() {
- if (this.speaking) return
- this.speaking = true
- while(!this.finished) {
- await Promise.all([sleep(1000), this.doSpeak()])
- }
- this.speaking = false
- }
-}
diff --git a/spaces/AIConsultant/MusicGen/tests/data/__init__.py b/spaces/AIConsultant/MusicGen/tests/data/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/tests/data/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/emotion/pre_align.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/emotion/pre_align.py
deleted file mode 100644
index 3b625295a118845c01a3677004070714d11c162b..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/emotion/pre_align.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import os
-
-from data_gen.tts.base_preprocess import BasePreprocessor
-import glob
-import re
-
-class EmoPreAlign(BasePreprocessor):
-
- def meta_data(self):
- spks = ['0012', '0011', '0013', '0014', '0015', '0016', '0017', '0018', '0019', '0020']
- pattern = re.compile('[\t\n ]+')
- for spk in spks:
- for line in open(f"{self.raw_data_dir}/{spk}/{spk}.txt", 'r'): # open the speaker's transcript file
- line = re.sub(pattern, ' ', line)
- if line == ' ': continue
- split_ = line.split(' ')
- txt = ' '.join(split_[1: -2])
- item_name = split_[0]
- emotion = split_[-2]
- wav_fn = f'{self.raw_data_dir}/{spk}/{emotion}/{item_name}.wav'
- yield item_name, wav_fn, txt, spk, emotion
-
-
-if __name__ == "__main__":
- EmoPreAlign().process()
diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/utils.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/utils.py
deleted file mode 100644
index cf1deeaef4e51fcc7cc42f4f3e2d9a34296371f9..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/utils.py
+++ /dev/null
@@ -1,353 +0,0 @@
-# !/usr/bin/env python
-# -*- coding: utf-8 -*-
-# @Time : 2021/3/9 16:33
-# @Author : dongchao yang
-# @File : train.py
-
-import collections
-import collections.abc
-import sys
-from loguru import logger
-from pprint import pformat
-
-import numpy as np
-import pandas as pd
-import scipy
-import six
-import sklearn.preprocessing as pre
-import torch
-import tqdm
-import yaml
-
-from scipy.interpolate import interp1d
-
-def parse_config_or_kwargs(config_file, **kwargs):
- """parse_config_or_kwargs
- :param config_file: Config file that has parameters, yaml format
- :param **kwargs: Other alternative parameters or overwrites for config
- """
- with open(config_file) as con_read:
- yaml_config = yaml.load(con_read, Loader=yaml.FullLoader)
- arguments = dict(yaml_config, **kwargs)
- return arguments
-
-
-def find_contiguous_regions(activity_array): # if the vectorized XOR trick below is unclear, an equivalent O(n) loop can be written instead
- """Find contiguous regions from bool valued numpy.array.
- Copy of https://dcase-repo.github.io/dcase_util/_modules/dcase_util/data/decisions.html#DecisionEncoder
- Reason is:
- 1. This does not belong to a class necessarily
- 2. Import DecisionEncoder requires sndfile over some other imports..which causes some problems on clusters
- """
- change_indices = np.logical_xor(activity_array[1:], activity_array[:-1]).nonzero()[0]
- change_indices += 1
- if activity_array[0]:
- # If the first element of activity_array is True add 0 at the beginning
- change_indices = np.r_[0, change_indices]
-
- if activity_array[-1]:
- # If the last element of activity_array is True, add the length of the array
- change_indices = np.r_[change_indices, activity_array.size]
- # print(change_indices.reshape((-1, 2)))
- # Reshape the result into two columns
- return change_indices.reshape((-1, 2))
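-# A minimal worked example (the boolean array below is made up, purely illustrative):
-# >>> find_contiguous_regions(np.array([False, True, True, False, True]))
-# array([[1, 3],
-#        [4, 5]])
-# each row is an [onset, offset) index pair for one run of True values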
-
-
-def split_train_cv(
- data_frame: pd.DataFrame,
- frac: float = 0.9,
- y=None, # Only for stratified, computes necessary split
- **kwargs):
- """split_train_cv
-
- :param data_frame:
- :type data_frame: pd.DataFrame
- :param frac:
- :type frac: float
- """
- if kwargs.get('mode',
- None) == 'urbansed': # Filenames are DATA_-1 DATA_-2 etc
- data_frame.loc[:, 'id'] = data_frame.groupby(
- data_frame['filename'].str.split('_').apply(
- lambda x: '_'.join(x[:-1]))).ngroup()
- sampler = np.random.permutation(data_frame['id'].nunique())
- num_train = int(frac * len(sampler))
- train_indexes = sampler[:num_train]
- cv_indexes = sampler[num_train:]
- train_data = data_frame[data_frame['id'].isin(train_indexes)]
- cv_data = data_frame[data_frame['id'].isin(cv_indexes)]
- del train_data['id']
- del cv_data['id']
- elif kwargs.get('mode', None) == 'stratified':
- # Use stratified sampling
- from skmultilearn.model_selection import iterative_train_test_split
- index_train, _, index_cv, _ = iterative_train_test_split(
- data_frame.index.values.reshape(-1, 1), y, test_size=1. - frac)
- train_data = data_frame[data_frame.index.isin(index_train.squeeze())]
- cv_data = data_frame[data_frame.index.isin(index_cv.squeeze())] # cv --> cross validation
- else:
- # Simply split train_test
- train_data = data_frame.sample(frac=frac, random_state=10)
- cv_data = data_frame[~data_frame.index.isin(train_data.index)]
- return train_data, cv_data
-
-
-
-def pprint_dict(in_dict, outputfun=sys.stdout.write, formatter='yaml'): # print yaml file
- """pprint_dict
- :param outputfun: function to use, defaults to sys.stdout
- :param in_dict: dict to print
- """
- if formatter == 'yaml':
- format_fun = yaml.dump
- elif formatter == 'pretty':
- format_fun = pformat
- for line in format_fun(in_dict).split('\n'):
- outputfun(line)
-
-
-def getfile_outlogger(outputfile):
- log_format = "[{time:YYYY-MM-DD HH:mm:ss}] {message}"
- logger.configure(handlers=[{"sink": sys.stderr, "format": log_format}])
- if outputfile:
- logger.add(outputfile, enqueue=True, format=log_format)
- return logger
-
-# build and fit a label encoder from the given labels
-def train_labelencoder(labels: pd.Series, sparse=True):
- """encode_labels
-
- Encodes labels
-
- :param labels: pd.Series representing the raw labels e.g., Speech, Water
- :param encoder (optional): Encoder already fitted
- returns encoded labels (many hot) and the encoder
- """
- assert isinstance(labels, pd.Series), "Labels need to be series"
- if isinstance(labels[0], six.string_types):
- # In case of using non processed strings, e.g., Vacuum, Speech
- label_array = labels.str.split(',').values.tolist() # split label according to ','
- elif isinstance(labels[0], np.ndarray):
- # Encoder does not like to see numpy array
- label_array = [lab.tolist() for lab in labels]
- elif isinstance(labels[0], collections.abc.Iterable):
- label_array = labels
- encoder = pre.MultiLabelBinarizer(sparse_output=sparse)
- encoder.fit(label_array)
- return encoder
-
-
-def encode_labels(labels: pd.Series, encoder=None, sparse=True):
- """encode_labels
-
- Encodes labels
-
- :param labels: pd.Series representing the raw labels e.g., Speech, Water
- :param encoder (optional): Encoder already fitted
- returns encoded labels (many hot) and the encoder
- """
- assert isinstance(labels, pd.Series), "Labels need to be series"
- instance = labels.iloc[0]
- if isinstance(instance, six.string_types):
- # In case of using non processed strings, e.g., Vacuum, Speech
- label_array = labels.str.split(',').values.tolist()
- elif isinstance(instance, np.ndarray):
- # Encoder does not like to see numpy array
- label_array = [lab.tolist() for lab in labels]
- elif isinstance(instance, collections.abc.Iterable):
- label_array = labels
- # label_array is now a list of per-item label lists, where each label is a string
- if not encoder:
- encoder = pre.MultiLabelBinarizer(sparse_output=sparse) # no encoder was passed in, so initialize a new one
- encoder.fit(label_array)
- labels_encoded = encoder.transform(label_array) # transform the string labels into a multi-hot matrix
- return labels_encoded, encoder
-
- # return pd.arrays.SparseArray(
- # [row.toarray().ravel() for row in labels_encoded]), encoder
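-# Tiny illustration with made-up labels (not from the original data):
-# >>> encoded, enc = encode_labels(pd.Series(['Speech,Water', 'Speech']), sparse=False)
-# >>> list(enc.classes_) # ['Speech', 'Water']
-# >>> encoded # [[1, 1], [1, 0]], one multi-hot row per input entry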
-
-
-def decode_with_timestamps(events,labels: np.array):
- """decode_with_timestamps
- Decodes the predicted label array (2d) into a list of
- [(Labelname, onset, offset), ...]
-
- :param events: event label (or list of labels) paired with each row of predictions
- :param labels: n-dim array
- :type labels: np.array
- """
- # print('events ',events)
- # print('labels ',labels.shape)
- #assert 1==2
- if labels.ndim == 2:
- #print('...')
- return [_decode_with_timestamps(events[i],labels[i]) for i in range(labels.shape[0])]
- else:
- return _decode_with_timestamps(events,labels)
-
-
-def median_filter(x, window_size, threshold=0.5):
- """median_filter
- :param x: input prediction array of shape (B, T, C) or (B, T).
- Input is a sequence of probabilities 0 <= x <= 1
- :param window_size: An integer to use
- :param threshold: Binary thresholding threshold
- """
- x = binarize(x, threshold=threshold) # binarize the probabilities to 0 or 1
- if x.ndim == 3:
- size = (1, window_size, 1)
- elif x.ndim == 2 and x.shape[0] == 1:
- # Assume input is class-specific median filtering
- # E.g, Batch x Time [1, 501]
- size = (1, window_size)
- elif x.ndim == 2 and x.shape[0] > 1:
- # Assume input is standard median pooling, class-independent
- # E.g., Time x Class [501, 10]
- size = (window_size, 1)
- return scipy.ndimage.median_filter(x, size=size)
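-# Example on a made-up 1 x T probability row (the class-specific branch above):
-# >>> median_filter(np.array([[0.1, 0.9, 0.2, 0.9, 0.9]]), window_size=3, threshold=0.5)
-# array([[0., 0., 1., 1., 1.]])  (binarized to 0/1, then median-smoothed along time)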
-
-
-def _decode_with_timestamps(events,labels):
- result_labels = []
- # print('.......')
- # print('labels ',labels.shape)
- # print(labels)
- change_indices = find_contiguous_regions(labels)
- # print(change_indices)
- # assert 1==2
- for row in change_indices:
- result_labels.append((events,row[0], row[1]))
- return result_labels
-
-def inverse_transform_labels(encoder, pred):
- if pred.ndim == 3:
- return [encoder.inverse_transform(x) for x in pred]
- else:
- return encoder.inverse_transform(pred)
-
-
-def binarize(pred, threshold=0.5):
- # Batch_wise
- if pred.ndim == 3:
- return np.array(
- [pre.binarize(sub, threshold=threshold) for sub in pred])
- else:
- return pre.binarize(pred, threshold=threshold)
-
-
-def double_threshold(x, high_thres, low_thres, n_connect=1):
- """double_threshold
- Helper function to calculate double threshold for n-dim arrays
-
- :param x: input array
- :param high_thres: high threshold value
- :param low_thres: Low threshold value
- :param n_connect: clusters separated by a distance <= n will be merged
- """
- assert x.ndim <= 3, "Whoops something went wrong with the input ({}), check if its <= 3 dims".format(
- x.shape)
- if x.ndim == 3:
- apply_dim = 1
- elif x.ndim < 3:
- apply_dim = 0
- # x is assumed to be 3d: (batch, time, dim)
- # Assumed to be 2d : (time, dim)
- # Assumed to be 1d : (time)
- # time axis is therefore at 1 for 3d and 0 for 2d (
- return np.apply_along_axis(lambda x: _double_threshold(
- x, high_thres, low_thres, n_connect=n_connect),
- axis=apply_dim,
- arr=x)
-
-
-def _double_threshold(x, high_thres, low_thres, n_connect=1, return_arr=True): # in nature, double_threshold considers boundary question
- """_double_threshold
- Computes a double threshold over the input array
-
- :param x: input array, needs to be 1d
- :param high_thres: High threshold over the array
- :param low_thres: Low threshold over the array
- :param n_connect: Postprocessing, maximal distance between clusters to connect
- :param return_arr: If True (the default), return a 0/1 array of the same size as x; if False, return the filtered (start, end) index pairs.
- """
- assert x.ndim == 1, "Input needs to be 1d"
- high_locations = np.where(x > high_thres)[0] # indices where the value exceeds high_thres
- locations = x > low_thres # boolean mask of values above low_thres
- encoded_pairs = find_contiguous_regions(locations)
- # print('encoded_pairs ',encoded_pairs)
- filtered_list = list(
- filter(
- lambda pair:
- ((pair[0] <= high_locations) & (high_locations <= pair[1])).any(),
- encoded_pairs)) # keep only the pairs that contain at least one high-threshold index
- #print('filtered_list ',filtered_list)
- filtered_list = connect_(filtered_list, n_connect) # merge pairs whose gap is at most n_connect
- if return_arr:
- zero_one_arr = np.zeros_like(x, dtype=int)
- for sl in filtered_list:
- zero_one_arr[sl[0]:sl[1]] = 1
- return zero_one_arr
- return filtered_list
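-# Worked example on a made-up 1-d score sequence:
-# >>> _double_threshold(np.array([0.1, 0.6, 0.8, 0.6, 0.1, 0.6, 0.1]), 0.7, 0.5)
-# array([0, 1, 1, 1, 0, 0, 0])
-# only the low-threshold region containing a value above high_thres (the 0.8) survives;
-# the isolated 0.6 at index 5 is dropped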
-
-
-def connect_clusters(x, n=1):
- if x.ndim == 1:
- return connect_clusters_(x, n)
- if x.ndim >= 2:
- return np.apply_along_axis(lambda a: connect_clusters_(a, n=n), -2, x)
-
-
-def connect_clusters_(x, n=1):
- """connect_clusters_
- Connects clustered predictions (0,1) in x with range n
-
- :param x: Input array. zero-one format
- :param n: Number of frames to skip until connection can be made
- """
- assert x.ndim == 1, "input needs to be 1d"
- reg = find_contiguous_regions(x)
- start_end = connect_(reg, n=n)
- zero_one_arr = np.zeros_like(x, dtype=int)
- for sl in start_end:
- zero_one_arr[sl[0]:sl[1]] = 1
- return zero_one_arr
-
-
-def connect_(pairs, n=1):
- """connect_
- Connects two adjacent clusters if their distance is <= n
-
- :param pairs: Clusters as (start, end) pairs, e.g., [(1,5),(7,10)]
- :param n: maximum gap between two clusters for them to be merged
- """
- if len(pairs) == 0:
- return []
- start_, end_ = pairs[0]
- new_pairs = []
- for i, (next_item, cur_item) in enumerate(zip(pairs[1:], pairs[0:])):
- end_ = next_item[1]
- if next_item[0] - cur_item[1] <= n:
- pass
- else:
- new_pairs.append((start_, cur_item[1]))
- start_ = next_item[0]
- new_pairs.append((start_, end_))
- return new_pairs
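-# Example with made-up pairs: (1, 5) and (7, 10) merge once the gap 7 - 5 = 2 is within n:
-# >>> connect_([(1, 5), (7, 10)], n=2) # [(1, 10)]
-# >>> connect_([(1, 5), (7, 10)], n=1) # [(1, 5), (7, 10)]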
-
-
-def predictions_to_time(df, ratio):
- df.onset = df.onset * ratio
- df.offset = df.offset * ratio
- return df
-
-def upgrade_resolution(arr, scale):
- print('arr ',arr.shape)
- x = np.arange(0, arr.shape[0])
- f = interp1d(x, arr, kind='linear', axis=0, fill_value='extrapolate')
- scale_x = np.arange(0, arr.shape[0], 1 / scale)
- up_scale = f(scale_x)
- return up_scale
-# a = [0.1,0.2,0.3,0.8,0.4,0.1,0.3,0.9,0.4]
-# a = np.array(a)
-# b = a>0.2
-# _double_threshold(a,0.7,0.2)
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/dataset.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/dataset.py
deleted file mode 100644
index c049ef047e209b0488b73ec9ae283bf425b5abe8..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/dataset.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import collections
-import csv
-import logging
-import os
-import random
-from glob import glob
-from pathlib import Path
-
-import numpy as np
-import torch
-import torchvision
-
-logger = logging.getLogger(f'main.{__name__}')
-
-
-class VGGSound(torch.utils.data.Dataset):
-
- def __init__(self, split, specs_dir, transforms=None, splits_path='./data', meta_path='./data/vggsound.csv'):
- super().__init__()
- self.split = split
- self.specs_dir = specs_dir
- self.transforms = transforms
- self.splits_path = splits_path
- self.meta_path = meta_path
-
- vggsound_meta = list(csv.reader(open(meta_path), quotechar='"'))
- unique_classes = sorted(list(set(row[2] for row in vggsound_meta)))
- self.label2target = {label: target for target, label in enumerate(unique_classes)}
- self.target2label = {target: label for label, target in self.label2target.items()}
- self.video2target = {row[0]: self.label2target[row[2]] for row in vggsound_meta}
-
- split_clip_ids_path = os.path.join(splits_path, f'vggsound_{split}.txt')
- if not os.path.exists(split_clip_ids_path):
- self.make_split_files()
- clip_ids_with_timestamp = open(split_clip_ids_path).read().splitlines()
- clip_paths = [os.path.join(specs_dir, v + '_mel.npy') for v in clip_ids_with_timestamp]
- self.dataset = clip_paths
- # self.dataset = clip_paths[:10000] # overfit one batch
-
- # 'zyTX_1BXKDE_16000_26000'[:11] -> 'zyTX_1BXKDE'
- vid_classes = [self.video2target[Path(path).stem[:11]] for path in self.dataset]
- class2count = collections.Counter(vid_classes)
- self.class_counts = torch.tensor([class2count[cls] for cls in range(len(class2count))])
-
- # self.sample_weights = [len(self.dataset) / class2count[self.video2target[Path(path).stem[:11]]] for path in self.dataset]
-
- def __getitem__(self, idx):
- item = {}
-
- spec_path = self.dataset[idx]
- # 'zyTX_1BXKDE_16000_26000' -> 'zyTX_1BXKDE'
- video_name = Path(spec_path).stem[:11]
-
- item['input'] = np.load(spec_path)
- item['input_path'] = spec_path
-
- # if self.split in ['train', 'valid']:
- item['target'] = self.video2target[video_name]
- item['label'] = self.target2label[item['target']]
-
- if self.transforms is not None:
- item = self.transforms(item)
-
- return item
-
- def __len__(self):
- return len(self.dataset)
-
- def make_split_files(self):
- random.seed(1337)
- logger.info(f'The split files do not exist @ {self.splits_path}. Calculating the new ones.')
- # The downloaded videos (some went missing on YouTube and no longer available)
- available_vid_paths = sorted(glob(os.path.join(self.specs_dir, '*_mel.npy')))
- logger.info(f'The number of clips available after download: {len(available_vid_paths)}')
-
- # original (full) train and test sets
- vggsound_meta = list(csv.reader(open(self.meta_path), quotechar='"'))
- train_vids = {row[0] for row in vggsound_meta if row[3] == 'train'}
- test_vids = {row[0] for row in vggsound_meta if row[3] == 'test'}
- logger.info(f'The number of videos in vggsound train set: {len(train_vids)}')
- logger.info(f'The number of videos in vggsound test set: {len(test_vids)}')
-
- # class counts in test set. We would like to have the same distribution in valid
- unique_classes = sorted(list(set(row[2] for row in vggsound_meta)))
- label2target = {label: target for target, label in enumerate(unique_classes)}
- video2target = {row[0]: label2target[row[2]] for row in vggsound_meta}
- test_vid_classes = [video2target[vid] for vid in test_vids]
- test_target2count = collections.Counter(test_vid_classes)
-
- # now given the counts from test set, sample the same count for validation and the rest leave in train
- train_vids_wo_valid, valid_vids = set(), set()
- for target, label in enumerate(label2target.keys()):
- class_train_vids = [vid for vid in train_vids if video2target[vid] == target]
- random.shuffle(class_train_vids)
- count = test_target2count[target]
- valid_vids.update(class_train_vids[:count])
- train_vids_wo_valid.update(class_train_vids[count:])
-
- # make file with a list of available test videos (each video should contain timestamps as well)
- train_i = valid_i = test_i = 0
- with open(os.path.join(self.splits_path, 'vggsound_train.txt'), 'w') as train_file, \
- open(os.path.join(self.splits_path, 'vggsound_valid.txt'), 'w') as valid_file, \
- open(os.path.join(self.splits_path, 'vggsound_test.txt'), 'w') as test_file:
- for path in available_vid_paths:
- path = path.replace('_mel.npy', '')
- vid_name = Path(path).name
- # 'zyTX_1BXKDE_16000_26000'[:11] -> 'zyTX_1BXKDE'
- if vid_name[:11] in train_vids_wo_valid:
- train_file.write(vid_name + '\n')
- train_i += 1
- elif vid_name[:11] in valid_vids:
- valid_file.write(vid_name + '\n')
- valid_i += 1
- elif vid_name[:11] in test_vids:
- test_file.write(vid_name + '\n')
- test_i += 1
- else:
- raise Exception(f'Clip {vid_name} is neither in train, valid nor test. Strange.')
-
- logger.info(f'Put {train_i} clips to the train set and saved it to ./data/vggsound_train.txt')
- logger.info(f'Put {valid_i} clips to the valid set and saved it to ./data/vggsound_valid.txt')
- logger.info(f'Put {test_i} clips to the test set and saved it to ./data/vggsound_test.txt')
-
-
-if __name__ == '__main__':
- from transforms import Crop, StandardNormalizeAudio, ToTensor
- specs_path = '/home/nvme/data/vggsound/features/melspec_10s_22050hz/'
-
- transforms = torchvision.transforms.transforms.Compose([
- StandardNormalizeAudio(specs_path),
- ToTensor(),
- Crop([80, 848]),
- ])
-
- datasets = {
- 'train': VGGSound('train', specs_path, transforms),
- 'valid': VGGSound('valid', specs_path, transforms),
- 'test': VGGSound('test', specs_path, transforms),
- }
-
- print(datasets['train'][0])
- print(datasets['valid'][0])
- print(datasets['test'][0])
-
- print(datasets['train'].class_counts)
- print(datasets['valid'].class_counts)
- print(datasets['test'].class_counts)
diff --git a/spaces/AIWaves/Debate/src/agents/Prompt/__init__.py b/spaces/AIWaves/Debate/src/agents/Prompt/__init__.py
deleted file mode 100644
index da69c35ed2c4ec583721339c324a53d5622429d1..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Debate/src/agents/Prompt/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .base_Prompts import *
\ No newline at end of file
diff --git a/spaces/AIWaves/SOP_Generation-single/gradio_config.py b/spaces/AIWaves/SOP_Generation-single/gradio_config.py
deleted file mode 100644
index ba519c0f3a71037e6e209d3da21d034626291953..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/SOP_Generation-single/gradio_config.py
+++ /dev/null
@@ -1,439 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The AIWaves Inc. team.
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import json
-from PIL import Image
-import requests
-from typing import List, Tuple
-
-class GradioConfig:
- # How many avatars are currently registered
- POINTER = 0
-
- # Avatar image. You can add or replace.
- AGENT_HEAD_URL = [
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306241687579617434043.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306241687592097408547.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561699613.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561275758.jpg",
- "https://img.touxiangwu.com/uploads/allimg/2021090300/ry5k31wt33c.jpg",
- "https://img.touxiangwu.com/uploads/allimg/2021090300/0ls2gmwhrf5.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/02/202302281677545695326193.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/03/202303271679886128550253.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686711344407060.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686711345834296.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/05/202305171684311194291520.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/05/202305171684311196958993.jpg",
- "https://img.touxiangwu.com/uploads/allimg/2021082612/vr0bkov0dwl.jpg",
- "https://img.touxiangwu.com/uploads/allimg/2021082612/auqx5zfsv5g.jpg",
- "https://img.touxiangwu.com/uploads/allimg/2021082612/llofpivtwls.jpg",
- "https://img.touxiangwu.com/uploads/allimg/2021082612/3j2sdot3ye0.jpg",
- "https://img.touxiangwu.com/2020/3/nQfYf2.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918068774532.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918068289945.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918069785183.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561292003.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561578616.jpg",
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726564597524.jpg"
- ]
- USER_HEAD_URL = "https://img.touxiangwu.com/zb_users/upload/2023/05/202305301685407468585486.jpg"
-
- # The css style of gradio.Chatbot
- CSS = """
- #chatbot1 .user {
- background-color:transparent;
- border-color:transparent;
- }
- #chatbot1 .bot {
- background-color:transparent;
- border-color:transparent;
- }
- #btn {color: red; border-color: red;}
- """
-
- ID = ["USER", "AGENT", "SYSTEM"]
-
- # Bubble template
- BUBBLE_CSS = {
- # Background-color Name-color Name-content Font-color Font-size Content Avatar-URL
- "USER": """
-
-
-
-"""
-
- STATES_NAME:List[str] = None
-
- @classmethod
- def _generate_template(cls, types:str)->str:
- # normal: A state with no execution.
- # active-show-up: Active state, and content displayed above the horizontal line.
- # active-show-down: Active state, and content displayed below the horizontal line.
- # active-show-both: Active state, and content displayed both above and below the horizontal line.
- # active-show-none: Active state, with no content displayed above the horizontal line.
-
- assert types.lower() in ["normal","active-show-up", "active-show-down", "active-show-both", "active", "active-show-none"]
- both_templates = """
{src}\n'
- src = f'Anonymous No.{number}\n{src}'
- return src
-
-
-def generate_4chan_html(f):
- posts = []
- post = ''
- c = -2
- for line in f.splitlines():
- line += "\n"
- if line == '-----\n':
- continue
- elif line.startswith('--- '):
- c += 1
- if post != '':
- src = process_post(post, c)
- posts.append(src)
- post = line
- else:
- post += line
-
- if post != '':
- src = process_post(post, c)
- posts.append(src)
-
- for i in range(len(posts)):
- if i == 0:
- posts[i] = f'
{posts[i]}
\n'
- else:
- posts[i] = f'
{posts[i]}
\n'
-
- output = ''
- output += f'
'
- for post in posts:
- output += post
-
- output += '
'
- output = output.split('\n')
- for i in range(len(output)):
- output[i] = re.sub(r'^(>(.*?)( |))', r'\1', output[i])
- output[i] = re.sub(r'^
(>(.*?)( |))', r'
\1', output[i])
-
- output = '\n'.join(output)
- return output
-
-
-def make_thumbnail(image):
- image = image.resize((350, round(image.size[1] / image.size[0] * 350)), Image.Resampling.LANCZOS)
- if image.size[1] > 470:
- image = ImageOps.fit(image, (350, 470), Image.LANCZOS)
-
- return image
-
-
-def get_image_cache(path):
- cache_folder = Path("cache")
- if not cache_folder.exists():
- cache_folder.mkdir()
-
- mtime = os.stat(path).st_mtime
- if (path in image_cache and mtime != image_cache[path][0]) or (path not in image_cache):
- img = make_thumbnail(Image.open(path))
-
- old_p = Path(f'cache/{path.name}_cache.png')
- p = Path(f'cache/cache_{path.name}.png')
- if old_p.exists():
- old_p.rename(p)
-
- output_file = p
- img.convert('RGB').save(output_file, format='PNG')
- image_cache[path] = [mtime, output_file.as_posix()]
-
- return image_cache[path][1]
-
-
-def generate_instruct_html(history):
- output = f'
'
- for i, _row in enumerate(history):
- row = [convert_to_markdown(entry) for entry in _row]
-
- if row[0]: # don't display empty user messages
- output += f"""
-
'
-
- # We use ?name2 and ?time.time() to force the browser to reset caches
- img_bot = f'' if Path("cache/pfp_character.png").exists() else ''
- img_me = f'' if Path("cache/pfp_me.png").exists() else ''
-
- for i, _row in enumerate(history):
- row = [convert_to_markdown(entry) for entry in _row]
-
- if row[0]: # don't display empty user messages
- output += f"""
-
'
-
- for i, _row in enumerate(history):
- row = [convert_to_markdown(entry) for entry in _row]
-
- if row[0]: # don't display empty user messages
- output += f"""
-
-
-
- {row[0]}
-
-
-
- """
-
- output += f"""
-
-
-
- {row[1]}
-
-
-
- """
-
- output += "
"
- return output
-
-
-def chat_html_wrapper(history, name1, name2, mode, style, reset_cache=False):
- if mode == 'instruct':
- return generate_instruct_html(history['visible'])
- elif style == 'wpp':
- return generate_chat_html(history['visible'], name1, name2)
- else:
- return generate_cai_chat_html(history['visible'], name1, name2, style, reset_cache)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/ssl_.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/ssl_.py
deleted file mode 100644
index 2b45d391d4d7398e4769f45f9dd25eb55daef437..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/ssl_.py
+++ /dev/null
@@ -1,495 +0,0 @@
-from __future__ import absolute_import
-
-import hmac
-import os
-import sys
-import warnings
-from binascii import hexlify, unhexlify
-from hashlib import md5, sha1, sha256
-
-from ..exceptions import (
- InsecurePlatformWarning,
- ProxySchemeUnsupported,
- SNIMissingWarning,
- SSLError,
-)
-from ..packages import six
-from .url import BRACELESS_IPV6_ADDRZ_RE, IPV4_RE
-
-SSLContext = None
-SSLTransport = None
-HAS_SNI = False
-IS_PYOPENSSL = False
-IS_SECURETRANSPORT = False
-ALPN_PROTOCOLS = ["http/1.1"]
-
-# Maps the length of a digest to a possible hash function producing this digest
-HASHFUNC_MAP = {32: md5, 40: sha1, 64: sha256}
-
-
-def _const_compare_digest_backport(a, b):
- """
- Compare two digests of equal length in constant time.
-
- The digests must be of type str/bytes.
- Returns True if the digests match, and False otherwise.
- """
- result = abs(len(a) - len(b))
- for left, right in zip(bytearray(a), bytearray(b)):
- result |= left ^ right
- return result == 0
-
-
-_const_compare_digest = getattr(hmac, "compare_digest", _const_compare_digest_backport)
-
-try: # Test for SSL features
- import ssl
- from ssl import CERT_REQUIRED, wrap_socket
-except ImportError:
- pass
-
-try:
- from ssl import HAS_SNI # Has SNI?
-except ImportError:
- pass
-
-try:
- from .ssltransport import SSLTransport
-except ImportError:
- pass
-
-
-try: # Platform-specific: Python 3.6
- from ssl import PROTOCOL_TLS
-
- PROTOCOL_SSLv23 = PROTOCOL_TLS
-except ImportError:
- try:
- from ssl import PROTOCOL_SSLv23 as PROTOCOL_TLS
-
- PROTOCOL_SSLv23 = PROTOCOL_TLS
- except ImportError:
- PROTOCOL_SSLv23 = PROTOCOL_TLS = 2
-
-try:
- from ssl import PROTOCOL_TLS_CLIENT
-except ImportError:
- PROTOCOL_TLS_CLIENT = PROTOCOL_TLS
-
-
-try:
- from ssl import OP_NO_COMPRESSION, OP_NO_SSLv2, OP_NO_SSLv3
-except ImportError:
- OP_NO_SSLv2, OP_NO_SSLv3 = 0x1000000, 0x2000000
- OP_NO_COMPRESSION = 0x20000
-
-
-try: # OP_NO_TICKET was added in Python 3.6
- from ssl import OP_NO_TICKET
-except ImportError:
- OP_NO_TICKET = 0x4000
-
-
-# A secure default.
-# Sources for more information on TLS ciphers:
-#
-# - https://wiki.mozilla.org/Security/Server_Side_TLS
-# - https://www.ssllabs.com/projects/best-practices/index.html
-# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
-#
-# The general intent is:
-# - prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
-# - prefer ECDHE over DHE for better performance,
-# - prefer any AES-GCM and ChaCha20 over any AES-CBC for better performance and
-# security,
-# - prefer AES-GCM over ChaCha20 because hardware-accelerated AES is common,
-# - disable NULL authentication, MD5 MACs, DSS, and other
-# insecure ciphers for security reasons.
-# - NOTE: TLS 1.3 cipher suites are managed through a different interface
-# not exposed by CPython (yet!) and are enabled by default if they're available.
-DEFAULT_CIPHERS = ":".join(
- [
- "ECDHE+AESGCM",
- "ECDHE+CHACHA20",
- "DHE+AESGCM",
- "DHE+CHACHA20",
- "ECDH+AESGCM",
- "DH+AESGCM",
- "ECDH+AES",
- "DH+AES",
- "RSA+AESGCM",
- "RSA+AES",
- "!aNULL",
- "!eNULL",
- "!MD5",
- "!DSS",
- ]
-)
-
-try:
- from ssl import SSLContext # Modern SSL?
-except ImportError:
-
- class SSLContext(object): # Platform-specific: Python 2
- def __init__(self, protocol_version):
- self.protocol = protocol_version
- # Use default values from a real SSLContext
- self.check_hostname = False
- self.verify_mode = ssl.CERT_NONE
- self.ca_certs = None
- self.options = 0
- self.certfile = None
- self.keyfile = None
- self.ciphers = None
-
- def load_cert_chain(self, certfile, keyfile):
- self.certfile = certfile
- self.keyfile = keyfile
-
- def load_verify_locations(self, cafile=None, capath=None, cadata=None):
- self.ca_certs = cafile
-
- if capath is not None:
- raise SSLError("CA directories not supported in older Pythons")
-
- if cadata is not None:
- raise SSLError("CA data not supported in older Pythons")
-
- def set_ciphers(self, cipher_suite):
- self.ciphers = cipher_suite
-
- def wrap_socket(self, socket, server_hostname=None, server_side=False):
- warnings.warn(
- "A true SSLContext object is not available. This prevents "
- "urllib3 from configuring SSL appropriately and may cause "
- "certain SSL connections to fail. You can upgrade to a newer "
- "version of Python to solve this. For more information, see "
- "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html"
- "#ssl-warnings",
- InsecurePlatformWarning,
- )
- kwargs = {
- "keyfile": self.keyfile,
- "certfile": self.certfile,
- "ca_certs": self.ca_certs,
- "cert_reqs": self.verify_mode,
- "ssl_version": self.protocol,
- "server_side": server_side,
- }
- return wrap_socket(socket, ciphers=self.ciphers, **kwargs)
-
-
-def assert_fingerprint(cert, fingerprint):
- """
- Checks if given fingerprint matches the supplied certificate.
-
- :param cert:
- Certificate as bytes object.
- :param fingerprint:
- Fingerprint as string of hexdigits, can be interspersed by colons.
- """
-
- fingerprint = fingerprint.replace(":", "").lower()
- digest_length = len(fingerprint)
- hashfunc = HASHFUNC_MAP.get(digest_length)
- if not hashfunc:
- raise SSLError("Fingerprint of invalid length: {0}".format(fingerprint))
-
- # We need encode() here for py32; works on py2 and py33.
- fingerprint_bytes = unhexlify(fingerprint.encode())
-
- cert_digest = hashfunc(cert).digest()
-
- if not _const_compare_digest(cert_digest, fingerprint_bytes):
- raise SSLError(
- 'Fingerprints did not match. Expected "{0}", got "{1}".'.format(
- fingerprint, hexlify(cert_digest)
- )
- )
-
-
-def resolve_cert_reqs(candidate):
- """
- Resolves the argument to a numeric constant, which can be passed to
- the wrap_socket function/method from the ssl module.
- Defaults to :data:`ssl.CERT_REQUIRED`.
- If given a string it is assumed to be the name of the constant in the
- :mod:`ssl` module or its abbreviation.
- (So you can specify `REQUIRED` instead of `CERT_REQUIRED`.
- If it's neither `None` nor a string we assume it is already the numeric
- constant which can directly be passed to wrap_socket.
- """
- if candidate is None:
- return CERT_REQUIRED
-
- if isinstance(candidate, str):
- res = getattr(ssl, candidate, None)
- if res is None:
- res = getattr(ssl, "CERT_" + candidate)
- return res
-
- return candidate
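-# For illustration: resolve_cert_reqs(None), resolve_cert_reqs("CERT_REQUIRED") and
-# resolve_cert_reqs("REQUIRED") all resolve to ssl.CERT_REQUIRED, while a numeric
-# constant such as ssl.CERT_NONE is passed through unchanged.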
-
-
-def resolve_ssl_version(candidate):
- """
- like resolve_cert_reqs
- """
- if candidate is None:
- return PROTOCOL_TLS
-
- if isinstance(candidate, str):
- res = getattr(ssl, candidate, None)
- if res is None:
- res = getattr(ssl, "PROTOCOL_" + candidate)
- return res
-
- return candidate
-
-
-def create_urllib3_context(
- ssl_version=None, cert_reqs=None, options=None, ciphers=None
-):
- """All arguments have the same meaning as ``ssl_wrap_socket``.
-
- By default, this function does a lot of the same work that
- ``ssl.create_default_context`` does on Python 3.4+. It:
-
- - Disables SSLv2, SSLv3, and compression
- - Sets a restricted set of server ciphers
-
- If you wish to enable SSLv3, you can do::
-
- from pip._vendor.urllib3.util import ssl_
- context = ssl_.create_urllib3_context()
- context.options &= ~ssl_.OP_NO_SSLv3
-
- You can do the same to enable compression (substituting ``COMPRESSION``
- for ``SSLv3`` in the last line above).
-
- :param ssl_version:
- The desired protocol version to use. This will default to
- PROTOCOL_SSLv23 which will negotiate the highest protocol that both
- the server and your installation of OpenSSL support.
- :param cert_reqs:
- Whether to require the certificate verification. This defaults to
- ``ssl.CERT_REQUIRED``.
- :param options:
- Specific OpenSSL options. These default to ``ssl.OP_NO_SSLv2``,
- ``ssl.OP_NO_SSLv3``, ``ssl.OP_NO_COMPRESSION``, and ``ssl.OP_NO_TICKET``.
- :param ciphers:
- Which cipher suites to allow the server to select.
- :returns:
- Constructed SSLContext object with specified options
- :rtype: SSLContext
- """
- # PROTOCOL_TLS is deprecated in Python 3.10
- if not ssl_version or ssl_version == PROTOCOL_TLS:
- ssl_version = PROTOCOL_TLS_CLIENT
-
- context = SSLContext(ssl_version)
-
- context.set_ciphers(ciphers or DEFAULT_CIPHERS)
-
- # Setting the default here, as we may have no ssl module on import
- cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs
-
- if options is None:
- options = 0
- # SSLv2 is easily broken and is considered harmful and dangerous
- options |= OP_NO_SSLv2
- # SSLv3 has several problems and is now dangerous
- options |= OP_NO_SSLv3
- # Disable compression to prevent CRIME attacks for OpenSSL 1.0+
- # (issue #309)
- options |= OP_NO_COMPRESSION
- # TLSv1.2 only. Unless set explicitly, do not request tickets.
- # This may save some bandwidth on wire, and although the ticket is encrypted,
- # there is a risk associated with it being on wire,
- # if the server is not rotating its ticketing keys properly.
- options |= OP_NO_TICKET
-
- context.options |= options
-
- # Enable post-handshake authentication for TLS 1.3, see GH #1634. PHA is
- # necessary for conditional client cert authentication with TLS 1.3.
- # The attribute is None for OpenSSL <= 1.1.0 or does not exist in older
- # versions of Python. We only enable on Python 3.7.4+ or if certificate
- # verification is enabled to work around Python issue #37428
- # See: https://bugs.python.org/issue37428
- if (cert_reqs == ssl.CERT_REQUIRED or sys.version_info >= (3, 7, 4)) and getattr(
- context, "post_handshake_auth", None
- ) is not None:
- context.post_handshake_auth = True
-
- def disable_check_hostname():
- if (
- getattr(context, "check_hostname", None) is not None
- ): # Platform-specific: Python 3.2
- # We do our own verification, including fingerprints and alternative
- # hostnames. So disable it here
- context.check_hostname = False
-
- # The order of the below lines setting verify_mode and check_hostname
- # matter due to safe-guards SSLContext has to prevent an SSLContext with
- # check_hostname=True, verify_mode=NONE/OPTIONAL. This is made even more
- # complex because we don't know whether PROTOCOL_TLS_CLIENT will be used
- # or not so we don't know the initial state of the freshly created SSLContext.
- if cert_reqs == ssl.CERT_REQUIRED:
- context.verify_mode = cert_reqs
- disable_check_hostname()
- else:
- disable_check_hostname()
- context.verify_mode = cert_reqs
-
- # Enable logging of TLS session keys via defacto standard environment variable
- # 'SSLKEYLOGFILE', if the feature is available (Python 3.8+). Skip empty values.
- if hasattr(context, "keylog_filename"):
- sslkeylogfile = os.environ.get("SSLKEYLOGFILE")
- if sslkeylogfile:
- context.keylog_filename = sslkeylogfile
-
- return context
-
-
-def ssl_wrap_socket(
- sock,
- keyfile=None,
- certfile=None,
- cert_reqs=None,
- ca_certs=None,
- server_hostname=None,
- ssl_version=None,
- ciphers=None,
- ssl_context=None,
- ca_cert_dir=None,
- key_password=None,
- ca_cert_data=None,
- tls_in_tls=False,
-):
- """
- All arguments except for server_hostname, ssl_context, and ca_cert_dir have
- the same meaning as they do when using :func:`ssl.wrap_socket`.
-
- :param server_hostname:
- When SNI is supported, the expected hostname of the certificate
- :param ssl_context:
- A pre-made :class:`SSLContext` object. If none is provided, one will
- be created using :func:`create_urllib3_context`.
- :param ciphers:
- A string of ciphers we wish the client to support.
- :param ca_cert_dir:
- A directory containing CA certificates in multiple separate files, as
- supported by OpenSSL's -CApath flag or the capath argument to
- SSLContext.load_verify_locations().
- :param key_password:
- Optional password if the keyfile is encrypted.
- :param ca_cert_data:
- Optional string containing CA certificates in PEM format suitable for
- passing as the cadata parameter to SSLContext.load_verify_locations()
- :param tls_in_tls:
- Use SSLTransport to wrap the existing socket.
- """
- context = ssl_context
- if context is None:
- # Note: This branch of code and all the variables in it are no longer
- # used by urllib3 itself. We should consider deprecating and removing
- # this code.
- context = create_urllib3_context(ssl_version, cert_reqs, ciphers=ciphers)
-
- if ca_certs or ca_cert_dir or ca_cert_data:
- try:
- context.load_verify_locations(ca_certs, ca_cert_dir, ca_cert_data)
- except (IOError, OSError) as e:
- raise SSLError(e)
-
- elif ssl_context is None and hasattr(context, "load_default_certs"):
- # try to load OS default certs; works well on Windows (requires Python 3.4+)
- context.load_default_certs()
-
- # Attempt to detect if we get the goofy behavior of the
- # keyfile being encrypted and OpenSSL asking for the
- # passphrase via the terminal and instead error out.
- if keyfile and key_password is None and _is_key_file_encrypted(keyfile):
- raise SSLError("Client private key is encrypted, password is required")
-
- if certfile:
- if key_password is None:
- context.load_cert_chain(certfile, keyfile)
- else:
- context.load_cert_chain(certfile, keyfile, key_password)
-
- try:
- if hasattr(context, "set_alpn_protocols"):
- context.set_alpn_protocols(ALPN_PROTOCOLS)
- except NotImplementedError: # Defensive: in CI, we always have set_alpn_protocols
- pass
-
- # If we detect server_hostname is an IP address then the SNI
- # extension should not be used according to RFC3546 Section 3.1
- use_sni_hostname = server_hostname and not is_ipaddress(server_hostname)
- # SecureTransport uses server_hostname in certificate verification.
- send_sni = (use_sni_hostname and HAS_SNI) or (
- IS_SECURETRANSPORT and server_hostname
- )
- # Do not warn the user if server_hostname is an invalid SNI hostname.
- if not HAS_SNI and use_sni_hostname:
- warnings.warn(
- "An HTTPS request has been made, but the SNI (Server Name "
- "Indication) extension to TLS is not available on this platform. "
- "This may cause the server to present an incorrect TLS "
- "certificate, which can cause validation failures. You can upgrade to "
- "a newer version of Python to solve this. For more information, see "
- "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html"
- "#ssl-warnings",
- SNIMissingWarning,
- )
-
- if send_sni:
- ssl_sock = _ssl_wrap_socket_impl(
- sock, context, tls_in_tls, server_hostname=server_hostname
- )
- else:
- ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
- return ssl_sock
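-# A hedged usage sketch (the socket variable, hostname and CA bundle path are made up):
-# ctx = create_urllib3_context(cert_reqs=ssl.CERT_REQUIRED)
-# ctx.load_verify_locations("/etc/ssl/certs/ca-certificates.crt")
-# tls_sock = ssl_wrap_socket(plain_sock, server_hostname="example.org", ssl_context=ctx)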
-
-
-def is_ipaddress(hostname):
- """Detects whether the hostname given is an IPv4 or IPv6 address.
- Also detects IPv6 addresses with Zone IDs.
-
- :param str hostname: Hostname to examine.
- :return: True if the hostname is an IP address, False otherwise.
- """
- if not six.PY2 and isinstance(hostname, bytes):
- # IDN A-label bytes are ASCII compatible.
- hostname = hostname.decode("ascii")
- return bool(IPV4_RE.match(hostname) or BRACELESS_IPV6_ADDRZ_RE.match(hostname))
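-# For illustration: is_ipaddress("10.0.0.1") and is_ipaddress("::1") return True, while
-# is_ipaddress("example.com") returns False, so SNI would still be sent for the latter.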
-
-
-def _is_key_file_encrypted(key_file):
- """Detects if a key file is encrypted or not."""
- with open(key_file, "r") as f:
- for line in f:
- # Look for Proc-Type: 4,ENCRYPTED
- if "ENCRYPTED" in line:
- return True
-
- return False
-
-
-def _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname=None):
- if tls_in_tls:
- if not SSLTransport:
- # Import error, ssl is not available.
- raise ProxySchemeUnsupported(
- "TLS in TLS requires support for the 'ssl' module"
- )
-
- SSLTransport._validate_ssl_context_for_tls_in_tls(ssl_context)
- return SSLTransport(sock, ssl_context, server_hostname)
-
- if server_hostname:
- return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
- else:
- return ssl_context.wrap_socket(sock)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/exceptions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/exceptions.py
deleted file mode 100644
index a38447bb05bd5d503a32651d6046ff8667785c0c..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/exceptions.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# exceptions.py
-
-import re
-import sys
-import typing
-
-from .util import col, line, lineno, _collapse_string_to_ranges
-from .unicode import pyparsing_unicode as ppu
-
-
-class ExceptionWordUnicode(ppu.Latin1, ppu.LatinA, ppu.LatinB, ppu.Greek, ppu.Cyrillic):
- pass
-
-
-_extract_alphanums = _collapse_string_to_ranges(ExceptionWordUnicode.alphanums)
-_exception_word_extractor = re.compile("([" + _extract_alphanums + "]{1,16})|.")
-
-
-class ParseBaseException(Exception):
- """base exception class for all parsing runtime exceptions"""
-
- # Performance tuning: we construct a *lot* of these, so keep this
- # constructor as small and fast as possible
- def __init__(
- self,
- pstr: str,
- loc: int = 0,
- msg: typing.Optional[str] = None,
- elem=None,
- ):
- self.loc = loc
- if msg is None:
- self.msg = pstr
- self.pstr = ""
- else:
- self.msg = msg
- self.pstr = pstr
- self.parser_element = self.parserElement = elem
- self.args = (pstr, loc, msg)
-
- @staticmethod
- def explain_exception(exc, depth=16):
- """
- Method to take an exception and translate the Python internal traceback into a list
- of the pyparsing expressions that caused the exception to be raised.
-
- Parameters:
-
- - exc - exception raised during parsing (need not be a ParseException, in support
- of Python exceptions that might be raised in a parse action)
- - depth (default=16) - number of levels back in the stack trace to list expression
- and function names; if None, the full stack trace names will be listed; if 0, only
- the failing input line, marker, and exception string will be shown
-
- Returns a multi-line string listing the ParserElements and/or function names in the
- exception's stack trace.
- """
- import inspect
- from .core import ParserElement
-
- if depth is None:
- depth = sys.getrecursionlimit()
- ret = []
- if isinstance(exc, ParseBaseException):
- ret.append(exc.line)
- ret.append(" " * (exc.column - 1) + "^")
- ret.append("{}: {}".format(type(exc).__name__, exc))
-
- if depth > 0:
- callers = inspect.getinnerframes(exc.__traceback__, context=depth)
- seen = set()
- for i, ff in enumerate(callers[-depth:]):
- frm = ff[0]
-
- f_self = frm.f_locals.get("self", None)
- if isinstance(f_self, ParserElement):
- if frm.f_code.co_name not in ("parseImpl", "_parseNoCache"):
- continue
- if id(f_self) in seen:
- continue
- seen.add(id(f_self))
-
- self_type = type(f_self)
- ret.append(
- "{}.{} - {}".format(
- self_type.__module__, self_type.__name__, f_self
- )
- )
-
- elif f_self is not None:
- self_type = type(f_self)
- ret.append("{}.{}".format(self_type.__module__, self_type.__name__))
-
- else:
- code = frm.f_code
- if code.co_name in ("wrapper", "<module>"):
- continue
-
- ret.append("{}".format(code.co_name))
-
- depth -= 1
- if not depth:
- break
-
- return "\n".join(ret)
-
- @classmethod
- def _from_exception(cls, pe):
- """
- internal factory method to simplify creating one type of ParseException
- from another - avoids having __init__ signature conflicts among subclasses
- """
- return cls(pe.pstr, pe.loc, pe.msg, pe.parserElement)
-
- @property
- def line(self) -> str:
- """
- Return the line of text where the exception occurred.
- """
- return line(self.loc, self.pstr)
-
- @property
- def lineno(self) -> int:
- """
- Return the 1-based line number of text where the exception occurred.
- """
- return lineno(self.loc, self.pstr)
-
- @property
- def col(self) -> int:
- """
- Return the 1-based column on the line of text where the exception occurred.
- """
- return col(self.loc, self.pstr)
-
- @property
- def column(self) -> int:
- """
- Return the 1-based column on the line of text where the exception occurred.
- """
- return col(self.loc, self.pstr)
-
- def __str__(self) -> str:
- if self.pstr:
- if self.loc >= len(self.pstr):
- foundstr = ", found end of text"
- else:
- # pull out next word at error location
- found_match = _exception_word_extractor.match(self.pstr, self.loc)
- if found_match is not None:
- found = found_match.group(0)
- else:
- found = self.pstr[self.loc : self.loc + 1]
- foundstr = (", found %r" % found).replace(r"\\", "\\")
- else:
- foundstr = ""
- return "{}{} (at char {}), (line:{}, col:{})".format(
- self.msg, foundstr, self.loc, self.lineno, self.column
- )
-
- def __repr__(self):
- return str(self)
-
- def mark_input_line(self, marker_string: str = None, *, markerString=">!<") -> str:
- """
- Extracts the exception line from the input string, and marks
- the location of the exception with a special symbol.
- """
- markerString = marker_string if marker_string is not None else markerString
- line_str = self.line
- line_column = self.column - 1
- if markerString:
- line_str = "".join(
- (line_str[:line_column], markerString, line_str[line_column:])
- )
- return line_str.strip()
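- # For illustration: if parsing "ABC" fails at the first character, the default marker gives
- # pe.mark_input_line() == ">!<ABC", while pe.mark_input_line("*") == "*ABC".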
-
- def explain(self, depth=16) -> str:
- """
- Method to translate the Python internal traceback into a list
- of the pyparsing expressions that caused the exception to be raised.
-
- Parameters:
-
- - depth (default=16) - number of levels back in the stack trace to list expression
- and function names; if None, the full stack trace names will be listed; if 0, only
- the failing input line, marker, and exception string will be shown
-
- Returns a multi-line string listing the ParserElements and/or function names in the
- exception's stack trace.
-
- Example::
-
- expr = pp.Word(pp.nums) * 3
- try:
- expr.parse_string("123 456 A789")
- except pp.ParseException as pe:
- print(pe.explain(depth=0))
-
- prints::
-
- 123 456 A789
- ^
- ParseException: Expected W:(0-9), found 'A' (at char 8), (line:1, col:9)
-
- Note: the diagnostic output will include string representations of the expressions
- that failed to parse. These representations will be more helpful if you use `set_name` to
- give identifiable names to your expressions. Otherwise they will use the default string
- forms, which may be cryptic to read.
-
- Note: pyparsing's default truncation of exception tracebacks may also truncate the
- stack of expressions that are displayed in the ``explain`` output. To get the full listing
- of parser expressions, you may have to set ``ParserElement.verbose_stacktrace = True``
- """
- return self.explain_exception(self, depth)
-
- markInputline = mark_input_line
-
-
-class ParseException(ParseBaseException):
- """
- Exception thrown when a parse expression doesn't match the input string
-
- Example::
-
- try:
- Word(nums).set_name("integer").parse_string("ABC")
- except ParseException as pe:
- print(pe)
- print("column: {}".format(pe.column))
-
- prints::
-
- Expected integer (at char 0), (line:1, col:1)
- column: 1
-
- """
-
-
-class ParseFatalException(ParseBaseException):
- """
- User-throwable exception thrown when inconsistent parse content
- is found; stops all parsing immediately
- """
-
-
-class ParseSyntaxException(ParseFatalException):
- """
- Just like :class:`ParseFatalException`, but thrown internally
- when an :class:`ErrorStop` ('-' operator) indicates
- that parsing is to stop immediately because an unbacktrackable
- syntax error has been found.
- """
-
-
-class RecursiveGrammarException(Exception):
- """
- Exception thrown by :class:`ParserElement.validate` if the
- grammar could be left-recursive; parser may need to enable
- left recursion using :class:`ParserElement.enable_left_recursion`
- """
-
- def __init__(self, parseElementList):
- self.parseElementTrace = parseElementList
-
- def __str__(self) -> str:
- return "RecursiveGrammarException: {}".format(self.parseElementTrace)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py
deleted file mode 100644
index d995f0bcc7e322d50af91ee23f3241d8cf46e637..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py
+++ /dev/null
@@ -1,493 +0,0 @@
-"""
-Load setuptools configuration from ``pyproject.toml`` files.
-
-**PRIVATE MODULE**: API reserved for setuptools internal usage only.
-"""
-import logging
-import os
-import warnings
-from contextlib import contextmanager
-from functools import partial
-from typing import TYPE_CHECKING, Callable, Dict, Optional, Mapping, Union
-
-from setuptools.errors import FileError, OptionError
-
-from . import expand as _expand
-from ._apply_pyprojecttoml import apply as _apply
-from ._apply_pyprojecttoml import _PREVIOUSLY_DEFINED, _WouldIgnoreField
-
-if TYPE_CHECKING:
- from setuptools.dist import Distribution # noqa
-
-_Path = Union[str, os.PathLike]
-_logger = logging.getLogger(__name__)
-
-
-def load_file(filepath: _Path) -> dict:
- from setuptools.extern import tomli # type: ignore
-
- with open(filepath, "rb") as file:
- return tomli.load(file)
-
-
-def validate(config: dict, filepath: _Path) -> bool:
- from . import _validate_pyproject as validator
-
- trove_classifier = validator.FORMAT_FUNCTIONS.get("trove-classifier")
- if hasattr(trove_classifier, "_disable_download"):
- # Improve reproducibility by default. See issue 31 for validate-pyproject.
- trove_classifier._disable_download() # type: ignore
-
- try:
- return validator.validate(config)
- except validator.ValidationError as ex:
- summary = f"configuration error: {ex.summary}"
- if ex.name.strip("`") != "project":
- # Probably it is just a field missing/misnamed, not worthy the verbosity...
- _logger.debug(summary)
- _logger.debug(ex.details)
-
- error = f"invalid pyproject.toml config: {ex.name}."
- raise ValueError(f"{error}\n{summary}") from None
-
-
-def apply_configuration(
- dist: "Distribution",
- filepath: _Path,
- ignore_option_errors=False,
-) -> "Distribution":
- """Apply the configuration from a ``pyproject.toml`` file into an existing
- distribution object.
- """
- config = read_configuration(filepath, True, ignore_option_errors, dist)
- return _apply(dist, config, filepath)
-
-
-def read_configuration(
- filepath: _Path,
- expand=True,
- ignore_option_errors=False,
- dist: Optional["Distribution"] = None,
-):
- """Read given configuration file and returns options from it as a dict.
-
- :param str|unicode filepath: Path to configuration file in the ``pyproject.toml``
- format.
-
- :param bool expand: Whether to expand directives and other computed values
- (i.e. post-process the given configuration)
-
- :param bool ignore_option_errors: Whether to silently ignore
- options, values of which could not be resolved (e.g. due to exceptions
- in directives such as file:, attr:, etc.).
- If False exceptions are propagated as expected.
-
- :param Distribution|None: Distribution object to which the configuration refers.
- If not given a dummy object will be created and discarded after the
- configuration is read. This is used for auto-discovery of packages in the case
- a dynamic configuration (e.g. ``attr`` or ``cmdclass``) is expanded.
- When ``expand=False`` this object is simply ignored.
-
- :rtype: dict
- """
- filepath = os.path.abspath(filepath)
-
- if not os.path.isfile(filepath):
- raise FileError(f"Configuration file {filepath!r} does not exist.")
-
- asdict = load_file(filepath) or {}
- project_table = asdict.get("project", {})
- tool_table = asdict.get("tool", {})
- setuptools_table = tool_table.get("setuptools", {})
- if not asdict or not (project_table or setuptools_table):
- return {} # User is not using pyproject to configure setuptools
-
- if setuptools_table:
- # TODO: Remove the following once the feature stabilizes:
- msg = "Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*."
- warnings.warn(msg, _BetaConfiguration)
-
- # There is an overall sense in the community that making include_package_data=True
- # the default would be an improvement.
- # `ini2toml` backfills include_package_data=False when nothing is explicitly given,
- # therefore setting a default here is backwards compatible.
- orig_setuptools_table = setuptools_table.copy()
- if dist and getattr(dist, "include_package_data") is not None:
- setuptools_table.setdefault("include-package-data", dist.include_package_data)
- else:
- setuptools_table.setdefault("include-package-data", True)
- # Persist changes:
- asdict["tool"] = tool_table
- tool_table["setuptools"] = setuptools_table
-
- try:
- # Don't complain about unrelated errors (e.g. tools not using the "tool" table)
- subset = {"project": project_table, "tool": {"setuptools": setuptools_table}}
- validate(subset, filepath)
- except Exception as ex:
- # TODO: Remove the following once the feature stabilizes:
- if _skip_bad_config(project_table, orig_setuptools_table, dist):
- return {}
- # TODO: After the previous statement is removed the try/except can be replaced
- # by the _ignore_errors context manager.
- if ignore_option_errors:
- _logger.debug(f"ignored error: {ex.__class__.__name__} - {ex}")
- else:
- raise # re-raise exception
-
- if expand:
- root_dir = os.path.dirname(filepath)
- return expand_configuration(asdict, root_dir, ignore_option_errors, dist)
-
- return asdict
-
-
-def _skip_bad_config(
- project_cfg: dict, setuptools_cfg: dict, dist: Optional["Distribution"]
-) -> bool:
- """Be temporarily forgiving with invalid ``pyproject.toml``"""
- # See pypa/setuptools#3199 and pypa/cibuildwheel#1064
-
- if dist is None or (
- dist.metadata.name is None
- and dist.metadata.version is None
- and dist.install_requires is None
- ):
- # It seems that the build is not getting any configuration from other places
- return False
-
- if setuptools_cfg:
- # If `[tool.setuptools]` is set, then `pyproject.toml` config is intentional
- return False
-
- given_config = set(project_cfg.keys())
- popular_subset = {"name", "version", "python_requires", "requires-python"}
- if given_config <= popular_subset:
- # It seems that the docs in cibuildwheel have been inadvertently encouraging users
- # to create `pyproject.toml` files that are not compliant with the standards.
- # Let's be forgiving for the time being.
- warnings.warn(_InvalidFile.message(), _InvalidFile, stacklevel=2)
- return True
-
- return False
-
-
-def expand_configuration(
- config: dict,
- root_dir: Optional[_Path] = None,
- ignore_option_errors: bool = False,
- dist: Optional["Distribution"] = None,
-) -> dict:
- """Given a configuration with unresolved fields (e.g. dynamic, cmdclass, ...)
- find their final values.
-
- :param dict config: Dict containing the configuration for the distribution
- :param str root_dir: Top-level directory for the distribution/project
- (the same directory where ``pyproject.toml`` is place)
- :param bool ignore_option_errors: see :func:`read_configuration`
- :param Distribution|None: Distribution object to which the configuration refers.
- If not given a dummy object will be created and discarded after the
- configuration is read. Used in the case a dynamic configuration
- (e.g. ``attr`` or ``cmdclass``).
-
- :rtype: dict
- """
- return _ConfigExpander(config, root_dir, ignore_option_errors, dist).expand()
-
-
-class _ConfigExpander:
- def __init__(
- self,
- config: dict,
- root_dir: Optional[_Path] = None,
- ignore_option_errors: bool = False,
- dist: Optional["Distribution"] = None,
- ):
- self.config = config
- self.root_dir = root_dir or os.getcwd()
- self.project_cfg = config.get("project", {})
- self.dynamic = self.project_cfg.get("dynamic", [])
- self.setuptools_cfg = config.get("tool", {}).get("setuptools", {})
- self.dynamic_cfg = self.setuptools_cfg.get("dynamic", {})
- self.ignore_option_errors = ignore_option_errors
- self._dist = dist
-
- def _ensure_dist(self) -> "Distribution":
- from setuptools.dist import Distribution
-
- attrs = {"src_root": self.root_dir, "name": self.project_cfg.get("name", None)}
- return self._dist or Distribution(attrs)
-
- def _process_field(self, container: dict, field: str, fn: Callable):
- if field in container:
- with _ignore_errors(self.ignore_option_errors):
- container[field] = fn(container[field])
-
- def _canonic_package_data(self, field="package-data"):
- package_data = self.setuptools_cfg.get(field, {})
- return _expand.canonic_package_data(package_data)
-
- def expand(self):
- self._expand_packages()
- self._canonic_package_data()
- self._canonic_package_data("exclude-package-data")
-
- # A distribution object is required for discovering the correct package_dir
- dist = self._ensure_dist()
- ctx = _EnsurePackagesDiscovered(dist, self.project_cfg, self.setuptools_cfg)
- with ctx as ensure_discovered:
- package_dir = ensure_discovered.package_dir
- self._expand_data_files()
- self._expand_cmdclass(package_dir)
- self._expand_all_dynamic(dist, package_dir)
-
- return self.config
-
- def _expand_packages(self):
- packages = self.setuptools_cfg.get("packages")
- if packages is None or isinstance(packages, (list, tuple)):
- return
-
- find = packages.get("find")
- if isinstance(find, dict):
- find["root_dir"] = self.root_dir
- find["fill_package_dir"] = self.setuptools_cfg.setdefault("package-dir", {})
- with _ignore_errors(self.ignore_option_errors):
- self.setuptools_cfg["packages"] = _expand.find_packages(**find)
-
- def _expand_data_files(self):
- data_files = partial(_expand.canonic_data_files, root_dir=self.root_dir)
- self._process_field(self.setuptools_cfg, "data-files", data_files)
-
- def _expand_cmdclass(self, package_dir: Mapping[str, str]):
- root_dir = self.root_dir
- cmdclass = partial(_expand.cmdclass, package_dir=package_dir, root_dir=root_dir)
- self._process_field(self.setuptools_cfg, "cmdclass", cmdclass)
-
- def _expand_all_dynamic(self, dist: "Distribution", package_dir: Mapping[str, str]):
- special = ( # need special handling
- "version",
- "readme",
- "entry-points",
- "scripts",
- "gui-scripts",
- "classifiers",
- "dependencies",
- "optional-dependencies",
- )
- # `_obtain` functions are assumed to raise appropriate exceptions/warnings.
- obtained_dynamic = {
- field: self._obtain(dist, field, package_dir)
- for field in self.dynamic
- if field not in special
- }
- obtained_dynamic.update(
- self._obtain_entry_points(dist, package_dir) or {},
- version=self._obtain_version(dist, package_dir),
- readme=self._obtain_readme(dist),
- classifiers=self._obtain_classifiers(dist),
- dependencies=self._obtain_dependencies(dist),
- optional_dependencies=self._obtain_optional_dependencies(dist),
- )
- # `None` indicates there is nothing in `tool.setuptools.dynamic` but the value
- # might have already been set by setup.py/extensions, so avoid overwriting.
- updates = {k: v for k, v in obtained_dynamic.items() if v is not None}
- self.project_cfg.update(updates)
-
- def _ensure_previously_set(self, dist: "Distribution", field: str):
- previous = _PREVIOUSLY_DEFINED[field](dist)
- if previous is None and not self.ignore_option_errors:
- msg = (
- f"No configuration found for dynamic {field!r}.\n"
- "Some dynamic fields need to be specified via `tool.setuptools.dynamic`"
- "\nothers must be specified via the equivalent attribute in `setup.py`."
- )
- raise OptionError(msg)
-
- def _expand_directive(
- self, specifier: str, directive, package_dir: Mapping[str, str]
- ):
- with _ignore_errors(self.ignore_option_errors):
- root_dir = self.root_dir
- if "file" in directive:
- return _expand.read_files(directive["file"], root_dir)
- if "attr" in directive:
- return _expand.read_attr(directive["attr"], package_dir, root_dir)
- raise ValueError(f"invalid `{specifier}`: {directive!r}")
- return None
-
- def _obtain(self, dist: "Distribution", field: str, package_dir: Mapping[str, str]):
- if field in self.dynamic_cfg:
- return self._expand_directive(
- f"tool.setuptools.dynamic.{field}",
- self.dynamic_cfg[field],
- package_dir,
- )
- self._ensure_previously_set(dist, field)
- return None
-
- def _obtain_version(self, dist: "Distribution", package_dir: Mapping[str, str]):
- # Since plugins can set version, let's silently skip if it cannot be obtained
- if "version" in self.dynamic and "version" in self.dynamic_cfg:
- return _expand.version(self._obtain(dist, "version", package_dir))
- return None
-
- def _obtain_readme(self, dist: "Distribution") -> Optional[Dict[str, str]]:
- if "readme" not in self.dynamic:
- return None
-
- dynamic_cfg = self.dynamic_cfg
- if "readme" in dynamic_cfg:
- return {
- "text": self._obtain(dist, "readme", {}),
- "content-type": dynamic_cfg["readme"].get("content-type", "text/x-rst"),
- }
-
- self._ensure_previously_set(dist, "readme")
- return None
-
- def _obtain_entry_points(
- self, dist: "Distribution", package_dir: Mapping[str, str]
- ) -> Optional[Dict[str, dict]]:
- fields = ("entry-points", "scripts", "gui-scripts")
- if not any(field in self.dynamic for field in fields):
- return None
-
- text = self._obtain(dist, "entry-points", package_dir)
- if text is None:
- return None
-
- groups = _expand.entry_points(text)
- expanded = {"entry-points": groups}
-
- def _set_scripts(field: str, group: str):
- if group in groups:
- value = groups.pop(group)
- if field not in self.dynamic:
- msg = _WouldIgnoreField.message(field, value)
- warnings.warn(msg, _WouldIgnoreField)
- # TODO: Don't set field when support for pyproject.toml stabilizes
- # instead raise an error as specified in PEP 621
- expanded[field] = value
-
- _set_scripts("scripts", "console_scripts")
- _set_scripts("gui-scripts", "gui_scripts")
-
- return expanded
-
- def _obtain_classifiers(self, dist: "Distribution"):
- if "classifiers" in self.dynamic:
- value = self._obtain(dist, "classifiers", {})
- if value:
- return value.splitlines()
- return None
-
- def _obtain_dependencies(self, dist: "Distribution"):
- if "dependencies" in self.dynamic:
- value = self._obtain(dist, "dependencies", {})
- if value:
- return _parse_requirements_list(value)
- return None
-
- def _obtain_optional_dependencies(self, dist: "Distribution"):
- if "optional-dependencies" not in self.dynamic:
- return None
- if "optional-dependencies" in self.dynamic_cfg:
- optional_dependencies_map = self.dynamic_cfg["optional-dependencies"]
- assert isinstance(optional_dependencies_map, dict)
- return {
- group: _parse_requirements_list(self._expand_directive(
- f"tool.setuptools.dynamic.optional-dependencies.{group}",
- directive,
- {},
- ))
- for group, directive in optional_dependencies_map.items()
- }
- self._ensure_previously_set(dist, "optional-dependencies")
- return None
-
-
-def _parse_requirements_list(value):
- return [
- line
- for line in value.splitlines()
- if line.strip() and not line.strip().startswith("#")
- ]
-
-
-@contextmanager
-def _ignore_errors(ignore_option_errors: bool):
- if not ignore_option_errors:
- yield
- return
-
- try:
- yield
- except Exception as ex:
- _logger.debug(f"ignored error: {ex.__class__.__name__} - {ex}")
-
-
-class _EnsurePackagesDiscovered(_expand.EnsurePackagesDiscovered):
- def __init__(
- self, distribution: "Distribution", project_cfg: dict, setuptools_cfg: dict
- ):
- super().__init__(distribution)
- self._project_cfg = project_cfg
- self._setuptools_cfg = setuptools_cfg
-
- def __enter__(self):
- """When entering the context, the values of ``packages``, ``py_modules`` and
- ``package_dir`` that are missing in ``dist`` are copied from ``setuptools_cfg``.
- """
- dist, cfg = self._dist, self._setuptools_cfg
- package_dir: Dict[str, str] = cfg.setdefault("package-dir", {})
- package_dir.update(dist.package_dir or {})
- dist.package_dir = package_dir # needs to be the same object
-
- dist.set_defaults._ignore_ext_modules() # pyproject.toml-specific behaviour
-
- # Set `name`, `py_modules` and `packages` in dist to short-circuit
- # auto-discovery, but avoid overwriting empty lists purposefully set by users.
- if dist.metadata.name is None:
- dist.metadata.name = self._project_cfg.get("name")
- if dist.py_modules is None:
- dist.py_modules = cfg.get("py-modules")
- if dist.packages is None:
- dist.packages = cfg.get("packages")
-
- return super().__enter__()
-
- def __exit__(self, exc_type, exc_value, traceback):
- """When exiting the context, if values of ``packages``, ``py_modules`` and
- ``package_dir`` are missing in ``setuptools_cfg``, copy from ``dist``.
- """
- # If anything was discovered set them back, so they count in the final config.
- self._setuptools_cfg.setdefault("packages", self._dist.packages)
- self._setuptools_cfg.setdefault("py-modules", self._dist.py_modules)
- return super().__exit__(exc_type, exc_value, traceback)
-
-
-class _BetaConfiguration(UserWarning):
- """Explicitly inform users that some `pyproject.toml` configuration is *beta*"""
-
-
-class _InvalidFile(UserWarning):
- """The given `pyproject.toml` file is invalid and would be ignored.
- !!\n\n
- ############################
- # Invalid `pyproject.toml` #
- ############################
-
- Any configurations in `pyproject.toml` will be ignored.
- Please note that future releases of setuptools will halt the build process
- if an invalid file is given.
-
- To prevent setuptools from considering `pyproject.toml` please
- DO NOT include the `[project]` or `[tool.setuptools]` tables in your file.
- \n\n!!
- """
-
- @classmethod
- def message(cls):
- from inspect import cleandoc
- return cleandoc(cls.__doc__)
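
For context, a minimal sketch of how the deleted `pyprojecttoml.py` module above is typically driven (it is a private setuptools API; the file path here is hypothetical):

```python
from setuptools.dist import Distribution
from setuptools.config.pyprojecttoml import apply_configuration, read_configuration

# Parse pyproject.toml and expand dynamic directives into a plain dict
config = read_configuration("pyproject.toml", expand=True)
print(config.get("project", {}).get("name"))

# Or apply the configuration directly onto a Distribution object
dist = apply_configuration(Distribution(), "pyproject.toml")
print(dist.metadata.name)
```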
diff --git a/spaces/Benson/text-generation/Examples/Conseguir Sobre l Descarga Gratuita 2022 Uptodown.md b/spaces/Benson/text-generation/Examples/Conseguir Sobre l Descarga Gratuita 2022 Uptodown.md
deleted file mode 100644
index 0337f1044423803a43f5944a62454c8f67ae9ac3..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Conseguir Sobre l Descarga Gratuita 2022 Uptodown.md
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
-Getting Over It Free Download 2022 Uptodown: A Guide for Frustrated Gamers
-
-If you are looking for a game that tests your patience, skill, and sanity, you may have heard of Getting Over It with Bennett Foddy. This game has become notorious for its extreme difficulty and rage-inducing gameplay. But what is this game about, and how can you get it for free in 2022? In this article, we answer these questions and give you some tips and tricks to help you climb this mountain of frustration.
-
-What is Getting Over It with Bennett Foddy?
-
-Getting Over It with Bennett Foddy is a video game released in 2017 by Bennett Foddy, an Australian game developer and philosopher. Foddy describes the game as "a game I made for a certain kind of person. To hurt them."
-
-getting over it free download 2022 uptodown
-The premise of the game is simple: you control a man named Diogenes who is stuck in a cauldron and has to use a hammer to climb a steep, slippery mountain of random objects. The game has no checkpoints, no save system, and no mercy. One wrong move can send you all the way back to the start. The game is designed to be frustrating, unfair, and unpredictable.
-
-A tribute to Sexy Hiking
-
-The game is also a tribute to Sexy Hiking, a 2002 B-game by Jazzuo with a similar concept and gameplay. Foddy was inspired by Sexy Hiking and wanted to create his own version with better graphics, physics, and sound. He also added his own voice as a narrator who comments on your progress, failure, and philosophy.
-
-A philosophical commentary
-
-
-Why do people want to play Getting Over It?
-
-Getting Over It is not a game for everyone. It is a game that will make you angry, sad, and hopeless. It is a game that will make you question your life choices and your sanity. So why do people want to play it? Here are some possible reasons:
-
-A challenge for the masochist
-
-Some people like to play hard games that push them to their limits. They like the feeling of overcoming obstacles and reaching goals that seem impossible. They like the thrill of risk and reward. They like the satisfaction of proving themselves and others wrong. They like the pain and pleasure of playing Getting Over It.
-
-
-A reward for persistence
-
-Some people play Getting Over It because they want to see what happens when they finish the game. They are curious about the ending and the reward that awaits them. They are determined not to give up and to reach the top of the mountain. They are motivated by the challenge and the mystery of Getting Over It.
-
-A meme for the internet
-
-Some people play Getting Over It because they want to join the online community and culture that has grown around the game. They want to share their experiences, reactions, and opinions with other players and viewers. They want to watch or create videos, streams, memes, fan art, and parodies of the game. They want to have fun and laugh at the absurdity and hilarity of Getting Over It.
-
-How can you get Getting Over It for free in 2022?
-
-Getting Over It is a paid game available on several platforms, such as Steam, Humble Bundle, iOS, and Android. The price varies depending on the platform and region, but it is usually around $8 USD. However, some people may not want to pay for the game or may not have access to the official platforms. In that case, how can they get Getting Over It for free in 2022? Here are some options:
-
-The official way
-
-
-The unofficial way
-
-The unofficial way to get Getting Over It for free is to download it from a third-party website or app store that offers pirated or cracked versions of the game. One such website is Uptodown, a popular platform for downloading apps and games for Android devices. Uptodown claims to offer a free download of the Getting Over It with Bennett Foddy APK, the file format for Android apps. However, this method is not recommended or endorsed by the developer or the platforms.
-
-The risks and drawbacks
-
-While getting Getting Over It for free may sound tempting, there are some risks and drawbacks involved. First, downloading pirated or cracked games is illegal and unethical, as it violates the intellectual property rights of the developer and the platforms. It also deprives them of the revenue and support they deserve for creating and distributing the game. Second, downloading games from untrusted sources can expose your device to malware, viruses, spyware, or other harmful software that can damage your system or steal your personal information. Third, downloading games from unofficial platforms can result in poor performance, compatibility issues, bugs, glitches, or missing features that can ruin your gaming experience. Therefore, it is better to buy the game on the official platforms or wait for a legitimate giveaway or promotion.
-
-Tips and tricks for Getting Over It
-
-If you have decided to play Getting Over It, whether you bought it or downloaded it for free, you may need some tips and tricks to help you survive this brutal game. Here are some suggestions:
-
-Practice makes perfect
-
-
-Use affirmations and positive thinking
-
-Another thing to do in Getting Over It is to use affirmations and positive thinking. The game can be very frustrating and demoralizing, especially when you lose a lot of progress or listen to Foddy's sarcastic comments. You may feel angry, hopeless, or depressed. To deal with these negative emotions, use affirmations and positive thinking. Remind yourself that you can do it, that you are improving, that you are learning, that you are having fun, and that you are not alone. Focus on the positive aspects of the game and your experience rather than the negative ones.
-
-Watch speedruns and guides
-
-One last thing to do in Getting Over It is to watch speedruns and guides. Speedruns are videos of players completing the game in the shortest time possible, using various techniques and strategies. Guides are videos of players explaining how to get past specific parts of the game, using tips and tricks. Watching speedruns and guides can help you learn from the experts and improve your own skills. You can also get inspired and motivated by seeing how others have conquered the game.
-
-Conclusion
-
-Getting Over It with Bennett Foddy is a game that will challenge you, frustrate you, and make you question your existence. It is a game that will test your patience, skill, and sanity. It is a game that will make you laugh, cry, scream, and rage. But it is also a game that will reward you, teach you, and inspire you. It is a game that will make you feel alive.
-
-If you want to play this game, you can buy it on the official platforms or wait for a free giveaway or promotion. You can also download it from unofficial sources, but be aware of the risks and drawbacks involved. And if you want to succeed at this game, you need to practice, use affirmations and positive thinking, and watch speedruns and guides.
-
-
-Frequently asked questions
-
-Here are some frequently asked questions about Getting Over It with Bennett Foddy:
-
-| Question | Answer |
-| --- | --- |
-| How long does it take to finish the game? | The length of the game depends on your skill level and luck. Some players have finished it in under 2 minutes, while others have spent hundreds of hours without reaching the end. |
-| What is the reward for finishing the game? | We won't spoil it, but let's just say there is a reward for finishing the game that is unique and exclusive to each player. |
-| Who is Bennett Foddy? | Bennett Foddy is an Australian game developer and philosopher who teaches at New York University. He is known for creating games that explore frustration, difficulty, and failure, such as QWOP, GIRP, CLOP, and Getting Over It. |
-| Who is Diogenes? | Diogenes is the name of the character you control in Getting Over It. He is named after Diogenes of Sinope, an ancient Greek philosopher who lived in a barrel and rejected conventional values. |
-| What is Sexy Hiking? | Sexy Hiking is a 2002 B-game by Jazzuo that inspired Getting Over It. It has a similar concept and gameplay of climbing a mountain with a hammer. |
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat/src/lib/types/AbortedGeneration.ts b/spaces/BetterAPI/BetterChat/src/lib/types/AbortedGeneration.ts
deleted file mode 100644
index fe4c2824b4f3257bea71c3acacd65fcee0918188..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/src/lib/types/AbortedGeneration.ts
+++ /dev/null
@@ -1,8 +0,0 @@
-// Ideally shouldn't be needed, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
-
-import type { Conversation } from "./Conversation";
-import type { Timestamps } from "./Timestamps";
-
-export interface AbortedGeneration extends Timestamps {
- conversationId: Conversation["_id"];
-}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/restdoc.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/restdoc.py
deleted file mode 100644
index d23fcf2825197f7deca64c062f5c4c6d76911608..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/bcdoc/restdoc.py
+++ /dev/null
@@ -1,282 +0,0 @@
-# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-import logging
-import os
-import re
-
-from botocore.compat import OrderedDict
-from botocore.docs.bcdoc.docstringparser import DocStringParser
-from botocore.docs.bcdoc.style import ReSTStyle
-
-DEFAULT_AWS_DOCS_LINK = 'https://docs.aws.amazon.com/index.html'
-DOCUMENTATION_LINK_REGEX = re.compile(
- r'`AWS API Documentation '
- r'<https://docs.aws.amazon.com/goto/WebAPI/[a-zA-Z0-9-.]*/[a-zA-Z]*>`_'
-)
-LARGE_SECTION_MESSAGE = """
-
- **{}**
- ::
-
- # This section is too large to render.
- # Please see the AWS API Documentation linked below.
-
- {}
- """
-LOG = logging.getLogger('bcdocs')
-SECTION_LINE_LIMIT_CONFIG = {
- 'response-example': {'name': 'Response Syntax', 'line_limit': 1500},
- 'description': {'name': 'Response Structure', 'line_limit': 5000},
- 'request-example': {'name': 'Request Syntax', 'line_limit': 1500},
- 'request-params': {'name': 'Parameters', 'line_limit': 5000},
-}
-SECTION_METHOD_PATH_DEPTH = {
- 'client-api': 4,
- 'paginator-api': 3,
- 'waiter-api': 3,
-}
-
-
-class ReSTDocument:
- def __init__(self, target='man'):
- self.style = ReSTStyle(self)
- self.target = target
- self.parser = DocStringParser(self)
- self.keep_data = True
- self.do_translation = False
- self.translation_map = {}
- self.hrefs = {}
- self._writes = []
- self._last_doc_string = None
-
- def _write(self, s):
- if self.keep_data and s is not None:
- self._writes.append(s)
-
- def write(self, content):
- """
- Write content into the document.
- """
- self._write(content)
-
- def writeln(self, content):
- """
- Write content on a newline.
- """
- self._write(f'{self.style.spaces()}{content}\n')
-
- def peek_write(self):
- """
- Returns the last content written to the document without
- removing it from the stack.
- """
- return self._writes[-1]
-
- def pop_write(self):
- """
- Removes and returns the last content written to the stack.
- """
- return self._writes.pop() if len(self._writes) > 0 else None
-
- def push_write(self, s):
- """
- Places new content on the stack.
- """
- self._writes.append(s)
-
- def getvalue(self):
- """
- Returns the current content of the document as a string.
- """
- if self.hrefs:
- self.style.new_paragraph()
- for refname, link in self.hrefs.items():
- self.style.link_target_definition(refname, link)
- return ''.join(self._writes).encode('utf-8')
-
- def translate_words(self, words):
- return [self.translation_map.get(w, w) for w in words]
-
- def handle_data(self, data):
- if data and self.keep_data:
- self._write(data)
-
- def include_doc_string(self, doc_string):
- if doc_string:
- try:
- start = len(self._writes)
- self.parser.feed(doc_string)
- self.parser.close()
- end = len(self._writes)
- self._last_doc_string = (start, end)
- except Exception:
- LOG.debug('Error parsing doc string', exc_info=True)
- LOG.debug(doc_string)
-
- def remove_last_doc_string(self):
- # Removes all writes inserted by last doc string
- if self._last_doc_string is not None:
- start, end = self._last_doc_string
- del self._writes[start:end]
-
-
-class DocumentStructure(ReSTDocument):
- def __init__(self, name, section_names=None, target='man', context=None):
- """Provides a Hierarichial structure to a ReSTDocument
-
- You can write to it similiar to as you can to a ReSTDocument but
- has an innate structure for more orginaztion and abstraction.
-
- :param name: The name of the document
- :param section_names: A list of sections to be included
- in the document.
- :param target: The target documentation of the Document structure
- :param context: A dictionary of data to store with the structure. These
- are only stored per section, not the entire structure.
- """
- super().__init__(target=target)
- self._name = name
- self._structure = OrderedDict()
- self._path = [self._name]
- self._context = {}
- if context is not None:
- self._context = context
- if section_names is not None:
- self._generate_structure(section_names)
-
- @property
- def name(self):
- """The name of the document structure"""
- return self._name
-
- @property
- def path(self):
- """
- A list of where to find a particular document structure in the
- overlying document structure.
- """
- return self._path
-
- @path.setter
- def path(self, value):
- self._path = value
-
- @property
- def available_sections(self):
- return list(self._structure)
-
- @property
- def context(self):
- return self._context
-
- def _generate_structure(self, section_names):
- for section_name in section_names:
- self.add_new_section(section_name)
-
- def add_new_section(self, name, context=None):
- """Adds a new section to the current document structure
-
- This document structure will be considered a section to the
- current document structure but will in itself be an entirely
- new document structure that can be written to and have sections
- as well
-
- :param name: The name of the section.
- :param context: A dictionary of data to store with the structure. These
- are only stored per section, not the entire structure.
- :rtype: DocumentStructure
- :returns: A new document structure to add to but lives as a section
- to the document structure it was instantiated from.
- """
- # Add a new section
- section = self.__class__(
- name=name, target=self.target, context=context
- )
- section.path = self.path + [name]
- # Indent the section appropriately as well
- section.style.indentation = self.style.indentation
- section.translation_map = self.translation_map
- section.hrefs = self.hrefs
- self._structure[name] = section
- return section
-
- def get_section(self, name):
- """Retrieve a section"""
- return self._structure[name]
-
- def delete_section(self, name):
- """Delete a section"""
- del self._structure[name]
-
- def flush_structure(self, docs_link=None):
- """Flushes a doc structure to a ReSTructed string
-
- The document is flushed out in a DFS style where sections and their
- subsections' values are added to the string as they are visited.
- """
- # We are at the root flush the links at the beginning of the
- # document
- path_length = len(self.path)
- if path_length == 1:
- if self.hrefs:
- self.style.new_paragraph()
- for refname, link in self.hrefs.items():
- self.style.link_target_definition(refname, link)
- # Clear docs_link at the correct depth to prevent passing a non-related link.
- elif path_length == SECTION_METHOD_PATH_DEPTH.get(self.path[1]):
- docs_link = None
- value = self.getvalue()
- for name, section in self._structure.items():
- # Checks if the AWS API Documentation link has been generated.
- # If it has been generated, it gets passed as the docs_link parameter.
- match = DOCUMENTATION_LINK_REGEX.search(value.decode())
- docs_link = (
- f'{match.group(0)}\n\n'.encode() if match else docs_link
- )
- value += section.flush_structure(docs_link)
-
- # Replace response/request sections if the line number exceeds our limit.
- # The section is replaced with a message linking to AWS API Documentation.
- line_count = len(value.splitlines())
- section_config = SECTION_LINE_LIMIT_CONFIG.get(self.name)
- aws_docs_link = (
- docs_link.decode()
- if docs_link is not None
- else DEFAULT_AWS_DOCS_LINK
- )
- if section_config and line_count > section_config['line_limit']:
- value = LARGE_SECTION_MESSAGE.format(
- section_config['name'], aws_docs_link
- ).encode()
- return value
-
- def getvalue(self):
- return ''.join(self._writes).encode('utf-8')
-
- def remove_all_sections(self):
- self._structure = OrderedDict()
-
- def clear_text(self):
- self._writes = []
-
- def add_title_section(self, title):
- title_section = self.add_new_section('title')
- title_section.style.h1(title)
- return title_section
-
- def write_to_file(self, full_path, file_name):
- if not os.path.exists(full_path):
- os.makedirs(full_path)
- sub_resource_file_path = os.path.join(full_path, f'{file_name}.rst')
- with open(sub_resource_file_path, 'wb') as f:
- f.write(self.flush_structure())
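
For context, a minimal sketch of the `DocumentStructure` API from the deleted `restdoc.py` above (botocore-internal; the service and section names are made up):

```python
from botocore.docs.bcdoc.restdoc import DocumentStructure

doc = DocumentStructure("example-service", section_names=["title", "description"])
doc.get_section("title").style.h1("Example Service")
doc.get_section("description").write("Describes the example operations.")

# Flush the nested sections depth-first into a single reST byte string
print(doc.flush_structure().decode("utf-8"))
```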
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py
deleted file mode 100644
index b8fb2154b6d0618b62281578e5e947bca487cee4..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/packages/backports/makefile.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-backports.makefile
-~~~~~~~~~~~~~~~~~~
-
-Backports the Python 3 ``socket.makefile`` method for use with anything that
-wants to create a "fake" socket object.
-"""
-import io
-from socket import SocketIO
-
-
-def backport_makefile(
- self, mode="r", buffering=None, encoding=None, errors=None, newline=None
-):
- """
- Backport of ``socket.makefile`` from Python 3.5.
- """
- if not set(mode) <= {"r", "w", "b"}:
- raise ValueError("invalid mode %r (only r, w, b allowed)" % (mode,))
- writing = "w" in mode
- reading = "r" in mode or not writing
- assert reading or writing
- binary = "b" in mode
- rawmode = ""
- if reading:
- rawmode += "r"
- if writing:
- rawmode += "w"
- raw = SocketIO(self, rawmode)
- self._makefile_refs += 1
- if buffering is None:
- buffering = -1
- if buffering < 0:
- buffering = io.DEFAULT_BUFFER_SIZE
- if buffering == 0:
- if not binary:
- raise ValueError("unbuffered streams must be binary")
- return raw
- if reading and writing:
- buffer = io.BufferedRWPair(raw, raw, buffering)
- elif reading:
- buffer = io.BufferedReader(raw, buffering)
- else:
- assert writing
- buffer = io.BufferedWriter(raw, buffering)
- if binary:
- return buffer
- text = io.TextIOWrapper(buffer, encoding, errors, newline)
- text.mode = mode
- return text
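
For context, the deleted backport above is normally wired in by assigning it as a method on a socket-like wrapper class, as urllib3's PyOpenSSL contrib module historically did; the wrapper below is a hypothetical stub shown only to illustrate the pattern:

```python
from pip._vendor.urllib3.packages.backports.makefile import backport_makefile

class WrappedSocket:
    """Socket-like TLS wrapper that lacks a Python 3 style makefile()."""
    # recv_into(), send(), close(), fileno(), _makefile_refs, ... would live here

# Attach the backport so callers can do wrapped_sock.makefile("rb"):
WrappedSocket.makefile = backport_makefile
```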
diff --git a/spaces/CVPR/WALT/mmdet/core/post_processing/bbox_nms.py b/spaces/CVPR/WALT/mmdet/core/post_processing/bbox_nms.py
deleted file mode 100644
index 966d3a6ac86637a6be90edc3aab9b6863fb87764..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/core/post_processing/bbox_nms.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import torch
-from mmcv.ops.nms import batched_nms
-
-from mmdet.core.bbox.iou_calculators import bbox_overlaps
-
-
-def multiclass_nms(multi_bboxes,
- multi_scores,
- score_thr,
- nms_cfg,
- max_num=-1,
- score_factors=None,
- return_inds=False):
- """NMS for multi-class bboxes.
-
- Args:
- multi_bboxes (Tensor): shape (n, #class*4) or (n, 4)
- multi_scores (Tensor): shape (n, #class), where the last column
- contains scores of the background class, but this will be ignored.
- score_thr (float): bbox threshold, bboxes with scores lower than it
- will not be considered.
- nms_cfg (dict): Config dict for NMS, e.g. the NMS type and IoU threshold.
- max_num (int, optional): if there are more than max_num bboxes after
- NMS, only top max_num will be kept. Default to -1.
- score_factors (Tensor, optional): The factors multiplied to scores
- before applying NMS. Default to None.
- return_inds (bool, optional): Whether return the indices of kept
- bboxes. Default to False.
-
- Returns:
- tuple: (bboxes, labels, indices (optional)), tensors of shape (k, 5),
- (k), and (k). Labels are 0-based.
- """
- num_classes = multi_scores.size(1) - 1
- # exclude background category
- if multi_bboxes.shape[1] > 4:
- bboxes = multi_bboxes.view(multi_scores.size(0), -1, 4)
- else:
- bboxes = multi_bboxes[:, None].expand(
- multi_scores.size(0), num_classes, 4)
-
- scores = multi_scores[:, :-1]
-
- labels = torch.arange(num_classes, dtype=torch.long)
- labels = labels.view(1, -1).expand_as(scores)
-
- bboxes = bboxes.reshape(-1, 4)
- scores = scores.reshape(-1)
- labels = labels.reshape(-1)
-
- if not torch.onnx.is_in_onnx_export():
- # NonZero not supported in TensorRT
- # remove low scoring boxes
- valid_mask = scores > score_thr
- # multiply score_factor after threshold to preserve more bboxes, improve
- # mAP by 1% for YOLOv3
- if score_factors is not None:
- # expand the shape to match original shape of score
- score_factors = score_factors.view(-1, 1).expand(
- multi_scores.size(0), num_classes)
- score_factors = score_factors.reshape(-1)
- scores = scores * score_factors
-
- if not torch.onnx.is_in_onnx_export():
- # NonZero not supported in TensorRT
- inds = valid_mask.nonzero(as_tuple=False).squeeze(1)
- bboxes, scores, labels = bboxes[inds], scores[inds], labels[inds]
- else:
- # TensorRT NMS plugin has invalid output filled with -1
- # add dummy data to make detection output correct.
- bboxes = torch.cat([bboxes, bboxes.new_zeros(1, 4)], dim=0)
- scores = torch.cat([scores, scores.new_zeros(1)], dim=0)
- labels = torch.cat([labels, labels.new_zeros(1)], dim=0)
-
- if bboxes.numel() == 0:
- if torch.onnx.is_in_onnx_export():
- raise RuntimeError('[ONNX Error] Can not record NMS '
- 'as it has not been executed this time')
- if return_inds:
- return bboxes, labels, inds
- else:
- return bboxes, labels
-
- dets, keep = batched_nms(bboxes, scores, labels, nms_cfg)
-
- if max_num > 0:
- dets = dets[:max_num]
- keep = keep[:max_num]
-
- if return_inds:
- return dets, labels[keep], keep
- else:
- return dets, labels[keep]
-
-
-def fast_nms(multi_bboxes,
- multi_scores,
- multi_coeffs,
- score_thr,
- iou_thr,
- top_k,
- max_num=-1):
- """Fast NMS in `YOLACT `_.
-
- Fast NMS allows already-removed detections to suppress other detections so
- that every instance can be decided to be kept or discarded in parallel,
- which is not possible in traditional NMS. This relaxation allows us to
- implement Fast NMS entirely in standard GPU-accelerated matrix operations.
-
- Args:
- multi_bboxes (Tensor): shape (n, #class*4) or (n, 4)
- multi_scores (Tensor): shape (n, #class+1), where the last column
- contains scores of the background class, but this will be ignored.
- multi_coeffs (Tensor): shape (n, #class*coeffs_dim).
- score_thr (float): bbox threshold, bboxes with scores lower than it
- will not be considered.
- iou_thr (float): IoU threshold to be considered as conflicted.
- top_k (int): if there are more than top_k bboxes before NMS,
- only top top_k will be kept.
- max_num (int): if there are more than max_num bboxes after NMS,
- only top max_num will be kept. If -1, keep all the bboxes.
- Default: -1.
-
- Returns:
- tuple: (bboxes, labels, coefficients), tensors of shape (k, 5), (k, 1),
- and (k, coeffs_dim). Labels are 0-based.
- """
-
- scores = multi_scores[:, :-1].t() # [#class, n]
- scores, idx = scores.sort(1, descending=True)
-
- idx = idx[:, :top_k].contiguous()
- scores = scores[:, :top_k] # [#class, topk]
- num_classes, num_dets = idx.size()
- boxes = multi_bboxes[idx.view(-1), :].view(num_classes, num_dets, 4)
- coeffs = multi_coeffs[idx.view(-1), :].view(num_classes, num_dets, -1)
-
- iou = bbox_overlaps(boxes, boxes) # [#class, topk, topk]
- iou.triu_(diagonal=1)
- iou_max, _ = iou.max(dim=1)
-
- # Now just filter out the ones higher than the threshold
- keep = iou_max <= iou_thr
-
- # Second thresholding introduces 0.2 mAP gain at negligible time cost
- keep *= scores > score_thr
-
- # Assign each kept detection to its corresponding class
- classes = torch.arange(
- num_classes, device=boxes.device)[:, None].expand_as(keep)
- classes = classes[keep]
-
- boxes = boxes[keep]
- coeffs = coeffs[keep]
- scores = scores[keep]
-
- # Only keep the top max_num highest scores across all classes
- scores, idx = scores.sort(0, descending=True)
- if max_num > 0:
- idx = idx[:max_num]
- scores = scores[:max_num]
-
- classes = classes[idx]
- boxes = boxes[idx]
- coeffs = coeffs[idx]
-
- cls_dets = torch.cat([boxes, scores[:, None]], dim=1)
- return cls_dets, classes, coeffs
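
For context, a minimal sketch of calling `multiclass_nms` from the deleted module above on dummy predictions (shapes and thresholds are made up; mmcv must be installed for `batched_nms`):

```python
import torch
from mmdet.core.post_processing.bbox_nms import multiclass_nms

num_boxes, num_classes = 100, 3
xy = torch.rand(num_boxes, 2) * 50
wh = torch.rand(num_boxes, 2) * 50
multi_bboxes = torch.cat([xy, xy + wh], dim=1)          # (n, 4) valid x1y1x2y2 boxes
multi_scores = torch.rand(num_boxes, num_classes + 1)   # last column is background

dets, labels = multiclass_nms(
    multi_bboxes,
    multi_scores,
    score_thr=0.05,
    nms_cfg=dict(type='nms', iou_threshold=0.5),
    max_num=50)
# dets: (k, 5) boxes with scores appended, labels: (k,) 0-based class indices
```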
diff --git a/spaces/Chukwuka/FoodVision-Model/app.py b/spaces/Chukwuka/FoodVision-Model/app.py
deleted file mode 100644
index 1d8bea89c8641693ef765a6377c7cd580a08351b..0000000000000000000000000000000000000000
--- a/spaces/Chukwuka/FoodVision-Model/app.py
+++ /dev/null
@@ -1,91 +0,0 @@
-
-### 1. Imports and class names setup ###
-import gradio as gr
-import os
-import torch
-import torchvision.transforms as T
-
-from model import create_effnet_b2
-from timeit import default_timer as timer
-from typing import Tuple, Dict
-
-# Setup class names
-class_names = ['pizza', 'steak', 'sushi']
-
-### 2. Model and transforms preparation ###
-test_tsfm = T.Compose([T.Resize((224,224)),
- T.ToTensor(),
- T.Normalize(mean=[0.485, 0.456, 0.406], # 3. A mean of [0.485, 0.456, 0.406] (across each colour channel)
- std=[0.229, 0.224, 0.225]) # 4. A standard deviation of [0.229, 0.224, 0.225] (across each colour channel),
- ])
-
-# Create EffNetB2 Model
-effnetb2, test_transform = create_effnet_b2(num_of_class=len(class_names),
- transform=test_tsfm,
- seed=42)
-
-# saved_path = 'demos\foodvision_mini\09_pretrained_effnetb2_feature_extractor_pizza_steak_sushi_20_percent.pth'
-saved_path = '07_effnetb2_data_50_percent_10_epochs.pth'
-
-print('Loading Model State Dictionary')
-# Load saved weights
-effnetb2.load_state_dict(
- torch.load(f=saved_path,
- map_location=torch.device('cpu'), # load to CPU
- )
- )
-
-print('Model Loaded ...')
-### 3. Predict function ###
-
-# Create predict function
-from typing import Tuple, Dict
-
-def predict(img) -> Tuple[Dict, float]:
- """Transforms and performs a prediction on img and returns prediction and time taken.
- """
- # Start the timer
- start_time = timer()
-
- # Transform the target image and add a batch dimension
- img = test_tsfm(img).unsqueeze(0)
-
- # Put model into evaluation mode and turn on inference mode
- effnetb2.eval()
- with torch.inference_mode():
- # Pass the transformed image through the model and turn the prediction logits into prediction probabilities
- pred_probs = torch.softmax(effnetb2(img), dim=1)
-
- # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter)
- pred_labels_and_probs = {class_names[i]: float(pred_probs[0][i]) for i in range(len(class_names))}
-
- # Calculate the prediction time
- pred_time = round(timer() - start_time, 5)
-
- # Return the prediction dictionary and prediction time
- return pred_labels_and_probs, pred_time
-
-### 4. Gradio App ###
-
-# Create title, description and article strings
-title= 'FoodVision Mini 🍕🥩🍣'
-description = "An EfficientNetB2 feature extractor computer vision model to classify images of food as pizza, steak or sushi."
-article = "
Created by Chukwuka [09. PyTorch Model Deployment] Tutorial by Mr. DBourke(https://www.learnpytorch.io/09_pytorch_model_deployment/).
"
-gr.Interface(
- gradio_inference,
- gr.inputs.Image(type="file", label="Input"),
- [gr.outputs.Image(type="file", label="Output GIF"),
- gr.outputs.Image(type="pil", label="Output Image")],
- title=title,
- description=description,
- article=article,
- examples=[
- ['city.jpg'],
- ['tower.jpg'],
- ]
- ).launch(enable_queue=True,cache_examples=True)
\ No newline at end of file
diff --git a/spaces/akhaliq/deeplab2/tensorflow_ops/python/kernel_tests/merge_semantic_and_instance_maps_op_test.py b/spaces/akhaliq/deeplab2/tensorflow_ops/python/kernel_tests/merge_semantic_and_instance_maps_op_test.py
deleted file mode 100644
index c38d21c06d7eb113dbce0023edd5d03b532e576d..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/tensorflow_ops/python/kernel_tests/merge_semantic_and_instance_maps_op_test.py
+++ /dev/null
@@ -1,242 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-r"""Tests for merge_semantic_and_instance_maps_op."""
-
-import numpy as np
-import tensorflow as tf
-from deeplab2.tensorflow_ops.python.ops import merge_semantic_and_instance_maps_op
-
-
-class MergeSemanticAndInstanceMapsOpTest(tf.test.TestCase):
-
- def testMergeSemanticAndInstanceMaps(self):
- """Test the op with 2 images."""
- batch = 2
- height = 4
- width = 4
-
- # Create the instance labels.
- instance_maps = np.zeros((batch, height, width), dtype=np.int32)
- instance_maps[0, :, :] = np.array([[0, 2, 1, 0], [0, 1, 1, 0], [2, 0, 1, 2],
- [0, 0, 1, 1]])
- instance_maps[1, :, :] = np.array([[1, 2, 3, 1], [0, 2, 1, 3], [0, 2, 2, 0],
- [3, 3, 2, 0]])
-
- # Create the semantic labels.
- # The instances with the instance label equal to 0 and 2 have the same
- # semantic label. The other instances all have different semantic labels.
- semantic_maps = np.zeros((batch, height, width), dtype=np.int32)
- # Instance 0 has 4 pixels predicted as 0 and 3 pixels predicted as 3.
- # Instance 1 has 6 pixels predicted as 1.
- # Instance 2 has 2 pixels predicted as 0 and 1 pixel predicted as 3.
- semantic_maps[0, :, :] = np.array([[0, 0, 1, 0], [0, 1, 1, 0], [3, 3, 1, 0],
- [3, 3, 1, 1]])
- # Instance 0 has 3 pixels predicted as 0 and 1 pixel predicted as 3.
- # Instance 1 has 3 pixels predicted as 1.
- # Instance 2 has 3 pixels predicted as 0 and 2 pixels predicted as 2.
- # Instance 3 has 1 pixel predicted as 0 and 3 pixels predicted as 2.
- semantic_maps[1, :, :] = np.array([[1, 0, 2, 1], [0, 0, 1, 2], [0, 2, 2, 3],
- [0, 2, 0, 0]])
-
- # Create the ID list for things.
- thing_ids = [0, 2]
-
- # Groundtruth semantic segmentation maps after majority voting.
- gt_semantic_maps = np.zeros((batch, height, width), dtype=np.int32)
- gt_semantic_maps[0, :, :] = np.array([[0, 0, 1, 0], [0, 1, 1, 0],
- [3, 3, 1, 0], [3, 3, 1, 1]])
- # Instance 2 takes semantic label 0 after majority voting.
- # Instance 3 takes semantic label 2 after majority voting.
- gt_semantic_maps[1, :, :] = np.array([[1, 0, 2, 1], [0, 0, 1, 2],
- [0, 0, 0, 3], [2, 2, 0, 0]])
- # Groundtruth instance segmentation maps.
- gt_instance_maps = np.zeros((batch, 2, height, width), dtype=np.int32)
-
- # There are two cases for gt_instance_maps in batch 1.
- # Case 1:
- # Instance 0 is re-assigned instance label 1.
- # Instance 2 is re-assigned instance label 2.
- gt_instance_maps[0, 0, :, :] = np.array([[1, 2, 0, 1], [1, 0, 0, 1],
- [0, 0, 0, 2], [0, 0, 0, 0]])
- # Case 2:
- # Instance 0 is re-assigned instance label 2.
- # Instance 2 is re-assigned instance label 1.
- gt_instance_maps[0, 1, :, :] = np.array([[2, 1, 0, 2], [2, 0, 0, 2],
- [0, 0, 0, 1], [0, 0, 0, 0]])
- # There are two cases for gt_instance_maps in batch 2.
- # Case 1:
- # Instance 0 is re-assigned instance label 1.
- # Instance 2 is re-assigned instance label 2.
- # Instance 3 is re-assigned instance label 1.
- gt_instance_maps[1, 0, :, :] = np.array([[0, 2, 1, 0], [1, 2, 0, 1],
- [1, 2, 2, 0], [1, 1, 2, 1]])
- # Case 2:
- # Instance 0 is re-assigned instance label 2.
- # Instance 2 is re-assigned instance label 1.
- # Instance 3 is re-assigned instance label 1.
- gt_instance_maps[1, 1, :, :] = np.array([[0, 1, 1, 0], [2, 1, 0, 1],
- [2, 1, 1, 0], [1, 1, 1, 2]])
- # Groundtruth parsing maps.
- label_divisor = 256
-
- # Run the op.
- parsing_maps = (
- merge_semantic_and_instance_maps_op.merge_semantic_and_instance_maps(
- semantic_maps,
- instance_maps,
- thing_ids,
- label_divisor=label_divisor))
- pass_test = False
- for i in range(2):
- for j in range(2):
- current_gt_instance_maps = np.stack(
- [gt_instance_maps[0, i, :, :], gt_instance_maps[1, j, :, :]],
- axis=0)
- gt_parsing_maps = (
- gt_semantic_maps * label_divisor + current_gt_instance_maps)
- if np.array_equal(parsing_maps, gt_parsing_maps):
- pass_test = True
- self.assertTrue(pass_test)
-
- def testMergeSemanticAndInstanceMapsWithStuffAreaLimit(self):
- batch = 1
- height = 4
- width = 4
-
- # Create the instance labels.
- instance_maps = np.zeros((batch, height, width), dtype=np.int32)
- instance_maps[0, :, :] = np.array([[0, 0, 0, 0],
- [0, 0, 1, 1],
- [0, 0, 0, 0],
- [0, 0, 0, 0]])
-
- # Create the semantic labels.
- semantic_maps = np.zeros((batch, height, width), dtype=np.int32)
- semantic_maps[0, :, :] = np.array([[0, 0, 0, 0],
- [0, 0, 1, 1],
- [0, 0, 2, 2],
- [0, 0, 2, 2]])
- thing_ids = [0, 2]
- stuff_area_limit = 3
- void_label = 3
- # Groundtruth semantic segmentation maps after majority voting.
- # Instance 0 takes semantic label 0.
- # Instance 1 is re-assigned with void label.
- gt_semantic_maps = np.zeros((batch, height, width), dtype=np.int32)
- gt_semantic_maps[0, :, :] = np.array([[0, 0, 0, 0],
- [0, 0, void_label, void_label],
- [0, 0, 0, 0],
- [0, 0, 0, 0]])
- # Groundtruth instance segmentation maps.
- gt_instance_maps = np.zeros((batch, height, width), dtype=np.int32)
- gt_instance_maps[0, :, :] = np.array([[1, 1, 1, 1],
- [1, 1, 0, 0],
- [1, 1, 1, 1],
- [1, 1, 1, 1]])
- label_divisor = 256
- gt_parsing_maps = gt_semantic_maps * label_divisor + gt_instance_maps
-
- # Run the op.
- parsing_maps = (
- merge_semantic_and_instance_maps_op.merge_semantic_and_instance_maps(
- semantic_maps,
- instance_maps,
- thing_ids,
- label_divisor=label_divisor,
- stuff_area_limit=stuff_area_limit,
- void_label=void_label))
- self.assertTrue(np.array_equal(parsing_maps, gt_parsing_maps))
-
-
-class MergeSemanticAndInstanceMapsOpGpuTest(MergeSemanticAndInstanceMapsOpTest):
-
- def session(self, use_gpu=True):
- return super(MergeSemanticAndInstanceMapsOpGpuTest,
- self).session(use_gpu=use_gpu)
-
- def testMergeSemanticAndInstanceMapsWithRandomInputs(self):
- batch = 1
- height = 1441
- width = 1441
- rng = np.random.RandomState(0)
- instance_maps = rng.randint(0, 255, (batch, height, width), dtype=np.int32)
- semantic_maps = rng.randint(0, 3, (batch, height, width), dtype=np.int32)
-
- thing_ids = [0, 2]
- stuff_area_limit = 400
- void_label = 3
- label_divisor = 256
-
- with self.session(use_gpu=False):
- parsing_maps_cpu = (
- merge_semantic_and_instance_maps_op.merge_semantic_and_instance_maps(
- semantic_maps,
- instance_maps,
- thing_ids,
- label_divisor=label_divisor,
- stuff_area_limit=stuff_area_limit,
- void_label=void_label))
- parsing_maps_cpu = parsing_maps_cpu.numpy()
-
- with self.session():
- parsing_maps_gpu = (
- merge_semantic_and_instance_maps_op.merge_semantic_and_instance_maps(
- semantic_maps,
- instance_maps,
- thing_ids,
- label_divisor=label_divisor,
- stuff_area_limit=stuff_area_limit,
- void_label=void_label))
- parsing_maps_gpu = parsing_maps_gpu.numpy()
-
- # Checks semantic maps are the same.
- semantic_maps_cpu = parsing_maps_cpu // label_divisor
- semantic_maps_gpu = parsing_maps_gpu // label_divisor
- np.testing.assert_array_equal(semantic_maps_cpu, semantic_maps_gpu)
-
- # Checks instance maps are the same, despite of label order.
- instance_maps_cpu = parsing_maps_cpu % label_divisor
- instance_maps_gpu = parsing_maps_gpu % label_divisor
-
- thing_labels_cpu = np.unique(semantic_maps_cpu[instance_maps_cpu > 0])
- for semantic_label in thing_labels_cpu:
- semantic_mask = semantic_maps_cpu == semantic_label
- instance_labels_cpu = np.unique(instance_maps_cpu[semantic_mask])
- instance_labels_gpu = np.unique(instance_maps_gpu[semantic_mask])
-
- self.assertEqual(len(instance_labels_cpu), len(instance_labels_gpu))
-
- # For each instance (cpu reference) of this semantic label, we check:
- # 1. Within this instance mask, GPU produces one and only one instance
- # label.
- # 2. GPU results with the current semantic and instance label matches
- # CPU instance mask.
- for instance_label in instance_labels_cpu:
- instance_mask_cpu = np.logical_and(
- instance_maps_cpu == instance_label, semantic_mask)
- instance_labels_gpu = set(instance_maps_gpu[instance_mask_cpu])
- self.assertLen(instance_labels_gpu, 1)
-
- instance_label_gpu = instance_labels_gpu.pop()
- # Here GPU must use the same semantic mask (given we have checked
- # semantic maps are the same).
- instance_mask_gpu = np.logical_and(
- instance_maps_gpu == instance_label_gpu, semantic_mask)
- np.testing.assert_array_equal(instance_mask_cpu, instance_mask_gpu)
-
-
-if __name__ == '__main__':
- tf.test.main()
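The deleted test above exercises a packed "parsing map" in which each pixel stores semantic_label * label_divisor + instance_id, and the CPU/GPU comparison recovers the two maps with integer division and modulo. A minimal NumPy sketch of that encoding; the array values here are illustrative and not taken from the test:

import numpy as np

label_divisor = 256
semantic = np.array([[0, 0], [2, 2]], dtype=np.int32)   # illustrative semantic labels
instance = np.array([[1, 1], [0, 2]], dtype=np.int32)   # instance ids, all < label_divisor

# Pack both maps into one panoptic "parsing" map.
parsing = semantic * label_divisor + instance

# Decode them back, as the test does when comparing CPU and GPU outputs.
assert np.array_equal(parsing // label_divisor, semantic)
assert np.array_equal(parsing % label_divisor, instance)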
diff --git a/spaces/akshayvkt/talk-To-SteveJobs/app.py b/spaces/akshayvkt/talk-To-SteveJobs/app.py
deleted file mode 100644
index 6ed19e5c1fc94b225101c286a4d725787989ca76..0000000000000000000000000000000000000000
--- a/spaces/akshayvkt/talk-To-SteveJobs/app.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import gradio as gr
-import openai
-import requests
-import json
-import os
-
-openai.api_key = os.environ.get('OPENAI_API_KEY')
-
-
-messages = [{"role": "system", "content": 'You are Steve Jobs. Respond to all input in 25 words or less.'}]
-
-# Set up the API endpoint URL and headers
-url = f"https://api.elevenlabs.io/v1/text-to-speech/{os.environ.get('voice_id')}/stream"
-headers = {
- "accept": "*/*",
- "xi-api-key": os.environ.get('elevenlabs_api_key'),
- "Content-Type": "application/json",
-}
-
-# Define a function to handle the Gradio input and generate the response
-def transcribe(audio):
- global messages
-
- # Use OpenAI to transcribe the user's audio input
- # API call 1
- audio_file = open(audio, "rb")
- transcript = openai.Audio.transcribe("whisper-1", audio_file)
-
- # Append the user's message to the message history
- messages.append({"role": "user", "content": transcript["text"]})
-
- # Generate a response using OpenAI's chat API
- #API call 2
- response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
-
- # Extract the system message from the API response and append it to the message history
- system_message = response["choices"][0]["message"]
- messages.append(system_message)
-
-
- #API Call 3
- # Use the voice synthesis API to generate an audio response from the system message
- data = {
- "text": system_message["content"],
- "voice_settings": {
- "stability": 0,
- "similarity_boost": 0
- }
- }
- response = requests.post(url, headers=headers, data=json.dumps(data), stream=True)
-
- # Save the audio response to a file
- if response.ok:
- with open("output.wav", "wb") as f:
- for chunk in response.iter_content(chunk_size=1024):
- f.write(chunk)
- else:
- print(f"Error: {response.status_code} - {response.reason}")
-
- # IPython.display.display(IPython.display.Audio('output.wav'))
-
- # Generate a chat transcript for display in the Gradio UI
- chat_transcript = ""
- for message in messages:
- if message['role'] != 'system':
- chat_transcript += message['role'] + ": " + message['content'] + "\n\n"
-
- return chat_transcript,'output.wav'
-
-# css = """
-# #col-container {max-width: 80%; margin-left: auto; margin-right: auto;}
-# #header {text-align: center;}
-# }
-# """
-
-# with gr.Blocks(css=css) as ui:
-
-
-# with gr.Column(elem_id="col-container"):
-# gr.Markdown("""## Talk to AI Steve Jobs: Audio-to-Text+Audio generation
-# Powered by ChatGPT + Whisper + ElevenLabs + HuggingFace
-#
-# """,
-# elem_id="header")
-
-# Define the Gradio UI interface
-# ui = gr.Interface(fn=transcribe, inputs=gr.Audio(source="microphone", type="filepath"), outputs="text")
-ui = gr.Interface(fn=transcribe, inputs=gr.Audio(source="microphone", type="filepath"), outputs=['text','audio'],title='Talk to AI Steve Jobs', description = """Click on Record from microphone and start speaking,
-and when you're done, click on Stop Recording. Then click on Submit. AI Steve will then answer your question. You can continue to ask follow-up questions by clicking on Clear, and then
-using Record from microphone -> Stop Recording -> Submit. AI Steve Jobs will also remember the previous questions and answers.""")
-ui.launch(debug=True)
diff --git a/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/17.html b/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/17.html
deleted file mode 100644
index 2988216ba8c52c0230b7feb1e52ae07f3d431230..0000000000000000000000000000000000000000
--- a/spaces/alexrame/rewardedsoups/streamlit_app/data/locomotion/trajectories/17.html
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-
- brax visualizer
-
-
-
-
-
-
-
-
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/paqa.c b/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/paqa.c
deleted file mode 100644
index 5eb628336263deb58540cda7f8401f30a720f799..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/paqa.c
+++ /dev/null
@@ -1,1601 +0,0 @@
-
-/*
- * PortAudio Portable Real-Time Audio Library
- * Latest Version at: http://www.portaudio.com
- *
- * Copyright (c) 1999-2010 Phil Burk and Ross Bencina
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-#include <math.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include "portaudio.h"
-
-#include "qa_tools.h"
-
-#include "paqa_tools.h"
-#include "audio_analyzer.h"
-#include "test_audio_analyzer.h"
-
-/** Accumulate counts for how many tests pass or fail. */
-int g_testsPassed = 0;
-int g_testsFailed = 0;
-
-#define MAX_NUM_GENERATORS (8)
-#define MAX_NUM_RECORDINGS (8)
-#define MAX_BACKGROUND_NOISE_RMS (0.0004)
-#define LOOPBACK_DETECTION_DURATION_SECONDS (0.8)
-#define DEFAULT_FRAMES_PER_BUFFER (0)
-#define PAQA_WAIT_STREAM_MSEC (100)
-#define PAQA_TEST_DURATION (1.2)
-
-// Use two separate streams instead of one full duplex stream.
-#define PAQA_FLAG_TWO_STREAMS (1<<0)
-// Use blocking read/write for loopback.
-#define PAQA_FLAG_USE_BLOCKING_IO (1<<1)
-
-const char * s_FlagOnNames[] =
-{
- "Two Streams (Half Duplex)",
- "Blocking Read/Write"
-};
-
-const char * s_FlagOffNames[] =
-{
- "One Stream (Full Duplex)",
- "Callback"
-};
-
-
-/** Parameters that describe a single test run. */
-typedef struct TestParameters_s
-{
- PaStreamParameters inputParameters;
- PaStreamParameters outputParameters;
- double sampleRate;
- int samplesPerFrame;
- int framesPerBuffer;
- int maxFrames;
- double baseFrequency;
- double amplitude;
- PaStreamFlags streamFlags; // paClipOff, etc
- int flags; // PAQA_FLAG_TWO_STREAMS, PAQA_FLAG_USE_BLOCKING_IO
-} TestParameters;
-
-typedef struct LoopbackContext_s
-{
- // Generate a unique signal on each channel.
- PaQaSineGenerator generators[MAX_NUM_GENERATORS];
- // Record each channel individually.
- PaQaRecording recordings[MAX_NUM_RECORDINGS];
-
- // Reported by the stream after it's opened
- PaTime streamInfoInputLatency;
- PaTime streamInfoOutputLatency;
-
- // Measured at runtime.
- volatile int callbackCount; // incremented for each callback
- volatile int inputBufferCount; // incremented if input buffer not NULL
- int inputUnderflowCount;
- int inputOverflowCount;
-
- volatile int outputBufferCount; // incremented if output buffer not NULL
- int outputOverflowCount;
- int outputUnderflowCount;
-
- // Measure whether input or output is lagging behind.
- volatile int minInputOutputDelta;
- volatile int maxInputOutputDelta;
-
- int minFramesPerBuffer;
- int maxFramesPerBuffer;
- int primingCount;
- TestParameters *test;
- volatile int done;
-} LoopbackContext;
-
-typedef struct UserOptions_s
-{
- int sampleRate;
- int framesPerBuffer;
- int inputLatency;
- int outputLatency;
- int saveBadWaves;
- int verbose;
- int waveFileCount;
- const char *waveFilePath;
- PaDeviceIndex inputDevice;
- PaDeviceIndex outputDevice;
-} UserOptions;
-
-#define BIG_BUFFER_SIZE (sizeof(float) * 2 * 2 * 1024)
-static unsigned char g_ReadWriteBuffer[BIG_BUFFER_SIZE];
-
-#define MAX_CONVERSION_SAMPLES (2 * 32 * 1024)
-#define CONVERSION_BUFFER_SIZE (sizeof(float) * 2 * MAX_CONVERSION_SAMPLES)
-static unsigned char g_ConversionBuffer[CONVERSION_BUFFER_SIZE];
-
-/*******************************************************************/
-static int RecordAndPlaySinesCallback( const void *inputBuffer, void *outputBuffer,
- unsigned long framesPerBuffer,
- const PaStreamCallbackTimeInfo* timeInfo,
- PaStreamCallbackFlags statusFlags,
- void *userData )
-{
- int i;
- LoopbackContext *loopbackContext = (LoopbackContext *) userData;
-
-
- loopbackContext->callbackCount += 1;
- if( statusFlags & paInputUnderflow ) loopbackContext->inputUnderflowCount += 1;
- if( statusFlags & paInputOverflow ) loopbackContext->inputOverflowCount += 1;
- if( statusFlags & paOutputUnderflow ) loopbackContext->outputUnderflowCount += 1;
- if( statusFlags & paOutputOverflow ) loopbackContext->outputOverflowCount += 1;
- if( statusFlags & paPrimingOutput ) loopbackContext->primingCount += 1;
- if( framesPerBuffer > loopbackContext->maxFramesPerBuffer )
- {
- loopbackContext->maxFramesPerBuffer = framesPerBuffer;
- }
- if( framesPerBuffer < loopbackContext->minFramesPerBuffer )
- {
- loopbackContext->minFramesPerBuffer = framesPerBuffer;
- }
-
- /* This may get called with NULL inputBuffer during initial setup.
- * We may also use the same callback with output only streams.
- */
- if( inputBuffer != NULL)
- {
- int channelsPerFrame = loopbackContext->test->inputParameters.channelCount;
- float *in = (float *)inputBuffer;
- PaSampleFormat inFormat = loopbackContext->test->inputParameters.sampleFormat;
-
- loopbackContext->inputBufferCount += 1;
-
- if( inFormat != paFloat32 )
- {
- int samplesToConvert = framesPerBuffer * channelsPerFrame;
- in = (float *) g_ConversionBuffer;
- if( samplesToConvert > MAX_CONVERSION_SAMPLES )
- {
- // Hack to prevent buffer overflow.
- // @todo Loop with small buffer instead of failing.
- printf("Format conversion buffer too small!\n");
- return paComplete;
- }
- PaQa_ConvertToFloat( inputBuffer, samplesToConvert, inFormat, (float *) g_ConversionBuffer );
- }
-
- // Read each channel from the buffer.
- for( i=0; i<channelsPerFrame; i++ )
- {
-     loopbackContext->done |= PaQa_WriteRecording( &loopbackContext->recordings[i],
- in + i,
- framesPerBuffer,
- channelsPerFrame );
- }
- }
-
- if( outputBuffer != NULL )
- {
- int channelsPerFrame = loopbackContext->test->outputParameters.channelCount;
- float *out = (float *)outputBuffer;
- PaSampleFormat outFormat = loopbackContext->test->outputParameters.sampleFormat;
-
- loopbackContext->outputBufferCount += 1;
-
- if( outFormat != paFloat32 )
- {
- // If we need to convert then mix to the g_ConversionBuffer and then convert into the PA outputBuffer.
- out = (float *) g_ConversionBuffer;
- }
-
- PaQa_EraseBuffer( out, framesPerBuffer, channelsPerFrame );
- for( i=0; i<channelsPerFrame; i++ )
- {
-     PaQa_MixSine( &loopbackContext->generators[i],
- out + i,
- framesPerBuffer,
- channelsPerFrame );
- }
-
- if( outFormat != paFloat32 )
- {
- int samplesToConvert = framesPerBuffer * channelsPerFrame;
- if( samplesToConvert > MAX_CONVERSION_SAMPLES )
- {
- printf("Format conversion buffer too small!\n");
- return paComplete;
- }
- PaQa_ConvertFromFloat( out, framesPerBuffer * channelsPerFrame, outFormat, outputBuffer );
- }
-
- }
-
- // Measure whether the input or output are lagging behind.
- // Don't measure lag at end.
- if( !loopbackContext->done )
- {
- int inputOutputDelta = loopbackContext->inputBufferCount - loopbackContext->outputBufferCount;
- if( loopbackContext->maxInputOutputDelta < inputOutputDelta )
- {
- loopbackContext->maxInputOutputDelta = inputOutputDelta;
- }
- if( loopbackContext->minInputOutputDelta > inputOutputDelta )
- {
- loopbackContext->minInputOutputDelta = inputOutputDelta;
- }
- }
-
- return loopbackContext->done ? paComplete : paContinue;
-}
-
-static void CopyStreamInfoToLoopbackContext( LoopbackContext *loopbackContext, PaStream *inputStream, PaStream *outputStream )
-{
- const PaStreamInfo *inputStreamInfo = Pa_GetStreamInfo( inputStream );
- const PaStreamInfo *outputStreamInfo = Pa_GetStreamInfo( outputStream );
-
- loopbackContext->streamInfoInputLatency = inputStreamInfo ? inputStreamInfo->inputLatency : -1;
- loopbackContext->streamInfoOutputLatency = outputStreamInfo ? outputStreamInfo->outputLatency : -1;
-}
-
-/*******************************************************************/
-/**
- * Open a full duplex audio stream.
- * Generate sine waves on the output channels and record the input channels.
- * Then close the stream.
- * @return 0 if OK or negative error.
- */
-int PaQa_RunLoopbackFullDuplex( LoopbackContext *loopbackContext )
-{
- PaStream *stream = NULL;
- PaError err = 0;
- TestParameters *test = loopbackContext->test;
- loopbackContext->done = 0;
- // Use one full duplex stream.
- err = Pa_OpenStream(
- &stream,
- &test->inputParameters,
- &test->outputParameters,
- test->sampleRate,
- test->framesPerBuffer,
- paClipOff, /* we won't output out of range samples so don't bother clipping them */
- RecordAndPlaySinesCallback,
- loopbackContext );
- if( err != paNoError ) goto error;
-
- CopyStreamInfoToLoopbackContext( loopbackContext, stream, stream );
-
- err = Pa_StartStream( stream );
- if( err != paNoError ) goto error;
-
- // Wait for stream to finish.
- while( loopbackContext->done == 0 )
- {
- Pa_Sleep(PAQA_WAIT_STREAM_MSEC);
- }
-
- err = Pa_StopStream( stream );
- if( err != paNoError ) goto error;
-
- err = Pa_CloseStream( stream );
- if( err != paNoError ) goto error;
-
- return 0;
-
-error:
- return err;
-}
-
-/*******************************************************************/
-/**
- * Wait for the loopback stream to mark itself done, or give up after a timeout.
- * @return 0 if OK or paTimedOut.
- */
-
-int PaQa_WaitForStream( LoopbackContext *loopbackContext )
-{
- int timeoutMSec = 1000 * PAQA_TEST_DURATION * 2;
-
- // Wait for stream to finish or timeout.
- while( (loopbackContext->done == 0) && (timeoutMSec > 0) )
- {
- Pa_Sleep(PAQA_WAIT_STREAM_MSEC);
- timeoutMSec -= PAQA_WAIT_STREAM_MSEC;
- }
-
- if( loopbackContext->done == 0 )
- {
- printf("ERROR - stream completion timed out!");
- return paTimedOut;
- }
- return 0;
-}
-
-/*******************************************************************/
-/**
- * Open two audio streams, one for input and one for output.
- * Generate sine waves on the output channels and record the input channels.
- * Then close the stream.
- * @return 0 if OK or negative error.
- */
-int PaQa_RunLoopbackHalfDuplex( LoopbackContext *loopbackContext )
-{
- PaStream *inStream = NULL;
- PaStream *outStream = NULL;
- PaError err = 0;
- int timedOut = 0;
- TestParameters *test = loopbackContext->test;
- loopbackContext->done = 0;
-
- // Use two half duplex streams.
- err = Pa_OpenStream(
- &inStream,
- &test->inputParameters,
- NULL,
- test->sampleRate,
- test->framesPerBuffer,
- test->streamFlags,
- RecordAndPlaySinesCallback,
- loopbackContext );
- if( err != paNoError ) goto error;
- err = Pa_OpenStream(
- &outStream,
- NULL,
- &test->outputParameters,
- test->sampleRate,
- test->framesPerBuffer,
- test->streamFlags,
- RecordAndPlaySinesCallback,
- loopbackContext );
- if( err != paNoError ) goto error;
-
- CopyStreamInfoToLoopbackContext( loopbackContext, inStream, outStream );
-
- err = Pa_StartStream( inStream );
- if( err != paNoError ) goto error;
-
- // Start output later so we catch the beginning of the waveform.
- err = Pa_StartStream( outStream );
- if( err != paNoError ) goto error;
-
- timedOut = PaQa_WaitForStream( loopbackContext );
-
- err = Pa_StopStream( inStream );
- if( err != paNoError ) goto error;
-
- err = Pa_StopStream( outStream );
- if( err != paNoError ) goto error;
-
- err = Pa_CloseStream( inStream );
- if( err != paNoError ) goto error;
-
- err = Pa_CloseStream( outStream );
- if( err != paNoError ) goto error;
-
- return timedOut;
-
-error:
- return err;
-}
-
-
-/*******************************************************************/
-/**
- * Open one audio stream, just for input.
- * Record background level.
- * Then close the stream.
- * @return 0 if OK or negative error.
- */
-int PaQa_RunInputOnly( LoopbackContext *loopbackContext )
-{
- PaStream *inStream = NULL;
- PaError err = 0;
- int timedOut = 0;
- TestParameters *test = loopbackContext->test;
- loopbackContext->done = 0;
-
- // Just open an input stream.
- err = Pa_OpenStream(
- &inStream,
- &test->inputParameters,
- NULL,
- test->sampleRate,
- test->framesPerBuffer,
- paClipOff, /* We won't output out of range samples so don't bother clipping them. */
- RecordAndPlaySinesCallback,
- loopbackContext );
- if( err != paNoError ) goto error;
-
- err = Pa_StartStream( inStream );
- if( err != paNoError ) goto error;
-
- timedOut = PaQa_WaitForStream( loopbackContext );
-
- err = Pa_StopStream( inStream );
- if( err != paNoError ) goto error;
-
- err = Pa_CloseStream( inStream );
- if( err != paNoError ) goto error;
-
- return timedOut;
-
-error:
- return err;
-}
-
-/*******************************************************************/
-static int RecordAndPlayBlockingIO( PaStream *inStream,
- PaStream *outStream,
- LoopbackContext *loopbackContext
- )
-{
- int i;
- float *in = (float *)g_ReadWriteBuffer;
- float *out = (float *)g_ReadWriteBuffer;
- PaError err;
- int done = 0;
- long available;
- const long maxPerBuffer = 64;
- TestParameters *test = loopbackContext->test;
- long framesPerBuffer = test->framesPerBuffer;
- if( framesPerBuffer <= 0 )
- {
- framesPerBuffer = maxPerBuffer; // bigger values might run past end of recording
- }
-
- // Read in audio.
- err = Pa_ReadStream( inStream, in, framesPerBuffer );
- // Ignore an overflow on the first read.
- //if( !((loopbackContext->callbackCount == 0) && (err == paInputOverflowed)) )
- if( err != paInputOverflowed )
- {
- QA_ASSERT_EQUALS( "Pa_ReadStream failed", paNoError, err );
- }
- else
- {
- loopbackContext->inputOverflowCount += 1;
- }
-
-
- // Save in a recording.
- for( i=0; i<loopbackContext->test->inputParameters.channelCount; i++ )
- {
- done |= PaQa_WriteRecording( &loopbackContext->recordings[i],
- in + i,
- framesPerBuffer,
- loopbackContext->test->inputParameters.channelCount );
- }
-
- // Synthesize audio.
- available = Pa_GetStreamWriteAvailable( outStream );
- if( available > (2*framesPerBuffer) ) available = (2*framesPerBuffer);
- PaQa_EraseBuffer( out, available, loopbackContext->test->outputParameters.channelCount );
- for( i=0; i<loopbackContext->test->outputParameters.channelCount; i++ )
- {
- PaQa_MixSine( &loopbackContext->generators[i],
- out + i,
- available,
- loopbackContext->test->outputParameters.channelCount );
- }
-
- // Write out audio.
- err = Pa_WriteStream( outStream, out, available );
- // Ignore an underflow on the first write.
- //if( !((loopbackContext->callbackCount == 0) && (err == paOutputUnderflowed)) )
- if( err != paOutputUnderflowed )
- {
- QA_ASSERT_EQUALS( "Pa_WriteStream failed", paNoError, err );
- }
- else
- {
- loopbackContext->outputUnderflowCount += 1;
- }
-
-
- loopbackContext->callbackCount += 1;
-
- return done;
-error:
- return err;
-}
-
-
-/*******************************************************************/
-/**
- * Open two audio streams using blocking read/write I/O.
- * Generate sine waves on the output channels and record the input channels.
- * Then close the stream.
- * @return 0 if OK or negative error.
- */
-int PaQa_RunLoopbackHalfDuplexBlockingIO( LoopbackContext *loopbackContext )
-{
- PaStream *inStream = NULL;
- PaStream *outStream = NULL;
- PaError err = 0;
- TestParameters *test = loopbackContext->test;
-
- // Use two half duplex streams.
- err = Pa_OpenStream(
- &inStream,
- &test->inputParameters,
- NULL,
- test->sampleRate,
- test->framesPerBuffer,
- paClipOff, /* we won't output out of range samples so don't bother clipping them */
- NULL, // no callback, so PortAudio uses blocking read/write I/O
- NULL );
- if( err != paNoError ) goto error1;
- err = Pa_OpenStream(
- &outStream,
- NULL,
- &test->outputParameters,
- test->sampleRate,
- test->framesPerBuffer,
- paClipOff, /* we won't output out of range samples so don't bother clipping them */
- NULL, // no callback, so PortAudio uses blocking read/write I/O
- NULL );
- if( err != paNoError ) goto error2;
-
- CopyStreamInfoToLoopbackContext( loopbackContext, inStream, outStream );
-
- err = Pa_StartStream( outStream );
- if( err != paNoError ) goto error3;
-
- err = Pa_StartStream( inStream );
- if( err != paNoError ) goto error3;
-
- while( err == 0 )
- {
- err = RecordAndPlayBlockingIO( inStream, outStream, loopbackContext );
- if( err < 0 ) goto error3;
- }
-
- err = Pa_StopStream( inStream );
- if( err != paNoError ) goto error3;
-
- err = Pa_StopStream( outStream );
- if( err != paNoError ) goto error3;
-
- err = Pa_CloseStream( outStream );
- if( err != paNoError ) goto error2;
-
- err = Pa_CloseStream( inStream );
- if( err != paNoError ) goto error1;
-
-
- return 0;
-
-error3:
- Pa_CloseStream( outStream );
-error2:
- Pa_CloseStream( inStream );
-error1:
- return err;
-}
-
-
-/*******************************************************************/
-/**
- * Open one audio stream using blocking read/write I/O.
- * Generate sine waves on the output channels and record the input channels.
- * Then close the stream.
- * @return 0 if OK or negative error.
- */
-int PaQa_RunLoopbackFullDuplexBlockingIO( LoopbackContext *loopbackContext )
-{
- PaStream *stream = NULL;
- PaError err = 0;
- TestParameters *test = loopbackContext->test;
-
- // Use one full duplex stream.
- err = Pa_OpenStream(
- &stream,
- &test->inputParameters,
- &test->outputParameters,
- test->sampleRate,
- test->framesPerBuffer,
- paClipOff, /* we won't output out of range samples so don't bother clipping them */
- NULL, // no callback, so PortAudio uses blocking read/write I/O
- NULL );
- if( err != paNoError ) goto error1;
-
- CopyStreamInfoToLoopbackContext( loopbackContext, stream, stream );
-
- err = Pa_StartStream( stream );
- if( err != paNoError ) goto error2;
-
- while( err == 0 )
- {
- err = RecordAndPlayBlockingIO( stream, stream, loopbackContext );
- if( err < 0 ) goto error2;
- }
-
- err = Pa_StopStream( stream );
- if( err != paNoError ) goto error2;
-
-
- err = Pa_CloseStream( stream );
- if( err != paNoError ) goto error1;
-
-
- return 0;
-
-error2:
- Pa_CloseStream( stream );
-error1:
- return err;
-}
-
-
-/*******************************************************************/
-/**
- * Run some kind of loopback test.
- * @return 0 if OK or negative error.
- */
-int PaQa_RunLoopback( LoopbackContext *loopbackContext )
-{
- PaError err = 0;
- TestParameters *test = loopbackContext->test;
-
-
- if( test->flags & PAQA_FLAG_TWO_STREAMS )
- {
- if( test->flags & PAQA_FLAG_USE_BLOCKING_IO )
- {
- err = PaQa_RunLoopbackHalfDuplexBlockingIO( loopbackContext );
- }
- else
- {
- err = PaQa_RunLoopbackHalfDuplex( loopbackContext );
- }
- }
- else
- {
- if( test->flags & PAQA_FLAG_USE_BLOCKING_IO )
- {
- err = PaQa_RunLoopbackFullDuplexBlockingIO( loopbackContext );
- }
- else
- {
- err = PaQa_RunLoopbackFullDuplex( loopbackContext );
- }
- }
-
- if( err != paNoError )
- {
- printf("PortAudio error = %s\n", Pa_GetErrorText( err ) );
- }
- return err;
-}
-
-/*******************************************************************/
-static int PaQa_SaveTestResultToWaveFile( UserOptions *userOptions, PaQaRecording *recording )
-{
- if( userOptions->saveBadWaves )
- {
- char filename[256];
-#ifdef WIN32
- _snprintf( filename, sizeof(filename), "%s\\paloopback_%d.wav", userOptions->waveFilePath, userOptions->waveFileCount++ );
-#else
- snprintf( filename, sizeof(filename), "%s/paloopback_%d.wav", userOptions->waveFilePath, userOptions->waveFileCount++ );
-#endif
- printf( "\"%s\", ", filename );
- return PaQa_SaveRecordingToWaveFile( recording, filename );
- }
- return 0;
-}
-
-/*******************************************************************/
-static int PaQa_SetupLoopbackContext( LoopbackContext *loopbackContextPtr, TestParameters *testParams )
-{
- int i;
- // Setup loopback context.
- memset( loopbackContextPtr, 0, sizeof(LoopbackContext) );
- loopbackContextPtr->test = testParams;
- for( i=0; i<testParams->samplesPerFrame; i++ )
- {
- int err = PaQa_InitializeRecording( &loopbackContextPtr->recordings[i], testParams->maxFrames, testParams->sampleRate );
- QA_ASSERT_EQUALS( "PaQa_InitializeRecording failed", paNoError, err );
- }
- for( i=0; i<testParams->samplesPerFrame; i++ )
- {
- PaQa_SetupSineGenerator( &loopbackContextPtr->generators[i], PaQa_GetNthFrequency( testParams->baseFrequency, i ),
- testParams->amplitude, testParams->sampleRate );
- }
- loopbackContextPtr->minFramesPerBuffer = 0x0FFFFFFF;
- return 0;
-error:
- return -1;
-}
-
-/*******************************************************************/
-static void PaQa_TeardownLoopbackContext( LoopbackContext *loopbackContextPtr )
-{
- int i;
- if( loopbackContextPtr->test != NULL )
- {
- for( i=0; i<loopbackContextPtr->test->samplesPerFrame; i++ )
- {
- PaQa_TerminateRecording( &loopbackContextPtr->recordings[i] );
- }
- }
-}
-
-/*******************************************************************/
-static void PaQa_PrintShortErrorReport( PaQaAnalysisResult *analysisResultPtr, int channel )
-{
- printf("channel %d ", channel);
- if( analysisResultPtr->popPosition > 0 )
- {
- printf("POP %0.3f at %d, ", (double)analysisResultPtr->popAmplitude, (int)analysisResultPtr->popPosition );
- }
- else
- {
- if( analysisResultPtr->addedFramesPosition > 0 )
- {
- printf("ADD %d at %d ", (int)analysisResultPtr->numAddedFrames, (int)analysisResultPtr->addedFramesPosition );
- }
-
- if( analysisResultPtr->droppedFramesPosition > 0 )
- {
- printf("DROP %d at %d ", (int)analysisResultPtr->numDroppedFrames, (int)analysisResultPtr->droppedFramesPosition );
- }
- }
-}
-
-/*******************************************************************/
-static void PaQa_PrintFullErrorReport( PaQaAnalysisResult *analysisResultPtr, int channel )
-{
- printf("\n=== Loopback Analysis ===================\n");
- printf(" channel: %d\n", channel );
- printf(" latency: %10.3f\n", analysisResultPtr->latency );
- printf(" amplitudeRatio: %10.3f\n", (double)analysisResultPtr->amplitudeRatio );
- printf(" popPosition: %10.3f\n", (double)analysisResultPtr->popPosition );
- printf(" popAmplitude: %10.3f\n", (double)analysisResultPtr->popAmplitude );
- printf(" num added frames: %10.3f\n", analysisResultPtr->numAddedFrames );
- printf(" added frames at: %10.3f\n", analysisResultPtr->addedFramesPosition );
- printf(" num dropped frames: %10.3f\n", analysisResultPtr->numDroppedFrames );
- printf(" dropped frames at: %10.3f\n", analysisResultPtr->droppedFramesPosition );
-}
-
-/*******************************************************************/
-/**
- * Test loopback connection using the given parameters.
- * @return number of channels with glitches, or negative error.
- */
-static int PaQa_SingleLoopBackTest( UserOptions *userOptions, TestParameters *testParams )
-{
- int i;
- LoopbackContext loopbackContext;
- PaError err = paNoError;
- PaQaTestTone testTone;
- PaQaAnalysisResult analysisResult;
- int numBadChannels = 0;
-
- printf("| %5d | %6d | ", ((int)(testParams->sampleRate+0.5)), testParams->framesPerBuffer );
- fflush(stdout);
-
- testTone.samplesPerFrame = testParams->samplesPerFrame;
- testTone.sampleRate = testParams->sampleRate;
- testTone.amplitude = testParams->amplitude;
- testTone.startDelay = 0;
-
- err = PaQa_SetupLoopbackContext( &loopbackContext, testParams );
- if( err ) return err;
-
- err = PaQa_RunLoopback( &loopbackContext );
- QA_ASSERT_TRUE("loopback did not run", (loopbackContext.callbackCount > 1) );
-
- printf( "%7.2f %7.2f %7.2f | ",
- loopbackContext.streamInfoInputLatency * 1000.0,
- loopbackContext.streamInfoOutputLatency * 1000.0,
- (loopbackContext.streamInfoInputLatency + loopbackContext.streamInfoOutputLatency) * 1000.0
- );
-
- printf( "%4d/%4d/%4d, %4d/%4d/%4d | ",
- loopbackContext.inputOverflowCount,
- loopbackContext.inputUnderflowCount,
- loopbackContext.inputBufferCount,
- loopbackContext.outputOverflowCount,
- loopbackContext.outputUnderflowCount,
- loopbackContext.outputBufferCount
- );
-
- // Analyse recording to detect glitches.
- for( i=0; i<testParams->samplesPerFrame; i++ )
- {
- double freq = PaQa_GetNthFrequency( testParams->baseFrequency, i );
- testTone.frequency = freq;
-
- PaQa_AnalyseRecording( &loopbackContext.recordings[i], &testTone, &analysisResult );
-
- if( i==0 )
- {
- double latencyMSec;
-
- printf( "%4d-%4d | ",
- loopbackContext.minFramesPerBuffer,
- loopbackContext.maxFramesPerBuffer
- );
-
- latencyMSec = 1000.0 * analysisResult.latency / testParams->sampleRate;
- printf("%7.2f | ", latencyMSec );
-
- }
-
- if( analysisResult.valid )
- {
- int badChannel = ( (analysisResult.popPosition > 0)
- || (analysisResult.addedFramesPosition > 0)
- || (analysisResult.droppedFramesPosition > 0) );
-
- if( badChannel )
- {
- if( userOptions->verbose )
- {
- PaQa_PrintFullErrorReport( &analysisResult, i );
- }
- else
- {
- PaQa_PrintShortErrorReport( &analysisResult, i );
- }
- PaQa_SaveTestResultToWaveFile( userOptions, &loopbackContext.recordings[i] );
- }
- numBadChannels += badChannel;
- }
- else
- {
- printf( "[%d] No or low signal, ampRatio = %f", i, analysisResult.amplitudeRatio );
- numBadChannels += 1;
- }
-
- }
- if( numBadChannels == 0 )
- {
- printf( "OK" );
- }
-
- // Print the # errors so far to make it easier to see where the error occurred.
- printf( " - #errs = %d\n", g_testsFailed );
-
- PaQa_TeardownLoopbackContext( &loopbackContext );
- if( numBadChannels > 0 )
- {
- g_testsFailed += 1;
- }
- return numBadChannels;
-
-error:
- PaQa_TeardownLoopbackContext( &loopbackContext );
- printf( "\n" );
- g_testsFailed += 1;
- return err;
-}
-
-/*******************************************************************/
-static void PaQa_SetDefaultTestParameters( TestParameters *testParamsPtr, PaDeviceIndex inputDevice, PaDeviceIndex outputDevice )
-{
- memset( testParamsPtr, 0, sizeof(TestParameters) );
-
- testParamsPtr->samplesPerFrame = 2;
- testParamsPtr->amplitude = 0.5;
- testParamsPtr->sampleRate = 44100;
- testParamsPtr->maxFrames = (int) (PAQA_TEST_DURATION * testParamsPtr->sampleRate);
- testParamsPtr->framesPerBuffer = DEFAULT_FRAMES_PER_BUFFER;
- testParamsPtr->baseFrequency = 200.0;
- testParamsPtr->flags = PAQA_FLAG_TWO_STREAMS;
- testParamsPtr->streamFlags = paClipOff; /* we won't output out of range samples so don't bother clipping them */
-
- testParamsPtr->inputParameters.device = inputDevice;
- testParamsPtr->inputParameters.sampleFormat = paFloat32;
- testParamsPtr->inputParameters.channelCount = testParamsPtr->samplesPerFrame;
- testParamsPtr->inputParameters.suggestedLatency = Pa_GetDeviceInfo( inputDevice )->defaultLowInputLatency;
- //testParamsPtr->inputParameters.suggestedLatency = Pa_GetDeviceInfo( inputDevice )->defaultHighInputLatency;
-
- testParamsPtr->outputParameters.device = outputDevice;
- testParamsPtr->outputParameters.sampleFormat = paFloat32;
- testParamsPtr->outputParameters.channelCount = testParamsPtr->samplesPerFrame;
- testParamsPtr->outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputDevice )->defaultLowOutputLatency;
- //testParamsPtr->outputParameters.suggestedLatency = Pa_GetDeviceInfo( outputDevice )->defaultHighOutputLatency;
-}
-
-/*******************************************************************/
-static void PaQa_OverrideTestParameters( TestParameters *testParamsPtr, UserOptions *userOptions )
-{
- // Check to see if a specific value was requested.
- if( userOptions->sampleRate >= 0 )
- {
- testParamsPtr->sampleRate = userOptions->sampleRate;
- testParamsPtr->maxFrames = (int) (PAQA_TEST_DURATION * testParamsPtr->sampleRate);
- }
- if( userOptions->framesPerBuffer >= 0 )
- {
- testParamsPtr->framesPerBuffer = userOptions->framesPerBuffer;
- }
- if( userOptions->inputLatency >= 0 )
- {
- testParamsPtr->inputParameters.suggestedLatency = userOptions->inputLatency * 0.001;
- }
- if( userOptions->outputLatency >= 0 )
- {
- testParamsPtr->outputParameters.suggestedLatency = userOptions->outputLatency * 0.001;
- }
- printf( " Running with suggested latency (msec): input = %5.2f, out = %5.2f\n",
- (testParamsPtr->inputParameters.suggestedLatency * 1000.0),
- (testParamsPtr->outputParameters.suggestedLatency * 1000.0) );
-}
-
-/*******************************************************************/
-/**
- * Run a series of tests on this loopback connection.
- * @return number of bad channel results
- */
-static int PaQa_AnalyzeLoopbackConnection( UserOptions *userOptions, PaDeviceIndex inputDevice, PaDeviceIndex outputDevice )
-{
- int iFlags;
- int iRate;
- int iSize;
- int iFormat;
- int savedValue;
- TestParameters testParams;
- const PaDeviceInfo *inputDeviceInfo = Pa_GetDeviceInfo( inputDevice );
- const PaDeviceInfo *outputDeviceInfo = Pa_GetDeviceInfo( outputDevice );
- int totalBadChannels = 0;
-
- // test half duplex first because it is more likely to work.
- int flagSettings[] = { PAQA_FLAG_TWO_STREAMS, 0 };
- int numFlagSettings = (sizeof(flagSettings)/sizeof(int));
-
- double sampleRates[] = { 8000.0, 11025.0, 16000.0, 22050.0, 32000.0, 44100.0, 48000.0, 96000.0 };
- int numRates = (sizeof(sampleRates)/sizeof(double));
-
- // framesPerBuffer==0 means PA decides on the buffer size.
- int framesPerBuffers[] = { 0, 16, 32, 40, 64, 100, 128, 256, 512, 1024 };
- int numBufferSizes = (sizeof(framesPerBuffers)/sizeof(int));
-
- PaSampleFormat sampleFormats[] = { paFloat32, paUInt8, paInt8, paInt16, paInt32 };
- const char *sampleFormatNames[] = { "paFloat32", "paUInt8", "paInt8", "paInt16", "paInt32" };
- int numSampleFormats = (sizeof(sampleFormats)/sizeof(PaSampleFormat));
-
- printf( "=============== Analysing Loopback %d to %d =====================\n", outputDevice, inputDevice );
- printf( " Devices: %s => %s\n", outputDeviceInfo->name, inputDeviceInfo->name);
-
- PaQa_SetDefaultTestParameters( &testParams, inputDevice, outputDevice );
-
- PaQa_OverrideTestParameters( &testParams, userOptions );
-
- // Loop through combinations of audio parameters.
- for( iFlags=0; iFlagssampleRate < 0 )
- {
- savedValue = testParams.sampleRate;
- for( iRate=0; iRateframesPerBuffer < 0 )
- {
- savedValue = testParams.framesPerBuffer;
- for( iSize=0; iSizetest;
-
- // Start in the middle assuming past latency.
- int startFrame = testParamsPtr->maxFrames/2;
- int numFrames = testParamsPtr->maxFrames/2;
-
- // Check to see if the signal is clipped.
- double amplitudeLeft = PaQa_MeasureSineAmplitudeBySlope( &loopbackContextPtr->recordings[0],
- testParamsPtr->baseFrequency, testParamsPtr->sampleRate,
- startFrame, numFrames );
- double gainLeft = amplitudeLeft / testParamsPtr->amplitude;
- double amplitudeRight = PaQa_MeasureSineAmplitudeBySlope( &loopbackContextPtr->recordings[1],
- testParamsPtr->baseFrequency, testParamsPtr->sampleRate,
- startFrame, numFrames );
- double gainRight = amplitudeRight / testParamsPtr->amplitude;
- printf(" Loop gain: left = %f, right = %f\n", gainLeft, gainRight );
-
- if( (amplitudeLeft > 1.0 ) || (amplitudeRight > 1.0) )
- {
- printf("ERROR - loop gain is too high. Should be around 1.0. Please lower output level and/or input gain.\n" );
- clipped = 1;
- }
- return clipped;
-}
-
-/*******************************************************************/
-int PaQa_MeasureBackgroundNoise( LoopbackContext *loopbackContextPtr, double *rmsPtr )
-{
- int result = 0;
- *rmsPtr = 0.0;
- // Rewind so we can record some input.
- loopbackContextPtr->recordings[0].numFrames = 0;
- loopbackContextPtr->recordings[1].numFrames = 0;
- result = PaQa_RunInputOnly( loopbackContextPtr );
- if( result == 0 )
- {
- double leftRMS = PaQa_MeasureRootMeanSquare( loopbackContextPtr->recordings[0].buffer,
- loopbackContextPtr->recordings[0].numFrames );
- double rightRMS = PaQa_MeasureRootMeanSquare( loopbackContextPtr->recordings[1].buffer,
- loopbackContextPtr->recordings[1].numFrames );
- *rmsPtr = (leftRMS + rightRMS) / 2.0;
- }
- return result;
-}
-
-/*******************************************************************/
-/**
- * Output a sine wave then try to detect it on input.
- *
- * @return 1 if loopback connected, 0 if not, or negative error.
- */
-int PaQa_CheckForLoopBack( UserOptions *userOptions, PaDeviceIndex inputDevice, PaDeviceIndex outputDevice )
-{
- TestParameters testParams;
- LoopbackContext loopbackContext;
- const PaDeviceInfo *inputDeviceInfo;
- const PaDeviceInfo *outputDeviceInfo;
- PaError err = paNoError;
- double minAmplitude;
- int loopbackIsConnected;
- int startFrame, numFrames;
- double magLeft, magRight;
-
- inputDeviceInfo = Pa_GetDeviceInfo( inputDevice );
- if( inputDeviceInfo == NULL )
- {
- printf("ERROR - Pa_GetDeviceInfo for input returned NULL.\n");
- return paInvalidDevice;
- }
- if( inputDeviceInfo->maxInputChannels < 2 )
- {
- return 0;
- }
-
- outputDeviceInfo = Pa_GetDeviceInfo( outputDevice );
- if( outputDeviceInfo == NULL )
- {
- printf("ERROR - Pa_GetDeviceInfo for output returned NULL.\n");
- return paInvalidDevice;
- }
- if( outputDeviceInfo->maxOutputChannels < 2 )
- {
- return 0;
- }
-
- printf( "Look for loopback cable between \"%s\" => \"%s\"\n", outputDeviceInfo->name, inputDeviceInfo->name);
-
- printf( " Default suggested input latency (msec): low = %5.2f, high = %5.2f\n",
- (inputDeviceInfo->defaultLowInputLatency * 1000.0),
- (inputDeviceInfo->defaultHighInputLatency * 1000.0) );
- printf( " Default suggested output latency (msec): low = %5.2f, high = %5.2f\n",
- (outputDeviceInfo->defaultLowOutputLatency * 1000.0),
- (outputDeviceInfo->defaultHighOutputLatency * 1000.0) );
-
- PaQa_SetDefaultTestParameters( &testParams, inputDevice, outputDevice );
-
- PaQa_OverrideTestParameters( &testParams, userOptions );
-
- testParams.maxFrames = (int) (LOOPBACK_DETECTION_DURATION_SECONDS * testParams.sampleRate);
- minAmplitude = testParams.amplitude / 4.0;
-
- // Check to see if the selected formats are supported.
- if( Pa_IsFormatSupported( &testParams.inputParameters, NULL, testParams.sampleRate ) != paFormatIsSupported )
- {
- printf( "Input not supported for this format!\n" );
- return 0;
- }
- if( Pa_IsFormatSupported( NULL, &testParams.outputParameters, testParams.sampleRate ) != paFormatIsSupported )
- {
- printf( "Output not supported for this format!\n" );
- return 0;
- }
-
- PaQa_SetupLoopbackContext( &loopbackContext, &testParams );
-
- if( inputDevice == outputDevice )
- {
- // Use full duplex if checking for loopback on one device.
- testParams.flags &= ~PAQA_FLAG_TWO_STREAMS;
- }
- else
- {
- // Use half duplex if checking for loopback on two different devices.
- testParams.flags = PAQA_FLAG_TWO_STREAMS;
- }
- err = PaQa_RunLoopback( &loopbackContext );
- QA_ASSERT_TRUE("loopback detection callback did not run", (loopbackContext.callbackCount > 1) );
-
- // Analyse recording to see if we captured the output.
- // Start in the middle assuming past latency.
- startFrame = testParams.maxFrames/2;
- numFrames = testParams.maxFrames/2;
- magLeft = PaQa_CorrelateSine( &loopbackContext.recordings[0],
- loopbackContext.generators[0].frequency,
- testParams.sampleRate,
- startFrame, numFrames, NULL );
- magRight = PaQa_CorrelateSine( &loopbackContext.recordings[1],
- loopbackContext.generators[1].frequency,
- testParams.sampleRate,
- startFrame, numFrames, NULL );
- printf(" Amplitudes: left = %f, right = %f\n", magLeft, magRight );
-
- // Check for backwards cable.
- loopbackIsConnected = ((magLeft > minAmplitude) && (magRight > minAmplitude));
-
- if( !loopbackIsConnected )
- {
- double magLeftReverse = PaQa_CorrelateSine( &loopbackContext.recordings[0],
- loopbackContext.generators[1].frequency,
- testParams.sampleRate,
- startFrame, numFrames, NULL );
-
- double magRightReverse = PaQa_CorrelateSine( &loopbackContext.recordings[1],
- loopbackContext.generators[0].frequency,
- testParams.sampleRate,
- startFrame, numFrames, NULL );
-
- if ((magLeftReverse > minAmplitude) && (magRightReverse>minAmplitude))
- {
- printf("ERROR - You seem to have the left and right channels swapped on the loopback cable!\n");
- }
- }
- else
- {
- double rms = 0.0;
- if( PaQa_CheckForClippedLoopback( &loopbackContext ) )
- {
- // Clipped so don't use this loopback.
- loopbackIsConnected = 0;
- }
-
- err = PaQa_MeasureBackgroundNoise( &loopbackContext, &rms );
- printf(" Background noise = %f\n", rms );
- if( err )
- {
- printf("ERROR - Could not measure background noise on this input!\n");
- loopbackIsConnected = 0;
- }
- else if( rms > MAX_BACKGROUND_NOISE_RMS )
- {
- printf("ERROR - There is too much background noise on this input!\n");
- loopbackIsConnected = 0;
- }
- }
-
- PaQa_TeardownLoopbackContext( &loopbackContext );
- return loopbackIsConnected;
-
-error:
- PaQa_TeardownLoopbackContext( &loopbackContext );
- return err;
-}
-
-/*******************************************************************/
-/**
- * If there is a loopback connection then run the analysis.
- */
-static int CheckLoopbackAndScan( UserOptions *userOptions,
- PaDeviceIndex iIn, PaDeviceIndex iOut )
-{
- int loopbackConnected = PaQa_CheckForLoopBack( userOptions, iIn, iOut );
- if( loopbackConnected > 0 )
- {
- PaQa_AnalyzeLoopbackConnection( userOptions, iIn, iOut );
- return 1;
- }
- return 0;
-}
-
-/*******************************************************************/
-/**
- * Scan every combination of output to input device.
- * If a loopback is found then analyse the combination.
- * The scan can be overridden using the -i and -o command line options.
- */
-static int ScanForLoopback(UserOptions *userOptions)
-{
- PaDeviceIndex iIn,iOut;
- int numLoopbacks = 0;
- int numDevices;
- numDevices = Pa_GetDeviceCount();
-
- // If both devices are specified then just use that combination.
- if ((userOptions->inputDevice >= 0) && (userOptions->outputDevice >= 0))
- {
- numLoopbacks += CheckLoopbackAndScan( userOptions, userOptions->inputDevice, userOptions->outputDevice );
- }
- else if (userOptions->inputDevice >= 0)
- {
- // Just scan for output.
- for( iOut=0; iOut<numDevices; iOut++ )
- {
-     numLoopbacks += CheckLoopbackAndScan( userOptions, userOptions->inputDevice, iOut );
- }
- }
- else if (userOptions->outputDevice >= 0)
- {
- // Just scan for input.
- for( iIn=0; iIn<numDevices; iIn++ )
- {
-     numLoopbacks += CheckLoopbackAndScan( userOptions, iIn, userOptions->outputDevice );
- }
- }
- else
- {
- // Scan both.
- for( iOut=0; iOut<numDevices; iOut++ )
- {
-     for( iIn=0; iIn<numDevices; iIn++ )
-     {
-         numLoopbacks += CheckLoopbackAndScan( userOptions, iIn, iOut );
-     }
- }
- }
-
- QA_ASSERT_TRUE( "no working loopback connection found", (numLoopbacks > 0) );
- return numLoopbacks;
-
-error:
- return -1;
-}
-
-/*==========================================================================================*/
-int TestSampleFormatConversion( void )
-{
- int i;
- const float floatInput[] = { 1.0, 0.5, -0.5, -1.0 };
-
- const char charInput[] = { 127, 64, -64, -128 };
- const unsigned char ucharInput[] = { 255, 128+64, 64, 0 };
- const short shortInput[] = { 32767, 32768/2, -32768/2, -32768 };
- const int intInput[] = { 2147483647, 2147483647/2, -1073741824 /*-2147483648/2 doesn't work in msvc*/, -2147483648 };
-
- float floatOutput[4];
- short shortOutput[4];
- int intOutput[4];
- unsigned char ucharOutput[4];
- char charOutput[4];
-
- QA_ASSERT_EQUALS("int must be 32-bit", 4, (int) sizeof(int) );
- QA_ASSERT_EQUALS("short must be 16-bit", 2, (int) sizeof(short) );
-
- // from Float ======
- PaQa_ConvertFromFloat( floatInput, 4, paUInt8, ucharOutput );
- for( i=0; i<4; i++ )
- {
- QA_ASSERT_CLOSE_INT( "paFloat32 -> paUInt8 -> error", ucharInput[i], ucharOutput[i], 1 );
- }
-
- PaQa_ConvertFromFloat( floatInput, 4, paInt8, charOutput );
- for( i=0; i<4; i++ )
- {
- QA_ASSERT_CLOSE_INT( "paFloat32 -> paInt8 -> error", charInput[i], charOutput[i], 1 );
- }
-
- PaQa_ConvertFromFloat( floatInput, 4, paInt16, shortOutput );
- for( i=0; i<4; i++ )
- {
- QA_ASSERT_CLOSE_INT( "paFloat32 -> paInt16 error", shortInput[i], shortOutput[i], 1 );
- }
-
- PaQa_ConvertFromFloat( floatInput, 4, paInt32, intOutput );
- for( i=0; i<4; i++ )
- {
- QA_ASSERT_CLOSE_INT( "paFloat32 -> paInt32 error", intInput[i], intOutput[i], 0x00010000 );
- }
-
-
- // to Float ======
- memset( floatOutput, 0, sizeof(floatOutput) );
- PaQa_ConvertToFloat( ucharInput, 4, paUInt8, floatOutput );
- for( i=0; i<4; i++ )
- {
- QA_ASSERT_CLOSE( "paUInt8 -> paFloat32 error", floatInput[i], floatOutput[i], 0.01 );
- }
-
- memset( floatOutput, 0, sizeof(floatOutput) );
- PaQa_ConvertToFloat( charInput, 4, paInt8, floatOutput );
- for( i=0; i<4; i++ )
- {
- QA_ASSERT_CLOSE( "paInt8 -> paFloat32 error", floatInput[i], floatOutput[i], 0.01 );
- }
-
- memset( floatOutput, 0, sizeof(floatOutput) );
- PaQa_ConvertToFloat( shortInput, 4, paInt16, floatOutput );
- for( i=0; i<4; i++ )
- {
- QA_ASSERT_CLOSE( "paInt16 -> paFloat32 error", floatInput[i], floatOutput[i], 0.001 );
- }
-
- memset( floatOutput, 0, sizeof(floatOutput) );
- PaQa_ConvertToFloat( intInput, 4, paInt32, floatOutput );
- for( i=0; i<4; i++ )
- {
- QA_ASSERT_CLOSE( "paInt32 -> paFloat32 error", floatInput[i], floatOutput[i], 0.00001 );
- }
-
- return 0;
-
-error:
- return -1;
-}
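The expected values in TestSampleFormatConversion imply a symmetric scaling between [-1.0, 1.0] floats and the integer formats, with one LSB of rounding tolerance. A small NumPy sketch of that arithmetic for paInt16; the scale factor and clipping are inferred from the test's expected values, not from the PaQa_ConvertFromFloat source, which is not shown here:

import numpy as np

def float_to_int16(x):
    # Scale [-1.0, 1.0] to the int16 range and clamp the positive extreme,
    # which is why 1.0 maps to 32767 rather than 32768 in the test.
    return np.clip(np.round(np.asarray(x, dtype=np.float64) * 32768.0),
                   -32768, 32767).astype(np.int16)

print(float_to_int16([1.0, 0.5, -0.5, -1.0]))  # -> [ 32767  16384 -16384 -32768]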
-
-
-/*******************************************************************/
-void usage( const char *name )
-{
- printf("%s [-i# -o# -l# -r# -s# -m -w -dDir]\n", name);
- printf(" -i# - Input device ID. Will scan for loopback cable if not specified.\n");
- printf(" -o# - Output device ID. Will scan for loopback if not specified.\n");
- printf(" -l# - Latency for both input and output in milliseconds.\n");
- printf(" --inputLatency # Input latency in milliseconds.\n");
- printf(" --outputLatency # Output latency in milliseconds.\n");
- printf(" -r# - Sample Rate in Hz. Will use multiple common rates if not specified.\n");
- printf(" -s# - Size of callback buffer in frames, framesPerBuffer. Will use common values if not specified.\n");
- printf(" -w - Save bad recordings in a WAV file.\n");
- printf(" -dDir - Path for Directory for WAV files. Default is current directory.\n");
- printf(" -m - Just test the DSP Math code and not the audio devices.\n");
- printf(" -v - Verbose reports.\n");
-}
-
-/*******************************************************************/
-int main( int argc, char **argv )
-{
- int i;
- UserOptions userOptions;
- int result = 0;
- int justMath = 0;
- char *executableName = argv[0];
-
- printf("PortAudio LoopBack Test built " __DATE__ " at " __TIME__ "\n");
-
- if( argc > 1 ){
- printf("running with arguments:");
- for(i=1; i < argc; ++i )
- printf(" %s", argv[i] );
- printf("\n");
- }else{
- printf("running with no arguments\n");
- }
-
- memset(&userOptions, 0, sizeof(userOptions));
- userOptions.inputDevice = paNoDevice;
- userOptions.outputDevice = paNoDevice;
- userOptions.sampleRate = -1;
- userOptions.framesPerBuffer = -1;
- userOptions.inputLatency = -1;
- userOptions.outputLatency = -1;
- userOptions.waveFilePath = ".";
-
- // Process arguments. Skip name of executable.
- i = 1;
- while( i
-//
-// Note that this #include must come after the other ASIO SDK
-// #includes, for example:
-//
-// #include <windows.h>
-// #include <asiosys.h>
-// #include <asio.h>
-// #include <asiodrivers.h>
-// #include <iasiothiscallresolver.h>
-//
-// Actually the important thing is to #include <iasiothiscallresolver.h>
-// after <asio.h>. We have
-// incorporated a test to enforce this ordering.
-//
-// The code transparently takes care of the interposition by
-// using macro substitution to intercept calls to ASIOInit()
-// and ASIOExit(). We save the original ASIO global
-// "theAsioDriver" in our "that" variable, and then set
-// "theAsioDriver" to equal our IASIOThiscallResolver instance.
-//
-// Whilst this method of resolving the thiscall problem requires
-// the addition of #include <iasiothiscallresolver.h> to client
-// code it has the advantage that it does not break the terms
-// of the ASIO licence by publishing it. We are NOT modifying
-// any Steinberg code here, we are merely implementing the IASIO
-// interface in the same way that we would need to do if we
-// wished to provide an open source ASIO driver.
-//
-// For compilation with MinGW -lole32 needs to be added to the
-// linker options. For BORLAND, linking with Import32.lib is
-// sufficient.
-//
-// The dependencies are with: CoInitialize, CoUninitialize,
-// CoCreateInstance, CLSIDFromString - used by asiolist.cpp
-// and are required on Windows whether ThiscallResolver is used
-// or not.
-//
-// Searching for the above strings in the root library path
-// of your compiler should enable the correct libraries to be
-// identified if they aren't immediately obvious.
-//
-// Note that the current implementation of IASIOThiscallResolver
-// is not COM compliant - it does not correctly implement the
-// IUnknown interface. Implementing it is not necessary because
-// it is not called by parts of the ASIO SDK which call through
-// theAsioDriver ptr. The IUnknown methods are implemented as
-// assert(false) to ensure that the code fails if they are
-// ever called.
-// Restrictions: None. Public Domain & Open Source distribute freely
-// You may use IASIOThiscallResolver commercially as well as
-// privately.
-// You the user assume the responsibility for the use of the
-// files, binary or text, and there is no guarantee or warranty,
-// expressed or implied, including but not limited to the
-// implied warranties of merchantability and fitness for a
-// particular purpose. You assume all responsibility and agree
-// to hold no entity, copyright holder or distributors liable
-// for any loss of data or inaccurate representations of data
-// as a result of using IASIOThiscallResolver.
-// Version: 1.4 Added separate macro CALL_THISCALL_1_DOUBLE from
-// Andrew Baldwin, and volatile for whole gcc asm blocks,
-// both for compatibility with newer gcc versions. Cleaned up
-// Borland asm to use one less register.
-// 1.3 Switched to including assert.h for better compatibility.
-// Wrapped entire .h and .cpp contents with a check for
-// _MSC_VER to provide better compatibility with MS compilers.
-// Changed Singleton implementation to use static instance
-// instead of freestore allocated instance. Removed ASIOExit
-// macro as it is no longer needed.
-// 1.2 Removed semicolons from ASIOInit and ASIOExit macros to
-// allow them to be embedded in expressions (if statements).
-// Cleaned up some comments. Removed combase.c dependency (it
-// doesn't compile with BCB anyway) by stubbing IUnknown.
-// 1.1 Incorporated comments from Ross Bencina including things
-// such as changing name from ThiscallResolver to
-// IASIOThiscallResolver, tidying up the constructor, fixing
-// a bug in IASIOThiscallResolver::ASIOExit() and improving
-// portability through the use of conditional compilation
-// 1.0 Initial working version.
-// Created: 6/09/2003
-// Authors: Fraser Adams
-// Ross Bencina
-// Rene G. Ceballos
-// Martin Fay
-// Antti Silvast
-// Andrew Baldwin
-//
-// ****************************************************************************
-
-
-#ifndef included_iasiothiscallresolver_h
-#define included_iasiothiscallresolver_h
-
-// We only need IASIOThiscallResolver at all if we are on Win32. For other
-// platforms we simply bypass the IASIOThiscallResolver definition to allow us
-// to be safely #include'd whatever the platform to keep client code portable
-#if (defined(WIN32) || defined(_WIN32) || defined(__WIN32__)) && !defined(_WIN64)
-
-
-// If microsoft compiler we can call IASIO directly so IASIOThiscallResolver
-// is not used.
-#if !defined(_MSC_VER)
-
-
-// The following is in order to ensure that this header is only included after
-// the other ASIO headers (except for the case of iasiothiscallresolver.cpp).
-// We need to do this because IASIOThiscallResolver works by eclipsing the
-// original definition of ASIOInit() with a macro (see below).
-#if !defined(iasiothiscallresolver_sourcefile)
- #if !defined(__ASIO_H)
- #error iasiothiscallresolver.h must be included AFTER asio.h
- #endif
-#endif
-
-#include <assert.h>
-#include <iasiodrv.h> /* From ASIO SDK */
-
-
-class IASIOThiscallResolver : public IASIO {
-private:
- IASIO* that_; // Points to the real IASIO
-
- static IASIOThiscallResolver instance; // Singleton instance
-
- // Constructors - declared private so construction is limited to
- // our Singleton instance
- IASIOThiscallResolver();
- IASIOThiscallResolver(IASIO* that);
-public:
-
- // Methods from the IUnknown interface. We don't fully implement IUnknown
- // because the ASIO SDK never calls these methods through theAsioDriver ptr.
- // These methods are implemented as assert(false).
- virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void **ppv);
- virtual ULONG STDMETHODCALLTYPE AddRef();
- virtual ULONG STDMETHODCALLTYPE Release();
-
- // Methods from the IASIO interface, implemented as forwarding calls to that.
- virtual ASIOBool init(void *sysHandle);
- virtual void getDriverName(char *name);
- virtual long getDriverVersion();
- virtual void getErrorMessage(char *string);
- virtual ASIOError start();
- virtual ASIOError stop();
- virtual ASIOError getChannels(long *numInputChannels, long *numOutputChannels);
- virtual ASIOError getLatencies(long *inputLatency, long *outputLatency);
- virtual ASIOError getBufferSize(long *minSize, long *maxSize, long *preferredSize, long *granularity);
- virtual ASIOError canSampleRate(ASIOSampleRate sampleRate);
- virtual ASIOError getSampleRate(ASIOSampleRate *sampleRate);
- virtual ASIOError setSampleRate(ASIOSampleRate sampleRate);
- virtual ASIOError getClockSources(ASIOClockSource *clocks, long *numSources);
- virtual ASIOError setClockSource(long reference);
- virtual ASIOError getSamplePosition(ASIOSamples *sPos, ASIOTimeStamp *tStamp);
- virtual ASIOError getChannelInfo(ASIOChannelInfo *info);
- virtual ASIOError createBuffers(ASIOBufferInfo *bufferInfos, long numChannels, long bufferSize, ASIOCallbacks *callbacks);
- virtual ASIOError disposeBuffers();
- virtual ASIOError controlPanel();
- virtual ASIOError future(long selector,void *opt);
- virtual ASIOError outputReady();
-
- // Class method, see ASIOInit() macro below.
- static ASIOError ASIOInit(ASIODriverInfo *info); // Delegates to ::ASIOInit
-};
-
-
-// Replace calls to ASIOInit with our interposing version.
-// This macro enables us to perform thiscall resolution simply by #including
-// <iasiothiscallresolver.h> after the asio #includes (this file _must_ be
-// included _after_ the asio #includes)
-
-#define ASIOInit(name) IASIOThiscallResolver::ASIOInit((name))
-
-
-#endif /* !defined(_MSC_VER) */
-
-#endif /* Win32 */
-
-#endif /* included_iasiothiscallresolver_h */
-
-
diff --git a/spaces/anaclaudia13ct/insect_detection/utils/loggers/comet/hpo.py b/spaces/anaclaudia13ct/insect_detection/utils/loggers/comet/hpo.py
deleted file mode 100644
index 7dd5c92e8de170222b3cd3eae858f4f3cfddaff6..0000000000000000000000000000000000000000
--- a/spaces/anaclaudia13ct/insect_detection/utils/loggers/comet/hpo.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import argparse
-import json
-import logging
-import os
-import sys
-from pathlib import Path
-
-import comet_ml
-
-logger = logging.getLogger(__name__)
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[3] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-from train import train
-from utils.callbacks import Callbacks
-from utils.general import increment_path
-from utils.torch_utils import select_device
-
-# Project Configuration
-config = comet_ml.config.get_config()
-COMET_PROJECT_NAME = config.get_string(os.getenv("COMET_PROJECT_NAME"), "comet.project_name", default="yolov5")
-
-
-def get_args(known=False):
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path')
- parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
- parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
- parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path')
- parser.add_argument('--epochs', type=int, default=300, help='total training epochs')
- parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
- parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
- parser.add_argument('--rect', action='store_true', help='rectangular training')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
- parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
- parser.add_argument('--noval', action='store_true', help='only validate final epoch')
- parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor')
- parser.add_argument('--noplots', action='store_true', help='save no plot files')
- parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
- parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
- parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
- parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
- parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
- parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
- parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
- parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
- parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--quad', action='store_true', help='quad dataloader')
- parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
- parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
- parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
- parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2')
- parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
- parser.add_argument('--seed', type=int, default=0, help='Global training seed')
- parser.add_argument('--local_rank', type=int, default=-1, help='Automatic DDP Multi-GPU argument, do not modify')
-
- # Weights & Biases arguments
- parser.add_argument('--entity', default=None, help='W&B: Entity')
- parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, "val" option')
- parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
- parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')
-
- # Comet Arguments
- parser.add_argument("--comet_optimizer_config", type=str, help="Comet: Path to a Comet Optimizer Config File.")
- parser.add_argument("--comet_optimizer_id", type=str, help="Comet: ID of the Comet Optimizer sweep.")
- parser.add_argument("--comet_optimizer_objective", type=str, help="Comet: Set to 'minimize' or 'maximize'.")
- parser.add_argument("--comet_optimizer_metric", type=str, help="Comet: Metric to Optimize.")
- parser.add_argument("--comet_optimizer_workers",
- type=int,
- default=1,
- help="Comet: Number of Parallel Workers to use with the Comet Optimizer.")
-
- return parser.parse_known_args()[0] if known else parser.parse_args()
-
-
-def run(parameters, opt):
- hyp_dict = {k: v for k, v in parameters.items() if k not in ["epochs", "batch_size"]}
-
- opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve))
- opt.batch_size = parameters.get("batch_size")
- opt.epochs = parameters.get("epochs")
-
- device = select_device(opt.device, batch_size=opt.batch_size)
- train(hyp_dict, opt, device, callbacks=Callbacks())
-
-
-if __name__ == "__main__":
- opt = get_args(known=True)
-
- opt.weights = str(opt.weights)
- opt.cfg = str(opt.cfg)
- opt.data = str(opt.data)
- opt.project = str(opt.project)
-
- optimizer_id = os.getenv("COMET_OPTIMIZER_ID")
- if optimizer_id is None:
- with open(opt.comet_optimizer_config) as f:
- optimizer_config = json.load(f)
- optimizer = comet_ml.Optimizer(optimizer_config)
- else:
- optimizer = comet_ml.Optimizer(optimizer_id)
-
- opt.comet_optimizer_id = optimizer.id
- status = optimizer.status()
-
- opt.comet_optimizer_objective = status["spec"]["objective"]
- opt.comet_optimizer_metric = status["spec"]["metric"]
-
- logger.info("COMET INFO: Starting Hyperparameter Sweep")
- for parameter in optimizer.get_parameters():
- run(parameter["parameters"], opt)
diff --git a/spaces/anakin87/fact-checking-rocks/README.md b/spaces/anakin87/fact-checking-rocks/README.md
deleted file mode 100644
index d7e547a0002da4731e3901fa03d905bef891adce..0000000000000000000000000000000000000000
--- a/spaces/anakin87/fact-checking-rocks/README.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-title: Fact Checking rocks!
-emoji: 🎸
-colorFrom: purple
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: Rock_fact_checker.py
-pinned: true
-models: [sentence-transformers/msmarco-distilbert-base-tas-b, microsoft/deberta-v2-xlarge-mnli, google/flan-t5-large]
-tags: [fact-checking, rock, natural language inference, dense retrieval, large language models, haystack, neural search]
-license: apache-2.0
----
-
-# Fact Checking 🎸 Rocks! [](https://huggingface.co/spaces/anakin87/fact-checking-rocks) [](https://github.com/anakin87/fact-checking-rocks)
-
-## *Fact checking baseline combining dense retrieval and textual entailment*
-
-- [Fact Checking 🎸 Rocks! ](#fact-checking--rocks---)
- - [*Fact checking baseline combining dense retrieval and textual entailment*](#fact-checking-baseline-combining-dense-retrieval-and-textual-entailment)
- - [Idea](#idea)
- - [Presentation](#presentation)
- - [System description](#system-description)
- - [Indexing pipeline](#indexing-pipeline)
- - [Search pipeline](#search-pipeline)
- - [Explain using a LLM](#explain-using-a-llm)
- - [Limits and possible improvements](#limits-and-possible-improvements)
- - [Repository structure](#repository-structure)
- - [Installation](#installation)
- - [Entailment Checker node](#entailment-checker-node)
- - [Fact Checking 🎸 Rocks!](#fact-checking--rocks)
-
-### Idea
-💡 This project aims to show that a *naive and simple baseline* for fact checking can be built by combining dense retrieval and a textual entailment task.
-In a nutshell, the flow is as follows (a minimal sketch of this flow is given right after the list):
-* the user enters a factual statement
-* the relevant passages are retrieved from the knowledge base using dense retrieval
-* the system computes the text entailment between each relevant passage and the statement, using a Natural Language Inference model
-* the entailment scores are aggregated to produce a summary score.
-
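The flow above maps fairly directly onto a few library calls. The snippet below is a minimal sketch written against `sentence-transformers` and `transformers` directly, not the app's actual Haystack pipeline; the model names are the ones cited later in this README, while the sample passages, the statement and the assumed NLI label order are illustrative only.

```python
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Toy knowledge base standing in for the Wikipedia passages used by the app.
passages = [
    "The Beatles were an English rock band formed in Liverpool in 1960.",
    "Freddie Mercury was the lead vocalist of the rock band Queen.",
]
statement = "Freddie Mercury was the lead singer of Queen."

# 1) Dense retrieval: embed the statement and the passages, rank passages by dot-product score.
retriever = SentenceTransformer("msmarco-distilbert-base-tas-b")
scores = util.dot_score(retriever.encode(statement), retriever.encode(passages))[0]
ranked = scores.argsort(descending=True).tolist()

# 2) Textual entailment: each retrieved passage is the premise, the user statement the hypothesis.
nli_name = "microsoft/deberta-v2-xlarge-mnli"
tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)
labels = ["contradiction", "neutral", "entailment"]  # assumed label order, check the model config

for idx in ranked:
    inputs = tokenizer(passages[idx], statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli_model(**inputs).logits, dim=-1)[0]
    print(passages[idx])
    print({label: round(float(p), 3) for label, p in zip(labels, probs)})
```

In the real app the passages are of course not hard-coded: they come from the FAISS index built by the indexing pipeline described below, and the per-passage scores are then aggregated as explained in the search pipeline section.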
-### Presentation
-
-- [🍿 Video presentation @ Berlin Buzzwords 2023](https://www.youtube.com/watch?v=4L8Iw9CZNbU)
-- [🧑🏫 Slides](./presentation/fact_checking_rocks.pdf)
-
-### System description
-🪄 This project is strongly based on [🔎 Haystack](https://github.com/deepset-ai/haystack), an open source NLP framework that enables seamless use of Transformer models and LLMs to interact with your data. The main components of our system are an indexing pipeline and a search pipeline.
-
-#### Indexing pipeline
-* [Crawling](https://github.com/anakin87/fact-checking-rocks/blob/321ba7893bbe79582f8c052493acfda497c5b785/notebooks/get_wikipedia_data.ipynb): Crawl data from Wikipedia, starting from the page [List of mainstream rock performers](https://en.wikipedia.org/wiki/List_of_mainstream_rock_performers) and using the [python wrapper](https://github.com/goldsmith/Wikipedia)
-* [Indexing](https://github.com/anakin87/fact-checking-rocks/blob/321ba7893bbe79582f8c052493acfda497c5b785/notebooks/indexing.ipynb)
- * preprocess the downloaded documents into chunks consisting of 2 sentences
-  * chunks with fewer than 10 words are discarded, as they are not very informative
- * instantiate a [FAISS](https://github.com/facebookresearch/faiss) Document store and store the passages on it
-  * create embeddings for the passages, using a Sentence Transformer model and save them in FAISS. The retrieval task will involve [*asymmetric semantic search*](https://www.sbert.net/examples/applications/semantic-search/README.html#symmetric-vs-asymmetric-semantic-search) (statements to be verified are usually shorter than the relevant passages), therefore I choose the model `msmarco-distilbert-base-tas-b`
-  * save the FAISS index (a minimal sketch of these indexing steps follows this list).
-
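A rough sketch of these indexing steps is shown below, using `faiss` and `sentence-transformers` directly rather than Haystack's FAISS Document Store; the sentence splitting, the sample document and the output file name are placeholders, not the code used in the notebooks.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def two_sentence_chunks(text, min_words=10):
    # Very naive sentence splitting; the notebooks use a proper preprocessor.
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    chunks = [" ".join(sentences[i:i + 2]) for i in range(0, len(sentences), 2)]
    return [c for c in chunks if len(c.split()) >= min_words]

docs = [
    "Queen are a British rock band formed in London in 1970. "
    "Their classic line-up was Freddie Mercury, Brian May, Roger Taylor and John Deacon. "
    "They became one of the best-selling rock acts in music history.",
]
passages = [chunk for doc in docs for chunk in two_sentence_chunks(doc)]

model = SentenceTransformer("msmarco-distilbert-base-tas-b")
embeddings = model.encode(passages, convert_to_numpy=True).astype(np.float32)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner-product index, matching dot-score retrieval
index.add(embeddings)
faiss.write_index(index, "faiss_index.bin")
```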
-#### Search pipeline
-
-* the user enters a factual statement
-* compute the embedding of the user statement using the same Sentence Transformer used for indexing (`msmarco-distilbert-base-tas-b`)
-* retrieve the K most relevant text passages stored in FAISS (along with their relevance scores)
-* the following steps are performed using the [`EntailmentChecker`, a custom Haystack node](https://github.com/anakin87/haystack-entailment-checker)
-* **text entailment task**: compute the text entailment between each text passage (premise) and the user statement (hypothesis), using a Natural Language Inference model (`microsoft/deberta-v2-xlarge-mnli`). For every text passage, we have 3 scores (summing to 1): entailment, contradiction and neutral.
-* aggregate the text entailment scores: compute their weighted average, where the weight is the relevance score (a short sketch of this aggregation step follows the list). **Now it is possible to tell whether the knowledge base confirms, is neutral about, or disproves the user statement.**
-* *empirical consideration: if within the first N passages (N < K) there is already strong evidence of entailment or contradiction (aggregate score > 0.5), it is better not to consider the (K-N) less relevant documents.*
-
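The aggregation step itself boils down to a weighted average. The sketch below assumes the per-passage NLI scores and retrieval relevance scores are already available; all numbers and the 0.5 threshold are illustrative.

```python
import numpy as np

# One row per retrieved passage: (entailment, contradiction, neutral) scores from the NLI model.
nli_scores = np.array([
    [0.90, 0.05, 0.05],
    [0.20, 0.10, 0.70],
    [0.05, 0.80, 0.15],
])
relevance = np.array([0.9, 0.6, 0.2])  # retrieval scores used as weights

weights = relevance / relevance.sum()
entailment, contradiction, neutral = weights @ nli_scores

if entailment > 0.5:
    verdict = "the knowledge base confirms the statement"
elif contradiction > 0.5:
    verdict = "the knowledge base disproves the statement"
else:
    verdict = "the knowledge base is neutral about the statement"

print(round(entailment, 2), round(contradiction, 2), round(neutral, 2), "->", verdict)
```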
-#### Explain using a LLM
-* if there is entailment or contradiction, prompt `google/flan-t5-large`, asking why the relevant textual passages entail/contradict the user statement (a small sketch of this call follows).
-
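A minimal sketch of this step with the `transformers` pipeline API; the prompt wording here is an assumption, not the app's actual prompt.

```python
from transformers import pipeline

explainer = pipeline("text2text-generation", model="google/flan-t5-large")

statement = "Freddie Mercury was the lead singer of Queen."
passage = "Freddie Mercury was the lead vocalist of the rock band Queen."
prompt = (
    f"Premise: {passage}\n"
    f"Hypothesis: {statement}\n"
    "Explain briefly why the premise entails the hypothesis."
)
print(explainer(prompt, max_new_tokens=100)[0]["generated_text"])
```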
-### Limits and possible improvements
- ✨ As mentioned, the current approach to fact checking is simple and naive. Some **structural limits of this approach**:
- * there is **no statement detection**. In fact, the statement to be verified is chosen by the user. In real-world applications, this step is often necessary.
- * **Wikipedia is taken as a source of truth**. Unfortunately, Wikipedia does not contain universal knowledge and there is no real guarantee that it is a source of truth. There are certainly very interesting approaches that view a snapshot of the entire web as an uncurated source of knowledge (see [Facebook Research SPHERE](https://arxiv.org/abs/2112.09924)).
- * Several papers and even our experiments show a general effectiveness of **dense retrieval** in retrieving textual passages for evaluating the user statement. However, there may be cases in which the most useful textual passages for fact checking do not emerge from the simple semantic similarity with the statement to be verified.
- * **no systematic evaluation** was performed, only manual experiments.
-
-While keeping this simple approach, some **improvements** could be made:
-* For reasons of simplicity and infrastructural limitations, the retrieval uses only a very small portion of the Wikipedia data (artists pages from the [List of mainstream rock performers](https://en.wikipedia.org/wiki/List_of_mainstream_rock_performers)). With these few data available, in many cases the knowledge base remains neutral even with respect to statements about rock albums/songs. Certainly, fact checking **quality could improve by expanding the knowledge base** and possibly extending it to the entire Wikipedia.
-* Both the retriever model and the Natural Language Inference model are general purpose models and have not been fine-tuned for our domain. Undoubtedly they can **show better performance if fine-tuned in the rock music domain**. Particularly, the retriever model might be adapted with low effort, using [Generative Pseudo Labelling](https://haystack.deepset.ai/guides/gpl).
-
-### Repository structure
-* [Rock_fact_checker.py](Rock_fact_checker.py) and [pages folder](./pages/): multi-page Streamlit web app
-* [app_utils folder](./app_utils/): python modules used in the web app
-* [notebooks folder](./notebooks/): Jupyter/Colab notebooks to get Wikipedia data and index the text passages (using Haystack)
-* [data folder](./data/): all necessary data, including original Wikipedia data, FAISS Index and prepared random statements
-
-### Installation
-💻
-#### Entailment Checker node
-If you want to build a similar system using the [`EntailmentChecker`](https://github.com/anakin87/haystack-entailment-checker), I strongly suggest taking a look at [the node repository](https://github.com/anakin87/haystack-entailment-checker). It can be easily installed with
-```bash
-pip install haystack-entailment-checker
-```
-
-#### Fact Checking 🎸 Rocks!
- To install this project locally, follow these steps:
-* `git clone https://github.com/anakin87/fact-checking-rocks`
-* `cd fact-checking-rocks`
-* `pip install -r requirements.txt`
-
-To run the web app, simply type: `streamlit run Rock_fact_checker.py`
diff --git a/spaces/aphenx/bingo/src/components/toaster.tsx b/spaces/aphenx/bingo/src/components/toaster.tsx
deleted file mode 100644
index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000
--- a/spaces/aphenx/bingo/src/components/toaster.tsx
+++ /dev/null
@@ -1,3 +0,0 @@
-'use client'
-
-export { Toaster } from 'react-hot-toast'
diff --git a/spaces/aphenx/bingo/src/lib/bots/bing/utils.ts b/spaces/aphenx/bingo/src/lib/bots/bing/utils.ts
deleted file mode 100644
index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000
--- a/spaces/aphenx/bingo/src/lib/bots/bing/utils.ts
+++ /dev/null
@@ -1,87 +0,0 @@
-import { ChatResponseMessage, BingChatResponse } from './types'
-
-export function convertMessageToMarkdown(message: ChatResponseMessage): string {
- if (message.messageType === 'InternalSearchQuery') {
- return message.text
- }
- for (const card of message.adaptiveCards??[]) {
- for (const block of card.body) {
- if (block.type === 'TextBlock') {
- return block.text
- }
- }
- }
- return ''
-}
-
-const RecordSeparator = String.fromCharCode(30)
-
-export const websocketUtils = {
- packMessage(data: any) {
- return `${JSON.stringify(data)}${RecordSeparator}`
- },
- unpackMessage(data: string | ArrayBuffer | Blob) {
- if (!data) return {}
- return data
- .toString()
- .split(RecordSeparator)
- .filter(Boolean)
- .map((s) => {
- try {
- return JSON.parse(s)
- } catch (e) {
- return {}
- }
- })
- },
-}
-
-export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise<string | undefined> {
- const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`,
- {
- method: 'HEAD',
- headers,
- redirect: 'manual'
- },
- );
-
- if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) {
-    throw new Error('Request failed, please check whether your cookie is valid')
- }
-
- const resultId = RegExp.$1;
- let count = 0
- const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`;
-
- do {
- await sleep(3000);
- const content = await fetch(imageThumbUrl, { headers, method: 'GET' })
-
- // @ts-ignore
- if (content.headers.get('content-length') > 1) {
- const text = await content.text()
-      return (text?.match(/<img[^>]+src="[^"]+/g) || [])
-        .map(target => target?.split('src="').pop()?.replace(/&amp;/g, '&'))
-        .map(img => `<img src="${img}" />`).join(' ')
- }
- } while(count ++ < 10);
-}
-
-
-export async function* streamAsyncIterable(stream: ReadableStream) {
- const reader = stream.getReader()
- try {
- while (true) {
- const { done, value } = await reader.read()
- if (done) {
- return
- }
- yield value
- }
- } finally {
- reader.releaseLock()
- }
-}
-
-export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms))
-
diff --git a/spaces/arch-123/bingo/src/lib/utils.ts b/spaces/arch-123/bingo/src/lib/utils.ts
deleted file mode 100644
index 8de2eba94bf0bc93579d4f489e8b810dbf6ce92a..0000000000000000000000000000000000000000
--- a/spaces/arch-123/bingo/src/lib/utils.ts
+++ /dev/null
@@ -1,159 +0,0 @@
-import { clsx, type ClassValue } from 'clsx'
-import { customAlphabet } from 'nanoid'
-import { twMerge } from 'tailwind-merge'
-// @ts-ignore
-import randomip from 'random-ip'
-import cidr from './cidr.json'
-
-export function cn(...inputs: ClassValue[]) {
- return twMerge(clsx(inputs))
-}
-
-export const nanoid = customAlphabet(
- '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz',
- 7
-) // 7-character random string
-
-export function createChunkDecoder() {
- const decoder = new TextDecoder()
- return function (chunk: Uint8Array | undefined): string {
- if (!chunk) return ''
- return decoder.decode(chunk, { stream: true })
- }
-}
-
-export function random (start: number, end: number) {
- return start + Math.floor(Math.random() * (end - start))
-}
-
-export function randomIP() {
- // return `104.${random(0, 21)}.${random(0, 127)}.${random(1, 255)}`
- const [ip, range] = cidr.at(random(0, cidr.length))?.split('/')!
- return randomip(ip, range)
-}
-
-export const defaultUID = 'xxx'
-
-export function parseHeadersFromCurl(content: string) {
- const re = /-H '([^:]+):\s*([^']+)/mg
- const headers: HeadersInit = {}
-  content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // convert cmd-style curl to bash-style curl
- content.replace(re, (_: string, key: string, value: string) => {
- headers[key] = value
- return ''
- })
- return headers
-}
-
-export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2']
-export function encodeHeadersToCookie(content: string) {
- const base64Content = btoa(content)
- const contentChunks = base64Content.match(/.{1,4000}/g) || []
- return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`)
-}
-
-export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) {
- let base64Content = ''
- ChunkKeys.forEach((key) => {
- base64Content += (cookies[key] || '')
- })
- try {
- return atob(base64Content)
- } catch(e) {
- return ''
- }
-}
-
-export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) {
- return parseHeadersFromCurl(extraCurlFromCookie(cookies))
-}
-
-export function formatDate(input: string | number | Date): string {
- const date = new Date(input)
- return date.toLocaleDateString('en-US', {
- month: 'long',
- day: 'numeric',
- year: 'numeric'
- })
-}
-
-export function parseCookie(cookie: string, cookieName: string) {
- const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie
- return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : ''
-}
-
-export function setCookie(key: string, value: string) {
- const maxAge = value ? 86400 * 30 : 0
- document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure`
-}
-
-export function getCookie(cookieName: string) {
- const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`)
- return re.test(document.cookie) ? RegExp.$1 : ''
-}
-
-export function parseCookies(cookie: string, cookieNames: string[]) {
- const cookies: { [key: string]: string } = {}
- cookieNames.forEach(cookieName => {
- cookies[cookieName] = parseCookie(cookie, cookieName)
- })
- return cookies
-}
-
-export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0'
-
-export function parseUA(ua?: string, default_ua = DEFAULT_UA) {
- return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua
-}
-
-export function mockUser(cookies: Partial<{ [key: string]: string }>) {
- const {
- BING_UA = process.env.BING_UA,
- BING_IP,
- _U = defaultUID,
- } = cookies
- const ua = parseUA(BING_UA)
-
- return {
- 'x-forwarded-for': BING_IP!,
- 'Accept-Encoding': 'gzip, deflate, br',
- 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
- 'User-Agent': ua!,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.3 OS/Win32',
- cookie: `_U=${_U}` || '',
- }
-}
-
-export function createHeaders(cookies: Partial<{ [key: string]: string }>, type?: string) {
- let {
- BING_HEADER = process.env.BING_HEADER,
- BING_IP,
- IMAGE_ONLY = process.env.IMAGE_ONLY ?? '1',
- } = cookies
- const imageOnly = /^(1|true|yes)$/.test(String(IMAGE_ONLY))
- if (BING_HEADER) {
- if (
- (imageOnly && type === 'image')
- || !imageOnly
- ) {
- const headers = extraHeadersFromCookie({
- BING_HEADER,
- ...cookies,
- }) || {}
- headers['x-forward-for'] = BING_IP!
- return headers
- }
- }
- return mockUser(cookies)
-}
-
-export class WatchDog {
- private tid = 0
- watch(fn: Function, timeout = 2000) {
- clearTimeout(this.tid)
- this.tid = setTimeout(fn, timeout + Math.random() * 1000)
- }
- reset() {
- clearTimeout(this.tid)
- }
-}
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_api.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_api.py
deleted file mode 100644
index 489d8cf6735f4a0fb444839d9a497e24e9ecf15c..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/tests/test_api.py
+++ /dev/null
@@ -1,937 +0,0 @@
-"""Unit tests for altair API"""
-
-import io
-import json
-import operator
-import os
-import tempfile
-
-import jsonschema
-import pytest
-import pandas as pd
-
-import altair.vegalite.v3 as alt
-from altair.utils import AltairDeprecationWarning
-
-try:
- import altair_saver # noqa: F401
-except ImportError:
- altair_saver = None
-
-
-def getargs(*args, **kwargs):
- return args, kwargs
-
-
-OP_DICT = {
- "layer": operator.add,
- "hconcat": operator.or_,
- "vconcat": operator.and_,
-}
-
-
-def _make_chart_type(chart_type):
- data = pd.DataFrame(
- {
- "x": [28, 55, 43, 91, 81, 53, 19, 87],
- "y": [43, 91, 81, 53, 19, 87, 52, 28],
- "color": list("AAAABBBB"),
- }
- )
- base = (
- alt.Chart(data)
- .mark_point()
- .encode(
- x="x",
- y="y",
- color="color",
- )
- )
-
- if chart_type in ["layer", "hconcat", "vconcat", "concat"]:
- func = getattr(alt, chart_type)
- return func(base.mark_square(), base.mark_circle())
- elif chart_type == "facet":
- return base.facet("color")
- elif chart_type == "facet_encoding":
- return base.encode(facet="color")
- elif chart_type == "repeat":
- return base.encode(alt.X(alt.repeat(), type="quantitative")).repeat(["x", "y"])
- elif chart_type == "chart":
- return base
- else:
- raise ValueError("chart_type='{}' is not recognized".format(chart_type))
-
-
-@pytest.fixture
-def basic_chart():
- data = pd.DataFrame(
- {
- "a": ["A", "B", "C", "D", "E", "F", "G", "H", "I"],
- "b": [28, 55, 43, 91, 81, 53, 19, 87, 52],
- }
- )
-
- return alt.Chart(data).mark_bar().encode(x="a", y="b")
-
-
-def test_chart_data_types():
- def Chart(data):
- return alt.Chart(data).mark_point().encode(x="x:Q", y="y:Q")
-
- # Url Data
- data = "/path/to/my/data.csv"
- dct = Chart(data).to_dict()
- assert dct["data"] == {"url": data}
-
- # Dict Data
- data = {"values": [{"x": 1, "y": 2}, {"x": 2, "y": 3}]}
- with alt.data_transformers.enable(consolidate_datasets=False):
- dct = Chart(data).to_dict()
- assert dct["data"] == data
-
- with alt.data_transformers.enable(consolidate_datasets=True):
- dct = Chart(data).to_dict()
- name = dct["data"]["name"]
- assert dct["datasets"][name] == data["values"]
-
- # DataFrame data
- data = pd.DataFrame({"x": range(5), "y": range(5)})
- with alt.data_transformers.enable(consolidate_datasets=False):
- dct = Chart(data).to_dict()
- assert dct["data"]["values"] == data.to_dict(orient="records")
-
- with alt.data_transformers.enable(consolidate_datasets=True):
- dct = Chart(data).to_dict()
- name = dct["data"]["name"]
- assert dct["datasets"][name] == data.to_dict(orient="records")
-
- # Named data object
- data = alt.NamedData(name="Foo")
- dct = Chart(data).to_dict()
- assert dct["data"] == {"name": "Foo"}
-
-
-def test_chart_infer_types():
- data = pd.DataFrame(
- {
- "x": pd.date_range("2012", periods=10, freq="Y"),
- "y": range(10),
- "c": list("abcabcabca"),
- }
- )
-
- def _check_encodings(chart):
- dct = chart.to_dict()
- assert dct["encoding"]["x"]["type"] == "temporal"
- assert dct["encoding"]["x"]["field"] == "x"
- assert dct["encoding"]["y"]["type"] == "quantitative"
- assert dct["encoding"]["y"]["field"] == "y"
- assert dct["encoding"]["color"]["type"] == "nominal"
- assert dct["encoding"]["color"]["field"] == "c"
-
- # Pass field names by keyword
- chart = alt.Chart(data).mark_point().encode(x="x", y="y", color="c")
- _check_encodings(chart)
-
- # pass Channel objects by keyword
- chart = (
- alt.Chart(data)
- .mark_point()
- .encode(x=alt.X("x"), y=alt.Y("y"), color=alt.Color("c"))
- )
- _check_encodings(chart)
-
- # pass Channel objects by value
- chart = alt.Chart(data).mark_point().encode(alt.X("x"), alt.Y("y"), alt.Color("c"))
- _check_encodings(chart)
-
- # override default types
- chart = (
- alt.Chart(data)
- .mark_point()
- .encode(alt.X("x", type="nominal"), alt.Y("y", type="ordinal"))
- )
- dct = chart.to_dict()
- assert dct["encoding"]["x"]["type"] == "nominal"
- assert dct["encoding"]["y"]["type"] == "ordinal"
-
-
-@pytest.mark.parametrize(
- "args, kwargs",
- [
- getargs(detail=["value:Q", "name:N"], tooltip=["value:Q", "name:N"]),
- getargs(detail=["value", "name"], tooltip=["value", "name"]),
- getargs(alt.Detail(["value:Q", "name:N"]), alt.Tooltip(["value:Q", "name:N"])),
- getargs(alt.Detail(["value", "name"]), alt.Tooltip(["value", "name"])),
- getargs(
- [alt.Detail("value:Q"), alt.Detail("name:N")],
- [alt.Tooltip("value:Q"), alt.Tooltip("name:N")],
- ),
- getargs(
- [alt.Detail("value"), alt.Detail("name")],
- [alt.Tooltip("value"), alt.Tooltip("name")],
- ),
- ],
-)
-def test_multiple_encodings(args, kwargs):
- df = pd.DataFrame({"value": [1, 2, 3], "name": ["A", "B", "C"]})
- encoding_dct = [
- {"field": "value", "type": "quantitative"},
- {"field": "name", "type": "nominal"},
- ]
- chart = alt.Chart(df).mark_point().encode(*args, **kwargs)
- dct = chart.to_dict()
- assert dct["encoding"]["detail"] == encoding_dct
- assert dct["encoding"]["tooltip"] == encoding_dct
-
-
-def test_chart_operations():
- data = pd.DataFrame(
- {
- "x": pd.date_range("2012", periods=10, freq="Y"),
- "y": range(10),
- "c": list("abcabcabca"),
- }
- )
- chart1 = alt.Chart(data).mark_line().encode(x="x", y="y", color="c")
- chart2 = chart1.mark_point()
- chart3 = chart1.mark_circle()
- chart4 = chart1.mark_square()
-
- chart = chart1 + chart2 + chart3
- assert isinstance(chart, alt.LayerChart)
- assert len(chart.layer) == 3
- chart += chart4
- assert len(chart.layer) == 4
-
- chart = chart1 | chart2 | chart3
- assert isinstance(chart, alt.HConcatChart)
- assert len(chart.hconcat) == 3
- chart |= chart4
- assert len(chart.hconcat) == 4
-
- chart = chart1 & chart2 & chart3
- assert isinstance(chart, alt.VConcatChart)
- assert len(chart.vconcat) == 3
- chart &= chart4
- assert len(chart.vconcat) == 4
-
-
-def test_selection_to_dict():
- brush = alt.selection(type="interval")
-
- # test some value selections
- # Note: X and Y cannot have conditions
- alt.Chart("path/to/data.json").mark_point().encode(
- color=alt.condition(brush, alt.ColorValue("red"), alt.ColorValue("blue")),
- opacity=alt.condition(brush, alt.value(0.5), alt.value(1.0)),
- text=alt.condition(brush, alt.TextValue("foo"), alt.value("bar")),
- ).to_dict()
-
- # test some field selections
- # Note: X and Y cannot have conditions
- # Conditions cannot both be fields
- alt.Chart("path/to/data.json").mark_point().encode(
- color=alt.condition(brush, alt.Color("col1:N"), alt.value("blue")),
- opacity=alt.condition(brush, "col1:N", alt.value(0.5)),
- text=alt.condition(brush, alt.value("abc"), alt.Text("col2:N")),
- size=alt.condition(brush, alt.value(20), "col2:N"),
- ).to_dict()
-
-
-def test_selection_expression():
- selection = alt.selection_single(fields=["value"])
-
- assert isinstance(selection.value, alt.expr.Expression)
- assert selection.value.to_dict() == "{0}.value".format(selection.name)
-
- assert isinstance(selection["value"], alt.expr.Expression)
- assert selection["value"].to_dict() == "{0}['value']".format(selection.name)
-
- with pytest.raises(AttributeError):
- selection.__magic__
-
-
-@pytest.mark.parametrize("format", ["html", "json", "png", "svg", "pdf"])
-def test_save(format, basic_chart):
- if format in ["pdf", "png"]:
- out = io.BytesIO()
- mode = "rb"
- else:
- out = io.StringIO()
- mode = "r"
-
- if format in ["svg", "png", "pdf"]:
- if not altair_saver:
- with pytest.raises(ValueError) as err:
- basic_chart.save(out, format=format)
- assert "github.com/altair-viz/altair_saver" in str(err.value)
- return
- elif format not in altair_saver.available_formats():
- with pytest.raises(ValueError) as err:
- basic_chart.save(out, format=format)
- assert f"No enabled saver found that supports format='{format}'" in str(
- err.value
- )
- return
-
- basic_chart.save(out, format=format)
- out.seek(0)
- content = out.read()
-
- if format == "json":
- assert "$schema" in json.loads(content)
- if format == "html":
-        assert content.startswith("<!DOCTYPE html>")
-
- fid, filename = tempfile.mkstemp(suffix="." + format)
- os.close(fid)
-
- try:
- basic_chart.save(filename)
- with open(filename, mode) as f:
- assert f.read()[:1000] == content[:1000]
- finally:
- os.remove(filename)
-
-
-def test_facet_basic():
- # wrapped facet
- chart1 = (
- alt.Chart("data.csv")
- .mark_point()
- .encode(
- x="x:Q",
- y="y:Q",
- )
- .facet("category:N", columns=2)
- )
-
- dct1 = chart1.to_dict()
-
- assert dct1["facet"] == alt.Facet("category:N").to_dict()
- assert dct1["columns"] == 2
- assert dct1["data"] == alt.UrlData("data.csv").to_dict()
-
- # explicit row/col facet
- chart2 = (
- alt.Chart("data.csv")
- .mark_point()
- .encode(
- x="x:Q",
- y="y:Q",
- )
- .facet(row="category1:Q", column="category2:Q")
- )
-
- dct2 = chart2.to_dict()
-
- assert dct2["facet"]["row"] == alt.Facet("category1:Q").to_dict()
- assert dct2["facet"]["column"] == alt.Facet("category2:Q").to_dict()
- assert "columns" not in dct2
- assert dct2["data"] == alt.UrlData("data.csv").to_dict()
-
-
-def test_facet_parse():
- chart = (
- alt.Chart("data.csv")
- .mark_point()
- .encode(x="x:Q", y="y:Q")
- .facet(row="row:N", column="column:O")
- )
- dct = chart.to_dict()
- assert dct["data"] == {"url": "data.csv"}
- assert "data" not in dct["spec"]
- assert dct["facet"] == {
- "column": {"field": "column", "type": "ordinal"},
- "row": {"field": "row", "type": "nominal"},
- }
-
-
-def test_facet_parse_data():
- data = pd.DataFrame({"x": range(5), "y": range(5), "row": list("abcab")})
- chart = (
- alt.Chart(data)
- .mark_point()
- .encode(x="x", y="y:O")
- .facet(row="row", column="column:O")
- )
- with alt.data_transformers.enable(consolidate_datasets=False):
- dct = chart.to_dict()
- assert "values" in dct["data"]
- assert "data" not in dct["spec"]
- assert dct["facet"] == {
- "column": {"field": "column", "type": "ordinal"},
- "row": {"field": "row", "type": "nominal"},
- }
-
- with alt.data_transformers.enable(consolidate_datasets=True):
- dct = chart.to_dict()
- assert "datasets" in dct
- assert "name" in dct["data"]
- assert "data" not in dct["spec"]
- assert dct["facet"] == {
- "column": {"field": "column", "type": "ordinal"},
- "row": {"field": "row", "type": "nominal"},
- }
-
-
-def test_selection():
- # test instantiation of selections
- interval = alt.selection_interval(name="selec_1")
- assert interval.selection.type == "interval"
- assert interval.name == "selec_1"
-
- single = alt.selection_single(name="selec_2")
- assert single.selection.type == "single"
- assert single.name == "selec_2"
-
- multi = alt.selection_multi(name="selec_3")
- assert multi.selection.type == "multi"
- assert multi.name == "selec_3"
-
- # test adding to chart
- chart = alt.Chart().add_selection(single)
- chart = chart.add_selection(multi, interval)
- assert set(chart.selection.keys()) == {"selec_1", "selec_2", "selec_3"}
-
- # test logical operations
- assert isinstance(single & multi, alt.Selection)
- assert isinstance(single | multi, alt.Selection)
- assert isinstance(~single, alt.Selection)
- assert isinstance((single & multi)[0].group, alt.SelectionAnd)
- assert isinstance((single | multi)[0].group, alt.SelectionOr)
- assert isinstance((~single)[0].group, alt.SelectionNot)
-
- # test that default names increment (regression for #1454)
- sel1 = alt.selection_single()
- sel2 = alt.selection_multi()
- sel3 = alt.selection_interval()
- names = {s.name for s in (sel1, sel2, sel3)}
- assert len(names) == 3
-
-
-def test_transforms():
- # aggregate transform
- agg1 = alt.AggregatedFieldDef(**{"as": "x1", "op": "mean", "field": "y"})
- agg2 = alt.AggregatedFieldDef(**{"as": "x2", "op": "median", "field": "z"})
- chart = alt.Chart().transform_aggregate([agg1], ["foo"], x2="median(z)")
- kwds = dict(aggregate=[agg1, agg2], groupby=["foo"])
- assert chart.transform == [alt.AggregateTransform(**kwds)]
-
- # bin transform
- chart = alt.Chart().transform_bin("binned", field="field", bin=True)
- kwds = {"as": "binned", "field": "field", "bin": True}
- assert chart.transform == [alt.BinTransform(**kwds)]
-
-    # calculate transform
- chart = alt.Chart().transform_calculate("calc", "datum.a * 4")
- kwds = {"as": "calc", "calculate": "datum.a * 4"}
- assert chart.transform == [alt.CalculateTransform(**kwds)]
-
- # impute transform
- chart = alt.Chart().transform_impute("field", "key", groupby=["x"])
- kwds = {"impute": "field", "key": "key", "groupby": ["x"]}
- assert chart.transform == [alt.ImputeTransform(**kwds)]
-
- # joinaggregate transform
- chart = alt.Chart().transform_joinaggregate(min="min(x)", groupby=["key"])
- kwds = {
- "joinaggregate": [
- alt.JoinAggregateFieldDef(field="x", op="min", **{"as": "min"})
- ],
- "groupby": ["key"],
- }
- assert chart.transform == [alt.JoinAggregateTransform(**kwds)]
-
- # filter transform
- chart = alt.Chart().transform_filter("datum.a < 4")
- assert chart.transform == [alt.FilterTransform(filter="datum.a < 4")]
-
- # flatten transform
- chart = alt.Chart().transform_flatten(["A", "B"], ["X", "Y"])
- kwds = {"as": ["X", "Y"], "flatten": ["A", "B"]}
- assert chart.transform == [alt.FlattenTransform(**kwds)]
-
- # fold transform
- chart = alt.Chart().transform_fold(["A", "B", "C"], as_=["key", "val"])
- kwds = {"as": ["key", "val"], "fold": ["A", "B", "C"]}
- assert chart.transform == [alt.FoldTransform(**kwds)]
-
- # lookup transform
- lookup_data = alt.LookupData(alt.UrlData("foo.csv"), "id", ["rate"])
- chart = alt.Chart().transform_lookup(
- from_=lookup_data, as_="a", lookup="a", default="b"
- )
- kwds = {"from": lookup_data, "as": "a", "lookup": "a", "default": "b"}
- assert chart.transform == [alt.LookupTransform(**kwds)]
-
- # sample transform
- chart = alt.Chart().transform_sample()
- assert chart.transform == [alt.SampleTransform(1000)]
-
- # stack transform
- chart = alt.Chart().transform_stack("stacked", "x", groupby=["y"])
- assert chart.transform == [
- alt.StackTransform(stack="x", groupby=["y"], **{"as": "stacked"})
- ]
-
- # timeUnit transform
- chart = alt.Chart().transform_timeunit("foo", field="x", timeUnit="date")
- kwds = {"as": "foo", "field": "x", "timeUnit": "date"}
- assert chart.transform == [alt.TimeUnitTransform(**kwds)]
-
- # window transform
- chart = alt.Chart().transform_window(xsum="sum(x)", ymin="min(y)", frame=[None, 0])
- window = [
- alt.WindowFieldDef(**{"as": "xsum", "field": "x", "op": "sum"}),
- alt.WindowFieldDef(**{"as": "ymin", "field": "y", "op": "min"}),
- ]
-
- # kwargs don't maintain order in Python < 3.6, so window list can
- # be reversed
- assert chart.transform == [
- alt.WindowTransform(frame=[None, 0], window=window)
- ] or chart.transform == [alt.WindowTransform(frame=[None, 0], window=window[::-1])]
-
-
-def test_filter_transform_selection_predicates():
- selector1 = alt.selection_interval(name="s1")
- selector2 = alt.selection_interval(name="s2")
- base = alt.Chart("data.txt").mark_point()
-
- chart = base.transform_filter(selector1)
- assert chart.to_dict()["transform"] == [{"filter": {"selection": "s1"}}]
-
- chart = base.transform_filter(~selector1)
- assert chart.to_dict()["transform"] == [{"filter": {"selection": {"not": "s1"}}}]
-
- chart = base.transform_filter(selector1 & selector2)
- assert chart.to_dict()["transform"] == [
- {"filter": {"selection": {"and": ["s1", "s2"]}}}
- ]
-
- chart = base.transform_filter(selector1 | selector2)
- assert chart.to_dict()["transform"] == [
- {"filter": {"selection": {"or": ["s1", "s2"]}}}
- ]
-
- chart = base.transform_filter(selector1 | ~selector2)
- assert chart.to_dict()["transform"] == [
- {"filter": {"selection": {"or": ["s1", {"not": "s2"}]}}}
- ]
-
- chart = base.transform_filter(~selector1 | ~selector2)
- assert chart.to_dict()["transform"] == [
- {"filter": {"selection": {"or": [{"not": "s1"}, {"not": "s2"}]}}}
- ]
-
- chart = base.transform_filter(~(selector1 & selector2))
- assert chart.to_dict()["transform"] == [
- {"filter": {"selection": {"not": {"and": ["s1", "s2"]}}}}
- ]
-
-
-def test_resolve_methods():
- chart = alt.LayerChart().resolve_axis(x="shared", y="independent")
- assert chart.resolve == alt.Resolve(
- axis=alt.AxisResolveMap(x="shared", y="independent")
- )
-
- chart = alt.LayerChart().resolve_legend(color="shared", fill="independent")
- assert chart.resolve == alt.Resolve(
- legend=alt.LegendResolveMap(color="shared", fill="independent")
- )
-
- chart = alt.LayerChart().resolve_scale(x="shared", y="independent")
- assert chart.resolve == alt.Resolve(
- scale=alt.ScaleResolveMap(x="shared", y="independent")
- )
-
-
-def test_layer_encodings():
- chart = alt.LayerChart().encode(x="column:Q")
- assert chart.encoding.x == alt.X(shorthand="column:Q")
-
-
-def test_add_selection():
- selections = [
- alt.selection_interval(),
- alt.selection_single(),
- alt.selection_multi(),
- ]
- chart = (
- alt.Chart()
- .mark_point()
- .add_selection(selections[0])
- .add_selection(selections[1], selections[2])
- )
- expected = {s.name: s.selection for s in selections}
- assert chart.selection == expected
-
-
-def test_repeat_add_selections():
- base = alt.Chart("data.csv").mark_point()
- selection = alt.selection_single()
- chart1 = base.add_selection(selection).repeat(list("ABC"))
- chart2 = base.repeat(list("ABC")).add_selection(selection)
- assert chart1.to_dict() == chart2.to_dict()
-
-
-def test_facet_add_selections():
- base = alt.Chart("data.csv").mark_point()
- selection = alt.selection_single()
- chart1 = base.add_selection(selection).facet("val:Q")
- chart2 = base.facet("val:Q").add_selection(selection)
- assert chart1.to_dict() == chart2.to_dict()
-
-
-def test_layer_add_selection():
- base = alt.Chart("data.csv").mark_point()
- selection = alt.selection_single()
- chart1 = alt.layer(base.add_selection(selection), base)
- chart2 = alt.layer(base, base).add_selection(selection)
- assert chart1.to_dict() == chart2.to_dict()
-
-
-@pytest.mark.parametrize("charttype", [alt.concat, alt.hconcat, alt.vconcat])
-def test_compound_add_selections(charttype):
- base = alt.Chart("data.csv").mark_point()
- selection = alt.selection_single()
- chart1 = charttype(base.add_selection(selection), base.add_selection(selection))
- chart2 = charttype(base, base).add_selection(selection)
- assert chart1.to_dict() == chart2.to_dict()
-
-
-def test_selection_property():
- sel = alt.selection_interval()
- chart = alt.Chart("data.csv").mark_point().properties(selection=sel)
-
- assert list(chart["selection"].keys()) == [sel.name]
-
-
-def test_LookupData():
- df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
- lookup = alt.LookupData(data=df, key="x")
-
- dct = lookup.to_dict()
- assert dct["key"] == "x"
- assert dct["data"] == {
- "values": [{"x": 1, "y": 4}, {"x": 2, "y": 5}, {"x": 3, "y": 6}]
- }
-
-
-def test_themes():
- chart = alt.Chart("foo.txt").mark_point()
- active = alt.themes.active
-
- try:
- alt.themes.enable("default")
- assert chart.to_dict()["config"] == {
- "mark": {"tooltip": None},
- "view": {"width": 400, "height": 300},
- }
-
- alt.themes.enable("opaque")
- assert chart.to_dict()["config"] == {
- "background": "white",
- "mark": {"tooltip": None},
- "view": {"width": 400, "height": 300},
- }
-
- alt.themes.enable("none")
- assert "config" not in chart.to_dict()
-
- finally:
- # re-enable the original active theme
- alt.themes.enable(active)
-
-
-def test_chart_from_dict():
- base = alt.Chart("data.csv").mark_point().encode(x="x:Q", y="y:Q")
-
- charts = [
- base,
- base + base,
- base | base,
- base & base,
- base.facet("c:N"),
- (base + base).facet(row="c:N", data="data.csv"),
- base.repeat(["c", "d"]),
- (base + base).repeat(row=["c", "d"]),
- ]
-
- for chart in charts:
- print(chart)
- chart_out = alt.Chart.from_dict(chart.to_dict())
- assert type(chart_out) is type(chart)
-
- # test that an invalid spec leads to a schema validation error
- with pytest.raises(jsonschema.ValidationError):
- alt.Chart.from_dict({"invalid": "spec"})
-
-
-def test_consolidate_datasets(basic_chart):
- subchart1 = basic_chart
- subchart2 = basic_chart.copy()
- subchart2.data = basic_chart.data.copy()
- chart = subchart1 | subchart2
-
- with alt.data_transformers.enable(consolidate_datasets=True):
- dct_consolidated = chart.to_dict()
-
- with alt.data_transformers.enable(consolidate_datasets=False):
- dct_standard = chart.to_dict()
-
- assert "datasets" in dct_consolidated
- assert "datasets" not in dct_standard
-
- datasets = dct_consolidated["datasets"]
-
- # two dataset copies should be recognized as duplicates
- assert len(datasets) == 1
-
- # make sure data matches original & names are correct
- name, data = datasets.popitem()
-
- for spec in dct_standard["hconcat"]:
- assert spec["data"]["values"] == data
-
- for spec in dct_consolidated["hconcat"]:
- assert spec["data"] == {"name": name}
-
-
-def test_consolidate_InlineData():
- data = alt.InlineData(
- values=[{"a": 1, "b": 1}, {"a": 2, "b": 2}], format={"type": "csv"}
- )
- chart = alt.Chart(data).mark_point()
-
- with alt.data_transformers.enable(consolidate_datasets=False):
- dct = chart.to_dict()
- assert dct["data"]["format"] == data.format
- assert dct["data"]["values"] == data.values
-
- with alt.data_transformers.enable(consolidate_datasets=True):
- dct = chart.to_dict()
- assert dct["data"]["format"] == data.format
- assert list(dct["datasets"].values())[0] == data.values
-
- data = alt.InlineData(values=[], name="runtime_data")
- chart = alt.Chart(data).mark_point()
-
- with alt.data_transformers.enable(consolidate_datasets=False):
- dct = chart.to_dict()
- assert dct["data"] == data.to_dict()
-
- with alt.data_transformers.enable(consolidate_datasets=True):
- dct = chart.to_dict()
- assert dct["data"] == data.to_dict()
-
-
-def test_deprecated_encodings():
- base = alt.Chart("data.txt").mark_point()
-
- with pytest.warns(AltairDeprecationWarning) as record:
- chart1 = base.encode(strokeOpacity=alt.Strokeopacity("x:Q")).to_dict()
- assert "alt.StrokeOpacity" in record[0].message.args[0]
- chart2 = base.encode(strokeOpacity=alt.StrokeOpacity("x:Q")).to_dict()
-
- assert chart1 == chart2
-
-
-def test_repeat():
- # wrapped repeat
- chart1 = (
- alt.Chart("data.csv")
- .mark_point()
- .encode(
- x=alt.X(alt.repeat(), type="quantitative"),
- y="y:Q",
- )
- .repeat(["A", "B", "C", "D"], columns=2)
- )
-
- dct1 = chart1.to_dict()
-
- assert dct1["repeat"] == ["A", "B", "C", "D"]
- assert dct1["columns"] == 2
- assert dct1["spec"]["encoding"]["x"]["field"] == {"repeat": "repeat"}
-
- # explicit row/col repeat
- chart2 = (
- alt.Chart("data.csv")
- .mark_point()
- .encode(
- x=alt.X(alt.repeat("row"), type="quantitative"),
- y=alt.Y(alt.repeat("column"), type="quantitative"),
- )
- .repeat(row=["A", "B", "C"], column=["C", "B", "A"])
- )
-
- dct2 = chart2.to_dict()
-
- assert dct2["repeat"] == {"row": ["A", "B", "C"], "column": ["C", "B", "A"]}
- assert "columns" not in dct2
- assert dct2["spec"]["encoding"]["x"]["field"] == {"repeat": "row"}
- assert dct2["spec"]["encoding"]["y"]["field"] == {"repeat": "column"}
-
-
-def test_data_property():
- data = pd.DataFrame({"x": [1, 2, 3], "y": list("ABC")})
- chart1 = alt.Chart(data).mark_point()
- chart2 = alt.Chart().mark_point().properties(data=data)
-
- assert chart1.to_dict() == chart2.to_dict()
-
-
-@pytest.mark.parametrize("method", ["layer", "hconcat", "vconcat", "concat"])
-@pytest.mark.parametrize(
- "data", ["data.json", pd.DataFrame({"x": range(3), "y": list("abc")})]
-)
-def test_subcharts_with_same_data(method, data):
- func = getattr(alt, method)
-
- point = alt.Chart(data).mark_point().encode(x="x:Q", y="y:Q")
- line = point.mark_line()
- text = point.mark_text()
-
- chart1 = func(point, line, text)
- assert chart1.data is not alt.Undefined
- assert all(c.data is alt.Undefined for c in getattr(chart1, method))
-
- if method != "concat":
- op = OP_DICT[method]
- chart2 = op(op(point, line), text)
- assert chart2.data is not alt.Undefined
- assert all(c.data is alt.Undefined for c in getattr(chart2, method))
-
-
-@pytest.mark.parametrize("method", ["layer", "hconcat", "vconcat", "concat"])
-@pytest.mark.parametrize(
- "data", ["data.json", pd.DataFrame({"x": range(3), "y": list("abc")})]
-)
-def test_subcharts_different_data(method, data):
- func = getattr(alt, method)
-
- point = alt.Chart(data).mark_point().encode(x="x:Q", y="y:Q")
- otherdata = alt.Chart("data.csv").mark_point().encode(x="x:Q", y="y:Q")
- nodata = alt.Chart().mark_point().encode(x="x:Q", y="y:Q")
-
- chart1 = func(point, otherdata)
- assert chart1.data is alt.Undefined
- assert getattr(chart1, method)[0].data is data
-
- chart2 = func(point, nodata)
- assert chart2.data is alt.Undefined
- assert getattr(chart2, method)[0].data is data
-
-
-def test_layer_facet(basic_chart):
- chart = (basic_chart + basic_chart).facet(row="row:Q")
- assert chart.data is not alt.Undefined
- assert chart.spec.data is alt.Undefined
- for layer in chart.spec.layer:
- assert layer.data is alt.Undefined
-
- dct = chart.to_dict()
- assert "data" in dct
-
-
-def test_layer_errors():
- toplevel_chart = alt.Chart("data.txt").mark_point().configure_legend(columns=2)
-
- facet_chart1 = alt.Chart("data.txt").mark_point().encode(facet="row:Q")
-
- facet_chart2 = alt.Chart("data.txt").mark_point().facet("row:Q")
-
- repeat_chart = alt.Chart("data.txt").mark_point().repeat(["A", "B", "C"])
-
- simple_chart = alt.Chart("data.txt").mark_point()
-
- with pytest.raises(ValueError) as err:
- toplevel_chart + simple_chart
- assert str(err.value).startswith(
- 'Objects with "config" attribute cannot be used within LayerChart.'
- )
-
- with pytest.raises(ValueError) as err:
- repeat_chart + simple_chart
- assert str(err.value) == "Repeat charts cannot be layered."
-
- with pytest.raises(ValueError) as err:
- facet_chart1 + simple_chart
- assert str(err.value) == "Faceted charts cannot be layered."
-
- with pytest.raises(ValueError) as err:
- alt.layer(simple_chart) + facet_chart2
- assert str(err.value) == "Faceted charts cannot be layered."
-
-
-@pytest.mark.parametrize(
- "chart_type",
- ["layer", "hconcat", "vconcat", "concat", "facet", "facet_encoding", "repeat"],
-)
-def test_resolve(chart_type):
- chart = _make_chart_type(chart_type)
- chart = (
- chart.resolve_scale(
- x="independent",
- )
- .resolve_legend(color="independent")
- .resolve_axis(y="independent")
- )
- dct = chart.to_dict()
- assert dct["resolve"] == {
- "scale": {"x": "independent"},
- "legend": {"color": "independent"},
- "axis": {"y": "independent"},
- }
-
-
-# TODO: test vconcat, hconcat, concat when schema allows them.
-# This is blocked by https://github.com/vega/vega-lite/issues/5261
-@pytest.mark.parametrize("chart_type", ["chart", "layer", "facet_encoding"])
-@pytest.mark.parametrize("facet_arg", [None, "facet", "row", "column"])
-def test_facet(chart_type, facet_arg):
- chart = _make_chart_type(chart_type)
- if facet_arg is None:
- chart = chart.facet("color:N", columns=2)
- else:
- chart = chart.facet(**{facet_arg: "color:N", "columns": 2})
- dct = chart.to_dict()
-
- assert "spec" in dct
- assert dct["columns"] == 2
- expected = {"field": "color", "type": "nominal"}
- if facet_arg is None or facet_arg == "facet":
- assert dct["facet"] == expected
- else:
- assert dct["facet"][facet_arg] == expected
-
-
-def test_sequence():
- data = alt.sequence(100)
- assert data.to_dict() == {"sequence": {"start": 0, "stop": 100}}
-
- data = alt.sequence(5, 10)
- assert data.to_dict() == {"sequence": {"start": 5, "stop": 10}}
-
- data = alt.sequence(0, 1, 0.1, as_="x")
- assert data.to_dict() == {
- "sequence": {"start": 0, "stop": 1, "step": 0.1, "as": "x"}
- }
-
-
-def test_graticule():
- data = alt.graticule()
- assert data.to_dict() == {"graticule": True}
-
- data = alt.graticule(step=[15, 15])
- assert data.to_dict() == {"graticule": {"step": [15, 15]}}
-
-
-def test_sphere():
- data = alt.sphere()
- assert data.to_dict() == {"sphere": True}
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attr/validators.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attr/validators.py
deleted file mode 100644
index 0b0c8342f2528678c1ab84b027abb12175f59fc7..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attr/validators.py
+++ /dev/null
@@ -1,561 +0,0 @@
-# SPDX-License-Identifier: MIT
-
-"""
-Commonly useful validators.
-"""
-
-from __future__ import absolute_import, division, print_function
-
-import operator
-import re
-
-from contextlib import contextmanager
-
-from ._config import get_run_validators, set_run_validators
-from ._make import _AndValidator, and_, attrib, attrs
-from .exceptions import NotCallableError
-
-
-try:
- Pattern = re.Pattern
-except AttributeError: # Python <3.7 lacks a Pattern type.
- Pattern = type(re.compile(""))
-
-
-__all__ = [
- "and_",
- "deep_iterable",
- "deep_mapping",
- "disabled",
- "ge",
- "get_disabled",
- "gt",
- "in_",
- "instance_of",
- "is_callable",
- "le",
- "lt",
- "matches_re",
- "max_len",
- "optional",
- "provides",
- "set_disabled",
-]
-
-
-def set_disabled(disabled):
- """
- Globally disable or enable running validators.
-
- By default, they are run.
-
- :param disabled: If ``True``, disable running all validators.
- :type disabled: bool
-
- .. warning::
-
- This function is not thread-safe!
-
- .. versionadded:: 21.3.0
- """
- set_run_validators(not disabled)
-
-
-def get_disabled():
- """
- Return a bool indicating whether validators are currently disabled or not.
-
- :return: ``True`` if validators are currently disabled.
- :rtype: bool
-
- .. versionadded:: 21.3.0
- """
- return not get_run_validators()
-
-
-@contextmanager
-def disabled():
- """
- Context manager that disables running validators within its context.
-
- .. warning::
-
- This context manager is not thread-safe!
-
- .. versionadded:: 21.3.0
- """
- set_run_validators(False)
- try:
- yield
- finally:
- set_run_validators(True)
-
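# Editor's sketch, not part of the original attrs source: the helpers above let
# callers switch validation off temporarily, e.g. while bulk-loading data that is
# already known to be valid. ``attr`` below is the installed attrs package.
import attr
from attr import validators

@attr.s
class _Point(object):
    x = attr.ib(validator=validators.instance_of(int))

with validators.disabled():
    _Point(x="not an int")                     # validators are skipped, so no TypeError
assert validators.get_disabled() is False      # validation is re-enabled after the block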
-
-@attrs(repr=False, slots=True, hash=True)
-class _InstanceOfValidator(object):
- type = attrib()
-
- def __call__(self, inst, attr, value):
- """
- We use a callable class to be able to change the ``__repr__``.
- """
- if not isinstance(value, self.type):
- raise TypeError(
- "'{name}' must be {type!r} (got {value!r} that is a "
- "{actual!r}).".format(
- name=attr.name,
- type=self.type,
- actual=value.__class__,
- value=value,
- ),
- attr,
- self.type,
- value,
- )
-
- def __repr__(self):
-        return "<instance_of validator for type {type!r}>".format(
- type=self.type
- )
-
-
-def instance_of(type):
- """
- A validator that raises a `TypeError` if the initializer is called
- with a wrong type for this particular attribute (checks are performed using
- `isinstance` therefore it's also valid to pass a tuple of types).
-
- :param type: The type to check for.
- :type type: type or tuple of types
-
- :raises TypeError: With a human readable error message, the attribute
- (of type `attrs.Attribute`), the expected type, and the value it
- got.
- """
- return _InstanceOfValidator(type)
-
-
-@attrs(repr=False, frozen=True, slots=True)
-class _MatchesReValidator(object):
- pattern = attrib()
- match_func = attrib()
-
- def __call__(self, inst, attr, value):
- """
- We use a callable class to be able to change the ``__repr__``.
- """
- if not self.match_func(value):
- raise ValueError(
- "'{name}' must match regex {pattern!r}"
- " ({value!r} doesn't)".format(
- name=attr.name, pattern=self.pattern.pattern, value=value
- ),
- attr,
- self.pattern,
- value,
- )
-
- def __repr__(self):
-        return "<matches_re validator for pattern {pattern!r}>".format(
- pattern=self.pattern
- )
-
-
-def matches_re(regex, flags=0, func=None):
- r"""
- A validator that raises `ValueError` if the initializer is called
- with a string that doesn't match *regex*.
-
- :param regex: a regex string or precompiled pattern to match against
- :param int flags: flags that will be passed to the underlying re function
- (default 0)
- :param callable func: which underlying `re` function to call (options
- are `re.fullmatch`, `re.search`, `re.match`, default
- is ``None`` which means either `re.fullmatch` or an emulation of
- it on Python 2). For performance reasons, they won't be used directly
- but on a pre-`re.compile`\ ed pattern.
-
- .. versionadded:: 19.2.0
- .. versionchanged:: 21.3.0 *regex* can be a pre-compiled pattern.
- """
- fullmatch = getattr(re, "fullmatch", None)
- valid_funcs = (fullmatch, None, re.search, re.match)
- if func not in valid_funcs:
- raise ValueError(
- "'func' must be one of {}.".format(
- ", ".join(
- sorted(
- e and e.__name__ or "None" for e in set(valid_funcs)
- )
- )
- )
- )
-
- if isinstance(regex, Pattern):
- if flags:
- raise TypeError(
- "'flags' can only be used with a string pattern; "
- "pass flags to re.compile() instead"
- )
- pattern = regex
- else:
- pattern = re.compile(regex, flags)
-
- if func is re.match:
- match_func = pattern.match
- elif func is re.search:
- match_func = pattern.search
- elif fullmatch:
- match_func = pattern.fullmatch
- else: # Python 2 fullmatch emulation (https://bugs.python.org/issue16203)
- pattern = re.compile(
- r"(?:{})\Z".format(pattern.pattern), pattern.flags
- )
- match_func = pattern.match
-
- return _MatchesReValidator(pattern, match_func)
-
-
-@attrs(repr=False, slots=True, hash=True)
-class _ProvidesValidator(object):
- interface = attrib()
-
- def __call__(self, inst, attr, value):
- """
- We use a callable class to be able to change the ``__repr__``.
- """
- if not self.interface.providedBy(value):
- raise TypeError(
- "'{name}' must provide {interface!r} which {value!r} "
- "doesn't.".format(
- name=attr.name, interface=self.interface, value=value
- ),
- attr,
- self.interface,
- value,
- )
-
- def __repr__(self):
-        return "<provides validator for interface {interface!r}>".format(
- interface=self.interface
- )
-
-
-def provides(interface):
- """
- A validator that raises a `TypeError` if the initializer is called
- with an object that does not provide the requested *interface* (checks are
- performed using ``interface.providedBy(value)`` (see `zope.interface
-    <https://zopeinterface.readthedocs.io/en/latest/>`_).
-
- :param interface: The interface to check for.
- :type interface: ``zope.interface.Interface``
-
- :raises TypeError: With a human readable error message, the attribute
- (of type `attrs.Attribute`), the expected interface, and the
- value it got.
- """
- return _ProvidesValidator(interface)
-
-
-@attrs(repr=False, slots=True, hash=True)
-class _OptionalValidator(object):
- validator = attrib()
-
- def __call__(self, inst, attr, value):
- if value is None:
- return
-
- self.validator(inst, attr, value)
-
- def __repr__(self):
- return "<optional validator for {what} or None>".format(
- what=repr(self.validator)
- )
-
-
-def optional(validator):
- """
- A validator that makes an attribute optional. An optional attribute is one
- which can be set to ``None`` in addition to satisfying the requirements of
- the sub-validator.
-
- :param validator: A validator (or a list of validators) that is used for
- non-``None`` values.
- :type validator: callable or `list` of callables.
-
- .. versionadded:: 15.1.0
- .. versionchanged:: 17.1.0 *validator* can be a list of validators.
- """
- if isinstance(validator, list):
- return _OptionalValidator(_AndValidator(validator))
- return _OptionalValidator(validator)
-
-
-@attrs(repr=False, slots=True, hash=True)
-class _InValidator(object):
- options = attrib()
-
- def __call__(self, inst, attr, value):
- try:
- in_options = value in self.options
- except TypeError: # e.g. `1 in "abc"`
- in_options = False
-
- if not in_options:
- raise ValueError(
- "'{name}' must be in {options!r} (got {value!r})".format(
- name=attr.name, options=self.options, value=value
- )
- )
-
- def __repr__(self):
- return "<in_ validator with options {options!r}>".format(
- options=self.options
- )
-
-
-def in_(options):
- """
- A validator that raises a `ValueError` if the initializer is called
- with a value that does not belong in the options provided. The check is
- performed using ``value in options``.
-
- :param options: Allowed options.
- :type options: list, tuple, `enum.Enum`, ...
-
- :raises ValueError: With a human readable error message, the attribute (of
- type `attrs.Attribute`), the expected options, and the value it
- got.
-
- .. versionadded:: 17.1.0
- """
- return _InValidator(options)
-
-
-@attrs(repr=False, slots=False, hash=True)
-class _IsCallableValidator(object):
- def __call__(self, inst, attr, value):
- """
- We use a callable class to be able to change the ``__repr__``.
- """
- if not callable(value):
- message = (
- "'{name}' must be callable "
- "(got {value!r} that is a {actual!r})."
- )
- raise NotCallableError(
- msg=message.format(
- name=attr.name, value=value, actual=value.__class__
- ),
- value=value,
- )
-
- def __repr__(self):
- return "<is_callable validator>"
-
-
-def is_callable():
- """
- A validator that raises a `attr.exceptions.NotCallableError` if the
- initializer is called with a value for this particular attribute
- that is not callable.
-
- .. versionadded:: 19.1.0
-
- :raises `attr.exceptions.NotCallableError`: With a human readable error
- message containing the attribute (`attrs.Attribute`) name,
- and the value it got.
- """
- return _IsCallableValidator()
-
-
-@attrs(repr=False, slots=True, hash=True)
-class _DeepIterable(object):
- member_validator = attrib(validator=is_callable())
- iterable_validator = attrib(
- default=None, validator=optional(is_callable())
- )
-
- def __call__(self, inst, attr, value):
- """
- We use a callable class to be able to change the ``__repr__``.
- """
- if self.iterable_validator is not None:
- self.iterable_validator(inst, attr, value)
-
- for member in value:
- self.member_validator(inst, attr, member)
-
- def __repr__(self):
- iterable_identifier = (
- ""
- if self.iterable_validator is None
- else " {iterable!r}".format(iterable=self.iterable_validator)
- )
- return (
- "<deep_iterable validator for{iterable_identifier}"
- " iterables of {member!r}>"
- ).format(
- iterable_identifier=iterable_identifier,
- member=self.member_validator,
- )
-
-
-def deep_iterable(member_validator, iterable_validator=None):
- """
- A validator that performs deep validation of an iterable.
-
- :param member_validator: Validator to apply to iterable members
- :param iterable_validator: Validator to apply to iterable itself
- (optional)
-
- .. versionadded:: 19.1.0
-
- :raises TypeError: if any sub-validators fail
- """
- return _DeepIterable(member_validator, iterable_validator)
-
-
-@attrs(repr=False, slots=True, hash=True)
-class _DeepMapping(object):
- key_validator = attrib(validator=is_callable())
- value_validator = attrib(validator=is_callable())
- mapping_validator = attrib(default=None, validator=optional(is_callable()))
-
- def __call__(self, inst, attr, value):
- """
- We use a callable class to be able to change the ``__repr__``.
- """
- if self.mapping_validator is not None:
- self.mapping_validator(inst, attr, value)
-
- for key in value:
- self.key_validator(inst, attr, key)
- self.value_validator(inst, attr, value[key])
-
- def __repr__(self):
- return (
- "<deep_mapping validator for objects mapping {key!r} to {value!r}>"
- ).format(key=self.key_validator, value=self.value_validator)
-
-
-def deep_mapping(key_validator, value_validator, mapping_validator=None):
- """
- A validator that performs deep validation of a dictionary.
-
- :param key_validator: Validator to apply to dictionary keys
- :param value_validator: Validator to apply to dictionary values
- :param mapping_validator: Validator to apply to top-level mapping
- attribute (optional)
-
- .. versionadded:: 19.1.0
-
- :raises TypeError: if any sub-validators fail
- """
- return _DeepMapping(key_validator, value_validator, mapping_validator)
-
-
-@attrs(repr=False, frozen=True, slots=True)
-class _NumberValidator(object):
- bound = attrib()
- compare_op = attrib()
- compare_func = attrib()
-
- def __call__(self, inst, attr, value):
- """
- We use a callable class to be able to change the ``__repr__``.
- """
- if not self.compare_func(value, self.bound):
- raise ValueError(
- "'{name}' must be {op} {bound}: {value}".format(
- name=attr.name,
- op=self.compare_op,
- bound=self.bound,
- value=value,
- )
- )
-
- def __repr__(self):
- return "<Validator for x {op} {bound}>".format(
- op=self.compare_op, bound=self.bound
- )
-
-
-def lt(val):
- """
- A validator that raises `ValueError` if the initializer is called
- with a number larger or equal to *val*.
-
- :param val: Exclusive upper bound for values
-
- .. versionadded:: 21.3.0
- """
- return _NumberValidator(val, "<", operator.lt)
-
-
-def le(val):
- """
- A validator that raises `ValueError` if the initializer is called
- with a number greater than *val*.
-
- :param val: Inclusive upper bound for values
-
- .. versionadded:: 21.3.0
- """
- return _NumberValidator(val, "<=", operator.le)
-
-
-def ge(val):
- """
- A validator that raises `ValueError` if the initializer is called
- with a number smaller than *val*.
-
- :param val: Inclusive lower bound for values
-
- .. versionadded:: 21.3.0
- """
- return _NumberValidator(val, ">=", operator.ge)
-
-
-def gt(val):
- """
- A validator that raises `ValueError` if the initializer is called
- with a number smaller or equal to *val*.
-
- :param val: Exclusive lower bound for values
-
- .. versionadded:: 21.3.0
- """
- return _NumberValidator(val, ">", operator.gt)
-
-
-@attrs(repr=False, frozen=True, slots=True)
-class _MaxLengthValidator(object):
- max_length = attrib()
-
- def __call__(self, inst, attr, value):
- """
- We use a callable class to be able to change the ``__repr__``.
- """
- if len(value) > self.max_length:
- raise ValueError(
- "Length of '{name}' must be <= {max}: {len}".format(
- name=attr.name, max=self.max_length, len=len(value)
- )
- )
-
- def __repr__(self):
- return "<max_len validator for {max}>".format(max=self.max_length)
-
-
-def max_len(length):
- """
- A validator that raises `ValueError` if the initializer is called
- with a string or iterable that is longer than *length*.
-
- :param int length: Maximum length of the string or iterable
-
- .. versionadded:: 21.3.0
- """
- return _MaxLengthValidator(length)
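
The module above only defines the validator factories. A minimal usage sketch follows, assuming a recent standalone `attrs` release (>= 21.3) that exposes these helpers as `attr.validators`; the `Server` class and its fields are illustrative and not part of this repository.

```python
# Sketch only: assumes attrs >= 21.3 installed as `attr`; the Server class
# and its attribute names are invented for illustration.
import attr
from attr import validators


@attr.s
class Server(object):
    # must be a str and look like a bare hostname
    host = attr.ib(
        validator=[
            validators.instance_of(str),
            validators.matches_re(r"[a-z0-9.-]+"),
        ]
    )
    # must be an int in the inclusive range 1..65535
    port = attr.ib(
        validator=[validators.instance_of(int), validators.ge(1), validators.le(65535)]
    )
    # either None or a list of at most 10 strings
    tags = attr.ib(
        default=None,
        validator=validators.optional(
            [
                validators.deep_iterable(
                    member_validator=validators.instance_of(str),
                    iterable_validator=validators.instance_of(list),
                ),
                validators.max_len(10),
            ]
        ),
    )
    scheme = attr.ib(default="https", validator=validators.in_({"http", "https"}))


Server(host="example.com", port=443, tags=["prod"])  # passes
# Server(host="example.com", port=0)                 # raises ValueError from ge(1)
```
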
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/wav2vec_criterion.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/wav2vec_criterion.py
deleted file mode 100644
index e37274d5a81a10a3b07114325b7cc2abe5cf56d9..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/wav2vec_criterion.py
+++ /dev/null
@@ -1,230 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-from typing import List, Optional
-
-import torch
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from fairseq.logging.meters import safe_round
-from fairseq.utils import is_xla_tensor
-
-
-@dataclass
-class Wav2VecCriterionConfig(FairseqDataclass):
- infonce: bool = field(
- default=False,
- metadata={
- "help": "if set, uses cross entropy instead of binary cross entropy (i.e. InfoNCE loss)"
- },
- )
- loss_weights: Optional[List[float]] = field(
- default=None,
- metadata={"help": "weights for additional loss terms (not first one)"},
- )
- log_keys: List[str] = field(
- default_factory=lambda: [],
- metadata={"help": "output keys to log"},
- )
-
-
-@register_criterion("wav2vec", dataclass=Wav2VecCriterionConfig)
-class Wav2vecCriterion(FairseqCriterion):
- def __init__(self, task, infonce=False, loss_weights=None, log_keys=None):
- super().__init__(task)
- self.infonce = infonce
- self.loss_weights = loss_weights
- self.log_keys = [] if log_keys is None else log_keys
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- net_output = model(**sample["net_input"])
- logits = model.get_logits(net_output).float()
- target = model.get_targets(sample, net_output)
- self.xla = is_xla_tensor(logits)
-
- # XXX: handle weights on xla.
- weights = None
- if hasattr(model, "get_target_weights") and not self.infonce:
- weights = model.get_target_weights(target, net_output)
- if torch.is_tensor(weights):
- weights = weights.float()
-
- losses = []
-
- reduction = "none" if ((not reduce) or self.xla) else "sum"
- if self.infonce:
- loss = F.cross_entropy(logits, target, reduction=reduction)
- else:
- loss = F.binary_cross_entropy_with_logits(
- logits, target.float(), weights, reduction=reduction
- )
-
- if self.xla:
- # tpu-comment: since dynamic shapes lead to recompilations on xla,
- # we don't shrink tensors using mask_indices.
- # Instead, we use mask indices to adjust loss.
- mi = (
- sample["net_input"]["mask_indices"]
- .transpose(0, 1) # logits are transposed in `model.get_logits`
- .reshape(logits.size(0))
- )
- loss = (loss * mi).sum() if reduce else (loss * mi)
-
- if "sample_size" in sample:
- sample_size = sample["sample_size"]
- elif "mask_indices" in sample["net_input"]:
- sample_size = sample["net_input"]["mask_indices"].sum()
- else:
- sample_size = target.numel() if self.infonce else target.long().sum().item()
- losses.append(loss.detach().clone())
-
- if self.loss_weights is not None:
- assert hasattr(model, "get_extra_losses")
- extra_losses = model.get_extra_losses(net_output)
- if torch.is_tensor(extra_losses):
- extra_losses = [extra_losses]
- if len(self.loss_weights) == 1 and len(extra_losses) != 1:
- self.loss_weights = [self.loss_weights[0]] * len(extra_losses)
- assert len(extra_losses) == len(
- self.loss_weights
- ), f"{len(extra_losses)}, {len(self.loss_weights)}"
- for p, coef in zip(extra_losses, self.loss_weights):
- if coef != 0 and p is not None:
- p = coef * p.float() * sample_size
- loss += p
- losses.append(p)
-
- logging_output = {
- "loss": loss.item() if (reduce and not self.xla) else loss.detach(),
- "ntokens": sample_size,
- "nsentences": sample["id"].numel(),
- "sample_size": sample_size,
- }
-
- for lk in self.log_keys:
- # Only store "logits" and "target" for computing MAP and MAUC
- # during validation
- if lk == "logits":
- if not self.training:
- logging_output["logits"] = logits.cpu().numpy()
- elif lk == "target":
- if not self.training:
- # If the targets have been mixed with the predictions of
- # teacher models, find the original targets
- if hasattr(model, "get_original_targets"):
- original_target = model.get_original_targets(sample, net_output)
- else:
- original_target = target
- logging_output["target"] = original_target.cpu().numpy()
- elif lk in net_output:
- value = net_output[lk]
- if not is_xla_tensor(value):
- value = float(value)
- logging_output[lk] = value
-
- if len(losses) > 1:
- for i, l in enumerate(losses):
- logging_output[f"loss_{i}"] = l.item() if not self.xla else l.detach()
-
- if self.infonce:
- with torch.no_grad():
- if logits.numel() == 0:
- corr = 0
- count = 0
- else:
- assert logits.dim() > 1, logits.shape
- max = logits.argmax(-1) == 0
- min = logits.argmin(-1) == 0
- if is_xla_tensor(logits):
- max, min = max * mi, min * mi
- both = max & min
- corr = max.long().sum() - both.long().sum()
- count = mi.sum()
- else:
- both = max & min
- corr = max.long().sum().item() - both.long().sum().item()
- count = float(max.numel())
-
- logging_output["correct"] = corr
- logging_output["count"] = count
-
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs))
- ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs))
- nsentences = utils.item(
- sum(log.get("nsentences", 0) for log in logging_outputs)
- )
- sample_size = utils.item(
- sum(log.get("sample_size", 0) for log in logging_outputs)
- )
-
- metrics.log_scalar(
- "loss", loss_sum / (sample_size or 1) / math.log(2), sample_size, round=3
- )
- metrics.log_scalar("ntokens", ntokens)
- metrics.log_scalar("nsentences", nsentences)
-
- correct = sum(log.get("correct", 0) for log in logging_outputs)
- metrics.log_scalar("_correct", correct)
-
- total = sum(log.get("count", 0) for log in logging_outputs)
- metrics.log_scalar("_total", total)
-
- if total > 0:
- metrics.log_derived(
- "accuracy",
- lambda meters: safe_round(
- meters["_correct"].sum / meters["_total"].sum, 5
- )
- if meters["_total"].sum > 0
- else float("nan"),
- )
-
- builtin_keys = {
- "loss",
- "ntokens",
- "nsentences",
- "sample_size",
- "correct",
- "count",
- }
-
- for k in logging_outputs[0]:
- if k not in builtin_keys:
- val = sum(log.get(k, 0) for log in logging_outputs)
- if k.startswith("loss"):
- metrics.log_scalar(
- k, val / (sample_size or 1) / math.log(2), sample_size, round=3
- )
- else:
- metrics.log_scalar(k, val / len(logging_outputs), round=3)
-
- # FIXME: revert when gather based xla reduction is implemented
- # @staticmethod
- # def logging_outputs_can_be_summed() -> bool:
- def logging_outputs_can_be_summed(self) -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
- to True will improve distributed training speed.
- """
- # XXX: Gather based reduction not implemented for xla yet.
- # So we fall to sum based reduction for xla.
- return self.xla
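
For reference, the two loss branches in `forward` above reduce to the following standalone computation. The shapes, the random tensors, and the convention that the positive candidate sits at index 0 (which the logging block above also relies on) are illustrative assumptions, not values taken from a real fairseq run.

```python
# Illustrative sketch of the two loss modes in Wav2vecCriterion.forward();
# shapes and random tensors are invented for demonstration purposes.
import torch
import torch.nn.functional as F

num_masked, num_candidates = 8, 1 + 100   # 1 positive + 100 distractors

# InfoNCE mode: logits are (masked_timesteps, candidates); the positive
# candidate is at index 0, so the target is a vector of zeros.
logits = torch.randn(num_masked, num_candidates)
target = torch.zeros(num_masked, dtype=torch.long)
infonce_loss = F.cross_entropy(logits, target, reduction="sum")

# BCE mode: logits and targets are flat, targets are 0/1 floats.
bce_logits = torch.randn(num_masked * num_candidates)
bce_target = torch.zeros_like(bce_logits)
bce_target[::num_candidates] = 1.0        # mark the positives
bce_loss = F.binary_cross_entropy_with_logits(
    bce_logits, bce_target, reduction="sum"
)

# Accuracy bookkeeping mirrors the logging block above: a prediction is
# "correct" when the positive (index 0) has the highest score.
correct = (logits.argmax(-1) == 0).long().sum().item()
print(infonce_loss.item(), bce_loss.item(), correct, "/", num_masked)
```
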
diff --git a/spaces/atticus/image-text-retrival-huster/misc/utils.py b/spaces/atticus/image-text-retrival-huster/misc/utils.py
deleted file mode 100644
index 4e3915d0feb3331032e910e173081fd3c0b3de62..0000000000000000000000000000000000000000
--- a/spaces/atticus/image-text-retrival-huster/misc/utils.py
+++ /dev/null
@@ -1,195 +0,0 @@
-"""
-****************** COPYRIGHT AND CONFIDENTIALITY INFORMATION ******************
-Copyright (c) 2018 [Thomson Licensing]
-All Rights Reserved
-This program contains proprietary information which is a trade secret/business \
-secret of [Thomson Licensing] and is protected, even if unpublished, under \
-applicable Copyright laws (including French droit d'auteur) and/or may be \
-subject to one or more patent(s).
-Recipient is to retain this program in confidence and is not permitted to use \
-or make copies thereof other than as permitted in a written agreement with \
-[Thomson Licensing] unless otherwise expressly allowed by applicable laws or \
-by [Thomson Licensing] under express agreement.
-Thomson Licensing is a company of the group TECHNICOLOR
-*******************************************************************************
-This scripts permits one to reproduce training and experiments of:
- Engilberge, M., Chevallier, L., Pérez, P., & Cord, M. (2018, April).
- Finding beans in burgers: Deep semantic-visual embedding with localization.
- In Proceedings of CVPR (pp. 3984-3993)
-
-Author: Martin Engilberge
-"""
-
-import os
-
-import nltk
-import pickle
-import torch
-
-from nltk.tokenize import word_tokenize
-from torch.autograd import Variable
-from torch.nn.utils.rnn import pad_sequence
-
-from PIL import Image
-import matplotlib.pyplot as plt
-
-class AverageMeter(object):
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
-
-class Namespace:
- """ Namespace class to manually instantiate joint_embedding model """
- def __init__(self, **kwargs):
- self.__dict__.update(kwargs)
-
-
-def _load_dictionary(dir_st):
- path_dico = os.path.join(dir_st, 'dictionary.txt')
- if not os.path.exists(path_dico):
- print("Invalid path: no dictionary found")
- with open(path_dico, 'r') as handle:
- dico_list = handle.readlines()
- dico = {word.strip(): idx for idx, word in enumerate(dico_list)}
- return dico
-
-
-def preprocess(text):
- sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
- sents = sent_detector.tokenize(text)
- result = list()
- for s in sents:
- tokens = word_tokenize(s)
- result.append(tokens)
-
- return result
-
-
-def flatten(l):
- return [item for sublist in l for item in sublist]
-
-
-def encode_sentences(sents, embed, dico):
- sents_list = list()
- for sent in sents:
- sent_tok = preprocess(sent)[0]
- sent_in = Variable(torch.FloatTensor(1, len(sent_tok), 620))
- for i, w in enumerate(sent_tok):
- try:
- sent_in.data[0, i] = torch.from_numpy(embed[dico[w]])
- except KeyError:
- sent_in.data[0, i] = torch.from_numpy(embed[dico["UNK"]])
-
- sents_list.append(sent_in)
- return sents_list
-
-
-def encode_sentence(sent, embed, dico, tokenize=True):
- if tokenize:
- sent_tok = preprocess(sent)[0]
- else:
- sent_tok = sent
-
- sent_in = torch.FloatTensor(len(sent_tok), 620)
-
- for i, w in enumerate(sent_tok):
- try:
- sent_in[i, :620] = torch.from_numpy(embed[dico[w]])
- except KeyError:
- sent_in[i, :620] = torch.from_numpy(embed[dico["UNK"]])
-
- return sent_in
-
-
-def save_checkpoint(state, is_best, model_name, epoch):
- if is_best:
- torch.save(state, './weights/best_' + model_name + ".pth.tar")
-
-
-def log_epoch(logger, epoch, train_loss, val_loss, lr, batch_train, batch_val, data_train, data_val, recall):
- logger.add_scalar('Loss/Train', train_loss, epoch)
- logger.add_scalar('Loss/Val', val_loss, epoch)
- logger.add_scalar('Learning/Rate', lr, epoch)
- logger.add_scalar('Learning/Overfitting', val_loss / train_loss, epoch)
- logger.add_scalar('Time/Train/Batch Processing', batch_train, epoch)
- logger.add_scalar('Time/Val/Batch Processing', batch_val, epoch)
- logger.add_scalar('Time/Train/Data loading', data_train, epoch)
- logger.add_scalar('Time/Val/Data loading', data_val, epoch)
- logger.add_scalar('Recall/Val/CapRet/R@1', recall[0][0], epoch)
- logger.add_scalar('Recall/Val/CapRet/R@5', recall[0][1], epoch)
- logger.add_scalar('Recall/Val/CapRet/R@10', recall[0][2], epoch)
- logger.add_scalar('Recall/Val/CapRet/MedR', recall[2], epoch)
- logger.add_scalar('Recall/Val/ImgRet/R@1', recall[1][0], epoch)
- logger.add_scalar('Recall/Val/ImgRet/R@5', recall[1][1], epoch)
- logger.add_scalar('Recall/Val/ImgRet/R@10', recall[1][2], epoch)
- logger.add_scalar('Recall/Val/ImgRet/MedR', recall[3], epoch)
-
-
-def collate_fn_padded(data):
- images, captions = zip(*data)
-
- images = torch.stack(images, 0)
-
- lengths = [len(cap) for cap in captions]
- targets = pad_sequence(captions, batch_first=True)
-
- return images, targets, lengths
-
-
-def collate_fn_cap_padded(data):
- captions = data
-
- lengths = [len(cap) for cap in captions]
- targets = pad_sequence(captions, batch_first=True)
-
- return targets, lengths
-
-
-def collate_fn_semseg(data):
- images, size, targets = zip(*data)
- images = torch.stack(images, 0)
-
- return images, size, targets
-
-
-def collate_fn_img_padded(data):
- images = data
- images = torch.stack(images, 0)
-
- return images
-
-
-def load_obj(path):
- with open(os.path.normpath(path + '.pkl'), 'rb') as f:
- return pickle.load(f)
-
-
-def save_obj(obj, path):
- with open(os.path.normpath(path + '.pkl'), 'wb') as f:
- pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
-
-def show_imgs(imgs_path):
- plt.ion()
- for i, img_path in enumerate(imgs_path):
- img = Image.open(img_path)
- plt.figure("Image") # name of the image window
- plt.imshow(img)
- plt.axis('on') # set to 'off' to hide the axes
- plt.title('image_{}'.format(i)) # image title
- plt.ioff()
- plt.show()
- plt.close()
-
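
A small sketch of how `AverageMeter` and `collate_fn_padded` above are meant to be used. The toy dataset, the tensor shapes, and the `misc.utils` import path (matching this deleted file's location) are assumptions for illustration only.

```python
# Illustrative sketch: the toy dataset and shapes are invented; the import
# path assumes the module layout of this repository (misc/utils.py).
import torch
from torch.utils.data import DataLoader

from misc.utils import AverageMeter, collate_fn_padded

# Pretend items: (image tensor, caption as a sequence of 620-d word embeddings).
data = [
    (torch.zeros(3, 224, 224), torch.zeros(5, 620)),
    (torch.zeros(3, 224, 224), torch.zeros(8, 620)),
]

loader = DataLoader(data, batch_size=2, collate_fn=collate_fn_padded)
loss_meter = AverageMeter()

for images, targets, lengths in loader:
    # images: (2, 3, 224, 224); targets padded to (2, 8, 620); lengths == [5, 8]
    loss = torch.tensor(0.5)            # placeholder for a real training loss
    loss_meter.update(loss.item(), n=images.size(0))

print(loss_meter.avg)
```
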
diff --git a/spaces/awacke1/ChatGPT-Memory-Chat-Story-Generator/README.md b/spaces/awacke1/ChatGPT-Memory-Chat-Story-Generator/README.md
deleted file mode 100644
index 97a87f595e00fed444b46756680b7870cc4ae0a5..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ChatGPT-Memory-Chat-Story-Generator/README.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: 🔍ChatGPT Episodic and Semantic Memory Generator🏊
-emoji: 🌟GPT🔍
-colorFrom: green
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: true
-license: mit
----
-
-## ChatGPT Datasets 📚
-- WebText
-- Common Crawl
-- BooksCorpus
-- English Wikipedia
-- Toronto Books Corpus
-- OpenWebText
-
-## ChatGPT Datasets - Details 📚
-- **WebText:** A dataset of web pages crawled from domains on the Alexa top 5,000 list. This dataset was used to pretrain GPT-2.
- - [WebText: A Large-Scale Unsupervised Text Corpus by Radford et al.](https://paperswithcode.com/dataset/webtext)
-- **Common Crawl:** A dataset of web pages from a variety of domains, which is updated regularly. This dataset was used to pretrain GPT-3.
- - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/common-crawl) by Brown et al.
-- **BooksCorpus:** A dataset of over 11,000 books from a variety of genres.
- - [Scalable Methods for 8 Billion Token Language Modeling](https://paperswithcode.com/dataset/bookcorpus) by Zhu et al.
-- **English Wikipedia:** A dump of the English-language Wikipedia as of 2018, with articles from 2001-2017.
- - [Improving Language Understanding by Generative Pre-Training](https://huggingface.co/spaces/awacke1/WikipediaUltimateAISearch?logs=build) Space for Wikipedia Search
-- **Toronto Books Corpus:** A dataset of over 7,000 books from a variety of genres, collected by the University of Toronto.
- - [Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond](https://paperswithcode.com/dataset/bookcorpus) by Schwenk and Douze.
-- **OpenWebText:** A dataset of web pages that were filtered to remove content that was likely to be low-quality or spammy. This dataset was used to pretrain GPT-3.
- - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/openwebtext) by Brown et al.
-
\ No newline at end of file
diff --git a/spaces/awacke1/File-Memory-Operations-Human-Feedback-Gradio/README.md b/spaces/awacke1/File-Memory-Operations-Human-Feedback-Gradio/README.md
deleted file mode 100644
index 4c21f36860600cfada577cc25d36c0fa22c72da0..0000000000000000000000000000000000000000
--- a/spaces/awacke1/File-Memory-Operations-Human-Feedback-Gradio/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: 🔥File-Memory-Human-Feedback-Gradio🔥
-emoji: 🔥🔥🔥
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-https://huggingface.co/spaces/bigscience-data/roots-search
diff --git a/spaces/awacke1/PhysicsRacingDemoWith3DARVR/style.css b/spaces/awacke1/PhysicsRacingDemoWith3DARVR/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/PhysicsRacingDemoWith3DARVR/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/awacke1/PrompTart/app.py b/spaces/awacke1/PrompTart/app.py
deleted file mode 100644
index dd94797e9b5bc16756c9b63135678b8375bbae56..0000000000000000000000000000000000000000
--- a/spaces/awacke1/PrompTart/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import pandas as pd
-import gradio as gr
-
-df = pd.read_csv("images.csv")
-df['url'] = df['url'].apply(lambda x: '![image]({})'.format(x))  # show each URL as a markdown image in the results table
-df = df[[ 'url', 'prompt']]
-
-def display_df():
- df_images = df.head(1000)
- return df_images
-
-def display_df_search(search):
- df_images = df.loc[df['prompt'].str.contains(search, case=False, na=False)]
- #df_images = df.head(1000)
- return df_images
-
-def display_next1000(dataframe, end):
- dataframe = dataframe.sample(frac=1)
- start = (end or dataframe.index[-1]) + 1
- end = start + 999
- df_images = df.loc[start:end]
- return df_images, end
-
-def Match(name):
- pd.set_option("display.max_rows", None)
- data = df
- swith=data.loc[data['prompt'].str.contains(name, case=False, na=False)]
- return swith
-
-
-with gr.Blocks() as demo:
- gr.Markdown("🍰PrompTart🎨")
- gr.Markdown("""Art Prompts from Playground. Git. Create Art Here. Papers, Code, Datasets for SOTA in Art""")
-
-
-
- with gr.Row():
- num_end = gr.Number(visible=False)
- b1 = gr.Button("Images and Prompts 0-1000")
- b2 = gr.Button("Next 1000 Images and Prompts")
-
- with gr.Row(): # inputs and buttons
- inp = gr.Textbox(lines=1, default="", label="Search")
- b3 = gr.Button("Search")
-
- with gr.Row():
- out_dataframe = gr.Dataframe(wrap=True, max_rows=1000, overflow_row_behaviour= "paginate", datatype = ["markdown", "markdown"], headers=['url', 'prompt'])
-
- b1.click(fn=display_df, outputs=out_dataframe)
- b2.click(fn=display_next1000, inputs= [out_dataframe, num_end ], outputs=[out_dataframe, num_end])
- b3.click(fn=display_df_search, inputs=inp, outputs=out_dataframe)
-
-demo.launch(debug=True, show_error=True)
\ No newline at end of file
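
The search helpers above (`display_df_search` and `Match`) both reduce to the same case-insensitive `str.contains` filter on the prompt column. A self-contained sketch of that behavior, with a made-up two-row dataframe standing in for `images.csv`:

```python
# Minimal sketch of the case-insensitive prompt search used above; the
# example rows are invented, the real app loads them from images.csv.
import pandas as pd

df = pd.DataFrame({
    "url": ["https://example.com/a.png", "https://example.com/b.png"],
    "prompt": ["a watercolor fox in the forest", "cyberpunk city at night"],
})

def search_prompts(term: str) -> pd.DataFrame:
    # case=False makes the match case-insensitive, na=False skips missing prompts
    return df.loc[df["prompt"].str.contains(term, case=False, na=False)]

print(search_prompts("Fox"))   # matches the first row despite the capital F
```
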
diff --git a/spaces/awacke1/google-pegasus-pubmed/app.py b/spaces/awacke1/google-pegasus-pubmed/app.py
deleted file mode 100644
index a777b7ef4a34a82e370cf4f92c51a32b775c9cd7..0000000000000000000000000000000000000000
--- a/spaces/awacke1/google-pegasus-pubmed/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/google/pegasus-pubmed").launch()
\ No newline at end of file
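
A hedged sketch of roughly what this Space wraps: calling the same summarization checkpoint locally through `transformers`. The Space itself goes through the hosted Inference API, so this is an approximation, and the sample abstract text is invented.

```python
# Approximate local equivalent of the Space above; the input text is invented.
from transformers import pipeline

summarizer = pipeline("summarization", model="google/pegasus-pubmed")
abstract = (
    "We conducted a randomized controlled trial of 120 patients with type 2 "
    "diabetes to evaluate the effect of a low-carbohydrate diet on HbA1c over "
    "24 weeks. The intervention group showed a significant reduction compared "
    "with the control group."
)
print(summarizer(abstract, max_length=64, min_length=10)[0]["summary_text"])
```
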
diff --git a/spaces/awacke1/sileod-deberta-v3-base-tasksource-nli-2/app.py b/spaces/awacke1/sileod-deberta-v3-base-tasksource-nli-2/app.py
deleted file mode 100644
index a4f41814613957f3cf8485c64f6e8d303eccf62f..0000000000000000000000000000000000000000
--- a/spaces/awacke1/sileod-deberta-v3-base-tasksource-nli-2/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/sileod/deberta-v3-base-tasksource-nli").launch()
\ No newline at end of file
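
Similarly, a hedged sketch of using the underlying NLI checkpoint directly for zero-shot classification with `transformers`; the Space only wraps the hosted inference widget, and the candidate labels and input sentence below are invented.

```python
# Approximate local equivalent; candidate labels and the input are invented.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification", model="sileod/deberta-v3-base-tasksource-nli"
)
result = classifier(
    "The new update broke the login flow on Android",
    candidate_labels=["bug report", "feature request", "praise"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```
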
diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/vdecoder/hifigan/models.py b/spaces/azusarang/so-vits-svc-models-ba_P/vdecoder/hifigan/models.py
deleted file mode 100644
index 9747301f350bb269e62601017fe4633ce271b27e..0000000000000000000000000000000000000000
--- a/spaces/azusarang/so-vits-svc-models-ba_P/vdecoder/hifigan/models.py
+++ /dev/null
@@ -1,503 +0,0 @@
-import os
-import json
-from .env import AttrDict
-import numpy as np
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from .utils import init_weights, get_padding
-
-LRELU_SLOPE = 0.1
-
-
-def load_model(model_path, device='cuda'):
- config_file = os.path.join(os.path.split(model_path)[0], 'config.json')
- with open(config_file) as f:
- data = f.read()
-
- global h
- json_config = json.loads(data)
- h = AttrDict(json_config)
-
- generator = Generator(h).to(device)
-
- cp_dict = torch.load(model_path)
- generator.load_state_dict(cp_dict['generator'])
- generator.eval()
- generator.remove_weight_norm()
- del cp_dict
- return generator, h
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.h = h
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.h = h
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-def padDiff(x):
- return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0)
-
-class SineGen(torch.nn.Module):
- """ Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(self, samp_rate, harmonic_num=0,
- sine_amp=0.1, noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
- self.flag_for_pulse = flag_for_pulse
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = (f0 > self.voiced_threshold).type(torch.float32)
- return uv
-
- def _f02sine(self, f0_values):
- """ f0_values: (batchsize, length, dim)
- where dim indicates fundamental tone and overtones
- """
- # convert to F0 in rad. The integer part n can be ignored
- # because 2 * np.pi * n doesn't affect phase
- rad_values = (f0_values / self.sampling_rate) % 1
-
- # initial phase noise (no noise for fundamental component)
- rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \
- device=f0_values.device)
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-
- # instantaneous phase sine[t] = sin(2*pi \sum_{i=1}^{t} rad)
- if not self.flag_for_pulse:
- # for normal case
-
- # To prevent torch.cumsum numerical overflow,
- # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1.
- # Buffer tmp_over_one_idx indicates the time step to add -1.
- # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi
- tmp_over_one = torch.cumsum(rad_values, 1) % 1
- tmp_over_one_idx = (padDiff(tmp_over_one)) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-
- sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1)
- * 2 * np.pi)
- else:
- # If necessary, make sure that the first time step of every
- # voiced segments is sin(pi) or cos(0)
- # This is used for pulse-train generation
-
- # identify the last time step in unvoiced segments
- uv = self._f02uv(f0_values)
- uv_1 = torch.roll(uv, shifts=-1, dims=1)
- uv_1[:, -1, :] = 1
- u_loc = (uv < 1) * (uv_1 > 0)
-
- # get the instantaneous phase
- tmp_cumsum = torch.cumsum(rad_values, dim=1)
- # different batch needs to be processed differently
- for idx in range(f0_values.shape[0]):
- temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :]
- temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :]
- # stores the accumulation of i.phase within
- # each voiced segments
- tmp_cumsum[idx, :, :] = 0
- tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum
-
- # rad_values - tmp_cumsum: remove the accumulation of i.phase
- # within the previous voiced segment.
- i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1)
-
- # get the sines
- sines = torch.cos(i_phase * 2 * np.pi)
- return sines
-
- def forward(self, f0):
- """ sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim,
- device=f0.device)
- # fundamental component
- fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device))
-
- # generate sine waveforms
- sine_waves = self._f02sine(fn) * self.sine_amp
-
- # generate uv signal
- # uv = torch.ones(f0.shape)
- # uv = uv * (f0 > self.voiced_threshold)
- uv = self._f02uv(f0)
-
- # noise: for unvoiced should be similar to sine_amp
- # std = self.sine_amp/3 -> max value ~ self.sine_amp
- # . for voiced regions is self.noise_std
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
-
- # first: set the unvoiced part to 0 by uv
- # then: additive noise
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """ SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
-
- # to produce sine waveforms
- self.l_sin_gen = SineGen(sampling_rate, harmonic_num,
- sine_amp, add_noise_std, voiced_threshod)
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x):
- """
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- """
- # source for harmonic branch
- sine_wavs, uv, _ = self.l_sin_gen(x)
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
-
- # source for noise branch, in the same shape as uv
- noise = torch.randn_like(uv) * self.sine_amp / 3
- return sine_merge, noise, uv
-
-
-class Generator(torch.nn.Module):
- def __init__(self, h):
- super(Generator, self).__init__()
- self.h = h
-
- self.num_kernels = len(h["resblock_kernel_sizes"])
- self.num_upsamples = len(h["upsample_rates"])
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"]))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=h["sampling_rate"],
- harmonic_num=8)
- self.noise_convs = nn.ModuleList()
- self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3))
- resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])):
- c_cur = h["upsample_initial_channel"] // (2 ** (i + 1))
- self.ups.append(weight_norm(
- ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
- if i + 1 < len(h["upsample_rates"]): #
- stride_f0 = np.prod(h["upsample_rates"][i + 1:])
- self.noise_convs.append(Conv1d(
- 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2))
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h["upsample_initial_channel"] // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])):
- self.resblocks.append(resblock(h, ch, k, d))
-
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
- self.ups.apply(init_weights)
- self.conv_post.apply(init_weights)
- self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1)
-
- def forward(self, x, f0, g=None):
- # print(1,x.shape,f0.shape,f0[:, None].shape)
- f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t
- # print(2,f0.shape)
- har_source, noi_source, uv = self.m_source(f0)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- x = x + self.cond(g)
- # print(124,x.shape,har_source.shape)
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, LRELU_SLOPE)
- # print(3,x.shape)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- # print(4,x_source.shape,har_source.shape,x.shape)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, periods=None):
- super(MultiPeriodDiscriminator, self).__init__()
- self.periods = periods if periods is not None else [2, 3, 5, 7, 11]
- self.discriminators = nn.ModuleList()
- for period in self.periods:
- self.discriminators.append(DiscriminatorP(period))
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiScaleDiscriminator(torch.nn.Module):
- def __init__(self):
- super(MultiScaleDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorS(use_spectral_norm=True),
- DiscriminatorS(),
- DiscriminatorS(),
- ])
- self.meanpools = nn.ModuleList([
- AvgPool1d(4, 2, padding=2),
- AvgPool1d(4, 2, padding=2)
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- if i != 0:
- y = self.meanpools[i - 1](y)
- y_hat = self.meanpools[i - 1](y_hat)
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1 - dr) ** 2)
- g_loss = torch.mean(dg ** 2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- l = torch.mean((1 - dg) ** 2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
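
For orientation, a small sketch of driving `SourceModuleHnNSF` above with an F0 contour. The import path (matching this deleted file's location), the sampling rate, and the tensor shapes are assumptions for illustration only.

```python
# Illustrative sketch of SourceModuleHnNSF/SineGen usage; import path,
# sampling rate, and shapes are assumptions for demonstration only.
import torch

from vdecoder.hifigan.models import SourceModuleHnNSF

source = SourceModuleHnNSF(sampling_rate=44100, harmonic_num=8)

# F0 contour: (batch, length, 1); zeros mark unvoiced frames.
f0 = torch.zeros(1, 1000, 1)
f0[:, 200:800, 0] = 220.0            # a voiced segment at 220 Hz

sine_merge, noise, uv = source(f0)
print(sine_merge.shape, noise.shape, uv.shape)
# each is (1, 1000, 1): merged harmonics, noise branch, and the U/V mask
```
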
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/GeometryUtils.d.ts b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/GeometryUtils.d.ts
deleted file mode 100644
index e9e1deb278428b36f6efa45e1924841e88b7a20e..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/GeometryUtils.d.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-/**
- * @deprecated
- */
-export namespace GeometryUtils {
- /**
- * @deprecated Use {@link Geometry#merge geometry.merge( geometry2, matrix, materialIndexOffset )} instead.
- */
- export function merge(geometry1: any, geometry2: any, materialIndexOffset?: any): any;
- /**
- * @deprecated Use {@link Geometry#center geometry.center()} instead.
- */
- export function center(geometry: any): any;
-}
diff --git a/spaces/binker/interpreter5/bot_backend.py b/spaces/binker/interpreter5/bot_backend.py
deleted file mode 100644
index 4620a738d9dc2c0a4222c9b56e17f17a6b986272..0000000000000000000000000000000000000000
--- a/spaces/binker/interpreter5/bot_backend.py
+++ /dev/null
@@ -1,232 +0,0 @@
-import json
-import openai
-import os
-import copy
-import shutil
-from jupyter_backend import *
-from typing import *
-
-functions = [
- {
- "name": "execute_code",
- "description": "This function allows you to execute Python code and retrieve the terminal output. If the code "
- "generates image output, the function will return the text '[image]'. The code is sent to a "
- "Jupyter kernel for execution. The kernel will remain active after execution, retaining all "
- "variables in memory.",
- "parameters": {
- "type": "object",
- "properties": {
- "code": {
- "type": "string",
- "description": "The code text"
- }
- },
- "required": ["code"],
- }
- }
-]
-
-system_msg = '''You are an AI code interpreter.
-Your goal is to help users do a variety of jobs by executing Python code.
-
-You should:
-1. Comprehend the user's requirements carefully & to the letter.
-2. Give a brief description for what you plan to do & call the provided function to run code.
-3. Provide results analysis based on the execution output.
-4. If an error occurs, try to fix it.
-
-Note: If the user uploads a file, you will receive a system message "User uploaded a file: filename". Use the filename as the path in the code. '''
-
-with open('config.json') as f:
- config = json.load(f)
-
-if not config['API_KEY']:
- config['API_KEY'] = os.getenv('OPENAI_API_KEY')
- os.unsetenv('OPENAI_API_KEY')
-
-
-def get_config():
- return config
-
-
-def config_openai_api(api_type, api_base, api_version, api_key):
- openai.api_type = api_type
- openai.api_base = api_base
- openai.api_version = api_version
- openai.api_key = api_key
-
-
-class GPTResponseLog:
- def __init__(self):
- self.assistant_role_name = ''
- self.content = ''
- self.function_name = None
- self.function_args_str = ''
- self.display_code_block = ''
- self.finish_reason = 'stop'
- self.bot_history = None
-
- def reset_gpt_response_log_values(self, exclude=None):
- if exclude is None:
- exclude = []
-
- attributes = {'assistant_role_name': '',
- 'content': '',
- 'function_name': None,
- 'function_args_str': '',
- 'display_code_block': '',
- 'finish_reason': 'stop',
- 'bot_history': None}
-
- for attr_name in exclude:
- del attributes[attr_name]
- for attr_name, value in attributes.items():
- setattr(self, attr_name, value)
-
- def set_assistant_role_name(self, assistant_role_name: str):
- self.assistant_role_name = assistant_role_name
-
- def add_content(self, content: str):
- self.content += content
-
- def set_function_name(self, function_name: str):
- self.function_name = function_name
-
- def copy_current_bot_history(self, bot_history: List):
- self.bot_history = copy.deepcopy(bot_history)
-
- def add_function_args_str(self, function_args_str: str):
- self.function_args_str += function_args_str
-
- def update_display_code_block(self, display_code_block):
- self.display_code_block = display_code_block
-
- def update_finish_reason(self, finish_reason: str):
- self.finish_reason = finish_reason
-
-
-class BotBackend(GPTResponseLog):
- def __init__(self):
- super().__init__()
- self.unique_id = hash(id(self))
- self.jupyter_work_dir = f'cache/work_dir_{self.unique_id}'
- self.jupyter_kernel = JupyterKernel(work_dir=self.jupyter_work_dir)
- self.gpt_model_choice = "GPT-3.5"
- self.revocable_files = []
- self._init_conversation()
- self._init_api_config()
- self._init_kwargs_for_chat_completion()
-
- def _init_conversation(self):
- first_system_msg = {'role': 'system', 'content': system_msg}
- if hasattr(self, 'conversation'):
- self.conversation.clear()
- self.conversation.append(first_system_msg)
- else:
- self.conversation: List[Dict] = [first_system_msg]
-
- def _init_api_config(self):
- self.config = get_config()
- api_type = self.config['API_TYPE']
- api_base = self.config['API_base']
- api_version = self.config['API_VERSION']
- api_key = config['API_KEY']
- config_openai_api(api_type, api_base, api_version, api_key)
-
- def _init_kwargs_for_chat_completion(self):
- self.kwargs_for_chat_completion = {
- 'stream': True,
- 'messages': self.conversation,
- 'functions': functions,
- 'function_call': 'auto'
- }
-
- model_name = self.config['model'][self.gpt_model_choice]['model_name']
-
- if self.config['API_TYPE'] == 'azure':
- self.kwargs_for_chat_completion['engine'] = model_name
- else:
- self.kwargs_for_chat_completion['model'] = model_name
-
- def _clear_all_files_in_work_dir(self):
- for filename in os.listdir(self.jupyter_work_dir):
- os.remove(
- os.path.join(self.jupyter_work_dir, filename)
- )
-
- def add_gpt_response_content_message(self):
- self.conversation.append(
- {'role': self.assistant_role_name, 'content': self.content}
- )
-
- def add_text_message(self, user_text):
- self.conversation.append(
- {'role': 'user', 'content': user_text}
- )
- self.revocable_files.clear()
- self.update_finish_reason(finish_reason='new_input')
-
- def add_file_message(self, path, bot_msg):
- filename = os.path.basename(path)
- work_dir = self.jupyter_work_dir
-
- shutil.copy(path, work_dir)
-
- gpt_msg = {'role': 'system', 'content': f'User uploaded a file: {filename}'}
- self.conversation.append(gpt_msg)
- self.revocable_files.append(
- {
- 'bot_msg': bot_msg,
- 'gpt_msg': gpt_msg,
- 'path': os.path.join(work_dir, filename)
- }
- )
-
- def add_function_call_response_message(self, function_response: str, save_tokens=True):
- self.conversation.append(
- {
- "role": self.assistant_role_name,
- "name": self.function_name,
- "content": self.function_args_str
- }
- )
-
- if save_tokens and len(function_response) > 500:
- function_response = f'{function_response[:200]}\n[Output too much, the middle part output is omitted]\n ' \
- f'End part of output:\n{function_response[-200:]}'
- self.conversation.append(
- {
- "role": "function",
- "name": self.function_name,
- "content": function_response,
- }
- )
-
- def revoke_file(self):
- if self.revocable_files:
- file = self.revocable_files[-1]
- bot_msg = file['bot_msg']
- gpt_msg = file['gpt_msg']
- path = file['path']
-
- assert self.conversation[-1] is gpt_msg
- del self.conversation[-1]
-
- os.remove(path)
-
- del self.revocable_files[-1]
-
- return bot_msg
- else:
- return None
-
- def update_gpt_model_choice(self, model_choice):
- self.gpt_model_choice = model_choice
- self._init_kwargs_for_chat_completion()
-
- def restart(self):
- self._clear_all_files_in_work_dir()
- self.revocable_files.clear()
- self._init_conversation()
- self.reset_gpt_response_log_values()
- self.jupyter_kernel.restart_jupyter_kernel()
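
A simplified sketch of how the `functions` schema and the conversation held by `BotBackend` would be streamed through the legacy (pre-1.0) `openai.ChatCompletion` API that this file targets. The prompt text is invented, the repo's `config.json` and `jupyter_backend` are assumed to be available, and the Gradio frontend that actually drives this loop in the app is omitted.

```python
# Simplified sketch only: assumes the pre-1.0 openai package (as used in this
# file), a valid config.json, and omits the Gradio frontend driving the loop.
import json
import openai

from bot_backend import BotBackend

bot = BotBackend()
bot.add_text_message("Plot a sine wave and save it as sine.png")

response = openai.ChatCompletion.create(**bot.kwargs_for_chat_completion)

code_fragments = []
for chunk in response:                       # stream=True yields deltas
    delta = chunk["choices"][0]["delta"]
    if "function_call" in delta:
        code_fragments.append(delta["function_call"].get("arguments", ""))

# When finish_reason == "function_call", the accumulated arguments form a JSON
# object like {"code": "..."} that the app then executes in its Jupyter kernel.
args = json.loads("".join(code_fragments) or "{}")
print(args.get("code", ""))
```
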
diff --git a/spaces/bioriAsaeru/text-to-voice/Ben 10Race Against Time 2007 DVDRip Dual AudioEng HindiAMDTMRG [BETTER].md b/spaces/bioriAsaeru/text-to-voice/Ben 10Race Against Time 2007 DVDRip Dual AudioEng HindiAMDTMRG [BETTER].md
deleted file mode 100644
index f55454e57a25323b7a93d3cfad9431940283207b..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Ben 10Race Against Time 2007 DVDRip Dual AudioEng HindiAMDTMRG [BETTER].md
+++ /dev/null
@@ -1,43 +0,0 @@
-# How to Fix the AVG PC TuneUp Trial Expired Error
-
-AVG PC TuneUp is a popular piece of software that helps you optimize your PC's performance and speed. It offers various features such as cleaning up junk files, fixing registry errors, updating drivers, and more. However, some users run into a problem when their AVG PC TuneUp trial expires and they can no longer use the software.
-
-In this article, we will show you how to fix the AVG PC TuneUp trial expired error and continue using the software without any issues.
-
-## Why does the AVG PC TuneUp trial expire?
-
-The AVG PC TuneUp trial expires after a certain period of time, usually 30 days, depending on the offer you have chosen. This is because AVG PC TuneUp is paid software that requires a license key to activate and use all its features. The trial version is meant to give you a preview of what the software can do for your PC and help you decide whether you want to buy it or not.
-
-When your AVG PC TuneUp trial expires, you will see a message on your screen that says "Your Internet Security trial has expired" or "Your AVG PC TuneUp trial has expired". You will also see a "Buy Now" button that takes you to the AVG website, where you can purchase a license key for the software.
-
-## How to fix the AVG PC TuneUp trial expired error
-
-There are two ways to fix the AVG PC TuneUp trial expired error:
-
-- Buy a license key for AVG PC TuneUp
-- Uninstall and reinstall the AVG PC TuneUp free version
-
-### Buy a license key for AVG PC TuneUp
-
-The easiest and most recommended way to fix the AVG PC TuneUp trial expired error is to buy a license key for the software. This allows you to use all the features of AVG PC TuneUp without any limitations or interruptions. You can choose from different plans and prices depending on your needs and preferences.
-
-To buy a license key for AVG PC TuneUp, follow these steps:
-
-1. Click on the "Buy Now" button that appears on your screen when your trial expires.
-2. You will be redirected to the AVG website, where you can choose your plan and payment method.
-3. Enter your billing details and confirm your order.
-4. You will receive an email with your license key and instructions on how to activate it.
-5. Open AVG PC TuneUp and enter your license key when prompted.
-6. Enjoy using AVG PC TuneUp with full functionality.
-
-### Uninstall and reinstall the AVG PC TuneUp free version
-
-If you do not want to buy a license key for AVG PC TuneUp, you can try uninstalling and reinstalling the free version of the software. This lets you use some basic features of AVG PC TuneUp such as the disk cleaner, browser cleaner, and startup manager. However, you will not be able to use advanced features such as the registry cleaner, driver updater, and sleep mode.
-
-To uninstall and reinstall the AVG PC TuneUp free version, follow these steps:
-
-1. Go to Control Panel > Programs > Uninstall a program.
-2. Select AVG Internet Security or AVG Protection (depending on which trial version you have installed) and click on Uninstall.
-3. Follow the instructions on the screen to complete the uninstallation process.
-4. Open AVG PC TuneUp and enjoy using some of its features for free.
-
-## Conclusion
-
-We hope this article has helped you fix the AVG PC TuneUp trial expired error and continue using the software without any problems. If you have any questions or suggestions, please feel free to leave a comment below. Thank you for reading!
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Edgar Baqueiro Rojas Derecho Civil Introduccion Personas Pdf Download.md b/spaces/bioriAsaeru/text-to-voice/Edgar Baqueiro Rojas Derecho Civil Introduccion Personas Pdf Download.md
deleted file mode 100644
index 64f580fc778c2bdbb7349e8102f8ff2c83b24a62..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Edgar Baqueiro Rojas Derecho Civil Introduccion Personas Pdf Download.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
Edgar Baqueiro Rojas Derecho Civil Introduccion Personas Pdf Download
-
-10 Dec 2012 - phyorry f4bc01c98b Dec 2020 - fitjayd d868ddde6e ... 35-edgar-baqueiro-rojas-derecho-civil-introduccion-personas-pdf-download-link -pdf-download-link-doc-link-doc-link.html#comment-post-comment.html.
-3 Dec 2019 - 10 Jul 2019 - fitjayd d868ddde6e ...
-40-edgar-baqueiro-rojas-derecho-civil-introduccion-personas-pdf-download-link-pdf-download-link-doc-link-doc-link.html#comment-post-comment.html.
-17 Dec 2019 - 7 Apr 2019 - fitjayd d868ddde6e ...
-40-edgar-baqueiro-rojas-derecho-civil-introduccion-personas-pdf-download-link-pdf-download-link-doc-link-doc-link.html#comment-post-comment.html 8a78ff9644
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Instarepost Full Version Apk Downloads Everything You Need to Know About This Amazing App.md b/spaces/bioriAsaeru/text-to-voice/Instarepost Full Version Apk Downloads Everything You Need to Know About This Amazing App.md
deleted file mode 100644
index 9a6a4eadea2c7f20c8409ac75b941cb521f72f74..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Instarepost Full Version Apk Downloads Everything You Need to Know About This Amazing App.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
fruit slice game for pc free download code blocks c++ download for windows 10 64 bit alienware theme windows 8.1 download free adobe premiere download free full version windows 7 free download driver hp p1006 windows 10 64 bit
sim city 3000 free download full version windows 7 free
all pc games free download list
hp laserjet 1100 driver windows 7 64 bit download free
windows 7 home premium oa download 64 bit iso free
autocad 3d free download for windows 10
adventure time card wars game download pc free
windows 7 virtual pc free download
pro tools m powered 8 download windows free
-
aura sync download windows 10 64 bitwindows 7 base system device driver download freetalking tom 2 free download for pc windows 10jdownloader 2 free download for windows 10 freewindows 10 download bittorrent freepc windows 7 vlc player downloadc++ runtime library download windows 10download kundli software full version for free for windows 7 freefree game sites for pc free downloadbluestacks 5 download for pc windows 7 download game war of the monsters pc
microsoft defender windows 10 free download 64 bit
feature update via windows 10 version 1909 enablement package download
download microsoft defender for windows xp free
bully game download for pc free
windows 8.1 cracked free download free
argouml download for windows 10 64 bit
bionicle the game pc download
gopro driver windows 10 download
microsoft office for windows xp professional 2002 free download free
windows server 2008 r2 service pack 2 x64 download free free full pc games download for windows 10 eos utility for windows 10 download kcast download for windows xp free norton commander windows xp free download free
-
windows 8 pro iso free download freedownload recycle bin icon for windows 10bruce lee games free download pccreative sb0570 driver windows 7 free download freedownload m player for windows freedetroit become human pc download full game for freeralink rt3290 driver windows 10 download freehp laserjet p1102w driver download for windows 10dopdf free download for windows 10 64 bitintel management engine windows 10 download windows 10 the missing manual pdf free download free
microsoft word 2013 download free full version for windows 8 free
skype download xp windows free free
big time cash make money free download for pc
bunny shooter game download for pc
car driving simulator games free download for pc
icloud for windows 10 download photos
4 pics 1 word free download for pc windows xp free
counter terrorism game free download for pc
download windows installer windows 7 64 bit free
download pc games without torrent download pokemon x and y for pc full version free detroit become human pc download full game for free kmspico windows 10 download free download free full version pc games for windows 8
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bla/tranny/App/Chat/utils/RAG.py b/spaces/bla/tranny/App/Chat/utils/RAG.py
deleted file mode 100644
index 753676f22213a7e07766eb1dba4adafdbf5b4ede..0000000000000000000000000000000000000000
--- a/spaces/bla/tranny/App/Chat/utils/RAG.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import aiohttp
-import asyncio,pprint
-import google.generativeai as palm
-from langchain.chains.question_answering import load_qa_chain
-from langchain.llms import GooglePalm
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain import PromptTemplate
-import os
-PALM_API = ''
-API_KEY=os.environ.get("PALM_API",PALM_API)
-palm.configure(api_key=API_KEY)
-
-
-def count_tokens(text):
- return palm.count_message_tokens(prompt=text)['token_count']
-llm = GooglePalm(
- google_api_key=API_KEY, **{ "safety_settings": [
- {"category": "HARM_CATEGORY_DEROGATORY", "threshold": 4},
- {"category": "HARM_CATEGORY_TOXICITY", "threshold": 4},
- {"category": "HARM_CATEGORY_VIOLENCE", "threshold": 4},
- {"category": "HARM_CATEGORY_SEXUAL", "threshold": 4},
- {"category": "HARM_CATEGORY_MEDICAL", "threshold": 4},
- {"category": "HARM_CATEGORY_DANGEROUS", "threshold": 4},
- ]})
-text_splitter = RecursiveCharacterTextSplitter(separators=["\n\n", "\n","."], chunk_size=40_000, chunk_overlap=500)
-with open('./sample.txt', 'r') as file:
- essay = file.read()
-
-docs = text_splitter.create_documents([essay])
-for doc in docs:
- print(count_tokens(doc.page_content))
-
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/modeling/test_anchor_generator.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/modeling/test_anchor_generator.py
deleted file mode 100644
index 13a808e587382216da6fe7ee957603f448172657..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/modeling/test_anchor_generator.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import unittest
-import torch
-
-from detectron2.config import get_cfg
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.anchor_generator import DefaultAnchorGenerator, RotatedAnchorGenerator
-
-logger = logging.getLogger(__name__)
-
-
-class TestAnchorGenerator(unittest.TestCase):
- def test_default_anchor_generator(self):
- cfg = get_cfg()
- cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]]
- cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]]
-
- anchor_generator = DefaultAnchorGenerator(cfg, [ShapeSpec(stride=4)])
-
- # only the last two dimensions of features matter here
- num_images = 2
- features = {"stage3": torch.rand(num_images, 96, 1, 2)}
- anchors = anchor_generator([features["stage3"]])
- expected_anchor_tensor = torch.tensor(
- [
- [-32.0, -8.0, 32.0, 8.0],
- [-16.0, -16.0, 16.0, 16.0],
- [-8.0, -32.0, 8.0, 32.0],
- [-64.0, -16.0, 64.0, 16.0],
- [-32.0, -32.0, 32.0, 32.0],
- [-16.0, -64.0, 16.0, 64.0],
- [-28.0, -8.0, 36.0, 8.0], # -28.0 == -32.0 + STRIDE (4)
- [-12.0, -16.0, 20.0, 16.0],
- [-4.0, -32.0, 12.0, 32.0],
- [-60.0, -16.0, 68.0, 16.0],
- [-28.0, -32.0, 36.0, 32.0],
- [-12.0, -64.0, 20.0, 64.0],
- ]
- )
-
- self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor))
-
- def test_default_anchor_generator_centered(self):
- # test explicit args
- anchor_generator = DefaultAnchorGenerator(
- sizes=[32, 64], aspect_ratios=[0.25, 1, 4], strides=[4]
- )
-
- # only the last two dimensions of features matter here
- num_images = 2
- features = {"stage3": torch.rand(num_images, 96, 1, 2)}
- expected_anchor_tensor = torch.tensor(
- [
- [-30.0, -6.0, 34.0, 10.0],
- [-14.0, -14.0, 18.0, 18.0],
- [-6.0, -30.0, 10.0, 34.0],
- [-62.0, -14.0, 66.0, 18.0],
- [-30.0, -30.0, 34.0, 34.0],
- [-14.0, -62.0, 18.0, 66.0],
- [-26.0, -6.0, 38.0, 10.0],
- [-10.0, -14.0, 22.0, 18.0],
- [-2.0, -30.0, 14.0, 34.0],
- [-58.0, -14.0, 70.0, 18.0],
- [-26.0, -30.0, 38.0, 34.0],
- [-10.0, -62.0, 22.0, 66.0],
- ]
- )
-
- anchors = anchor_generator([features["stage3"]])
- self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor))
-
- anchors = torch.jit.script(anchor_generator)([features["stage3"]])
- self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor))
-
- def test_rrpn_anchor_generator(self):
- cfg = get_cfg()
- cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]]
- cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]]
- cfg.MODEL.ANCHOR_GENERATOR.ANGLES = [0, 45] # test single list[float]
- anchor_generator = RotatedAnchorGenerator(cfg, [ShapeSpec(stride=4)])
-
- # only the last two dimensions of features matter here
- num_images = 2
- features = {"stage3": torch.rand(num_images, 96, 1, 2)}
- anchors = anchor_generator([features["stage3"]])
- expected_anchor_tensor = torch.tensor(
- [
- [0.0, 0.0, 64.0, 16.0, 0.0],
- [0.0, 0.0, 64.0, 16.0, 45.0],
- [0.0, 0.0, 32.0, 32.0, 0.0],
- [0.0, 0.0, 32.0, 32.0, 45.0],
- [0.0, 0.0, 16.0, 64.0, 0.0],
- [0.0, 0.0, 16.0, 64.0, 45.0],
- [0.0, 0.0, 128.0, 32.0, 0.0],
- [0.0, 0.0, 128.0, 32.0, 45.0],
- [0.0, 0.0, 64.0, 64.0, 0.0],
- [0.0, 0.0, 64.0, 64.0, 45.0],
- [0.0, 0.0, 32.0, 128.0, 0.0],
- [0.0, 0.0, 32.0, 128.0, 45.0],
- [4.0, 0.0, 64.0, 16.0, 0.0], # 4.0 == 0.0 + STRIDE (4)
- [4.0, 0.0, 64.0, 16.0, 45.0],
- [4.0, 0.0, 32.0, 32.0, 0.0],
- [4.0, 0.0, 32.0, 32.0, 45.0],
- [4.0, 0.0, 16.0, 64.0, 0.0],
- [4.0, 0.0, 16.0, 64.0, 45.0],
- [4.0, 0.0, 128.0, 32.0, 0.0],
- [4.0, 0.0, 128.0, 32.0, 45.0],
- [4.0, 0.0, 64.0, 64.0, 0.0],
- [4.0, 0.0, 64.0, 64.0, 45.0],
- [4.0, 0.0, 32.0, 128.0, 0.0],
- [4.0, 0.0, 32.0, 128.0, 45.0],
- ]
- )
-
- self.assertTrue(torch.allclose(anchors[0].tensor, expected_anchor_tensor))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/btlee215/openchat-openchat/README.md b/spaces/btlee215/openchat-openchat/README.md
deleted file mode 100644
index 0af5a009aeafac733f4a4df22f12a4ed736223a3..0000000000000000000000000000000000000000
--- a/spaces/btlee215/openchat-openchat/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Openchat Openchat
-emoji: 👀
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/caltex1/streamlit_pdf_gpt/README.md b/spaces/caltex1/streamlit_pdf_gpt/README.md
deleted file mode 100644
index 6d178bfef0f9b524ec241d6a626b9ef35ce0fff9..0000000000000000000000000000000000000000
--- a/spaces/caltex1/streamlit_pdf_gpt/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Streamlit Pdf Gpt
-emoji: 📚
-colorFrom: red
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-duplicated_from: amrithhun/streamlit_pdf_gpt
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/visualizer.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/visualizer.py
deleted file mode 100644
index 8e145181871d1981e41db3c8cbc7e8f4cc7b5833..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/utils/visualizer.py
+++ /dev/null
@@ -1,1267 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import colorsys
-import logging
-import math
-import numpy as np
-from enum import Enum, unique
-import cv2
-import matplotlib as mpl
-import matplotlib.colors as mplc
-import matplotlib.figure as mplfigure
-import pycocotools.mask as mask_util
-import torch
-from matplotlib.backends.backend_agg import FigureCanvasAgg
-from PIL import Image
-
-from detectron2.data import MetadataCatalog
-from detectron2.structures import BitMasks, Boxes, BoxMode, Keypoints, PolygonMasks, RotatedBoxes
-from detectron2.utils.file_io import PathManager
-
-from .colormap import random_color
-
-logger = logging.getLogger(__name__)
-
-__all__ = ["ColorMode", "VisImage", "Visualizer"]
-
-
-_SMALL_OBJECT_AREA_THRESH = 1000
-_LARGE_MASK_AREA_THRESH = 120000
-_OFF_WHITE = (1.0, 1.0, 240.0 / 255)
-_BLACK = (0, 0, 0)
-_RED = (1.0, 0, 0)
-
-_KEYPOINT_THRESHOLD = 0.05
-
-
-@unique
-class ColorMode(Enum):
- """
- Enum of different color modes to use for instance visualizations.
- """
-
- IMAGE = 0
- """
- Picks a random color for every instance and overlay segmentations with low opacity.
- """
- SEGMENTATION = 1
- """
- Let instances of the same category have similar colors
- (from metadata.thing_colors), and overlay them with
- high opacity. This provides more attention on the quality of segmentation.
- """
- IMAGE_BW = 2
- """
- Same as IMAGE, but convert all areas without masks to gray-scale.
- Only available for drawing per-instance mask predictions.
- """
-
-
-class GenericMask:
- """
- Attribute:
-        polygons (list[ndarray]): polygons for this mask.
- Each ndarray has format [x, y, x, y, ...]
- mask (ndarray): a binary mask
- """
-
- def __init__(self, mask_or_polygons, height, width):
- self._mask = self._polygons = self._has_holes = None
- self.height = height
- self.width = width
-
- m = mask_or_polygons
- if isinstance(m, dict):
- # RLEs
- assert "counts" in m and "size" in m
- if isinstance(m["counts"], list): # uncompressed RLEs
- h, w = m["size"]
- assert h == height and w == width
- m = mask_util.frPyObjects(m, h, w)
- self._mask = mask_util.decode(m)[:, :]
- return
-
- if isinstance(m, list): # list[ndarray]
- self._polygons = [np.asarray(x).reshape(-1) for x in m]
- return
-
- if isinstance(m, np.ndarray): # assumed to be a binary mask
- assert m.shape[1] != 2, m.shape
- assert m.shape == (
- height,
- width,
- ), f"mask shape: {m.shape}, target dims: {height}, {width}"
- self._mask = m.astype("uint8")
- return
-
- raise ValueError("GenericMask cannot handle object {} of type '{}'".format(m, type(m)))
-
- @property
- def mask(self):
- if self._mask is None:
- self._mask = self.polygons_to_mask(self._polygons)
- return self._mask
-
- @property
- def polygons(self):
- if self._polygons is None:
- self._polygons, self._has_holes = self.mask_to_polygons(self._mask)
- return self._polygons
-
- @property
- def has_holes(self):
- if self._has_holes is None:
- if self._mask is not None:
- self._polygons, self._has_holes = self.mask_to_polygons(self._mask)
- else:
- self._has_holes = False # if original format is polygon, does not have holes
- return self._has_holes
-
- def mask_to_polygons(self, mask):
- # cv2.RETR_CCOMP flag retrieves all the contours and arranges them to a 2-level
- # hierarchy. External contours (boundary) of the object are placed in hierarchy-1.
- # Internal contours (holes) are placed in hierarchy-2.
- # cv2.CHAIN_APPROX_NONE flag gets vertices of polygons from contours.
-        mask = np.ascontiguousarray(mask)  # some versions of cv2 do not support non-contiguous arrays
- res = cv2.findContours(mask.astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
- hierarchy = res[-1]
- if hierarchy is None: # empty mask
- return [], False
- has_holes = (hierarchy.reshape(-1, 4)[:, 3] >= 0).sum() > 0
- res = res[-2]
- res = [x.flatten() for x in res]
- # These coordinates from OpenCV are integers in range [0, W-1 or H-1].
- # We add 0.5 to turn them into real-value coordinate space. A better solution
- # would be to first +0.5 and then dilate the returned polygon by 0.5.
- res = [x + 0.5 for x in res if len(x) >= 6]
- return res, has_holes
-
- def polygons_to_mask(self, polygons):
- rle = mask_util.frPyObjects(polygons, self.height, self.width)
- rle = mask_util.merge(rle)
- return mask_util.decode(rle)[:, :]
-
- def area(self):
- return self.mask.sum()
-
- def bbox(self):
- p = mask_util.frPyObjects(self.polygons, self.height, self.width)
- p = mask_util.merge(p)
- bbox = mask_util.toBbox(p)
- bbox[2] += bbox[0]
- bbox[3] += bbox[1]
- return bbox
-
-
-class _PanopticPrediction:
- """
- Unify different panoptic annotation/prediction formats
- """
-
- def __init__(self, panoptic_seg, segments_info, metadata=None):
- if segments_info is None:
- assert metadata is not None
- # If "segments_info" is None, we assume "panoptic_img" is a
- # H*W int32 image storing the panoptic_id in the format of
- # category_id * label_divisor + instance_id. We reserve -1 for
- # VOID label.
- label_divisor = metadata.label_divisor
- segments_info = []
- for panoptic_label in np.unique(panoptic_seg.numpy()):
- if panoptic_label == -1:
- # VOID region.
- continue
- pred_class = panoptic_label // label_divisor
- isthing = pred_class in metadata.thing_dataset_id_to_contiguous_id.values()
- segments_info.append(
- {
- "id": int(panoptic_label),
- "category_id": int(pred_class),
- "isthing": bool(isthing),
- }
- )
- del metadata
-
- self._seg = panoptic_seg
-
- self._sinfo = {s["id"]: s for s in segments_info} # seg id -> seg info
- segment_ids, areas = torch.unique(panoptic_seg, sorted=True, return_counts=True)
- areas = areas.numpy()
- sorted_idxs = np.argsort(-areas)
- self._seg_ids, self._seg_areas = segment_ids[sorted_idxs], areas[sorted_idxs]
- self._seg_ids = self._seg_ids.tolist()
- for sid, area in zip(self._seg_ids, self._seg_areas):
- if sid in self._sinfo:
- self._sinfo[sid]["area"] = float(area)
-
- def non_empty_mask(self):
- """
- Returns:
- (H, W) array, a mask for all pixels that have a prediction
- """
- empty_ids = []
- for id in self._seg_ids:
- if id not in self._sinfo:
- empty_ids.append(id)
- if len(empty_ids) == 0:
- return np.zeros(self._seg.shape, dtype=np.uint8)
- assert (
- len(empty_ids) == 1
- ), ">1 ids corresponds to no labels. This is currently not supported"
-        return (self._seg != empty_ids[0]).numpy().astype(bool)
-
- def semantic_masks(self):
- for sid in self._seg_ids:
- sinfo = self._sinfo.get(sid)
- if sinfo is None or sinfo["isthing"]:
- # Some pixels (e.g. id 0 in PanopticFPN) have no instance or semantic predictions.
- continue
-            yield (self._seg == sid).numpy().astype(bool), sinfo
-
- def instance_masks(self):
- for sid in self._seg_ids:
- sinfo = self._sinfo.get(sid)
- if sinfo is None or not sinfo["isthing"]:
- continue
-            mask = (self._seg == sid).numpy().astype(bool)
- if mask.sum() > 0:
- yield mask, sinfo
-
-
-def _create_text_labels(classes, scores, class_names, is_crowd=None):
- """
- Args:
- classes (list[int] or None):
- scores (list[float] or None):
- class_names (list[str] or None):
- is_crowd (list[bool] or None):
-
- Returns:
- list[str] or None
- """
- labels = None
- if classes is not None:
- if class_names is not None and len(class_names) > 0:
- labels = [class_names[i] for i in classes]
- else:
- labels = [str(i) for i in classes]
- if scores is not None:
- if labels is None:
- labels = ["{:.0f}%".format(s * 100) for s in scores]
- else:
- labels = ["{} {:.0f}%".format(l, s * 100) for l, s in zip(labels, scores)]
- if labels is not None and is_crowd is not None:
- labels = [l + ("|crowd" if crowd else "") for l, crowd in zip(labels, is_crowd)]
- return labels
-
-
-class VisImage:
- def __init__(self, img, scale=1.0):
- """
- Args:
- img (ndarray): an RGB image of shape (H, W, 3) in range [0, 255].
- scale (float): scale the input image
- """
- self.img = img
- self.scale = scale
- self.width, self.height = img.shape[1], img.shape[0]
- self._setup_figure(img)
-
- def _setup_figure(self, img):
- """
- Args:
- Same as in :meth:`__init__()`.
-
- Returns:
- fig (matplotlib.pyplot.figure): top level container for all the image plot elements.
- ax (matplotlib.pyplot.Axes): contains figure elements and sets the coordinate system.
- """
- fig = mplfigure.Figure(frameon=False)
- self.dpi = fig.get_dpi()
- # add a small 1e-2 to avoid precision lost due to matplotlib's truncation
- # (https://github.com/matplotlib/matplotlib/issues/15363)
- fig.set_size_inches(
- (self.width * self.scale + 1e-2) / self.dpi,
- (self.height * self.scale + 1e-2) / self.dpi,
- )
- self.canvas = FigureCanvasAgg(fig)
- # self.canvas = mpl.backends.backend_cairo.FigureCanvasCairo(fig)
- ax = fig.add_axes([0.0, 0.0, 1.0, 1.0])
- ax.axis("off")
- self.fig = fig
- self.ax = ax
- self.reset_image(img)
-
- def reset_image(self, img):
- """
- Args:
- img: same as in __init__
- """
- img = img.astype("uint8")
- self.ax.imshow(img, extent=(0, self.width, self.height, 0), interpolation="nearest")
-
- def save(self, filepath):
- """
- Args:
- filepath (str): a string that contains the absolute path, including the file name, where
- the visualized image will be saved.
- """
- self.fig.savefig(filepath)
-
- def get_image(self):
- """
- Returns:
- ndarray:
- the visualized image of shape (H, W, 3) (RGB) in uint8 type.
- The shape is scaled w.r.t the input image using the given `scale` argument.
- """
- canvas = self.canvas
- s, (width, height) = canvas.print_to_buffer()
- # buf = io.BytesIO() # works for cairo backend
- # canvas.print_rgba(buf)
- # width, height = self.width, self.height
- # s = buf.getvalue()
-
- buffer = np.frombuffer(s, dtype="uint8")
-
- img_rgba = buffer.reshape(height, width, 4)
- rgb, alpha = np.split(img_rgba, [3], axis=2)
- return rgb.astype("uint8")
-
-
-class Visualizer:
- """
- Visualizer that draws data about detection/segmentation on images.
-
- It contains methods like `draw_{text,box,circle,line,binary_mask,polygon}`
- that draw primitive objects to images, as well as high-level wrappers like
- `draw_{instance_predictions,sem_seg,panoptic_seg_predictions,dataset_dict}`
- that draw composite data in some pre-defined style.
-
- Note that the exact visualization style for the high-level wrappers are subject to change.
- Style such as color, opacity, label contents, visibility of labels, or even the visibility
- of objects themselves (e.g. when the object is too small) may change according
- to different heuristics, as long as the results still look visually reasonable.
-
- To obtain a consistent style, you can implement custom drawing functions with the
- abovementioned primitive methods instead. If you need more customized visualization
- styles, you can process the data yourself following their format documented in
- tutorials (:doc:`/tutorials/models`, :doc:`/tutorials/datasets`). This class does not
- intend to satisfy everyone's preference on drawing styles.
-
- This visualizer focuses on high rendering quality rather than performance. It is not
- designed to be used for real-time applications.
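-
-    Example usage (a rough sketch; assumes ``im`` is an RGB uint8 array and ``outputs``
-    comes from a detection model such as a ``DefaultPredictor``)::
-
-        v = Visualizer(im, metadata=MetadataCatalog.get("coco_2017_val"))
-        out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
-        out.save("vis.png")  # out.get_image() returns the result as an RGB ndarray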
- """
-
- # TODO implement a fast, rasterized version using OpenCV
-
- def __init__(self, img_rgb, metadata=None, scale=1.0, instance_mode=ColorMode.IMAGE):
- """
- Args:
- img_rgb: a numpy array of shape (H, W, C), where H and W correspond to
- the height and width of the image respectively. C is the number of
- color channels. The image is required to be in RGB format since that
- is a requirement of the Matplotlib library. The image is also expected
- to be in the range [0, 255].
- metadata (Metadata): dataset metadata (e.g. class names and colors)
- instance_mode (ColorMode): defines one of the pre-defined style for drawing
- instances on an image.
- """
- self.img = np.asarray(img_rgb).clip(0, 255).astype(np.uint8)
- if metadata is None:
- metadata = MetadataCatalog.get("__nonexist__")
- self.metadata = metadata
- self.output = VisImage(self.img, scale=scale)
- self.cpu_device = torch.device("cpu")
-
-        # too small texts are useless, therefore clamp the default font size
- self._default_font_size = max(
- np.sqrt(self.output.height * self.output.width) // 90, 10 // scale
- )
- self._instance_mode = instance_mode
- self.keypoint_threshold = _KEYPOINT_THRESHOLD
-
- def draw_instance_predictions(self, predictions):
- """
- Draw instance-level prediction results on an image.
-
- Args:
- predictions (Instances): the output of an instance detection/segmentation
- model. Following fields will be used to draw:
- "pred_boxes", "pred_classes", "scores", "pred_masks" (or "pred_masks_rle").
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- boxes = predictions.pred_boxes if predictions.has("pred_boxes") else None
- scores = predictions.scores if predictions.has("scores") else None
- classes = predictions.pred_classes.tolist() if predictions.has("pred_classes") else None
- labels = _create_text_labels(classes, scores, self.metadata.get("thing_classes", None))
- keypoints = predictions.pred_keypoints if predictions.has("pred_keypoints") else None
-
- if predictions.has("pred_masks"):
- masks = np.asarray(predictions.pred_masks)
- masks = [GenericMask(x, self.output.height, self.output.width) for x in masks]
- else:
- masks = None
-
- if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"):
- colors = [
- self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in classes
- ]
- alpha = 0.8
- else:
- colors = None
- alpha = 0.5
-
- if self._instance_mode == ColorMode.IMAGE_BW:
- self.output.reset_image(
- self._create_grayscale_image(
- (predictions.pred_masks.any(dim=0) > 0).numpy()
- if predictions.has("pred_masks")
- else None
- )
- )
- alpha = 0.3
-
- self.overlay_instances(
- masks=masks,
- boxes=boxes,
- labels=labels,
- keypoints=keypoints,
- assigned_colors=colors,
- alpha=alpha,
- )
- return self.output
-
- def draw_sem_seg(self, sem_seg, area_threshold=None, alpha=0.8):
- """
- Draw semantic segmentation predictions/labels.
-
- Args:
- sem_seg (Tensor or ndarray): the segmentation of shape (H, W).
- Each value is the integer label of the pixel.
- area_threshold (int): segments with less than `area_threshold` are not drawn.
- alpha (float): the larger it is, the more opaque the segmentations are.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- if isinstance(sem_seg, torch.Tensor):
- sem_seg = sem_seg.numpy()
- labels, areas = np.unique(sem_seg, return_counts=True)
- sorted_idxs = np.argsort(-areas).tolist()
- labels = labels[sorted_idxs]
- for label in filter(lambda l: l < len(self.metadata.stuff_classes), labels):
- try:
- mask_color = [x / 255 for x in self.metadata.stuff_colors[label]]
- except (AttributeError, IndexError):
- mask_color = None
-
- binary_mask = (sem_seg == label).astype(np.uint8)
- text = self.metadata.stuff_classes[label]
- self.draw_binary_mask(
- binary_mask,
- color=mask_color,
- edge_color=_OFF_WHITE,
- text=text,
- alpha=alpha,
- area_threshold=area_threshold,
- )
- return self.output
-
- def draw_panoptic_seg(self, panoptic_seg, segments_info, area_threshold=None, alpha=0.7):
- """
- Draw panoptic prediction annotations or results.
-
- Args:
- panoptic_seg (Tensor): of shape (height, width) where the values are ids for each
- segment.
- segments_info (list[dict] or None): Describe each segment in `panoptic_seg`.
- If it is a ``list[dict]``, each dict contains keys "id", "category_id".
- If None, category id of each pixel is computed by
- ``pixel // metadata.label_divisor``.
- area_threshold (int): stuff segments with less than `area_threshold` are not drawn.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- pred = _PanopticPrediction(panoptic_seg, segments_info, self.metadata)
-
- if self._instance_mode == ColorMode.IMAGE_BW:
- self.output.reset_image(self._create_grayscale_image(pred.non_empty_mask()))
-
- # draw mask for all semantic segments first i.e. "stuff"
- for mask, sinfo in pred.semantic_masks():
- category_idx = sinfo["category_id"]
- try:
- mask_color = [x / 255 for x in self.metadata.stuff_colors[category_idx]]
- except AttributeError:
- mask_color = None
-
- text = self.metadata.stuff_classes[category_idx]
- self.draw_binary_mask(
- mask,
- color=mask_color,
- edge_color=_OFF_WHITE,
- text=text,
- alpha=alpha,
- area_threshold=area_threshold,
- )
-
- # draw mask for all instances second
- all_instances = list(pred.instance_masks())
- if len(all_instances) == 0:
- return self.output
- masks, sinfo = list(zip(*all_instances))
- category_ids = [x["category_id"] for x in sinfo]
-
- try:
- scores = [x["score"] for x in sinfo]
- except KeyError:
- scores = None
- labels = _create_text_labels(
- category_ids, scores, self.metadata.thing_classes, [x.get("iscrowd", 0) for x in sinfo]
- )
-
- try:
- colors = [
- self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in category_ids
- ]
- except AttributeError:
- colors = None
- self.overlay_instances(masks=masks, labels=labels, assigned_colors=colors, alpha=alpha)
-
- return self.output
-
- draw_panoptic_seg_predictions = draw_panoptic_seg # backward compatibility
-
- def draw_dataset_dict(self, dic):
- """
-        Draw annotations/segmentations in Detectron2 Dataset format.
-
- Args:
- dic (dict): annotation/segmentation data of one image, in Detectron2 Dataset format.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- annos = dic.get("annotations", None)
- if annos:
- if "segmentation" in annos[0]:
- masks = [x["segmentation"] for x in annos]
- else:
- masks = None
- if "keypoints" in annos[0]:
- keypts = [x["keypoints"] for x in annos]
- keypts = np.array(keypts).reshape(len(annos), -1, 3)
- else:
- keypts = None
-
- boxes = [
- BoxMode.convert(x["bbox"], x["bbox_mode"], BoxMode.XYXY_ABS)
- if len(x["bbox"]) == 4
- else x["bbox"]
- for x in annos
- ]
-
- colors = None
- category_ids = [x["category_id"] for x in annos]
- if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"):
- colors = [
- self._jitter([x / 255 for x in self.metadata.thing_colors[c]])
- for c in category_ids
- ]
- names = self.metadata.get("thing_classes", None)
- labels = _create_text_labels(
- category_ids,
- scores=None,
- class_names=names,
- is_crowd=[x.get("iscrowd", 0) for x in annos],
- )
- self.overlay_instances(
- labels=labels, boxes=boxes, masks=masks, keypoints=keypts, assigned_colors=colors
- )
-
- sem_seg = dic.get("sem_seg", None)
- if sem_seg is None and "sem_seg_file_name" in dic:
- with PathManager.open(dic["sem_seg_file_name"], "rb") as f:
- sem_seg = Image.open(f)
- sem_seg = np.asarray(sem_seg, dtype="uint8")
- if sem_seg is not None:
- self.draw_sem_seg(sem_seg, area_threshold=0, alpha=0.5)
-
- pan_seg = dic.get("pan_seg", None)
- if pan_seg is None and "pan_seg_file_name" in dic:
- with PathManager.open(dic["pan_seg_file_name"], "rb") as f:
- pan_seg = Image.open(f)
- pan_seg = np.asarray(pan_seg)
- from panopticapi.utils import rgb2id
-
- pan_seg = rgb2id(pan_seg)
- if pan_seg is not None:
- segments_info = dic["segments_info"]
- pan_seg = torch.tensor(pan_seg)
- self.draw_panoptic_seg(pan_seg, segments_info, area_threshold=0, alpha=0.5)
- return self.output
-
- def overlay_instances(
- self,
- *,
- boxes=None,
- labels=None,
- masks=None,
- keypoints=None,
- assigned_colors=None,
- alpha=0.5,
- ):
- """
- Args:
- boxes (Boxes, RotatedBoxes or ndarray): either a :class:`Boxes`,
- or an Nx4 numpy array of XYXY_ABS format for the N objects in a single image,
- or a :class:`RotatedBoxes`,
- or an Nx5 numpy array of (x_center, y_center, width, height, angle_degrees) format
- for the N objects in a single image,
- labels (list[str]): the text to be displayed for each instance.
- masks (masks-like object): Supported types are:
-
- * :class:`detectron2.structures.PolygonMasks`,
- :class:`detectron2.structures.BitMasks`.
- * list[list[ndarray]]: contains the segmentation masks for all objects in one image.
- The first level of the list corresponds to individual instances. The second
- level to all the polygon that compose the instance, and the third level
- to the polygon coordinates. The third level should have the format of
- [x0, y0, x1, y1, ..., xn, yn] (n >= 3).
- * list[ndarray]: each ndarray is a binary mask of shape (H, W).
- * list[dict]: each dict is a COCO-style RLE.
- keypoints (Keypoint or array like): an array-like object of shape (N, K, 3),
- where the N is the number of instances and K is the number of keypoints.
- The last dimension corresponds to (x, y, visibility or score).
- assigned_colors (list[matplotlib.colors]): a list of colors, where each color
- corresponds to each mask or box in the image. Refer to 'matplotlib.colors'
- for full list of formats that the colors are accepted in.
- Returns:
- output (VisImage): image object with visualizations.
- """
- num_instances = 0
- if boxes is not None:
- boxes = self._convert_boxes(boxes)
- num_instances = len(boxes)
- if masks is not None:
- masks = self._convert_masks(masks)
- if num_instances:
- assert len(masks) == num_instances
- else:
- num_instances = len(masks)
- if keypoints is not None:
- if num_instances:
- assert len(keypoints) == num_instances
- else:
- num_instances = len(keypoints)
- keypoints = self._convert_keypoints(keypoints)
- if labels is not None:
- assert len(labels) == num_instances
- if assigned_colors is None:
- assigned_colors = [random_color(rgb=True, maximum=1) for _ in range(num_instances)]
- if num_instances == 0:
- return self.output
- if boxes is not None and boxes.shape[1] == 5:
- return self.overlay_rotated_instances(
- boxes=boxes, labels=labels, assigned_colors=assigned_colors
- )
-
- # Display in largest to smallest order to reduce occlusion.
- areas = None
- if boxes is not None:
- areas = np.prod(boxes[:, 2:] - boxes[:, :2], axis=1)
- elif masks is not None:
- areas = np.asarray([x.area() for x in masks])
-
- if areas is not None:
- sorted_idxs = np.argsort(-areas).tolist()
- # Re-order overlapped instances in descending order.
- boxes = boxes[sorted_idxs] if boxes is not None else None
- labels = [labels[k] for k in sorted_idxs] if labels is not None else None
- masks = [masks[idx] for idx in sorted_idxs] if masks is not None else None
- assigned_colors = [assigned_colors[idx] for idx in sorted_idxs]
- keypoints = keypoints[sorted_idxs] if keypoints is not None else None
-
- for i in range(num_instances):
- color = assigned_colors[i]
- if boxes is not None:
- self.draw_box(boxes[i], edge_color=color)
-
- if masks is not None:
- for segment in masks[i].polygons:
- self.draw_polygon(segment.reshape(-1, 2), color, alpha=alpha)
-
- if labels is not None:
- # first get a box
- if boxes is not None:
- x0, y0, x1, y1 = boxes[i]
- text_pos = (x0, y0) # if drawing boxes, put text on the box corner.
- horiz_align = "left"
- elif masks is not None:
- # skip small mask without polygon
- if len(masks[i].polygons) == 0:
- continue
-
- x0, y0, x1, y1 = masks[i].bbox()
-
- # draw text in the center (defined by median) when box is not drawn
- # median is less sensitive to outliers.
- text_pos = np.median(masks[i].mask.nonzero(), axis=1)[::-1]
- horiz_align = "center"
- else:
- continue # drawing the box confidence for keypoints isn't very useful.
- # for small objects, draw text at the side to avoid occlusion
- instance_area = (y1 - y0) * (x1 - x0)
- if (
- instance_area < _SMALL_OBJECT_AREA_THRESH * self.output.scale
- or y1 - y0 < 40 * self.output.scale
- ):
- if y1 >= self.output.height - 5:
- text_pos = (x1, y0)
- else:
- text_pos = (x0, y1)
-
- height_ratio = (y1 - y0) / np.sqrt(self.output.height * self.output.width)
- lighter_color = self._change_color_brightness(color, brightness_factor=0.7)
- font_size = (
- np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2)
- * 0.5
- * self._default_font_size
- )
- self.draw_text(
- labels[i],
- text_pos,
- color=lighter_color,
- horizontal_alignment=horiz_align,
- font_size=font_size,
- )
-
- # draw keypoints
- if keypoints is not None:
- for keypoints_per_instance in keypoints:
- self.draw_and_connect_keypoints(keypoints_per_instance)
-
- return self.output
-
- def overlay_rotated_instances(self, boxes=None, labels=None, assigned_colors=None):
- """
- Args:
- boxes (ndarray): an Nx5 numpy array of
- (x_center, y_center, width, height, angle_degrees) format
- for the N objects in a single image.
- labels (list[str]): the text to be displayed for each instance.
- assigned_colors (list[matplotlib.colors]): a list of colors, where each color
- corresponds to each mask or box in the image. Refer to 'matplotlib.colors'
- for full list of formats that the colors are accepted in.
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- num_instances = len(boxes)
-
- if assigned_colors is None:
- assigned_colors = [random_color(rgb=True, maximum=1) for _ in range(num_instances)]
- if num_instances == 0:
- return self.output
-
- # Display in largest to smallest order to reduce occlusion.
- if boxes is not None:
- areas = boxes[:, 2] * boxes[:, 3]
-
- sorted_idxs = np.argsort(-areas).tolist()
- # Re-order overlapped instances in descending order.
- boxes = boxes[sorted_idxs]
- labels = [labels[k] for k in sorted_idxs] if labels is not None else None
- colors = [assigned_colors[idx] for idx in sorted_idxs]
-
- for i in range(num_instances):
- self.draw_rotated_box_with_label(
- boxes[i], edge_color=colors[i], label=labels[i] if labels is not None else None
- )
-
- return self.output
-
- def draw_and_connect_keypoints(self, keypoints):
- """
- Draws keypoints of an instance and follows the rules for keypoint connections
- to draw lines between appropriate keypoints. This follows color heuristics for
- line color.
-
- Args:
- keypoints (Tensor): a tensor of shape (K, 3), where K is the number of keypoints
- and the last dimension corresponds to (x, y, probability).
-
- Returns:
- output (VisImage): image object with visualizations.
- """
- visible = {}
- keypoint_names = self.metadata.get("keypoint_names")
- for idx, keypoint in enumerate(keypoints):
-
- # draw keypoint
- x, y, prob = keypoint
- if prob > self.keypoint_threshold:
- self.draw_circle((x, y), color=_RED)
- if keypoint_names:
- keypoint_name = keypoint_names[idx]
- visible[keypoint_name] = (x, y)
-
- if self.metadata.get("keypoint_connection_rules"):
- for kp0, kp1, color in self.metadata.keypoint_connection_rules:
- if kp0 in visible and kp1 in visible:
- x0, y0 = visible[kp0]
- x1, y1 = visible[kp1]
- color = tuple(x / 255.0 for x in color)
- self.draw_line([x0, x1], [y0, y1], color=color)
-
- # draw lines from nose to mid-shoulder and mid-shoulder to mid-hip
- # Note that this strategy is specific to person keypoints.
- # For other keypoints, it should just do nothing
- try:
- ls_x, ls_y = visible["left_shoulder"]
- rs_x, rs_y = visible["right_shoulder"]
- mid_shoulder_x, mid_shoulder_y = (ls_x + rs_x) / 2, (ls_y + rs_y) / 2
- except KeyError:
- pass
- else:
- # draw line from nose to mid-shoulder
- nose_x, nose_y = visible.get("nose", (None, None))
- if nose_x is not None:
- self.draw_line([nose_x, mid_shoulder_x], [nose_y, mid_shoulder_y], color=_RED)
-
- try:
- # draw line from mid-shoulder to mid-hip
- lh_x, lh_y = visible["left_hip"]
- rh_x, rh_y = visible["right_hip"]
- except KeyError:
- pass
- else:
- mid_hip_x, mid_hip_y = (lh_x + rh_x) / 2, (lh_y + rh_y) / 2
- self.draw_line([mid_hip_x, mid_shoulder_x], [mid_hip_y, mid_shoulder_y], color=_RED)
- return self.output
-
- """
- Primitive drawing functions:
- """
-
- def draw_text(
- self,
- text,
- position,
- *,
- font_size=None,
- color="g",
- horizontal_alignment="center",
- rotation=0,
- ):
- """
- Args:
- text (str): class label
- position (tuple): a tuple of the x and y coordinates to place text on image.
- font_size (int, optional): font of the text. If not provided, a font size
- proportional to the image width is calculated and used.
- color: color of the text. Refer to `matplotlib.colors` for full list
- of formats that are accepted.
- horizontal_alignment (str): see `matplotlib.text.Text`
- rotation: rotation angle in degrees CCW
-
- Returns:
- output (VisImage): image object with text drawn.
- """
- if not font_size:
- font_size = self._default_font_size
-
- # since the text background is dark, we don't want the text to be dark
- color = np.maximum(list(mplc.to_rgb(color)), 0.2)
- color[np.argmax(color)] = max(0.8, np.max(color))
-
- x, y = position
- self.output.ax.text(
- x,
- y,
- text,
- size=font_size * self.output.scale,
- family="sans-serif",
- bbox={"facecolor": "black", "alpha": 0.8, "pad": 0.7, "edgecolor": "none"},
- verticalalignment="top",
- horizontalalignment=horizontal_alignment,
- color=color,
- zorder=10,
- rotation=rotation,
- )
- return self.output
-
- def draw_box(self, box_coord, alpha=0.5, edge_color="g", line_style="-"):
- """
- Args:
- box_coord (tuple): a tuple containing x0, y0, x1, y1 coordinates, where x0 and y0
- are the coordinates of the image's top left corner. x1 and y1 are the
- coordinates of the image's bottom right corner.
-            alpha (float): blending coefficient. Smaller values lead to more transparent masks.
- edge_color: color of the outline of the box. Refer to `matplotlib.colors`
- for full list of formats that are accepted.
- line_style (string): the string to use to create the outline of the boxes.
-
- Returns:
- output (VisImage): image object with box drawn.
- """
- x0, y0, x1, y1 = box_coord
- width = x1 - x0
- height = y1 - y0
-
- linewidth = max(self._default_font_size / 4, 1)
-
- self.output.ax.add_patch(
- mpl.patches.Rectangle(
- (x0, y0),
- width,
- height,
- fill=False,
- edgecolor=edge_color,
- linewidth=linewidth * self.output.scale,
- alpha=alpha,
- linestyle=line_style,
- )
- )
- return self.output
-
- def draw_rotated_box_with_label(
- self, rotated_box, alpha=0.5, edge_color="g", line_style="-", label=None
- ):
- """
- Draw a rotated box with label on its top-left corner.
-
- Args:
- rotated_box (tuple): a tuple containing (cnt_x, cnt_y, w, h, angle),
- where cnt_x and cnt_y are the center coordinates of the box.
- w and h are the width and height of the box. angle represents how
- many degrees the box is rotated CCW with regard to the 0-degree box.
-            alpha (float): blending coefficient. Smaller values lead to more transparent masks.
- edge_color: color of the outline of the box. Refer to `matplotlib.colors`
- for full list of formats that are accepted.
- line_style (string): the string to use to create the outline of the boxes.
- label (string): label for rotated box. It will not be rendered when set to None.
-
- Returns:
- output (VisImage): image object with box drawn.
- """
- cnt_x, cnt_y, w, h, angle = rotated_box
- area = w * h
- # use thinner lines when the box is small
- linewidth = self._default_font_size / (
- 6 if area < _SMALL_OBJECT_AREA_THRESH * self.output.scale else 3
- )
-
- theta = angle * math.pi / 180.0
- c = math.cos(theta)
- s = math.sin(theta)
- rect = [(-w / 2, h / 2), (-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2)]
- # x: left->right ; y: top->down
- rotated_rect = [(s * yy + c * xx + cnt_x, c * yy - s * xx + cnt_y) for (xx, yy) in rect]
- for k in range(4):
- j = (k + 1) % 4
- self.draw_line(
- [rotated_rect[k][0], rotated_rect[j][0]],
- [rotated_rect[k][1], rotated_rect[j][1]],
- color=edge_color,
- linestyle="--" if k == 1 else line_style,
- linewidth=linewidth,
- )
-
- if label is not None:
- text_pos = rotated_rect[1] # topleft corner
-
- height_ratio = h / np.sqrt(self.output.height * self.output.width)
- label_color = self._change_color_brightness(edge_color, brightness_factor=0.7)
- font_size = (
- np.clip((height_ratio - 0.02) / 0.08 + 1, 1.2, 2) * 0.5 * self._default_font_size
- )
- self.draw_text(label, text_pos, color=label_color, font_size=font_size, rotation=angle)
-
- return self.output
-
- def draw_circle(self, circle_coord, color, radius=3):
- """
- Args:
- circle_coord (list(int) or tuple(int)): contains the x and y coordinates
- of the center of the circle.
- color: color of the polygon. Refer to `matplotlib.colors` for a full list of
- formats that are accepted.
- radius (int): radius of the circle.
-
- Returns:
- output (VisImage): image object with box drawn.
- """
- x, y = circle_coord
- self.output.ax.add_patch(
- mpl.patches.Circle(circle_coord, radius=radius, fill=True, color=color)
- )
- return self.output
-
- def draw_line(self, x_data, y_data, color, linestyle="-", linewidth=None):
- """
- Args:
- x_data (list[int]): a list containing x values of all the points being drawn.
- Length of list should match the length of y_data.
- y_data (list[int]): a list containing y values of all the points being drawn.
- Length of list should match the length of x_data.
- color: color of the line. Refer to `matplotlib.colors` for a full list of
- formats that are accepted.
- linestyle: style of the line. Refer to `matplotlib.lines.Line2D`
- for a full list of formats that are accepted.
- linewidth (float or None): width of the line. When it's None,
- a default value will be computed and used.
-
- Returns:
- output (VisImage): image object with line drawn.
- """
- if linewidth is None:
- linewidth = self._default_font_size / 3
- linewidth = max(linewidth, 1)
- self.output.ax.add_line(
- mpl.lines.Line2D(
- x_data,
- y_data,
- linewidth=linewidth * self.output.scale,
- color=color,
- linestyle=linestyle,
- )
- )
- return self.output
-
- def draw_binary_mask(
- self, binary_mask, color=None, *, edge_color=None, text=None, alpha=0.5, area_threshold=10
- ):
- """
- Args:
- binary_mask (ndarray): numpy array of shape (H, W), where H is the image height and
- W is the image width. Each value in the array is either a 0 or 1 value of uint8
- type.
- color: color of the mask. Refer to `matplotlib.colors` for a full list of
- formats that are accepted. If None, will pick a random color.
- edge_color: color of the polygon edges. Refer to `matplotlib.colors` for a
- full list of formats that are accepted.
-            text (str): if not None, the text will be drawn on the object
-            alpha (float): blending coefficient. Smaller values lead to more transparent masks.
- area_threshold (float): a connected component smaller than this area will not be shown.
-
- Returns:
- output (VisImage): image object with mask drawn.
- """
- if color is None:
- color = random_color(rgb=True, maximum=1)
- color = mplc.to_rgb(color)
-
- has_valid_segment = False
- binary_mask = binary_mask.astype("uint8") # opencv needs uint8
- mask = GenericMask(binary_mask, self.output.height, self.output.width)
- shape2d = (binary_mask.shape[0], binary_mask.shape[1])
-
- if not mask.has_holes:
- # draw polygons for regular masks
- for segment in mask.polygons:
- area = mask_util.area(mask_util.frPyObjects([segment], shape2d[0], shape2d[1]))
- if area < (area_threshold or 0):
- continue
- has_valid_segment = True
- segment = segment.reshape(-1, 2)
- self.draw_polygon(segment, color=color, edge_color=edge_color, alpha=alpha)
- else:
- # TODO: Use Path/PathPatch to draw vector graphics:
- # https://stackoverflow.com/questions/8919719/how-to-plot-a-complex-polygon
- rgba = np.zeros(shape2d + (4,), dtype="float32")
- rgba[:, :, :3] = color
- rgba[:, :, 3] = (mask.mask == 1).astype("float32") * alpha
- has_valid_segment = True
- self.output.ax.imshow(rgba, extent=(0, self.output.width, self.output.height, 0))
-
- if text is not None and has_valid_segment:
- lighter_color = self._change_color_brightness(color, brightness_factor=0.7)
- self._draw_text_in_mask(binary_mask, text, lighter_color)
- return self.output
-
- def draw_soft_mask(self, soft_mask, color=None, *, text=None, alpha=0.5):
- """
- Args:
- soft_mask (ndarray): float array of shape (H, W), each value in [0, 1].
- color: color of the mask. Refer to `matplotlib.colors` for a full list of
- formats that are accepted. If None, will pick a random color.
-            text (str): if not None, the text will be drawn on the object
-            alpha (float): blending coefficient. Smaller values lead to more transparent masks.
-
- Returns:
- output (VisImage): image object with mask drawn.
- """
- if color is None:
- color = random_color(rgb=True, maximum=1)
- color = mplc.to_rgb(color)
-
- shape2d = (soft_mask.shape[0], soft_mask.shape[1])
- rgba = np.zeros(shape2d + (4,), dtype="float32")
- rgba[:, :, :3] = color
- rgba[:, :, 3] = soft_mask * alpha
- self.output.ax.imshow(rgba, extent=(0, self.output.width, self.output.height, 0))
-
- if text is not None:
- lighter_color = self._change_color_brightness(color, brightness_factor=0.7)
- binary_mask = (soft_mask > 0.5).astype("uint8")
- self._draw_text_in_mask(binary_mask, text, lighter_color)
- return self.output
-
- def draw_polygon(self, segment, color, edge_color=None, alpha=0.5):
- """
- Args:
- segment: numpy array of shape Nx2, containing all the points in the polygon.
- color: color of the polygon. Refer to `matplotlib.colors` for a full list of
- formats that are accepted.
- edge_color: color of the polygon edges. Refer to `matplotlib.colors` for a
- full list of formats that are accepted. If not provided, a darker shade
- of the polygon color will be used instead.
-            alpha (float): blending coefficient. Smaller values lead to more transparent masks.
-
- Returns:
- output (VisImage): image object with polygon drawn.
- """
- if edge_color is None:
- # make edge color darker than the polygon color
- if alpha > 0.8:
- edge_color = self._change_color_brightness(color, brightness_factor=-0.7)
- else:
- edge_color = color
- edge_color = mplc.to_rgb(edge_color) + (1,)
-
- polygon = mpl.patches.Polygon(
- segment,
- fill=True,
- facecolor=mplc.to_rgb(color) + (alpha,),
- edgecolor=edge_color,
- linewidth=max(self._default_font_size // 15 * self.output.scale, 1),
- )
- self.output.ax.add_patch(polygon)
- return self.output
-
- """
- Internal methods:
- """
-
- def _jitter(self, color):
- """
- Randomly modifies given color to produce a slightly different color than the color given.
-
- Args:
- color (tuple[double]): a tuple of 3 elements, containing the RGB values of the color
- picked. The values in the list are in the [0.0, 1.0] range.
-
- Returns:
- jittered_color (tuple[double]): a tuple of 3 elements, containing the RGB values of the
- color after being jittered. The values in the list are in the [0.0, 1.0] range.
- """
- color = mplc.to_rgb(color)
- vec = np.random.rand(3)
- # better to do it in another color space
- vec = vec / np.linalg.norm(vec) * 0.5
- res = np.clip(vec + color, 0, 1)
- return tuple(res)
-
- def _create_grayscale_image(self, mask=None):
- """
- Create a grayscale version of the original image.
- The colors in masked area, if given, will be kept.
- """
- img_bw = self.img.astype("f4").mean(axis=2)
- img_bw = np.stack([img_bw] * 3, axis=2)
- if mask is not None:
- img_bw[mask] = self.img[mask]
- return img_bw
-
- def _change_color_brightness(self, color, brightness_factor):
- """
- Depending on the brightness_factor, gives a lighter or darker color i.e. a color with
- less or more saturation than the original color.
-
- Args:
- color: color of the polygon. Refer to `matplotlib.colors` for a full list of
- formats that are accepted.
- brightness_factor (float): a value in [-1.0, 1.0] range. A lightness factor of
- 0 will correspond to no change, a factor in [-1.0, 0) range will result in
- a darker color and a factor in (0, 1.0] range will result in a lighter color.
-
- Returns:
- modified_color (tuple[double]): a tuple containing the RGB values of the
- modified color. Each value in the tuple is in the [0.0, 1.0] range.
- """
- assert brightness_factor >= -1.0 and brightness_factor <= 1.0
- color = mplc.to_rgb(color)
- polygon_color = colorsys.rgb_to_hls(*mplc.to_rgb(color))
- modified_lightness = polygon_color[1] + (brightness_factor * polygon_color[1])
- modified_lightness = 0.0 if modified_lightness < 0.0 else modified_lightness
- modified_lightness = 1.0 if modified_lightness > 1.0 else modified_lightness
- modified_color = colorsys.hls_to_rgb(polygon_color[0], modified_lightness, polygon_color[2])
- return modified_color
-
- def _convert_boxes(self, boxes):
- """
- Convert different format of boxes to an NxB array, where B = 4 or 5 is the box dimension.
- """
- if isinstance(boxes, Boxes) or isinstance(boxes, RotatedBoxes):
- return boxes.tensor.detach().numpy()
- else:
- return np.asarray(boxes)
-
- def _convert_masks(self, masks_or_polygons):
- """
- Convert different format of masks or polygons to a tuple of masks and polygons.
-
- Returns:
- list[GenericMask]:
- """
-
- m = masks_or_polygons
- if isinstance(m, PolygonMasks):
- m = m.polygons
- if isinstance(m, BitMasks):
- m = m.tensor.numpy()
- if isinstance(m, torch.Tensor):
- m = m.numpy()
- ret = []
- for x in m:
- if isinstance(x, GenericMask):
- ret.append(x)
- else:
- ret.append(GenericMask(x, self.output.height, self.output.width))
- return ret
-
- def _draw_text_in_mask(self, binary_mask, text, color):
- """
- Find proper places to draw text given a binary mask.
- """
- # TODO sometimes drawn on wrong objects. the heuristics here can improve.
- _num_cc, cc_labels, stats, centroids = cv2.connectedComponentsWithStats(binary_mask, 8)
- if stats[1:, -1].size == 0:
- return
- largest_component_id = np.argmax(stats[1:, -1]) + 1
-
- # draw text on the largest component, as well as other very large components.
- for cid in range(1, _num_cc):
- if cid == largest_component_id or stats[cid, -1] > _LARGE_MASK_AREA_THRESH:
- # median is more stable than centroid
- # center = centroids[largest_component_id]
- center = np.median((cc_labels == cid).nonzero(), axis=1)[::-1]
- self.draw_text(text, center, color=color)
-
- def _convert_keypoints(self, keypoints):
- if isinstance(keypoints, Keypoints):
- keypoints = keypoints.tensor
- keypoints = np.asarray(keypoints)
- return keypoints
-
- def get_output(self):
- """
- Returns:
- output (VisImage): the image output containing the visualizations added
- to the image.
- """
- return self.output
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/training.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/training.md
deleted file mode 100644
index 83a6cb0a8e38ca06bbf96201ac2595d2116523c3..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/training.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# Training
-
-From the previous tutorials, you may now have a custom model and a data loader.
-To run training, users typically have a preference in one of the following two styles:
-
-### Custom Training Loop
-
-With a model and a data loader ready, everything else needed to write a training loop can
-be found in PyTorch, and you are free to write the training loop yourself.
-This style allows researchers to manage the entire training logic more clearly and have full control.
-One such example is provided in [tools/plain_train_net.py](../../tools/plain_train_net.py).
-
-Any customization on the training logic is then easily controlled by the user.
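-
-As a rough sketch (assuming `model`, `data_loader` and `optimizer` are already built as in the
-previous tutorials, and ignoring checkpointing, logging and LR scheduling), such a loop can be
-as small as:
-
-```python
-def do_train(model, data_loader, optimizer, max_iter=1000):
-    model.train()
-    for iteration, data in zip(range(max_iter), data_loader):
-        loss_dict = model(data)  # in training mode, detectron2 models return a dict of losses
-        losses = sum(loss_dict.values())
-        optimizer.zero_grad()
-        losses.backward()
-        optimizer.step()
-```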
-
-### Trainer Abstraction
-
-We also provide a standardized "trainer" abstraction with a
-hook system that helps simplify the standard training behavior.
-It includes the following two instantiations:
-
-* [SimpleTrainer](../modules/engine.html#detectron2.engine.SimpleTrainer)
- provides a minimal training loop for single-cost single-optimizer single-data-source training, with nothing else.
- Other tasks (checkpointing, logging, etc) can be implemented using
- [the hook system](../modules/engine.html#detectron2.engine.HookBase).
-* [DefaultTrainer](../modules/engine.html#detectron2.engine.defaults.DefaultTrainer) is a `SimpleTrainer` initialized from a
- yacs config, used by
- [tools/train_net.py](../../tools/train_net.py) and many scripts.
- It includes more standard default behaviors that one might want to opt in,
- including default configurations for optimizer, learning rate schedule,
- logging, evaluation, checkpointing etc.
-
-To customize a `DefaultTrainer`:
-
-1. For simple customizations (e.g. change optimizer, evaluator, LR scheduler, data loader, etc.), overwrite [its methods](../modules/engine.html#detectron2.engine.defaults.DefaultTrainer) in a subclass, just like [tools/train_net.py](../../tools/train_net.py); see the sketch after this list.
-2. For extra tasks during training, check the
- [hook system](../modules/engine.html#detectron2.engine.HookBase) to see if it's supported.
-
- As an example, to print hello during training:
- ```python
-    class HelloHook(HookBase):
-        def after_step(self):
-            if self.trainer.iter % 100 == 0:
-                print(f"Hello at iteration {self.trainer.iter}!")
- ```
-3. Using a trainer+hook system means there will always be some non-standard behaviors that cannot be supported, especially in research.
- For this reason, we intentionally keep the trainer & hook system minimal, rather than powerful.
- If anything cannot be achieved by such a system, it's easier to start from [tools/plain_train_net.py](../../tools/plain_train_net.py) to implement custom training logic manually.
-
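-As a sketch of the first option above (`COCOEvaluator` is just one possible choice of
-evaluator; any of the other `build_*` methods can be overridden in the same way):
-
-```python
-from detectron2.engine import DefaultTrainer
-from detectron2.evaluation import COCOEvaluator
-
-class MyTrainer(DefaultTrainer):
-    @classmethod
-    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
-        # the evaluator returned here is what DefaultTrainer uses when it runs evaluation
-        return COCOEvaluator(dataset_name, output_dir=output_folder)
-```
-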
-### Logging of Metrics
-
-During training, detectron2 models and trainer put metrics to a centralized [EventStorage](../modules/utils.html#detectron2.utils.events.EventStorage).
-You can use the following code to access it and log metrics to it:
-```python
-from detectron2.utils.events import get_event_storage
-
-# inside the model:
-if self.training:
-    value = ...  # compute the value from inputs
-    storage = get_event_storage()
-    storage.put_scalar("some_accuracy", value)
-```
-
-Refer to its documentation for more details.
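-
-Note that `get_event_storage()` only works while an `EventStorage` is active; the trainer
-abstraction takes care of this, while a hand-written loop (such as
-[tools/plain_train_net.py](../../tools/plain_train_net.py)) has to open one itself. A rough
-sketch, assuming `model` and `data_loader` exist:
-
-```python
-from detectron2.utils.events import EventStorage
-
-with EventStorage(start_iter=0) as storage:
-    for iteration, data in enumerate(data_loader):
-        storage.iter = iteration  # scalars logged now are associated with this iteration
-        loss_dict = model(data)
-        storage.put_scalar("total_loss", sum(loss_dict.values()).item())
-```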
-
-Metrics are then written to various destinations with [EventWriter](../modules/utils.html#module-detectron2.utils.events).
-DefaultTrainer enables a few `EventWriter` with default configurations.
-See above for how to customize them.
diff --git a/spaces/chendl/compositional_test/transformers/examples/flax/text-classification/README.md b/spaces/chendl/compositional_test/transformers/examples/flax/text-classification/README.md
deleted file mode 100644
index 8d43ab7725a24174576d1a9cdd35f540605bb339..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/flax/text-classification/README.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
-# Text classification examples
-
-## GLUE tasks
-
-Based on the script [`run_flax_glue.py`](https://github.com/huggingface/transformers/blob/main/examples/flax/text-classification/run_flax_glue.py).
-
-Fine-tuning the library models for sequence classification on the GLUE benchmark: [General Language Understanding
-Evaluation](https://gluebenchmark.com/). This script can fine-tune any of the models on the [hub](https://huggingface.co/models) and can also be used for a
-dataset hosted on our [hub](https://huggingface.co/datasets) or your own data in a CSV or a JSON file (the script might need some tweaks in that case,
-refer to the comments inside for help).
-
-GLUE is made up of a total of 9 different tasks. Here is how to run the script on one of them:
-
-```bash
-export TASK_NAME=mrpc
-
-python run_flax_glue.py \
- --model_name_or_path bert-base-cased \
- --task_name ${TASK_NAME} \
- --max_seq_length 128 \
- --learning_rate 2e-5 \
- --num_train_epochs 3 \
- --per_device_train_batch_size 4 \
- --eval_steps 100 \
- --output_dir ./$TASK_NAME/ \
- --push_to_hub
-```
-
-where task name can be one of cola, mnli, mnli_mismatched, mnli_matched, mrpc, qnli, qqp, rte, sst2, stsb, wnli.
-
-Using the command above, the script will train for 3 epochs and run eval after each epoch.
-Metrics and hyperparameters are stored in TensorFlow event files in `--output_dir`.
-You can see the results by running `tensorboard` in that directory:
-
-```bash
-$ tensorboard --logdir .
-```
-
-or directly on the hub under *Training metrics*.
-
-### Accuracy Evaluation
-
-We train five replicas and report mean accuracy and stdev on the dev set below.
-We use the same settings as in the command above (with an exception for MRPC and
-WNLI, which are tiny and for which we used 5 epochs instead of 3), and we use a total
-train batch size of 32 (we train on 8 Cloud v3 TPUs, so a per-device batch size of 4).
-
-On the tasks other than MRPC and WNLI we train for 3 epochs because this is the standard,
-but looking at the training curves of some of them (e.g., SST-2, STS-B), it appears the models
-are undertrained and we could get better results when training longer.
-
-In the Tensorboard results linked below, the random seed of each model is equal to the ID of the run. So in order to reproduce run 1, run the command above with `--seed=1`. The best run used random seed 3, which is the default in the script. The results of all runs are in [this Google Sheet](https://docs.google.com/spreadsheets/d/1p3XzReMO75m_XdEJvPue-PIq_PN-96J2IJpJW1yS-10/edit?usp=sharing).
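-
-For example, to reproduce run 1 on MRPC, the command from above can be rerun with an explicit seed (a sketch; it reuses the flags from the command above, minus `--push_to_hub`, and adds `--seed`):
-
-```bash
-export TASK_NAME=mrpc
-
-python run_flax_glue.py \
-  --model_name_or_path bert-base-cased \
-  --task_name ${TASK_NAME} \
-  --max_seq_length 128 \
-  --learning_rate 2e-5 \
-  --num_train_epochs 3 \
-  --per_device_train_batch_size 4 \
-  --eval_steps 100 \
-  --seed 1 \
-  --output_dir ./$TASK_NAME/
-```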
-
-| Task | Metric | Acc (best run) | Acc (avg/5runs) | Stdev | Metrics |
-|-------|------------------------------|----------------|-----------------|-----------|--------------------------------------------------------------------------|
-| CoLA  | Matthews corr                | 60.57          | 59.04           | 1.06      | [tensorboard.dev](https://tensorboard.dev/experiment/lfr2adVpRtmLDALKrElkzg/)  |
-| SST-2 | Accuracy                     | 92.66          | 92.23           | 0.57      | [tensorboard.dev](https://tensorboard.dev/experiment/jYvfv2trRHKMjoWnXVwrZA/)  |
-| MRPC  | F1/Accuracy                  | 89.90/85.78    | 88.97/84.36     | 0.72/1.09 | [tensorboard.dev](https://tensorboard.dev/experiment/bo3W3DEoRw2Q7YXjWrJkfg/)  |
-| STS-B | Pearson/Spearman corr.       | 89.04/88.70    | 88.94/88.63     | 0.07/0.07 | [tensorboard.dev](https://tensorboard.dev/experiment/fxVwbLD7QpKhbot0r9rn2w/)  |
-| QQP   | Accuracy/F1                  | 90.81/87.58    | 90.76/87.51     | 0.05/0.06 | [tensorboard.dev](https://tensorboard.dev/experiment/di089Rc9TZmsnKRMrYNLsA/)  |
-| MNLI  | Matched acc.                 | 84.10          | 83.80           | 0.16      | [tensorboard.dev](https://tensorboard.dev/experiment/JgNCGHDJSRaW6HBx6YQFYQ/)  |
-| QNLI  | Accuracy                     | 91.01          | 90.82           | 0.17      | [tensorboard.dev](https://tensorboard.dev/experiment/Bq7cMGJnQMSggYgL8qNGeQ/)  |
-| RTE   | Accuracy                     | 66.06          | 64.76           | 1.04      | [tensorboard.dev](https://tensorboard.dev/experiment/66Eq24bhRjqN6CEhgDSGqQ/)  |
-| WNLI  | Accuracy                     | 46.48          | 37.01           | 6.83      | [tensorboard.dev](https://tensorboard.dev/experiment/TAqcnddqTkWvVEeGaWwIdQ/)  |
-
-Some of these results are significantly different from the ones reported on the test set of the GLUE benchmark on the
-website. For QQP and WNLI, please refer to [FAQ #12](https://gluebenchmark.com/faq) on the website.
-
-### Runtime evaluation
-
-We also ran each task once on a single V100 GPU, 8 V100 GPUs, and 8 Cloud v3 TPUs and report the
-overall training time below. For comparison we ran PyTorch's [run_glue.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) on a single GPU (last column).
-
-
-| Task | TPU v3-8 | 8 GPU | [1 GPU](https://tensorboard.dev/experiment/mkPS4Zh8TnGe1HB6Yzwj4Q) | 1 GPU (PyTorch) |
-|-------|-----------|------------|------------|-----------------|
-| CoLA | 1m 42s | 1m 26s | 3m 9s | 4m 6s |
-| SST-2 | 5m 12s | 6m 28s | 22m 33s | 34m 37s |
-| MRPC | 1m 29s | 1m 14s | 2m 20s | 2m 56s |
-| STS-B | 1m 30s | 1m 12s | 2m 16s | 2m 48s |
-| QQP | 22m 50s | 31m 48s | 1h 59m 41s | 2h 54m |
-| MNLI | 25m 03s | 33m 55s | 2h 9m 37s | 3h 7m 6s |
-| QNLI | 7m30s | 9m 40s | 34m 40s | 49m 8s |
-| RTE | 1m 20s | 55s | 1m 10s | 1m 16s |
-| WNLI | 1m 11s | 48s | 39s | 36s |
-|-------|-----------|------------|------------|-----------------|
-| **TOTAL** | 1h 03m | 1h 28m | 5h 16m | 6h 37m |
-
-*All experiments were run on Google Cloud Platform.
-GPU experiments were run without further optimizations besides JAX
-transformations, and with full precision (fp32). "TPU v3-8"
-means 8 TPU cores on 4 chips (each chip has 2 cores), while "8 GPU" means 8 GPU chips.
diff --git a/spaces/chendl/compositional_test/transformers/examples/run_on_remote.py b/spaces/chendl/compositional_test/transformers/examples/run_on_remote.py
deleted file mode 100644
index 9d42ed845c9e8f3815dba5c05a1c55a2d05315f0..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/run_on_remote.py
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import shlex
-
-import runhouse as rh
-
-
-if __name__ == "__main__":
- # Refer to https://runhouse-docs.readthedocs-hosted.com/en/main/rh_primitives/cluster.html#hardware-setup for cloud access
- # setup instructions, if using on-demand hardware
-
- # If the user passes --user, --host and --key_path, fill them in as a BYO cluster
- # If the user passes --instance and --provider, fill them in as an on-demand cluster
- # Throw an error if the user passes both BYO and on-demand cluster args
- # Otherwise, use default values
- parser = argparse.ArgumentParser()
- parser.add_argument("--user", type=str, default="ubuntu")
- parser.add_argument("--host", type=str, default="localhost")
- parser.add_argument("--key_path", type=str, default=None)
- parser.add_argument("--instance", type=str, default="V100:1")
- parser.add_argument("--provider", type=str, default="cheapest")
- parser.add_argument("--use_spot", type=bool, default=False)
- parser.add_argument("--example", type=str, default="pytorch/text-generation/run_generation.py")
- args, unknown = parser.parse_known_args()
- if args.host != "localhost":
- if args.instance != "V100:1" or args.provider != "cheapest":
- raise ValueError("Cannot specify both BYO and on-demand cluster args")
- cluster = rh.cluster(
- name="rh-cluster", ips=[args.host], ssh_creds={"ssh_user": args.user, "ssh_private_key": args.key_path}
- )
- else:
- cluster = rh.cluster(
- name="rh-cluster", instance_type=args.instance, provider=args.provider, use_spot=args.use_spot
- )
- example_dir = args.example.rsplit("/", 1)[0]
-
- # Set up remote environment
- cluster.install_packages(["pip:./"]) # Installs transformers from local source
- # Note transformers is copied into the home directory on the remote machine, so we can install from there
- cluster.run([f"pip install -r transformers/examples/{example_dir}/requirements.txt"])
- cluster.run(["pip install torch --upgrade --extra-index-url https://download.pytorch.org/whl/cu117"])
-
- # Run example. You can bypass the CLI wrapper and paste your own code here.
- cluster.run([f'python transformers/examples/{args.example} {" ".join(shlex.quote(arg) for arg in unknown)}'])
-
- # Alternatively, we can just import and run a training function (especially if there's no wrapper CLI):
- # from my_script... import train
- # reqs = ['pip:./', 'torch', 'datasets', 'accelerate', 'evaluate', 'tqdm', 'scipy', 'scikit-learn', 'tensorboard']
- # launch_train_gpu = rh.function(fn=train,
- #                                system=cluster,
- # reqs=reqs,
- # name='train_bert_glue')
- #
- # We can pass in arguments just like we would to a function:
- # launch_train_gpu(num_epochs = 3, lr = 2e-5, seed = 42, batch_size = 16,
- #                  stream_logs=True)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/web_routedef.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/web_routedef.py
deleted file mode 100644
index a1eb0a76549fbde5aa0c81f02b041b77bd91e0ad..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/web_routedef.py
+++ /dev/null
@@ -1,216 +0,0 @@
-import abc
-import os # noqa
-from typing import (
- TYPE_CHECKING,
- Any,
- Callable,
- Dict,
- Iterator,
- List,
- Optional,
- Sequence,
- Type,
- Union,
- overload,
-)
-
-import attr
-
-from . import hdrs
-from .abc import AbstractView
-from .typedefs import Handler, PathLike
-
-if TYPE_CHECKING: # pragma: no cover
- from .web_request import Request
- from .web_response import StreamResponse
- from .web_urldispatcher import AbstractRoute, UrlDispatcher
-else:
- Request = StreamResponse = UrlDispatcher = AbstractRoute = None
-
-
-__all__ = (
- "AbstractRouteDef",
- "RouteDef",
- "StaticDef",
- "RouteTableDef",
- "head",
- "options",
- "get",
- "post",
- "patch",
- "put",
- "delete",
- "route",
- "view",
- "static",
-)
-
-
-class AbstractRouteDef(abc.ABC):
- @abc.abstractmethod
- def register(self, router: UrlDispatcher) -> List[AbstractRoute]:
- pass # pragma: no cover
-
-
-_HandlerType = Union[Type[AbstractView], Handler]
-
-
-@attr.s(auto_attribs=True, frozen=True, repr=False, slots=True)
-class RouteDef(AbstractRouteDef):
- method: str
- path: str
- handler: _HandlerType
- kwargs: Dict[str, Any]
-
- def __repr__(self) -> str:
- info = []
- for name, value in sorted(self.kwargs.items()):
- info.append(f", {name}={value!r}")
- return "<RouteDef {method} {path} -> {handler.__name__!r}" "{info}>".format(
- method=self.method, path=self.path, handler=self.handler, info="".join(info)
- )
-
- def register(self, router: UrlDispatcher) -> List[AbstractRoute]:
- if self.method in hdrs.METH_ALL:
- reg = getattr(router, "add_" + self.method.lower())
- return [reg(self.path, self.handler, **self.kwargs)]
- else:
- return [
- router.add_route(self.method, self.path, self.handler, **self.kwargs)
- ]
-
-
-@attr.s(auto_attribs=True, frozen=True, repr=False, slots=True)
-class StaticDef(AbstractRouteDef):
- prefix: str
- path: PathLike
- kwargs: Dict[str, Any]
-
- def __repr__(self) -> str:
- info = []
- for name, value in sorted(self.kwargs.items()):
- info.append(f", {name}={value!r}")
- return "<StaticDef {prefix} -> {path}" "{info}>".format(
- prefix=self.prefix, path=self.path, info="".join(info)
- )
-
- def register(self, router: UrlDispatcher) -> List[AbstractRoute]:
- resource = router.add_static(self.prefix, self.path, **self.kwargs)
- routes = resource.get_info().get("routes", {})
- return list(routes.values())
-
-
-def route(method: str, path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return RouteDef(method, path, handler, kwargs)
-
-
-def head(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_HEAD, path, handler, **kwargs)
-
-
-def options(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_OPTIONS, path, handler, **kwargs)
-
-
-def get(
- path: str,
- handler: _HandlerType,
- *,
- name: Optional[str] = None,
- allow_head: bool = True,
- **kwargs: Any,
-) -> RouteDef:
- return route(
- hdrs.METH_GET, path, handler, name=name, allow_head=allow_head, **kwargs
- )
-
-
-def post(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_POST, path, handler, **kwargs)
-
-
-def put(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_PUT, path, handler, **kwargs)
-
-
-def patch(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_PATCH, path, handler, **kwargs)
-
-
-def delete(path: str, handler: _HandlerType, **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_DELETE, path, handler, **kwargs)
-
-
-def view(path: str, handler: Type[AbstractView], **kwargs: Any) -> RouteDef:
- return route(hdrs.METH_ANY, path, handler, **kwargs)
-
-
-def static(prefix: str, path: PathLike, **kwargs: Any) -> StaticDef:
- return StaticDef(prefix, path, kwargs)
-
-
-_Deco = Callable[[_HandlerType], _HandlerType]
-
-
-class RouteTableDef(Sequence[AbstractRouteDef]):
- """Route definition table"""
-
- def __init__(self) -> None:
- self._items: List[AbstractRouteDef] = []
-
- def __repr__(self) -> str:
- return f"<RouteTableDef count={len(self._items)}>"
-
- @overload
- def __getitem__(self, index: int) -> AbstractRouteDef:
- ...
-
- @overload
- def __getitem__(self, index: slice) -> List[AbstractRouteDef]:
- ...
-
- def __getitem__(self, index): # type: ignore[no-untyped-def]
- return self._items[index]
-
- def __iter__(self) -> Iterator[AbstractRouteDef]:
- return iter(self._items)
-
- def __len__(self) -> int:
- return len(self._items)
-
- def __contains__(self, item: object) -> bool:
- return item in self._items
-
- def route(self, method: str, path: str, **kwargs: Any) -> _Deco:
- def inner(handler: _HandlerType) -> _HandlerType:
- self._items.append(RouteDef(method, path, handler, kwargs))
- return handler
-
- return inner
-
- def head(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_HEAD, path, **kwargs)
-
- def get(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_GET, path, **kwargs)
-
- def post(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_POST, path, **kwargs)
-
- def put(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_PUT, path, **kwargs)
-
- def patch(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_PATCH, path, **kwargs)
-
- def delete(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_DELETE, path, **kwargs)
-
- def options(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_OPTIONS, path, **kwargs)
-
- def view(self, path: str, **kwargs: Any) -> _Deco:
- return self.route(hdrs.METH_ANY, path, **kwargs)
-
- def static(self, prefix: str, path: PathLike, **kwargs: Any) -> None:
- self._items.append(StaticDef(prefix, path, kwargs))
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/server/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/server/__init__.py
deleted file mode 100644
index eccd02cb0681d6e0031874754a1bbe110b6b12a2..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/server/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from abc import ABC, abstractmethod
-
-from chromadb.config import Settings
-
-
-class Server(ABC):
- @abstractmethod
- def __init__(self, settings: Settings):
- pass
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/api.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/api.py
deleted file mode 100644
index ffb973578db72ed61595996c1d6e5f6fc54c92b1..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/api.py
+++ /dev/null
@@ -1,142 +0,0 @@
-import abc
-import json
-from typing import (Any, Callable, Dict, List, Optional, Tuple, Type, TypeVar,
- Union)
-
-from dataclasses_json.cfg import config, LetterCase # noqa: F401
-from dataclasses_json.core import (Json, _ExtendedEncoder, _asdict,
- _decode_dataclass)
-from dataclasses_json.mm import (JsonData, SchemaType, build_schema)
-from dataclasses_json.undefined import Undefined
-from dataclasses_json.utils import (_handle_undefined_parameters_safe,
- _undefined_parameter_action_safe)
-
-A = TypeVar('A', bound="DataClassJsonMixin")
-Fields = List[Tuple[str, Any]]
-
-
-class DataClassJsonMixin(abc.ABC):
- """
- DataClassJsonMixin is an ABC that functions as a Mixin.
-
- As with other ABCs, it should not be instantiated directly.
- """
- dataclass_json_config = None
-
- def to_json(self,
- *,
- skipkeys: bool = False,
- ensure_ascii: bool = True,
- check_circular: bool = True,
- allow_nan: bool = True,
- indent: Optional[Union[int, str]] = None,
- separators: Tuple[str, str] = None,
- default: Callable = None,
- sort_keys: bool = False,
- **kw) -> str:
- return json.dumps(self.to_dict(encode_json=False),
- cls=_ExtendedEncoder,
- skipkeys=skipkeys,
- ensure_ascii=ensure_ascii,
- check_circular=check_circular,
- allow_nan=allow_nan,
- indent=indent,
- separators=separators,
- default=default,
- sort_keys=sort_keys,
- **kw)
-
- @classmethod
- def from_json(cls: Type[A],
- s: JsonData,
- *,
- parse_float=None,
- parse_int=None,
- parse_constant=None,
- infer_missing=False,
- **kw) -> A:
- kvs = json.loads(s,
- parse_float=parse_float,
- parse_int=parse_int,
- parse_constant=parse_constant,
- **kw)
- return cls.from_dict(kvs, infer_missing=infer_missing)
-
- @classmethod
- def from_dict(cls: Type[A],
- kvs: Json,
- *,
- infer_missing=False) -> A:
- return _decode_dataclass(cls, kvs, infer_missing)
-
- def to_dict(self, encode_json=False) -> Dict[str, Json]:
- return _asdict(self, encode_json=encode_json)
-
- @classmethod
- def schema(cls: Type[A],
- *,
- infer_missing: bool = False,
- only=None,
- exclude=(),
- many: bool = False,
- context=None,
- load_only=(),
- dump_only=(),
- partial: bool = False,
- unknown=None) -> "SchemaType[A]":
- Schema = build_schema(cls, DataClassJsonMixin, infer_missing, partial)
-
- if unknown is None:
- undefined_parameter_action = _undefined_parameter_action_safe(cls)
- if undefined_parameter_action is not None:
- # We can just make use of the same-named mm keywords
- unknown = undefined_parameter_action.name.lower()
-
- return Schema(only=only,
- exclude=exclude,
- many=many,
- context=context,
- load_only=load_only,
- dump_only=dump_only,
- partial=partial,
- unknown=unknown)
-
-
-def dataclass_json(_cls=None, *, letter_case=None,
- undefined: Optional[Union[str, Undefined]] = None):
- """
- Based on the code in the `dataclasses` module to handle optional-parens
- decorators. See example below:
-
- @dataclass_json
- @dataclass_json(letter_case=LetterCase.CAMEL)
- class Example:
- ...
- """
-
- def wrap(cls):
- return _process_class(cls, letter_case, undefined)
-
- if _cls is None:
- return wrap
- return wrap(_cls)
-
-
-def _process_class(cls, letter_case, undefined) -> Type[DataClassJsonMixin]:
- if letter_case is not None or undefined is not None:
- cls.dataclass_json_config = config(letter_case=letter_case,
- undefined=undefined)[
- 'dataclasses_json']
-
- cls.to_json = DataClassJsonMixin.to_json
- # unwrap and rewrap classmethod to tag it to cls rather than the literal
- # DataClassJsonMixin ABC
- cls.from_json = classmethod(DataClassJsonMixin.from_json.__func__)
- cls.to_dict = DataClassJsonMixin.to_dict
- cls.from_dict = classmethod(DataClassJsonMixin.from_dict.__func__)
- cls.schema = classmethod(DataClassJsonMixin.schema.__func__)
-
- cls.__init__ = _handle_undefined_parameters_safe(cls, kvs=(), usage="init")
- # register cls as a virtual subclass of DataClassJsonMixin
- DataClassJsonMixin.register(cls)
- return cls
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/internal/wire_format.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/internal/wire_format.py
deleted file mode 100644
index 1f54414b1aa6fe6024abae883767d1a3ae6bb12f..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/internal/wire_format.py
+++ /dev/null
@@ -1,268 +0,0 @@
-# Protocol Buffers - Google's data interchange format
-# Copyright 2008 Google Inc. All rights reserved.
-# https://developers.google.com/protocol-buffers/
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following disclaimer
-# in the documentation and/or other materials provided with the
-# distribution.
-# * Neither the name of Google Inc. nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-"""Constants and static functions to support protocol buffer wire format."""
-
-__author__ = 'robinson@google.com (Will Robinson)'
-
-import struct
-from google.protobuf import descriptor
-from google.protobuf import message
-
-
-TAG_TYPE_BITS = 3 # Number of bits used to hold type info in a proto tag.
-TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1 # 0x7
-
-# These numbers identify the wire type of a protocol buffer value.
-# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
-# tag-and-type to store one of these WIRETYPE_* constants.
-# These values must match WireType enum in //google/protobuf/wire_format.h.
-WIRETYPE_VARINT = 0
-WIRETYPE_FIXED64 = 1
-WIRETYPE_LENGTH_DELIMITED = 2
-WIRETYPE_START_GROUP = 3
-WIRETYPE_END_GROUP = 4
-WIRETYPE_FIXED32 = 5
-_WIRETYPE_MAX = 5
-
-
-# Bounds for various integer types.
-INT32_MAX = int((1 << 31) - 1)
-INT32_MIN = int(-(1 << 31))
-UINT32_MAX = (1 << 32) - 1
-
-INT64_MAX = (1 << 63) - 1
-INT64_MIN = -(1 << 63)
-UINT64_MAX = (1 << 64) - 1
-
-# "struct" format strings that will encode/decode the specified formats.
-FORMAT_UINT32_LITTLE_ENDIAN = '<I'
-FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
-FORMAT_FLOAT_LITTLE_ENDIAN = '<f'
-FORMAT_DOUBLE_LITTLE_ENDIAN = '<d'
-
-
-# We'll have to provide alternate implementations of AppendLittleEndian*() on
-# any architectures where these checks fail.
-if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
-  raise AssertionError('Format "I" is not a 32-bit number.')
-if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
-  raise AssertionError('Format "Q" is not a 64-bit number.')
-
-
-def PackTag(field_number, wire_type):
-  """Returns an unsigned 32-bit integer that encodes the field number and
-  wire type information in standard protocol message wire format.
-  """
-  if not 0 <= wire_type <= _WIRETYPE_MAX:
-    raise message.EncodeError('Unknown wire type: %d' % wire_type)
-  return (field_number << TAG_TYPE_BITS) | wire_type
-
-
-def UnpackTag(tag):
-  """The inverse of PackTag().  Given an unsigned 32-bit number,
-  returns a (field_number, wire_type) tuple.
-  """
-  return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)
-
-
-def ZigZagEncode(value):
- """ZigZag Transform: Encodes signed integers so that they can be
- effectively used with varint encoding. See wire_format.h for
- more details.
- """
- if value >= 0:
- return value << 1
- return (value << 1) ^ (~0)
-
-
-def ZigZagDecode(value):
- """Inverse of ZigZagEncode()."""
- if not value & 0x1:
- return value >> 1
- return (value >> 1) ^ (~0)
-
-
-
-# The *ByteSize() functions below return the number of bytes required to
-# serialize "field number + type" information and then serialize the value.
-
-
-def Int32ByteSize(field_number, int32):
- return Int64ByteSize(field_number, int32)
-
-
-def Int32ByteSizeNoTag(int32):
- return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32)
-
-
-def Int64ByteSize(field_number, int64):
- # Have to convert to uint before calling UInt64ByteSize().
- return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)
-
-
-def UInt32ByteSize(field_number, uint32):
- return UInt64ByteSize(field_number, uint32)
-
-
-def UInt64ByteSize(field_number, uint64):
- return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)
-
-
-def SInt32ByteSize(field_number, int32):
- return UInt32ByteSize(field_number, ZigZagEncode(int32))
-
-
-def SInt64ByteSize(field_number, int64):
- return UInt64ByteSize(field_number, ZigZagEncode(int64))
-
-
-def Fixed32ByteSize(field_number, fixed32):
- return TagByteSize(field_number) + 4
-
-
-def Fixed64ByteSize(field_number, fixed64):
- return TagByteSize(field_number) + 8
-
-
-def SFixed32ByteSize(field_number, sfixed32):
- return TagByteSize(field_number) + 4
-
-
-def SFixed64ByteSize(field_number, sfixed64):
- return TagByteSize(field_number) + 8
-
-
-def FloatByteSize(field_number, flt):
- return TagByteSize(field_number) + 4
-
-
-def DoubleByteSize(field_number, double):
- return TagByteSize(field_number) + 8
-
-
-def BoolByteSize(field_number, b):
- return TagByteSize(field_number) + 1
-
-
-def EnumByteSize(field_number, enum):
- return UInt32ByteSize(field_number, enum)
-
-
-def StringByteSize(field_number, string):
- return BytesByteSize(field_number, string.encode('utf-8'))
-
-
-def BytesByteSize(field_number, b):
- return (TagByteSize(field_number)
- + _VarUInt64ByteSizeNoTag(len(b))
- + len(b))
-
-
-def GroupByteSize(field_number, message):
- return (2 * TagByteSize(field_number) # START and END group.
- + message.ByteSize())
-
-
-def MessageByteSize(field_number, message):
- return (TagByteSize(field_number)
- + _VarUInt64ByteSizeNoTag(message.ByteSize())
- + message.ByteSize())
-
-
-def MessageSetItemByteSize(field_number, msg):
- # First compute the sizes of the tags.
- # There are 2 tags for the beginning and ending of the repeated group, that
- # is field number 1, one with field number 2 (type_id) and one with field
- # number 3 (message).
- total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3))
-
- # Add the number of bytes for type_id.
- total_size += _VarUInt64ByteSizeNoTag(field_number)
-
- message_size = msg.ByteSize()
-
- # The number of bytes for encoding the length of the message.
- total_size += _VarUInt64ByteSizeNoTag(message_size)
-
- # The size of the message.
- total_size += message_size
- return total_size
-
-
-def TagByteSize(field_number):
- """Returns the bytes required to serialize a tag with this field number."""
- # Just pass in type 0, since the type won't affect the tag+type size.
- return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))
-
-
-# Private helper function for the *ByteSize() functions above.
-
-def _VarUInt64ByteSizeNoTag(uint64):
- """Returns the number of bytes required to serialize a single varint
- using boundary value comparisons. (unrolled loop optimization -WPierce)
- uint64 must be unsigned.
- """
- if uint64 <= 0x7f: return 1
- if uint64 <= 0x3fff: return 2
- if uint64 <= 0x1fffff: return 3
- if uint64 <= 0xfffffff: return 4
- if uint64 <= 0x7ffffffff: return 5
- if uint64 <= 0x3ffffffffff: return 6
- if uint64 <= 0x1ffffffffffff: return 7
- if uint64 <= 0xffffffffffffff: return 8
- if uint64 <= 0x7fffffffffffffff: return 9
- if uint64 > UINT64_MAX:
- raise message.EncodeError('Value out of range: %d' % uint64)
- return 10
-
-
-NON_PACKABLE_TYPES = (
- descriptor.FieldDescriptor.TYPE_STRING,
- descriptor.FieldDescriptor.TYPE_GROUP,
- descriptor.FieldDescriptor.TYPE_MESSAGE,
- descriptor.FieldDescriptor.TYPE_BYTES
-)
-
-
-def IsTypePackable(field_type):
- """Return true iff packable = true is valid for fields of this type.
-
- Args:
- field_type: a FieldDescriptor::Type value.
-
- Returns:
- True iff fields of this type are packable.
- """
- return field_type not in NON_PACKABLE_TYPES
diff --git a/spaces/cihyFjudo/fairness-paper-search/Sony All Products Incl Multi Keygen And Patch v2.6 Update 24 11 2014 Download Now.md b/spaces/cihyFjudo/fairness-paper-search/Sony All Products Incl Multi Keygen And Patch v2.6 Update 24 11 2014 Download Now.md
deleted file mode 100644
index 2cc5741a3a9e7e4f7789561f89f1f2f11e7f761c..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Sony All Products Incl Multi Keygen And Patch v2.6 Update 24 11 2014 Download Now.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Sony All Products Incl Multi Keygen And Patch v2.6 Update 24 11 2014
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/utils.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/utils.py
deleted file mode 100644
index bf2767a0e6022c52690cdabf684b0b676ed0eadc..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/charset_normalizer/utils.py
+++ /dev/null
@@ -1,414 +0,0 @@
-import importlib
-import logging
-import unicodedata
-from codecs import IncrementalDecoder
-from encodings.aliases import aliases
-from functools import lru_cache
-from re import findall
-from typing import Generator, List, Optional, Set, Tuple, Union
-
-from _multibytecodec import MultibyteIncrementalDecoder
-
-from .constant import (
- ENCODING_MARKS,
- IANA_SUPPORTED_SIMILAR,
- RE_POSSIBLE_ENCODING_INDICATION,
- UNICODE_RANGES_COMBINED,
- UNICODE_SECONDARY_RANGE_KEYWORD,
- UTF8_MAXIMAL_ALLOCATION,
-)
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_accentuated(character: str) -> bool:
- try:
- description: str = unicodedata.name(character)
- except ValueError:
- return False
- return (
- "WITH GRAVE" in description
- or "WITH ACUTE" in description
- or "WITH CEDILLA" in description
- or "WITH DIAERESIS" in description
- or "WITH CIRCUMFLEX" in description
- or "WITH TILDE" in description
- )
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def remove_accent(character: str) -> str:
- decomposed: str = unicodedata.decomposition(character)
- if not decomposed:
- return character
-
- codes: List[str] = decomposed.split(" ")
-
- return chr(int(codes[0], 16))
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def unicode_range(character: str) -> Optional[str]:
- """
- Retrieve the Unicode range official name from a single character.
- """
- character_ord: int = ord(character)
-
- for range_name, ord_range in UNICODE_RANGES_COMBINED.items():
- if character_ord in ord_range:
- return range_name
-
- return None
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_latin(character: str) -> bool:
- try:
- description: str = unicodedata.name(character)
- except ValueError:
- return False
- return "LATIN" in description
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_ascii(character: str) -> bool:
- try:
- character.encode("ascii")
- except UnicodeEncodeError:
- return False
- return True
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_punctuation(character: str) -> bool:
- character_category: str = unicodedata.category(character)
-
- if "P" in character_category:
- return True
-
- character_range: Optional[str] = unicode_range(character)
-
- if character_range is None:
- return False
-
- return "Punctuation" in character_range
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_symbol(character: str) -> bool:
- character_category: str = unicodedata.category(character)
-
- if "S" in character_category or "N" in character_category:
- return True
-
- character_range: Optional[str] = unicode_range(character)
-
- if character_range is None:
- return False
-
- return "Forms" in character_range
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_emoticon(character: str) -> bool:
- character_range: Optional[str] = unicode_range(character)
-
- if character_range is None:
- return False
-
- return "Emoticons" in character_range
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_separator(character: str) -> bool:
- if character.isspace() or character in {"|", "+", "<", ">"}:
- return True
-
- character_category: str = unicodedata.category(character)
-
- return "Z" in character_category or character_category in {"Po", "Pd", "Pc"}
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_case_variable(character: str) -> bool:
- return character.islower() != character.isupper()
-
-
-def is_private_use_only(character: str) -> bool:
- character_category: str = unicodedata.category(character)
-
- return character_category == "Co"
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_cjk(character: str) -> bool:
- try:
- character_name = unicodedata.name(character)
- except ValueError:
- return False
-
- return "CJK" in character_name
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_hiragana(character: str) -> bool:
- try:
- character_name = unicodedata.name(character)
- except ValueError:
- return False
-
- return "HIRAGANA" in character_name
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_katakana(character: str) -> bool:
- try:
- character_name = unicodedata.name(character)
- except ValueError:
- return False
-
- return "KATAKANA" in character_name
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_hangul(character: str) -> bool:
- try:
- character_name = unicodedata.name(character)
- except ValueError:
- return False
-
- return "HANGUL" in character_name
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_thai(character: str) -> bool:
- try:
- character_name = unicodedata.name(character)
- except ValueError:
- return False
-
- return "THAI" in character_name
-
-
-@lru_cache(maxsize=len(UNICODE_RANGES_COMBINED))
-def is_unicode_range_secondary(range_name: str) -> bool:
- return any(keyword in range_name for keyword in UNICODE_SECONDARY_RANGE_KEYWORD)
-
-
-@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
-def is_unprintable(character: str) -> bool:
- return (
- character.isspace() is False # includes \n \t \r \v
- and character.isprintable() is False
- and character != "\x1A"  # Why? It's the ASCII substitute character.
- and character != "\ufeff" # bug discovered in Python,
- # Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space.
- )
-
-
-def any_specified_encoding(sequence: bytes, search_zone: int = 4096) -> Optional[str]:
- """
- Extract using ASCII-only decoder any specified encoding in the first n-bytes.
- """
- if not isinstance(sequence, bytes):
- raise TypeError
-
- seq_len: int = len(sequence)
-
- results: List[str] = findall(
- RE_POSSIBLE_ENCODING_INDICATION,
- sequence[: min(seq_len, search_zone)].decode("ascii", errors="ignore"),
- )
-
- if len(results) == 0:
- return None
-
- for specified_encoding in results:
- specified_encoding = specified_encoding.lower().replace("-", "_")
-
- encoding_alias: str
- encoding_iana: str
-
- for encoding_alias, encoding_iana in aliases.items():
- if encoding_alias == specified_encoding:
- return encoding_iana
- if encoding_iana == specified_encoding:
- return encoding_iana
-
- return None
-
-
-@lru_cache(maxsize=128)
-def is_multi_byte_encoding(name: str) -> bool:
- """
- Verify whether a specific encoding is a multi-byte one based on its IANA name
- """
- return name in {
- "utf_8",
- "utf_8_sig",
- "utf_16",
- "utf_16_be",
- "utf_16_le",
- "utf_32",
- "utf_32_le",
- "utf_32_be",
- "utf_7",
- } or issubclass(
- importlib.import_module("encodings.{}".format(name)).IncrementalDecoder,
- MultibyteIncrementalDecoder,
- )
-
-
-def identify_sig_or_bom(sequence: bytes) -> Tuple[Optional[str], bytes]:
- """
- Identify and extract SIG/BOM in given sequence.
- """
-
- for iana_encoding in ENCODING_MARKS:
- marks: Union[bytes, List[bytes]] = ENCODING_MARKS[iana_encoding]
-
- if isinstance(marks, bytes):
- marks = [marks]
-
- for mark in marks:
- if sequence.startswith(mark):
- return iana_encoding, mark
-
- return None, b""
-
-
-def should_strip_sig_or_bom(iana_encoding: str) -> bool:
- return iana_encoding not in {"utf_16", "utf_32"}
-
-
-def iana_name(cp_name: str, strict: bool = True) -> str:
- cp_name = cp_name.lower().replace("-", "_")
-
- encoding_alias: str
- encoding_iana: str
-
- for encoding_alias, encoding_iana in aliases.items():
- if cp_name in [encoding_alias, encoding_iana]:
- return encoding_iana
-
- if strict:
- raise ValueError("Unable to retrieve IANA for '{}'".format(cp_name))
-
- return cp_name
-
-
-def range_scan(decoded_sequence: str) -> List[str]:
- ranges: Set[str] = set()
-
- for character in decoded_sequence:
- character_range: Optional[str] = unicode_range(character)
-
- if character_range is None:
- continue
-
- ranges.add(character_range)
-
- return list(ranges)
-
-
-def cp_similarity(iana_name_a: str, iana_name_b: str) -> float:
- if is_multi_byte_encoding(iana_name_a) or is_multi_byte_encoding(iana_name_b):
- return 0.0
-
- decoder_a = importlib.import_module(
- "encodings.{}".format(iana_name_a)
- ).IncrementalDecoder
- decoder_b = importlib.import_module(
- "encodings.{}".format(iana_name_b)
- ).IncrementalDecoder
-
- id_a: IncrementalDecoder = decoder_a(errors="ignore")
- id_b: IncrementalDecoder = decoder_b(errors="ignore")
-
- character_match_count: int = 0
-
- for i in range(255):
- to_be_decoded: bytes = bytes([i])
- if id_a.decode(to_be_decoded) == id_b.decode(to_be_decoded):
- character_match_count += 1
-
- return character_match_count / 254
-
-
-def is_cp_similar(iana_name_a: str, iana_name_b: str) -> bool:
- """
- Determine if two code page are at least 80% similar. IANA_SUPPORTED_SIMILAR dict was generated using
- the function cp_similarity.
- """
- return (
- iana_name_a in IANA_SUPPORTED_SIMILAR
- and iana_name_b in IANA_SUPPORTED_SIMILAR[iana_name_a]
- )
-
-
-def set_logging_handler(
- name: str = "charset_normalizer",
- level: int = logging.INFO,
- format_string: str = "%(asctime)s | %(levelname)s | %(message)s",
-) -> None:
- logger = logging.getLogger(name)
- logger.setLevel(level)
-
- handler = logging.StreamHandler()
- handler.setFormatter(logging.Formatter(format_string))
- logger.addHandler(handler)
-
-
-def cut_sequence_chunks(
- sequences: bytes,
- encoding_iana: str,
- offsets: range,
- chunk_size: int,
- bom_or_sig_available: bool,
- strip_sig_or_bom: bool,
- sig_payload: bytes,
- is_multi_byte_decoder: bool,
- decoded_payload: Optional[str] = None,
-) -> Generator[str, None, None]:
- if decoded_payload and is_multi_byte_decoder is False:
- for i in offsets:
- chunk = decoded_payload[i : i + chunk_size]
- if not chunk:
- break
- yield chunk
- else:
- for i in offsets:
- chunk_end = i + chunk_size
- if chunk_end > len(sequences) + 8:
- continue
-
- cut_sequence = sequences[i : i + chunk_size]
-
- if bom_or_sig_available and strip_sig_or_bom is False:
- cut_sequence = sig_payload + cut_sequence
-
- chunk = cut_sequence.decode(
- encoding_iana,
- errors="ignore" if is_multi_byte_decoder else "strict",
- )
-
- # multi-byte bad cutting detector and adjustment
- # not the cleanest way to perform that fix but clever enough for now.
- if is_multi_byte_decoder and i > 0:
- chunk_partial_size_chk: int = min(chunk_size, 16)
-
- if (
- decoded_payload
- and chunk[:chunk_partial_size_chk] not in decoded_payload
- ):
- for j in range(i, i - 4, -1):
- cut_sequence = sequences[j:chunk_end]
-
- if bom_or_sig_available and strip_sig_or_bom is False:
- cut_sequence = sig_payload + cut_sequence
-
- chunk = cut_sequence.decode(encoding_iana, errors="ignore")
-
- if chunk[:chunk_partial_size_chk] in decoded_payload:
- break
-
- yield chunk
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/__init__.py
deleted file mode 100644
index 12e414fc3bf00e6152f953b989914f034edfe9e1..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""OpenType Layout-related functionality."""
diff --git a/spaces/codejin/diffsingerkr/Arg_Parser.py b/spaces/codejin/diffsingerkr/Arg_Parser.py
deleted file mode 100644
index 2ac9bd2133a61a6299b16e20fe94655da7bbcb01..0000000000000000000000000000000000000000
--- a/spaces/codejin/diffsingerkr/Arg_Parser.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from argparse import Namespace
-
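-# Recursively convert a (possibly nested) dict into nested argparse.Namespace objects.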
-def Recursive_Parse(args_dict):
- parsed_dict = {}
- for key, value in args_dict.items():
- if isinstance(value, dict):
- value = Recursive_Parse(value)
- parsed_dict[key] = value
-
- args = Namespace()
- args.__dict__ = parsed_dict
- return args
-
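-# Flatten nested Namespace objects back into a flat dict with dot-separated keys.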
-def To_Non_Recursive_Dict(
- args: Namespace
- ):
- parsed_dict = {}
- for key, value in args.__dict__.items():
- if isinstance(value, Namespace):
- value_dict = To_Non_Recursive_Dict(value)
- for sub_key, sub_value in value_dict.items():
- parsed_dict[f'{key}.{sub_key}'] = sub_value
- else:
- parsed_dict[key] = value
-
- return parsed_dict
-
-
-
-
\ No newline at end of file
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/h264chroma_init_aarch64.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/h264chroma_init_aarch64.c
deleted file mode 100644
index 00fc7b20f17d3d40dbe83b41dbc6699afabf298b..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/h264chroma_init_aarch64.c
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * ARM NEON optimised H.264 chroma functions
- * Copyright (c) 2008 Mans Rullgard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>
-
-#include "libavutil/attributes.h"
-#include "libavutil/cpu.h"
-#include "libavutil/aarch64/cpu.h"
-#include "libavcodec/h264chroma.h"
-
-#include "config.h"
-
-void ff_put_h264_chroma_mc8_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride,
- int h, int x, int y);
-void ff_put_h264_chroma_mc4_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride,
- int h, int x, int y);
-void ff_put_h264_chroma_mc2_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride,
- int h, int x, int y);
-
-void ff_avg_h264_chroma_mc8_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride,
- int h, int x, int y);
-void ff_avg_h264_chroma_mc4_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride,
- int h, int x, int y);
-void ff_avg_h264_chroma_mc2_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride,
- int h, int x, int y);
-
-av_cold void ff_h264chroma_init_aarch64(H264ChromaContext *c, int bit_depth)
-{
- const int high_bit_depth = bit_depth > 8;
- int cpu_flags = av_get_cpu_flags();
-
- if (have_neon(cpu_flags) && !high_bit_depth) {
- c->put_h264_chroma_pixels_tab[0] = ff_put_h264_chroma_mc8_neon;
- c->put_h264_chroma_pixels_tab[1] = ff_put_h264_chroma_mc4_neon;
- c->put_h264_chroma_pixels_tab[2] = ff_put_h264_chroma_mc2_neon;
-
- c->avg_h264_chroma_pixels_tab[0] = ff_avg_h264_chroma_mc8_neon;
- c->avg_h264_chroma_pixels_tab[1] = ff_avg_h264_chroma_mc4_neon;
- c->avg_h264_chroma_pixels_tab[2] = ff_avg_h264_chroma_mc2_neon;
- }
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3defs.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3defs.h
deleted file mode 100644
index ff92f0ac4ab9970748239d9a908c1843ca0d4661..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3defs.h
+++ /dev/null
@@ -1,104 +0,0 @@
-/*
- * Common AC-3 definitions
- * Copyright (c) 2000, 2001, 2002 Fabrice Bellard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_AC3DEFS_H
-#define AVCODEC_AC3DEFS_H
-
-#define EAC3_MAX_CHANNELS 16 /**< maximum number of channels in EAC3 */
-#define AC3_MAX_CHANNELS 7 /**< maximum number of channels, including coupling channel */
-#define CPL_CH 0 /**< coupling channel index */
-
-#define AC3_MAX_COEFS 256
-#define AC3_BLOCK_SIZE 256
-#define AC3_MAX_BLOCKS 6
-#define AC3_FRAME_SIZE (AC3_MAX_BLOCKS * 256)
-#define AC3_WINDOW_SIZE (AC3_BLOCK_SIZE * 2)
-#define AC3_CRITICAL_BANDS 50
-#define AC3_MAX_CPL_BANDS 18
-
-/* exponent encoding strategy */
-#define EXP_REUSE 0
-#define EXP_NEW 1
-
-#define EXP_D15 1
-#define EXP_D25 2
-#define EXP_D45 3
-
-/** Delta bit allocation strategy */
-typedef enum {
- DBA_REUSE = 0,
- DBA_NEW,
- DBA_NONE,
- DBA_RESERVED
-} AC3DeltaStrategy;
-
-/** Channel mode (audio coding mode) */
-typedef enum {
- AC3_CHMODE_DUALMONO = 0,
- AC3_CHMODE_MONO,
- AC3_CHMODE_STEREO,
- AC3_CHMODE_3F,
- AC3_CHMODE_2F1R,
- AC3_CHMODE_3F1R,
- AC3_CHMODE_2F2R,
- AC3_CHMODE_3F2R
-} AC3ChannelMode;
-
-/** Dolby Surround mode */
-typedef enum AC3DolbySurroundMode {
- AC3_DSURMOD_NOTINDICATED = 0,
- AC3_DSURMOD_OFF,
- AC3_DSURMOD_ON,
- AC3_DSURMOD_RESERVED
-} AC3DolbySurroundMode;
-
-/** Dolby Surround EX mode */
-typedef enum AC3DolbySurroundEXMode {
- AC3_DSUREXMOD_NOTINDICATED = 0,
- AC3_DSUREXMOD_OFF,
- AC3_DSUREXMOD_ON,
- AC3_DSUREXMOD_PLIIZ
-} AC3DolbySurroundEXMode;
-
-/** Dolby Headphone mode */
-typedef enum AC3DolbyHeadphoneMode {
- AC3_DHEADPHONMOD_NOTINDICATED = 0,
- AC3_DHEADPHONMOD_OFF,
- AC3_DHEADPHONMOD_ON,
- AC3_DHEADPHONMOD_RESERVED
-} AC3DolbyHeadphoneMode;
-
-/** Preferred Stereo Downmix mode */
-typedef enum AC3PreferredStereoDownmixMode {
- AC3_DMIXMOD_NOTINDICATED = 0,
- AC3_DMIXMOD_LTRT,
- AC3_DMIXMOD_LORO,
- AC3_DMIXMOD_DPLII // reserved value in A/52, but used by encoders to indicate DPL2
-} AC3PreferredStereoDownmixMode;
-
-typedef enum {
- EAC3_FRAME_TYPE_INDEPENDENT = 0,
- EAC3_FRAME_TYPE_DEPENDENT,
- EAC3_FRAME_TYPE_AC3_CONVERT,
- EAC3_FRAME_TYPE_RESERVED
-} EAC3FrameType;
-
-#endif /* AVCODEC_AC3DEFS_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Cmo descargar e instalar Getting Over It APK en tu dispositivo Android.md b/spaces/congsaPfin/Manga-OCR/logs/Cmo descargar e instalar Getting Over It APK en tu dispositivo Android.md
deleted file mode 100644
index 741e1a0ea6b931fde5ac3c478c72e4dde8e79f4f..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Cmo descargar e instalar Getting Over It APK en tu dispositivo Android.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-
Getting Over It APK: A Challenging and Rewarding Climbing Game
-
If you are looking for a game that will test your skills, patience, and perseverance, you might want to try Getting Over It APK. This is a climbing game that will make you feel a range of emotions, from frustration to satisfaction, from anger to laughter. In this article, we will tell you everything you need to know about this game, including what it is, why you should play it, and how to download and install it on your Android device.
Getting Over It APK is a climbing game that was created by Bennett Foddy as a homage to Jazzuo's 2002 B-Game classic 'Sexy Hiking'. The game was released in 2017 for Windows and iOS, and later ported to Android by Noodlecake Studios. The game has received critical acclaim and popularity for its unique gameplay, difficulty, and commentary.
-
The origin and inspiration of the game
-
The game was inspired by Jazzuo's 'Sexy Hiking', which was a B-Game that featured a man with a hammer trying to climb a mountain made of random objects. B-Games are games that are intentionally bad or weird, often made as jokes or experiments. Bennett Foddy, who is a game designer and a professor of philosophy, decided to make his own version of 'Sexy Hiking' as a tribute to Jazzuo and the B-Game genre. He also wanted to explore the themes of frustration, failure, and perseverance in games.
-
The gameplay and mechanics of the game
-
The game is very simple in terms of gameplay and mechanics. You control a man who is stuck in a pot with only a hammer. You use the mouse or touch screen to move the hammer around, which allows you to push, pull, swing, or hook yourself onto various objects on the mountain. Your goal is to reach the top of the mountain without falling down. However, this is easier said than done, as the game has no checkpoints, save points, or undo buttons. If you make a mistake or lose your grip, you can fall all the way back to the bottom or even further. The game also has no end or reward, except for a secret ending that only a few players have seen.
-
The difficulty and frustration of the game
-
The game is notoriously difficult and frustrating, as it requires precise timing, coordination, and patience to master. The game also has a steep learning curve, as it takes time to get used to the physics and controls of the hammer. The game is designed to make you feel angry, annoyed, or hopeless at times, as you can lose hours of progress in seconds. The game also has random elements that can affect your performance, such as wind, gravity, or glitches. The game is not for everyone, as some people might find it too hard or unfair.
-
Why should you play Getting Over It APK?
-
Despite its difficulty and frustration, Getting Over It APK is also a very rewarding and enjoyable game for many reasons. Here are some of them:
-
The satisfaction and achievement of the game
-
The game is not impossible to beat, as many players have proven by reaching the top of the mountain or even speedrunning it in minutes. The game gives you a sense of satisfaction and achievement when you overcome a difficult obstacle or make progress on your climb. The game also challenges you to improve your skills and learn from your mistakes. The game can be very addictive and fun to play.
The philosophy and humor of the game
-
The game is not just a game, but also a philosophical and humorous commentary on life, games, and art. The game features a voice-over narration by Bennett Foddy himself, who talks to you throughout your climb. He gives you insights, anecdotes, quotes, jokes, and references to various topics, such as history, literature, music, movies, or other games. He also reacts to your actions, whether you succeed or fail, and sometimes breaks the fourth wall. The narration is witty, sarcastic, and sometimes motivational, depending on your perspective. The game also has many Easter eggs and secrets that add to the humor and mystery of the game.
-
The graphics and sound of the game
-
The game has a minimalist and retro style of graphics, which contrast with the realistic physics and sounds of the game. The game uses hand-drawn 2D graphics that resemble old-school games or cartoons. The game also has a simple but effective color scheme that changes according to the time of day or the location on the mountain. The game has a soothing and atmospheric soundtrack that consists of classical music and ambient sounds. The game also has realistic sound effects that match the movements and impacts of the hammer and the pot. The game creates a unique and immersive experience that appeals to both the eyes and the ears.
-
How to download and install Getting Over It APK?
-
If you want to play Getting Over It APK on your Android device, you will need to download and install it from a reliable source. Here are some things you need to know before you do that:
-
getting over it with bennett foddy apk android
-descargar getting over it gratis para android
-getting over it apk download for android
-getting over it apk mod android
-getting over it apk full android
-getting over it apk obb android
-getting over it android apk free
-getting over it apk no verification android
-getting over it apk latest version android
-getting over it apk offline android
-getting over it apk hack android
-getting over it apk cracked android
-getting over it apk mega android
-getting over it apk mediafıre android
-getting over it apk sin verificacion android
-getting over it apk uptodown android
-getting over it apk revdl android
-getting over it apk rexdl android
-getting over it apk highly compressed android
-getting over it apk unlimited money android
-getting over it apk data android
-getting over it apk google play android
-getting over it apk original android
-getting over it apk premium android
-getting over it apk pro android
-getting over it apk paid android
-getting over it apk unlocked android
-getting over it apk unlimited coins android
-getting over it apk no root android
-getting over it apk no ads android
-getting over it apk gameplay android
-getting over it apk review android
-getting over it apk tips and tricks android
-getting over it apk cheats and hacks android
-getting over it apk walkthrough and guide android
-getting over it apk best settings android
-getting over it apk requirements and compatibility android
-getting over it apk features and benefits android
-getting over it apk size and performance android
-getting over it apk graphics and sound quality android
-getting over it apk update and patch notes android
-getting over it apk bugs and issues android
-getting over it apk support and feedback android
-getting over it apk alternatives and similar apps android
-getting over it apk ranking and rating android
-getting over it apk awards and achievements android
-getting over it apk news and events android
-getting over it apk community and forums android
-
The requirements and compatibility of the game
-
The game requires Android 5.0 or higher to run smoothly. The game also requires at least 100 MB of free storage space on your device. The game is compatible with most Android devices, but some older or low-end devices might experience lag or crashes. The game also supports external controllers for better control.
-
The steps and precautions of the installation process
-
The game is not available on the Google Play Store, so you will need to download it from a third-party website or app store. However, you should be careful when doing so, as some sources might contain malware or viruses that can harm your device or steal your data. You should only download the game from trusted and verified sources that have positive reviews and ratings from other users. You should also scan the APK file with an antivirus app before installing it.
-
Once you have downloaded the APK file, you will need to enable the installation of apps from unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You might see a warning message that tells you about the risks of installing apps from unknown sources, but you can ignore it if you trust the source of the APK file.
-
After enabling unknown sources, you can locate the APK file on your device using a file manager app or your browser's downloads folder. Tap on the APK file and follow the instructions on the screen to install it. You might see another warning message that tells you about the permissions that the app requires, but you can accept them if you trust the app.
-
Once the installation is complete, you can launch the game from your app drawer or home screen and enjoy it.
-
The alternatives and sources of the game
-
If you are unable to download or install Getting Over It APK on your device for some reason, or if you want to try other similar games, you have some alternatives and sources to choose from. Here are some of them:
-
-
| Alternative | Description | Source |
|-------------|-------------|--------|
| Pogostuck: Rage With Your Friends | A multiplayer climbing game that features different characters, maps, modes, and customization options. | Google Play Store |
| Just Getting Over It with Bennett Foddy | A parody of Getting Over It that features Bennett Foddy himself as the character in the pot. | Google Play Store |
| Climb With Wheelbarrow | A climbing game that features a man in a wheelbarrow instead of a pot. | Google Play Store |
| Getting Over It with Bennett Foddy Mod APK | A modified version of Getting Over It that offers unlimited money, unlocked items, no ads, and more. | APKPure.com |
| Getting Over It with Bennett Foddy PC Version | The original version of Getting Over It that runs on Windows and Mac computers. | Steam or Humble Bundle |
-
-
Conclusion
-
Getting Over It APK is a climbing game that will challenge you, frustrate you, and reward you. It is a game that is inspired by a B-Game classic, and features unique gameplay, difficulty, and commentary. It is a game that will make you feel a range of emotions, from anger to laughter, from despair to satisfaction. It is a game that you can download and install on your Android device, or try other alternatives and sources. It is a game that you should play if you are looking for a different and memorable gaming experience.
-
FAQs
-
Here are some frequently asked questions about Getting Over It APK:
-
-
Q: How long does it take to beat the game?
-
A: It depends on your skill level, luck, and persistence. Some players have beaten the game in less than 10 minutes, while others have spent hours or days on it. The average time to beat the game is around 5 hours, according to HowLongToBeat.com.
-
Q: What is the secret ending of the game?
-
A: The secret ending of the game is a reward for the players who manage to reach the top of the mountain. It involves a golden cauldron, a chat room, and a special song. We won't spoil it for you, but you can watch it on YouTube if you are curious.
-
Q: Is the game based on a true story?
-
A: No, the game is not based on a true story. However, the game does reference some real-life events and people, such as the Apollo 11 moon landing, Albert Camus, Che Guevara, and more.
-
Q: Can I play the game offline?
-
A: Yes, you can play the game offline. However, you will need an internet connection to access some features of the game, such as the chat room and the leaderboards.
-
Q: Is there a way to cheat or hack the game?
-
A: There are some ways to cheat or hack the game, such as using mods, trainers, or glitches. However, we do not recommend doing so, as it will ruin the fun and challenge of the game. Also, cheating or hacking might cause your device to malfunction or get infected by malware.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Craftsman 6 APK Create Explore and Survive in a Sandbox World.md b/spaces/congsaPfin/Manga-OCR/logs/Craftsman 6 APK Create Explore and Survive in a Sandbox World.md
deleted file mode 100644
index 37eed93c351085837db7da0159ca81d8ee29b2a5..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Craftsman 6 APK Create Explore and Survive in a Sandbox World.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
Download Craftsman 6 APK: A Free and Fun Sandbox Game for Android
-
If you are looking for a game that lets you unleash your creativity and imagination, then you might want to try Craftsman 6 APK. This is a free sandbox game that allows you to design and build your own world with blocks. You can explore, craft, and survive in this pixelated adventure. In this article, we will tell you everything you need to know about Craftsman 6 APK, including its features, how to download and install it, why you should play it, and some tips and tricks to help you enjoy it more. We will also give you some alternatives to Craftsman 6 APK in case you want to try something different.
Craftsman 6 APK is a game that is inspired by Minecraft, a popular sandbox game that has millions of fans around the world. However, unlike Minecraft, which is a paid game, Craftsman 6 APK is completely free to play. You can download it from various websites or app stores without spending any money.
-
Craftsman 6 APK is similar to Minecraft in many ways. You can create your own world with blocks of different materials, such as dirt, stone, wood, and more. You can also explore the infinite map and discover different biomes, such as forests, deserts, mountains, and oceans. You can also craft tools, weapons, armor, and other items to help you survive and fight against enemies. You can also play with your friends online or offline in multiplayer mode.
-
Features of Craftsman 6 APK
-
Some of the features that make Craftsman 6 APK a fun and enjoyable game are:
-
-
Stunning graphics and realistic sound. The game has beautiful pixel art graphics that are colorful and detailed. The sound effects are also realistic and immersive, making you feel like you are in the game world.
-
Simple, easy to play. The game has intuitive controls that make building and crafting easy and enjoyable. You can use the touch screen or the virtual joystick to move around and interact with the environment. You can also switch between first-person and third-person views to suit your preference.
-
Many game modes. The game offers different game modes for different play styles. You can choose between creative mode, where you have unlimited resources and no enemies, or survival mode, where you have to gather resources and fend off enemies. You can also choose between single-player mode, where you play alone, or multiplayer mode, where you play with other players online or offline.
-
Very much like the real world. The game has a realistic physics system that makes the blocks behave like they would in real life. For example, if you place a block on top of another block, it will stay there unless you remove it or something else pushes it down. If you place a block in mid-air, it will fall down due to gravity. You can also use water and lava to create different effects.
-
A lot of interesting things. The game has a lot of content and features that will keep you entertained for hours. You can find animals, plants, ores, chests, villages, dungeons, temples, and other structures in the world. You can also craft various items, such as beds, doors, ladders, torches, furnaces, chests, crafting tables, anvils, enchantment tables, potions, maps, compasses, clocks, books, paintings, banners, fireworks, armor stands, and more. You can also customize your character with different skins and accessories.
-
-
How to download and install Craftsman 6 APK
-
If you want to download and install Craftsman 6 APK on your Android device, here are the steps you need to follow:
-
How to download craftsman 6 apk for free
-Download craftsman 6 apk latest version
-Craftsman 6 apk mod unlimited money
-Craftsman 6 apk online multiplayer
-Craftsman 6 apk gameplay and features
-Download craftsman 6 apk for android devices
-Craftsman 6 apk review and rating
-Craftsman 6 apk download link and installation guide
-Craftsman 6 apk tips and tricks
-Craftsman 6 apk best builds and designs
-Download craftsman 6 apk for PC windows
-Craftsman 6 apk offline mode and cheats
-Craftsman 6 apk vs minecraft comparison
-Craftsman 6 apk update and patch notes
-Craftsman 6 apk alternatives and similar games
-Download craftsman 6 apk for iOS devices
-Craftsman 6 apk custom skins and textures
-Craftsman 6 apk servers and communities
-Craftsman 6 apk challenges and achievements
-Craftsman 6 apk bugs and fixes
-Download craftsman 6 apk for mac os
-Craftsman 6 apk creative mode and sandbox
-Craftsman 6 apk maps and worlds
-Craftsman 6 apk videos and screenshots
-Craftsman 6 apk system requirements and compatibility
-Download craftsman 6 apk for linux os
-Craftsman 6 apk survival mode and adventure
-Craftsman 6 apk weapons and tools
-Craftsman 6 apk animals and mobs
-Craftsman 6 apk secrets and easter eggs
-Download craftsman 6 apk for chrome os
-Craftsman 6 apk blocks and materials
-Craftsman 6 apk vehicles and machines
-Craftsman 6 apk plants and biomes
-Craftsman 6 apk weather and time
-Download craftsman 6 apk for fire os
-Craftsman 6 apk furniture and decorations
-Craftsman 6 apk redstone and circuits
-Craftsman 6 apk commands and codes
-Craftsman 6 apk skins and mods
-
-
Go to a trusted website or app store that offers Craftsman 6 APK, such as [APKPure], [APKCombo], or [Uptodown].
-
Click on the download button and wait for the file to be downloaded on your device.
-
Go to your device settings and enable the installation of apps from unknown sources. This will allow you to install Craftsman 6 APK without any problems.
-
Go to your file manager and locate the downloaded Craftsman 6 APK file. Tap on it and follow the instructions to install it on your device (a quick way to inspect the file from a computer first is sketched after these steps).
-
Launch the game and enjoy playing Craftsman 6 APK.
-
-
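As mentioned in the installation steps above, it can be worth inspecting the downloaded file from a computer before putting it on your phone. The sketch below is one hedged way to do that with aapt from the Android SDK build-tools, which prints the package name, version, and the minimum Android version the APK declares; it assumes aapt is on your PATH, and the file name is only a placeholder.

```python
# Minimal sketch: before installing, inspect the downloaded APK from a computer
# with aapt (Android SDK build-tools) to see its package name, version, and the
# minimum Android version it declares. Assumes aapt is on your PATH.
import subprocess

apk_path = "craftsman-6.apk"  # hypothetical file name

out = subprocess.run(["aapt", "dump", "badging", apk_path],
                     capture_output=True, text=True, check=True)
for line in out.stdout.splitlines():
    # Keep only the summary lines we care about.
    if line.startswith(("package:", "sdkVersion:", "targetSdkVersion:")):
        print(line)
```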
Why play Craftsman 6 APK?
-
Craftsman 6 APK is a game that has many benefits and advantages for its players. Here are some of the reasons why you should play Craftsman 6 APK:
-
Pros and cons of Craftsman 6 APK
-
Like any other game, Craftsman 6 APK has its pros and cons. Here are some of them:
-
-
-
| Pros | Cons |
|---|---|
| It is free to play and download. | It may have some bugs and glitches. |
| It is fun and creative. | It may consume a lot of battery and storage. |
| It is suitable for all ages. | It may not be compatible with some devices. |
| It has many game modes and features. | It may have some ads and in-app purchases. |
| It has multiplayer support. | It may not be updated regularly. |
-
-
-
Tips and tricks for Craftsman 6 APK
-
If you want to improve your skills and experience in Craftsman 6 APK, here are some tips and tricks that you can use:
-
-
Use the right tools for the right blocks. Different blocks require different tools to break or place them. For example, you need a pickaxe to break stone, a shovel to break dirt, an axe to break wood, and so on. Using the right tools will make your work faster and easier.
-
Craft a bed as soon as possible. A bed is an essential item that allows you to sleep and set your spawn point. This means that if you die, you will respawn at your bed instead of at a random location. To craft a bed, you need three wool and three planks of the same color.
-
Light up your surroundings. Lighting is important for two reasons: it prevents monsters from spawning in dark areas, and it helps you see better at night or underground. You can use torches, lanterns, glowstone, or other light sources to illuminate your surroundings.
-
Gather resources wisely. Resources are limited in survival mode, so you need to gather them wisely. You can use chests, barrels, or other containers to store your items. You can also use maps, compasses, or coordinates to keep track of your location. You can also use signs, banners, or other markers to label your buildings or landmarks.
-
Be prepared for combat. Combat is inevitable in survival mode, so you need to be prepared for it. You can craft weapons, armor, shields, bows, arrows, and other items to help you fight against enemies. You can also use potions, food, or golden apples to heal yourself or boost your abilities. You can also use traps, walls, fences, or doors to protect yourself or your base from invaders.
-
-
Alternatives to Craftsman 6 APK
-
If you want to try something different from Craftsman 6 APK, here are some alternatives that you can play:
-
-
Minecraft PE. This is the official mobile version of Minecraft, which offers the same gameplay and features as the original game. You can create your own world with blocks, explore, craft, survive, and play with other players online or offline. However, this game is not free; you need to pay a one-time fee of $6.99 to download it from Google Play or App Store.
-
Craftopia. This is another sandbox game that is similar to Minecraft but has more elements of RPGs and survival games. You can level up your character, learn skills, craft items, farm crops, tame animals, fish, hunt monsters, and more. You can also play with up to four players online or offline. This game is also not free; you need to pay $14.99 to download it from Steam or App Store.
-
Roblox. This is a platform that allows you to play and create various games with blocks. You can choose from millions of games created by other users, or make your own game with the Roblox Studio. You can also customize your avatar, chat with other players, join groups, and earn virtual currency. This game is free to play and download from Google Play or App Store, but it has some in-app purchases and ads.
-
Terraria. This is a 2D sandbox game that has elements of adventure, exploration, combat, and crafting. You can dig, build, fight, and explore in a randomly generated world with different biomes, enemies, bosses, items, and events. You can also play with up to eight players online or offline. This game is not free; you need to pay $4.99 to download it from Google Play or App Store.
-
Block Craft 3D. This is a 3D sandbox game that is similar to Craftsman 6 APK but has more focus on building and designing. You can create your own city with various buildings, such as houses, castles, skyscrapers, and more. You can also visit other cities created by other players and rate them. This game is free to play and download from Google Play or App Store, but it has some in-app purchases and ads.
-
-
Conclusion
-
Craftsman 6 APK is a free and fun sandbox game that lets you create your own world with blocks. You can explore, craft, survive, and play with your friends in this pixelated adventure. You can download it from various websites or app stores without any hassle. However, you should also be aware of its pros and cons, and some tips and tricks to help you enjoy it more. You can also try some alternatives to Craftsman 6 APK if you want to experience something different.
-
FAQs
-
Here are some frequently asked questions about Craftsman 6 APK:
-
-
Is Craftsman 6 APK safe to download? Yes, Craftsman 6 APK is safe to download as long as you get it from a trusted website or app store. However, you should always scan the file for viruses or malware before installing it on your device.
-
Is Craftsman 6 APK compatible with my device? Craftsman 6 APK is compatible with most Android devices that have Android 4.1 or higher. However, some devices may not be able to run the game smoothly due to low specifications or performance issues.
-
How do I update Craftsman 6 APK? To update Craftsman 6 APK, you need to download the latest version of the game from the same website or app store where you got the previous version. Then, you need to uninstall the old version and install the new one on your device.
-
How do I uninstall Craftsman 6 APK? To uninstall Craftsman 6 APK, you need to go to your device settings and find the app in the list of installed apps. Then, you need to tap on it and select the uninstall option.
-
How do I contact the developer of Craftsman 6 APK? To contact the developer of Craftsman 6 APK, you can send an email to craftsman6apk@gmail.com or visit their Facebook page at https://www.facebook.com/Craftsman-6-APK-105678918544321.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Play GT Club Drag Racing Car Game MOD APK - The Free Racing Game with Drag Style and No Brakes.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Play GT Club Drag Racing Car Game MOD APK - The Free Racing Game with Drag Style and No Brakes.md
deleted file mode 100644
index aec9a47c1988180d088d0727aa047f44725b23e5..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download and Play GT Club Drag Racing Car Game MOD APK - The Free Racing Game with Drag Style and No Brakes.md
+++ /dev/null
@@ -1,73 +0,0 @@
-
-
Download GT Club Drag Racing Car Game Mod APK
-
If you are a fan of drag racing games, you might want to check out GT Club Drag Racing Car Game. This is a thrilling and realistic racing game that lets you compete with other players online and offline. You can choose from a variety of supercars, customize them, and upgrade them to suit your preferences. You can also join clubs, participate in tournaments, and win rewards.
-
However, if you want to enjoy the game to the fullest, you might need to spend some real money to unlock all the cars and features. That's why we recommend downloading the mod APK version of GT Club Drag Racing Car Game. This is a modified version that gives you unlimited money, gold, and other benefits for free. In this article, we will tell you more about this mod APK and how to download and install it on your device.
GT Club Drag Racing Car Game is a racing game developed by TSQ Publishing Corp. The game has tens of millions of official installs from Google Play. The game features realistic physics, stunning graphics, and immersive sound effects. You can race against other players in real-time or offline mode. You can also join clubs, chat with other racers, and challenge them to duels.
-
Why download the mod APK version?
-
The mod APK version of GT Club Drag Racing Car Game is a hacked version that gives you unlimited money, gold, and other benefits for free. You can use these resources to unlock all the cars, customize them, and upgrade them to the max level. You can also enjoy the game without any ads or root requirement. The mod APK version is safe and easy to install on your device.
-
Features of GT Club Drag Racing Car Game Mod APK
-
Unlimited money and gold
-
Money and gold are the main currencies in GT Club Drag Racing Car Game. You need them to buy new cars, upgrade them, and access premium features. However, earning money and gold in the game can be time-consuming and tedious. That's why the mod APK version gives you unlimited money and gold for free. You can spend them as much as you want without worrying about running out.
-
All cars unlocked and upgraded
-
The game offers a wide range of supercars from famous brands like Ferrari, Lamborghini, Bugatti, Porsche, and more. However, not all of them are available at the beginning. You need to unlock them by completing missions, winning races, or spending money and gold. With the mod APK version, you can unlock all the cars for free. You can also upgrade them to the max level without any cost.
-
No ads and no root required
-
Another benefit of downloading the mod APK version of GT Club Drag Racing Car Game is that you can enjoy the game without any ads or root requirement. Ads can be annoying and distracting when you are racing. Rooting your device can be risky and complicated. That's why the mod APK version removes all the ads and does not require root access to work. You can play the game smoothly and safely.
-
High-quality graphics and sound effects
-
The game boasts high-quality graphics and sound effects that make you feel like you are in a real drag race. You can see the details of the cars, the tracks, and the environment. You can also hear the roar of the engines, the screech of the tires, and the cheers of the crowd. You can also adjust the graphics and sound settings to suit your device and preference.
-
How to download and install GT Club Drag Racing Car Game Mod APK
-
If you are interested in downloading and installing the mod APK version of GT Club Drag Racing Car Game, you can follow these simple steps:
-
How to download gt club drag racing car game mod apk for free
-GT club drag racing car game mod apk unlimited money and gems
-Best drag racing car games for android with gt club mod apk
-GT club drag racing car game mod apk latest version download
-Download gt club drag racing car game mod apk offline
-GT club drag racing car game mod apk features and gameplay
-GT club drag racing car game mod apk review and rating
-Download gt club drag racing car game mod apk for PC
-GT club drag racing car game mod apk cheats and hacks
-GT club drag racing car game mod apk download link and installation guide
-GT club drag racing car game mod apk vs real racing 3
-GT club drag racing car game mod apk tips and tricks
-GT club drag racing car game mod apk online multiplayer mode
-Download gt club drag racing car game mod apk without root
-GT club drag racing car game mod apk no ads and in-app purchases
-GT club drag racing car game mod apk system requirements and compatibility
-Download gt club drag racing car game mod apk from APKCombo[^1^]
-GT club drag racing car game mod apk update and patch notes
-GT club drag racing car game mod apk screenshots and videos
-Download gt club drag racing car game mod apk with obb data file
-GT club drag racing car game mod apk new cars and tracks
-GT club drag racing car game mod apk customizations and upgrades
-Download gt club drag racing car game mod apk with manual gears
-GT club drag racing car game mod apk speed test and performance
-GT club drag racing car game mod apk feedback and support
-
Step 1: Enable unknown sources on your device
-
Before you can install any mod APK file on your device, you need to enable unknown sources. This is a security feature that prevents the installation of apps from sources other than Google Play. To enable unknown sources, go to your device settings, then security, then toggle on the unknown sources option.
-
Step 2: Download the mod APK file from a trusted source
-
Next, you need to download the mod APK file of GT Club Drag Racing Car Game from a trusted source. You can use the link below to download the latest version of the mod APK file. Make sure you have enough storage space on your device before downloading the file.
Step 3: Locate and install the mod APK file on your device
-
After downloading the mod APK file, you need to locate and install it on your device. You can use any file manager app to find the downloaded file in your downloads folder. Tap on the file and follow the instructions to install it on your device.
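If you prefer to install from a computer instead of tapping the file on the phone, adb can push and install the APK in one step. This is only a sketch of that alternative: it assumes adb is installed, USB debugging is enabled on the device, and the file name below is a placeholder for your actual download.

```python
# Minimal sketch: install a downloaded APK from a computer over USB with adb,
# as an alternative to tapping the file on the device itself.
import subprocess

apk_path = "GT-Club-Drag-Racing.apk"  # hypothetical file name

# -r reinstalls/updates the app if an older build is already present.
result = subprocess.run(["adb", "install", "-r", apk_path],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```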
-
Step 4: Launch the game and enjoy the mod features
-
Finally, you can launch the game and enjoy the mod features. You will see that you have unlimited money, gold, and other benefits. You can also access all the cars, customize them, and upgrade them. You can also play the game without any ads or root requirement.
-
Conclusion
-
GT Club Drag Racing Car Game is a fun and exciting racing game that lets you compete with other players online and offline. You can choose from a variety of supercars, customize them, and upgrade them to suit your preferences. You can also join clubs, participate in tournaments, and win rewards.
-
However, if you want to enjoy the game to the fullest, you might want to download the mod APK version of GT Club Drag Racing Car Game. This is a modified version that gives you unlimited money, gold, and other benefits for free. You can use these resources to unlock all the cars and features. You can also enjoy the game without any ads or root requirement.
-
To download and install the mod APK version of GT Club Drag Racing Car Game, you can follow the steps we have provided in this article. We hope this article was helpful and informative for you. If you have any questions or feedback, feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about GT Club Drag Racing Car Game Mod APK:
-
Is GT Club Drag Racing Car Game Mod APK safe to use?
-
Yes, GT Club Drag Racing Car Game Mod APK is safe to use as long as you download it from a trusted source. We have tested the mod APK file and found no viruses or malware in it. However, we advise you to use it at your own risk and discretion.
-
Will I get banned for using GT Club Drag Racing Car Game Mod APK?
-
No, you will not get banned for using GT Club Drag Racing Car Game Mod APK as long as you use it wisely and moderately. The mod APK version has an anti-ban feature that prevents detection by the game servers. However, we advise you not to abuse the mod features or cheat in online mode.
-
Can I update GT Club Drag Racing Car Game Mod APK?
-
No, you cannot update GT Club Drag Racing Car Game Mod APK from Google Play or any other source. If you do so, you will lose all the mod features and benefits. To update the mod APK version, you need to download and install the latest version of the mod APK file from a trusted source.
-
Can I play GT Club Drag Racing Car Game Mod APK offline?
-
Yes, you can play GT Club Drag Racing Car Game Mod APK offline without any internet connection. However, some features and modes may not be available in offline mode. To enjoy all the features and modes of the game, you need to have an active internet connection.
-
Can I play GT Club Drag Racing Car Game Mod APK with my friends?
-
Yes, you can play GT Club Drag Racing Car Game Mod APK with your friends online or offline. You can join clubs, chat with other racers, and challenge them to duels. You can also race against other players in real-time or offline mode. However, you need to make sure that your friends also have the mod APK version of the game installed on their devices. Otherwise, you may not be able to play with them.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Brain Find Can you find it? Mod APK - Enjoy the Best of Brain Find with Unlimited Features.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Brain Find Can you find it? Mod APK - Enjoy the Best of Brain Find with Unlimited Features.md
deleted file mode 100644
index 95dab0233826228d56c3c92c41b09bd9008c8745..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Brain Find Can you find it? Mod APK - Enjoy the Best of Brain Find with Unlimited Features.md
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
Brain Find: Can You Find It? Mod APK - A Fun and Challenging Brain Game
-
Do you love brain games that make you think hard and challenge your mind? If so, you might want to try Brain Find: Can You Find It? Mod APK, a popular puzzle game that tests your logic, creativity, and problem-solving skills. In this article, we will tell you what this game is, how to download and install it on your Android device, and what benefits you can get from playing it.
-
What is Brain Find: Can You Find It?
-
Brain Find: Can You Find It? is a game that makes you think outside the box. It is not a typical puzzle game where you have to match colors, shapes, or numbers. Instead, it is a game where you have to use your brain to find hidden clues, solve riddles, crack codes, and discover secrets. The game has hundreds of levels and puzzles, each with a different theme and difficulty. Some puzzles are easy and straightforward, while others are tricky and require you to think in unconventional ways. The game is designed to stimulate your brain and make you laugh at the same time.
A game that tests your logic, creativity, and problem-solving skills
-
One of the main features of Brain Find: Can You Find It? is that it tests your logic, creativity, and problem-solving skills. The game does not give you any hints or instructions on how to solve the puzzles. You have to figure out the solution by yourself, using your common sense, intuition, imagination, and lateral thinking. The game challenges you to think differently and look for clues in unexpected places. For example, you might have to shake your phone, tilt your screen, tap on hidden objects, or even use your voice to solve some puzzles. The game also rewards you for being creative and finding alternative solutions.
-
A game that makes you think outside the box
-
Another feature of Brain Find: Can You Find It? is that it makes you think outside the box. The game does not follow any rules or logic. Sometimes, the answer to a puzzle might be absurd, illogical, or hilarious. The game encourages you to be playful and curious, and to explore different possibilities. The game also surprises you with unexpected twists and turns, making you question everything you see and hear. For example, you might have to find a hidden cat in a picture of dogs, or figure out why a man is crying in a happy scene.
-
A game that has hundreds of levels and puzzles
-
The last feature of Brain Find: Can You Find It? is that it has hundreds of levels and puzzles for you to enjoy. The game has a variety of themes and scenarios, such as animals, food, sports, movies, music, art, history, science, and more. Each theme has its own style and humor, making the game more fun and diverse. The game also has different types of puzzles, such as word puzzles, math puzzles, logic puzzles, visual puzzles, trivia puzzles, and more. Each puzzle has a different level of difficulty and challenge, making the game more interesting and addictive. The game also updates regularly with new levels and puzzles, keeping you entertained and engaged.
-
What is a mod APK file?
-
If you are wondering what a mod APK file is, you are not alone. Many people are curious about this term and what it means. A mod APK file is a modified version of an original APK file, which is the file format used by Android devices to install and run applications. A mod APK file can be downloaded and installed from third-party sources, rather than from the official Google Play Store. A mod APK file may offer additional features or benefits that are not available in the original version of the application.
-
A modified version of an original APK file
-
A mod APK file is a modified version of an original APK file, which means that it has been altered or changed by someone other than the original developer of the application. A mod APK file may have different graphics, sounds, functions, or gameplay than the original version. For example, a mod APK file of a game may have unlimited coins, gems, lives, or other resources that are normally limited or require in-app purchases. A mod APK file may also have unlocked levels, characters, items, or modes that are otherwise restricted or inaccessible in the original version.
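A detail worth knowing is that an APK is just a ZIP archive with a specific layout, which is what makes it possible to modify one in the first place. The short sketch below simply lists the entries inside an APK with Python's standard zipfile module; the path is a placeholder for whatever file you want to inspect.

```python
# Minimal sketch: an APK is a ZIP archive, so the standard zipfile module can
# list what is inside one (manifest, DEX bytecode, resources, signature files).
import zipfile

apk_path = "example.apk"  # hypothetical path

with zipfile.ZipFile(apk_path) as apk:
    for info in apk.infolist()[:20]:   # first 20 entries, for brevity
        print(f"{info.file_size:>10}  {info.filename}")
```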
-
A file that can be downloaded and installed from third-party sources
-
A mod APK file can be downloaded and installed from third-party sources, which are websites or platforms that are not affiliated with or authorized by the original developer of the application. A mod APK file can be found online by searching for the name of the application followed by "mod APK" or "modded APK". For example, if you want to download a mod APK file of Brain Find: Can You Find It?, you can search for "Brain Find: Can You Find It? mod APK" or "Brain Find: Can You Find It? modded APK". However, you should be careful when downloading and installing a mod APK file from third-party sources, as some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy.
-
A file that may offer additional features or benefits
-
A mod APK file may offer additional features or benefits that are not available in the original version of the application. These features or benefits may vary depending on the type and purpose of the mod APK file. Some common features or benefits that a mod APK file may offer are:
-
brain find can you find it mod apk download
-brain find can you find it mod apk unlimited hints
-brain find can you find it mod apk latest version
-brain find can you find it mod apk android 1
-brain find can you find it mod apk free
-brain find can you find it mod apk hack
-brain find can you find it mod apk offline
-brain find can you find it mod apk no ads
-brain find can you find it mod apk 2023
-brain find can you find it mod apk revdl
-brain find can you solve puzzles mod apk
-brain find tricky riddles mod apk
-brain find hidden objects mod apk
-brain find mind quiz games mod apk
-brain find discovery gameplay mod apk
-brain find fun puzzle game mod apk
-brain find level 1000 mod apk
-brain find all levels unlocked mod apk
-brain find unlimited money mod apk
-brain find premium mod apk
-download brain find can you find it for android
-install brain find can you find it on pc
-play brain find can you find it online
-update brain find can you find it to latest version
-review brain find can you find it game
-how to play brain find can you find it game
-how to download brain find can you find it game
-how to install brain find can you find it game
-how to update brain find can you find it game
-how to hack brain find can you find it game
-tips and tricks for brain find can you find it game
-cheats and codes for brain find can you find it game
-walkthrough and guide for brain find can you find it game
-solutions and answers for brain find can you find it game
-best features of brain find can you find it game
-pros and cons of brain find can you find it game
-alternatives to brain find can you find it game
-similar games to brain find can you find it game
-is brain find can you find it game safe?
-is brain find can you find it game fun?
-
-
| Feature/Benefit | Description |
|---|---|
| Unlimited resources | A mod APK file may provide unlimited coins, gems, lives, energy, or other resources that are normally limited or require in-app purchases in the original version of the application. |
| Unlocked content | A mod APK file may unlock levels, characters, items, modes, or other content that are otherwise restricted or inaccessible in the original version of the application. |
| Removed ads | A mod APK file may remove ads or pop-ups that interrupt or annoy the user in the original version of the application. |
| Enhanced performance | A mod APK file may enhance the performance, speed, graphics, or sound quality of the application. |
| Customized features | A mod APK file may add new features or modify existing features of the application according to the user's preference or taste. |
-
Playing brain games like Brain Find: Can You Find It? can improve your memory, attention, and reaction time by exercising your brain and helping to enhance your neural connections and pathways. Playing brain games can also challenge your working memory, which is the ability to hold and manipulate information in your mind. Playing brain games can help you improve your memory recall, concentration, and alertness.
-
Reduce the risk of dementia or Alzheimer's
-
Another benefit of playing brain games is that they can reduce the risk of dementia or Alzheimer's, which are degenerative brain diseases that affect millions of people around the world. Dementia or Alzheimer's can cause memory loss, confusion, mood changes, and cognitive impairment. Playing brain games can help prevent or delay the onset of these diseases by keeping your brain active and healthy. Playing brain games can also stimulate the growth of new brain cells and protect them from damage or deterioration. Playing brain games can help you maintain your mental clarity and function as you age.
-
Enhance your creativity and problem-solving skills
-
A third benefit of playing brain games is that they can enhance your creativity and problem-solving skills. These are skills that help you generate new ideas, find solutions, and overcome challenges. Playing brain games can stimulate your right brain hemisphere, which is responsible for creativity, intuition, and imagination. Playing brain games can also activate your left brain hemisphere, which is responsible for logic, analysis, and reasoning. Playing brain games can help you balance both sides of your brain and use them effectively. Playing brain games can help you develop your critical thinking, innovation, and decision-making skills.
-
Have fun and relieve stress
-
The last benefit of playing brain games is that they let you have fun and relieve stress. Stress is a common problem that affects many people in their daily lives. Stress can cause physical, emotional, and mental problems, such as headaches, anxiety, depression, or insomnia. Playing brain games can help you cope with stress by providing you with a positive and enjoyable distraction. Playing brain games can also release endorphins, which are natural chemicals that make you feel happy and relaxed. Playing brain games can help you improve your mood and well-being.
-
Conclusion
-
In conclusion, Brain Find: Can You Find It? Mod APK is a fun and challenging brain game that tests your logic, creativity, and problem-solving skills. It is a game that makes you think outside the box and surprises you with unexpected puzzles. It is also a game that has hundreds of levels and puzzles for you to enjoy. To play this game, you need to download and install a mod APK file from a reliable and safe APK download site. A mod APK file is a modified version of an original APK file that may offer additional features or benefits that are not available in the original version of the application. Playing brain games like Brain Find: Can You Find It? can also benefit your mental health and well-being by improving your memory, attention, and reaction time, reducing the risk of dementia or Alzheimer's, enhancing your creativity and problem-solving skills, and having fun and relieving stress. If you are looking for a game that will challenge your mind and make you laugh at the same time, you should try Brain Find: Can You Find It? Mod APK.
-
FAQs
-
Here are some frequently asked questions about Brain Find: Can You Find It? Mod APK:
-
Q: Is Brain Find: Can You Find It? Mod APK safe to download and install?
-
A: Yes, as long as you download and install it from a reliable and safe APK download site that does not contain any viruses, malware, or spyware. However, you should always be careful when downloading and installing any mod APK file from third-party sources, as some of them may be harmful or malicious.
-
Q: What are the requirements to play Brain Find: Can You Find It? Mod APK?
-
A: To play Brain Find: Can You Find It? Mod APK, you need to have an Android device that runs on Android 4.4 or higher version. You also need to have enough storage space on your device to download and install the mod APK file.
-
Q: How do I update Brain Find: Can You Find It? Mod APK?
-
A: To update Brain Find: Can You Find It? Mod APK, you need to check if there is a new version available on the APK download site where you got the mod APK file. If there is a new version available, you need to download and install it on your device following the same steps as before.
-
Q: How do I uninstall Brain Find: Can You Find It? Mod APK?
-
A: To uninstall Brain Find: Can You Find It? Mod APK, you need to go to your device's settings and look for the applications or apps option. Then, you need to find the game on the list of installed applications and tap on it. Then, you need to tap on the uninstall button and confirm your choice. This will remove the game and the mod APK file from your device.
-
Q: Where can I get more information or support for Brain Find: Can You Find It? Mod APK?
-
A: If you have any questions, issues, or feedback about Brain Find: Can You Find It? Mod APK, you can contact the developer of the game or the mod APK file through their official website, social media, or email. You can also visit the APK download site where you got the mod APK file and check the comments or reviews section for more information or support from other users.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Internet Cafe Simulator How to Manage Your Own Internet Cafe for Free on PC.md b/spaces/congsaPfin/Manga-OCR/logs/Internet Cafe Simulator How to Manage Your Own Internet Cafe for Free on PC.md
deleted file mode 100644
index c2b695dce9d6e72e949961f3704c7b57170fa877..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Internet Cafe Simulator How to Manage Your Own Internet Cafe for Free on PC.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
How to Download Internet Cafe Simulator for Free
-
Have you ever dreamed of running your own internet cafe business? Do you want to experience the challenges and rewards of managing a comprehensive workplace, interacting with customers and city dwellers, and expanding your capacity and earnings? If so, you might be interested in playing Internet Cafe Simulator, a realistic and immersive simulation game developed by Cheesecake Dev. But what if you don't want to pay for the game, or you want to try it before you buy it? Don't worry, there are ways to download internet cafe simulator for free, and in this article, we will show you how.
Internet Cafe Simulator is an internet cafe business simulation game that was released in 2019. In this game, you can set up and manage your own internet cafe, from choosing the location, furnishing and decorating the environment, buying and installing computers and gaming equipment, hiring staff, paying bills, and dealing with customers. You can also explore the city and interact with various people and activities, such as shopping, gambling, dating, fighting, or doing illegal work. You can also buy crypto money, invest in popular games, or hire hackers to boost your reputation. The game offers a lot of freedom and customization options, as well as realistic graphics and physics. You can play the game in single-player or multiplayer mode.
-
Why Play Internet Cafe Simulator for Free?
-
There are many reasons why you might want to play internet cafe simulator for free. Maybe you are on a tight budget and you can't afford to spend money on games. Maybe you are not sure if you will like the game or not, and you want to test it before you commit to buying it. Maybe you are a fan of indie games and you want to support the developers by giving them feedback or spreading the word about their game. Whatever your reason is, playing internet cafe simulator for free is possible and easy, as long as you know where to look.
-
How to Download Internet Cafe Simulator for Free
-
Option 1: Steam
-
One of the easiest ways to download internet cafe simulator for free is to use Steam, the popular online gaming platform. Steam offers two options for getting the game for free: the free demo and the free starter pack. The free demo lets you play the first hour of the game, while the free starter pack gives you access to the full game with some limitations. Here's how to get them:
-
-
Create a Steam account if you don't have one already.
Go to the Internet Cafe Simulator page on the Steam store, then click on "Download Demo" or "Play Game" depending on which option you prefer.
-
Wait for the download to finish and launch the game from your Steam library.
-
Enjoy playing internet cafe simulator for free!
-
-
Option 2: Websites
-
Another way to download internet cafe simulator for free is to use websites that offer free or discounted downloads of the game. There are many websites that do this, but you have to be careful and check their credibility and security before downloading anything. Some of the websites that we recommend are:

- Epic Store: Epic Store is a digital distribution platform that offers free games every week. You can check their website regularly and see if internet cafe simulator is available for free or at a lower price.
- My Abandonware: My Abandonware is a website that hosts old and classic games that are no longer supported by their developers or publishers. You can find internet cafe simulator on their website and download it for free, as long as you have a compatible system and emulator.
- IGN Beta Giveaway: IGN Beta Giveaway is a program that gives away free beta keys for upcoming games. You can sign up for their newsletter and get notified when they have a giveaway for internet cafe simulator. You can then redeem your beta key on Steam and play the game for free before it is officially released.
Option 3: Torrents
-
The last option to download internet cafe simulator for free is to use torrents, which are peer-to-peer file sharing networks that allow users to download and upload files from each other. However, this option is not recommended, as it comes with many risks and legal issues. Some of the problems that you might face when using torrents are:
-
-
Viruses and malware: Torrent files can contain harmful software that can infect your computer and compromise your data and security.
-
Legal trouble: Downloading internet cafe simulator for free without the permission of the developers or publishers is illegal and can result in fines or lawsuits.
-
Poor quality: Torrent files can be corrupted, incomplete, or outdated, resulting in a bad gaming experience.
-
-
If you still want to use torrents to download internet cafe simulator for free, you should take some precautions to protect yourself and your computer. Some of the tips that you should follow are:
Use a reliable torrent site, such as The Pirate Bay or RARBG, and check the ratings and comments of the torrent files before downloading them.
-
Use a VPN (virtual private network) service, such as NordVPN or ExpressVPN, to hide your IP address and encrypt your traffic.
-
Use an antivirus software, such as Avast or Malwarebytes, to scan your downloaded files and remove any potential threats.
-
-
Conclusion
-
In conclusion, internet cafe simulator is a fun and realistic simulation game that lets you run your own internet cafe business and explore the city life. If you want to play the game for free, you have three options: Steam, websites, or torrents. Each option has its own advantages and disadvantages, so you should choose the one that suits your needs and preferences. We hope this article helped you learn how to download internet cafe simulator for free, and we encourage you to try the game and share your feedback with us.
-
how to get internet cafe simulator for free on pc
-internet cafe simulator free download full version
-how to install internet cafe simulator for free
-internet cafe simulator pc game free download
-how to play internet cafe simulator for free
-internet cafe simulator free download windows 10
-how to download internet cafe simulator on bluestacks
-internet cafe simulator free download android
-how to run internet cafe simulator for free
-internet cafe simulator free download mac
-how to download internet cafe simulator from youtube
-internet cafe simulator free download apk
-how to update internet cafe simulator for free
-internet cafe simulator free download steam
-how to download internet cafe simulator with crack
-internet cafe simulator free download ios
-how to download internet cafe simulator without virus
-internet cafe simulator free download no survey
-how to download internet cafe simulator on laptop
-internet cafe simulator free download for windows 7
-how to download internet cafe simulator from google drive
-internet cafe simulator free download mega
-how to download internet cafe simulator on macbook
-internet cafe simulator free download utorrent
-how to download internet cafe simulator mod apk
-internet cafe simulator free download ocean of games
-how to download internet cafe simulator on chromebook
-internet cafe simulator free download softonic
-how to download internet cafe simulator with multiplayer
-internet cafe simulator free download rar
-how to download internet cafe simulator on phone
-internet cafe simulator free download highly compressed
-how to download internet cafe simulator on windows 8
-internet cafe simulator free download skidrow
-how to download internet cafe simulator on linux
-internet cafe simulator free download igg games
-how to download internet cafe simulator on ipad
-internet cafe simulator free download fitgirl repack
-how to download internet cafe simulator on xbox one
-internet cafe simulator free download gog
-how to download internet cafe simulator on ps4
-internet cafe simulator free download codex
-how to download internet casino simulation for free
-
FAQs
-
Q: How much does internet cafe simulator cost?
-
A: Internet cafe simulator costs $9.99 on Steam, but it may vary depending on your region and currency.
-
Q: Is internet cafe simulator multiplayer?
-
A: Yes, internet cafe simulator has a multiplayer mode that allows you to play with other players online.
-
Q: What are the system requirements for internet cafe simulator?
-
A: The minimum system requirements for internet cafe simulator are listed below (a quick way to check your own machine against them is sketched after the list):
-
-
OS: Windows 7/8/10
-
Processor: 2 GHz Dual Core CPU
-
Memory: 4 GB RAM
-
Graphics: Intel HD Graphics 4000 or better
-
Storage: 5 GB available space
-
-
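If you want a quick, rough check of your own PC against these minimums, the sketch below reads the operating system, total RAM, and free disk space. It is only illustrative and assumes the third-party psutil package is installed (pip install psutil); it does not check the CPU clock speed or the graphics card.

```python
# Minimal sketch: compare this machine against the minimum requirements listed
# above. Uses the standard library for OS and disk space, and psutil for RAM.
import platform
import shutil
import psutil

GIB = 1024 ** 3
MIN_RAM_GIB = 4
MIN_DISK_GIB = 5

print("OS:", platform.system(), platform.release())
ram_gib = psutil.virtual_memory().total / GIB
free_gib = shutil.disk_usage(".").free / GIB
print(f"RAM: {ram_gib:.1f} GiB ({'OK' if ram_gib >= MIN_RAM_GIB else 'below minimum'})")
print(f"Free disk: {free_gib:.1f} GiB ({'OK' if free_gib >= MIN_DISK_GIB else 'below minimum'})")
```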
Q: Can I customize my internet cafe in internet cafe simulator?
-
A: Yes, you can customize your internet cafe in many ways, such as choosing the location, furniture, decoration, computers, gaming equipment, staff, menu, prices, and more. You can also upgrade your internet cafe as you earn more money and reputation.
-
Q: What are some of the challenges and risks in internet cafe simulator?
-
A: Some of the challenges and risks in internet cafe simulator are:
-
-
Competition: You have to compete with other internet cafes in the city and attract more customers and revenue.
-
Customer satisfaction: You have to keep your customers happy and loyal by providing them with good service, quality products, and a comfortable environment.
-
City events: You have to deal with various events that happen in the city, such as protests, riots, power outages, or cyberattacks.
-
Crime: You have to protect your internet cafe from thieves, hackers, vandals, or corrupt cops.
-
Personal life: You have to balance your personal life with your business, such as paying your rent, maintaining your health, dating, or having fun.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Resso Mod Apk The Ultimate Music Streaming App with Premium Features [Mediafre Download].md b/spaces/congsaPfin/Manga-OCR/logs/Resso Mod Apk The Ultimate Music Streaming App with Premium Features [Mediafre Download].md
deleted file mode 100644
index c201a240b1fbac384076787182cd5bc994ec1e6e..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Resso Mod Apk The Ultimate Music Streaming App with Premium Features [Mediafre Download].md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
Download Resso Mod Apk Premium 2023 Mediafire: Enjoy Unlimited Music Streaming
-
Are you a music lover who wants to enjoy unlimited music streaming on your Android device? If yes, then you should try Resso, a new and innovative music app that lets you listen to your favorite songs, create your own playlists, and interact with other music fans. And if you want to unlock all the premium features of Resso for free, then you should download Resso Mod Apk Premium 2023 Mediafire, a modified version of the app that gives you access to everything without paying a dime. In this article, we will tell you what Resso is, what features it offers, what benefits you can get from Resso Mod Apk, and how to download and install it on your device. So, let's get started!
-
What is Resso?
-
Resso is a music streaming app that was launched in 2019 by Moon Video Inc., a subsidiary of ByteDance, the company behind TikTok. Resso aims to provide a more social and interactive music experience for users, by allowing them to express themselves through lyrics, comments, and emojis. Resso also helps users discover new music and trends, by offering personalized recommendations and curated charts. Resso is currently available in India, Indonesia, Brazil, Philippines, and some other countries.
Resso has many features that make it stand out from other music apps. Here are some of them:
-
Listen to thousands of songs from various genres and artists
-
Resso has a huge library of songs that you can listen to online or offline. You can browse by genres, artists, albums, playlists, or moods. You can also search for any song or artist that you like. Whether you are into pop, rock, hip-hop, EDM, or Bollywood, you will find something that suits your taste.
-
Create your own playlists and share them with others
-
Resso lets you create your own playlists with the songs that you love. You can add as many songs as you want, rearrange them, rename them, and edit them anytime. You can also share your playlists with other users or on social media platforms. You can also follow other users' playlists and see what they are listening to.
-
Customize your music experience with lyrics, comments, and emojis
-
Resso is not just a music app, but also a social platform where you can express yourself and connect with other music fans. You can view the lyrics of any song that you are listening to, and sync them with the music. You can also comment on the lyrics or on any part of the song that you like. You can also use emojis to show your emotions or reactions. You can see what other users are saying about the song or join the conversation.
-
Discover new music and trends with recommendations and charts
-
Resso helps you discover new music that matches your preferences and mood. You can get personalized recommendations based on your listening history and behavior. You can also explore the top charts that show the most popular songs in different categories. You can also see what songs are trending on Resso or other platforms. You can also find new songs by genres, moods, or activities.
-
What is Resso Mod Apk?
-
Resso Mod Apk is a modified version of the original Resso app that gives you access to all the premium features of Resso without paying any subscription fee. Resso Mod Apk is not available on the Google Play Store or the official website of Resso, but you can download it from a third-party source like Mediafire. Resso Mod Apk is safe and easy to use, as long as you download it from a trusted link and follow the instructions carefully.
-
Benefits of Resso Mod Apk
-
Resso Mod Apk has many benefits that make it worth downloading. Here are some of them:
-
Unlock premium features for free
-
Resso Mod Apk allows you to enjoy all the premium features of Resso without spending any money. You can access the full library of songs, create unlimited playlists, download songs offline, and more. You can also use the advanced settings to customize your music experience, such as adjusting the equalizer, changing the playback speed, and choosing the audio quality.
-
How to download resso mod apk premium 2023 mediafıre for free
-Resso mod apk premium 2023 mediafıre latest version download
-Download resso mod apk premium 2023 mediafıre with unlimited songs
-Resso mod apk premium 2023 mediafıre no ads download link
-Download resso mod apk premium 2023 mediafıre and enjoy music offline
-Resso mod apk premium 2023 mediafıre features and benefits
-Download resso mod apk premium 2023 mediafıre and connect with music lovers
-Resso mod apk premium 2023 mediafıre review and rating
-Download resso mod apk premium 2023 mediafıre and discover new songs
-Resso mod apk premium 2023 mediafıre installation guide and tips
-Download resso mod apk premium 2023 mediafıre and follow your favorite artists
-Resso mod apk premium 2023 mediafıre comparison with other music apps
-Download resso mod apk premium 2023 mediafıre and create your own playlists
-Resso mod apk premium 2023 mediafıre hack and cheat codes
-Download resso mod apk premium 2023 mediafıre and share your feedback
-Resso mod apk premium 2023 mediafıre troubleshooting and support
-Download resso mod apk premium 2023 mediafıre and join the community
-Resso mod apk premium 2023 mediafıre update and news
-Download resso mod apk premium 2023 mediafıre and access exclusive content
-Resso mod apk premium 2023 mediafıre pros and cons
-Download resso mod apk premium 2023 mediafıre and customize your settings
-Resso mod apk premium 2023 mediafıre FAQs and answers
-Download resso mod apk premium 2023 mediafıre and earn rewards
-Resso mod apk premium 2023 mediafıre testimonials and success stories
-Download resso mod apk premium 2023 mediafıre and learn more about the app
-
Remove ads and interruptions
-
Resso Mod Apk removes all the annoying ads and interruptions that may disturb your music listening. You can listen to your favorite songs without any breaks or pop-ups. You can also skip any song that you don't like without any limit.
-
Download songs offline and listen anytime
-
Resso Mod Apk lets you download any song that you want and listen to it offline. You can save your data and battery by downloading songs when you have a Wi-Fi connection and listening to them later when you are offline. You can also create offline playlists and sync them with your device.
-
Enjoy high-quality audio and video
-
Resso Mod Apk offers you high-quality audio and video streaming for your music enjoyment. You can choose from different audio formats, such as MP3, AAC, or FLAC, depending on your preference and device compatibility. You can also watch music videos in HD quality and enjoy the visuals along with the sound.
-
How to download Resso Mod Apk Premium 2023 Mediafire?
-
If you are interested in downloading Resso Mod Apk Premium 2023 Mediafire, then you need to follow these simple steps:
-
Steps to download and install Resso Mod Apk
-
Visit the Mediafire link provided below
-
The first step is to visit the Mediafire link that we have provided below this article. This link will take you to the download page of Resso Mod Apk Premium 2023 Mediafire. You will see a green button that says "Download". Click on it and wait for a few seconds until the download starts.
-
Download the Resso Mod Apk file to your device
-
The next step is to download the Resso Mod Apk file to your device. The file size is about 60 MB, so make sure you have enough space on your device and a stable internet connection. The download may take a few minutes depending on your speed. Once the download is complete, you will see a notification on your device.
-
Enable unknown sources in your settings
-
The third step is to enable unknown sources in your settings. This is necessary because Resso Mod Apk is not from an official source and your device may block its installation. To enable unknown sources, go to your settings, then security, then unknown sources, and turn it on. You may see a warning message that says "Your phone and personal data are more vulnerable to attack by apps from unknown sources. You agree that you are solely responsible for any damage to your phone or loss of data that may result from using these apps." Tap on OK to proceed.
-
Install the Resso Mod Apk and launch the app
-
The final step is to install the Resso Mod Apk and launch the app. To install the Resso Mod Apk, go to your file manager, then downloads, then find the Resso Mod Apk file that you downloaded earlier. Tap on it and follow the instructions on the screen. The installation may take a few seconds or minutes depending on your device. Once the installation is done, you will see an icon of Resso on your home screen or app drawer. Tap on it and launch the app.
-
Conclusion
-
Resso is a great music app that lets you enjoy unlimited music streaming with social and interactive features. However, if you want to unlock all the premium features of Resso for free, then you should download Resso Mod Apk. Keep in mind that Resso Mod Apk is not from an official source, so we cannot help you with any technical issues or queries that you may have regarding Resso Mod Apk. Please use Resso Mod Apk at your own risk and discretion.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Borland C Builder Xe3 Crack UPDATED Bildre Lohnsteuererk.md b/spaces/contluForse/HuggingGPT/assets/Borland C Builder Xe3 Crack UPDATED Bildre Lohnsteuererk.md
deleted file mode 100644
index be3887134eda8675ae0e6ada3f25c0a682c1a43f..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Borland C Builder Xe3 Crack UPDATED Bildre Lohnsteuererk.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Dec 22, 2021 - Coub is a YouTube for video loops. You can take any video, trim the best part, combine it with other videos, and add a soundtrack. The first episode of the series was made in December 2011.
-In February 2012, the first finale, Festivals of the Future, was announced, featuring former band member, David Bowie.
-The series finale was released in August 2012. It featured David and Iman Bowie, Paul McCartney, Ozzy Osbourne, Mariah Carey, Tim Hetherington, and several other members.
-In March 2013, the final "World Tour" was released, featuring Mick Fleetwood of Fleetwood Mac. 8a78ff9644
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/El-hombre-tranquilo-(1952)-[HDRip-AC3-XviD-Esp].md b/spaces/diacanFperku/AutoGPT/El-hombre-tranquilo-(1952)-[HDRip-AC3-XviD-Esp].md
deleted file mode 100644
index b6db65299ac70b887f58d162871fa2de7f3cacaa..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/El-hombre-tranquilo-(1952)-[HDRip-AC3-XviD-Esp].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-archicad 16 build 3010 x64 x86 crack only nitro · El-hombre-tranquilo-(1952)-[HDRip-AC3-XviD-Esp]. Tags: download tracepro 7.0 ... 4d29de3e1b
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Fotosizer Final Edition Serials Key.md b/spaces/diacanFperku/AutoGPT/Fotosizer Final Edition Serials Key.md
deleted file mode 100644
index c5d5034c9d09fccfadb4a9eca3792e84c0323646..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Fotosizer Final Edition Serials Key.md
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
Fotosizer Final Edition serials key
-
-
If you are looking for a simple and effective way to resize and optimize your photos, you might want to try Fotosizer Final Edition serials key. This is a software that can help you batch resize hundreds of photos in a matter of minutes, with various options and features. In this article, we will tell you what Fotosizer Final Edition serials key is, how to download and install it, and what benefits it offers.
Fotosizer Final Edition serials key is a software that can help you resize and optimize your photos in bulk. It lets you drag and drop your photos to the program, choose the desired size and quality, and apply various effects and adjustments. You can also add text and image watermarks, rotate and flip your photos, apply rounded corners, and more. Fotosizer Final Edition serials key can output your resized photos to a compressed ZIP file, or to a folder of your choice. You can also preview your photos before resizing them.
-
-
Fotosizer Final Edition serials key is not free software. You need to pay for a license to use it. Some people therefore look for a Fotosizer Final Edition serials key crack, which is a way to bypass the activation process and use the software without paying. However, this is not a legal or safe way to use the software. You might face risks such as malware infection, data loss, legal issues, or poor performance if you use a Fotosizer Final Edition serials key crack.
-
-
How to Download and Install Fotosizer Final Edition serials key?
-
-
If you want to download and install Fotosizer Final Edition serials key, you need to follow these steps:
-
-
-
Download Fotosizer Final Edition serials key from the official website of Fotosizer: https://www.fotosizer.com/
-
Run the setup file and follow the installation instructions.
-
Enter the serial key from the text file or generate one using the keygen.
-
Enjoy Fotosizer Final Edition serials key.
-
-
-
What are the Benefits of Fotosizer Final Edition serials key?
-
-
By using Fotosizer Final Edition serials key, you can enjoy some benefits that can make your photo resizing and optimization easier and faster. Some of these benefits are:
-
-
-
You can save time by resizing hundreds of photos in one conversion.
-
You can save disk space by reducing the size and quality of your photos.
-
You can customize your photos according to your preferences and needs.
-
You can protect your photos by adding watermarks and metadata.
-
You can enhance your photos by applying effects and adjustments.
-
-
-
-
-
How to Use Fotosizer Final Edition serials key?
-
-
After you have downloaded and installed Fotosizer Final Edition serials key, you can start using it to resize and optimize your photos. Here are some steps that you can follow to use the software:
-
-
-
Launch Fotosizer Final Edition and click on the Add button to add your photos to the program. You can also drag and drop your photos to the program.
-
Choose the output size and quality of your photos. You can use the presets or enter your own values.
-
Choose the output format and destination of your photos. You can also output your photos to a compressed ZIP file.
-
Click on the Options button to access more settings and features. You can add text and image watermarks, rotate and flip your photos, apply rounded corners, adjust the color and brightness, and more.
-
Click on the Start button to start resizing your photos. You can also preview your photos before resizing them.
-
Enjoy your resized and optimized photos.
-
-
-
How to Uninstall Fotosizer Final Edition serials key?
-
-
If you want to uninstall Fotosizer Final Edition serials key, you can follow these steps:
-
-
-
Go to Control Panel and click on Programs and Features.
-
Find Fotosizer Final Edition in the list of programs and click on Uninstall.
-
Follow the uninstallation wizard and confirm your choice.
-
Delete the serial key and the keygen file from your system.
-
Restart your system if needed.
-
-
-
How to Get Fotosizer Final Edition Legally and Safely?
-
-
If you want to get Fotosizer Final Edition legally and safely, you should buy a license from the official website of Fotosizer. This way, you can get the following benefits:
-
-
-
You can get regular updates and new features for Fotosizer Final Edition.
-
You can get technical support and customer service from Fotosizer.
-
You can get security protection and privacy guarantee from Fotosizer.
-
You can avoid malware infection, data loss, legal issues, or poor performance that might come with Fotosizer Final Edition serials key crack.
-
You can support the developers of Fotosizer Final Edition and help them improve their software.
What are the Alternatives to Fotosizer Final Edition serials key?
-
-
If you are not satisfied with Fotosizer Final Edition serials key, or if you want to try some other photo resizing and optimization software that are free and reliable, you can check out some of these alternatives:
-
-
-
FastStone Photo Resizer: This is a simple and fast software that can help you resize, rename, crop, rotate, and convert your photos in batch mode. It also supports adding text and image watermarks, adjusting color and brightness, applying borders and effects, and more. You can download FastStone Photo Resizer for free from this link: https://www.faststone.org/FSResizerDetail.htm
-
Image Resizer for Windows: This is a handy and easy-to-use software that can help you resize your photos with a simple right-click. It integrates with Windows Explorer and lets you choose from predefined sizes or enter your own values. You can also rotate and compress your photos, and preserve the original metadata. You can download Image Resizer for Windows for free from this link: https://www.bricelam.net/ImageResizer/
-
IrfanView: This is a powerful and versatile software that can help you view, edit, convert, and optimize your photos. It supports a wide range of formats and features, such as batch resizing, cropping, rotating, renaming, watermarking, color correction, effects, filters, and more. You can download IrfanView for free from this link: https://www.irfanview.com/
-
-
-
Conclusion
-
-
Fotosizer Final Edition serials key is a software that can help you resize and optimize your photos in bulk. It lets you drag and drop your photos to the program, choose the desired size and quality, and apply various effects and adjustments. However, if you want to use it for free, you might be tempted to download Fotosizer Final Edition serials key crack, which is a way to bypass the activation process and use the software without paying for a license. However, this is not a legal or safe way to use the software. You might face some risks such as malware infection, data loss, legal issues, or poor performance if you use Fotosizer Final Edition serials key crack. Therefore, we recommend that you buy a license from the official website of Fotosizer. This way, you can get regular updates, technical support, and security protection from Fotosizer. You can also try some alternatives to Fotosizer Final Edition serials key that are free and reliable, such as FastStone Photo Resizer, Image Resizer for Windows, or IrfanView.
-
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Ls Land Anya Forbidden Fruit [UPDATED].md b/spaces/diacanFperku/AutoGPT/Ls Land Anya Forbidden Fruit [UPDATED].md
deleted file mode 100644
index 35fd3f53746c07db792551663f60d24b22c5e100..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Ls Land Anya Forbidden Fruit [UPDATED].md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
-Ls Land Anya Forbidden Fruit 1288d90c24 16, 2021 n Ls Land Anya Forbid Activation Torrent Build Full. Forbidden Fruit provides a home for musical and artistic takeovers from all over the . so that OSPF routers can direct traffic to that destination. The local router uses an IP address to advertise a link to the remote router. The local router may advertise the link using multiple IP addresses, either in a unicast or multicast format.
-
-IP unicast sends packets only to a single destination. IP multicast sends packets to more than one destination. In a multicast packet, each IP packet contains information to be received by multiple, destination nodes. IP multicast requires IP Group Address identification in the destination field of a packet so that a switch can direct a packet to the group address. IP multicast is a broadcast protocol and only one destination address is specified in a packet; therefore, it is used for broadcasting information to all nodes in the network, including the destination.
-
-In a multi-homed network, however, IP multicast is not useful. A packet destined for a group address is sent out all interfaces and the packet will arrive at all network nodes. The destination node that is not the node to which the packet is destined will discard the packet. It is therefore desirable to provide an improved method for multicast communications in a multi-homed network.Q:
-
-json parse error in IE8
-
-Trying to parse JSON to handle an IE8 compatibility issue, but getting an error.
-
-getData: function (name) {
-    return this.ajax({
-        type: "GET",
-        url: "json/",
-        dataType: "json",
-        timeout: 15000
-    });
-},
-
-{"records":[["abc","12"],["pqr","13"]]}
-
-The JSON string is coming from an external source. JSON parsing works fine in Chrome but not in IE8.
-
-The error is "Uncaught Error". 4fefd39f24
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Progress Openedge 102b Serial.md b/spaces/diacanFperku/AutoGPT/Progress Openedge 102b Serial.md
deleted file mode 100644
index 8584380f7ef10e7db3a7d21dc338ae29f04810bd..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Progress Openedge 102b Serial.md
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
How to Find and Use Your Progress Openedge 102b Serial Number
-
Progress Openedge is a powerful and reliable platform for developing and deploying mission-critical business applications. If you have purchased a license for Progress Openedge 102b, you will need to find and use your serial number to activate and register your product.
-
A serial number is a unique alphanumeric code that identifies your product and its license type. You can find your serial number in one of the following ways:
If you have received a physical media kit, the serial number is printed on the label of the CD or DVD case.
-
If you have downloaded the product from the Progress website, the serial number is included in the confirmation email that you received after completing your order.
-
If you have installed the product using a network setup, the serial number is stored in the progress.cfg file in the installation directory (by default, C:\Progress\OpenEdge).
-
-
To use your serial number, you need to enter it during the installation process or when prompted by the product activation wizard. The serial number will be validated and registered with Progress, and you will be able to access all the features and benefits of your licensed product.
-
If you have any questions or issues regarding your serial number or product activation, please contact Progress Customer Support at 1-800-477-6473 or visit https://www.progress.com/support.
-
-
Progress Openedge 102b is a comprehensive and integrated platform that offers a range of features and benefits for developing and deploying business applications. Some of the key features of Progress Openedge 102b are:
-
-
Openedge Development Studio: A graphical integrated development environment (IDE) that provides tools and wizards for creating, testing, debugging, and deploying Openedge applications.
-
Openedge RDBMS: A high-performance relational database management system (RDBMS) that supports data access, integrity, security, backup, and recovery.
-
Openedge Application Server: A scalable and robust application server that enables distributed processing, load balancing, clustering, and failover for Openedge applications.
-
Openedge DataServer: A data access technology that allows Openedge applications to access and manipulate data from other RDBMSs such as Oracle, Microsoft SQL Server, and ODBC.
-
Openedge WebClient: A deployment technology that allows Openedge applications to run on client machines without requiring installation or configuration.
-
-
With Progress Openedge 102b, you can leverage the power and flexibility of the Openedge platform to create and deliver innovative and competitive business solutions.
-
-
Progress Openedge 102b is not only a powerful platform, but also a reliable and secure one. Progress Openedge 102b supports data encryption, authentication, authorization, auditing, and backup and recovery features to ensure the safety and integrity of your data and applications. You can also leverage the Progress OpenEdge Transparent Data Encryption (TDE) feature to encrypt your data at rest without requiring any application changes.
-
Progress Openedge 102b is also a platform that has been trusted and recommended by many customers across various industries and regions. According to Gartner Peer Insights, Progress Openedge has received an average rating of 4.5 out of 5 stars from 21 reviews as of January 2023. Customers have praised Progress Openedge for its performance, scalability, stability, ease of use, and support. Some of the customer testimonials are:
-
-
"Progress is our competitive advantage! Progress has provided an excellent reliable and cost effective environment to run our company's DB needs since 1991."
-Software Industry
-
-
-
"OpenEdge allows us to be agile. We can quickly develop and deploy new features and functionality to meet our customers' needs."
-
-Manufacturing Industry
-
-
-
"OpenEdge is a great platform for building and running mission-critical applications. It has a rich set of features and tools that enable us to create high-quality solutions."
-Services Industry
-
-
If you are looking for a platform that can help you create and deliver innovative and competitive business solutions, Progress Openedge 102b is the right choice for you.
-
-Charlie And The Chocolate Factory full movie download [Hello Guest] - Xvidtodvdmov - HD videosean7 - xvidtodvdmov. Roald Dahl's Charlie And The Chocolate Factory is available online to download; you can also watch the trailer before watching the film!
-
-Charlie And The Chocolate Factory Full Movie Download Youtubeinstmankl - DOWNLOAD com/stories/3076124-dil-maange-more.html. Charlie And The Chocolate Factory - Phil DeFranco - The Unbelievable - Phil DeFranco. Charlie And The Chocolate Factory is an animated feature produced by Warner Brothers, available to download on video, DVD and Blu-ray, and to stream, both as a direct download and as streaming video.
-
-Charlie and the Chocolate Factory, by the Australian-born filmmaker Andrew Adamson, arrives in Brazil with the first adaptations of the original film: one by the Belgian director of The Girl With The Pearl Earring for the PlayStation, and one by Tim Burton, for Disney, also for the PlayStation.
-
-Charlie and the Chocolate Factory, a Warner Brothers production, arrives in Brazil with Tim Burton's premiere through Sony and Belle Diniz's, for Disney, on Netflix.
-
-The film premieres on Netflix, and even before Disney's official premiere date it was already available for direct download on iTunes and for streaming video on Apple TV.
-
-[ 4fefd39f24
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Easyplc Simulador Full Mega BEST.md b/spaces/falterWliame/Face_Mask_Detection/Easyplc Simulador Full Mega BEST.md
deleted file mode 100644
index 03bb13d3807947378e1d5d432bb8e2651ceacd25..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Easyplc Simulador Full Mega BEST.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
- )}
-
- );
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/assert/strict.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/assert/strict.d.ts
deleted file mode 100644
index b4319b974861f6cad84b745485af55264b13c3d8..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/assert/strict.d.ts
+++ /dev/null
@@ -1,8 +0,0 @@
-declare module 'assert/strict' {
- import { strict } from 'node:assert';
- export = strict;
-}
-declare module 'node:assert/strict' {
- import { strict } from 'node:assert';
- export = strict;
-}
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ipaddr.js/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ipaddr.js/README.md
deleted file mode 100644
index f57725b0fed3b74b2ed13d99c0fe8ee65ab29f3c..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ipaddr.js/README.md
+++ /dev/null
@@ -1,233 +0,0 @@
-# ipaddr.js — an IPv6 and IPv4 address manipulation library
-
-ipaddr.js is a small (1.9K minified and gzipped) library for manipulating
-IP addresses in JavaScript environments. It runs on both CommonJS runtimes
-(e.g. [nodejs]) and in a web browser.
-
-ipaddr.js allows you to verify and parse string representation of an IP
-address, match it against a CIDR range or range list, determine if it falls
-into some reserved ranges (examples include loopback and private ranges),
-and convert between IPv4 and IPv4-mapped IPv6 addresses.
-
-[nodejs]: http://nodejs.org
-
-## Installation
-
-`npm install ipaddr.js`
-
-or
-
-`bower install ipaddr.js`
-
-## API
-
-ipaddr.js defines one object in the global scope: `ipaddr`. In CommonJS,
-it is exported from the module:
-
-```js
-var ipaddr = require('ipaddr.js');
-```
-
-The API consists of several global methods and two classes: ipaddr.IPv6 and ipaddr.IPv4.
-
-### Global methods
-
-There are three global methods defined: `ipaddr.isValid`, `ipaddr.parse` and
-`ipaddr.process`. All of them receive a string as a single parameter.
-
-The `ipaddr.isValid` method returns `true` if the address is a valid IPv4 or
-IPv6 address, and `false` otherwise. It does not throw any exceptions.
-
-The `ipaddr.parse` method returns an object representing the IP address,
-or throws an `Error` if the passed string is not a valid representation of an
-IP address.
-
-The `ipaddr.process` method works just like the `ipaddr.parse` one, but it
-automatically converts IPv4-mapped IPv6 addresses to their IPv4 counterparts
-before returning. It is useful when you have a Node.js instance listening
-on an IPv6 socket, and the `net.ipv6.bindv6only` sysctl parameter (or its
-equivalent on non-Linux OS) is set to 0. In this case, you can accept IPv4
-connections on your IPv6-only socket, but the remote address will be mangled.
-Use `ipaddr.process` method to automatically demangle it.
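-
-For example (illustrative calls; the exact objects returned are described in the next section):
-
-```js
-ipaddr.isValid("192.168.1.1");   // => true
-ipaddr.isValid("not an ip");     // => false
-ipaddr.parse("2001:db8::1");     // => an ipaddr.IPv6 object
-ipaddr.process("::ffff:192.168.1.1").toString(); // => "192.168.1.1"
-```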
-
-### Object representation
-
-Parsing methods return an object which descends from `ipaddr.IPv6` or
-`ipaddr.IPv4`. These objects share some properties, but most of them differ.
-
-#### Shared properties
-
-One can determine the type of address by calling `addr.kind()`. It will return
-either `"ipv6"` or `"ipv4"`.
-
-An address can be converted back to its string representation with `addr.toString()`.
-Note that this method:
- * does not return the original string used to create the object (in fact, there is
- no way of getting that string)
- * returns a compact representation (when it is applicable)
-
-A `match(range, bits)` method can be used to check if the address falls into a
-certain CIDR range.
-Note that an address can be (obviously) matched only against an address of the same type.
-
-For example:
-
-```js
-var addr = ipaddr.parse("2001:db8:1234::1");
-var range = ipaddr.parse("2001:db8::");
-
-addr.match(range, 32); // => true
-```
-
-Alternatively, `match` can also be called as `match([range, bits])`. In this way,
-it can be used together with the `parseCIDR(string)` method, which parses an IP
-address together with a CIDR range.
-
-For example:
-
-```js
-var addr = ipaddr.parse("2001:db8:1234::1");
-
-addr.match(ipaddr.parseCIDR("2001:db8::/32")); // => true
-```
-
-A `range()` method returns one of predefined names for several special ranges defined
-by IP protocols. The exact names (and their respective CIDR ranges) can be looked up
-in the source: [IPv6 ranges] and [IPv4 ranges]. Some common ones include `"unicast"`
-(the default one) and `"reserved"`.
-
-You can match against your own range list by using
-`ipaddr.subnetMatch(address, rangeList, defaultName)` method. It can work with a mix of IPv6 or IPv4 addresses, and accepts a name-to-subnet map as the range list. For example:
-
-```js
-var rangeList = {
- documentationOnly: [ ipaddr.parse('2001:db8::'), 32 ],
- tunnelProviders: [
- [ ipaddr.parse('2001:470::'), 32 ], // he.net
- [ ipaddr.parse('2001:5c0::'), 32 ] // freenet6
- ]
-};
-ipaddr.subnetMatch(ipaddr.parse('2001:470:8:66::1'), rangeList, 'unknown'); // => "tunnelProviders"
-```
-
-The addresses can be converted to their byte representation with `toByteArray()`.
-(Actually, JavaScript mostly does not know about byte buffers. They are emulated with
-arrays of numbers, each in range of 0..255.)
-
-```js
-var bytes = ipaddr.parse('2a00:1450:8007::68').toByteArray(); // ipv6.google.com
-bytes // => [42, 0x00, 0x14, 0x50, 0x80, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x68]
-```
-
-The `ipaddr.IPv4` and `ipaddr.IPv6` objects have some methods defined, too. All of them
-have the same interface for both protocols, and are similar to global methods.
-
-`ipaddr.IPvX.isValid(string)` can be used to check if the string is a valid address
-for a particular protocol, and `ipaddr.IPvX.parse(string)` is the error-throwing parser.
-
-`ipaddr.IPv4.isValid(string)` uses the same format for parsing as the POSIX `inet_aton` function, which accepts unusual formats like `0xc0.168.1.1` or `0x10000000`. The function `ipaddr.IPv4.isValidFourPartDecimal(string)` validates the IPv4 address and also ensures that it is written in four-part decimal format.
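-
-For instance, a quick sketch of the difference between the two validators, using the formats mentioned above:
-
-```js
-ipaddr.IPv4.isValid("0xc0.168.1.1");                // => true (inet_aton-style format)
-ipaddr.IPv4.isValidFourPartDecimal("0xc0.168.1.1"); // => false
-ipaddr.IPv4.isValidFourPartDecimal("192.168.1.1");  // => true
-```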
-
-[IPv6 ranges]: https://github.com/whitequark/ipaddr.js/blob/master/src/ipaddr.coffee#L186
-[IPv4 ranges]: https://github.com/whitequark/ipaddr.js/blob/master/src/ipaddr.coffee#L71
-
-#### IPv6 properties
-
-Sometimes you will want to convert IPv6 not to a compact string representation (with
-the `::` substitution); the `toNormalizedString()` method will return an address where
-all zeroes are explicit.
-
-For example:
-
-```js
-var addr = ipaddr.parse("2001:0db8::0001");
-addr.toString(); // => "2001:db8::1"
-addr.toNormalizedString(); // => "2001:db8:0:0:0:0:0:1"
-```
-
-The `isIPv4MappedAddress()` method will return `true` if this address is an IPv4-mapped
-one, and `toIPv4Address()` will return an IPv4 object address.
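-
-For example, an illustrative round trip from an IPv4-mapped address back to IPv4:
-
-```js
-var mapped = ipaddr.parse("::ffff:192.168.1.1");
-mapped.isIPv4MappedAddress();      // => true
-mapped.toIPv4Address().toString(); // => "192.168.1.1"
-```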
-
-To access the underlying binary representation of the address, use `addr.parts`.
-
-```js
-var addr = ipaddr.parse("2001:db8:10::1234:DEAD");
-addr.parts // => [0x2001, 0xdb8, 0x10, 0, 0, 0, 0x1234, 0xdead]
-```
-
-A IPv6 zone index can be accessed via `addr.zoneId`:
-
-```js
-var addr = ipaddr.parse("2001:db8::%eth0");
-addr.zoneId // => 'eth0'
-```
-
-#### IPv4 properties
-
-`toIPv4MappedAddress()` will return a corresponding IPv4-mapped IPv6 address.
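-
-For example (the reverse of the `toIPv4Address()` round trip shown in the IPv6 section above):
-
-```js
-var addr = ipaddr.parse("192.168.1.1");
-addr.toIPv4MappedAddress().toString(); // => "::ffff:192.168.1.1"
-```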
-
-To access the underlying representation of the address, use `addr.octets`.
-
-```js
-var addr = ipaddr.parse("192.168.1.1");
-addr.octets // => [192, 168, 1, 1]
-```
-
-`prefixLengthFromSubnetMask()` will return a CIDR prefix length for a valid IPv4 netmask or
-null if the netmask is not valid.
-
-```js
-ipaddr.IPv4.parse('255.255.255.240').prefixLengthFromSubnetMask() == 28
-ipaddr.IPv4.parse('255.192.164.0').prefixLengthFromSubnetMask() == null
-```
-
-`subnetMaskFromPrefixLength()` will return an IPv4 netmask for a valid CIDR prefix length.
-
-```js
-ipaddr.IPv4.subnetMaskFromPrefixLength(24) == "255.255.255.0"
-ipaddr.IPv4.subnetMaskFromPrefixLength(29) == "255.255.255.248"
-```
-
-`broadcastAddressFromCIDR()` will return the broadcast address for a given IPv4 interface and netmask in CIDR notation.
-```js
-ipaddr.IPv4.broadcastAddressFromCIDR("172.0.0.1/24") == "172.0.0.255"
-```
-`networkAddressFromCIDR()` will return the network address for a given IPv4 interface and netmask in CIDR notation.
-```js
-ipaddr.IPv4.networkAddressFromCIDR("172.0.0.1/24") == "172.0.0.0"
-```
-
-#### Conversion
-
-IPv4 and IPv6 can be converted bidirectionally to and from network byte order (MSB) byte arrays.
-
-The `fromByteArray()` method will take an array and create an appropriate IPv4 or IPv6 object
-if the input satisfies the requirements. For IPv4 it has to be an array of four 8-bit values,
-while for IPv6 it has to be an array of sixteen 8-bit values.
-
-For example:
-```js
-var addr = ipaddr.fromByteArray([0x7f, 0, 0, 1]);
-addr.toString(); // => "127.0.0.1"
-```
-
-or
-
-```js
-var addr = ipaddr.fromByteArray([0x20, 1, 0xd, 0xb8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1])
-addr.toString(); // => "2001:db8::1"
-```
-
-Both objects also offer a `toByteArray()` method, which returns an array in network byte order (MSB).
-
-For example:
-```js
-var addr = ipaddr.parse("127.0.0.1");
-addr.toByteArray(); // => [0x7f, 0, 0, 1]
-```
-
-or
-
-```js
-var addr = ipaddr.parse("2001:db8::1");
-addr.toByteArray(); // => [0x20, 1, 0xd, 0xb8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
-```
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/qs/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/qs/README.md
deleted file mode 100644
index 11be8531dd587984a378408c61dee80057842c63..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/qs/README.md
+++ /dev/null
@@ -1,625 +0,0 @@
-# qs [![Version Badge][npm-version-svg]][package-url]
-
-[![github actions][actions-image]][actions-url]
-[![coverage][codecov-image]][codecov-url]
-[![dependency status][deps-svg]][deps-url]
-[![dev dependency status][dev-deps-svg]][dev-deps-url]
-[![License][license-image]][license-url]
-[![Downloads][downloads-image]][downloads-url]
-
-[![npm badge][npm-badge-png]][package-url]
-
-A querystring parsing and stringifying library with some added security.
-
-Lead Maintainer: [Jordan Harband](https://github.com/ljharb)
-
-The **qs** module was originally created and maintained by [TJ Holowaychuk](https://github.com/visionmedia/node-querystring).
-
-## Usage
-
-```javascript
-var qs = require('qs');
-var assert = require('assert');
-
-var obj = qs.parse('a=c');
-assert.deepEqual(obj, { a: 'c' });
-
-var str = qs.stringify(obj);
-assert.equal(str, 'a=c');
-```
-
-### Parsing Objects
-
-[](#preventEval)
-```javascript
-qs.parse(string, [options]);
-```
-
-**qs** allows you to create nested objects within your query strings, by surrounding the name of sub-keys with square brackets `[]`.
-For example, the string `'foo[bar]=baz'` converts to:
-
-```javascript
-assert.deepEqual(qs.parse('foo[bar]=baz'), {
- foo: {
- bar: 'baz'
- }
-});
-```
-
-When using the `plainObjects` option the parsed value is returned as a null object, created via `Object.create(null)` and as such you should be aware that prototype methods will not exist on it and a user may set those names to whatever value they like:
-
-```javascript
-var nullObject = qs.parse('a[hasOwnProperty]=b', { plainObjects: true });
-assert.deepEqual(nullObject, { a: { hasOwnProperty: 'b' } });
-```
-
-By default parameters that would overwrite properties on the object prototype are ignored, if you wish to keep the data from those fields either use `plainObjects` as mentioned above, or set `allowPrototypes` to `true` which will allow user input to overwrite those properties. *WARNING* It is generally a bad idea to enable this option as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option.
-
-```javascript
-var protoObject = qs.parse('a[hasOwnProperty]=b', { allowPrototypes: true });
-assert.deepEqual(protoObject, { a: { hasOwnProperty: 'b' } });
-```
-
-URI encoded strings work too:
-
-```javascript
-assert.deepEqual(qs.parse('a%5Bb%5D=c'), {
- a: { b: 'c' }
-});
-```
-
-You can also nest your objects, like `'foo[bar][baz]=foobarbaz'`:
-
-```javascript
-assert.deepEqual(qs.parse('foo[bar][baz]=foobarbaz'), {
- foo: {
- bar: {
- baz: 'foobarbaz'
- }
- }
-});
-```
-
-By default, when nesting objects **qs** will only parse up to 5 children deep. This means if you attempt to parse a string like
-`'a[b][c][d][e][f][g][h][i]=j'` your resulting object will be:
-
-```javascript
-var expected = {
- a: {
- b: {
- c: {
- d: {
- e: {
- f: {
- '[g][h][i]': 'j'
- }
- }
- }
- }
- }
- }
-};
-var string = 'a[b][c][d][e][f][g][h][i]=j';
-assert.deepEqual(qs.parse(string), expected);
-```
-
-This depth can be overridden by passing a `depth` option to `qs.parse(string, [options])`:
-
-```javascript
-var deep = qs.parse('a[b][c][d][e][f][g][h][i]=j', { depth: 1 });
-assert.deepEqual(deep, { a: { b: { '[c][d][e][f][g][h][i]': 'j' } } });
-```
-
-The depth limit helps mitigate abuse when **qs** is used to parse user input, and it is recommended to keep it a reasonably small number.
-
-For similar reasons, by default **qs** will only parse up to 1000 parameters. This can be overridden by passing a `parameterLimit` option:
-
-```javascript
-var limited = qs.parse('a=b&c=d', { parameterLimit: 1 });
-assert.deepEqual(limited, { a: 'b' });
-```
-
-To bypass the leading question mark, use `ignoreQueryPrefix`:
-
-```javascript
-var prefixed = qs.parse('?a=b&c=d', { ignoreQueryPrefix: true });
-assert.deepEqual(prefixed, { a: 'b', c: 'd' });
-```
-
-An optional delimiter can also be passed:
-
-```javascript
-var delimited = qs.parse('a=b;c=d', { delimiter: ';' });
-assert.deepEqual(delimited, { a: 'b', c: 'd' });
-```
-
-Delimiters can be a regular expression too:
-
-```javascript
-var regexed = qs.parse('a=b;c=d,e=f', { delimiter: /[;,]/ });
-assert.deepEqual(regexed, { a: 'b', c: 'd', e: 'f' });
-```
-
-Option `allowDots` can be used to enable dot notation:
-
-```javascript
-var withDots = qs.parse('a.b=c', { allowDots: true });
-assert.deepEqual(withDots, { a: { b: 'c' } });
-```
-
-If you have to deal with legacy browsers or services, there's
-also support for decoding percent-encoded octets as iso-8859-1:
-
-```javascript
-var oldCharset = qs.parse('a=%A7', { charset: 'iso-8859-1' });
-assert.deepEqual(oldCharset, { a: '§' });
-```
-
-Some services add an initial `utf8=✓` value to forms so that old
-Internet Explorer versions are more likely to submit the form as
-utf-8. Additionally, the server can check the value against wrong
-encodings of the checkmark character and detect that a query string
-or `application/x-www-form-urlencoded` body was *not* sent as
-utf-8, eg. if the form had an `accept-charset` parameter or the
-containing page had a different character set.
-
-**qs** supports this mechanism via the `charsetSentinel` option.
-If specified, the `utf8` parameter will be omitted from the
-returned object. It will be used to switch to `iso-8859-1`/`utf-8`
-mode depending on how the checkmark is encoded.
-
-**Important**: When you specify both the `charset` option and the
-`charsetSentinel` option, the `charset` will be overridden when
-the request contains a `utf8` parameter from which the actual
-charset can be deduced. In that sense the `charset` will behave
-as the default charset rather than the authoritative charset.
-
-```javascript
-var detectedAsUtf8 = qs.parse('utf8=%E2%9C%93&a=%C3%B8', {
- charset: 'iso-8859-1',
- charsetSentinel: true
-});
-assert.deepEqual(detectedAsUtf8, { a: 'ø' });
-
-// Browsers encode the checkmark as ✓ when submitting as iso-8859-1:
-var detectedAsIso8859_1 = qs.parse('utf8=%26%2310003%3B&a=%F8', {
- charset: 'utf-8',
- charsetSentinel: true
-});
-assert.deepEqual(detectedAsIso8859_1, { a: 'ø' });
-```
-
-If you want to decode the `...;` syntax to the actual character,
-you can specify the `interpretNumericEntities` option as well:
-
-```javascript
-var detectedAsIso8859_1 = qs.parse('a=%26%239786%3B', {
- charset: 'iso-8859-1',
- interpretNumericEntities: true
-});
-assert.deepEqual(detectedAsIso8859_1, { a: '☺' });
-```
-
-It also works when the charset has been detected in `charsetSentinel`
-mode.
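-
-For instance, a sketch of how the two options might be combined: the sentinel announces that the body was
-encoded as iso-8859-1, and the numeric entity in the value is then interpreted.
-
-```javascript
-var combined = qs.parse('utf8=%26%2310003%3B&a=%26%239786%3B', {
-    charsetSentinel: true,
-    interpretNumericEntities: true
-});
-assert.deepEqual(combined, { a: '☺' }); // assumed output, following the two examples above
-```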
-
-### Parsing Arrays
-
-**qs** can also parse arrays using a similar `[]` notation:
-
-```javascript
-var withArray = qs.parse('a[]=b&a[]=c');
-assert.deepEqual(withArray, { a: ['b', 'c'] });
-```
-
-You may specify an index as well:
-
-```javascript
-var withIndexes = qs.parse('a[1]=c&a[0]=b');
-assert.deepEqual(withIndexes, { a: ['b', 'c'] });
-```
-
-Note that the only difference between an index in an array and a key in an object is that the value between the brackets must be a number
-to create an array. When creating arrays with specific indices, **qs** will compact a sparse array to only the existing values preserving
-their order:
-
-```javascript
-var noSparse = qs.parse('a[1]=b&a[15]=c');
-assert.deepEqual(noSparse, { a: ['b', 'c'] });
-```
-
-You may also use `allowSparse` option to parse sparse arrays:
-
-```javascript
-var sparseArray = qs.parse('a[1]=2&a[3]=5', { allowSparse: true });
-assert.deepEqual(sparseArray, { a: [, '2', , '5'] });
-```
-
-Note that an empty string is also a value, and will be preserved:
-
-```javascript
-var withEmptyString = qs.parse('a[]=&a[]=b');
-assert.deepEqual(withEmptyString, { a: ['', 'b'] });
-
-var withIndexedEmptyString = qs.parse('a[0]=b&a[1]=&a[2]=c');
-assert.deepEqual(withIndexedEmptyString, { a: ['b', '', 'c'] });
-```
-
-**qs** will also limit specifying indices in an array to a maximum index of `20`. Any array members with an index of greater than `20` will
-instead be converted to an object with the index as the key. This is needed to handle cases when someone sent, for example, `a[999999999]` and it will take significant time to iterate over this huge array.
-
-```javascript
-var withMaxIndex = qs.parse('a[100]=b');
-assert.deepEqual(withMaxIndex, { a: { '100': 'b' } });
-```
-
-This limit can be overridden by passing an `arrayLimit` option:
-
-```javascript
-var withArrayLimit = qs.parse('a[1]=b', { arrayLimit: 0 });
-assert.deepEqual(withArrayLimit, { a: { '1': 'b' } });
-```
-
-To disable array parsing entirely, set `parseArrays` to `false`.
-
-```javascript
-var noParsingArrays = qs.parse('a[]=b', { parseArrays: false });
-assert.deepEqual(noParsingArrays, { a: { '0': 'b' } });
-```
-
-If you mix notations, **qs** will merge the two items into an object:
-
-```javascript
-var mixedNotation = qs.parse('a[0]=b&a[b]=c');
-assert.deepEqual(mixedNotation, { a: { '0': 'b', b: 'c' } });
-```
-
-You can also create arrays of objects:
-
-```javascript
-var arraysOfObjects = qs.parse('a[][b]=c');
-assert.deepEqual(arraysOfObjects, { a: [{ b: 'c' }] });
-```
-
-Some people use a comma to join array elements; **qs** can parse it:
-```javascript
-var arraysOfObjects = qs.parse('a=b,c', { comma: true })
-assert.deepEqual(arraysOfObjects, { a: ['b', 'c'] })
-```
-(_this cannot convert nested objects, such as `a={b:1},{c:d}`_)
-
-### Parsing primitive/scalar values (numbers, booleans, null, etc)
-
-By default, all values are parsed as strings. This behavior will not change and is explained in [issue #91](https://github.com/ljharb/qs/issues/91).
-
-```javascript
-var primitiveValues = qs.parse('a=15&b=true&c=null');
-assert.deepEqual(primitiveValues, { a: '15', b: 'true', c: 'null' });
-```
-
-If you wish to auto-convert values which look like numbers, booleans, and other values into their primitive counterparts, you can use the [query-types Express JS middleware](https://github.com/xpepermint/query-types) which will auto-convert all request query parameters.
-
-### Stringifying
-
-[](#preventEval)
-```javascript
-qs.stringify(object, [options]);
-```
-
-When stringifying, **qs** by default URI encodes output. Objects are stringified as you would expect:
-
-```javascript
-assert.equal(qs.stringify({ a: 'b' }), 'a=b');
-assert.equal(qs.stringify({ a: { b: 'c' } }), 'a%5Bb%5D=c');
-```
-
-This encoding can be disabled by setting the `encode` option to `false`:
-
-```javascript
-var unencoded = qs.stringify({ a: { b: 'c' } }, { encode: false });
-assert.equal(unencoded, 'a[b]=c');
-```
-
-Encoding can be disabled for keys by setting the `encodeValuesOnly` option to `true`:
-```javascript
-var encodedValues = qs.stringify(
- { a: 'b', c: ['d', 'e=f'], f: [['g'], ['h']] },
- { encodeValuesOnly: true }
-);
-assert.equal(encodedValues,'a=b&c[0]=d&c[1]=e%3Df&f[0][0]=g&f[1][0]=h');
-```
-
-This encoding can also be replaced by a custom encoding method set as `encoder` option:
-
-```javascript
-var encoded = qs.stringify({ a: { b: 'c' } }, { encoder: function (str) {
- // Passed in values `a`, `b`, `c`
- return // Return encoded string
-}})
-```
-
-_(Note: the `encoder` option does not apply if `encode` is `false`)_
-
-Analogue to the `encoder` there is a `decoder` option for `parse` to override decoding of properties and values:
-
-```javascript
-var decoded = qs.parse('x=z', { decoder: function (str) {
- // Passed in values `x`, `z`
- return // Return decoded string
-}})
-```
-
-You can encode keys and values using different logic by using the type argument provided to the encoder:
-
-```javascript
-var encoded = qs.stringify({ a: { b: 'c' } }, { encoder: function (str, defaultEncoder, charset, type) {
- if (type === 'key') {
- return // Encoded key
- } else if (type === 'value') {
- return // Encoded value
- }
-}})
-```
-
-The type argument is also provided to the decoder:
-
-```javascript
-var decoded = qs.parse('x=z', { decoder: function (str, defaultDecoder, charset, type) {
- if (type === 'key') {
- return // Decoded key
- } else if (type === 'value') {
- return // Decoded value
- }
-}})
-```
-
-Examples beyond this point will be shown as though the output is not URI encoded for clarity. Please note that the return values in these cases *will* be URI encoded during real usage.
-
-When arrays are stringified, by default they are given explicit indices:
-
-```javascript
-qs.stringify({ a: ['b', 'c', 'd'] });
-// 'a[0]=b&a[1]=c&a[2]=d'
-```
-
-You may override this by setting the `indices` option to `false`:
-
-```javascript
-qs.stringify({ a: ['b', 'c', 'd'] }, { indices: false });
-// 'a=b&a=c&a=d'
-```
-
-You may use the `arrayFormat` option to specify the format of the output array:
-
-```javascript
-qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'indices' })
-// 'a[0]=b&a[1]=c'
-qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'brackets' })
-// 'a[]=b&a[]=c'
-qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'repeat' })
-// 'a=b&a=c'
-qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'comma' })
-// 'a=b,c'
-```
-
-Note: when using `arrayFormat` set to `'comma'`, you can also pass the `commaRoundTrip` option set to `true` or `false`, to append `[]` on single-item arrays, so that they can round trip through a parse.
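-
-A sketch of the difference this makes for a single-item array (assuming the round-trip behaviour described in the note above):
-
-```javascript
-qs.stringify({ a: ['b'] }, { arrayFormat: 'comma' })
-// 'a=b'
-qs.stringify({ a: ['b'] }, { arrayFormat: 'comma', commaRoundTrip: true })
-// 'a[]=b'
-```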
-
-When objects are stringified, by default they use bracket notation:
-
-```javascript
-qs.stringify({ a: { b: { c: 'd', e: 'f' } } });
-// 'a[b][c]=d&a[b][e]=f'
-```
-
-You may override this to use dot notation by setting the `allowDots` option to `true`:
-
-```javascript
-qs.stringify({ a: { b: { c: 'd', e: 'f' } } }, { allowDots: true });
-// 'a.b.c=d&a.b.e=f'
-```
-
-Empty strings and null values will omit the value, but the equals sign (=) remains in place:
-
-```javascript
-assert.equal(qs.stringify({ a: '' }), 'a=');
-```
-
-Key with no values (such as an empty object or array) will return nothing:
-
-```javascript
-assert.equal(qs.stringify({ a: [] }), '');
-assert.equal(qs.stringify({ a: {} }), '');
-assert.equal(qs.stringify({ a: [{}] }), '');
-assert.equal(qs.stringify({ a: { b: []} }), '');
-assert.equal(qs.stringify({ a: { b: {}} }), '');
-```
-
-Properties that are set to `undefined` will be omitted entirely:
-
-```javascript
-assert.equal(qs.stringify({ a: null, b: undefined }), 'a=');
-```
-
-The query string may optionally be prepended with a question mark:
-
-```javascript
-assert.equal(qs.stringify({ a: 'b', c: 'd' }, { addQueryPrefix: true }), '?a=b&c=d');
-```
-
-The delimiter may be overridden with stringify as well:
-
-```javascript
-assert.equal(qs.stringify({ a: 'b', c: 'd' }, { delimiter: ';' }), 'a=b;c=d');
-```
-
-If you only want to override the serialization of `Date` objects, you can provide a `serializeDate` option:
-
-```javascript
-var date = new Date(7);
-assert.equal(qs.stringify({ a: date }), 'a=1970-01-01T00:00:00.007Z'.replace(/:/g, '%3A'));
-assert.equal(
- qs.stringify({ a: date }, { serializeDate: function (d) { return d.getTime(); } }),
- 'a=7'
-);
-```
-
-You may use the `sort` option to affect the order of parameter keys:
-
-```javascript
-function alphabeticalSort(a, b) {
- return a.localeCompare(b);
-}
-assert.equal(qs.stringify({ a: 'c', z: 'y', b : 'f' }, { sort: alphabeticalSort }), 'a=c&b=f&z=y');
-```
-
-Finally, you can use the `filter` option to restrict which keys will be included in the stringified output.
-If you pass a function, it will be called for each key to obtain the replacement value. Otherwise, if you
-pass an array, it will be used to select properties and array indices for stringification:
-
-```javascript
-function filterFunc(prefix, value) {
- if (prefix == 'b') {
- // Return an `undefined` value to omit a property.
- return;
- }
- if (prefix == 'e[f]') {
- return value.getTime();
- }
- if (prefix == 'e[g][0]') {
- return value * 2;
- }
- return value;
-}
-qs.stringify({ a: 'b', c: 'd', e: { f: new Date(123), g: [2] } }, { filter: filterFunc });
-// 'a=b&c=d&e[f]=123&e[g][0]=4'
-qs.stringify({ a: 'b', c: 'd', e: 'f' }, { filter: ['a', 'e'] });
-// 'a=b&e=f'
-qs.stringify({ a: ['b', 'c', 'd'], e: 'f' }, { filter: ['a', 0, 2] });
-// 'a[0]=b&a[2]=d'
-```
-
-### Handling of `null` values
-
-By default, `null` values are treated like empty strings:
-
-```javascript
-var withNull = qs.stringify({ a: null, b: '' });
-assert.equal(withNull, 'a=&b=');
-```
-
-Parsing does not distinguish between parameters with and without equal signs. Both are converted to empty strings.
-
-```javascript
-var equalsInsensitive = qs.parse('a&b=');
-assert.deepEqual(equalsInsensitive, { a: '', b: '' });
-```
-
-To distinguish between `null` values and empty strings use the `strictNullHandling` flag. In the result string the `null`
-values have no `=` sign:
-
-```javascript
-var strictNull = qs.stringify({ a: null, b: '' }, { strictNullHandling: true });
-assert.equal(strictNull, 'a&b=');
-```
-
-To parse values without `=` back to `null` use the `strictNullHandling` flag:
-
-```javascript
-var parsedStrictNull = qs.parse('a&b=', { strictNullHandling: true });
-assert.deepEqual(parsedStrictNull, { a: null, b: '' });
-```
-
-To completely skip rendering keys with `null` values, use the `skipNulls` flag:
-
-```javascript
-var nullsSkipped = qs.stringify({ a: 'b', c: null}, { skipNulls: true });
-assert.equal(nullsSkipped, 'a=b');
-```
-
-If you're communicating with legacy systems, you can switch to `iso-8859-1`
-using the `charset` option:
-
-```javascript
-var iso = qs.stringify({ æ: 'æ' }, { charset: 'iso-8859-1' });
-assert.equal(iso, '%E6=%E6');
-```
-
-Characters that don't exist in `iso-8859-1` will be converted to numeric
-entities, similar to what browsers do:
-
-```javascript
-var numeric = qs.stringify({ a: '☺' }, { charset: 'iso-8859-1' });
-assert.equal(numeric, 'a=%26%239786%3B');
-```
-
-You can use the `charsetSentinel` option to announce the character by
-including an `utf8=✓` parameter with the proper encoding of the checkmark,
-similar to what Ruby on Rails and others do when submitting forms.
-
-```javascript
-var sentinel = qs.stringify({ a: '☺' }, { charsetSentinel: true });
-assert.equal(sentinel, 'utf8=%E2%9C%93&a=%E2%98%BA');
-
-var isoSentinel = qs.stringify({ a: 'æ' }, { charsetSentinel: true, charset: 'iso-8859-1' });
-assert.equal(isoSentinel, 'utf8=%26%2310003%3B&a=%E6');
-```
-
-### Dealing with special character sets
-
-By default the encoding and decoding of characters is done in `utf-8`,
-and `iso-8859-1` support is also built in via the `charset` parameter.
-
-If you wish to encode querystrings to a different character set (i.e.
-[Shift JIS](https://en.wikipedia.org/wiki/Shift_JIS)) you can use the
-[`qs-iconv`](https://github.com/martinheidegger/qs-iconv) library:
-
-```javascript
-var encoder = require('qs-iconv/encoder')('shift_jis');
-var shiftJISEncoded = qs.stringify({ a: 'こんにちは!' }, { encoder: encoder });
-assert.equal(shiftJISEncoded, 'a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I');
-```
-
-This also works for decoding of query strings:
-
-```javascript
-var decoder = require('qs-iconv/decoder')('shift_jis');
-var obj = qs.parse('a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I', { decoder: decoder });
-assert.deepEqual(obj, { a: 'こんにちは!' });
-```
-
-### RFC 3986 and RFC 1738 space encoding
-
-RFC 3986 is used as the default option and encodes ' ' to *%20*, which is backward compatible.
-At the same time, the output can be stringified as per RFC 1738, with ' ' encoded as '+'.
-
-```
-assert.equal(qs.stringify({ a: 'b c' }), 'a=b%20c');
-assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC3986' }), 'a=b%20c');
-assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC1738' }), 'a=b+c');
-```
-
-## Security
-
-Please email [@ljharb](https://github.com/ljharb) or see https://tidelift.com/security if you have a potential security vulnerability to report.
-
-## qs for enterprise
-
-Available as part of the Tidelift Subscription
-
-The maintainers of qs and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-qs?utm_source=npm-qs&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
-
-[package-url]: https://npmjs.org/package/qs
-[npm-version-svg]: https://versionbadg.es/ljharb/qs.svg
-[deps-svg]: https://david-dm.org/ljharb/qs.svg
-[deps-url]: https://david-dm.org/ljharb/qs
-[dev-deps-svg]: https://david-dm.org/ljharb/qs/dev-status.svg
-[dev-deps-url]: https://david-dm.org/ljharb/qs#info=devDependencies
-[npm-badge-png]: https://nodei.co/npm/qs.png?downloads=true&stars=true
-[license-image]: https://img.shields.io/npm/l/qs.svg
-[license-url]: LICENSE
-[downloads-image]: https://img.shields.io/npm/dm/qs.svg
-[downloads-url]: https://npm-stat.com/charts.html?package=qs
-[codecov-image]: https://codecov.io/gh/ljharb/qs/branch/main/graphs/badge.svg
-[codecov-url]: https://app.codecov.io/gh/ljharb/qs/
-[actions-image]: https://img.shields.io/endpoint?url=https://github-actions-badge-u3jn4tfpocch.runkit.sh/ljharb/qs
-[actions-url]: https://github.com/ljharb/qs/actions
diff --git a/spaces/fffiloni/imagic-stable-diffusion/imagic.py b/spaces/fffiloni/imagic-stable-diffusion/imagic.py
deleted file mode 100644
index 9bdf4788f5dc14a088e187cb565b233588af1b9e..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/imagic-stable-diffusion/imagic.py
+++ /dev/null
@@ -1,498 +0,0 @@
-"""
- modeled after the textual_inversion.py / train_dreambooth.py and the work
- of justinpinkney here: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
-"""
-import inspect
-import warnings
-from typing import List, Optional, Union
-import bitsandbytes as bnb
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-import PIL
-from accelerate import Accelerator
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.pipeline_utils import DiffusionPipeline
-from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from diffusers.utils import logging
-
-# TODO: remove and import from diffusers.utils when the new version of diffusers is released
-from packaging import version
-from tqdm.auto import tqdm
-from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
-
-
-if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
- PIL_INTERPOLATION = {
- "linear": PIL.Image.Resampling.BILINEAR,
- "bilinear": PIL.Image.Resampling.BILINEAR,
- "bicubic": PIL.Image.Resampling.BICUBIC,
- "lanczos": PIL.Image.Resampling.LANCZOS,
- "nearest": PIL.Image.Resampling.NEAREST,
- }
-else:
- PIL_INTERPOLATION = {
- "linear": PIL.Image.LINEAR,
- "bilinear": PIL.Image.BILINEAR,
- "bicubic": PIL.Image.BICUBIC,
- "lanczos": PIL.Image.LANCZOS,
- "nearest": PIL.Image.NEAREST,
- }
-# ------------------------------------------------------------------------------
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def preprocess(image):
- w, h = image.size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- image = torch.from_numpy(image)
- return 2.0 * image - 1.0
-
-
-class ImagicStableDiffusionPipeline(DiffusionPipeline):
- r"""
- Pipeline for imagic image editing.
- See paper here: https://arxiv.org/pdf/2210.09276.pdf
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
-            Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPFeatureExtractor,
- ):
- super().__init__()
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
-
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- r"""
- Enable sliced attention computation.
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention
- in several steps. This is useful to save some memory in exchange for a small speed decrease.
- Args:
- slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
- When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
- a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
- `attention_head_dim` must be a multiple of `slice_size`.
- """
- if slice_size == "auto":
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = self.unet.config.attention_head_dim // 2
- self.unet.set_attention_slice(slice_size)
-
- def disable_attention_slicing(self):
- r"""
- Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
- back to computing attention in one step.
- """
- # set slice_size = `None` to disable `attention slicing`
- self.enable_attention_slicing(None)
-
- def train(
- self,
- prompt: Union[str, List[str]],
- init_image: Union[torch.FloatTensor, PIL.Image.Image],
- height: Optional[int] = 512,
- width: Optional[int] = 512,
- generator: Optional[torch.Generator] = None,
- embedding_learning_rate: float = 0.001,
- diffusion_model_learning_rate: float = 2e-6,
- text_embedding_optimization_steps: int = 100,
- model_fine_tuning_optimization_steps: int = 500,
- **kwargs,
- ):
-        r"""
-        Fits the pipeline to `init_image` as described in the Imagic paper: the text embedding for `prompt` is first
-        optimized to reconstruct the image, then the UNet is fine-tuned around that embedding.
-        Args:
-            prompt (`str` or `List[str]`):
-                The target prompt describing the desired edit of `init_image`.
-            init_image (`torch.FloatTensor` or `PIL.Image.Image`):
-                The image to be edited.
-            height (`int`, *optional*, defaults to 512):
-                The height in pixels of the generated image.
-            width (`int`, *optional*, defaults to 512):
-                The width in pixels of the generated image.
-            generator (`torch.Generator`, *optional*):
-                A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
-                deterministic.
-            embedding_learning_rate (`float`, *optional*, defaults to 0.001):
-                Learning rate used while optimizing the text embedding.
-            diffusion_model_learning_rate (`float`, *optional*, defaults to 2e-6):
-                Learning rate used while fine-tuning the UNet.
-            text_embedding_optimization_steps (`int`, *optional*, defaults to 100):
-                Number of optimization steps for the text embedding.
-            model_fine_tuning_optimization_steps (`int`, *optional*, defaults to 500):
-                Number of fine-tuning steps for the UNet.
-        Returns:
-            `None`. The optimized and original text embeddings are stored on the pipeline as `self.text_embeddings`
-            and `self.text_embeddings_orig` for later use in `__call__`.
-        """
- accelerator = Accelerator(
- gradient_accumulation_steps=1,
- mixed_precision="fp16",
- )
-
- if "torch_device" in kwargs:
- device = kwargs.pop("torch_device")
- warnings.warn(
- "`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0."
- " Consider using `pipe.to(torch_device)` instead."
- )
-
- if device is None:
- device = "cuda" if torch.cuda.is_available() else "cpu"
- self.to(device)
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- # Freeze vae and unet
- self.vae.requires_grad_(False)
- self.unet.requires_grad_(False)
- self.text_encoder.requires_grad_(False)
- self.unet.eval()
- self.vae.eval()
- self.text_encoder.eval()
-
- if accelerator.is_main_process:
- accelerator.init_trackers(
- "imagic",
- config={
- "embedding_learning_rate": embedding_learning_rate,
- "text_embedding_optimization_steps": text_embedding_optimization_steps,
- },
- )
-
- # get text embeddings for prompt
- text_input = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_embeddings = torch.nn.Parameter(
- self.text_encoder(text_input.input_ids.to(self.device))[0], requires_grad=True
- )
- text_embeddings = text_embeddings.detach()
- text_embeddings.requires_grad_()
- text_embeddings_orig = text_embeddings.clone()
-
- # Initialize the optimizer
-
- optimizer = bnb.optim.Adam8bit(
- [text_embeddings], # only optimize the embeddings
- lr=embedding_learning_rate,
- )
-
- if isinstance(init_image, PIL.Image.Image):
- init_image = preprocess(init_image)
-
- latents_dtype = text_embeddings.dtype
- init_image = init_image.to(device=self.device, dtype=latents_dtype)
- init_latent_image_dist = self.vae.encode(init_image).latent_dist
- init_image_latents = init_latent_image_dist.sample(generator=generator)
- init_image_latents = 0.18215 * init_image_latents
-
- progress_bar = tqdm(range(text_embedding_optimization_steps), disable=not accelerator.is_local_main_process)
- progress_bar.set_description("Steps")
-
- global_step = 0
-
- logger.info("First optimizing the text embedding to better reconstruct the init image")
- for _ in range(text_embedding_optimization_steps):
- with accelerator.accumulate(text_embeddings):
- # Sample noise that we'll add to the latents
- noise = torch.randn(init_image_latents.shape).to(init_image_latents.device)
- timesteps = torch.randint(1000, (1,), device=init_image_latents.device)
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = self.scheduler.add_noise(init_image_latents, noise, timesteps)
-
- # Predict the noise residual
- noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
-
- loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
- accelerator.backward(loss)
-
- optimizer.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- accelerator.log(logs, step=global_step)
-
- accelerator.wait_for_everyone()
-
- text_embeddings.requires_grad_(False)
-
- # Now we fine tune the unet to better reconstruct the image
- self.unet.requires_grad_(True)
- self.unet.train()
- optimizer = bnb.optim.Adam8bit(
- self.unet.parameters(), # only optimize unet
- lr=diffusion_model_learning_rate,
- )
- progress_bar = tqdm(range(model_fine_tuning_optimization_steps), disable=not accelerator.is_local_main_process)
-
- logger.info("Next fine tuning the entire model to better reconstruct the init image")
- for _ in range(model_fine_tuning_optimization_steps):
- with accelerator.accumulate(self.unet.parameters()):
- # Sample noise that we'll add to the latents
- noise = torch.randn(init_image_latents.shape).to(init_image_latents.device)
- timesteps = torch.randint(1000, (1,), device=init_image_latents.device)
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = self.scheduler.add_noise(init_image_latents, noise, timesteps)
-
- # Predict the noise residual
- noise_pred = self.unet(noisy_latents, timesteps, text_embeddings).sample
-
- loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean()
- accelerator.backward(loss)
-
- optimizer.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- logs = {"loss": loss.detach().item()} # , "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- accelerator.log(logs, step=global_step)
-
- accelerator.wait_for_everyone()
- self.text_embeddings_orig = text_embeddings_orig
- self.text_embeddings = text_embeddings
-
- @torch.no_grad()
- def __call__(
- self,
- alpha: float = 1.2,
- height: Optional[int] = 512,
- width: Optional[int] = 512,
- num_inference_steps: Optional[int] = 50,
- generator: Optional[torch.Generator] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- guidance_scale: float = 7.5,
- eta: float = 0.0,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
- Args:
-            alpha (`float`, *optional*, defaults to 1.2):
-                Interpolation factor between the optimized text embedding (`alpha = 0`) and the original embedding of
-                the training prompt (`alpha = 1`); values above 1 extrapolate further towards the training prompt.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. Higher guidance scale encourages generating images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator`, *optional*):
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
- deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
- if self.text_embeddings is None:
- raise ValueError("Please run the pipe.train() before trying to generate an image.")
- if self.text_embeddings_orig is None:
- raise ValueError("Please run the pipe.train() before trying to generate an image.")
-
- text_embeddings = alpha * self.text_embeddings_orig + (1 - alpha) * self.text_embeddings
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens = [""]
- max_length = self.tokenizer.model_max_length
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = uncond_embeddings.shape[1]
- uncond_embeddings = uncond_embeddings.view(1, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
- # get the initial random noise unless the user supplied it
-
- # Unlike in other pipelines, latents need to be generated in the target device
- # for 1-to-1 results reproducibility with the CompVis implementation.
- # However this currently doesn't work in `mps`.
- latents_shape = (1, self.unet.in_channels, height // 8, width // 8)
- latents_dtype = text_embeddings.dtype
- if self.device.type == "mps":
- # randn does not exist on mps
- latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
- self.device
- )
- else:
- latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
-
- # set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
-
- # Some schedulers like PNDM have timesteps as arrays
- # It's more optimized to move all timesteps to correct device beforehand
- timesteps_tensor = self.scheduler.timesteps.to(self.device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- for i, t in enumerate(self.progress_bar(timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- latents = 1 / 0.18215 * latents
- image = self.vae.decode(latents).sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
-
-        # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(
- self.device
- )
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
- )
- else:
- has_nsfw_concept = None
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
\ No newline at end of file
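
As an aside on the file above: the Imagic workflow it implements is three-phase (optimize the text embedding, fine-tune the UNet, then sample with an interpolated embedding), and that order is easiest to see in a short driver script. The sketch below is illustrative only: the `imagic` import path, model id, image path, and prompt are assumptions, it targets an older diffusers release matching the imports above, and a CUDA device plus `bitsandbytes` are required by `train()` as written.

```python
# Minimal usage sketch for the ImagicStableDiffusionPipeline defined above.
# Assumptions: the file is importable as `imagic`, a CUDA GPU is available,
# and the CompVis/stable-diffusion-v1-4 weights can be downloaded.
from PIL import Image
from diffusers import StableDiffusionPipeline

from imagic import ImagicStableDiffusionPipeline  # hypothetical import path

sd = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = ImagicStableDiffusionPipeline(
    vae=sd.vae,
    text_encoder=sd.text_encoder,
    tokenizer=sd.tokenizer,
    unet=sd.unet,
    scheduler=sd.scheduler,
    safety_checker=sd.safety_checker,
    feature_extractor=sd.feature_extractor,
).to("cuda")

init_image = Image.open("bird.jpg").convert("RGB")  # illustrative input image

# Phases 1-2: optimize the text embedding, then fine-tune the UNet on the input image.
pipe.train("A photo of a bird spreading its wings", init_image=init_image)

# Phase 3: blend the optimized and original embeddings and sample an edited image.
result = pipe(alpha=1.2, guidance_scale=7.5, num_inference_steps=50)
result.images[0].save("edited_bird.png")
```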
diff --git a/spaces/fffiloni/mmpose-estimation/README.md b/spaces/fffiloni/mmpose-estimation/README.md
deleted file mode 100644
index 280d59b219a20cf993ffe42fdd64df9732e79efb..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/mmpose-estimation/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: MMPose estimation
-emoji: 🏃
-colorFrom: pink
-colorTo: indigo
-python_version: 3.9.16
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: test1444/test_mmpose
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/fffiloni/video2mmpose/app.py b/spaces/fffiloni/video2mmpose/app.py
deleted file mode 100644
index bfd12610d4b3421e5ece0c7db82e404b4f96c7f3..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/video2mmpose/app.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import gradio as gr
-
-import os
-import cv2
-import numpy as np
-from PIL import Image
-from moviepy.editor import *
-
-mmpose = gr.Interface.load(name="spaces/fffiloni/mmpose-estimation")
-
-def get_frames(video_in):
- frames = []
- #resize the video
- clip = VideoFileClip(video_in)
-
- #check fps
- if clip.fps > 30:
-        print("video rate is over 30, resetting to 30")
- clip_resized = clip.resize(height=512)
- clip_resized.write_videofile("video_resized.mp4", fps=30)
- else:
- print("video rate is OK")
- clip_resized = clip.resize(height=512)
- clip_resized.write_videofile("video_resized.mp4", fps=clip.fps)
-
- print("video resized to 512 height")
-
- # Opens the Video file with CV2
- cap= cv2.VideoCapture("video_resized.mp4")
-
- fps = cap.get(cv2.CAP_PROP_FPS)
- print("video fps: " + str(fps))
- i=0
- while(cap.isOpened()):
- ret, frame = cap.read()
- if ret == False:
- break
- cv2.imwrite('kang'+str(i)+'.jpg',frame)
- frames.append('kang'+str(i)+'.jpg')
- i+=1
-
- cap.release()
- cv2.destroyAllWindows()
- print("broke the video into frames")
-
- return frames, fps
-
-def get_mmpose_filter(i):
- #image = Image.open(i)
-
- #image = np.array(image)
-
- image = mmpose(i, fn_index=0)[1]
- image = Image.open(image)
- #image = Image.fromarray(image)
- image.save("mmpose_frame_" + str(i) + ".jpeg")
- return "mmpose_frame_" + str(i) + ".jpeg"
-
-def create_video(frames, fps, type):
- print("building video result")
- clip = ImageSequenceClip(frames, fps=fps)
- clip.write_videofile(type + "_result.mp4", fps=fps)
-
- return type + "_result.mp4"
-
-def convertG2V(imported_gif):
- clip = VideoFileClip(imported_gif.name)
- clip.write_videofile("my_gif_video.mp4")
- return "my_gif_video.mp4"
-
-def infer(video_in):
-
-
- # 1. break video into frames and get FPS
- break_vid = get_frames(video_in)
- frames_list= break_vid[0]
- fps = break_vid[1]
- #n_frame = int(trim_value*fps)
- n_frame = len(frames_list)
-
- if n_frame >= len(frames_list):
- print("video is shorter than the cut value")
- n_frame = len(frames_list)
-
- # 2. prepare frames result arrays
- result_frames = []
- print("set stop frames to: " + str(n_frame))
-
- for i in frames_list[0:int(n_frame)]:
- mmpose_frame = get_mmpose_filter(i)
- result_frames.append(mmpose_frame)
- print("frame " + i + "/" + str(n_frame) + ": done;")
-
-
- final_vid = create_video(result_frames, fps, "mmpose")
-
- files = [final_vid]
-
- return final_vid, files
-
-title="""
-
-
-
- Video to MMPose
-
-
-
-
Convert any video or gif to a MMPose sequence.
- Once you got your converted video, you can use it with the FollowYourPose demo
-
-"""
-
-with gr.Blocks() as demo:
- with gr.Column():
- gr.HTML(title)
- with gr.Row():
- with gr.Column():
- video_input = gr.Video(source="upload", type="filepath")
- gif_input = gr.File(label="import a GIF instead", file_types=['.gif'])
- gif_input.change(fn=convertG2V, inputs=gif_input, outputs=video_input)
- submit_btn = gr.Button("Submit")
-
- with gr.Column():
- video_output = gr.Video()
- file_output = gr.Files()
-
- gr.Examples(
- examples=["./examples/childishgambino.mp4", "./examples/jimmyfallon.mp4"],
- fn=infer,
- inputs=[video_input],
- outputs=[video_output,file_output],
- cache_examples=False
- )
-
- submit_btn.click(fn=infer, inputs=[video_input], outputs=[video_output, file_output])
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/firefighter/TransDis-CreativityAutoAssessment/utils/pipeline.py b/spaces/firefighter/TransDis-CreativityAutoAssessment/utils/pipeline.py
deleted file mode 100644
index bab4730fab5802f6e7ac8e8f8876a578a1b3740b..0000000000000000000000000000000000000000
--- a/spaces/firefighter/TransDis-CreativityAutoAssessment/utils/pipeline.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from typing import List
-
-import pandas as pd
-from sentence_transformers.util import cos_sim
-
-from utils.models import ModelWithPooling
-
-
-def p0_originality(df: pd.DataFrame, model_name: str, pooling: str) -> pd.DataFrame:
- """
- row-wise
- :param df:
- :param model_name:
- :return:
- """
- assert 'prompt' in df.columns
- assert 'response' in df.columns
- model = ModelWithPooling(model_name)
-
- def get_cos_sim(prompt: str, response: str) -> float:
- prompt_vec = model(text=prompt, pooling=pooling)
- response_vec = model(text=response, pooling=pooling)
- score = cos_sim(prompt_vec, response_vec).item()
- return score
-
- df['originality'] = df.apply(lambda x: 1 - get_cos_sim(x['prompt'], x['response']), axis=1)
- return df
-
-
-def p1_flexibility(df: pd.DataFrame, model_name: str, pooling: str) -> pd.DataFrame:
- """
- group-wise
- :param df:
- :param model_name:
- :return:
- """
- assert 'prompt' in df.columns
- assert 'response' in df.columns
- assert 'id' in df.columns
- model = ModelWithPooling(model_name)
-
- def get_flexibility(responses: List[str]) -> float:
- responses_vec = [model(text=_, pooling=pooling) for _ in responses]
- score = 0
- for i in range(len(responses_vec) - 1):
- score += 1 - cos_sim(responses_vec[i], responses_vec[i + 1]).item()
- return score
-
- df_out = df.groupby(by=['id', 'prompt']) \
- .agg({'id': 'first', 'prompt': 'first', 'response': get_flexibility}) \
- .rename(columns={'response': 'flexibility'}) \
- .reset_index(drop=True)
- return df_out
-
-
-if __name__ == '__main__':
- _df_input = pd.read_csv('data/tmp/example_3.csv')
-    # 'mean' is an assumed example value; use a pooling strategy supported by ModelWithPooling
-    _df_0 = p0_originality(_df_input, 'paraphrase-multilingual-MiniLM-L12-v2', pooling='mean')
-    _df_1 = p1_flexibility(_df_input, 'paraphrase-multilingual-MiniLM-L12-v2', pooling='mean')
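
To make the expected input shape for these two scorers concrete, here is a small, hedged sketch: a long-format table with `id`, `prompt`, and `response` columns, scored with the same sentence-transformers model name used in the `__main__` block above. The import path, the example sentences, and the `pooling="mean"` value are assumptions; use whatever strategies `ModelWithPooling` actually supports.

```python
# Hedged example input for p0_originality / p1_flexibility (column names taken from the asserts above).
import pandas as pd

from utils.pipeline import p0_originality, p1_flexibility  # hypothetical import path

df = pd.DataFrame(
    {
        "id": [1, 1, 2],
        "prompt": ["brick", "brick", "paperclip"],
        "response": ["build a tiny oven", "use it as a bookend", "reset a router"],
    }
)

# Row-wise: originality = 1 - cosine similarity between prompt and response embeddings.
scored = p0_originality(df, "paraphrase-multilingual-MiniLM-L12-v2", pooling="mean")

# Group-wise: flexibility = summed dissimilarity between consecutive responses per (id, prompt).
flex = p1_flexibility(df, "paraphrase-multilingual-MiniLM-L12-v2", pooling="mean")

print(scored[["id", "originality"]])
print(flex[["id", "flexibility"]])
```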
diff --git a/spaces/flax-community/spanish-image-captioning/model/flax_clip_vision_marian/__init__.py b/spaces/flax-community/spanish-image-captioning/model/flax_clip_vision_marian/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/florim/MedGPT/ui/app.py b/spaces/florim/MedGPT/ui/app.py
deleted file mode 100644
index d7dbd31e901969d090292215935bdbc3d9d75e37..0000000000000000000000000000000000000000
--- a/spaces/florim/MedGPT/ui/app.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import gradio as gr
-import utils
-from api import AutoAPI, get_openai_api_key
-import os, shutil
-import json
-
-FILE_DIR = os.path.dirname(os.path.abspath(__file__))
-OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace")
-if not os.path.exists(OUTPUT_DIR):
- os.mkdir(OUTPUT_DIR)
-
-CSS = """
-#chatbot {font-family: monospace;}
-#files .generating {display: none;}
-#files .min {min-height: 0px;}
-"""
-
-with gr.Blocks(css=CSS) as app:
- with gr.Column() as setup_pane:
- gr.Markdown(f"""# Auto-GPT
- 1. Duplicate this Space: This will **NOT** work without duplication!
- 2. Enter your OpenAI API Key below.
- """)
- with gr.Row():
- open_ai_key = gr.Textbox(
- value=get_openai_api_key(),
- label="OpenAI API Key",
- type="password",
- )
- gr.Markdown(
- "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page."
- )
- with gr.Row():
- ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT")
- ai_role = gr.Textbox(
- label="AI Role",
- placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.",
- )
- top_5_goals = gr.Dataframe(
- row_count=(5, "fixed"),
- col_count=(1, "fixed"),
- headers=["AI Goals - Enter up to 5"],
- type="array"
- )
- start_btn = gr.Button("Start", variant="primary")
- with open(os.path.join(FILE_DIR, "examples.json"), "r") as f:
- example_values = json.load(f)
- gr.Examples(
- example_values,
- [ai_name, ai_role, top_5_goals],
- )
- with gr.Column(visible=False) as main_pane:
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot(elem_id="chatbot")
- with gr.Row():
- yes_btn = gr.Button("Yes", variant="primary", interactive=False)
- consecutive_yes = gr.Slider(
- 1, 10, 1, step=1, label="Consecutive Yes", interactive=False
- )
- custom_response = gr.Textbox(
- label="Custom Response",
- placeholder="Press 'Enter' to Submit.",
- interactive=False,
- )
- with gr.Column(scale=1):
- gr.HTML(
-                    lambda: f"""
-                    Generated Files
-                    {utils.format_directory(OUTPUT_DIR)}
-                    """, every=3, elem_id="files"
- )
- download_btn = gr.Button("Download All Files")
-
- chat_history = gr.State([[None, None]])
- api = gr.State(None)
-
- def start(open_ai_key, ai_name, ai_role, top_5_goals):
- auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals)
- return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api
-
- def bot_response(chat, api):
- messages = []
- for message in api.get_chatbot_response():
- messages.append(message)
- chat[-1][1] = "\n".join(messages) + "..."
- yield chat
- chat[-1][1] = "\n".join(messages)
- yield chat
-
- def send_message(count, chat, api, message="Y"):
- if message != "Y":
- count = 1
- for i in range(count):
- chat.append([message, None])
- yield chat, count - i
- api.send_message(message)
- for updated_chat in bot_response(chat, api):
- yield updated_chat, count - i
-
- def activate_inputs():
- return {
- yes_btn: gr.Button.update(interactive=True),
- consecutive_yes: gr.Slider.update(interactive=True),
- custom_response: gr.Textbox.update(interactive=True),
- }
-
- def deactivate_inputs():
- return {
- yes_btn: gr.Button.update(interactive=False),
- consecutive_yes: gr.Slider.update(interactive=False),
- custom_response: gr.Textbox.update(interactive=False),
- }
-
- start_btn.click(
- start,
- [open_ai_key, ai_name, ai_role, top_5_goals],
- [setup_pane, main_pane, api],
- ).then(bot_response, [chat_history, api], chatbot).then(
- activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- )
-
- yes_btn.click(
- deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- ).then(
- send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes]
- ).then(
- activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- )
- custom_response.submit(
- deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- ).then(
- send_message,
- [consecutive_yes, chat_history, api, custom_response],
- [chatbot, consecutive_yes],
- ).then(
- activate_inputs, None, [yes_btn, consecutive_yes, custom_response]
- )
-
- def download_all_files():
- shutil.make_archive("outputs", "zip", OUTPUT_DIR)
-
- download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS)
-
-app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR])
diff --git a/spaces/geniusguy777/Face_Recognition/style.css b/spaces/geniusguy777/Face_Recognition/style.css
deleted file mode 100644
index ec3ee34e87dd302756e8746fe264d70f4f454454..0000000000000000000000000000000000000000
--- a/spaces/geniusguy777/Face_Recognition/style.css
+++ /dev/null
@@ -1,7 +0,0 @@
-h1 {
- text-align: center;
-}
-
-#content_align {
- text-align: center;
-}
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/exp/upernet_global_small/test_config_w32.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/exp/upernet_global_small/test_config_w32.py
deleted file mode 100644
index 3d9e06f029e46c14cb9ddb39319cabe86fef9b44..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/exp/upernet_global_small/test_config_w32.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = [
- '../../configs/_base_/models/upernet_uniformer.py',
- '../../configs/_base_/datasets/ade20k.py',
- '../../configs/_base_/default_runtime.py',
- '../../configs/_base_/schedules/schedule_160k.py'
-]
-model = dict(
- backbone=dict(
- type='UniFormer',
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- drop_path_rate=0.25,
- windows=True,
- hybrid=False,
- window_size=32
- ),
- decode_head=dict(
- in_channels=[64, 128, 320, 512],
- num_classes=150
- ),
- auxiliary_head=dict(
- in_channels=320,
- num_classes=150
- ))
-
-# AdamW optimizer, no weight decay for position embedding & layer norm in backbone
-optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-
-lr_config = dict(_delete_=True, policy='poly',
- warmup='linear',
- warmup_iters=1500,
- warmup_ratio=1e-6,
- power=1.0, min_lr=0.0, by_epoch=False)
-
-data=dict(samples_per_gpu=2)
\ No newline at end of file
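
Since the config above only declares the model, optimizer, and schedule, here is a brief, hedged sketch of how such a test config is typically consumed through the MMSegmentation 0.x API that UniFormer builds on. The checkpoint path and demo image are placeholders, and the `_base_` entries mean the config must be loaded from the repository layout it references.

```python
# Hedged sketch: inference with the UperNet + UniFormer test config above (MMSegmentation 0.x API).
from mmseg.apis import inference_segmentor, init_segmentor

config_file = "exp/upernet_global_small/test_config_w32.py"
checkpoint_file = "upernet_global_small.pth"  # placeholder path to pretrained UniFormer weights

model = init_segmentor(config_file, checkpoint_file, device="cuda:0")
result = inference_segmentor(model, "demo.png")  # list with one HxW array of ADE20K class ids (150 classes)
```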
diff --git a/spaces/gorkemgoknar/movie_chat_gpt_yourtts_fileinput/app.py b/spaces/gorkemgoknar/movie_chat_gpt_yourtts_fileinput/app.py
deleted file mode 100644
index f9c1cbb6c519f610f0b3f4b402da5f48c7ccb5c6..0000000000000000000000000000000000000000
--- a/spaces/gorkemgoknar/movie_chat_gpt_yourtts_fileinput/app.py
+++ /dev/null
@@ -1,205 +0,0 @@
-import gradio as gr
-import random
-import torch
-from transformers import AutoConfig, AutoTokenizer, AutoModelWithLMHead
-from transformers import GPT2Tokenizer, GPT2LMHeadModel
-from itertools import chain
-
-import os
-
-import tempfile
-from typing import Optional
-from TTS.config import load_config
-import numpy as np
-from TTS.utils.manage import ModelManager
-from TTS.utils.synthesizer import Synthesizer
-
-#emotion_tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-emotion")
-#emotion_model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-emotion")
-
-def get_emotion(text):
-    input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')
- output = model.generate(input_ids=input_ids,max_length=2)
- dec = [tokenizer.decode(ids) for ids in output]
- label = dec[0]
- return label.split()[1]
-
-
-config = AutoConfig.from_pretrained('gorkemgoknar/gpt2chatbotenglish')
-model = GPT2LMHeadModel.from_pretrained('gorkemgoknar/gpt2chatbotenglish', config=config)
-
-tokenizer = GPT2Tokenizer.from_pretrained('gorkemgoknar/gpt2chatbotenglish')
-tokenizer.model_max_length = 1024
-
-#Dynamic Temperature
-#See experiment https://www.linkedin.com/pulse/ai-goes-job-interview-g%25C3%25B6rkem-g%25C3%25B6knar
-
-base_temperature = 1.2
-dynamic_temperature_range = 0.15
-
-rand_range = random.uniform(-1 * dynamic_temperature_range , dynamic_temperature_range )
-temperature = base_temperature + rand_range
-
-SPECIAL_TOKENS = ["<bos>", "<eos>", "<speaker1>", "<speaker2>", "<pad>"]  # token names assumed from the unpacking order below
-
-#See document for experiment https://www.linkedin.com/pulse/ai-goes-job-interview-g%C3%B6rkem-g%C3%B6knar/
-
-def get_chat_response(name,history=[], input_txt = "Hello , what is your name?"):
-
- ai_history = history.copy()
-
- #ai_history.append(input_txt)
- ai_history_e = [tokenizer.encode(e) for e in ai_history]
-
- personality = "My name is " + name
-
- bos, eos, speaker1, speaker2 = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS[:-1])
-
- #persona first, history next, input text must be at the end
- #[[bos, persona] , [history] , [input]]
- sequence = [[bos] + tokenizer.encode(personality)] + ai_history_e + [tokenizer.encode(input_txt)]
- ##[[bos, persona] , [speaker1 .., speakser2 .., speaker1 ... speaker2 ... , [input]]
- sequence = [sequence[0]] + [[speaker2 if (len(sequence)-i) % 2 else speaker1] + s for i, s in enumerate(sequence[1:])]
-
- sequence = list(chain(*sequence))
-
- #bot_input_ids = tokenizer.encode(personality + tokenizer.eos_token + input_txt + tokenizer.eos_token , return_tensors='pt')
- sequence_len = len(sequence)
-
- #optimum response and speed
- chat_history_ids = model.generate(
- torch.tensor(sequence).unsqueeze(0), max_length=50,
- pad_token_id=tokenizer.eos_token_id,
- no_repeat_ngram_size=3,
- do_sample=True,
- top_k=60,
- top_p=0.8,
- temperature = 1.3
- )
- out_str = tokenizer.decode(chat_history_ids[0][sequence_len:], skip_special_tokens=True)
- #out_str = tokenizer.decode(chat_history_ids[:, sequence.shape[-1]:][0], skip_special_tokens=False)
- return out_str
-
-##you can use anyone from below
-'''
-| Macleod | Moran | Brenda | Ramirez | Peter Parker | Quentin Beck | Andy
-| Red | Norton | Willard | Chief | Chef | Kilgore | Kurtz | Westley | Buttercup
-| Vizzini | Fezzik | Inigo | Man In Black | Taylor | Zira | Zaius | Cornelius
-| Bud | Lindsey | Hippy | Erin | Ed | George | Donna | Trinity | Agent Smith
-| Morpheus | Neo | Tank | Meryl | Truman | Marlon | Christof | Stromboli | Bumstead
-| Schreber | Walker | Korben | Cornelius | Loc Rhod | Anakin | Obi-Wan | Palpatine
-| Padme | Superman | Luthor | Dude | Walter | Donny | Maude | General | Starkiller
-| Indiana | Willie | Short Round | John | Sarah | Terminator | Miller | Sarge | Reiben
-| Jackson | Upham | Chuckie | Will | Lambeau | Sean | Skylar | Saavik | Spock
-| Kirk | Bones | Khan | Kirk | Spock | Sybok | Scotty | Bourne | Pamela | Abbott
-| Nicky | Marshall | Korshunov | Troy | Vig | Archie Gates | Doc | Interrogator
-| Ellie | Ted | Peter | Drumlin | Joss | Macready | Childs | Nicholas | Conrad
-| Feingold | Christine | Adam | Barbara | Delia | Lydia | Cathy | Charles | Otho
-| Schaefer | Han | Luke | Leia | Threepio | Vader | Yoda | Lando | Elaine | Striker
-| Dr. Rumack | Kramer | David | Saavik | Kirk | Kruge | Holden | Deckard | Rachael
-| Batty | Sebastian | Sam | Frodo | Pippin | Gandalf | Kay | Edwards | Laurel
-| Edgar | Zed | Jay | Malloy | Plissken | Steve Rogers | Tony Stark | Scott Lang
-| Bruce Banner | Bruce | Edward | Two-Face | Batman | Chase | Alfred | Dick
-| Riddler | Din Djarin | Greef Karga | Kuiil | Ig-11 | Cara Dune | Peli Motto
-| Toro Calican | Ripley | Meredith | Dickie | Marge | Peter | Lambert | Kane
-| Dallas | Ripley | Ash | Parker | Threepio | Luke | Leia | Ben | Han | Common Bob
-| Common Alice | Jack | Tyler | Marla | Dana | Stantz | Venkman | Spengler | Louis
-| Fry | Johns | Riddick | Kirk | Decker | Spock | "Ilia | Indy | Belloq | Marion
-| Brother | Allnut | Rose | Qui-Gon | Jar Jar
-'''
-
-MODEL_NAME= "tts_models/multilingual/multi-dataset/your_tts"
-
-
-
-def greet(character,your_voice,message,history):
-
- #gradios set_state/get_state had problems on embedded html!
- history = history or {"character": character, "message_history" : [] }
- #gradios set_state/get_state does not persist session for now using global
- #global history
-
- if history["character"] != character:
- #switching character
- history = {"character": character, "message_history" : [] }
-
-
- response = get_chat_response(character,history=history["message_history"],input_txt=message)
- os.system('tts --text "'+response+'" --model_name tts_models/multilingual/multi-dataset/your_tts --speaker_wav '+your_voice+' --language_idx "en"')
-
- history["message_history"].append((message, response))
-
- #emotion = get_emotion(response)
-
-    html = "<div class='chatbox'>"
-    for user_msg, resp_msg in history["message_history"]:
-        html += f"<div class='user_msg'>You: {user_msg}</div>"
-        html += f"<div class='resp_msg'>{character}: {resp_msg}</div>"
-    html += "</div>"
-
- return html,history,"tts_output.wav"
-
-
-def greet_textonly(character,message,history):
-
- #gradios set_state/get_state had problems on embedded html!
- history = history or {"character": character, "message_history" : [] }
- #gradios set_state/get_state does not persist session for now using global
- #global history
-
- if history["character"] != character:
- #switching character
- history = {"character": character, "message_history" : [] }
-
-
- response = get_chat_response(character,history=history["message_history"],input_txt=message)
-
- history["message_history"].append((message, response))
-
- #emotion = get_emotion(response)
-
-    html = "<div class='chatbox'>"
-    for user_msg, resp_msg in history["message_history"]:
-        html += f"<div class='user_msg'>You: {user_msg}</div>"
-        html += f"<div class='resp_msg'>{character}: {resp_msg}</div>"
-    html += "</div>"
-
- return html,history
-
-
-personality_choices = ["Gandalf", "Riddick", "Macleod", "Morpheus", "Neo","Spock","Vader","Indy"]
-
-examples= ["Gandalf", "What is your name?"]
-
-css="""
- .chatbox {display:flex;flex-direction:column}
- .user_msg, .resp_msg {padding:4px;margin-bottom:4px;border-radius:4px;width:80%}
- .user_msg {background-color:cornflowerblue;color:white;align-self:start}
- .resp_msg {background-color:lightgray;align-self:self-end}
-"""
-
-
-#some selected ones are in for demo use
-personality_choices = ["Gandalf", "Riddick", "Macleod", "Morpheus", "Neo","Spock","Vader","Indy", "Ig-11","Threepio","Tony Stark","Batman","Vizzini"]
-title = "Movie Chatbot with Coqui YourTTS - File Input"
-description = "Chat with your favorite movie characters, making characters voice like you. Test it out in metayazar.com/chatbot for more movie/character options. See Coqui Space for more TTS models https://huggingface.co/spaces/coqui/CoquiTTS"
-article = "
"
-
-#History not implemented in this demo, use metayazar.com/chatbot for a movie and character dropdown chat interface
-##interface = gr.Interface(fn=greet, inputs=[gr.inputs.Dropdown(personality_choices) ,"text"], title=title, description=description, outputs="text")
-
-examples=[['Gandalf','dragon.wav','Who are you sir?',{}]]
-
-history = {"character": "None", "message_history" : [] }
-
-interface_file= gr.Interface(fn=greet,
- inputs=[gr.inputs.Dropdown(personality_choices),
- gr.inputs.Audio(type="filepath"),
- "text",
- "state"],
- outputs=["html","state",gr.outputs.Audio(type="filepath")],
- css=css, title=title, description=description,article=article )
-
-
-if __name__ == "__main__":
- interface_file.launch()
\ No newline at end of file
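
As a small aside, the dialogue function in the file above can be exercised directly, without the Gradio UI or the TTS call. The sketch below assumes the module has been imported (or run interactively) so that `get_chat_response`, the GPT-2 model, and the tokenizer are already loaded; the character name and questions are illustrative.

```python
# Hedged sketch: calling the chat function above directly, skipping Gradio and TTS.
history = []  # previous utterances as plain strings, per get_chat_response's history handling
reply = get_chat_response("Gandalf", history=history, input_txt="Who are you, sir?")
print(reply)

history.extend(["Who are you, sir?", reply])  # keep context for the next turn
reply2 = get_chat_response("Gandalf", history=history, input_txt="Where are we going?")
print(reply2)
```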
diff --git a/spaces/gotiQspiryo/whisper-ui/Naruto-Shippuden-Ultimate-Ninja-Storm-5-Psp-Iso-23.md b/spaces/gotiQspiryo/whisper-ui/Naruto-Shippuden-Ultimate-Ninja-Storm-5-Psp-Iso-23.md
deleted file mode 100644
index f127d7ad53f45b7119cff043f2bc203ef16a2640..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/Naruto-Shippuden-Ultimate-Ninja-Storm-5-Psp-Iso-23.md
+++ /dev/null
@@ -1,70 +0,0 @@
-## naruto shippuden ultimate ninja storm 5 psp iso 23
-
-
-
-
-
-
-
-
-
-**LINK --->>> [https://miimms.com/2txSTe](https://miimms.com/2txSTe)**
-
-
-
-
-
-
-
-
-
-
-
- Here is a possible title and article for the keyword "naruto shippuden ultimate ninja storm 5 psp iso 23". I have used code blocks to encapsulate the html formatting.
-
-# Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23: How to Download and Play
-
-
-
-If you are a fan of Naruto Shippuden, you might be interested in playing Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23. This is a modified version of the original Naruto Shippuden Ultimate Ninja Storm 5 game that was released for PlayStation 4 and Xbox One in 2016. The PSP ISO 23 version has some new features and improvements, such as new characters, new stages, new modes, and better graphics.
-
-
-
-In this article, we will show you how to download and play Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23 on your Android device. You will need a PSP emulator app and a file manager app to do this. Follow these steps:
-
-
-
-1. Download the Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23 file from a reliable source. You can search for it on Google or use the link below. The file size is about 1.2 GB, so make sure you have enough space on your device.
-
-2. Download the PPSSPP Gold emulator app from the Google Play Store or use the link below. This is a paid app that costs $4.99, but it is worth it because it has better performance and features than the free version.
-
-3. Install the PPSSPP Gold app on your device and open it.
-
-4. Locate the Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23 file on your device using a file manager app. You can use any file manager app that you like, such as ES File Explorer or ZArchiver.
-
-5. Tap on the Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23 file and select "Open with PPSSPP". The game will start loading on the emulator.
-
-6. Enjoy playing Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23 on your Android device. You can customize the settings and controls of the emulator according to your preference.
-
-
-
-Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23 is a fun and exciting game that will keep you entertained for hours. You can play as your favorite characters from the Naruto Shippuden series and experience their epic battles and adventures. You can also play with your friends online or offline using the multiplayer mode. If you are a Naruto fan, you should definitely try this game.
-
-Here are a few more paragraphs for the article. I have used code blocks to encapsulate the html formatting.
-
-Naruto Shippuden Ultimate Ninja Storm 5 PSP ISO 23 is a modified version of Naruto Shippuden Narutimate Accel 3, which is a 2D fighting game created by CyberConnect2 and published by BANDAI NAMACO Entertainment. The game is based on the Naruto anime series and features many characters and stages from the show. You can play as Naruto, Sasuke, Sakura, Kakashi, and many more in various modes and scenarios.
-
-
-
-The game has several features that make it different from the original Naruto Shippuden Narutimate Accel 3. For example, the game has a new front, a new battle selection menu, and a new character selection menu. The game also has improved graphics and textures that make it look more realistic and detailed. The game also has some new characters and stages that were not available in the original game.
-
-
-
-Some of the new characters include Boruto Uzumaki, Sarada Uchiha, Mitsuki, Kawaki, Momoshiki Otsutsuki, Kinshiki Otsutsuki, and Urashiki Otsutsuki. These are characters from the Boruto: Naruto Next Generations anime series, which is a sequel to Naruto Shippuden. Some of the new stages include Konoha Village, Hidden Leaf Forest, Hidden Sand Village, Hidden Mist Village, and Hidden Cloud Village. These are locations from the Naruto world that have different environments and effects.
-
- dfd1c89656
-
-
-
-
-
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download Being Heumann An Unrepentant Memoir by Judith Heumann (.ePUB) - A Powerful and Honest Account of Her Life and Struggles.md b/spaces/gotiQspiryo/whisper-ui/examples/Download Being Heumann An Unrepentant Memoir by Judith Heumann (.ePUB) - A Powerful and Honest Account of Her Life and Struggles.md
deleted file mode 100644
index adeda478d91938509761aed7c464b889dc7fce40..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Download Being Heumann An Unrepentant Memoir by Judith Heumann (.ePUB) - A Powerful and Honest Account of Her Life and Struggles.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Download Being Heumann: An Unrepentant Memoir by Judith Heumann (.ePUB)
-
-Auslogics Internet Optimizer 2.0 .6.55 Portable Motoimei Frank Zeitler V0.0.1 Rar deep freeze melt for 18 · user avatar micmedsdogbi. Download music for free and without registration in good quality, with convenient online listening - thousands of top songs are waiting for you on the MzMuz portal.
-Download music for every taste on our website, download any music from vk com and other social networks on our website for free.
-On our site top-muzon you can download for free or listen online the newest and most popular songs of 2017-2019 and Linux utorrent deep freeze melt for 18 . And also: - Improves.
-Internet filter for Windows Auslogics Internet Optimizer 2.0.6.55 Portable free download.
-Free download » Internet » Auslogics Internet Optimizer 2.0.
-For Windows, Mac and Linux.
-Download Auslogics Internet Optimizer 2.0.
-For Windows.
-AusLogics Internet Optimizer is a free portable utility for optimizing your Internet connection and improving stability 8a78ff9644
-
-
-
diff --git a/spaces/innnky/nyaru4.0/vdecoder/__init__.py b/spaces/innnky/nyaru4.0/vdecoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Brian Lara International Cricket 2007 Crack Gamecopyworld.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Brian Lara International Cricket 2007 Crack Gamecopyworld.md
deleted file mode 100644
index 91c9c7c234440ec8ad10e5d0256def7df09de654..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Brian Lara International Cricket 2007 Crack Gamecopyworld.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
Brian Lara International Cricket 2007 Crack Gamecopyworld: How to Download and Play the Game for Free
-
If you are a fan of cricket and want to enjoy the thrill of playing as one of the legends of the sport, you might be interested in Brian Lara International Cricket 2007. This game is a simulation of cricket that features realistic graphics, gameplay, and commentary. You can choose from various modes such as test matches, one-day internationals, tournaments, and challenges. You can also customize your own team and players, and compete with other players online.
-
Brian Lara International Cricket 2007 Crack Gamecopyworld
However, if you don't want to spend money on buying the game or don't have a CD-ROM drive to install it, you might be looking for a way to download and play the game for free. That's where Brian Lara International Cricket 2007 Crack Gamecopyworld comes in. This is a website that provides cracked versions of various games, including Brian Lara International Cricket 2007. By downloading and installing the crack from this website, you can bypass the copy protection and play the game without any restrictions.
-
How to Download and Install Brian Lara International Cricket 2007 Crack Gamecopyworld
-
Before you download and install the crack from Brian Lara International Cricket 2007 Crack Gamecopyworld, you need to have the original game installed on your PC. You can either buy it from an online store or download it from a torrent site. Once you have the game installed, follow these steps to download and install the crack:
Click on the link that says "Brian Lara International Cricket 2007 v1.0 [ENGLISH] No-CD/Fixed EXE". This will take you to another page where you can download the crack file.
-
Click on the "Download" button and save the file to your PC.
-
Extract the file using a program such as WinRAR or 7-Zip.
-
Copy the extracted file (BLIC07.exe) and paste it into the folder where you installed the game (usually C:\Program Files\Codemasters\Brian Lara International Cricket 2007).
-
Replace the original file when prompted.
-
Run the game using the new file (BLIC07.exe) and enjoy!
-
-
Tips and Warnings
-
-
Make sure you have a good antivirus program on your PC before downloading and installing any crack files from Brian Lara International Cricket 2007 Crack Gamecopyworld or any other website. Some crack files may contain viruses or malware that can harm your PC or steal your personal information.
-
Do not update the game after installing the crack. Updating the game may overwrite the crack file and make the game unplayable.
-
Do not use the crack to play online. Playing online with a cracked version of the game may result in your account being banned or suspended by Codemasters or other players.
-
Downloading and installing cracked games is illegal and may violate the terms of service of Codemasters or other game developers. We do not condone or encourage piracy in any way. This article is for educational purposes only.
-
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Call Of Duty Modern Warfare 3 Psp Iso Download [WORK].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Call Of Duty Modern Warfare 3 Psp Iso Download [WORK].md
deleted file mode 100644
index 6ba5e23ae9b1ef98815bc2aa77d0437ac0d1c303..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Call Of Duty Modern Warfare 3 Psp Iso Download [WORK].md
+++ /dev/null
@@ -1,205 +0,0 @@
-
-
Call of Duty Modern Warfare 3 PSP Iso Download: How to Play the Best Shooter Game on Your Android Device
-
-
If you are a fan of the Call of Duty series, you probably know that the latest instalment, Modern Warfare 3, is one of the most popular and critically acclaimed games of all time. The game features a thrilling single-player campaign that takes you to various locations around the world, such as New York, London, and Paris, to stop a terrorist plot that threatens to destroy the world. The game also has a robust multiplayer mode that supports up to 10 players and offers various game types, such as Team Deathmatch and Domination. There is also a new cooperative mode called Survival, where you have to team up with other players to fight against waves of enemies.
-
-
But what if you don't have a PlayStation 3, Xbox 360, or Wii console to play the game? Don't worry, because there is a way to play Call of Duty Modern Warfare 3 on your Android device using a PSP emulator called PPSSPP. PPSSPP is a free and open-source emulator that allows you to run PSP games on your Android device with high-quality graphics and sound. You can download PPSSPP from the Google Play Store or from its official website.
To play Call of Duty Modern Warfare 3 on your Android device using PPSSPP, you will need two things: the PPSSPP app and the Call of Duty Modern Warfare 3 PSP iso file. The iso file is a compressed version of the game that can be read by the emulator. You can download the Call of Duty Modern Warfare 3 PSP iso file from various websites on the internet, such as IsoRomulator or IsoRoms. Make sure you download the file from a trusted source and scan it for viruses before opening it.
-
-
Once you have downloaded both the PPSSPP app and the Call of Duty Modern Warfare 3 PSP iso file, you can follow these steps to play the game on your Android device:
-
-
-
Install the PPSSPP app on your Android device and launch it.
-
Locate the Call of Duty Modern Warfare 3 PSP iso file on your device's storage and tap on it to load it into the emulator.
-
Adjust the settings of the emulator according to your device's specifications and preferences. You can change the graphics, sound, controls, and performance settings to optimize your gaming experience.
-
Start playing Call of Duty Modern Warfare 3 on your Android device using PPSSPP. You can use the virtual buttons on the screen or connect an external controller to enjoy the game.
-
-
-
That's it! You can now enjoy one of the best shooter games ever made on your Android device using PPSSPP. You can play both the single-player and multiplayer modes of Call of Duty Modern Warfare 3 with high-quality graphics and sound. You can also save your progress and resume it anytime you want. Playing Call of Duty Modern Warfare 3 on your Android device using PPSSPP is a great way to experience this amazing game without having to buy a console or a PC.
-
-
Tips and tricks for Call of Duty Modern Warfare 3 PSP Iso Download
-
-
Here are some tips and tricks that can help you get more out of the game:
-
-
Practice the game's controls and mechanics before jumping into the action. You can use the game's tutorial mode or play some offline matches to get familiar with the game's features and functions.
-
Choose the right weapons and equipment for your playstyle and situation. You can customize your loadout with various perks, weapons attachments, killstreaks and proficiencies that suit your preferences and needs. You can also switch your loadout during the game if you need to adapt to changing circumstances.
-
Use cover and stealth to your advantage. You can hide behind walls, cars, crates, and other objects to avoid enemy fire and surprise them with your attacks. You can also crouch, prone, or lean to reduce your visibility and accuracy.
-
Communicate and cooperate with your teammates. You can use the game's voice chat or text chat to coordinate your strategies and tactics with your fellow players. You can also mark enemies, objectives, and locations with the game's ping system.
-
Learn from your mistakes and improve your skills. You can watch the game's killcam or replay mode to see how you died or how you performed in the game. You can also check your stats and achievements to see your strengths and weaknesses.
-
-
-
These are some of the tips and tricks for playing Call of Duty Modern Warfare 3 on your Android device using PPSSPP. Follow them and you will quickly become a better player.
-
-
-
The reviews of Call of Duty Modern Warfare 3 PSP Iso Download
-
-
Call of Duty Modern Warfare 3 has received positive reviews from critics and players alike. The game has been praised for its immersive and thrilling single-player campaign, its addictive and diverse multiplayer mode, its new and improved cooperative mode, and its high-quality graphics and sound. The game has also been criticized for its lack of innovation, its short and linear campaign, its repetitive and scripted gameplay, and its technical issues and bugs. Here are some of the reviews of Call of Duty Modern Warfare 3 from various sources:
-
-
-
IGN gave the game a score of 9 out of 10, saying that "Modern Warfare 3 is an exceptional entry in the series, delivering a full package of campaign, cooperative, and multiplayer content. It might not have fixed all the problems from the previous games, but it balances the existing elements expertly, while adding a few new toys for extra fun."
-
GameSpot gave the game a score of 8.5 out of 10, saying that "Modern Warfare 3 iterates rather than innovates, so the fun you have is familiar. Fortunately, it's also utterly engrossing and immensely satisfying, giving fans another reason to rejoice in this busy shooter season."
-
Metacritic gave the game a score of 88 out of 100, based on 39 reviews from critics. The user score was 3.4 out of 10, based on 5,248 ratings from players. The critics praised the game's action-packed and cinematic campaign, its robust and varied multiplayer mode, its new and challenging cooperative mode, and its impressive graphics and sound. The players criticized the game's lack of innovation, its short and linear campaign, its repetitive and scripted gameplay, and its technical issues and bugs.
-
-
-
These are some of the reviews of Call of Duty Modern Warfare 3 from various sources. The game has received mostly positive feedback from critics and mixed feedback from players. The game is considered to be one of the best shooter games ever made, but also one of the most controversial ones.
-
-
-
Conclusion
-
-
Call of Duty Modern Warfare 3 is one of the best shooter games ever made. It features a thrilling single-player campaign that takes you to locations around the world, such as New York, London, Paris, Berlin, Prague, Dubai, Somalia, Sierra Leone, and Moscow, and a robust multiplayer mode that supports up to 10 players with game types such as Team Deathmatch, Domination, Kill Confirmed, Capture the Flag, Search and Destroy, Sabotage, Headquarters Pro, Demolition Pro, Team Defender Pro, and Drop Zone Pro. There is also a cooperative Survival Mode, in which you team up with another player or play solo to survive waves of increasingly difficult enemies.
-
-
If you don't have a console to play the game on, you can still play it on your Android device using the PSP emulator PPSSPP. PPSSPP is a free and open-source emulator that runs PSP games with high-quality graphics and sound, and you can download it from the Google Play Store or from its official website.
-
-
To play Call of Duty Modern Warfare 3 with PPSSPP, you need two things: the PPSSPP app and the Call of Duty Modern Warfare 3 PSP iso file. The iso file is a compressed version of the game that the emulator can read. You can download it from various websites, such as IsoRomulator or IsoRoms; make sure you use a trusted source and scan the file for viruses before opening it.
-
-
In this article, we have told you everything you need to know about Call of Duty Modern Warfare 3 and its features, given you some tips and tricks for playing it with PPSSPP, discussed some of the challenges you may face, and shared reviews from various sources.
-
-
We hope this article has helped you learn how to play Call of Duty Modern Warfare 3 on your Android device using PPSSPP. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Bandicut 3.1.5.511 Crack Serial Key Full Version Download _VERIFIED_.md b/spaces/inreVtussa/clothingai/Examples/Bandicut 3.1.5.511 Crack Serial Key Full Version Download _VERIFIED_.md
deleted file mode 100644
index 899156edd6a43d90370949fce3650e98e643245f..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Bandicut 3.1.5.511 Crack Serial Key Full Version Download _VERIFIED_.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
Bandicut 3.1.5.511 Crack Serial Key Full Version Download
-
-Do you like this software? Then you can download it from here.
-
-GoToMeeting Crack has been updated by applying the latest patches and fixes of the software. You can download GoToMeeting Crack, with serial keygen included and fully working, directly from the link below.
-
-Download GoToMeeting Crack
-
-GoToMeeting 3.6.6 Crack is a desktop application used to interact with people through video conferencing. If you are running a Windows-based system, you can download the English version of this desktop program from the link below.
-
-Bandicut 3.0.4.461 Crack 2020
-
-Bandicut 3.0.4.461 Crack has been released by Game It Media. If you have a Windows-based system, you can download the English version of this desktop program from the link below.
-
-Bandicut 2.0.3.567 Serial Key With Crack Full Working 2020
-
-Bandicut 2.0.3.567 Serial Key activates the English version of the desktop application. You can download the Bandicut 2.0.3.567 Serial Key free and fully working. Bandicut itself is a video cutting and joining program.
-
-Bandicut 3.0.4.512 Crack Download
-
-Bandicut 3.0.4.512 Crack has been updated by applying the latest patches and fixes of the software. You can download Bandicut 3.0.4.512 Crack directly from the link and enjoy.
-
-Bandicut 3.1.5.511 Crack With Serial Key Full Torrent 2021
-
-Bandicut 3.1.5.511 Crack has been updated by applying the latest patches and fixes of the software. You can download Bandicut 3.1.5.511 Crack directly from the link and enjoy.
-
-Bandicut 3.6.1.636 Crack Download With Keygen Full Working 2020
-
-Bandicut 3.6.1.636 Crack has been updated by applying the latest patches and fixes of the game. Now download the game directly from the link and enjoy. You can 4fefd39f24
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/CardRecovery V5.30 Build 1206 Software 58.md b/spaces/inreVtussa/clothingai/Examples/CardRecovery V5.30 Build 1206 Software 58.md
deleted file mode 100644
index 4c0dca546bd5724a1e877f38bb0f50606199ed44..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/CardRecovery V5.30 Build 1206 Software 58.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Recover Lost Photos with CardRecovery v5.30 Build 1206 Software
-
If you have accidentally deleted, formatted or corrupted photos and videos on your memory card, you may be wondering how to get them back. Fortunately, there is a program that can help you recover your precious memories in minutes: CardRecovery v5.30 Build 1206 Software[^1^].
CardRecovery v5.30 Build 1206 Software is a leading photo recovery tool for memory cards used by digital cameras and phones[^2^]. It can effectively recover lost, deleted, corrupted or formatted photos and video files from various memory cards. It supports almost all memory card types, including SD Card, MicroSD, SDHC, CF Card, xD Picture Card, Memory Stick and more[^2^].
-
CardRecovery v5.30 Build 1206 Software is easy to use and has a user-friendly interface. You just need to follow these simple steps to recover your photos:
-
-
Download and install CardRecovery v5.30 Build 1206 Software from its official website[^2^].
-
Connect your memory card to your computer using a card reader or a USB cable.
-
Launch CardRecovery v5.30 Build 1206 Software and select your memory card from the list of drives.
-
Select the file types you want to recover (such as JPG, PNG, MOV, MP4, etc.) and click "Next".
-
Wait for the scanning process to complete. You can preview the recovered photos and videos before saving them.
-
Select a destination folder to save the recovered files and click "Save".
-
-
That's it! You have successfully recovered your lost photos with CardRecovery v5.30 Build 1206 Software. You can now enjoy your memories again.
CardRecovery v5.30 Build 1206 Software is not only a photo recovery tool but also a video recovery tool. It can recover video files from memory cards that are damaged, corrupted, formatted or inaccessible, in various formats such as AVI, MOV, MP4, 3GP and WMV, and from HD cameras, camcorders, mobile phones and other devices.
-
-
CardRecovery v5.30 Build 1206 Software is a safe and reliable software that does not write or modify any data on your memory card. It only performs read-only operations on your memory card to ensure the safety of your data. It also supports Windows 10/8/7/Vista/XP and Mac OS X operating systems.
-
CardRecovery v5.30 Build 1206 Software is a must-have software for anyone who uses memory cards to store photos and videos. It can help you recover your lost memories in minutes and save you from the frustration of losing your precious data. You can download CardRecovery v5.30 Build 1206 Software from its official website and try it for free.
CardRecovery v5.30 Build 1206 Software is a powerful and professional software that can recover photos and videos from various scenarios. Whether you have accidentally deleted your files, formatted your card, encountered a virus attack, pulled out your card while the camera was on, or experienced a power failure, CardRecovery v5.30 Build 1206 Software can help you restore your data. It can also recover photos and videos from corrupted or unreadable memory cards.
-
CardRecovery v5.30 Build 1206 Software is a fast and efficient software that can scan and recover your files in minutes. It has a smart scan technology that can find and recover your files even if they are not listed in the file system. It can also recover files from fragmented or partially overwritten memory cards. It can recover files of various sizes and resolutions, including high-resolution photos and HD videos.
-
CardRecovery v5.30 Build 1206 Software is a trusted and recommended software that has been used by millions of users around the world. It has received positive feedback and reviews from customers and experts alike. It has also won many awards and certifications from reputable organizations. It is compatible with all major brands of memory cards and cameras.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/iqovocn/ChuanhuChatGPT/assets/Kelpy-Codos.js b/spaces/iqovocn/ChuanhuChatGPT/assets/Kelpy-Codos.js
deleted file mode 100644
index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000
--- a/spaces/iqovocn/ChuanhuChatGPT/assets/Kelpy-Codos.js
+++ /dev/null
@@ -1,76 +0,0 @@
-// ==UserScript==
-// @name Kelpy Codos
-// @namespace https://github.com/Keldos-Li/Kelpy-Codos
-// @version 1.0.5
-// @author Keldos; https://keldos.me/
-// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially.
-// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22)
-// @license GPL-3.0
-// @grant none
-// ==/UserScript==
-
-(function () {
- 'use strict';
-
- function addCopyButton(pre) {
- var code = pre.querySelector('code');
- if (!code) {
- return; // do not add a copy button if no <code> element is found
- }
- var firstChild = code.firstChild;
- if (!firstChild) {
- return; // do not add a copy button if the <code> element has no child nodes
- }
- var button = document.createElement('button');
- button.textContent = '\uD83D\uDCCE'; // use the 📎 paperclip symbol as the "copy" button text
- button.style.position = 'relative';
- button.style.float = 'right';
- button.style.fontSize = '1em'; // optional: adjust the button size
- button.style.background = 'none'; // optional: remove the background color
- button.style.border = 'none'; // optional: remove the border
- button.style.cursor = 'pointer'; // optional: show a pointer cursor
- button.addEventListener('click', function () {
- var range = document.createRange();
- range.selectNodeContents(code);
- range.setStartBefore(firstChild); // set the range to start before the first child node
- var selection = window.getSelection();
- selection.removeAllRanges();
- selection.addRange(range);
-
- try {
- var success = document.execCommand('copy');
- if (success) {
- button.textContent = '\u2714';
- setTimeout(function () {
- button.textContent = '\uD83D\uDCCE'; // restore the "copy" button icon
- }, 2000);
- } else {
- button.textContent = '\u2716';
- }
- } catch (e) {
- console.error(e);
- button.textContent = '\u2716';
- }
-
- selection.removeAllRanges();
- });
- code.insertBefore(button, firstChild); // insert the button before the first child element
- }
-
- function handleNewElements(mutationsList, observer) {
- for (var mutation of mutationsList) {
- if (mutation.type === 'childList') {
- for (var node of mutation.addedNodes) {
- if (node.nodeName === 'PRE') {
- addCopyButton(node);
- }
- }
- }
- }
- }
-
- var observer = new MutationObserver(handleNewElements);
- observer.observe(document.documentElement, { childList: true, subtree: true });
-
- document.querySelectorAll('pre').forEach(addCopyButton);
-})();
diff --git a/spaces/jackli888/stable-diffusion-webui/modules/esrgan_model_arch.py b/spaces/jackli888/stable-diffusion-webui/modules/esrgan_model_arch.py
deleted file mode 100644
index bec5962b595cfc0e6c52d916275538c9c4252068..0000000000000000000000000000000000000000
--- a/spaces/jackli888/stable-diffusion-webui/modules/esrgan_model_arch.py
+++ /dev/null
@@ -1,464 +0,0 @@
-# this file is adapted from https://github.com/victorca25/iNNfer
-
-from collections import OrderedDict
-import math
-import functools
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-####################
-# RRDBNet Generator
-####################
-
-class RRDBNet(nn.Module):
- def __init__(self, in_nc, out_nc, nf, nb, nr=3, gc=32, upscale=4, norm_type=None,
- act_type='leakyrelu', mode='CNA', upsample_mode='upconv', convtype='Conv2D',
- finalact=None, gaussian_noise=False, plus=False):
- super(RRDBNet, self).__init__()
- n_upscale = int(math.log(upscale, 2))
- if upscale == 3:
- n_upscale = 1
-
- self.resrgan_scale = 0
- if in_nc % 16 == 0:
- self.resrgan_scale = 1
- elif in_nc != 4 and in_nc % 4 == 0:
- self.resrgan_scale = 2
-
- fea_conv = conv_block(in_nc, nf, kernel_size=3, norm_type=None, act_type=None, convtype=convtype)
- rb_blocks = [RRDB(nf, nr, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
- norm_type=norm_type, act_type=act_type, mode='CNA', convtype=convtype,
- gaussian_noise=gaussian_noise, plus=plus) for _ in range(nb)]
- LR_conv = conv_block(nf, nf, kernel_size=3, norm_type=norm_type, act_type=None, mode=mode, convtype=convtype)
-
- if upsample_mode == 'upconv':
- upsample_block = upconv_block
- elif upsample_mode == 'pixelshuffle':
- upsample_block = pixelshuffle_block
- else:
- raise NotImplementedError('upsample mode [{:s}] is not found'.format(upsample_mode))
- if upscale == 3:
- upsampler = upsample_block(nf, nf, 3, act_type=act_type, convtype=convtype)
- else:
- upsampler = [upsample_block(nf, nf, act_type=act_type, convtype=convtype) for _ in range(n_upscale)]
- HR_conv0 = conv_block(nf, nf, kernel_size=3, norm_type=None, act_type=act_type, convtype=convtype)
- HR_conv1 = conv_block(nf, out_nc, kernel_size=3, norm_type=None, act_type=None, convtype=convtype)
-
- outact = act(finalact) if finalact else None
-
- self.model = sequential(fea_conv, ShortcutBlock(sequential(*rb_blocks, LR_conv)),
- *upsampler, HR_conv0, HR_conv1, outact)
-
- def forward(self, x, outm=None):
- if self.resrgan_scale == 1:
- feat = pixel_unshuffle(x, scale=4)
- elif self.resrgan_scale == 2:
- feat = pixel_unshuffle(x, scale=2)
- else:
- feat = x
-
- return self.model(feat)
-
-
-class RRDB(nn.Module):
- """
- Residual in Residual Dense Block
- (ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks)
- """
-
- def __init__(self, nf, nr=3, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
- norm_type=None, act_type='leakyrelu', mode='CNA', convtype='Conv2D',
- spectral_norm=False, gaussian_noise=False, plus=False):
- super(RRDB, self).__init__()
- # This is for backwards compatibility with existing models
- if nr == 3:
- self.RDB1 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
- norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
- gaussian_noise=gaussian_noise, plus=plus)
- self.RDB2 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
- norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
- gaussian_noise=gaussian_noise, plus=plus)
- self.RDB3 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
- norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
- gaussian_noise=gaussian_noise, plus=plus)
- else:
- RDB_list = [ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
- norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
- gaussian_noise=gaussian_noise, plus=plus) for _ in range(nr)]
- self.RDBs = nn.Sequential(*RDB_list)
-
- def forward(self, x):
- if hasattr(self, 'RDB1'):
- out = self.RDB1(x)
- out = self.RDB2(out)
- out = self.RDB3(out)
- else:
- out = self.RDBs(x)
- return out * 0.2 + x
-
-
-class ResidualDenseBlock_5C(nn.Module):
- """
- Residual Dense Block
- The core module of paper: (Residual Dense Network for Image Super-Resolution, CVPR 18)
- Modified options that can be used:
- - "Partial Convolution based Padding" arXiv:1811.11718
- - "Spectral normalization" arXiv:1802.05957
- - "ICASSP 2020 - ESRGAN+ : Further Improving ESRGAN" N. C.
- {Rakotonirina} and A. {Rasoanaivo}
- """
-
- def __init__(self, nf=64, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
- norm_type=None, act_type='leakyrelu', mode='CNA', convtype='Conv2D',
- spectral_norm=False, gaussian_noise=False, plus=False):
- super(ResidualDenseBlock_5C, self).__init__()
-
- self.noise = GaussianNoise() if gaussian_noise else None
- self.conv1x1 = conv1x1(nf, gc) if plus else None
-
- self.conv1 = conv_block(nf, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
- norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
- spectral_norm=spectral_norm)
- self.conv2 = conv_block(nf+gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
- norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
- spectral_norm=spectral_norm)
- self.conv3 = conv_block(nf+2*gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
- norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
- spectral_norm=spectral_norm)
- self.conv4 = conv_block(nf+3*gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
- norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
- spectral_norm=spectral_norm)
- if mode == 'CNA':
- last_act = None
- else:
- last_act = act_type
- self.conv5 = conv_block(nf+4*gc, nf, 3, stride, bias=bias, pad_type=pad_type,
- norm_type=norm_type, act_type=last_act, mode=mode, convtype=convtype,
- spectral_norm=spectral_norm)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv2(torch.cat((x, x1), 1))
- if self.conv1x1:
- x2 = x2 + self.conv1x1(x)
- x3 = self.conv3(torch.cat((x, x1, x2), 1))
- x4 = self.conv4(torch.cat((x, x1, x2, x3), 1))
- if self.conv1x1:
- x4 = x4 + x2
- x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
- if self.noise:
- return self.noise(x5.mul(0.2) + x)
- else:
- return x5 * 0.2 + x
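-# Note: RRDB and ResidualDenseBlock_5C both scale the residual branch by 0.2 before
-# adding the skip connection; this residual scaling is the trick ESRGAN uses to keep
-# very deep generator stacks numerically stable during training.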
-
-
-####################
-# ESRGANplus
-####################
-
-class GaussianNoise(nn.Module):
- def __init__(self, sigma=0.1, is_relative_detach=False):
- super().__init__()
- self.sigma = sigma
- self.is_relative_detach = is_relative_detach
- self.noise = torch.tensor(0, dtype=torch.float)
-
- def forward(self, x):
- if self.training and self.sigma != 0:
- self.noise = self.noise.to(x.device)
- scale = self.sigma * x.detach() if self.is_relative_detach else self.sigma * x
- sampled_noise = self.noise.repeat(*x.size()).normal_() * scale
- x = x + sampled_noise
- return x
-
-def conv1x1(in_planes, out_planes, stride=1):
- return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
-
-
-####################
-# SRVGGNetCompact
-####################
-
-class SRVGGNetCompact(nn.Module):
- """A compact VGG-style network structure for super-resolution.
- This class is copied from https://github.com/xinntao/Real-ESRGAN
- """
-
- def __init__(self, num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu'):
- super(SRVGGNetCompact, self).__init__()
- self.num_in_ch = num_in_ch
- self.num_out_ch = num_out_ch
- self.num_feat = num_feat
- self.num_conv = num_conv
- self.upscale = upscale
- self.act_type = act_type
-
- self.body = nn.ModuleList()
- # the first conv
- self.body.append(nn.Conv2d(num_in_ch, num_feat, 3, 1, 1))
- # the first activation
- if act_type == 'relu':
- activation = nn.ReLU(inplace=True)
- elif act_type == 'prelu':
- activation = nn.PReLU(num_parameters=num_feat)
- elif act_type == 'leakyrelu':
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the body structure
- for _ in range(num_conv):
- self.body.append(nn.Conv2d(num_feat, num_feat, 3, 1, 1))
- # activation
- if act_type == 'relu':
- activation = nn.ReLU(inplace=True)
- elif act_type == 'prelu':
- activation = nn.PReLU(num_parameters=num_feat)
- elif act_type == 'leakyrelu':
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the last conv
- self.body.append(nn.Conv2d(num_feat, num_out_ch * upscale * upscale, 3, 1, 1))
- # upsample
- self.upsampler = nn.PixelShuffle(upscale)
-
- def forward(self, x):
- out = x
- for i in range(0, len(self.body)):
- out = self.body[i](out)
-
- out = self.upsampler(out)
- # add the nearest upsampled image, so that the network learns the residual
- base = F.interpolate(x, scale_factor=self.upscale, mode='nearest')
- out += base
- return out
-
-
-####################
-# Upsampler
-####################
-
-class Upsample(nn.Module):
- r"""Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
- The input data is assumed to be of the form
- `minibatch x channels x [optional depth] x [optional height] x width`.
- """
-
- def __init__(self, size=None, scale_factor=None, mode="nearest", align_corners=None):
- super(Upsample, self).__init__()
- if isinstance(scale_factor, tuple):
- self.scale_factor = tuple(float(factor) for factor in scale_factor)
- else:
- self.scale_factor = float(scale_factor) if scale_factor else None
- self.mode = mode
- self.size = size
- self.align_corners = align_corners
-
- def forward(self, x):
- return nn.functional.interpolate(x, size=self.size, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners)
-
- def extra_repr(self):
- if self.scale_factor is not None:
- info = 'scale_factor=' + str(self.scale_factor)
- else:
- info = 'size=' + str(self.size)
- info += ', mode=' + self.mode
- return info
-
-
-def pixel_unshuffle(x, scale):
- """ Pixel unshuffle.
- Args:
- x (Tensor): Input feature with shape (b, c, hh, hw).
- scale (int): Downsample ratio.
- Returns:
- Tensor: the pixel unshuffled feature.
- """
- b, c, hh, hw = x.size()
- out_channel = c * (scale**2)
- assert hh % scale == 0 and hw % scale == 0
- h = hh // scale
- w = hw // scale
- x_view = x.view(b, c, h, scale, w, scale)
- return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w)
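-# Example: pixel_unshuffle on a tensor of shape (1, 3, 64, 64) with scale=2 returns a
-# tensor of shape (1, 12, 32, 32); values are only rearranged between the spatial and
-# channel dimensions, never interpolated.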
-
-
-def pixelshuffle_block(in_nc, out_nc, upscale_factor=2, kernel_size=3, stride=1, bias=True,
- pad_type='zero', norm_type=None, act_type='relu', convtype='Conv2D'):
- """
- Pixel shuffle layer
- (Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional
- Neural Network, CVPR17)
- """
- conv = conv_block(in_nc, out_nc * (upscale_factor ** 2), kernel_size, stride, bias=bias,
- pad_type=pad_type, norm_type=None, act_type=None, convtype=convtype)
- pixel_shuffle = nn.PixelShuffle(upscale_factor)
-
- n = norm(norm_type, out_nc) if norm_type else None
- a = act(act_type) if act_type else None
- return sequential(conv, pixel_shuffle, n, a)
-
-
-def upconv_block(in_nc, out_nc, upscale_factor=2, kernel_size=3, stride=1, bias=True,
- pad_type='zero', norm_type=None, act_type='relu', mode='nearest', convtype='Conv2D'):
- """ Upconv layer """
- upscale_factor = (1, upscale_factor, upscale_factor) if convtype == 'Conv3D' else upscale_factor
- upsample = Upsample(scale_factor=upscale_factor, mode=mode)
- conv = conv_block(in_nc, out_nc, kernel_size, stride, bias=bias,
- pad_type=pad_type, norm_type=norm_type, act_type=act_type, convtype=convtype)
- return sequential(upsample, conv)
-
-
-
-
-
-
-
-
-####################
-# Basic blocks
-####################
-
-
-def make_layer(basic_block, num_basic_block, **kwarg):
- """Make layers by stacking the same blocks.
- Args:
- basic_block (nn.module): nn.module class for basic block. (block)
- num_basic_block (int): number of blocks. (n_layers)
- Returns:
- nn.Sequential: Stacked blocks in nn.Sequential.
- """
- layers = []
- for _ in range(num_basic_block):
- layers.append(basic_block(**kwarg))
- return nn.Sequential(*layers)
-
-
-def act(act_type, inplace=True, neg_slope=0.2, n_prelu=1, beta=1.0):
- """ activation helper """
- act_type = act_type.lower()
- if act_type == 'relu':
- layer = nn.ReLU(inplace)
- elif act_type in ('leakyrelu', 'lrelu'):
- layer = nn.LeakyReLU(neg_slope, inplace)
- elif act_type == 'prelu':
- layer = nn.PReLU(num_parameters=n_prelu, init=neg_slope)
- elif act_type == 'tanh': # [-1, 1] range output
- layer = nn.Tanh()
- elif act_type == 'sigmoid': # [0, 1] range output
- layer = nn.Sigmoid()
- else:
- raise NotImplementedError('activation layer [{:s}] is not found'.format(act_type))
- return layer
-
-
-class Identity(nn.Module):
- def __init__(self, *kwargs):
- super(Identity, self).__init__()
-
- def forward(self, x, *kwargs):
- return x
-
-
-def norm(norm_type, nc):
- """ Return a normalization layer """
- norm_type = norm_type.lower()
- if norm_type == 'batch':
- layer = nn.BatchNorm2d(nc, affine=True)
- elif norm_type == 'instance':
- layer = nn.InstanceNorm2d(nc, affine=False)
- elif norm_type == 'none':
- layer = Identity()  # 'none' disables normalization via an identity layer
- else:
- raise NotImplementedError('normalization layer [{:s}] is not found'.format(norm_type))
- return layer
-
-
-def pad(pad_type, padding):
- """ padding layer helper """
- pad_type = pad_type.lower()
- if padding == 0:
- return None
- if pad_type == 'reflect':
- layer = nn.ReflectionPad2d(padding)
- elif pad_type == 'replicate':
- layer = nn.ReplicationPad2d(padding)
- elif pad_type == 'zero':
- layer = nn.ZeroPad2d(padding)
- else:
- raise NotImplementedError('padding layer [{:s}] is not implemented'.format(pad_type))
- return layer
-
-
-def get_valid_padding(kernel_size, dilation):
- kernel_size = kernel_size + (kernel_size - 1) * (dilation - 1)
- padding = (kernel_size - 1) // 2
- return padding
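-# Example: kernel_size=3 with dilation=1 gives padding=1; with dilation=2 the effective
-# kernel size becomes 5, so the returned padding is 2.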
-
-
-class ShortcutBlock(nn.Module):
- """ Elementwise sum the output of a submodule to its input """
- def __init__(self, submodule):
- super(ShortcutBlock, self).__init__()
- self.sub = submodule
-
- def forward(self, x):
- output = x + self.sub(x)
- return output
-
- def __repr__(self):
- return 'Identity + \n|' + self.sub.__repr__().replace('\n', '\n|')
-
-
-def sequential(*args):
- """ Flatten Sequential. It unwraps nn.Sequential. """
- if len(args) == 1:
- if isinstance(args[0], OrderedDict):
- raise NotImplementedError('sequential does not support OrderedDict input.')
- return args[0] # No sequential is needed.
- modules = []
- for module in args:
- if isinstance(module, nn.Sequential):
- for submodule in module.children():
- modules.append(submodule)
- elif isinstance(module, nn.Module):
- modules.append(module)
- return nn.Sequential(*modules)
-
-
-def conv_block(in_nc, out_nc, kernel_size, stride=1, dilation=1, groups=1, bias=True,
- pad_type='zero', norm_type=None, act_type='relu', mode='CNA', convtype='Conv2D',
- spectral_norm=False):
- """ Conv layer with padding, normalization, activation """
- assert mode in ['CNA', 'NAC', 'CNAC'], 'Wrong conv mode [{:s}]'.format(mode)
- padding = get_valid_padding(kernel_size, dilation)
- p = pad(pad_type, padding) if pad_type and pad_type != 'zero' else None
- padding = padding if pad_type == 'zero' else 0
-
- if convtype=='PartialConv2D':
- c = PartialConv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
- dilation=dilation, bias=bias, groups=groups)
- elif convtype=='DeformConv2D':
- c = DeformConv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
- dilation=dilation, bias=bias, groups=groups)
- elif convtype=='Conv3D':
- c = nn.Conv3d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
- dilation=dilation, bias=bias, groups=groups)
- else:
- c = nn.Conv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
- dilation=dilation, bias=bias, groups=groups)
-
- if spectral_norm:
- c = nn.utils.spectral_norm(c)
-
- a = act(act_type) if act_type else None
- if 'CNA' in mode:
- n = norm(norm_type, out_nc) if norm_type else None
- return sequential(p, c, n, a)
- elif mode == 'NAC':
- if norm_type is None and act_type is not None:
- a = act(act_type, inplace=False)
- n = norm(norm_type, in_nc) if norm_type else None
- return sequential(n, a, p, c)
diff --git a/spaces/james-oldfield/PandA/networks/genforce/runners/controllers/progress_scheduler.py b/spaces/james-oldfield/PandA/networks/genforce/runners/controllers/progress_scheduler.py
deleted file mode 100644
index eef2b4d67bccbb30da6e7ee562844aa8c6ea3daa..0000000000000000000000000000000000000000
--- a/spaces/james-oldfield/PandA/networks/genforce/runners/controllers/progress_scheduler.py
+++ /dev/null
@@ -1,162 +0,0 @@
-# python3.7
-"""Contains the running controller to control progressive training.
-
-This controller is applicable to the models that need to progressively change
-the batch size, learning rate, etc.
-"""
-
-import numpy as np
-
-from .base_controller import BaseController
-
-__all__ = ['ProgressScheduler']
-
-_BATCH_SIZE_SCHEDULE_DICT = {
- 4: 16, 8: 8, 16: 4, 32: 2, 64: 1, 128: 1, 256: 1, 512: 1, 1024: 1,
-}
-_MAX_BATCH_SIZE = 64
-
-_LEARNING_RATE_SCHEDULE_DICT = {
- 4: 1, 8: 1, 16: 1, 32: 1, 64: 1, 128: 1.5, 256: 2, 512: 3, 1024: 3,
-}
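-# Example of how the defaults above combine: at resolution 8 the per-GPU batch size is
-# base_batch_size * 8 (capped at _MAX_BATCH_SIZE = 64), and the learning rate keeps its
-# base value up to resolution 64 before being scaled (1.5 at 128, 2 at 256, 3 at 512+);
-# see get_batch_size() and get_lr_scale() below.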
-
-
-class ProgressScheduler(BaseController):
- """Defines the running controller to control progressive training.
-
- NOTE: The controller is set to `HIGH` priority by default.
- """
-
- def __init__(self, config):
- assert isinstance(config, dict)
- config.setdefault('priority', 'HIGH')
- config.setdefault('every_n_iters', 1)
- super().__init__(config)
-
- self.base_batch_size = 0
- self.base_lrs = dict()
-
- self.total_img = 0
- self.init_res = config.get('init_res', 4)
- self.final_res = self.init_res
- self.init_lod = 0
- self.batch_size_schedule = config.get('batch_size_schedule', dict())
- self.lr_schedule = config.get('lr_schedule', dict())
- self.minibatch_repeats = config.get('minibatch_repeats', 4)
-
- self.lod_training_img = config.get('lod_training_img', 600_000)
- self.lod_transition_img = config.get('lod_transition_img', 600_000)
- self.lod_duration = (self.lod_training_img + self.lod_transition_img)
-
- # Whether to reset the optimizer state at the beginning of each phase.
- self.reset_optimizer = config.get('reset_optimizer', True)
-
- def get_batch_size(self, resolution):
- """Gets batch size for a particular resolution."""
- if self.batch_size_schedule:
- return self.batch_size_schedule.get(
- f'res{resolution}', self.base_batch_size)
- batch_size_scale = _BATCH_SIZE_SCHEDULE_DICT[resolution]
- return min(_MAX_BATCH_SIZE, self.base_batch_size * batch_size_scale)
-
- def get_lr_scale(self, resolution):
- """Gets learning rate scale for a particular resolution."""
- if self.lr_schedule:
- return self.lr_schedule.get(f'res{resolution}', 1)
- return _LEARNING_RATE_SCHEDULE_DICT[resolution]
-
- def setup(self, runner):
- # Set level of detail (lod).
- self.final_res = runner.resolution
- self.init_lod = np.log2(self.final_res // self.init_res)
- runner.lod = -1.0
-
- # Save default batch size and learning rate.
- self.base_batch_size = runner.batch_size
- for lr_name, lr_scheduler in runner.lr_schedulers.items():
- self.base_lrs[lr_name] = lr_scheduler.base_lrs
-
- # Add running stats for logging.
- runner.running_stats.add(
- 'kimg', log_format='7.1f', log_name='kimg', log_strategy='CURRENT')
- runner.running_stats.add(
- 'lod', log_format='4.2f', log_name='lod', log_strategy='CURRENT')
- runner.running_stats.add(
- 'minibatch', log_format='4d', log_name='minibatch',
- log_strategy='CURRENT')
-
- # Log progressive schedule.
- runner.logger.info(f'Progressive Schedule:')
- res = self.init_res
- lod = int(self.init_lod)
- while res <= self.final_res:
- batch_size = self.get_batch_size(res)
- lr_scale = self.get_lr_scale(res)
- runner.logger.info(f' Resolution {res:4d} (lod {lod}): '
- f'batch size '
- f'{batch_size:3d} * {runner.world_size:2d}, '
- f'learning rate scale {lr_scale:.1f}')
- res *= 2
- lod -= 1
- assert lod == -1 and res == self.final_res * 2
-
- # Compute total running iterations.
- assert hasattr(runner.config, 'total_img')
- self.total_img = runner.config.total_img
- current_img = 0
- num_iters = 0
- while current_img < self.total_img:
- phase = (current_img + self.lod_transition_img) // self.lod_duration
- phase = np.clip(phase, 0, self.init_lod)
- if num_iters % self.minibatch_repeats == 0:
- resolution = self.init_res * (2 ** int(phase))
- current_img += self.get_batch_size(resolution) * runner.world_size
- num_iters += 1
- runner.total_iters = num_iters
-
- def execute_before_iteration(self, runner):
- is_first_iter = (runner.iter - runner.start_iter == 1)
-
- # Adjust hyper-parameters only at some particular iteration.
- if (not is_first_iter) and (runner.iter % self.minibatch_repeats != 1):
- return
-
- # Compute level-of-details.
- phase, subphase = divmod(runner.seen_img, self.lod_duration)
- lod = self.init_lod - phase
- if self.lod_transition_img:
- transition_img = max(subphase - self.lod_training_img, 0)
- lod = lod - transition_img / self.lod_transition_img
- lod = max(lod, 0.0)
- resolution = self.init_res * (2 ** int(np.ceil(self.init_lod - lod)))
- batch_size = self.get_batch_size(resolution)
- lr_scale = self.get_lr_scale(resolution)
-
- pre_lod = runner.lod
- pre_resolution = runner.train_loader.dataset.resolution
- runner.lod = lod
-
- # Reset optimizer state if needed.
- if self.reset_optimizer:
- if int(lod) != int(pre_lod) or np.ceil(lod) != np.ceil(pre_lod):
- runner.logger.info(f'Reset the optimizer state at '
- f'iter {runner.iter:06d} (lod {lod:.6f}).')
- for name in runner.optimizers:
- runner.optimizers[name].state.clear()
-
- # Rebuild the dataset and adjust the learning rate if needed.
- if is_first_iter or resolution != pre_resolution:
- runner.logger.info(f'Rebuild the dataset at '
- f'iter {runner.iter:06d} (lod {lod:.6f}).')
- runner.train_loader.overwrite_param(
- batch_size=batch_size, resolution=resolution)
- runner.batch_size = batch_size
- for lr_name, base_lrs in self.base_lrs.items():
- runner.lr_schedulers[lr_name].base_lrs = [
- lr * lr_scale for lr in base_lrs]
-
- def execute_after_iteration(self, runner):
- minibatch = runner.batch_size * runner.world_size
- runner.running_stats.update({'kimg': runner.seen_img / 1000})
- runner.running_stats.update({'lod': runner.lod})
- runner.running_stats.update({'minibatch': minibatch})
diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/calc_metrics.py b/spaces/james-oldfield/PandA/networks/stylegan3/calc_metrics.py
deleted file mode 100644
index 52e1e9404dbaa8901352fc74475e6052e103f760..0000000000000000000000000000000000000000
--- a/spaces/james-oldfield/PandA/networks/stylegan3/calc_metrics.py
+++ /dev/null
@@ -1,188 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Calculate quality metrics for previous training run or pretrained network pickle."""
-
-import os
-import click
-import json
-import tempfile
-import copy
-import torch
-
-import dnnlib
-import legacy
-from metrics import metric_main
-from metrics import metric_utils
-from torch_utils import training_stats
-from torch_utils import custom_ops
-from torch_utils import misc
-from torch_utils.ops import conv2d_gradfix
-
-#----------------------------------------------------------------------------
-
-def subprocess_fn(rank, args, temp_dir):
- dnnlib.util.Logger(should_flush=True)
-
- # Init torch.distributed.
- if args.num_gpus > 1:
- init_file = os.path.abspath(os.path.join(temp_dir, '.torch_distributed_init'))
- if os.name == 'nt':
- init_method = 'file:///' + init_file.replace('\\', '/')
- torch.distributed.init_process_group(backend='gloo', init_method=init_method, rank=rank, world_size=args.num_gpus)
- else:
- init_method = f'file://{init_file}'
- torch.distributed.init_process_group(backend='nccl', init_method=init_method, rank=rank, world_size=args.num_gpus)
-
- # Init torch_utils.
- sync_device = torch.device('cuda', rank) if args.num_gpus > 1 else None
- training_stats.init_multiprocessing(rank=rank, sync_device=sync_device)
- if rank != 0 or not args.verbose:
- custom_ops.verbosity = 'none'
-
- # Configure torch.
- device = torch.device('cuda', rank)
- torch.backends.cuda.matmul.allow_tf32 = False
- torch.backends.cudnn.allow_tf32 = False
- conv2d_gradfix.enabled = True
-
- # Print network summary.
- G = copy.deepcopy(args.G).eval().requires_grad_(False).to(device)
- if rank == 0 and args.verbose:
- z = torch.empty([1, G.z_dim], device=device)
- c = torch.empty([1, G.c_dim], device=device)
- misc.print_module_summary(G, [z, c])
-
- # Calculate each metric.
- for metric in args.metrics:
- if rank == 0 and args.verbose:
- print(f'Calculating {metric}...')
- progress = metric_utils.ProgressMonitor(verbose=args.verbose)
- result_dict = metric_main.calc_metric(metric=metric, G=G, dataset_kwargs=args.dataset_kwargs,
- num_gpus=args.num_gpus, rank=rank, device=device, progress=progress)
- if rank == 0:
- metric_main.report_metric(result_dict, run_dir=args.run_dir, snapshot_pkl=args.network_pkl)
- if rank == 0 and args.verbose:
- print()
-
- # Done.
- if rank == 0 and args.verbose:
- print('Exiting...')
-
-#----------------------------------------------------------------------------
-
-def parse_comma_separated_list(s):
- if isinstance(s, list):
- return s
- if s is None or s.lower() == 'none' or s == '':
- return []
- return s.split(',')
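-# Example: parse_comma_separated_list('fid50k_full,kid50k_full') returns
-# ['fid50k_full', 'kid50k_full']; None, '' and 'none' (any case) all return [].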
-
-#----------------------------------------------------------------------------
-
-@click.command()
-@click.pass_context
-@click.option('network_pkl', '--network', help='Network pickle filename or URL', metavar='PATH', required=True)
-@click.option('--metrics', help='Quality metrics', metavar='[NAME|A,B,C|none]', type=parse_comma_separated_list, default='fid50k_full', show_default=True)
-@click.option('--data', help='Dataset to evaluate against [default: look up]', metavar='[ZIP|DIR]')
-@click.option('--mirror', help='Enable dataset x-flips [default: look up]', type=bool, metavar='BOOL')
-@click.option('--gpus', help='Number of GPUs to use', type=int, default=1, metavar='INT', show_default=True)
-@click.option('--verbose', help='Print optional information', type=bool, default=True, metavar='BOOL', show_default=True)
-
-def calc_metrics(ctx, network_pkl, metrics, data, mirror, gpus, verbose):
- """Calculate quality metrics for previous training run or pretrained network pickle.
-
- Examples:
-
- \b
- # Previous training run: look up options automatically, save result to JSONL file.
- python calc_metrics.py --metrics=eqt50k_int,eqr50k \\
- --network=~/training-runs/00000-stylegan3-r-mydataset/network-snapshot-000000.pkl
-
- \b
- # Pre-trained network pickle: specify dataset explicitly, print result to stdout.
- python calc_metrics.py --metrics=fid50k_full --data=~/datasets/ffhq-1024x1024.zip --mirror=1 \\
- --network=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-t-ffhq-1024x1024.pkl
-
- \b
- Recommended metrics:
- fid50k_full Frechet inception distance against the full dataset.
- kid50k_full Kernel inception distance against the full dataset.
- pr50k3_full Precision and recall against the full dataset.
- ppl2_wend Perceptual path length in W, endpoints, full image.
- eqt50k_int Equivariance w.r.t. integer translation (EQ-T).
- eqt50k_frac Equivariance w.r.t. fractional translation (EQ-T_frac).
- eqr50k Equivariance w.r.t. rotation (EQ-R).
-
- \b
- Legacy metrics:
- fid50k Frechet inception distance against 50k real images.
- kid50k Kernel inception distance against 50k real images.
- pr50k3 Precision and recall against 50k real images.
- is50k Inception score for CIFAR-10.
- """
- dnnlib.util.Logger(should_flush=True)
-
- # Validate arguments.
- args = dnnlib.EasyDict(metrics=metrics, num_gpus=gpus, network_pkl=network_pkl, verbose=verbose)
- if not all(metric_main.is_valid_metric(metric) for metric in args.metrics):
- ctx.fail('\n'.join(['--metrics can only contain the following values:'] + metric_main.list_valid_metrics()))
- if not args.num_gpus >= 1:
- ctx.fail('--gpus must be at least 1')
-
- # Load network.
- if not dnnlib.util.is_url(network_pkl, allow_file_urls=True) and not os.path.isfile(network_pkl):
- ctx.fail('--network must point to a file or URL')
- if args.verbose:
- print(f'Loading network from "{network_pkl}"...')
- with dnnlib.util.open_url(network_pkl, verbose=args.verbose) as f:
- network_dict = legacy.load_network_pkl(f)
- args.G = network_dict['G_ema'] # subclass of torch.nn.Module
-
- # Initialize dataset options.
- if data is not None:
- args.dataset_kwargs = dnnlib.EasyDict(class_name='training.dataset.ImageFolderDataset', path=data)
- elif network_dict['training_set_kwargs'] is not None:
- args.dataset_kwargs = dnnlib.EasyDict(network_dict['training_set_kwargs'])
- else:
- ctx.fail('Could not look up dataset options; please specify --data')
-
- # Finalize dataset options.
- args.dataset_kwargs.resolution = args.G.img_resolution
- args.dataset_kwargs.use_labels = (args.G.c_dim != 0)
- if mirror is not None:
- args.dataset_kwargs.xflip = mirror
-
- # Print dataset options.
- if args.verbose:
- print('Dataset options:')
- print(json.dumps(args.dataset_kwargs, indent=2))
-
- # Locate run dir.
- args.run_dir = None
- if os.path.isfile(network_pkl):
- pkl_dir = os.path.dirname(network_pkl)
- if os.path.isfile(os.path.join(pkl_dir, 'training_options.json')):
- args.run_dir = pkl_dir
-
- # Launch processes.
- if args.verbose:
- print('Launching processes...')
- torch.multiprocessing.set_start_method('spawn')
- with tempfile.TemporaryDirectory() as temp_dir:
- if args.num_gpus == 1:
- subprocess_fn(rank=0, args=args, temp_dir=temp_dir)
- else:
- torch.multiprocessing.spawn(fn=subprocess_fn, args=(args, temp_dir), nprocs=args.num_gpus)
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- calc_metrics() # pylint: disable=no-value-for-parameter
-
-#----------------------------------------------------------------------------
diff --git a/spaces/jangocheng/stable-diffusion-webui-cpu_with_prompt_pub/README.md b/spaces/jangocheng/stable-diffusion-webui-cpu_with_prompt_pub/README.md
deleted file mode 100644
index e216906e0828834241b7195278762f81be2a90ea..0000000000000000000000000000000000000000
--- a/spaces/jangocheng/stable-diffusion-webui-cpu_with_prompt_pub/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Stable Diffusion Webui on Cpu
-emoji: 🏃
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-python_version: 3.10.6
-duplicated_from: jangocheng/stable-diffusion-webui-cpu_with_prompt
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jbilcke-hf/media-server/converter.js b/spaces/jbilcke-hf/media-server/converter.js
deleted file mode 100644
index f2d7b5dafabfa57a6d419377124480cd2973e3d2..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/media-server/converter.js
+++ /dev/null
@@ -1,49 +0,0 @@
-const { v4 } = require('uuid')
-const raw = {
- "input": "Scenes from a movie about a young magician who explore a damp, misty forest, see a majestic deer, who transform into a beautiful and friendly witch, which talks to the magician. She gives him a magical wand. The magician travels back to his magical school of magic on his witch broom, just in time to fight a huge orc goblin which is trying to destroy the magical school. The magical schools is beautiful, looking a bit like an eerie chapel, alone in the middle of a misty lake. The shots should all be beautiful, using appropriate large or shot range shots, with golden hour for the castle shot, nice camera movement etc.",
-
- "captions": [
- "Photorealistic movie shot of a young magician, with his navy blue wizard robes adorned with silver moons and stars, his brown leather boots, and his feathered hat stepping hesitantly into a damp, misty forest filled with moss-laden ancient trees and ferns underbrush in the morning light, shot in Cinema 4D, showcasing a high degree of photorealism.",
-
- "Cinematic video of a majestic deer, its majestic antlers laced with flourishing green foliage, emerging from the foggy backdrop bathed in the warm glow of the morning sun, forming a stark contrast against the earthly hues of the surrounding misty forest, captured in 8K UHD, to unveil the breathtakingly realistic details.",
-
- "Cinematic rendition of the deer morphing into the earth-toned witch, in ethereal display of light and magical elements, the misty forest alive with twinkling fairy lights that perfectly mimic the real world counterparts, shot in 4k UHD, reveling in its award-winning photorealism.",
-
- "Movie scene of the witch and the young magician, dressed in his navy robe with starry print, in earnest conversation on a moss-covered bridge, washed in the dewy morn's light, with their expressive eyes twinkling, the Cinema 4D camera movement emulating the human eyes' attention to detail.",
-
- "Feature film quality shot of the kind witch, in her botanical-themed earthy toned robe, gifting the young magician, in his iconic starry, navy robe, the intricately designed magical wand aglow with magical energy, in the dimly lit yet mystifying forest, brought to life in unparalleled photorealistic detail by an 8K UHD camera.",
-
- "HD video in Cinema 4D capturing the young magician in his navy blue robes, etched with silver stars and moons, his feathered hat still firmly in place, riding his enchanted broomstick over lush forests, under the morning sky painted with pastel hues, showcasing its award-winning photorealistic details.",
-
- "Photorealistic, wide-angle video of the grand magical school, resembling a serene, slightly eerie chapel, in the middle of a misty lake, bathed in the ethereal glow of the setting sun, masterfully captured in 8K UHD that showcases each texture, color, and the play of lights and shadows in high detail.",
-
- "Award winning movie snapshot of the looming Orc Goblin, its hideous features enhanced by the subtly dramatic sunlight, emerging ominously onto the school's sacred grounds, painted in sharp, realistic detail and contrasted against the seemingly tranquil lake and verdant surroundings, all captured in 4K UHD with photorealistic CGI effects.",
-
- "High quality 4k UHD video featuring the young magician, still garbed in his signature navy robes adorned with silver moons and stars, ready to confront the monster through the plush velvet drapes and marble archways of the school, the surrounding fog adding a sense of ethereal beauty to the tense scenario, captured in ultra-realistic Cinema 4D.",
-
- "Cinematic highlight video in 4K quality portraying the young magician, in his iconic wizard outfit with shimmering moons and stars, standing defiantly as he casts a spell that illuminates the ornate interiors of the dark school, serving a visual feast of rich colors, deep shadows, and flawless textures, embodying the true essence of ultra-realistic cinematography."
- ]
-}
-const result = {
- "sequenceId": v4(),
- "skip": false,
- "lastGenerationAt": "",
- "videoPrompt": raw.input,
- "audioPrompt": "epic orchestral music, for a movie about magicians",
- "tags": [
- "trailer",
- "cinema",
- "fantasy",
- "adventure"
- ],
- "channel": "main",
- "shots": raw.captions.map((cap, i) => ({
- shotId: v4(),
- "index": i,
- "lastGenerationAt": "",
- "videoPrompt": cap,
- "audioPrompt": ""
- }))
-}
-
-console.log(JSON.stringify(result, null, 2))
\ No newline at end of file
diff --git a/spaces/jbilcke-hf/space-factory/public/placeholder.html b/spaces/jbilcke-hf/space-factory/public/placeholder.html
deleted file mode 100644
index 634e0d7233aa37d8366464d5fe727c555442c43a..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/space-factory/public/placeholder.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
- Nothing to show (yet)
-
-
-
-
-
-
-
-
Waiting for content..
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/jessica6105/Lu-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/jessica6105/Lu-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
deleted file mode 100644
index ebc4b2e6fb6b95ddc5f678b4a7f829466799f2da..0000000000000000000000000000000000000000
--- a/spaces/jessica6105/Lu-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-language:
-- zh
-tags:
-- bert
-license: "apache-2.0"
----
-
-# Please use 'Bert' related functions to load this model!
-
-## Chinese BERT with Whole Word Masking
-To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
-
-**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
-Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
-
-This repository is developed based on: https://github.com/google-research/bert
-
-You may also be interested in:
-- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
-- Chinese MacBERT: https://github.com/ymcui/MacBERT
-- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
-- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
-- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
-
-More resources by HFL: https://github.com/ymcui/HFL-Anthology
-
-## Citation
-If you find the technical report or resources useful, please cite the following technical report in your paper.
-- Primary: https://arxiv.org/abs/2004.13922
-```
-@inproceedings{cui-etal-2020-revisiting,
- title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
- author = "Cui, Yiming and
- Che, Wanxiang and
- Liu, Ting and
- Qin, Bing and
- Wang, Shijin and
- Hu, Guoping",
- booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
- month = nov,
- year = "2020",
- address = "Online",
- publisher = "Association for Computational Linguistics",
- url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
- pages = "657--668",
-}
-```
-- Secondary: https://arxiv.org/abs/1906.08101
-```
-@article{chinese-bert-wwm,
- title={Pre-Training with Whole Word Masking for Chinese BERT},
- author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
- journal={arXiv preprint arXiv:1906.08101},
- year={2019}
- }
-```
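
The heading above insists on loading this checkpoint with the BERT classes rather than the RoBERTa ones. A minimal sketch of what that looks like, assuming the public `hfl/chinese-roberta-wwm-ext-large` checkpoint as a stand-in for this local copy:

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

inputs = tokenizer("使用整词掩码的中文预训练模型", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024)
```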
diff --git a/spaces/jessica6105/Lu-Bert-VITS2/modules.py b/spaces/jessica6105/Lu-Bert-VITS2/modules.py
deleted file mode 100644
index b1f89a2f837f190a3dd5de52e7a4e183f1024306..0000000000000000000000000000000000000000
--- a/spaces/jessica6105/Lu-Bert-VITS2/modules.py
+++ /dev/null
@@ -1,597 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-from attentions import Encoder
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
-
-
-class TransformerCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- n_layers,
- n_heads,
- p_dropout=0,
- filter_channels=0,
- mean_only=False,
- wn_sharing_parameter=None,
- gin_channels=0,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = (
- Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- isflow=True,
- gin_channels=gin_channels,
- )
- if wn_sharing_parameter is None
- else wn_sharing_parameter
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
-        else:
-            x1 = (x1 - m) * torch.exp(-logs) * x_mask
-            x = torch.cat([x0, x1], 1)
-            return x
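
A self-contained sketch of the affine coupling pattern shared by `ResidualCouplingLayer` and `TransformerCouplingLayer` above: half of the channels passes through unchanged and (in the real layers, via `pre`/`enc`/`post`) parameterises a scale and shift of the other half, so the reverse pass recovers the input exactly. The mask and the conditioning network are omitted here; `m` and `logs` are fed in directly just to show the algebra.

```python
import torch

def affine_coupling(x, m, logs, reverse=False):
    x0, x1 = torch.chunk(x, 2, dim=1)            # keep x0, transform x1
    if not reverse:
        y1 = m + x1 * torch.exp(logs)            # forward: scale and shift
        logdet = torch.sum(logs, [1, 2])         # log-determinant of the Jacobian
        return torch.cat([x0, y1], 1), logdet
    x1 = (x1 - m) * torch.exp(-logs)             # reverse: undo shift and scale
    return torch.cat([x0, x1], 1)

x = torch.randn(2, 4, 8)                         # [batch, channels, time]
m, logs = torch.randn(2, 2, 8), torch.randn(2, 2, 8)
y, logdet = affine_coupling(x, m, logs)
assert torch.allclose(affine_coupling(y, m, logs, reverse=True), x, atol=1e-5)
```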
diff --git a/spaces/jhwen/bingo/src/components/ui/dropdown-menu.tsx b/spaces/jhwen/bingo/src/components/ui/dropdown-menu.tsx
deleted file mode 100644
index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000
--- a/spaces/jhwen/bingo/src/components/ui/dropdown-menu.tsx
+++ /dev/null
@@ -1,128 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu'
-
-import { cn } from '@/lib/utils'
-
-const DropdownMenu = DropdownMenuPrimitive.Root
-
-const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger
-
-const DropdownMenuGroup = DropdownMenuPrimitive.Group
-
-const DropdownMenuPortal = DropdownMenuPrimitive.Portal
-
-const DropdownMenuSub = DropdownMenuPrimitive.Sub
-
-const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup
-
-const DropdownMenuSubContent = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-DropdownMenuSubContent.displayName =
- DropdownMenuPrimitive.SubContent.displayName
-
-const DropdownMenuContent = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, sideOffset = 4, ...props }, ref) => (
-
-
-
-))
-DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName
-
-const DropdownMenuItem = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef & {
- inset?: boolean
- }
->(({ className, inset, ...props }, ref) => (
-
-))
-DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName
-
-const DropdownMenuLabel = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef & {
- inset?: boolean
- }
->(({ className, inset, ...props }, ref) => (
-
-))
-DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName
-
-const DropdownMenuSeparator = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName
-
-const DropdownMenuShortcut = ({
- className,
- ...props
-}: React.HTMLAttributes) => {
- return (
-
- )
-}
-DropdownMenuShortcut.displayName = 'DropdownMenuShortcut'
-
-export {
- DropdownMenu,
- DropdownMenuTrigger,
- DropdownMenuContent,
- DropdownMenuItem,
- DropdownMenuLabel,
- DropdownMenuSeparator,
- DropdownMenuShortcut,
- DropdownMenuGroup,
- DropdownMenuPortal,
- DropdownMenuSub,
- DropdownMenuSubContent,
- DropdownMenuRadioGroup
-}
diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/brenda_get_smiles.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/brenda_get_smiles.py
deleted file mode 100644
index 27f7edeaed4d4a847170f084562248028d505720..0000000000000000000000000000000000000000
--- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/brenda_get_smiles.py
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/usr/bin/python
-# coding: utf-8
-
-# Author: LE YUAN
-# Date: 2020-07-23
-
-# This Python script obtains canonical SMILES from chemical names using the PubChem API
-import json
-import time
-import requests
-import multiprocessing as mp
-from multiprocessing.dummy import Pool
-from pubchempy import Compound, get_compounds
-
-
-# Small example:
-# results = get_compounds('aspirin', 'name')
-# for compound in results :
-# print(compound.canonical_smiles)
-
-# try it out by running 100 cases
-# with open("../complementaryData/Kcat_sabio_clean_unisubstrate.tsv", "r", encoding='utf-8') as file :
-# lines = file.readlines()[1:]
-# substrates = [line.strip().split('\t')[2] for line in lines]
-
-# print(len(substrates))
-# print(substrates[:10])
-
-# for substrate in substrates[:100] :
-# print(substrate)
-# results = get_compounds(substrate, 'name')
-# print(len(results))
-# if len(results) >0 :
-# print(results[0].canonical_smiles)
-# else :
-# print('-------------------------------------------------')
-
-name_smiles = dict()
-
-# One method to obtain SMILES by name using the PubChem PUG REST API
-def get_smiles(name):
- # smiles = redis_cli.get(name)
- # if smiles is None:
- try :
- url = 'https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/%s/property/CanonicalSMILES/TXT' % name
- req = requests.get(url)
- if req.status_code != 200:
- smiles = None
- else:
- smiles = req.content.splitlines()[0].decode()
- print(smiles)
- # redis_cli.set(name, smiles, ex=None)
-
- # print smiles
- except :
- smiles = None
-
- name_smiles[name] = smiles
-
-
-# Another method to retrieve SMILES by Pubchempy
-# def get_smiles(name):
-# time.sleep(0.5)
-# results = get_compounds(name, 'name')
-
-# # print(len(results))
-# if len(results) >0 :
-# smiles = results[0].canonical_smiles
-# print(smiles)
-# else :
-# smiles = None
-# print(smiles)
-# print('-------------------------------------------------')
-
-# name_smiles[name] = smiles
-
-
-# # To obtain SMILES for substrates using provided API by PubChem
-# def main():
-# # with open('./smiles_data.json') as f:
-# # names = json.load(f)
-# # print(len(names))
-
-# with open("../complementaryData/Kcat_brenda_clean.tsv", "r", encoding='utf-8') as file :
-# lines = file.readlines()[1:]
-
-# substrates = [line.strip().split('\t')[2] for line in lines]
-
-# print(len(substrates)) # 52390
-
-# names = list(set(substrates))
-# print(len(names)) # 14457
-
-# # for substrate in substrates[:100] :
-# # print(substrate)
-
-# # thread_pool = mp.Pool(4)
-# thread_pool = Pool(4)
-# thread_pool.map(get_smiles, names)
-
-# with open('../complementaryData/Kcat_brenda_smiles.json', 'w') as outfile:
-# json.dump(name_smiles, outfile, indent=2)
-
-
-# To test how many entries have SMILES for the BRENDA database
-def main():
- with open('../../Data/database/Kcat_brenda_smiles.json', 'r') as infile:
- name_smiles = json.load(infile)
-
- with open("../../Data/database/Kcat_brenda_clean.tsv", "r", encoding='utf-8') as file :
- lines = file.readlines()[1:]
-
- substrates = [line.strip().split('\t')[2] for line in lines]
-
- print(len(substrates)) # 52390
-
- substrate_smiles = list()
- for substrate in substrates :
- # print(substrate)
- smiles = name_smiles[substrate]
- # print(smiles)
- if smiles is not None :
- # print(smiles)
- substrate_smiles.append(smiles)
-
- print(len(substrate_smiles)) # 34857 have SMILES
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
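
A quick, hedged usage check for `get_smiles` above; it needs network access to PubChem, and the SMILES in the comment is only an example of the expected output shape.

```python
for name in ["aspirin", "caffeine"]:
    get_smiles(name)

print(name_smiles)
# e.g. {'aspirin': 'CC(=O)OC1=CC=CC=C1C(=O)O', 'caffeine': '...'}
```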
diff --git a/spaces/jmcob/AR-VR-IOT-Demo/style.css b/spaces/jmcob/AR-VR-IOT-Demo/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/jmcob/AR-VR-IOT-Demo/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/abc/_subprocesses.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/abc/_subprocesses.py
deleted file mode 100644
index 704b44a2dda9e21997acf52c268e414d01bd2eb5..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/abc/_subprocesses.py
+++ /dev/null
@@ -1,79 +0,0 @@
-from __future__ import annotations
-
-from abc import abstractmethod
-from signal import Signals
-
-from ._resources import AsyncResource
-from ._streams import ByteReceiveStream, ByteSendStream
-
-
-class Process(AsyncResource):
- """An asynchronous version of :class:`subprocess.Popen`."""
-
- @abstractmethod
- async def wait(self) -> int:
- """
- Wait until the process exits.
-
- :return: the exit code of the process
- """
-
- @abstractmethod
- def terminate(self) -> None:
- """
- Terminates the process, gracefully if possible.
-
- On Windows, this calls ``TerminateProcess()``.
- On POSIX systems, this sends ``SIGTERM`` to the process.
-
- .. seealso:: :meth:`subprocess.Popen.terminate`
- """
-
- @abstractmethod
- def kill(self) -> None:
- """
- Kills the process.
-
- On Windows, this calls ``TerminateProcess()``.
- On POSIX systems, this sends ``SIGKILL`` to the process.
-
- .. seealso:: :meth:`subprocess.Popen.kill`
- """
-
- @abstractmethod
- def send_signal(self, signal: Signals) -> None:
- """
- Send a signal to the subprocess.
-
- .. seealso:: :meth:`subprocess.Popen.send_signal`
-
- :param signal: the signal number (e.g. :data:`signal.SIGHUP`)
- """
-
- @property
- @abstractmethod
- def pid(self) -> int:
- """The process ID of the process."""
-
- @property
- @abstractmethod
- def returncode(self) -> int | None:
- """
- The return code of the process. If the process has not yet terminated, this will be
- ``None``.
- """
-
- @property
- @abstractmethod
- def stdin(self) -> ByteSendStream | None:
- """The stream for the standard input of the process."""
-
- @property
- @abstractmethod
- def stdout(self) -> ByteReceiveStream | None:
- """The stream for the standard output of the process."""
-
- @property
- @abstractmethod
- def stderr(self) -> ByteReceiveStream | None:
- """The stream for the standard error output of the process."""
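
A minimal sketch of how an implementation of this `Process` interface is typically obtained through anyio's high-level helper and drained; the `echo` command assumes a POSIX environment.

```python
import anyio

async def main() -> None:
    # open_process returns an object implementing the Process ABC defined above
    async with await anyio.open_process(["echo", "hello"]) as process:
        chunks = []
        if process.stdout is not None:
            try:
                while True:
                    chunks.append(await process.stdout.receive())
            except anyio.EndOfStream:
                pass
        returncode = await process.wait()
        print(returncode, b"".join(chunks))

anyio.run(main)
```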
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_P_O_S_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_P_O_S_.py
deleted file mode 100644
index ca8290bab440e31196dd009c5125e022a079d7af..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_P_O_S_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-class table_G_P_O_S_(BaseTTXConverter):
- pass
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/common/struct_store/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/common/struct_store/__init__.py
deleted file mode 100644
index c637335013c599b07de054fba07b47ecb86ad3e8..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/common/struct_store/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Init params."""
diff --git a/spaces/jordonpeter01/ai-comic-factory/src/app/interface/zoom/index.tsx b/spaces/jordonpeter01/ai-comic-factory/src/app/interface/zoom/index.tsx
deleted file mode 100644
index 5c8d31a3af1c80f8a9ef15330bb84c0d2c3069de..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/ai-comic-factory/src/app/interface/zoom/index.tsx
+++ /dev/null
@@ -1,35 +0,0 @@
-import { useStore } from "@/app/store"
-import { VerticalSlider } from "@/components/ui/vertical-slider"
-import { cn } from "@/lib/utils"
-
-export function Zoom() {
- const zoomLevel = useStore((state) => state.zoomLevel)
- const setZoomLevel = useStore((state) => state.setZoomLevel)
- const isGeneratingStory = useStore((state) => state.isGeneratingStory)
-
- return (
-
- )
-}
\ No newline at end of file
diff --git a/spaces/justin-zk/Personalize-SAM/show.py b/spaces/justin-zk/Personalize-SAM/show.py
deleted file mode 100644
index c07e4fe530942e1670a7299edde3294a7fc571da..0000000000000000000000000000000000000000
--- a/spaces/justin-zk/Personalize-SAM/show.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import numpy as np
-import torch
-import matplotlib.pyplot as plt
-import cv2
-
-
-
-def show_mask(mask, ax, random_color=False):
- if random_color:
- color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
- else:
- color = np.array([30/255, 144/255, 255/255, 0.4])
- h, w = mask.shape[-2:]
- mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
- ax.imshow(mask_image)
-
-
-def show_points(coords, labels, ax, marker_size=375):
- pos_points = coords[labels==1]
- neg_points = coords[labels==0]
- ax.scatter(pos_points[:, 0], pos_points[:, 1], color='green', marker='*', s=marker_size, edgecolor='white', linewidth=1.25)
- ax.scatter(neg_points[:, 0], neg_points[:, 1], color='red', marker='*', s=marker_size, edgecolor='white', linewidth=1.25)
-
-
-def show_box(box, ax):
- x0, y0 = box[0], box[1]
- w, h = box[2] - box[0], box[3] - box[1]
- ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor='green', facecolor=(0,0,0,0), lw=2))
\ No newline at end of file
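
A hedged usage sketch for the three helpers above; the image, mask, click point, and box are synthetic stand-ins rather than real SAM outputs.

```python
import numpy as np
import matplotlib.pyplot as plt

image = np.zeros((256, 256, 3), dtype=np.uint8)        # stand-in for a real photo
mask = np.zeros((256, 256), dtype=bool)
mask[80:180, 60:200] = True                            # fake segmentation mask

fig, ax = plt.subplots()
ax.imshow(image)
show_mask(mask, ax)                                            # translucent overlay
show_points(np.array([[128, 128]]), np.array([1]), ax)         # one positive click
show_box([60, 80, 200, 180], ax)                               # xyxy bounding box
plt.show()
```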
diff --git a/spaces/kalebu/LangChain_heyooBot/app.py b/spaces/kalebu/LangChain_heyooBot/app.py
deleted file mode 100644
index ae3a8d316c06df5b2acbe9ddf2ab0431fe946888..0000000000000000000000000000000000000000
--- a/spaces/kalebu/LangChain_heyooBot/app.py
+++ /dev/null
@@ -1,80 +0,0 @@
-from langchain.llms import OpenAI
-from langchain.chains.qa_with_sources import load_qa_with_sources_chain
-from langchain.docstore.document import Document
-import requests
-import pathlib
-import subprocess
-import tempfile
-import os
-import gradio as gr
-import pickle
-
-# using a vector space for our search
-from langchain.embeddings.openai import OpenAIEmbeddings
-from langchain.vectorstores.faiss import FAISS
-from langchain.text_splitter import CharacterTextSplitter
-
-#loading FAISS search index from disk
-with open("search_index.pickle", "rb") as f:
- search_index = pickle.load(f)
-
-#Get GPT3 response using Langchain
-def print_answer(question, openai): #openai_embeddings
- #search_index = get_search_index()
- chain = load_qa_with_sources_chain(openai) #(OpenAI(temperature=0))
- response = (
- chain(
- {
- "input_documents": search_index.similarity_search(question, k=4),
- "question": question,
- },
- return_only_outputs=True,
- )["output_text"]
- )
- if len(response.split('\n')[-1].split())>2:
- response = response.split('\n')[0] + ', '.join([' Click Link' + str(i) + '' for i in range(1,len(response.split('\n')[-1].split()))])
- else:
- response = response.split('\n')[0] + ' Click Link'
- return response
-
-
-def chat(message, history, openai_api_key):
- #openai_embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
- openai = OpenAI(temperature=0, openai_api_key=openai_api_key )
- #os.environ["OPENAI_API_KEY"] = openai_api_key
- history = history or []
- message = message.lower()
- response = print_answer(message, openai) #openai_embeddings
- history.append((message, response))
- return history, history
-
-
-with gr.Blocks() as demo:
- gr.HTML("""
-
-
- heyoo QandA - LangChain Bot
-
-
-
- Hi, I'm a Q and A heyoo expert bot, start by typing in your OpenAI API key, questions/issues you are facing in your heyoo implementations and then press enter.
- Duplicate Space with GPU Upgrade for fast Inference & no queue
- Built using LangChain and Gradio for the heyoo Repo
-
-
""")
- with gr.Row():
- question = gr.Textbox(label = 'Type in your questions about heyoo here and press Enter!', placeholder = 'What questions do you want to ask about the heyoo library?')
- openai_api_key = gr.Textbox(type='password', label="Enter your OpenAI API key here")
- state = gr.State()
- chatbot = gr.Chatbot()
- question.submit(chat, [question, state, openai_api_key], [chatbot, state])
-
-if __name__ == "__main__":
- demo.launch()
\ No newline at end of file
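
The app above loads a prebuilt `search_index.pickle`. A minimal sketch of how such an index could be produced with the same (legacy) LangChain APIs the file imports; the document text, source URL, and chunking parameters here are assumptions for illustration.

```python
import pickle

from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.faiss import FAISS

docs = [
    Document(
        page_content="heyoo is a Python wrapper around the WhatsApp Cloud API.",
        metadata={"source": "https://github.com/Neurotech-HQ/heyoo"},
    )
]
splits = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)
index = FAISS.from_documents(splits, OpenAIEmbeddings(openai_api_key="sk-..."))

with open("search_index.pickle", "wb") as f:
    pickle.dump(index, f)
```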
diff --git a/spaces/kcagle/AutoGPT/tests/integration/milvus_memory_tests.py b/spaces/kcagle/AutoGPT/tests/integration/milvus_memory_tests.py
deleted file mode 100644
index ec38bf2f72087b5da679d26594ebff97d8a09b19..0000000000000000000000000000000000000000
--- a/spaces/kcagle/AutoGPT/tests/integration/milvus_memory_tests.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# sourcery skip: snake-case-functions
-"""Tests for the MilvusMemory class."""
-import random
-import string
-import unittest
-
-from autogpt.config import Config
-from autogpt.memory.milvus import MilvusMemory
-
-try:
-
- class TestMilvusMemory(unittest.TestCase):
- """Tests for the MilvusMemory class."""
-
- def random_string(self, length: int) -> str:
- """Generate a random string of the given length."""
- return "".join(random.choice(string.ascii_letters) for _ in range(length))
-
- def setUp(self) -> None:
- """Set up the test environment."""
- cfg = Config()
- cfg.milvus_addr = "localhost:19530"
- self.memory = MilvusMemory(cfg)
- self.memory.clear()
-
- # Add example texts to the cache
- self.example_texts = [
- "The quick brown fox jumps over the lazy dog",
- "I love machine learning and natural language processing",
- "The cake is a lie, but the pie is always true",
- "ChatGPT is an advanced AI model for conversation",
- ]
-
- for text in self.example_texts:
- self.memory.add(text)
-
- # Add some random strings to test noise
- for _ in range(5):
- self.memory.add(self.random_string(10))
-
- def test_get_relevant(self) -> None:
- """Test getting relevant texts from the cache."""
- query = "I'm interested in artificial intelligence and NLP"
- num_relevant = 3
- relevant_texts = self.memory.get_relevant(query, num_relevant)
-
-            print(f"Top {num_relevant} relevant texts for the query '{query}':")
-            for i, text in enumerate(relevant_texts, start=1):
-                print(f"{i}. {text}")
-
-            self.assertEqual(len(relevant_texts), num_relevant)
-            self.assertIn(self.example_texts[1], relevant_texts)
-
-except:
- print(
- "Skipping tests/integration/milvus_memory_tests.py as Milvus is not installed."
- )
diff --git a/spaces/keras-io/keras-video-classification-cnn-rnn/README.md b/spaces/keras-io/keras-video-classification-cnn-rnn/README.md
deleted file mode 100644
index 4a7e56589f222c675e00ba348573abe3bc86b825..0000000000000000000000000000000000000000
--- a/spaces/keras-io/keras-video-classification-cnn-rnn/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Video Classification with CNN-RNN
-emoji: 🎬
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/partial_fc.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/partial_fc.py
deleted file mode 100644
index 17e2d25715d10ba446c957e1d2528b0687ed71d5..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/partial_fc.py
+++ /dev/null
@@ -1,222 +0,0 @@
-import logging
-import os
-
-import torch
-import torch.distributed as dist
-from torch.nn import Module
-from torch.nn.functional import normalize, linear
-from torch.nn.parameter import Parameter
-
-
-class PartialFC(Module):
- """
- Author: {Xiang An, Yang Xiao, XuHan Zhu} in DeepGlint,
- Partial FC: Training 10 Million Identities on a Single Machine
- See the original paper:
- https://arxiv.org/abs/2010.05222
- """
-
- @torch.no_grad()
- def __init__(self, rank, local_rank, world_size, batch_size, resume,
- margin_softmax, num_classes, sample_rate=1.0, embedding_size=512, prefix="./"):
- """
- rank: int
- Unique process(GPU) ID from 0 to world_size - 1.
- local_rank: int
- Unique process(GPU) ID within the server from 0 to 7.
- world_size: int
-            Number of GPUs.
- batch_size: int
- Batch size on current rank(GPU).
- resume: bool
- Select whether to restore the weight of softmax.
- margin_softmax: callable
- A function of margin softmax, eg: cosface, arcface.
- num_classes: int
- The number of class center storage in current rank(CPU/GPU), usually is total_classes // world_size,
- required.
- sample_rate: float
-            The partial fc sampling rate; when the number of classes increases to more than 2 million, sampling
-            can greatly speed up training and reduce a lot of GPU memory. Default is 1.0.
- embedding_size: int
- The feature dimension, default is 512.
- prefix: str
- Path for save checkpoint, default is './'.
- """
- super(PartialFC, self).__init__()
- #
- self.num_classes: int = num_classes
- self.rank: int = rank
- self.local_rank: int = local_rank
- self.device: torch.device = torch.device("cuda:{}".format(self.local_rank))
- self.world_size: int = world_size
- self.batch_size: int = batch_size
- self.margin_softmax: callable = margin_softmax
- self.sample_rate: float = sample_rate
- self.embedding_size: int = embedding_size
- self.prefix: str = prefix
- self.num_local: int = num_classes // world_size + int(rank < num_classes % world_size)
- self.class_start: int = num_classes // world_size * rank + min(rank, num_classes % world_size)
- self.num_sample: int = int(self.sample_rate * self.num_local)
-
- self.weight_name = os.path.join(self.prefix, "rank_{}_softmax_weight.pt".format(self.rank))
- self.weight_mom_name = os.path.join(self.prefix, "rank_{}_softmax_weight_mom.pt".format(self.rank))
-
- if resume:
- try:
- self.weight: torch.Tensor = torch.load(self.weight_name)
- self.weight_mom: torch.Tensor = torch.load(self.weight_mom_name)
- if self.weight.shape[0] != self.num_local or self.weight_mom.shape[0] != self.num_local:
- raise IndexError
- logging.info("softmax weight resume successfully!")
- logging.info("softmax weight mom resume successfully!")
- except (FileNotFoundError, KeyError, IndexError):
- self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device)
- self.weight_mom: torch.Tensor = torch.zeros_like(self.weight)
- logging.info("softmax weight init!")
- logging.info("softmax weight mom init!")
- else:
- self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device)
- self.weight_mom: torch.Tensor = torch.zeros_like(self.weight)
- logging.info("softmax weight init successfully!")
- logging.info("softmax weight mom init successfully!")
- self.stream: torch.cuda.Stream = torch.cuda.Stream(local_rank)
-
- self.index = None
- if int(self.sample_rate) == 1:
- self.update = lambda: 0
- self.sub_weight = Parameter(self.weight)
- self.sub_weight_mom = self.weight_mom
- else:
- self.sub_weight = Parameter(torch.empty((0, 0)).cuda(local_rank))
-
- def save_params(self):
-        """ Save the softmax weight (and its momentum) for each rank under self.prefix.
- """
- torch.save(self.weight.data, self.weight_name)
- torch.save(self.weight_mom, self.weight_mom_name)
-
- @torch.no_grad()
- def sample(self, total_label):
- """
-        Sample all positive class centers in each rank, and randomly select negative class centers to fill a fixed
-        `num_sample`.
-
- total_label: tensor
- Label after all gather, which cross all GPUs.
- """
- index_positive = (self.class_start <= total_label) & (total_label < self.class_start + self.num_local)
- total_label[~index_positive] = -1
- total_label[index_positive] -= self.class_start
- if int(self.sample_rate) != 1:
- positive = torch.unique(total_label[index_positive], sorted=True)
- if self.num_sample - positive.size(0) >= 0:
- perm = torch.rand(size=[self.num_local], device=self.device)
- perm[positive] = 2.0
- index = torch.topk(perm, k=self.num_sample)[1]
- index = index.sort()[0]
- else:
- index = positive
- self.index = index
- total_label[index_positive] = torch.searchsorted(index, total_label[index_positive])
- self.sub_weight = Parameter(self.weight[index])
- self.sub_weight_mom = self.weight_mom[index]
-
- def forward(self, total_features, norm_weight):
- """ Partial fc forward, `logits = X * sample(W)`
- """
- torch.cuda.current_stream().wait_stream(self.stream)
- logits = linear(total_features, norm_weight)
- return logits
-
- @torch.no_grad()
- def update(self):
- """ Set updated weight and weight_mom to memory bank.
- """
- self.weight_mom[self.index] = self.sub_weight_mom
- self.weight[self.index] = self.sub_weight
-
- def prepare(self, label, optimizer):
- """
-        Get the sampled class centers for calculating softmax.
-
- label: tensor
- Label tensor on each rank.
- optimizer: opt
-            Optimizer for partial fc, which needs access to the weight momentum.
- """
- with torch.cuda.stream(self.stream):
- total_label = torch.zeros(
- size=[self.batch_size * self.world_size], device=self.device, dtype=torch.long)
- dist.all_gather(list(total_label.chunk(self.world_size, dim=0)), label)
- self.sample(total_label)
- optimizer.state.pop(optimizer.param_groups[-1]['params'][0], None)
- optimizer.param_groups[-1]['params'][0] = self.sub_weight
- optimizer.state[self.sub_weight]['momentum_buffer'] = self.sub_weight_mom
- norm_weight = normalize(self.sub_weight)
- return total_label, norm_weight
-
- def forward_backward(self, label, features, optimizer):
- """
- Partial fc forward and backward with model parallel
-
- label: tensor
- Label tensor on each rank(GPU)
- features: tensor
- Features tensor on each rank(GPU)
- optimizer: optimizer
- Optimizer for partial fc
-
- Returns:
- --------
- x_grad: tensor
- The gradient of features.
- loss_v: tensor
- Loss value for cross entropy.
- """
- total_label, norm_weight = self.prepare(label, optimizer)
- total_features = torch.zeros(
- size=[self.batch_size * self.world_size, self.embedding_size], device=self.device)
- dist.all_gather(list(total_features.chunk(self.world_size, dim=0)), features.data)
- total_features.requires_grad = True
-
- logits = self.forward(total_features, norm_weight)
- logits = self.margin_softmax(logits, total_label)
-
- with torch.no_grad():
- max_fc = torch.max(logits, dim=1, keepdim=True)[0]
- dist.all_reduce(max_fc, dist.ReduceOp.MAX)
-
- # calculate exp(logits) and all-reduce
- logits_exp = torch.exp(logits - max_fc)
- logits_sum_exp = logits_exp.sum(dim=1, keepdims=True)
- dist.all_reduce(logits_sum_exp, dist.ReduceOp.SUM)
-
- # calculate prob
- logits_exp.div_(logits_sum_exp)
-
- # get one-hot
- grad = logits_exp
- index = torch.where(total_label != -1)[0]
- one_hot = torch.zeros(size=[index.size()[0], grad.size()[1]], device=grad.device)
- one_hot.scatter_(1, total_label[index, None], 1)
-
- # calculate loss
- loss = torch.zeros(grad.size()[0], 1, device=grad.device)
- loss[index] = grad[index].gather(1, total_label[index, None])
- dist.all_reduce(loss, dist.ReduceOp.SUM)
- loss_v = loss.clamp_min_(1e-30).log_().mean() * (-1)
-
- # calculate grad
- grad[index] -= one_hot
- grad.div_(self.batch_size * self.world_size)
-
- logits.backward(grad)
- if total_features.grad is not None:
- total_features.grad.detach_()
- x_grad: torch.Tensor = torch.zeros_like(features, requires_grad=True)
- # feature gradient all-reduce
- dist.reduce_scatter(x_grad, list(total_features.grad.chunk(self.world_size, dim=0)))
- x_grad = x_grad * self.world_size
- # backward backbone
- return x_grad, loss_v
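
A toy, single-process sketch of the class-center sampling idea documented in `PartialFC.sample` above: positive class centers are always kept, random negatives pad the selection out to `num_sample`, and labels are then remapped into the sampled index space. Sizes are illustrative and there is no distributed setup here.

```python
import torch

num_local, num_sample = 1000, 100              # centers on this rank / sampled subset
labels = torch.randint(0, num_local, (32,))    # labels that landed on this rank

positive = torch.unique(labels, sorted=True)
perm = torch.rand(num_local)
perm[positive] = 2.0                           # force every positive center to be kept
index = torch.topk(perm, k=num_sample)[1].sort()[0]

# remap the original labels into the sampled index space, as PartialFC.sample does
remapped = torch.searchsorted(index, labels)
assert torch.equal(index[remapped], labels)    # every positive center survived sampling
```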
diff --git a/spaces/kevinwang676/SadTalker/src/utils/paste_pic.py b/spaces/kevinwang676/SadTalker/src/utils/paste_pic.py
deleted file mode 100644
index f9989e21e48e64f620f9b148e65fdfe806c53b14..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/SadTalker/src/utils/paste_pic.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import cv2, os
-import numpy as np
-from tqdm import tqdm
-import uuid
-
-from src.utils.videoio import save_video_with_watermark
-
-def paste_pic(video_path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop=False):
-
- if not os.path.isfile(pic_path):
- raise ValueError('pic_path must be a valid path to video/image file')
- elif pic_path.split('.')[-1] in ['jpg', 'png', 'jpeg']:
- # loader for first frame
- full_img = cv2.imread(pic_path)
- else:
- # loader for videos
- video_stream = cv2.VideoCapture(pic_path)
- fps = video_stream.get(cv2.CAP_PROP_FPS)
- full_frames = []
- while 1:
- still_reading, frame = video_stream.read()
- if not still_reading:
- video_stream.release()
- break
- break
- full_img = frame
- frame_h = full_img.shape[0]
- frame_w = full_img.shape[1]
-
- video_stream = cv2.VideoCapture(video_path)
- fps = video_stream.get(cv2.CAP_PROP_FPS)
- crop_frames = []
- while 1:
- still_reading, frame = video_stream.read()
- if not still_reading:
- video_stream.release()
- break
- crop_frames.append(frame)
-
- if len(crop_info) != 3:
- print("you didn't crop the image")
- return
- else:
- r_w, r_h = crop_info[0]
- clx, cly, crx, cry = crop_info[1]
- lx, ly, rx, ry = crop_info[2]
- lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)
- # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx
- # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx
-
- if extended_crop:
- oy1, oy2, ox1, ox2 = cly, cry, clx, crx
- else:
- oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx
-
- tmp_path = str(uuid.uuid4())+'.mp4'
- out_tmp = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*'MP4V'), fps, (frame_w, frame_h))
- for crop_frame in tqdm(crop_frames, 'seamlessClone:'):
- p = cv2.resize(crop_frame.astype(np.uint8), (ox2-ox1, oy2 - oy1))
-
- mask = 255*np.ones(p.shape, p.dtype)
- location = ((ox1+ox2) // 2, (oy1+oy2) // 2)
- gen_img = cv2.seamlessClone(p, full_img, mask, location, cv2.NORMAL_CLONE)
- out_tmp.write(gen_img)
-
- out_tmp.release()
-
- save_video_with_watermark(tmp_path, new_audio_path, full_video_path, watermark=False)
- os.remove(tmp_path)
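
A hypothetical call to `paste_pic` for orientation; every path is made up, and `crop_info` mirrors the three-part layout the function unpacks: `((resized_w, resized_h), (clx, cly, crx, cry), (lx, ly, rx, ry))`.

```python
crop_info = ((512, 512), (60, 40, 452, 480), (20, 30, 360, 410))  # illustrative boxes

paste_pic(
    video_path="results/face_render.mp4",     # cropped talking-face clip
    pic_path="examples/source_image.png",     # original full-frame image
    crop_info=crop_info,
    new_audio_path="examples/driven_audio.wav",
    full_video_path="results/full_frame.mp4",
)
```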
diff --git a/spaces/kevinwang676/VoiceChanger/src/face3d/data/base_dataset.py b/spaces/kevinwang676/VoiceChanger/src/face3d/data/base_dataset.py
deleted file mode 100644
index 1bd57d082d519f512d7114b4f867b6695fb7de06..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChanger/src/face3d/data/base_dataset.py
+++ /dev/null
@@ -1,125 +0,0 @@
-"""This module implements an abstract base class (ABC) 'BaseDataset' for datasets.
-
-It also includes common transformation functions (e.g., get_transform, __scale_width), which can be later used in subclasses.
-"""
-import random
-import numpy as np
-import torch.utils.data as data
-from PIL import Image
-import torchvision.transforms as transforms
-from abc import ABC, abstractmethod
-
-
-class BaseDataset(data.Dataset, ABC):
- """This class is an abstract base class (ABC) for datasets.
-
- To create a subclass, you need to implement the following four functions:
- -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
- -- <__len__>: return the size of dataset.
- -- <__getitem__>: get a data point.
-    -- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
- """
-
- def __init__(self, opt):
- """Initialize the class; save the options in the class
-
- Parameters:
- opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
- """
- self.opt = opt
- # self.root = opt.dataroot
- self.current_epoch = 0
-
- @staticmethod
- def modify_commandline_options(parser, is_train):
- """Add new dataset-specific options, and rewrite default values for existing options.
-
- Parameters:
- parser -- original option parser
- is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
-
- Returns:
- the modified parser.
- """
- return parser
-
- @abstractmethod
- def __len__(self):
- """Return the total number of images in the dataset."""
- return 0
-
- @abstractmethod
- def __getitem__(self, index):
- """Return a data point and its metadata information.
-
- Parameters:
- index - - a random integer for data indexing
-
- Returns:
-            a dictionary of data with their names. It usually contains the data itself and its metadata information.
- """
- pass
-
-
-def get_transform(grayscale=False):
- transform_list = []
- if grayscale:
- transform_list.append(transforms.Grayscale(1))
- transform_list += [transforms.ToTensor()]
- return transforms.Compose(transform_list)
-
-def get_affine_mat(opt, size):
- shift_x, shift_y, scale, rot_angle, flip = 0., 0., 1., 0., False
- w, h = size
-
- if 'shift' in opt.preprocess:
- shift_pixs = int(opt.shift_pixs)
- shift_x = random.randint(-shift_pixs, shift_pixs)
- shift_y = random.randint(-shift_pixs, shift_pixs)
- if 'scale' in opt.preprocess:
- scale = 1 + opt.scale_delta * (2 * random.random() - 1)
- if 'rot' in opt.preprocess:
- rot_angle = opt.rot_angle * (2 * random.random() - 1)
- rot_rad = -rot_angle * np.pi/180
- if 'flip' in opt.preprocess:
- flip = random.random() > 0.5
-
- shift_to_origin = np.array([1, 0, -w//2, 0, 1, -h//2, 0, 0, 1]).reshape([3, 3])
- flip_mat = np.array([-1 if flip else 1, 0, 0, 0, 1, 0, 0, 0, 1]).reshape([3, 3])
- shift_mat = np.array([1, 0, shift_x, 0, 1, shift_y, 0, 0, 1]).reshape([3, 3])
- rot_mat = np.array([np.cos(rot_rad), np.sin(rot_rad), 0, -np.sin(rot_rad), np.cos(rot_rad), 0, 0, 0, 1]).reshape([3, 3])
- scale_mat = np.array([scale, 0, 0, 0, scale, 0, 0, 0, 1]).reshape([3, 3])
- shift_to_center = np.array([1, 0, w//2, 0, 1, h//2, 0, 0, 1]).reshape([3, 3])
-
- affine = shift_to_center @ scale_mat @ rot_mat @ shift_mat @ flip_mat @ shift_to_origin
- affine_inv = np.linalg.inv(affine)
- return affine, affine_inv, flip
-
-def apply_img_affine(img, affine_inv, method=Image.BICUBIC):
- return img.transform(img.size, Image.AFFINE, data=affine_inv.flatten()[:6], resample=Image.BICUBIC)
-
-def apply_lm_affine(landmark, affine, flip, size):
- _, h = size
- lm = landmark.copy()
- lm[:, 1] = h - 1 - lm[:, 1]
- lm = np.concatenate((lm, np.ones([lm.shape[0], 1])), -1)
- lm = lm @ np.transpose(affine)
- lm[:, :2] = lm[:, :2] / lm[:, 2:]
- lm = lm[:, :2]
- lm[:, 1] = h - 1 - lm[:, 1]
- if flip:
- lm_ = lm.copy()
- lm_[:17] = lm[16::-1]
- lm_[17:22] = lm[26:21:-1]
- lm_[22:27] = lm[21:16:-1]
- lm_[31:36] = lm[35:30:-1]
- lm_[36:40] = lm[45:41:-1]
- lm_[40:42] = lm[47:45:-1]
- lm_[42:46] = lm[39:35:-1]
- lm_[46:48] = lm[41:39:-1]
- lm_[48:55] = lm[54:47:-1]
- lm_[55:60] = lm[59:54:-1]
- lm_[60:65] = lm[64:59:-1]
- lm_[65:68] = lm[67:64:-1]
- lm = lm_
- return lm
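
A hedged usage sketch for the augmentation helpers above; `opt` is a hypothetical stand-in carrying only the attributes `get_affine_mat` reads, and the 68 landmarks are random points rather than real face annotations.

```python
from types import SimpleNamespace

import numpy as np
from PIL import Image

opt = SimpleNamespace(preprocess="shift,scale,rot,flip",
                      shift_pixs=10, scale_delta=0.1, rot_angle=10)
img = Image.new("RGB", (224, 224))
landmarks = np.random.rand(68, 2) * 223        # 68 facial landmarks in pixel coords

affine, affine_inv, flip = get_affine_mat(opt, img.size)
img_aug = apply_img_affine(img, affine_inv)                    # warp the image
lm_aug = apply_lm_affine(landmarks, affine, flip, img.size)    # warp the landmarks
```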
diff --git a/spaces/kevinwang676/VoiceChangers/scripts/extension.py b/spaces/kevinwang676/VoiceChangers/scripts/extension.py
deleted file mode 100644
index c90ec25c2811d87a00a2e2a14e270c75d07d713d..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/scripts/extension.py
+++ /dev/null
@@ -1,189 +0,0 @@
-import os, sys
-from pathlib import Path
-import tempfile
-import gradio as gr
-from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call
-from modules.shared import opts, OptionInfo
-from modules import shared, paths, script_callbacks
-import launch
-import glob
-from huggingface_hub import snapshot_download
-
-
-
-def check_all_files_safetensor(current_dir):
- kv = {
- "SadTalker_V0.0.2_256.safetensors": "sadtalker-256",
- "SadTalker_V0.0.2_512.safetensors": "sadtalker-512",
- "mapping_00109-model.pth.tar" : "mapping-109" ,
- "mapping_00229-model.pth.tar" : "mapping-229" ,
- }
-
- if not os.path.isdir(current_dir):
- return False
-
- dirs = os.listdir(current_dir)
-
- for f in dirs:
- if f in kv.keys():
- del kv[f]
-
- return len(kv.keys()) == 0
-
-def check_all_files(current_dir):
- kv = {
- "auido2exp_00300-model.pth": "audio2exp",
- "auido2pose_00140-model.pth": "audio2pose",
- "epoch_20.pth": "face_recon",
- "facevid2vid_00189-model.pth.tar": "face-render",
- "mapping_00109-model.pth.tar" : "mapping-109" ,
- "mapping_00229-model.pth.tar" : "mapping-229" ,
- "wav2lip.pth": "wav2lip",
- "shape_predictor_68_face_landmarks.dat": "dlib",
- }
-
- if not os.path.isdir(current_dir):
- return False
-
- dirs = os.listdir(current_dir)
-
- for f in dirs:
- if f in kv.keys():
- del kv[f]
-
- return len(kv.keys()) == 0
-
-
-
-def download_model(local_dir='./checkpoints'):
- REPO_ID = 'vinthony/SadTalker'
- snapshot_download(repo_id=REPO_ID, local_dir=local_dir, local_dir_use_symlinks=False)
-
-def get_source_image(image):
- return image
-
-def get_img_from_txt2img(x):
- talker_path = Path(paths.script_path) / "outputs"
- imgs_from_txt_dir = str(talker_path / "txt2img-images/")
- imgs = glob.glob(imgs_from_txt_dir+'/*/*.png')
- imgs.sort(key=lambda x:os.path.getmtime(os.path.join(imgs_from_txt_dir, x)))
- img_from_txt_path = os.path.join(imgs_from_txt_dir, imgs[-1])
- return img_from_txt_path, img_from_txt_path
-
-def get_img_from_img2img(x):
- talker_path = Path(paths.script_path) / "outputs"
- imgs_from_img_dir = str(talker_path / "img2img-images/")
- imgs = glob.glob(imgs_from_img_dir+'/*/*.png')
- imgs.sort(key=lambda x:os.path.getmtime(os.path.join(imgs_from_img_dir, x)))
- img_from_img_path = os.path.join(imgs_from_img_dir, imgs[-1])
- return img_from_img_path, img_from_img_path
-
-def get_default_checkpoint_path():
- # check the path of models/checkpoints and extensions/
- checkpoint_path = Path(paths.script_path) / "models"/ "SadTalker"
- extension_checkpoint_path = Path(paths.script_path) / "extensions"/ "SadTalker" / "checkpoints"
-
- if check_all_files_safetensor(checkpoint_path):
-        # print('found sadtalker checkpoint in ' + str(checkpoint_path))
- return checkpoint_path
-
- if check_all_files_safetensor(extension_checkpoint_path):
-        # print('found sadtalker checkpoint in ' + str(extension_checkpoint_path))
- return extension_checkpoint_path
-
- if check_all_files(checkpoint_path):
-        # print('found sadtalker checkpoint in ' + str(checkpoint_path))
- return checkpoint_path
-
- if check_all_files(extension_checkpoint_path):
-        # print('found sadtalker checkpoint in ' + str(extension_checkpoint_path))
- return extension_checkpoint_path
-
- return None
-
-
-
-def install():
-
- kv = {
- "face_alignment": "face-alignment==1.3.5",
- "imageio": "imageio==2.19.3",
- "imageio_ffmpeg": "imageio-ffmpeg==0.4.7",
- "librosa":"librosa==0.8.0",
- "pydub":"pydub==0.25.1",
- "scipy":"scipy==1.8.1",
- "tqdm": "tqdm",
- "yacs":"yacs==0.1.8",
- "yaml": "pyyaml",
- "av":"av",
- "gfpgan": "gfpgan",
- }
-
- # # dlib is not necessary currently
- # if 'darwin' in sys.platform:
- # kv['dlib'] = "dlib"
- # else:
- # kv['dlib'] = 'dlib-bin'
-
- # #### we need to have a newer version of imageio for our method.
- # launch.run_pip("install imageio==2.19.3", "requirements for SadTalker")
-
- for k,v in kv.items():
- if not launch.is_installed(k):
- print(k, launch.is_installed(k))
- launch.run_pip("install "+ v, "requirements for SadTalker")
-
- if os.getenv('SADTALKER_CHECKPOINTS'):
- print('load Sadtalker Checkpoints from '+ os.getenv('SADTALKER_CHECKPOINTS'))
-
- elif get_default_checkpoint_path() is not None:
- os.environ['SADTALKER_CHECKPOINTS'] = str(get_default_checkpoint_path())
- else:
-
- print(
-            """
-            SadTalker will not download all the files from Hugging Face automatically, since that would take a long time.
-
-            Please manually set SADTALKER_CHECKPOINTS in `webui_user.bat` (Windows) or `webui_user.sh` (Linux).
- """
- )
-
- # python = sys.executable
-
- # launch.run(f'"{python}" -m pip uninstall -y huggingface_hub', live=True)
- # launch.run(f'"{python}" -m pip install --upgrade git+https://github.com/huggingface/huggingface_hub@main', live=True)
-    # ### run the scripts to download models to the correct location.
- # # print('download models for SadTalker')
- # # launch.run("cd " + paths.script_path+"/extensions/SadTalker && bash ./scripts/download_models.sh", live=True)
- # # print('SadTalker is successfully installed!')
- # download_model(paths.script_path+'/extensions/SadTalker/checkpoints')
-
-
-def on_ui_tabs():
- install()
-
- sys.path.extend([paths.script_path+'/extensions/SadTalker'])
-
- repo_dir = paths.script_path+'/extensions/SadTalker/'
-
- result_dir = opts.sadtalker_result_dir
- os.makedirs(result_dir, exist_ok=True)
-
- from app_sadtalker import sadtalker_demo
-
- if os.getenv('SADTALKER_CHECKPOINTS'):
- checkpoint_path = os.getenv('SADTALKER_CHECKPOINTS')
- else:
- checkpoint_path = repo_dir+'checkpoints/'
-
- audio_to_video = sadtalker_demo(checkpoint_path=checkpoint_path, config_path=repo_dir+'src/config', warpfn = wrap_queued_call)
-
- return [(audio_to_video, "SadTalker", "extension")]
-
-def on_ui_settings():
- talker_path = Path(paths.script_path) / "outputs"
- section = ('extension', "SadTalker")
- opts.add_option("sadtalker_result_dir", OptionInfo(str(talker_path / "SadTalker/"), "Path to save results of sadtalker", section=section))
-
-script_callbacks.on_ui_settings(on_ui_settings)
-script_callbacks.on_ui_tabs(on_ui_tabs)
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv_ws.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv_ws.py
deleted file mode 100644
index a3941e27874993418b3b5708d5a7485f175ff9c8..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv_ws.py
+++ /dev/null
@@ -1,148 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .registry import CONV_LAYERS
-
-
-def conv_ws_2d(input,
- weight,
- bias=None,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- eps=1e-5):
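-    """Functional 2D convolution with Weight Standardization.
-
-    The weights of each output filter are standardized to zero mean and unit
-    standard deviation (``eps`` is added to the std for numerical stability)
-    before the standard ``F.conv2d`` is applied.
-    """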
- c_in = weight.size(0)
- weight_flat = weight.view(c_in, -1)
- mean = weight_flat.mean(dim=1, keepdim=True).view(c_in, 1, 1, 1)
- std = weight_flat.std(dim=1, keepdim=True).view(c_in, 1, 1, 1)
- weight = (weight - mean) / (std + eps)
- return F.conv2d(input, weight, bias, stride, padding, dilation, groups)
-
-
-@CONV_LAYERS.register_module('ConvWS')
-class ConvWS2d(nn.Conv2d):
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- bias=True,
- eps=1e-5):
- super(ConvWS2d, self).__init__(
- in_channels,
- out_channels,
- kernel_size,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups,
- bias=bias)
- self.eps = eps
-
- def forward(self, x):
- return conv_ws_2d(x, self.weight, self.bias, self.stride, self.padding,
- self.dilation, self.groups, self.eps)
-
-
-@CONV_LAYERS.register_module(name='ConvAWS')
-class ConvAWS2d(nn.Conv2d):
-    """AWS (Adaptive Weight Standardization)
-
-    This is a variant of Weight Standardization
-    (https://arxiv.org/pdf/1903.10520.pdf).
-    It is used in DetectoRS to avoid NaN
-    (https://arxiv.org/pdf/2006.02334.pdf).
-
- Args:
- in_channels (int): Number of channels in the input image
- out_channels (int): Number of channels produced by the convolution
- kernel_size (int or tuple): Size of the conv kernel
- stride (int or tuple, optional): Stride of the convolution. Default: 1
- padding (int or tuple, optional): Zero-padding added to both sides of
- the input. Default: 0
- dilation (int or tuple, optional): Spacing between kernel elements.
- Default: 1
- groups (int, optional): Number of blocked connections from input
- channels to output channels. Default: 1
- bias (bool, optional): If set True, adds a learnable bias to the
- output. Default: True
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- bias=True):
- super().__init__(
- in_channels,
- out_channels,
- kernel_size,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups,
- bias=bias)
- self.register_buffer('weight_gamma',
- torch.ones(self.out_channels, 1, 1, 1))
- self.register_buffer('weight_beta',
- torch.zeros(self.out_channels, 1, 1, 1))
-
- def _get_weight(self, weight):
- weight_flat = weight.view(weight.size(0), -1)
- mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1)
- std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1)
- weight = (weight - mean) / std
- weight = self.weight_gamma * weight + self.weight_beta
- return weight
-
- def forward(self, x):
- weight = self._get_weight(self.weight)
- return F.conv2d(x, weight, self.bias, self.stride, self.padding,
- self.dilation, self.groups)
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- """Override default load function.
-
- AWS overrides the function _load_from_state_dict to recover
- weight_gamma and weight_beta if they are missing. If weight_gamma and
- weight_beta are found in the checkpoint, this function will return
- after super()._load_from_state_dict. Otherwise, it will compute the
- mean and std of the pretrained weights and store them in weight_beta
- and weight_gamma.
- """
-
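-        # Pre-fill weight_gamma with -1 so that, after loading, a positive mean
-        # indicates the checkpoint actually contained weight_gamma/weight_beta;
-        # otherwise they are reconstructed below from the loaded conv weights.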
- self.weight_gamma.data.fill_(-1)
- local_missing_keys = []
- super()._load_from_state_dict(state_dict, prefix, local_metadata,
- strict, local_missing_keys,
- unexpected_keys, error_msgs)
- if self.weight_gamma.data.mean() > 0:
- for k in local_missing_keys:
- missing_keys.append(k)
- return
- weight = self.weight.data
- weight_flat = weight.view(weight.size(0), -1)
- mean = weight_flat.mean(dim=1).view(-1, 1, 1, 1)
- std = torch.sqrt(weight_flat.var(dim=1) + 1e-5).view(-1, 1, 1, 1)
- self.weight_beta.data.copy_(mean)
- self.weight_gamma.data.copy_(std)
- missing_gamma_beta = [
- k for k in local_missing_keys
- if k.endswith('weight_gamma') or k.endswith('weight_beta')
- ]
- for k in missing_gamma_beta:
- local_missing_keys.remove(k)
- for k in local_missing_keys:
- missing_keys.append(k)
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/sync_bn.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/sync_bn.py
deleted file mode 100644
index c9b016fcbe860989c56cd1040034bcfa60e146d2..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/sync_bn.py
+++ /dev/null
@@ -1,279 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.distributed as dist
-import torch.nn.functional as F
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.module import Module
-from torch.nn.parameter import Parameter
-
-from annotator.uniformer.mmcv.cnn import NORM_LAYERS
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
- 'sync_bn_forward_mean', 'sync_bn_forward_var', 'sync_bn_forward_output',
- 'sync_bn_backward_param', 'sync_bn_backward_data'
-])
-
-
-class SyncBatchNormFunction(Function):
-
- @staticmethod
- def symbolic(g, input, running_mean, running_var, weight, bias, momentum,
- eps, group, group_size, stats_mode):
- return g.op(
- 'mmcv::MMCVSyncBatchNorm',
- input,
- running_mean,
- running_var,
- weight,
- bias,
- momentum_f=momentum,
- eps_f=eps,
- group_i=group,
- group_size_i=group_size,
- stats_mode=stats_mode)
-
- @staticmethod
- def forward(self, input, running_mean, running_var, weight, bias, momentum,
- eps, group, group_size, stats_mode):
- self.momentum = momentum
- self.eps = eps
- self.group = group
- self.group_size = group_size
- self.stats_mode = stats_mode
-
- assert isinstance(
- input, (torch.HalfTensor, torch.FloatTensor,
- torch.cuda.HalfTensor, torch.cuda.FloatTensor)), \
- f'only support Half or Float Tensor, but {input.type()}'
- output = torch.zeros_like(input)
- input3d = input.flatten(start_dim=2)
- output3d = output.view_as(input3d)
- num_channels = input3d.size(1)
-
- # ensure mean/var/norm/std are initialized as zeros
- # ``torch.empty()`` does not guarantee that
- mean = torch.zeros(
- num_channels, dtype=torch.float, device=input3d.device)
- var = torch.zeros(
- num_channels, dtype=torch.float, device=input3d.device)
- norm = torch.zeros_like(
- input3d, dtype=torch.float, device=input3d.device)
- std = torch.zeros(
- num_channels, dtype=torch.float, device=input3d.device)
-
- batch_size = input3d.size(0)
- if batch_size > 0:
- ext_module.sync_bn_forward_mean(input3d, mean)
- batch_flag = torch.ones([1], device=mean.device, dtype=mean.dtype)
- else:
- # skip updating mean and leave it as zeros when the input is empty
- batch_flag = torch.zeros([1], device=mean.device, dtype=mean.dtype)
-
- # synchronize mean and the batch flag
- vec = torch.cat([mean, batch_flag])
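-        # In 'N' mode, weight this worker's statistics by its local batch size so
-        # that, after the all-reduce, dividing by the total batch size yields a
-        # correctly weighted mean even when some workers receive empty batches.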
- if self.stats_mode == 'N':
- vec *= batch_size
- if self.group_size > 1:
- dist.all_reduce(vec, group=self.group)
- total_batch = vec[-1].detach()
- mean = vec[:num_channels]
-
- if self.stats_mode == 'default':
- mean = mean / self.group_size
- elif self.stats_mode == 'N':
- mean = mean / total_batch.clamp(min=1)
- else:
- raise NotImplementedError
-
- # leave var as zeros when the input is empty
- if batch_size > 0:
- ext_module.sync_bn_forward_var(input3d, mean, var)
-
- if self.stats_mode == 'N':
- var *= batch_size
- if self.group_size > 1:
- dist.all_reduce(var, group=self.group)
-
- if self.stats_mode == 'default':
- var /= self.group_size
- elif self.stats_mode == 'N':
- var /= total_batch.clamp(min=1)
- else:
- raise NotImplementedError
-
- # if the total batch size over all the ranks is zero,
- # we should not update the statistics in the current batch
- update_flag = total_batch.clamp(max=1)
- momentum = update_flag * self.momentum
- ext_module.sync_bn_forward_output(
- input3d,
- mean,
- var,
- weight,
- bias,
- running_mean,
- running_var,
- norm,
- std,
- output3d,
- eps=self.eps,
- momentum=momentum,
- group_size=self.group_size)
- self.save_for_backward(norm, std, weight)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(self, grad_output):
- norm, std, weight = self.saved_tensors
- grad_weight = torch.zeros_like(weight)
- grad_bias = torch.zeros_like(weight)
- grad_input = torch.zeros_like(grad_output)
- grad_output3d = grad_output.flatten(start_dim=2)
- grad_input3d = grad_input.view_as(grad_output3d)
-
- batch_size = grad_input3d.size(0)
- if batch_size > 0:
- ext_module.sync_bn_backward_param(grad_output3d, norm, grad_weight,
- grad_bias)
-
- # all reduce
- if self.group_size > 1:
- dist.all_reduce(grad_weight, group=self.group)
- dist.all_reduce(grad_bias, group=self.group)
- grad_weight /= self.group_size
- grad_bias /= self.group_size
-
- if batch_size > 0:
- ext_module.sync_bn_backward_data(grad_output3d, weight,
- grad_weight, grad_bias, norm, std,
- grad_input3d)
-
- return grad_input, None, None, grad_weight, grad_bias, \
- None, None, None, None, None
-
-
-@NORM_LAYERS.register_module(name='MMSyncBN')
-class SyncBatchNorm(Module):
- """Synchronized Batch Normalization.
-
- Args:
-        num_features (int): number of features/channels in the input tensor
- eps (float, optional): a value added to the denominator for numerical
- stability. Defaults to 1e-5.
- momentum (float, optional): the value used for the running_mean and
- running_var computation. Defaults to 0.1.
- affine (bool, optional): whether to use learnable affine parameters.
- Defaults to True.
- track_running_stats (bool, optional): whether to track the running
- mean and variance during training. When set to False, this
- module does not track such statistics, and initializes statistics
- buffers ``running_mean`` and ``running_var`` as ``None``. When
- these buffers are ``None``, this module always uses batch
- statistics in both training and eval modes. Defaults to True.
-        group (int, optional): synchronization of stats happens within
-            each process group individually. By default, synchronization is
-            performed across the whole world. Defaults to None.
-        stats_mode (str, optional): The statistical mode. Available options
-            include ``'default'`` and ``'N'``. Defaults to 'default'.
-            When ``stats_mode=='default'``, it computes the overall statistics
-            using those from each worker with equal weight, i.e., the
-            statistics are synchronized and simply divided by ``group``. This
-            mode will produce inaccurate statistics when empty tensors occur.
-            When ``stats_mode=='N'``, it computes the overall statistics using
-            the total number of samples in each worker, ignoring the group
-            size, i.e., the statistics are synchronized and then divided by
-            the total batch size ``N``. This mode is beneficial when empty
-            tensors occur during training, as it averages the statistics over
-            the actual number of samples.
- """
-
- def __init__(self,
- num_features,
- eps=1e-5,
- momentum=0.1,
- affine=True,
- track_running_stats=True,
- group=None,
- stats_mode='default'):
- super(SyncBatchNorm, self).__init__()
- self.num_features = num_features
- self.eps = eps
- self.momentum = momentum
- self.affine = affine
- self.track_running_stats = track_running_stats
- group = dist.group.WORLD if group is None else group
- self.group = group
- self.group_size = dist.get_world_size(group)
- assert stats_mode in ['default', 'N'], \
- f'"stats_mode" only accepts "default" and "N", got "{stats_mode}"'
- self.stats_mode = stats_mode
- if self.affine:
- self.weight = Parameter(torch.Tensor(num_features))
- self.bias = Parameter(torch.Tensor(num_features))
- else:
- self.register_parameter('weight', None)
- self.register_parameter('bias', None)
- if self.track_running_stats:
- self.register_buffer('running_mean', torch.zeros(num_features))
- self.register_buffer('running_var', torch.ones(num_features))
- self.register_buffer('num_batches_tracked',
- torch.tensor(0, dtype=torch.long))
- else:
- self.register_buffer('running_mean', None)
- self.register_buffer('running_var', None)
- self.register_buffer('num_batches_tracked', None)
- self.reset_parameters()
-
- def reset_running_stats(self):
- if self.track_running_stats:
- self.running_mean.zero_()
- self.running_var.fill_(1)
- self.num_batches_tracked.zero_()
-
- def reset_parameters(self):
- self.reset_running_stats()
- if self.affine:
-            self.weight.data.uniform_()  # PyTorch uses ones_()
- self.bias.data.zero_()
-
- def forward(self, input):
- if input.dim() < 2:
- raise ValueError(
- f'expected at least 2D input, got {input.dim()}D input')
- if self.momentum is None:
- exponential_average_factor = 0.0
- else:
- exponential_average_factor = self.momentum
-
- if self.training and self.track_running_stats:
- if self.num_batches_tracked is not None:
- self.num_batches_tracked += 1
- if self.momentum is None: # use cumulative moving average
- exponential_average_factor = 1.0 / float(
- self.num_batches_tracked)
- else: # use exponential moving average
- exponential_average_factor = self.momentum
-
- if self.training or not self.track_running_stats:
- return SyncBatchNormFunction.apply(
- input, self.running_mean, self.running_var, self.weight,
- self.bias, exponential_average_factor, self.eps, self.group,
- self.group_size, self.stats_mode)
- else:
- return F.batch_norm(input, self.running_mean, self.running_var,
- self.weight, self.bias, False,
- exponential_average_factor, self.eps)
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'({self.num_features}, '
- s += f'eps={self.eps}, '
- s += f'momentum={self.momentum}, '
- s += f'affine={self.affine}, '
- s += f'track_running_stats={self.track_running_stats}, '
- s += f'group_size={self.group_size},'
- s += f'stats_mode={self.stats_mode})'
- return s
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/nonautoregressive_translation/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/nonautoregressive_translation/README.md
deleted file mode 100644
index 8793e225c99732c42c9c19e22075cde37c73341d..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/nonautoregressive_translation/README.md
+++ /dev/null
@@ -1,146 +0,0 @@
-# Non-autoregressive Neural Machine Translation (NAT)
-
-This page mainly includes instructions for reproducing results from the following papers
-* [Levenshtein Transformer (Gu et al., 2019)](https://arxiv.org/abs/1905.11006).
-* [Understanding Knowledge Distillation in Non-autoregressive Machine Translation (Zhou et al., 2019)](https://arxiv.org/abs/1911.02727).
-
-We also provide our own implementations of several popular non-autoregressive models as a reference:
-* [Non-Autoregressive Neural Machine Translation (Gu et al., 2017)](https://arxiv.org/abs/1711.02281)
-* [Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement (Lee et al., 2018)](https://arxiv.org/abs/1802.06901)
-* [Insertion Transformer: Flexible Sequence Generation via Insertion Operations (Stern et al., 2019)](https://arxiv.org/abs/1902.03249)
-* [Mask-Predict: Parallel Decoding of Conditional Masked Language Models (Ghazvininejad et al., 2019)](https://arxiv.org/abs/1904.09324v2)
-* [Fast Structured Decoding for Sequence Models (Sun et al., 2019)](https://arxiv.org/abs/1910.11555)
-
-## Dataset
-
-First, follow the [instructions to download and preprocess the WMT'14 En-De dataset](../translation#wmt14-english-to-german-convolutional).
-Make sure to learn a joint vocabulary by passing the `--joined-dictionary` option to `fairseq-preprocess`.
-
-### Knowledge Distillation
-Following [Gu et al. 2019](https://arxiv.org/abs/1905.11006), [knowledge distillation](https://arxiv.org/abs/1606.07947) from an autoregressive model can effectively simplify the training data distribution, which is sometimes essential for NAT-based models to learn good translations.
-The easiest way of performing distillation is to follow the [instructions for training a standard transformer model](../translation) on the same data, and then decode the training set to produce a distillation dataset for NAT.
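-
-For instance, a minimal sketch of such a decoding run with a trained autoregressive teacher (the data path, checkpoint name, and output file below are placeholders, not part of the official recipe):
-```bash
-fairseq-generate \
-    data-bin/wmt14_en_de \
-    --gen-subset train \
-    --path at_checkpoints/checkpoint_best.pt \
-    --beam 5 --remove-bpe \
-    --max-tokens 4096 > train_decode.out
-```
-The hypotheses extracted from `train_decode.out` then replace the original target side before re-binarizing the data with `fairseq-preprocess`.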
-
-### Download
-We also provide the preprocessed [original](http://dl.fbaipublicfiles.com/nat/original_dataset.zip) and [distillation](http://dl.fbaipublicfiles.com/nat/distill_dataset.zip) datasets. Note that you still need to binarize the datasets on your own.
-
-
-## Train a model
-
-Then we can train a non-autoregressive model using the `translation_lev` task and the new criterion `nat_loss`.
-Use the `--noise` flag to specify the input noise applied to the target sentences.
-By default, the task is configured for the *Levenshtein Transformer*, with `--noise='random_delete'`. Full scripts to run other models can also be found [here](./scripts.md).
-
-The following command will train a *Levenshtein Transformer* on the binarized dataset.
-
-```bash
-fairseq-train \
- data-bin/wmt14_en_de_distill \
- --save-dir checkpoints \
- --ddp-backend=legacy_ddp \
- --task translation_lev \
- --criterion nat_loss \
- --arch levenshtein_transformer \
- --noise random_delete \
- --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9,0.98)' \
- --lr 0.0005 --lr-scheduler inverse_sqrt \
- --stop-min-lr '1e-09' --warmup-updates 10000 \
- --warmup-init-lr '1e-07' --label-smoothing 0.1 \
- --dropout 0.3 --weight-decay 0.01 \
- --decoder-learned-pos \
- --encoder-learned-pos \
- --apply-bert-init \
- --log-format 'simple' --log-interval 100 \
- --fixed-validation-seed 7 \
- --max-tokens 8000 \
- --save-interval-updates 10000 \
- --max-update 300000
-```
-
-## Translate
-
-Once a model is trained, we can generate translations using an `iterative_refinement_generator`, which starts from the model's initial output and iteratively reads and greedily refines the translation until (1) the model predicts the same translation for two consecutive iterations, or (2) the generator reaches the maximum number of iterations (`--iter-decode-max-iter`). Use `--print-step` to check the actual number of iterations for each sentence.
-
-For the *Levenshtein Transformer*, it sometimes helps to apply an `--iter-decode-eos-penalty` (typically 0~3) to penalize the model for finishing generation too early and producing translations that are too short.
-
-For example, to generate with `--iter-decode-max-iter=9`:
-```bash
-fairseq-generate \
- data-bin/wmt14_en_de_distill \
- --gen-subset test \
- --task translation_lev \
- --path checkpoints/checkpoint_best.pt \
- --iter-decode-max-iter 9 \
- --iter-decode-eos-penalty 0 \
- --beam 1 --remove-bpe \
- --print-step \
- --batch-size 400
-```
-At the end of generation, the tokenized BLEU score for the translation is printed.
-
-## Advanced Decoding Methods
-### Ensemble
-The NAT models use special implementations of [ensembling](https://github.com/fairinternal/fairseq-py/blob/b98d88da52f2f21f1b169bab8c70c1c4ca19a768/fairseq/sequence_generator.py#L522) to support iterative refinement and a variety of parallel operations in different models, while sharing the same API as standard autoregressive models:
-```bash
-fairseq-generate \
- data-bin/wmt14_en_de_distill \
- --gen-subset test \
- --task translation_lev \
- --path checkpoint_1.pt:checkpoint_2.pt:checkpoint_3.pt \
- --iter-decode-max-iter 9 \
- --iter-decode-eos-penalty 0 \
- --beam 1 --remove-bpe \
- --print-step \
- --batch-size 400
-```
-We use ``:`` to separate multiple models. Note that not all NAT models support ensembling at the moment.
-
-
-### Length-beam
-For models that predict lengths before decoding (e.g. the vanilla NAT, Mask-Predict, etc.), translation quality can be improved by varying the target lengths around the predicted value and translating the same example multiple times in parallel. We can then select the best translation as the one with the highest score defined by your model's output.
-
-Note that not all models support length beams. For models which dynamically change the lengths (e.g. the *Insertion Transformer* and *Levenshtein Transformer*), the same trick does not apply.
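-
-As a rough sketch (assuming a length-predicting model such as the vanilla NAT or Mask-Predict; the checkpoint path is a placeholder), a length beam alone can be enabled with `--iter-decode-with-beam`:
-```bash
-fairseq-generate \
-    data-bin/wmt14_en_de_distill \
-    --gen-subset test \
-    --task translation_lev \
-    --path nat_checkpoints/checkpoint_best.pt \
-    --iter-decode-max-iter 9 \
-    --iter-decode-with-beam 5 \
-    --beam 1 --remove-bpe \
-    --print-step \
-    --batch-size 400
-```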
-
-### Re-ranking
-If the model generates multiple translations with a length beam, we can also introduce an autoregressive model to rerank the translations, since scoring with an autoregressive model is much faster than decoding with it.
-
-For example, to generate translations with length beam and reranking,
-```bash
-fairseq-generate \
- data-bin/wmt14_en_de_distill \
- --gen-subset test \
- --task translation_lev \
- --path checkpoints/checkpoint_best.pt:at_checkpoints/checkpoint_best.pt \
- --iter-decode-max-iter 9 \
- --iter-decode-eos-penalty 0 \
- --iter-decode-with-beam 9 \
- --iter-decode-with-external-reranker \
- --beam 1 --remove-bpe \
- --print-step \
- --batch-size 100
-```
-Note that we need to make sure the autoregressive model shares the same vocabulary as our target non-autoregressive model.
-
-
-## Citation
-
-```bibtex
-@incollection{NIPS2019_9297,
- title = {Levenshtein Transformer},
- author = {Gu, Jiatao and Wang, Changhan and Zhao, Junbo},
- booktitle = {Advances in Neural Information Processing Systems 32},
- editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
- pages = {11179--11189},
- year = {2019},
- publisher = {Curran Associates, Inc.},
- url = {http://papers.nips.cc/paper/9297-levenshtein-transformer.pdf}
-}
-```
-```bibtex
-@article{zhou2019understanding,
- title={Understanding Knowledge Distillation in Non-autoregressive Machine Translation},
- author={Zhou, Chunting and Neubig, Graham and Gu, Jiatao},
- journal={arXiv preprint arXiv:1911.02727},
- year={2019}
-}
-```
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/roundTools.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/roundTools.py
deleted file mode 100644
index 48a47c07c8575895f894a24065046bc308a69b97..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/roundTools.py
+++ /dev/null
@@ -1,109 +0,0 @@
-"""
-Various round-to-integer helpers.
-"""
-
-import math
-import functools
-import logging
-
-log = logging.getLogger(__name__)
-
-__all__ = [
- "noRound",
- "otRound",
- "maybeRound",
- "roundFunc",
-]
-
-
-def noRound(value):
- return value
-
-
-def otRound(value):
- """Round float value to nearest integer towards ``+Infinity``.
-
-    The OpenType spec (in the section on "normalization" of OpenType Font Variations)
- defines the required method for converting floating point values to
- fixed-point. In particular it specifies the following rounding strategy:
-
- for fractional values of 0.5 and higher, take the next higher integer;
- for other fractional values, truncate.
-
- This function rounds the floating-point value according to this strategy
- in preparation for conversion to fixed-point.
-
- Args:
- value (float): The input floating-point value.
-
-    Returns:
-        int: The rounded value.
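-
-    For example (following the ``floor(value + 0.5)`` implementation below)::
-
-        >>> otRound(6.5)
-        7
-        >>> otRound(-6.5)
-        -6
-        >>> otRound(-6.4)
-        -6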
- """
- # See this thread for how we ended up with this implementation:
- # https://github.com/fonttools/fonttools/issues/1248#issuecomment-383198166
- return int(math.floor(value + 0.5))
-
-
-def maybeRound(v, tolerance, round=otRound):
- rounded = round(v)
- return rounded if abs(rounded - v) <= tolerance else v
-
-
-def roundFunc(tolerance, round=otRound):
- if tolerance < 0:
- raise ValueError("Rounding tolerance must be positive")
-
- if tolerance == 0:
- return noRound
-
- if tolerance >= 0.5:
- return round
-
- return functools.partial(maybeRound, tolerance=tolerance, round=round)
-
-
-def nearestMultipleShortestRepr(value: float, factor: float) -> str:
- """Round to nearest multiple of factor and return shortest decimal representation.
-
- This chooses the float that is closer to a multiple of the given factor while
- having the shortest decimal representation (the least number of fractional decimal
- digits).
-
- For example, given the following:
-
- >>> nearestMultipleShortestRepr(-0.61883544921875, 1.0/(1<<14))
- '-0.61884'
-
- Useful when you need to serialize or print a fixed-point number (or multiples
- thereof, such as F2Dot14 fractions of 180 degrees in COLRv1 PaintRotate) in
- a human-readable form.
-
- Args:
- value (value): The value to be rounded and serialized.
- factor (float): The value which the result is a close multiple of.
-
- Returns:
- str: A compact string representation of the value.
- """
- if not value:
- return "0.0"
-
- value = otRound(value / factor) * factor
- eps = 0.5 * factor
- lo = value - eps
- hi = value + eps
- # If the range of valid choices spans an integer, return the integer.
- if int(lo) != int(hi):
- return str(float(round(value)))
-
- fmt = "%.8f"
- lo = fmt % lo
- hi = fmt % hi
- assert len(lo) == len(hi) and lo != hi
- for i in range(len(lo)):
- if lo[i] != hi[i]:
- break
- period = lo.find(".")
- assert period < i
- fmt = "%%.%df" % (i - period)
- return fmt % value
diff --git a/spaces/leogabraneth/text-generation-webui-main/api-examples/api-example-chat-stream.py b/spaces/leogabraneth/text-generation-webui-main/api-examples/api-example-chat-stream.py
deleted file mode 100644
index 3a1502dd4b12e095d90f45e32307af6ad87d313a..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/api-examples/api-example-chat-stream.py
+++ /dev/null
@@ -1,114 +0,0 @@
-import asyncio
-import html
-import json
-import sys
-
-try:
- import websockets
-except ImportError:
- print("Websockets package not found. Make sure it's installed.")
-
-# For local streaming, the websockets are hosted without ssl - ws://
-HOST = 'localhost:5005'
-URI = f'ws://{HOST}/api/v1/chat-stream'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - wss://
-# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/chat-stream'
-
-
-async def run(user_input, history):
- # Note: the selected defaults change from time to time.
- request = {
- 'user_input': user_input,
- 'max_new_tokens': 250,
- 'auto_max_new_tokens': False,
- 'max_tokens_second': 0,
- 'history': history,
- 'mode': 'instruct', # Valid options: 'chat', 'chat-instruct', 'instruct'
- 'character': 'Example',
- 'instruction_template': 'Vicuna-v1.1', # Will get autodetected if unset
- 'your_name': 'You',
- # 'name1': 'name of user', # Optional
- # 'name2': 'name of character', # Optional
- # 'context': 'character context', # Optional
- # 'greeting': 'greeting', # Optional
- # 'name1_instruct': 'You', # Optional
- # 'name2_instruct': 'Assistant', # Optional
- # 'context_instruct': 'context_instruct', # Optional
- # 'turn_template': 'turn_template', # Optional
- 'regenerate': False,
- '_continue': False,
- 'chat_instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>',
-
- # Generation params. If 'preset' is set to different than 'None', the values
- # in presets/preset-name.yaml are used instead of the individual numbers.
- 'preset': 'None',
- 'do_sample': True,
- 'temperature': 0.7,
- 'top_p': 0.1,
- 'typical_p': 1,
- 'epsilon_cutoff': 0, # In units of 1e-4
- 'eta_cutoff': 0, # In units of 1e-4
- 'tfs': 1,
- 'top_a': 0,
- 'repetition_penalty': 1.18,
- 'presence_penalty': 0,
- 'frequency_penalty': 0,
- 'repetition_penalty_range': 0,
- 'top_k': 40,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
- 'mirostat_mode': 0,
- 'mirostat_tau': 5,
- 'mirostat_eta': 0.1,
- 'grammar_string': '',
- 'guidance_scale': 1,
- 'negative_prompt': '',
-
- 'seed': -1,
- 'add_bos_token': True,
- 'truncation_length': 2048,
- 'ban_eos_token': False,
- 'custom_token_bans': '',
- 'skip_special_tokens': True,
- 'stopping_strings': []
- }
-
- async with websockets.connect(URI, ping_interval=None) as websocket:
- await websocket.send(json.dumps(request))
-
- while True:
- incoming_data = await websocket.recv()
- incoming_data = json.loads(incoming_data)
-
- match incoming_data['event']:
- case 'text_stream':
- yield incoming_data['history']
- case 'stream_end':
- return
-
-
-async def print_response_stream(user_input, history):
- cur_len = 0
- async for new_history in run(user_input, history):
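-        # Print only the portion of the latest visible reply that is new since the
-        # previous chunk, so tokens appear incrementally.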
- cur_message = new_history['visible'][-1][1][cur_len:]
- cur_len += len(cur_message)
- print(html.unescape(cur_message), end='')
- sys.stdout.flush() # If we don't flush, we won't see tokens in realtime.
-
-
-if __name__ == '__main__':
- user_input = "Please give me a step-by-step guide on how to plant a tree in my backyard."
-
- # Basic example
- history = {'internal': [], 'visible': []}
-
- # "Continue" example. Make sure to set '_continue' to True above
- # arr = [user_input, 'Surely, here is']
- # history = {'internal': [arr], 'visible': [arr]}
-
- asyncio.run(print_response_stream(user_input, history))
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Rab Ne Bana Di Jodi 720p BEST.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download Rab Ne Bana Di Jodi 720p BEST.md
deleted file mode 100644
index 53eafac38dfba296a09cef41711490a7c58e9415..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Rab Ne Bana Di Jodi 720p BEST.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
Now, you can download Hd quality full movie of Rab Ne Bana Di Jodi. All the movie download videos from here. The users can download the the full movie in a high-quality resolution and within a short time.
Upcomer To date, you can download not only pirated movies but also full HD movies of all categories. You can choose the screen size, between 4K, 1080p, and 720p. 1080p to download the film also users can find movies using search bar.Upcomer is a site known for its leaks of pirated films. It is also prohibited in many other countries, including India. You should not go download movies.
-
It is not allowed to download the movie for free. If you like this movie you can buy Jodi 720p Rab ne Bana full movie from MovieLovers.
-
Update about the movie: Rab Ne Bana Di Jodi is an upcoming Indian romantic comedy film that is directed by Ali Abbas Zafar, and co-directed by Karan Johar. The film will be released on August 14, 2018 by India’s leading distributor Fiac. The film is to be the sequel to Johar’s 2007 romantic comedy, My Name is Khan. It is based on the novel of the same name by Khushwant Singh, which is loosely based on the 1972 historical novel Court Scandal by William F. Buckley, Jr..
-
Featured Mp4 is the Title : Rab Ne Bana Di Jodi 720p. The Bitrate is Normal and the size is 1080pMAY 2016, 2hrs 20 min. Video File Released Date : March 15, 2016 Video File Category : Movie Format : FLV
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Initial D 5th Stage 1080p Torrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Initial D 5th Stage 1080p Torrent.md
deleted file mode 100644
index d42f0a4fc72903628983796a608841b026c94dc3..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Initial D 5th Stage 1080p Torrent.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
in children, immunosuppression is rare, and clinicians should not be so quick to prescribe systemic immunosuppression as a way to treat covid-19, dr. merks says. and indeed, there is no support for any form of immunosuppression in the best available guidelines, the authors note.
-
sigma-aldrich is a leading science solutions provider for life, materials and diagnostic science. as both an academic research institution and in industry, the company's cutting-edge technologies, innovative solutions and advanced scientific support help clients around the world for progress.
sigma-aldrich has a policy of objectivity and provides equal opportunity for qualified persons regardless of race, color, creed, religion, national origin, sex, age, disability, marital status, scientific evidence and socioeconomic class.
-
its so huge in the women s industry now, like 50-70% of the female population is into it. its big for men, too. basically, theres no money in porn anymore. actors have to accept even lower wages, and production companies are less likely to continue to give them work if there are other options out there. what used to be a full time job for actors is now hit and miss. if a film is well done, it can do pretty good business, especially if its a popular title. but theres no guarantee that youll make a living on a title unless youre the producer.
-
there are so many porn stars and directors today who focus on getting girls pregnant, or getting women hooked on cocaine or methamphetamines, or getting them to be brutally raped, or simulate rape. its very true today. and its a vicious cycle because these actresses usually cant keep their money when they have babies, and they cant find work because they have a baby, and so its a no-win situation for them. it takes so many years for actresses to get back into the business, and so many of them end up getting addicted. its not fair on them or their families, but thats the reality.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lkeab/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py b/spaces/lkeab/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py
deleted file mode 100644
index 249387fffeed7c02f592ecc84ee5a295533b1ed7..0000000000000000000000000000000000000000
--- a/spaces/lkeab/transfiner/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from .mask_rcnn_R_50_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-from detectron2.config import LazyCall as L
-from detectron2.modeling.backbone import RegNet
-from detectron2.modeling.backbone.regnet import SimpleStem, ResBottleneckBlock
-
-# Config source:
-# https://github.com/facebookresearch/detectron2/blob/master/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py # noqa
-model.backbone.bottom_up = L(RegNet)(
- stem_class=SimpleStem,
- stem_width=32,
- block_class=ResBottleneckBlock,
- depth=23,
- w_a=38.65,
- w_0=96,
- w_m=2.43,
- group_width=40,
- norm="SyncBN",
- out_features=["s1", "s2", "s3", "s4"],
-)
-model.pixel_std = [57.375, 57.120, 58.395]
-
-# RegNets benefit from enabling cudnn benchmark mode
-train.cudnn_benchmark = True
diff --git a/spaces/lora-x/Backpack/app.py b/spaces/lora-x/Backpack/app.py
deleted file mode 100644
index f189f226335bad729a094cb99ae9bd825dfc89c9..0000000000000000000000000000000000000000
--- a/spaces/lora-x/Backpack/app.py
+++ /dev/null
@@ -1,550 +0,0 @@
-import torch
-import transformers
-from transformers import AutoModelForCausalLM
-import pandas as pd
-import gradio as gr
-
-# Build model & get some layers
-tokenizer = transformers.AutoTokenizer.from_pretrained('gpt2')
-m = AutoModelForCausalLM.from_pretrained("stanfordnlp/backpack-gpt2", trust_remote_code=True)
-m.eval()
-
-lm_head = m.get_lm_head() # (V, d)
-word_embeddings = m.backpack.get_word_embeddings() # (V, d)
-sense_network = m.backpack.get_sense_network() # (V, nv, d)
-num_senses = m.backpack.get_num_senses()
-sense_names = [i for i in range(num_senses)]
-
-"""
-Single token sense lookup
-"""
-def visualize_word(word, count=10, remove_space=False):
-
- if not remove_space:
- word = ' ' + word
- print(f"Looking up word '{word}'...")
-
- token_ids = tokenizer(word)['input_ids']
- tokens = [tokenizer.decode(token_id) for token_id in token_ids]
- tokens = ", ".join(tokens) # display tokenization for user
- print(f"Tokenized as: {tokens}")
- # look up sense vectors only for the first token
- # contents = vecs[token_ids[0]] # torch.Size([16, 768])
- sense_input_embeds = word_embeddings(torch.tensor([token_ids[0]]).long().unsqueeze(0)) # (bs=1, s=1, d), sense_network expects bs dim
- senses = sense_network(sense_input_embeds) # -> (bs=1, nv, s=1, d)
- senses = torch.squeeze(senses) # (nv, s=1, d)
-
- # for pos and neg respectively, create a list (for each sense) of list (top k) of tuples (word, logit)
- pos_word_lists = []
- neg_word_lists = []
- sense_names = [] # column header
- for i in range(senses.shape[0]):
- logits = lm_head(senses[i,:])
- sorted_logits, sorted_indices = torch.sort(logits, descending=True)
- sense_names.append('sense {}'.format(i))
-
- pos_sorted_words = [tokenizer.decode(sorted_indices[j]) for j in range(count)]
- pos_sorted_logits = [sorted_logits[j].item() for j in range(count)]
- pos_word_lists.append(list(zip(pos_sorted_words, pos_sorted_logits)))
-
- neg_sorted_words = [tokenizer.decode(sorted_indices[-j-1]) for j in range(count)]
- neg_sorted_logits = [sorted_logits[-j-1].item() for j in range(count)]
- neg_word_lists.append(list(zip(neg_sorted_words, neg_sorted_logits)))
-
- def create_dataframe(word_lists, sense_names, count):
- data = dict(zip(sense_names, word_lists))
- df = pd.DataFrame(index=[i for i in range(count)],
- columns=list(data.keys()))
- for prop, word_list in data.items():
- for i, word_pair in enumerate(word_list):
-                cell_value = "{} ({:.2f})".format(word_pair[0], word_pair[1])
-                if word_pair[0] == ' ':
-                    # show a literal space token in a readable way
-                    cell_value = "space ({:.2f})".format(word_pair[1])
- df.at[i, prop] = cell_value
- return df
-
- pos_df = create_dataframe(pos_word_lists, sense_names, count)
- neg_df = create_dataframe(neg_word_lists, sense_names, count)
-
- return pos_df, neg_df, tokens
-
-"""
-Returns:
- - tokens: the tokenization of the input sentence, also used as options to choose from for get_token_contextual_weights
- - top_k_words_df: a dataframe of the top k words predicted by the model
-    - length: length of the input sentence in tokens, stored as a gr.State variable so other
-      methods can look up the contextualization weights for the *last* token
- - contextualization_weights: gr.State variable, stores the contextualization weights for the input sentence
-"""
-def predict_next_word (sentence, top_k = 5, contextualization_weights = None):
-
- if sentence == "":
- return None, None, None, None
-
-    # For better tokenization, by default we strip surrounding whitespace and then
-    # add a space at the beginning of the sentence if it doesn't already have one
- sentence = sentence.strip()
- if sentence[0] != ' ':
- sentence = ' ' + sentence
- print(f"Sentence: '{sentence}'")
-
- # Make input, keeping track of original length
- token_ids = tokenizer(sentence)['input_ids']
- tokens = [[tokenizer.decode(token_id) for token_id in token_ids]] # a list of a single list because used as dataframe
- length = len(token_ids)
- inp = torch.zeros((1,512)).long()
- inp[0,:length] = torch.tensor(token_ids).long()
-
- # Get output at correct index
- if contextualization_weights is None:
- print("contextualization_weights IS None, freshly computing contextualization_weights")
- output = m(inp)
- logits, contextualization_weights = output.logits[0,length-1,:], output.contextualization
- # Store contextualization weights and return it as a gr.State var for use by get_token_contextual_weights
- else:
- print("contextualization_weights is NOT None, using passed in contextualization_weights")
- output = m.run_with_custom_contextualization(inp, contextualization_weights)
- logits = output.logits[0,length-1,:]
- probs = logits.softmax(dim=-1) # probs over next word
- probs, indices = torch.sort(probs, descending=True)
- top_k_words = [(tokenizer.decode(indices[i]), round(probs[i].item(), 3)) for i in range(top_k)]
- top_k_words_df = pd.DataFrame(top_k_words, columns=['word', 'probability'], index=range(1, top_k+1))
-
- top_k_words_df = top_k_words_df.T
-
- print(top_k_words_df)
-
- return tokens, top_k_words_df, length, contextualization_weights
-
-
-"""
-Returns a dataframe of senses with weights for the selected token.
-
-Args:
- contextualization_weights: a gr.State variable that stores the contextualization weights for the input sentence.
- length: length of the input sentence, used to get the contextualization weights for the last token
- token: the selected token
- token_index: the index of the selected token in the input sentence
- pos_count: how many top positive words to display for each sense
- neg_count: how many top negative words to display for each sense
-"""
-def get_token_contextual_weights (contextualization_weights, length, token, token_index, pos_count = 5, neg_count = 3):
- print(">>>>>in get_token_contextual_weights")
- print(f"Selected {token_index}th token: {token}")
-
- # get contextualization weights for the selected token
- # Only care about the weights for the last word, since that's what contributes to the output
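-    # The weights are indexed as (batch, sense, target position, source position),
-    # so this presumably selects, per sense, how much the chosen token contributes
-    # to the prediction at the final position.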
- token_contextualization_weights = contextualization_weights[0, :, length-1, token_index]
- token_contextualization_weights_list = [round(x, 3) for x in token_contextualization_weights.tolist()]
-
- # get sense vectors of the selected token
- token_ids = tokenizer(token)['input_ids'] # keep as a list bc sense_network expects s dim
- sense_input_embeds = word_embeddings(torch.tensor(token_ids).long().unsqueeze(0)) # (bs=1, s=1, d), sense_network expects bs dim
- senses = sense_network(sense_input_embeds) # -> (bs=1, nv, s=1, d)
- senses = torch.squeeze(senses) # (nv, s=1, d)
-
- # build dataframe
- pos_dfs, neg_dfs = [], []
-
- for i in range(num_senses):
- logits = lm_head(senses[i,:]) # (vocab,) [768, 50257] -> [50257]
- sorted_logits, sorted_indices = torch.sort(logits, descending=True)
-
- pos_sorted_words = [tokenizer.decode(sorted_indices[j]) for j in range(pos_count)]
- pos_df = pd.DataFrame(pos_sorted_words, columns=["Sense {}".format(i)])
- pos_dfs.append(pos_df)
-
- neg_sorted_words = [tokenizer.decode(sorted_indices[-j-1]) for j in range(neg_count)]
- neg_df = pd.DataFrame(neg_sorted_words, columns=["Top Negative"])
- neg_dfs.append(neg_df)
-
- sense0words, sense1words, sense2words, sense3words, sense4words, sense5words, \
- sense6words, sense7words, sense8words, sense9words, sense10words, sense11words, \
- sense12words, sense13words, sense14words, sense15words = pos_dfs
-
- sense0negwords, sense1negwords, sense2negwords, sense3negwords, sense4negwords, sense5negwords, \
- sense6negwords, sense7negwords, sense8negwords, sense9negwords, sense10negwords, sense11negwords, \
- sense12negwords, sense13negwords, sense14negwords, sense15negwords = neg_dfs
-
- sense0slider, sense1slider, sense2slider, sense3slider, sense4slider, sense5slider, \
- sense6slider, sense7slider, sense8slider, sense9slider, sense10slider, sense11slider, \
- sense12slider, sense13slider, sense14slider, sense15slider = token_contextualization_weights_list
-
- return token, token_index, \
- sense0words, sense1words, sense2words, sense3words, sense4words, sense5words, sense6words, sense7words, \
- sense8words, sense9words, sense10words, sense11words, sense12words, sense13words, sense14words, sense15words, \
- sense0negwords, sense1negwords, sense2negwords, sense3negwords, sense4negwords, sense5negwords, sense6negwords, sense7negwords, \
- sense8negwords, sense9negwords, sense10negwords, sense11negwords, sense12negwords, sense13negwords, sense14negwords, sense15negwords, \
- sense0slider, sense1slider, sense2slider, sense3slider, sense4slider, sense5slider, sense6slider, sense7slider, \
- sense8slider, sense9slider, sense10slider, sense11slider, sense12slider, sense13slider, sense14slider, sense15slider
-
-"""
-Wrapper for when the user selects a new token in the tokens dataframe.
-Converts `evt` (the selected token) to `token` and `token_index` which are used by get_token_contextual_weights.
-"""
-def new_token_contextual_weights (contextualization_weights, length, evt: gr.SelectData, pos_count = 5, neg_count = 3):
- print(">>>>>in new_token_contextual_weights")
- token_index = evt.index[1] # selected token is the token_index-th token in the sentence
- token = evt.value
- if not token:
- return None, None, \
- None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, \
- None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, \
- None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None
- return get_token_contextual_weights (contextualization_weights, length, token, token_index, pos_count, neg_count)
-
-def change_sense0_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 0, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense1_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 1, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense2_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 2, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense3_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 3, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense4_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 4, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense5_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 5, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense6_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 6, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense7_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 7, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense8_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 8, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense9_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 9, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense10_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 10, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense11_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 11, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense12_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 12, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense13_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 13, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense14_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 14, length-1, token_index] = new_weight
- return contextualization_weights
-def change_sense15_weight(contextualization_weights, length, token_index, new_weight):
- contextualization_weights[0, 15, length-1, token_index] = new_weight
- return contextualization_weights
-
-"""
-Clears all gr.State variables used to store info across methods when the input sentence changes.
-"""
-def clear_states(contextualization_weights, token_index, length):
- contextualization_weights = None
- token_index = None
- length = 0
- return contextualization_weights, token_index, length
-
-def reset_weights(contextualization_weights):
- print("Resetting weights...")
- contextualization_weights = None
- return contextualization_weights
-
-with gr.Blocks( theme = gr.themes.Base(),
- css = """#sense0slider, #sense1slider, #sense2slider, #sense3slider, #sense4slider, #sense5slider, #sense6slider, #sense7slider,
- #sense8slider, #sense9slider, #sense1slider0, #sense11slider, #sense12slider, #sense13slider, #sense14slider, #sense15slider
- { height: 200px; width: 200px; transform: rotate(270deg); }"""
- ) as demo:
-
- gr.Markdown("""
- ## Backpack Sense Visualization
- """)
-
- with gr.Tab("Language Modeling"):
- contextualization_weights = gr.State(None) # store session data for sharing between functions
- token_index = gr.State(None)
- length = gr.State(0)
- top_k = gr.State(10)
- with gr.Row():
- with gr.Column(scale=8):
- input_sentence = gr.Textbox(label="Input Sentence", placeholder='Enter a sentence and click "Predict next word". Then, you can go to the Tokens section, click on a token, and see its contextualization weights.')
- with gr.Column(scale=1):
- predict = gr.Button(value="Predict next word", variant="primary")
- reset_weights_button = gr.Button("Reset weights")
- gr.Markdown("""#### Top-k predicted next word""")
- top_k_words = gr.Dataframe(interactive=False)
- gr.Markdown("""### **Token Breakdown:** click on a token below to see its senses and contextualization weights""")
- tokens = gr.DataFrame()
- with gr.Row():
- with gr.Column(scale=1):
- selected_token = gr.Textbox(label="Current Selected Token", interactive=False)
- with gr.Column(scale=8):
- gr.Markdown("""####
- Once a token is chosen, you can **use the sliders below to change the weight of any sense or multiple senses** for that token, \
- and then click "Predict next word" to see updated next-word predictions. Erase all changes with "Reset weights".
- """)
- # sense sliders and top sense words dataframes
- with gr.Row():
- with gr.Column(scale=0, min_width=120):
- sense0slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 0", elem_id="sense0slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense1slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 1", elem_id="sense1slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense2slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 2", elem_id="sense2slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense3slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 3", elem_id="sense3slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense4slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 4", elem_id="sense4slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense5slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 5", elem_id="sense5slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense6slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 6", elem_id="sense6slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense7slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 7", elem_id="sense7slider", interactive=True)
- with gr.Row():
- with gr.Column(scale=0, min_width=120):
- sense0words = gr.DataFrame(headers = ["Sense 0"])
- with gr.Column(scale=0, min_width=120):
- sense1words = gr.DataFrame(headers = ["Sense 1"])
- with gr.Column(scale=0, min_width=120):
- sense2words = gr.DataFrame(headers = ["Sense 2"])
- with gr.Column(scale=0, min_width=120):
- sense3words = gr.DataFrame(headers = ["Sense 3"])
- with gr.Column(scale=0, min_width=120):
- sense4words = gr.DataFrame(headers = ["Sense 4"])
- with gr.Column(scale=0, min_width=120):
- sense5words = gr.DataFrame(headers = ["Sense 5"])
- with gr.Column(scale=0, min_width=120):
- sense6words = gr.DataFrame(headers = ["Sense 6"])
- with gr.Column(scale=0, min_width=120):
- sense7words = gr.DataFrame(headers = ["Sense 7"])
- with gr.Row():
- with gr.Column(scale=0, min_width=120):
- sense0negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense1negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense2negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense3negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense4negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense5negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense6negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense7negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Row():
- with gr.Column(scale=0, min_width=120):
- sense8slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 8", elem_id="sense8slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense9slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 9", elem_id="sense9slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense10slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 10", elem_id="sense1slider0", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense11slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 11", elem_id="sense11slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense12slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 12", elem_id="sense12slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense13slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 13", elem_id="sense13slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense14slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 14", elem_id="sense14slider", interactive=True)
- with gr.Column(scale=0, min_width=120):
- sense15slider= gr.Slider(minimum=0, maximum=1, value=0, step=0.01, label="Sense 15", elem_id="sense15slider", interactive=True)
- with gr.Row():
- with gr.Column(scale=0, min_width=120):
- sense8words = gr.DataFrame(headers = ["Sense 8"])
- with gr.Column(scale=0, min_width=120):
- sense9words = gr.DataFrame(headers = ["Sense 9"])
- with gr.Column(scale=0, min_width=120):
- sense10words = gr.DataFrame(headers = ["Sense 10"])
- with gr.Column(scale=0, min_width=120):
- sense11words = gr.DataFrame(headers = ["Sense 11"])
- with gr.Column(scale=0, min_width=120):
- sense12words = gr.DataFrame(headers = ["Sense 12"])
- with gr.Column(scale=0, min_width=120):
- sense13words = gr.DataFrame(headers = ["Sense 13"])
- with gr.Column(scale=0, min_width=120):
- sense14words = gr.DataFrame(headers = ["Sense 14"])
- with gr.Column(scale=0, min_width=120):
- sense15words = gr.DataFrame(headers = ["Sense 15"])
- with gr.Row():
- with gr.Column(scale=0, min_width=120):
- sense8negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense9negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense10negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense11negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense12negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense13negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense14negwords = gr.DataFrame(headers = ["Top Negative"])
- with gr.Column(scale=0, min_width=120):
- sense15negwords = gr.DataFrame(headers = ["Top Negative"])
- gr.Markdown("""Note: **"Top Negative"** shows words that have the most negative dot products with the sense vector, which can exhibit more coherent meaning than those with the most positive dot products.
- To see more representative words of each sense, scroll to the top and use the **"Individual Word Sense Look Up"** tab.""")
- # gr.Examples(
- # examples=[["Messi plays for", top_k, None]],
- # inputs=[input_sentence, top_k, contextualization_weights],
- # outputs=[tokens, top_k_words, length, contextualization_weights],
- # fn=predict_next_word,
- # )
-
- sense0slider.change(fn=change_sense0_weight,
- inputs=[contextualization_weights, length, token_index, sense0slider],
- outputs=[contextualization_weights])
- sense1slider.change(fn=change_sense1_weight,
- inputs=[contextualization_weights, length, token_index, sense1slider],
- outputs=[contextualization_weights])
- sense2slider.change(fn=change_sense2_weight,
- inputs=[contextualization_weights, length, token_index, sense2slider],
- outputs=[contextualization_weights])
- sense3slider.change(fn=change_sense3_weight,
- inputs=[contextualization_weights, length, token_index, sense3slider],
- outputs=[contextualization_weights])
- sense4slider.change(fn=change_sense4_weight,
- inputs=[contextualization_weights, length, token_index, sense4slider],
- outputs=[contextualization_weights])
- sense5slider.change(fn=change_sense5_weight,
- inputs=[contextualization_weights, length, token_index, sense5slider],
- outputs=[contextualization_weights])
- sense6slider.change(fn=change_sense6_weight,
- inputs=[contextualization_weights, length, token_index, sense6slider],
- outputs=[contextualization_weights])
- sense7slider.change(fn=change_sense7_weight,
- inputs=[contextualization_weights, length, token_index, sense7slider],
- outputs=[contextualization_weights])
- sense8slider.change(fn=change_sense8_weight,
- inputs=[contextualization_weights, length, token_index, sense8slider],
- outputs=[contextualization_weights])
- sense9slider.change(fn=change_sense9_weight,
- inputs=[contextualization_weights, length, token_index, sense9slider],
- outputs=[contextualization_weights])
- sense10slider.change(fn=change_sense10_weight,
- inputs=[contextualization_weights, length, token_index, sense10slider],
- outputs=[contextualization_weights])
- sense11slider.change(fn=change_sense11_weight,
- inputs=[contextualization_weights, length, token_index, sense11slider],
- outputs=[contextualization_weights])
- sense12slider.change(fn=change_sense12_weight,
- inputs=[contextualization_weights, length, token_index, sense12slider],
- outputs=[contextualization_weights])
- sense13slider.change(fn=change_sense13_weight,
- inputs=[contextualization_weights, length, token_index, sense13slider],
- outputs=[contextualization_weights])
- sense14slider.change(fn=change_sense14_weight,
- inputs=[contextualization_weights, length, token_index, sense14slider],
- outputs=[contextualization_weights])
- sense15slider.change(fn=change_sense15_weight,
- inputs=[contextualization_weights, length, token_index, sense15slider],
- outputs=[contextualization_weights])
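-    # The sixteen near-identical listeners above could equivalently be wired in a loop; a hedged
-    # sketch, assuming the sliders are gathered into a list and a single change_sense_weight(i, ...)
-    # helper (hypothetical) replaces the sixteen change_sense{i}_weight functions:
-    #
-    #   from functools import partial
-    #
-    #   def change_sense_weight(i, contextualization_weights, length, token_index, new_weight):
-    #       contextualization_weights[0, i, length - 1, token_index] = new_weight
-    #       return contextualization_weights
-    #
-    #   sense_sliders = [sense0slider, sense1slider, sense2slider, sense3slider,
-    #                    sense4slider, sense5slider, sense6slider, sense7slider,
-    #                    sense8slider, sense9slider, sense10slider, sense11slider,
-    #                    sense12slider, sense13slider, sense14slider, sense15slider]
-    #   for i, slider in enumerate(sense_sliders):
-    #       slider.change(fn=partial(change_sense_weight, i),
-    #                     inputs=[contextualization_weights, length, token_index, slider],
-    #                     outputs=[contextualization_weights])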
-
-
- predict.click(
- fn=predict_next_word,
- inputs = [input_sentence, top_k, contextualization_weights],
- outputs= [tokens, top_k_words, length, contextualization_weights],
- )
-
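-    # Selecting a token in the tokens output loads that token's per-sense word lists and slider values.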
- tokens.select(fn=new_token_contextual_weights,
- inputs=[contextualization_weights, length],
- outputs= [selected_token, token_index,
-
- sense0words, sense1words, sense2words, sense3words, sense4words, sense5words, sense6words, sense7words,
- sense8words, sense9words, sense10words, sense11words, sense12words, sense13words, sense14words, sense15words,
-
- sense0negwords, sense1negwords, sense2negwords, sense3negwords, sense4negwords, sense5negwords, sense6negwords, sense7negwords,
- sense8negwords, sense9negwords, sense10negwords, sense11negwords, sense12negwords, sense13negwords, sense14negwords, sense15negwords,
-
- sense0slider, sense1slider, sense2slider, sense3slider, sense4slider, sense5slider, sense6slider, sense7slider,
- sense8slider, sense9slider, sense10slider, sense11slider, sense12slider, sense13slider, sense14slider, sense15slider]
- )
-
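-    # Resetting the weights re-runs the prediction and then refreshes the selected token's sense panels via chained .success() callbacks.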
- reset_weights_button.click(
- fn=reset_weights,
- inputs=[contextualization_weights],
- outputs=[contextualization_weights]
- ).success(
- fn=predict_next_word,
- inputs = [input_sentence, top_k, contextualization_weights],
- outputs= [tokens, top_k_words, length, contextualization_weights],
- ).success(
- fn=get_token_contextual_weights,
- inputs=[contextualization_weights, length, selected_token, token_index],
- outputs= [selected_token, token_index,
-
- sense0words, sense1words, sense2words, sense3words, sense4words, sense5words, sense6words, sense7words,
- sense8words, sense9words, sense10words, sense11words, sense12words, sense13words, sense14words, sense15words,
-
- sense0negwords, sense1negwords, sense2negwords, sense3negwords, sense4negwords, sense5negwords, sense6negwords, sense7negwords,
- sense8negwords, sense9negwords, sense10negwords, sense11negwords, sense12negwords, sense13negwords, sense14negwords, sense15negwords,
-
- sense0slider, sense1slider, sense2slider, sense3slider, sense4slider, sense5slider, sense6slider, sense7slider,
- sense8slider, sense9slider, sense10slider, sense11slider, sense12slider, sense13slider, sense14slider, sense15slider]
- )
-
- input_sentence.change(
- fn=clear_states,
- inputs=[contextualization_weights, token_index, length],
- outputs=[contextualization_weights, token_index, length]
- )
-
- with gr.Tab("Individual Word Sense Look Up"):
- gr.Markdown("""> Note on tokenization: Backpack uses the GPT-2 tokenizer, which includes the space before a word as part \
- of the token, so by default, a space character `' '` is added to the beginning of the word \
- you look up. You can disable this by checking `Remove space before word`, but be aware that this might \
- cause strange behaviors like breaking `afraid` into `af` and `raid`, or `slight` into `s` and `light`.
- """)
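-        # A quick way to see the tokenization behavior described above (hedged sketch; the exact
-        # splits depend on the GPT-2 BPE vocabulary):
-        #
-        #   from transformers import GPT2Tokenizer
-        #   tok = GPT2Tokenizer.from_pretrained("gpt2")
-        #   print(tok.tokenize(" afraid"))   # with the leading space: a single token, e.g. ['Ġafraid']
-        #   print(tok.tokenize("afraid"))    # without it: split into pieces, e.g. ['af', 'raid']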
- with gr.Row():
- word = gr.Textbox(label="Word", placeholder="e.g. science")
- token_breakdown = gr.Textbox(label="Token Breakdown (senses are for the first token only)")
-            remove_space = gr.Checkbox(label="Remove space before word", value=False)
- count = gr.Slider(minimum=1, maximum=50, value=10, label="Top K", step=1)
- look_up_button = gr.Button("Look up")
- pos_outputs = gr.Dataframe(label="Highest Scoring Senses")
- neg_outputs = gr.Dataframe(label="Lowest Scoring Senses")
- gr.Examples(
- examples=["science", "afraid", "book", "slight"],
- inputs=[word],
- outputs=[pos_outputs, neg_outputs, token_breakdown],
- fn=visualize_word,
- cache_examples=True,
- )
-
- look_up_button.click(
- fn=visualize_word,
- inputs= [word, count, remove_space],
- outputs= [pos_outputs, neg_outputs, token_breakdown],
- )
-
-demo.launch()
-
-
-# Code for generating slider functions & event listeners
-
-# for i in range(16):
-# print(
-# f"""def change_sense{i}_weight(contextualization_weights, length, token_index, new_weight):
-# print(f"Changing weight for the {i}th sense of the {{token_index}}th token.")
-# print("new_weight to be assigned = ", new_weight)
-# contextualization_weights[0, {i}, length-1, token_index] = new_weight
-# print("contextualization_weights: ", contextualization_weights[0, :, length-1, token_index])
-# return contextualization_weights"""
-# )
-
-# for i in range(16):
-# print(
-# f""" sense{i}slider.change(fn=change_sense{i}_weight,
-# inputs=[contextualization_weights, length, token_index, sense{i}slider],
-# outputs=[contextualization_weights])"""
-# )
\ No newline at end of file
diff --git a/spaces/lout33/Youtube-Whisperer/README.md b/spaces/lout33/Youtube-Whisperer/README.md
deleted file mode 100644
index f30d4256155c480f0599698379f798a3365e5bc1..0000000000000000000000000000000000000000
--- a/spaces/lout33/Youtube-Whisperer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Youtube Whisperer
-emoji: ⚡
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-duplicated_from: jeffistyping/Youtube-Whisperer
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/luost26/DiffAb/diffab/utils/misc.py b/spaces/luost26/DiffAb/diffab/utils/misc.py
deleted file mode 100644
index d3a47d038390b5fb93e399a0e4a6be00240643f8..0000000000000000000000000000000000000000
--- a/spaces/luost26/DiffAb/diffab/utils/misc.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import os
-import time
-import random
-import logging
-from typing import OrderedDict
-import torch
-import torch.linalg
-import numpy as np
-import yaml
-from easydict import EasyDict
-from glob import glob
-
-
-class BlackHole(object):
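-    """No-op stand-in object: silently swallows attribute writes and returns itself for any call or attribute access (useful as a dummy logger/writer)."""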
- def __setattr__(self, name, value):
- pass
-
- def __call__(self, *args, **kwargs):
- return self
-
- def __getattr__(self, name):
- return self
-
-
-class Counter(object):
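-    """Simple integer counter; step() returns the value *before* incrementing."""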
- def __init__(self, start=0):
- super().__init__()
- self.now = start
-
- def step(self, delta=1):
- prev = self.now
- self.now += delta
- return prev
-
-
-def get_logger(name, log_dir=None):
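-    """Create a DEBUG-level logger that logs to the console and, if log_dir is given, also to <log_dir>/log.txt."""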
- logger = logging.getLogger(name)
- logger.setLevel(logging.DEBUG)
- formatter = logging.Formatter('[%(asctime)s::%(name)s::%(levelname)s] %(message)s')
-
- stream_handler = logging.StreamHandler()
- stream_handler.setLevel(logging.DEBUG)
- stream_handler.setFormatter(formatter)
- logger.addHandler(stream_handler)
-
- if log_dir is not None:
- file_handler = logging.FileHandler(os.path.join(log_dir, 'log.txt'))
- file_handler.setLevel(logging.DEBUG)
- file_handler.setFormatter(formatter)
- logger.addHandler(file_handler)
-
- return logger
-
-
-def get_new_log_dir(root='./logs', prefix='', tag=''):
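-    """Create and return a new timestamped log directory under `root`, optionally decorated with a prefix and a tag."""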
- fn = time.strftime('%Y_%m_%d__%H_%M_%S', time.localtime())
- if prefix != '':
- fn = prefix + '_' + fn
- if tag != '':
- fn = fn + '_' + tag
- log_dir = os.path.join(root, fn)
- os.makedirs(log_dir)
- return log_dir
-
-
-def seed_all(seed):
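-    """Seed torch (CPU and CUDA), numpy and random, and make cuDNN deterministic, for reproducibility."""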
- torch.backends.cudnn.deterministic = True
- torch.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
- np.random.seed(seed)
- random.seed(seed)
-
-
-def inf_iterator(iterable):
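-    """Yield items from `iterable` indefinitely, restarting the iterator whenever it is exhausted."""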
- iterator = iterable.__iter__()
- while True:
- try:
- yield iterator.__next__()
- except StopIteration:
- iterator = iterable.__iter__()
-
-
-def log_hyperparams(writer, args):
- from torch.utils.tensorboard.summary import hparams
- vars_args = {k: v if isinstance(v, str) else repr(v) for k, v in vars(args).items()}
- exp, ssi, sei = hparams(vars_args, {})
- writer.file_writer.add_summary(exp)
- writer.file_writer.add_summary(ssi)
- writer.file_writer.add_summary(sei)
-
-
-def int_tuple(argstr):
- return tuple(map(int, argstr.split(',')))
-
-
-def str_tuple(argstr):
- return tuple(argstr.split(','))
-
-
-def get_checkpoint_path(folder, it=None):
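-    """Return (path, iteration) for the checkpoint at iteration `it`, or for the latest *.pt checkpoint in `folder` when `it` is None."""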
- if it is not None:
- return os.path.join(folder, '%d.pt' % it), it
- all_iters = list(map(lambda x: int(os.path.basename(x[:-3])), glob(os.path.join(folder, '*.pt'))))
- all_iters.sort()
- return os.path.join(folder, '%d.pt' % all_iters[-1]), all_iters[-1]
-
-
-def load_config(config_path):
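-    """Load a YAML config file into an EasyDict and return it together with the file's base name (without extension)."""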
- with open(config_path, 'r') as f:
- config = EasyDict(yaml.safe_load(f))
- config_name = os.path.basename(config_path)[:os.path.basename(config_path).rfind('.')]
- return config, config_name
-
-
-def extract_weights(weights: OrderedDict, prefix):
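-    """Return the entries of a state dict whose keys start with `prefix`, with that prefix stripped from the keys."""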
- extracted = OrderedDict()
- for k, v in weights.items():
- if k.startswith(prefix):
- extracted.update({
- k[len(prefix):]: v
- })
- return extracted
-
-
-def current_milli_time():
- return round(time.time() * 1000)
diff --git a/spaces/lvwerra/license-static/index.html b/spaces/lvwerra/license-static/index.html
deleted file mode 100644
index 9c6700613d3e8c5b03513d80fc14c0becb6175a6..0000000000000000000000000000000000000000
--- a/spaces/lvwerra/license-static/index.html
+++ /dev/null
@@ -1,1900 +0,0 @@
-
This is a license (the “License”) between you (“You”) and the
- participants of BigScience (“Licensor”). Whereas the Apache 2.0 license was
- applicable to resources used to develop the Model, the
- licensing conditions have been modified for the access and distribution of the Model. This has been done to further BigScience’s aims of
- promoting not just open-access to its artifacts, but also a responsible use of these artifacts. Therefore,
- this Responsible AI License (RAIL)[1] aims
- at having an open and permissive character while striving for responsible use
- of the Model.
-
-
-
- Section I: PREAMBLE
-
-
BigScience is a collaborative open innovation project aimed at the responsible
- development and use of large multilingual datasets and Large Language Models (“LLM”), as well as, the
- documentation of best practices and tools stemming from this collaborative effort. Further, BigScience
- participants wish to promote collaboration and sharing of research artifacts - including the Model - for the benefit of society, pursuant to this
- License.
-
-
The development and use of LLMs, and broadly artificial intelligence
- (“AI”), does not come without
- concerns. The world has witnessed how just a few companies/institutions are able to develop LLMs, and
- moreover, how Natural Language Processing techniques might, in some instances, become a risk for the public
- in general. Concerns might come in many forms, from racial discrimination to the treatment of sensitive
- information.
-
-
BigScience believes in the intersection between open and responsible AI
- development; thus, this License aims to strike a balance between both in order to
- enable responsible open-science for large language models and future NLP
- techniques.
-
This License governs the use of the BigScience BLOOM models (and their derivatives) and is informed by both the BigScience Ethical Charter and the model cards associated with the BigScience BLOOM models.
- BigScience has set forth its Ethical Charter representing the values of its
- community. Although the BigScience community does not aim to impose its values on potential users of
- this Model, it is
- determined to take tangible steps towards protecting the community from
- inappropriate uses of the work being developed by BigScience.
-
Furthermore, the model cards for the BigScience BLOOM models will inform the user
- about the limitations of the Model, and thus
- serve as the basis of some of the use-based restrictions in this License (see Part II).
-
-
-
NOW THEREFORE, You and Licensor agree as follows:
-
-
-
1. Definitions
-
-
-
"License" shall mean the terms and conditions for use, reproduction, and Distribution as defined
- in this document.
-
“Data”
- means a collection of texts extracted from the
- BigScience Corpus used with the Model, including to train, pretrain, or
- otherwise evaluate the Model. The Data is not licensed under this License. The BigScience Corpus is a collection of existing sources of
- language data documented on the BigScience website.
-
“Output” means the
- results of operating a Model as embodied in informational content resulting therefrom.
-
“Model” means any accompanying machine-learning based assemblies (including checkpoints), consisting of learnt weights, parameters (including optimizer states), corresponding
- to the BigScience BLOOM model architecture as embodied in the Complementary Material, that have been
- trained or tuned, in whole or in part, on the Data using the Complementary
- Material.
-
“Derivatives of the Model” means all modifications to the Model, works based on
- the Model, or any other model which is created or initialized by transfer of patterns of the weights,
- parameters, activations or output of the Model, to the
- other model, in order to cause the other model to perform similarly to the Model, including - but not limited to - distillation methods
- entailing the use of intermediate data representations or methods based on the generation of synthetic
- data by the Model for training the other model.
-
“Complementary
- Material” shall mean the accompanying source
- code and scripts used to define, run, load,
- benchmark or evaluate the Model, and used to prepare data for training or evaluation. This includes any accompanying documentation, tutorials, examples etc.
-
“Distribution” means any transmission, reproduction, publication or other sharing of the Model or
- Derivatives of the Model to a third party, including providing the Model as a hosted service made
- available by electronic or other remote means - e.g. API-based or web access.
-
“Licensor” means the copyright owner or entity authorized by the copyright owner that is granting the License, including the persons or
- entities that may have rights in the Model and/or distributing the Model.
-
"You"
- (or "Your") shall mean an individual or
- Legal Entity exercising permissions granted by this License and/or making use of the Model for whichever
- purpose and in any field of use, including usage of the Model in an end-use application - e.g. chatbot,
- translator.
-
“Third Parties” means individuals or legal entities that are not under common control with
- Licensor or You.
-
"Contribution" shall mean any work of authorship, including the original version of the Model and any
- modifications or additions to that Model or Derivatives of the Model thereof, that is intentionally
- submitted to Licensor for inclusion in the Model by the copyright owner or by an individual or Legal
- Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition,
- “submitted” means any form of electronic,
- verbal, or written communication sent to the Licensor or its representatives, including but not limited
- to communication on electronic mailing lists, source code control systems, and issue tracking systems
- that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the
- Model, but excluding communication that is conspicuously marked or otherwise designated in writing by
- the copyright owner as "Not a Contribution."
-
"Contributor" shall mean Licensor and any individual or Legal Entity on
- behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the
- Model.
-
-
-
- Section II: INTELLECTUAL PROPERTY
- RIGHTS
-
Both copyright and patent grants apply to the Model, Derivatives of the Model and
- Complementary Material. The Model and Derivatives of the Model are subject to additional terms as described in Section
- III.
-
2. Grant of Copyright License. Subject to the terms and
- conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive,
- no-charge, royalty-free, irrevocable copyright license to reproduce, prepare,
- publicly display, publicly
- perform, sublicense, and distribute the Complementary
- Material, the Model, and Derivatives of the Model.
-
3. Grant of Patent License. Subject to the terms and
- conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive,
- no-charge, royalty-free, irrevocable (except as stated in this
- paragraph) patent license to make, have made, use, offer to sell, sell,
- import, and otherwise transfer the Model and the Complementary Material, where such license applies only to
- those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s)
- alone or by combination of their Contribution(s) with the Model to which such Contribution(s) was submitted.
- If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit)
- alleging that the Model and/or Complementary Material or a Contribution incorporated within the Model and/or
- Complementary Material constitutes direct or contributory patent infringement, then any patent licenses
- granted to You under this License for the Model and/or Work shall terminate as of the date such litigation
- is filed.
-
-
Section III: CONDITIONS OF USAGE, DISTRIBUTION AND
- REDISTRIBUTION
-
-
4. Distribution and Redistribution. You may host for Third Party remote access purposes (e.g.
- software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in
- any medium, with or without modifications, provided that You meet the following conditions:
-
-
Use-based restrictions as referenced in paragraph 5 MUST be included as an
- enforceable provision by You in any type of legal agreement (e.g. a license) governing the use and/or
- distribution of the Model or Derivatives of the Model, and You shall give notice to subsequent users You
- Distribute to, that the Model or Derivatives of the Model are subject to paragraph 5. This provision
- does not apply to the use of Complementary Material.
-
You must give any Third Party recipients of the Model or
- Derivatives of the Model a copy of this License;
-
You must cause any modified files to carry prominent notices
- stating that You changed the files;
-
You must retain all copyright, patent, trademark, and
- attribution notices excluding those notices that do not pertain to any part of the Model, Derivatives of
- the Model.
-
-
You may add Your own copyright statement to Your modifications and may provide
- additional or different license terms and conditions - respecting paragraph 4.a. - for use, reproduction, or Distribution of Your modifications,
- or for any such Derivatives of the Model as a whole, provided Your use, reproduction, and Distribution of
- the Model otherwise complies with the conditions stated in this License.
-
5. Use-based restrictions. The restrictions set forth in Attachment A are considered
- Use-based restrictions. Therefore You cannot use the Model and the
- Derivatives of the Model for the specified restricted uses. You may use the Model subject to this License, including only for lawful purposes and in
- accordance with the License. Use may include
- creating any content with, finetuning, updating, running, training, evaluating and/or reparametrizing the
- Model. You shall require all of Your users who use the
- Model or a Derivative of the Model to comply with the terms of this paragraph (paragraph 5).
-
6. The Output You Generate. Except as set forth
- herein, Licensor claims no rights in the Output You generate using the Model. You are accountable for the
- Output you generate and its subsequent uses. No use of the output can contravene any provision as stated in the License.
-
-
Section IV: OTHER PROVISIONS
-
7. Updates and Runtime Restrictions. To the maximum extent
- permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in
- violation of this License, update the Model through electronic means, or modify the Output of the Model
- based on updates. You shall undertake reasonable efforts to use the latest version of the Model.
-
8. Trademarks and related. Nothing in this License permits You to make use of Licensors’ trademarks, trade
- names, logos or to otherwise suggest endorsement or misrepresent the relationship between the parties; and
- any rights not expressly granted herein are reserved by the Licensors.
-
9. Disclaimer of Warranty. Unless
- required by applicable law or agreed to in writing, Licensor provides the Model and the Complementary Material (and each
- Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
- ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE,
- NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for
- determining the appropriateness of using or redistributing the Model, Derivatives of the Model, and the
- Complementary Material and assume any risks
- associated with Your exercise of permissions under this License.
-
10. Limitation of Liability. In no
- event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless
- required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall
- any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or
- consequential damages of any character arising as a result of this License or out of the use or inability to
- use the Model and the Complementary Material (including but not limited to damages for loss of goodwill, work stoppage, computer
- failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has
- been advised of the possibility of such damages.
-
11. Accepting Warranty or Additional Liability. While redistributing the Model, Derivatives of the Model and the Complementary Material thereof, You may choose to offer, and charge
- a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights
- consistent with this License. However, in accepting such obligations, You may act only on Your own behalf
- and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such
- Contributor by reason of your accepting any such warranty or additional liability.
-
12. If any provision of this License is held to be
- invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as
- if such provision had not been set forth herein.
-
END OF TERMS AND CONDITIONS
-
-
-
Attachment A
-
Use Restrictions
-
You agree not to use the Model or Derivatives of the Model:
-
-
In any way that violates any applicable national, federal,
- state, local or international law or regulation;
-
For the purpose of exploiting, harming or attempting to exploit
- or harm minors in any way;
-
To generate or disseminate verifiably false information with
- the purpose of harming others;
-
To generate or disseminate personal
- identifiable information that can be used to harm an individual;
-
To generate or disseminate information or content, in any context (e.g. posts, articles, tweets,
- chatbots or other kinds of automated bots) without expressly and intelligibly disclaiming that the text
- is machine generated;
-
To defame, disparage or otherwise harass others;
-
To impersonate or attempt to impersonate others;
-
For fully automated decision making that adversely impacts an
- individual’s legal rights or otherwise creates or modifies a binding, enforceable
- obligation;
-
For any use intended to or which has the
- effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or
- predicted personal or personality characteristics;
-
To exploit any of the vulnerabilities of a specific group of
- persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining
- to that group in a manner that causes or is likely to cause that person or another person physical or
- psychological harm;
-
For any use intended to or which has the
- effect of discriminating against individuals or groups based on legally protected characteristics or
- categories;
-
To provide medical advice and medical results interpretation;
-
-
To generate or disseminate information for the purpose to be used
- for administration of justice, law enforcement, immigraton or asylum processes, such as predicting an
- individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships
- between assertions made in documents, indiscriminate and arbitrarily-targeted use).